
darkfalz

Members
  • Posts

    6
  • Joined

  • Last visited

Reputation

0 Neutral
  1. Please fix this terrible bug; I will donate...
  2. Defraggler still has the bug where it pegs one core at 100% and defragments only one small file every 30 seconds or so... so sad.
  3. There is clearly a bug in the algorithm, or it is extremely poorly optimised. While a large number of files remain to defrag, Defraggler runs extremely slowly at 100% core usage, defragmenting only one small file every 30 seconds or so. I see the software is now essentially unsupported and discontinued, which sucks because it is the best defragger IMO, just with some major bugs and missing usability features (defragmenting multiple drives at once, even via a second instance; disabling sleep while running; etc.).
  4. There's a bug in Defraggler where a large file that is "defragmented" around unmovable files (i.e. $MFT) ends up overly fragmented instead. For example, take a file X laid out around an unmovable file U: XXXUXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX. Rather than splitting it into two contiguous segments, one on either side of U, a full disk defrag somehow leaves it "defragmented" into hundreds of fragments: even though it occupies a contiguous area on the disk, it is all in fragments, out of order (see the first sketch after this list). I have observed this on a number of occasions, any time Defraggler places a large file around unmovable files.
  5. Seconding this; it should also apply to Defrag Files mode. I use Defrag Files (check all) as a "quick" defrag method. It should respect the "Move to end of disk" setting: just as it moves a fragmented file to the first appropriately sized free-space block on the disk, it should move end-of-disk files to the last available free-space block (see the second sketch after this list).
  6. Defraggler seems to be pretty bad at finding loose (unconsolidated) files to fill gaps that appear in defragmented and consolidated space; granted, there aren't always suitable files to fill such gaps. But Defraggler's all-or-nothing approach means watching it move hundreds of GB of data, one block at a time, to adjust for small disk changes, even if you did a full defrag a day ago. Insanity! Perfect consolidation is only necessary if you want to shrink a volume by the maximum amount; otherwise it's not needed. I suggest an option that allows Defraggler to leave a certain amount of empty space between defragmented/consolidated files. You could implement it either as a slack-space "bank" (of, say, 1 GB) or as a percentage of the total disk, i.e. aim for 99% consolidation instead of effectively 100%; both could be configurable (see the third sketch after this list). This slack space would also be filled in naturally by smaller files, i.e. log files and such. Also, please allow processing of multiple disks at once, and if possible, multithread the shuffle algorithm; the disk should always be the bottleneck, never the processing. I would definitely buy a Pro version with these options.
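
A minimal Python sketch of the two-extent placement post 4 expects. The function name, extent representation, and cluster numbers are illustrative assumptions, not Defraggler's actual code.

def split_around_unmovable(file_start, file_len, unmovable_start, unmovable_len):
    """Extents a file should occupy when an unmovable region sits inside
    its target range: one contiguous run before it, one after."""
    extents = []
    before_len = min(max(0, unmovable_start - file_start), file_len)
    if before_len:
        extents.append((file_start, before_len))
    remaining = file_len - before_len
    if remaining:
        extents.append((unmovable_start + unmovable_len, remaining))
    return extents

# A 40-cluster file placed from cluster 0 around a 1-cluster unmovable
# region at cluster 3 (the XXXUXXX... picture in the post): two extents,
# not hundreds of fragments.
print(split_around_unmovable(0, 40, 3, 1))  # -> [(0, 3), (4, 37)]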
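
And a sketch of the gap-selection rule post 5 asks for, assuming free space is tracked as (start, length) gaps: first fit for fragmented files, last fit when "Move to end of disk" applies. The function and parameter names are hypothetical.

def pick_gap(gaps, size, to_end_of_disk):
    """Choose a free-space gap of at least `size` clusters: the first
    one that fits normally, the last one for end-of-disk files."""
    fitting = [g for g in gaps if g[1] >= size]
    if not fitting:
        return None
    if to_end_of_disk:
        return max(fitting, key=lambda g: g[0])  # last fit
    return min(fitting, key=lambda g: g[0])      # first fit

gaps = [(100, 50), (500, 200), (9000, 80)]
print(pick_gap(gaps, 64, to_end_of_disk=False))  # -> (500, 200)
print(pick_gap(gaps, 64, to_end_of_disk=True))   # -> (9000, 80)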
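
Finally, a sketch of the slack-space allowance suggested in post 6. The 1 GB bank and the 99% target are the examples from the post; everything else (names, cluster arithmetic) is assumed for illustration.

def slack_allowance(disk_clusters, cluster_bytes,
                    bank_bytes=1 << 30, target_pct=99.0):
    """Clusters allowed to remain unconsolidated: the larger of a fixed
    bank (say 1 GB) or the disk share a 99% target leaves free (1%)."""
    by_bank = bank_bytes // cluster_bytes
    by_pct = int(disk_clusters * (100.0 - target_pct) / 100.0)
    return max(by_bank, by_pct)

def needs_consolidation(gap_clusters, disk_clusters, cluster_bytes):
    """Only start shuffling data when the gaps exceed the allowance."""
    return sum(gap_clusters) > slack_allowance(disk_clusters, cluster_bytes)

# 1 TB disk, 4 KB clusters: ~1.2 GB of leftover gaps sits well inside
# the ~11 GB a 99% target allows, so no hundred-GB shuffle is triggered.
disk_clusters = (1 << 40) // 4096
print(needs_consolidation([300_000], disk_clusters, 4096))  # -> False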
