HansSchulze

I created a 2 TB USB3 file-by-file copy of another 2 TB drive, and because Windows XP32 reserved a lot of space on the drive, the last few large files got horribly fragmented. I had to use "fsutil file createnew xxx" to get back some of the space so DF would have room to work (250 GB). Then I watched 2.05.315 try to clean up the drive. BTW, my settings include putting huge files at the top.

The Defrag Free Space option only moved a couple of files and finished in under 5 minutes, but it left lots of small fragments. A full DF moved EVERY file, even the huge areas that weren't fragmented, and after almost 2 days still left about 20 GB of contiguous fragments (8x8 blocks on a 25x16 monitor) before it reached the lower end of the large files that had been DF'd by a previous version (with almost no free space left).

There are essentially three levels of defrag you can do:

a ) Make the largest free-space hole as large as possible, with optional forced fragmentation (got that).
b ) Go crazy and move every file (what I see sometimes).
c ) Try to move the fragmented files so as to best fill up the holes. I like this the best.

I would expect that if my maximum file size is, say, 1 GB, I should not be left with a contiguous hole bigger than 1 GB. If file sizes range from 20 MB to 1000 MB, I should not have many holes larger than 20 MB. Generally, moving slightly smaller files into the smaller holes helps; it also helps to sort the files by size, so that the largest file on the drive sits at the top and sizes decrease toward the middle. That way you can almost always find a slightly smaller file from the middle of the drive to fill a new hole near the end, and likewise for the even smaller files at the beginning of the drive. The outside of the drive gives faster read speeds to often-used larger files, while the small ones don't take much time anyway.
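The hole-filling idea in ( c ) can be sketched as a greedy best-fit pass. This is only a simplification of the strategy I'm describing, not how Defraggler actually works — a real defragmenter operates on the cluster bitmap, not on plain size lists:

```python
import bisect

def best_fit_plan(holes, file_sizes):
    """Greedy sketch of strategy (c): for each free-space hole,
    move in the largest still-unplaced file that fits.
    holes and file_sizes share one unit (e.g. MB).
    Returns (hole, chosen_file) pairs; chosen_file is None
    when no remaining file is small enough."""
    remaining = sorted(file_sizes)            # ascending by size
    plan = []
    for hole in sorted(holes, reverse=True):  # fill big holes first
        # index of the largest file size <= hole
        i = bisect.bisect_right(remaining, hole) - 1
        if i >= 0:
            plan.append((hole, remaining.pop(i)))
        else:
            plan.append((hole, None))
    return plan

# With 20-1000 MB files, each hole gets the biggest file that fits:
print(best_fit_plan([900, 50, 25], [1000, 800, 40, 20]))
# → [(900, 800), (50, 40), (25, 20)]
```

The point of sorting by size first is exactly what the paragraph above argues: a hole opened near the end of the drive can almost always be plugged by a slightly smaller file taken from further in.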
Unless you are sorting by file size and/or usage count, I hope you don't have to move every file during a "normal" defrag. Or figure out how "serious" a user is about how much to move by offering maybe a third or fourth option. I do believe this "slowness" is version-specific; I didn't have this sort of optimization happen every time on every machine. I like the product: clean, sensible, not expensive. Keep up the good work. I will send in some money, because good work deserves it.

I do recommend a drive-cleaning strategy for polluted older systems:
- Turn off hibernation (OS-specific commands) and the swap file if possible (you need 2 GB of RAM for Windows).
- Turn off the USN journal.
- Disable and delete the restore points.
- Run CCleaner, with the option to delete the update uninstall files enabled (hundreds of MB).
- Toss large movie/game files (500 MB+) to the top of the drive.
- DF (the slow part).
- Turn the swap file back on and reboot.

Another comment: given the number of bad sectors on large drives, it seems silly to call a large file that merely hops over a bad sector "fragmented". I would ignore those, otherwise you will never hit 0%. It's bad enough that there are $MFT and $USNJRNL and other hidden system files that you usually can't move.
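The bad-sector point can be made concrete with a small sketch: count a file's fragments from its extent list, but don't count a break whose gap consists only of known-bad clusters. The function and its inputs are hypothetical — real tools read extents and the bad-cluster list ($BadClus on NTFS) from the filesystem itself:

```python
def count_fragments(extents, bad_clusters=frozenset()):
    """Count fragments from a file's extent list.
    extents: (start_cluster, length) pairs in on-disk order.
    A break between extents is ignored when every cluster in the
    gap is a known-bad cluster -- the file only "hopped" over it."""
    fragments = 0
    prev_end = None
    for start, length in extents:
        if prev_end is None:
            fragments = 1
        else:
            gap = range(prev_end, start)
            # Real fragmentation only if the gap holds at least
            # one healthy (non-bad) cluster; adjacent extents
            # (empty gap) never count as a new fragment.
            if any(c not in bad_clusters for c in gap):
                fragments += 1
        prev_end = start + length
    return fragments

# Split only by the bad cluster 105: treated as one fragment.
print(count_fragments([(100, 5), (106, 5)], bad_clusters={105}))  # → 1
# Same layout with a healthy gap: genuinely fragmented.
print(count_fragments([(100, 5), (106, 5)]))                      # → 2
```

With that rule, a drive whose only "fragmentation" comes from files skipping bad sectors can still report 0%.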