
Xuefer

Members
  • Posts: 3
  • Joined
  • Last visited

Reputation: 0 Neutral
  1. A benchmark shows my hard drive can do roughly 1 MB/s for 4 KB random-seek reads or (not and) writes, but around 70~90 MB/s for sequential reads or writes. When defragmenting big files it looks like the tool does a lot of random 4 KB reads: HD Tune's monitor shows about 16 MB/s, which is better than 4 KB random seeks but much worse than sequential read/write. Can you please make reads more sequential for big files, and possibly take advantage of read-ahead / cache warm-up for small files? (A rough sketch of the sequential-read idea follows after this list.)
  2. Small files are the same but not really affected, because it is rare to stop the defragmenter in the middle of a small file. For a big file, each time you stop (not pause) and defragment the specified file again, it starts from 0%. This is not a display issue: say it had already done 50%; when it re-does the remaining 50% it shows 0% because it simply starts over. I am sure about this because the drive map shows that the area it previously moved data to is being read from again. This is not only a stop/resume feature request, which you may say is unsupported; it also means poor defrag speed, because the whole file gets moved to a whole new place even when there is enough free space right after the fragments. For example (letters = disk slots, numbers = the file's fragments in order, _ = free): a=1 | b=_ | c=2 | d=3 | e=_ | f=_ | g=_ | h=other. I think it can and should only move 2 to b and 3 to c, but instead it moves 1 to e, 2 to f, 3 to g. If I stop it right after it finishes moving 1, the map looks like a=_ | b=_ | c=2 | d=3 | e=1 | f=_ | g=_ | h=other. On the next run, since f and g cannot hold 1, 2 and 3 together, it decides to move all of 1, 2 and 3 somewhere else again, starting by moving 1, even though f and g alone could hold 2 and 3. (See the gap-filling sketch after this list.)
  3. Parallel support is nice, but simply removing the pre-condition check is wrong. As [removed brand name] and many other tools do, parallelism should be on a per-disk basis, not a per-partition basis: defragmenting multiple partitions on the same disk in parallel will slow things down by making more seeks than necessary. (A per-disk scheduling sketch follows after this list.) Edited by nergal: removed competition name
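
For post 1, here is a minimal, hypothetical sketch (Python, not the tool's actual code) of what "more sequential reads for big files" could mean: read each of the file's extents with one seek followed by large chunked reads instead of many 4 KB random reads. The device path, extent list, and chunk size are illustrative assumptions.

```python
# Sketch only: read a fragmented file's data extent by extent, in large
# sequential chunks, instead of 4 KB random reads. 'extents' and CHUNK are
# illustrative assumptions, not names from the tool.
CHUNK = 4 * 1024 * 1024          # 4 MiB per read keeps the disk mostly sequential

def read_extents_sequentially(dev_path, extents):
    """extents: list of (byte_offset, byte_length) for one file, in logical order."""
    with open(dev_path, "rb", buffering=0) as dev:
        for offset, length in extents:
            dev.seek(offset)                 # one seek per extent...
            remaining = length
            while remaining > 0:             # ...then large sequential reads inside it
                data = dev.read(min(CHUNK, remaining))
                if not data:
                    break
                remaining -= len(data)
                yield data

# At ~70-90 MB/s sequential vs ~1 MB/s at 4 KB random, a 1 GiB file in a few
# hundred extents costs a few hundred seeks instead of ~260,000 tiny reads.
```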
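
For post 2, a hedged sketch of the gap-filling behaviour being requested: leave fragments that are already in place alone and slide only the out-of-place ones into the free slots immediately following them, falling back to whole-file relocation only when a needed slot is actually occupied. The slot/fragment model and the gap_fill name simply mirror the a..h example above; they are not the tool's code.

```python
def gap_fill(layout, frags):
    """layout: dict slot -> fragment number, None for free, 'other' for foreign data.
    frags: the file's fragments in logical order, e.g. [1, 2, 3]."""
    slots = sorted(layout)                      # 'a'..'h'
    start = slots.index(next(s for s in slots if layout[s] == frags[0]))
    moves = []
    for i, frag in enumerate(frags):
        target = slots[start + i]               # where this fragment should end up
        if layout[target] == frag:
            continue                            # already in place: do not touch it
        if layout[target] is not None:
            return None                         # target occupied: whole-file relocation needed
        src = next(s for s in slots if layout[s] == frag)
        layout[src], layout[target] = None, frag
        moves.append((frag, src, target))
    return moves

layout = dict(a=1, b=None, c=2, d=3, e=None, f=None, g=None, h="other")
print(gap_fill(layout, [1, 2, 3]))   # [(2, 'c', 'b'), (3, 'd', 'c')] -- only 2 and 3 move
```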
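
For post 3, a hedged sketch of "parallel on a multiple-disk basis, not partition basis": partitions are grouped by the physical disk they live on, disks run in parallel, and partitions that share a disk are handled one after another so two jobs never fight over the same spindle. partition_to_disk, disk_of and defragment are illustrative placeholders, not the tool's API.

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

partition_to_disk = {"C:": 0, "D:": 0, "E:": 1}  # assumption: C: and D: share disk 0

def disk_of(partition):
    return partition_to_disk[partition]          # placeholder physical-disk lookup

def defragment(partition):
    print("defragmenting", partition)            # placeholder for the real work

def defrag_in_parallel(partitions):
    by_disk = defaultdict(list)
    for p in partitions:
        by_disk[disk_of(p)].append(p)
    # One worker per physical disk; partitions on the same disk stay sequential.
    with ThreadPoolExecutor(max_workers=len(by_disk)) as pool:
        for disk_partitions in by_disk.values():
            pool.submit(lambda ps=disk_partitions: [defragment(p) for p in ps])

defrag_in_parallel(["C:", "D:", "E:"])
```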