Last night I stopped the defrag that claimed it had only one minute to go. Today I restarted it, and so far it has been running for about 12 hours. This is on a disk that is basically unused. That's just one example of how bad the estimates are.
It seems to me that it should be easy to produce a quite accurate estimate. The way I see it, defrag can be done in 3 stages:
1) Build list of files and blocks
2) Decide what blocks need to be moved where
3) Move the blocks
After stage 2, you know exactly how many blocks need to be moved, as well as how much fragmentation will remain. At the very least you could have a progress indicator which simply shows the number of blocks remaining to be moved.
Once stage 3 has run for a few minutes, you've moved at least hundreds of blocks, and you have a pretty good idea what the average time per block is. Easy peasy.
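To make that concrete, here's a rough sketch (in Python, with made-up names) of the estimator I'm describing. It assumes stage 2 really does produce a complete move list up front, which is exactly the point in question:

```python
import time

class DefragEta:
    """Progress/ETA from a known move count: blocks remaining times the
    observed average seconds per block. Assumes stage 2 produced a
    complete list of moves up front."""

    def __init__(self, total_blocks):
        self.total_blocks = total_blocks  # known once stage 2 finishes
        self.moved = 0
        self.start = time.monotonic()

    def block_moved(self):
        self.moved += 1

    def blocks_remaining(self):
        return self.total_blocks - self.moved

    def eta_seconds(self):
        if self.moved == 0:
            return None  # nothing measured yet; show "estimating..."
        avg = (time.monotonic() - self.start) / self.moved
        return self.blocks_remaining() * avg
```

After a few minutes of stage 3 the average is based on hundreds of samples, so the estimate should already be fairly stable.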
Even Microsoft's estimated time remaining when copying or moving files suffers from the same problem, and has since day one.
It's all down to moving goalposts.
Whenever the data in question is constantly changing, no progress bar I've seen gets it right.
Sadly it can't be 'easy peasy' - if it were, they'd have addressed it, don't you think, rather than leaving it there and sniggering: "hee hee, this will mess with their heads".
Your 3-step process is missing quite a few fundamental steps, which all take time.
The closer you get to the slower area of the disk, the more the defrag process slows down, and if there are a lot of small files, that will also cause a speed decrease.
I personally pay no attention to Defraggler's remaining time because on my system it often grossly overestimates, giving the illusion it will take, say, 45 minutes when instead it's done in 10.
People have complained about it enough over the years that I'm surprised they haven't just removed the remaining time, or turned it off by default and made it an option that has to be turned on.
In my case, the data is not changing. Small files have no impact, since I'm assuming we generate a list of disk blocks to move; whether those blocks make up a lot of small files or one huge file makes little difference. OK, depending on where the block pointers live, it might require more writes to the index, but we're talking a difference of a factor of 2 or 3 at most. And on my disk, I believe there are few files smaller than 1MB.
If you can generate the entire list of block moves in stage 2, then you also know where the blocks are, and can easily compensate for those which are on the slower side. Even without any such compensation, the total error would be far less than one order of magnitude.
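For what it's worth, here's a rough sketch of the compensation I mean, again in Python with made-up names. The linear cost-versus-position model is purely an assumption for illustration; real transfer-rate curves are messier, but even a crude fit should beat a flat average:

```python
def weighted_eta_seconds(pending_moves, disk_blocks, samples):
    """ETA that compensates for the slower (inner) region of the disk.

    pending_moves: list of (block_position, block_count) still to do
    disk_blocks:   total addressable blocks (position 0 = fast edge)
    samples:       (block_position, seconds_per_block) pairs observed
                   so far during stage 3

    Fits cost(pos) = a + b * (pos / disk_blocks) to the samples by
    least squares, then prices each remaining move at its own position.
    """
    if not samples:
        return None
    xs = [p / disk_blocks for p, _ in samples]
    ys = [c for _, c in samples]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    var_x = sum((x - mean_x) ** 2 for x in xs)
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / var_x
         if var_x else 0.0)
    a = mean_y - b * mean_x

    # Price every pending move at the fitted cost for its position.
    return sum(count * max(a + b * (pos / disk_blocks), 0.0)
               for pos, count in pending_moves)
```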
Apparently the algorithm used does not create the entire list of required moves in stage 2. I'd be pleased to have my ignorance relieved as to why the task is more complex than I have suggested. I'm guessing it has to do with the fact that the disk is mounted, arguably requiring a more dynamic approach.