
Less Strict Fragment Definition, Display & Reporting



If you read the Defraggler discussion forum, it is not long before you notice that new users are confused by high fragmentation figures after runs. I believe this is often caused by large multi-fragment files, saved (mostly) in large chunks:

 

o File fragments larger than 50MB which are ignored by Quick Defrag

o System Volume Information

o Large Files like pagefile.sys & hiberfil.sys which may be in only 2 or 3 chunks

o System Files like $MFT & $UsnJrnl:$J

 

Much Defraggler end-user time would be saved if blocks were only shown in red when there was a real performance problem. It is human nature to want an all-blue drive map display and a defrag figure near 0%. The forum threads show plenty of evidence of people battling with multiple passes, and posting in frustration out of concern for high fragmentation figures. It is also annoying, when using the "Move Large Files to end of drive" option, to find a folder or system file of some type breaking a file up, so that Defraggler tries to reassemble the 100+ MB fragments despite them not being a real performance problem. Similarly there are complaints about running Defraggler and finding MORE, not less, fragmentation after the run, due to file layout changes.

 

I would propose that some tuneable chunk size (even a tiny 4 MB chunk is 1,024 4 KB blocks/pages, or 8,192 512-byte sectors) not count as a fragment to be reported, and that a smaller fragment equal to the whole file size not count as a fragment either. That would avoid the diminishing returns of laying out large files perfectly contiguously in one extent, and accept some (tuneable) chunk size as reasonable, as good enough. The performance cost of striving for a perfect on-disk layout is far greater than the real gains; the current fragmentation reporting is causing people to waste time "gaming" Defraggler & Windows to try and achieve all blue and 0%, turning off System Restore points and fiddling with page files.
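
To make the idea concrete, here is a minimal sketch (in Python, purely illustrative: the chunk_size default, the per-file fragment lists and the percentage formula are my own assumptions, not Defraggler's actual internals) of how a filtered fragmentation figure could be computed:

```python
# Hypothetical filtered fragmentation metric: fragments at least chunk_size
# bytes long are treated as "good enough" and not reported.

def counted_fragments(fragment_sizes, chunk_size=4 * 1024 * 1024):
    """Count only fragments small enough to plausibly matter for performance."""
    if len(fragment_sizes) <= 1:
        return 0  # a file in a single piece contributes nothing
    return sum(1 for size in fragment_sizes if size < chunk_size)

def reported_fragmentation(files, chunk_size=4 * 1024 * 1024):
    """Fragmentation % over bytes in files that still contain small fragments."""
    fragmented = sum(sum(f) for f in files if counted_fragments(f, chunk_size) > 0)
    total = sum(sum(f) for f in files) or 1
    return 100.0 * fragmented / total

# A 700 MB file in five ~140 MB pieces no longer counts, while a 2 MB file
# shredded into 50 tiny extents is still reported.
files = [[140_000_000] * 5, [40_000] * 50]
print(reported_fragmentation(files))   # ~0.3% instead of 100%
```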

 

Those who pedantically insist on the current behaviour could have the tuneable set to either a huge value or 0, which would also likely aid in algorithm testing.

 

The Quick Defrag "Have Fragments Smaller than" (50 MB) option seems sensible; a tuneable for full defrag is perhaps less doable, but avoiding shifting large file fragments seems equally desirable for performance reasons.


  • 4 weeks later...

So, long story short, you want:

 

- Large system files (pagefile/hiberfile/sys volume etc.) to be excluded from the defrag report by default (but advanced users can defrag them if desired)

- Files grouped by default in a predefined defrag mask that sets 4k clusters (or some other predefined cluster set) to optimize defrag.

 

Right?


  • 2 weeks later...

In short, I suggest saner, more pragmatic reporting of fragmentation, and indeed toleration of fragments in large chunks (as Quick Defrag already does). Fragmentation has a large overhead when files are split into very many small pieces, not when they are merely in several large chunks requiring a small number of extra disk seeks to read them.
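
To put rough numbers on that (a back-of-envelope sketch; the 10 ms seek time and 100 MB/s throughput are assumed typical hard-drive figures, not measurements):

```python
# Back-of-envelope read-time estimate: one seek per fragment plus the
# sequential transfer time for the whole file.

def read_time_seconds(fragments, file_bytes, seek_ms=10.0, mb_per_s=100.0):
    return fragments * seek_ms / 1000.0 + file_bytes / (mb_per_s * 1_000_000)

one_gb = 1_000_000_000
print(read_time_seconds(1, one_gb))      # ~10.01 s  (contiguous)
print(read_time_seconds(5, one_gb))      # ~10.05 s  (five large chunks)
print(read_time_seconds(5000, one_gb))   # ~60 s     (thousands of small pieces)
```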

 

At present (unlike the Windows 7 defragger) Defraggler highlights every additional fragment, no matter how small the proportionate overhead the extra fragment will really cause.

 

The way it is currently reported causes the end user to waste time and resources defragging hyper-thoroughly, hitting diminishing returns to be rid of an alarmingly high fragmentation %. They are also regularly posting questions in the forum, concerned about the high reported fragmentation %.


It is up to the user's discretion to defrag when they want. To only have Defraggler notify when a significant performance gain can be achieved would be impossible for Defraggler to gauge unless it ran a benchmark test every time it analyzed the drive, and more often than not with modern hard drives you won't shave off much time even if you defrag. While your approach of restricting parts of the report would probably prevent some user confusion, ultimately the only solution is a higher degree of computer literacy.

 

Maybe Defraggler could notify if a file is a system one. That could save some users some hassle in their quest for a blue drive.

 

By the way, if you set the page file to one fixed size and then do a boot-time defrag, you'll never have to worry about a fragmented page file again.


First, thanks for entering into discussion; hopefully answering your points will make it clearer why I think a tunable chunk size would in practice help end users without preventing an expert from reviewing every single fragment.

 

In fact, considering how to improve "computer literacy" and be more useful to the expert, more information could be conveyed by using red where small fragments are present, and a purplish blue where files are not contiguous but are in large chunks, to draw the attention of those wanting even huge files to be contiguous despite the heavy overhead of maintaining such a layout over time (compacting). The colour coding would be very similar to the allocation-density idea, with heavily used blocks in darker blue and sparse blocks shown in light blue.
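
A rough sketch of the classification such colour coding implies (the threshold and colour names are my own illustrative choices, not anything Defraggler does today; a real version would presumably also tolerate a smaller final extent):

```python
# Illustrative drive-map classification for a file's blocks.

def block_colour(fragment_sizes, chunk_size=4 * 1024 * 1024):
    if len(fragment_sizes) <= 1:
        return "blue"         # contiguous, nothing to do
    if all(size >= chunk_size for size in fragment_sizes):
        return "purple-blue"  # fragmented, but only into large chunks
    return "red"              # small fragments that genuinely cost seeks

print(block_colour([700_000_000]))        # blue
print(block_colour([140_000_000] * 5))    # purple-blue
print(block_colour([40_000] * 50))        # red
```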

 

The frequent recommendation to remove System Restore points, and other tips to get to a 0% (or near) fragmentation figure, can't really be the best use of user time, hard disk bandwidth or electrical power. System Restore is a useful feature when you do discover you need it, even if that is relatively rare.

 

It is up to the user's discretion to defrag when they want.

PCs are meant to aid productivity; there are all sorts of things chosen for the end user by the hardware and software designers. A tunable chunk size allows the expert to turn off filtering when desired, to drill down to every fragment, but avoids drawing attention on every run to insignificant levels of fragmentation. Those who wish to have all fragmentation reported can choose it when they want. At present I cannot turn off this misleading reporting, but have to check the file lists and simply ignore the block colouring and global defrag %, because the info presented is not really useful; only the file size and fragment count are.

 

The current situation, rather than educating the user, effectively misleads: the slightest level of fragmentation results in huge areas of red blocks and an alarmingly high "fragmentation %", which suggests a problem, rather than a few milliseconds added to accessing a large file which may take the disk a minute to read in at maximum speed.

 

The Windows defrag utility avoids alarming the user, but is inflexible, in that you can't selectively defrag a small number of heavily fragmented small files (in a cache, say), which can be expected to be re-read and would benefit from being contiguous, without defragging the whole filesystem.

 

Effectively the current reporting is underplaying the advantages of Defraggler's file-based defragmentation feature, by steering new users towards performing a full defrag (by saying the file system is 30% fragmented). In actual fact, they most likely don't need the full file system defrag at all, but could just use Quick Defrag and manually select a few extra files (because Win 7 & Vista have a regular scheduled defrag by default).

 

To only have Defraggler notify when a significant performance gain can be achieved would be impossible for Defraggler to gauge unless it ran a benchmark test every time it analyzed the drive, and more often than not with modern hard drives you won't shave off much time even if you defrag.

Whilst I do understand your point, I think there's no need in practice to calculate exactly. As you say, one doesn't notice the improved layout with modern disks; what you would notice is reduced defrag time.

 

Rather than dynamically benchmark, just realise that when reading large amounts of data the seek time becomes insignificant. Even contiguous data cannot be read back in one transfer, and whilst reading large files back there's likely to be lots of other seeking anyway in service of other tasks. The fragments add milliseconds, which the OS can anticipate anyway for sequential access via read-ahead, to files that take many seconds to read.

 

 

On the forum, there's a good sample of confused posters, who are probably only the minority of those who become alarmed. Rather than being steered towards the unique strength of Defraggler (file defragmentation), they're becoming concerned about increased fragmentation and anomalous percentages being reported.


I'm not against the purple coloured block idea although the default MFT area colour would have to be changed of course.

 

Actually most of the high reported fragmentation and confusion arises from attempting to defragment system files, especially for those using Windows Vista or Windows 7 and trying to defragment System Volume Information, which is why I suggested notifying if a file is a system or otherwise undefragmentable file, or excluding such files altogether, although I favour the former suggestion.

 

The chunk size idea of course would help the novice user, but it doesn't do anything to inform those who don't realise that you don't need to quash every fragmented file. Once users discover the chunk size setting and decide to set it to 1 PB, the issue arises again. The problem is that a lot of people who switch from Windows Defrag to other defrag software do so because they think it will give them big performance gains, that 0% fragmentation = 10000% performance boost, and they are adamant this is the case, driven on by articles postulating this point. It will always be a 'battle' between those who are more informed and those who aren't as much.

 

One thing I would like is for there to be more of a presence for file defrags, because most people new to Defraggler begin by clicking on drive defrag instead of doing a file defrag, and end up thinking that Defraggler is slow and useless.


I'm not against the purple coloured block idea although the default MFT area colour would have to be changed of course.

Good point; a reddish-blue for a partly defragmented area seemed logical, and perhaps it would be distinguishable from the MFT colour.

 

Actually most of the high reported fragmentation and confusion arises from attempting to defragment system files, especially for those using Windows Vista or Windows 7 and trying to defragment System Volume Information, which is why I suggested notifying if a file is a system or otherwise undefragmentable file, or excluding such files altogether, although I favour the former suggestion.

But not all of it; I have had similar with large AVI files, patches distributed as exes, and ISOs. The SVI info would generally be excluded by a tuneable chunk size, as it appears some care is taken over its block allocation. You hit this particularly if you have any non-NTFS FAT32 filesystems in use, as the folders aren't relocatable by defrag, it seems, and they tend to annoyingly end up inflating the fragmentation figure by splitting large files.

 

The chunk size idea of course would help the novice user

Only if the default is to have it set to a sensible value like the 50 MB used by Quick Defrag. Not compacting these large chunks would probably help too, so the folders and small files can occupy a denser area, hopefully regularly making some seeks redundant (more folders & files read in a short space of time on the same cylinder).

 

but it doesn't do anything to inform those who don't realise that you don't need to quash every fragmented file. Once users discover the chunk size setting and decide to set it to 1 PB, the issue arises again. The problem is that a lot of people who switch from Windows Defrag to other defrag software do so because they think it will give them big performance gains, that 0% fragmentation = 10000% performance boost, and they are adamant this is the case, driven on by articles postulating this point. It will always be a 'battle' between those who are more informed and those who aren't as much.

Yes, which is the reason I suggested a tuneable; I was actually expecting more negative reaction from the "no fragmentation is acceptable" camp. It does not take most people too long, though, to realise that 0% fragmentation and compacting cause a lot of I/O work for no perceptible performance gain.

 

One thing I would like is for there to be more of a presence for file defrags, because most people new to Defraggler begin by clicking on drive defrag instead of doing a file defrag, and end up thinking that Defraggler is slow and useless.

I agree, and I would ponder suggesting a change of buttons: call "Quick Defrag" something like "File Defrag" and default to it, and change "Defrag" to "Thorough Defrag" or "Compacting Defrag".

 

As non-advanced users are probably benefitting from the scheduled Windows defrag utility anyway, most of the performance gain would be realised via the quick option.


Another thread illustrating the point about the effect of the current fragmentation reporting: "defrag result higher and higher".

 

Incidentally, a benefit of "chunking" could be to speed compaction: rather than needing to shift huge, very variably sized fragments, you'd have many more standard-size large file chunks, plus the smaller leftover last fragments within a more constrained size range. Holes could more often be filled by relocating chunks, rather than shifting GBs of files one after the other as tends to happen currently.
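
As a toy illustration of how that could simplify hole-filling (the 50 MB chunk size and the hole sizes are arbitrary assumptions; a real planner would obviously track cluster positions, not just sizes):

```python
# Toy compaction planner: instead of relocating whole multi-GB files, fill
# each free hole with as many standard-size chunks as fit; leftover space
# smaller than one chunk is simply left alone.

def plan_hole_fill(hole_sizes_mb, chunk_mb=50):
    return [(hole, hole // chunk_mb) for hole in hole_sizes_mb]

print(plan_hole_fill([120, 380, 45]))   # [(120, 2), (380, 7), (45, 0)]
```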

