If you read the Defraggler discussion forum, it is not long before you notice that new users are confused by high fragmentation figures after runs. I believe this is often caused by large multi-fragment files, saved (mostly) in large chunks:
o File fragments larger than 50MB which are ignored by Quick Defrag
o System Volume Information
o Large Files like pagefile.sys & hiberfil.sys which may be in only 2 or 3 chunks
o System Files like $MFT & $UsnJrnl:$J
Much Defraggler end-user time would be saved if blocks were only shown in red when there was a real performance problem. It is human nature to want an all-"Blue" drive map and a defrag figure near 0%, and the forum threads show plenty of evidence of people battling through multiple passes and posting in frustration over high fragmentation figures. It is also annoying, when using the "Move Large Files to end of drive" option, to find a folder or system file of some type breaking a file up, so that Defraggler tries to reassemble the 100+ MB fragments despite them not being a real performance problem. Similarly, there are complaints about running Defraggler and finding MORE, not less, fragmentation after the run, due to file layout changes.
I would propose that fragments at or above some tuneable chunk size (even a tiny 4MB chunk would be roughly 1,000 4KB blocks/pages, or 8,000 512-byte sectors) not count as fragments to be reported, and that a smaller fragment equal to the file size (i.e. a whole small file in one piece) not count as a fragment either. That would avoid the diminishing returns of laying out large files perfectly contiguously in one extent, and accept some (tuneable) chunk size as good enough. The performance cost of striving for a perfect on-disk layout is far greater than the real gain; the current fragmentation reporting causes people to waste time "gaming" Defraggler & Windows to try and achieve all blue and 0%, turning off system restore points & fiddling with page files.
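The proposed counting rule could be sketched roughly like this (a hypothetical illustration, not Defraggler's actual code; the function name and the 4MB default are my own assumptions):

```python
# Hypothetical sketch of the proposed reporting rule: only count extents
# that represent a real performance problem. Not Defraggler's actual API.

# Tuneable threshold: extents at least this large are "good enough"
# and would not be reported as fragments (4 MB, per the suggestion).
CHUNK_SIZE = 4 * 1024 * 1024

def reported_fragments(extent_sizes, file_size):
    """Count only the extents small enough to matter.

    extent_sizes: sizes in bytes of the file's on-disk extents.
    A single extent covering the whole file is never a fragment,
    and any extent >= CHUNK_SIZE is ignored as "good enough".
    """
    if len(extent_sizes) == 1 and extent_sizes[0] == file_size:
        return 0  # whole file contiguous: not fragmented at all
    return sum(1 for size in extent_sizes if size < CHUNK_SIZE)
```

Under this rule a 300MB file stored in three 100MB extents would report 0 fragments (so pagefile.sys or hiberfil.sys in 2 or 3 chunks stops alarming people), while a file shredded into 64KB pieces would still report every one of them.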
Those who pedantically insist on the current behaviour could set the tuneable to either a huge value or 0, which would likely also aid in algorithm testing.
The Quick Defrag "Have Fragments Smaller than" (50MB) option seems sensible. A similar tuneable for full defrag may be harder to implement, though avoiding shifting large file fragments seems equally desirable for performance reasons.