Rob Defraggle

Experienced Members
Everything posted by Rob Defraggle

  1. What more reason does one need than to reduce development & testing time by dropping support of legacy Windows in future releases? The version 1 software still works as well as it did before, so why not keep these old systems stable and stop upgrading the utilities? (Presumably they've worked well enough for almost 10 years now, or the OS would have been upgraded long ago.)
  2. What's the advantage of being unable to log in whilst it's defragging?
  3. No, the Alt key combo fits the changelog's description of a new "Hot Key for Quick Defrag". When I run 1.21, Alt, A brings up the Action menu, but there are no key combos for Defrag of any kind, so it makes sense that Alt, A, D started a full defrag in 2.0, and the new addition is Alt, A, Q.
  4. And suddenly it's so obvious!! Alt highlights the hot keys with an underscore under the hot letters for the possibilities: first the Actions, Settings and Help menus respectively. Then similar letters are underlined to trigger the next-level options.
  5. It's the Master File Table, part of the NTFS filesystem internals (a large block is likely reserved to avoid fragmenting it).
  6. I agree it's annoying, but you should use some common sense on this. For example, I have a file system that would be "perfectly" laid out, if only 2 big files weren't fragmented. One is a 150 MB ISO file, the other a 250 MB AVI video file. The ISO is in 4 pieces, the video in 2. Now whilst it is irritating to have 3% fragmentation shown, with 342 MB of fragmentation, 2 files, 6 pieces, the reality is these fragments do not have a significant performance impact. We defragment filesystems to avoid the performance penalty of highly fragmented files, i.e. small files requiring a vast number of disk seeks to read them in, not to avoid huge files being saved in a small number of chunks. Striving for perfection, we hit diminishing returns, where the costs of perfect defragmentation massively outweigh the benefits of faster reads.
  7. This would be generally useful. I have at least 9 drives shown, of which 4 are totally pointless to have in the list; anyone with removable storage is going to face this, and inserting USB flash drives adds to it. Another reason is that I've noticed you cannot remove USB memory disks safely in the usual way when Defraggler is started using the Win7 system tray icon. It claims the drive is in use, though I've found that ejecting the drive from the "Computer" window allows one to "Continue" from a warning popup, and then does allow safe removal according to the icon, with Defraggler no longer seeing the drive as mounted.
  8. Don't worry about those files, they're System Volume Information used for System Restore. They'll get deleted when the System Protection system recovers the space (you can adjust the amount set aside in Win7 from Control Panel > System > System Protection). 8 fragments for a 1.5 GB file is NOT a problem. If you are sure you won't need to recover from a past restore point, you can turn off System Restore temporarily before you defrag. In CCleaner, under Tools > System Restore, you can delete all but the last restore point. The high fragmentation figure of 7% is just a result of the way it's calculated: large files fragmented into a few huge chunks increase the fragmentation figure, because of their size, far more than small files split into hundreds of pieces. The headline fragmentation %, as it stands, just isn't a very useful predictor of performance issues.
  9. If you read the Defraggler discussion forum, it is not long before you notice that new users are confused by high fragmentation figures after runs. I believe this is often caused by large multi-fragment files, saved (mostly) in large chunks:
     o File fragments larger than 50MB, which are ignored by Quick Defrag
     o System Volume Information
     o Large files like pagefile.sys & hiberfil.sys, which may be in only 2 or 3 chunks
     o System files like $MFT & $UsnJrnl:$J
     Much Defraggler end-user time would be saved if blocks were only shown in red when there was a real performance problem. It is human nature to want a "blue" drive map display and a defrag figure near 0%. The forum threads show plenty of evidence of people battling with multiple passes, and posting in frustration out of concern for high fragmentation figures. It is also annoying, when using the "Move Large File" to end of drive option, to find a folder or system file of some type break the file up, meaning Defraggler tries to reassemble the 100+ MB fragments despite them not being a real performance problem. Similarly, there are complaints about running Defraggler and finding MORE, not less, fragmentation after the run, due to file layout changes. I would propose that fragments above some tuneable chunk size (for example, even a tiny 4MB chunk would be 1000 4KB blocks/pages, or 8000 512-byte sectors) not count as fragments to be reported, and that a smaller fragment equal to the filesize not count as a fragment either. That would avoid the diminishing returns of laying out large files perfectly contiguously in 1 extent, and accept some (tuneable) chunk size as reasonable, as good enough. The performance costs of striving for a perfect on-disk layout are far greater than the real gains; the current fragmentation reporting is causing people to waste time "gaming" Defraggler & Windows to try and achieve all blue and 0%, turning off System Restore points & fiddling with page files.
Those who pedantically insist on the current behaviour could have the tuneable set to either a huge value or 0, which would likely aid in algorithm testing. The Quick Defrag "Have Fragments Smaller than" (50MB) option seems sensible; perhaps a tuneable for full defrag is less doable, though avoiding shifting large file fragments seems equally desirable for performance reasons.
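The proposed reporting rule could be sketched roughly as follows. This is purely a hypothetical illustration of the idea, not Defraggler's actual algorithm; the function name and the 4MB default are my own inventions:

```python
# Hypothetical sketch of the proposed rule: fragments at or above a
# tuneable chunk size are ignored, and a file stored in a single extent
# is never counted as fragmented. Not Defraggler's real algorithm.

CHUNK_SIZE = 4 * 1024 * 1024  # tuneable; 4MB is roughly 1000 4KB blocks

def reportable_fragments(fragment_sizes, chunk_size=CHUNK_SIZE):
    """Count only the fragments small enough to matter for seek times."""
    if len(fragment_sizes) <= 1:   # a single extent is not fragmented
        return 0
    return sum(1 for size in fragment_sizes if size < chunk_size)

# The 150 MB ISO in 4 pieces from an earlier post: every piece is huge,
# so under this rule it would no longer be reported as fragmented.
iso_pieces = [40 * 2**20, 35 * 2**20, 40 * 2**20, 35 * 2**20]
print(reportable_fragments(iso_pieces))  # -> 0
```

A tiny cache file shredded into dozens of sub-4MB pieces would still report every piece, which is the case where fragmentation genuinely costs seeks.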
  10. When using Defraggler on a system with more than one active hard drive filesystem, rather than everything in drive C:, it can make sense to use different Defrag & Quick Defrag options to suit the usage of each filesystem. One global policy doesn't make sense, and it is hard to twiddle the options interactively and remember to undo them, which mitigates against using scheduled defrags. The suggestion would be to have global options overridden by drive-specific options, requiring a way to set these for each drive. This might provide a better way to:
     - Choose the file size and extension to be moved to end of drive
     - Set file sizes to be ignored by Quick Defrag
     - Disable defrag entirely, for an SSD drive or SD card for example (wouldn't it be nice if my drive list separated out the undefraggable drives too, rather than solely going by drive letter)
  11. The way to have pagefile.sys in 1 fragment is to recreate it in the System > Advanced > Performance > Virtual Memory tab, setting a fixed size (set Min & Max size to the same value), after defragging so there's large free space available. You may need to create a pagefile in a separate disk partition temporarily (or, with enough RAM, disable the pagefile altogether temporarily). Turning off the hibernation facility to remove hiberfil.sys, then re-enabling it after finishing file & free space defragging, and then hibernating the system right away, has worked for me. Whilst I agree that large files broken into huge chunks are NOT a real performance problem, them showing up as red and inflating Defraggler's reported fragmentation figure is irritating, and tends to mask excessive fragmentation of small cache files and the like. Having the huge files in single pieces tends to reduce the number of holes to fill, and makes it more likely other files won't be fragmented later.
  12. Exposing the File List can show exactly which files have not been defragged. Now let's examine the figures displayed: 15 files occupy 73 GB and are in 160 fragments. That means on average the fragments are about 1/2 a GB, which means each contains about 100,000 4K blocks. The red blocks and high % figure, 73 GB fragmented out of approx. 200 GB used, look bad, but aren't showing a real problem. Each of those fragments is likely larger than most of the unfragmented files on the system.
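Those averages are easy to sanity-check with rough arithmetic, using only the figures quoted above:

```python
# Back-of-envelope check of the figures above: 15 files, 73 GB, 160 fragments.
total_bytes = 73 * 2**30
fragments = 160

avg_fragment_gb = 73 / fragments             # ~0.46 GB per fragment
avg_blocks = total_bytes / fragments / 4096  # ~120,000 4K blocks per fragment

print(round(avg_fragment_gb, 2), int(avg_blocks))  # -> 0.46 119603
```

So each "fragment" here is itself around half a gigabyte of contiguous data, far bigger than most whole files on a typical system.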
  13. Seems like I just might have been unlucky after giving 2.0 a try; the reinstall of 1.21 shows similar issues now, with some large files being stubborn, though the 1.21 runs weren't taking as long as the first few 2.0 trials. Obviously, as each defrag alters the disk state, it's hard to make an objective comparison.
  14. If you have a huge file like pagefile.sys in only 2 big fragments on a fresh Win7 64 install, it could account for most of the fragmentation you're seeing. From what I can see, the Windows defragger ignores fragments above a certain size, correctly regarding the extra disk seeks as insignificant to disk read performance. Have a look at the file list; large files in only a few fragments can increase the headline fragmentation % alarmingly.
  15. Updating from 1.21 to 2.0 under Win7 64-bit, running interactively, the "Move Large File" option works as expected with Minimum size set to 100MB. I have noticed a 2.0 scheduled defrag re-located all the large files from the end of drive C:; re-running defrag interactively shifted them back as expected, as per the options. There is one file over the 250MB default minimum size in C:, and it was moved too (C:\Windows\Installer\19230e-4.msp)
  16. I'll add that others were files with names like $Bitmap, which are NTFS internals and most likely not safe to defrag with the filesystem online.
  17. Exactly, we defrag the disks to speed up read access to regularly used items. Hopefully we won't be using the System Restore points at all, so it makes sense to ignore them. Maybe the real problem is that Defraggler includes such in the "Fragmentation %"; similarly, a 10 GB file stored in 2 multi-GB chunks may massively increase the reported fragmentation, despite a negligible effect on read performance.
  18. Oh thanks, though from http://www.piriform.com/defraggler/download it's hard to see that FileHippo is preferred, but the archive is nice.
  19. Not necessarily; you can create more than 1 extra partition and have pagefile.sys reasonably close to multiple Windows installs (say Vista & a Win 7 upgrade), with it shared between them. With a good amount of RAM, e.g. 4GB, the pagefile may not even be used very heavily. Having a partition at the end of the disk, for all those huge media files, downloads, and less heavily used data, would actually be much more efficient, at the price of the added complexity of not having everything in one huge C: drive.
  20. I managed to re-download the last version 1 of Defraggler today by simply adjusting the dfsetup URL, substituting "200" -> "121". For those who need it, http://download.piriform.com/dfsetup121.exe works for me in Firefox.
  21. Anybody else noticed a very slow defrag with v2.0 on FAT32? I have the "move large file" option enabled, and whilst I have seen issues at times with large files on v1, actually causing a moved file to be fragmented, it seems like v2.0 is moving such files in some highly inefficient manner, causing an issue which later versions of v1 weren't triggering. A 29.3 GB FAT32 filesystem with 11.6 GB used can take more than an hour to defrag with v2.0 now, where it would usually be a few minutes. The partition resides on the same disk as NTFS filesystems, which are defragging at normal speed. A freespace defrag appears normal, and copying the files off to a 2nd disk and back results in very speedy transfers. There was basically nothing to do but shift some large video files, and try to defrag one with its folder splitting it, apparently.
  22. Not files and not fragmented (blue, not red); I think I discovered such blocks were folders that weren't compacted to the front of the drive. With time and defragging of the freespace, some of the filesystems speckled by such have gradually had the white blocks become less and less speckled. I had that with v1 Defraggler, and am only just switching to v2.
  23. I have seen similar. If you click on the blocks of red files and then look in the "Highlighted" tab, you will find which files are not getting defragged. They may be files like pagefile.sys & hiberfil.sys, or other system files, that are not defraggable. Another possibility I have seen is where a folder is apparently "embedded" in the middle of a large file moved to the end of the disk. The Defraggler % shown and the red tend to look worse than they really are. A 400 MB file could be in 100 pieces and will look terrible, but may actually not harm performance significantly if the fragments are clusters of 4MB, for instance, and represent about 1000 4KiB blocks each. The Windows defragmenter appears not to worry about huge fragments; probably neither should you.
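The 400 MB example's arithmetic, worked through (same rough figures as quoted above):

```python
# 400 MB file in 100 pieces: how big is each fragment, in 4 KiB blocks?
file_bytes = 400 * 2**20
pieces = 100

fragment_bytes = file_bytes // pieces  # 4 MiB per fragment
blocks = fragment_bytes // 4096        # 1024 four-KiB blocks, "about 1000"

print(fragment_bytes // 2**20, blocks)  # -> 4 1024
```

At roughly one extra seek per 4 MiB of sequential data, the seek overhead is lost in the noise of the transfer time.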