CCleaner Community Forums

Rob Defraggle

  1. It generally doesn't. A common question is "why is my disk filled with large fragmented files?", and the answer is almost always the System Volume Information folder used by System Restore. Was your disk fairly full, and not defragged with Defraggler before? Defraggler tends to do more defrag work, and compacts files, unlike the Windows defragmenter.
  2. It looks like it's defragging a file on the C: drive, not G:.
  3. Thank you for the explanation and helpful interest, by the way. For interest, I'll republish for both the Win7 & Vista system disks for comparison purposes. I never ran Contig on the Win7 system partition, as I haven't had any anomaly with the $MFT which seemed to need fixing. The 0xC0000 offset must be what romanov meant by the "first cluster" of $MFT, so Defraggler's Full Defrag layout policy sounds seriously compromised if it is filling in very many small holes at the beginning of the Full Defrag (as it appears to be doing). Win7 64 bit upgrade - fresh NTFS made during i
  4. Ah OK, sneaky, I missed the 'not deleted' file option.
     Filename: $MFT
     Path: V:\
     Size: 178 MB (187,105,280)
     State: Not deleted
     Creation time: 19/10/2007 23:20
     Last modification time: 19/10/2007 23:20
     Last access time: 19/10/2007 23:20
     Comment: No overwritten clusters detected.
     4 cluster(s) allocated at offset 786432
     1 cluster(s) allocated at offset 1233057
     1 cluster(s) allocated at offset 1224516
     6 cluster(s) allocated at offset 4458165
     For the last Contig run to fix V:\$MFT I didn't save the actual output; it was similar to the previous: Processing V:\$Mft: Sca
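  As an aside, cluster-extent lines like those in the report above can be tallied mechanically to get the fragment count and total clusters. A minimal sketch (the parsing regex and sample text are my own, not a guaranteed Recuva output format):

  ```python
  import re

  # Match "N cluster(s) allocated at offset M" lines, as seen in the
  # Recuva report quoted above, and collect (count, offset) extents.
  EXTENT_RE = re.compile(r"(\d+) cluster\(s\) allocated at offset (\d+)")

  def parse_extents(report: str) -> list[tuple[int, int]]:
      return [(int(n), int(off)) for n, off in EXTENT_RE.findall(report)]

  report = """\
  4 cluster(s) allocated at offset 786432
  1 cluster(s) allocated at offset 1233057
  1 cluster(s) allocated at offset 1224516
  6 cluster(s) allocated at offset 4458165
  """

  extents = parse_extents(report)
  # 4 extents means the file is in 4 fragments; 4+1+1+6 = 12 clusters total.
  print(len(extents), "fragments,", sum(n for n, _ in extents), "clusters")
  ```

  Note the second and third extents are not even in ascending offset order (1233057 before 1224516), which is itself a sign of out-of-order allocation.
  
  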
  5. Still present with 2.03.282. According to the Contig tool the $MFT is in 1 piece and was moved (maybe it is fibbing and discounting the first cluster of $MFT), but at least it allocates the rest in 1 piece, though I'd prefer it nearer the main file blocks, with $Bitmap next to it. I've installed and tried to use Recuva, but it just seems to want to look for deleted files. If it can report on $MFT cluster allocation, it is not obvious to me how to do it, even after looking through the program feature documentation. Could you explain how, please? If I know what clusters are free, I think
  6. They have sped up the defrag a lot in 2.03 on my FAT filesystem partition; for me it's been the best release so far, but perhaps you have discovered a regression. I suspect it may be slower when compacting many small files during a full defrag, after I did some software uninstalling on an NTFS partition, but I don't think I could easily repeat it with an older version. A bug I found & reported in the Bug Report section of the forum caused massive fragmentation of the $Mft. To provide more info, I was asked to create a debug logfile to upload and submit that in the Bug Repor
  7. Very likely a non-zero "large file size" threshold is turning the feature on, and the default of zero turns it off; re-using a value in this way to mean 2 slightly different things is quite common. With Windows 7, and as far as I can see in Vista, it isn't done that way anymore. A pagefile under heavy use is likely to need fast random access rather than maximum sustained sequential read/write speed, and similarly the $Mft. Rather than locating the pagefile on the fastest part of the disk, they seem to go around the centre of free space, after most system & user files, until they fi
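  The overloading described above, where one setting acts as both an on/off switch and a threshold, can be sketched like this; the names and units are hypothetical illustrations, not Defraggler's actual settings:

  ```python
  # Hypothetical "large file size" setting that doubles as an enable flag:
  # 0 disables the feature entirely; any positive value enables it AND sets
  # the minimum size (in MB) at which a file counts as "large".
  def is_large_file(size_mb: float, large_file_threshold_mb: int) -> bool:
      if large_file_threshold_mb == 0:   # zero acts as "feature disabled"
          return False
      return size_mb >= large_file_threshold_mb

  print(is_large_file(500, 0))    # threshold 0: feature off, never "large"
  print(is_large_file(500, 250))  # threshold 250 MB: 500 MB qualifies
  print(is_large_file(100, 250))  # 100 MB falls below the threshold
  ```

  The design trade-off is that a single stored value keeps the UI simple, at the cost of making "off" and "threshold of zero" indistinguishable.
  
  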
  8. Use the drive map feature to discover what these clusters contain, and then an answer will likely become clear.
  9. Good point! The "Quick Defrag" would suffice to catch any small heavily fragmented files, the cases where defragging really helps a lot.
  10. Because with hibernation the machine really is turned off, but with the state saved so it can be restored on reboot. This is faster for most people than having the desktop and all those services restarted. Sane people don't want a 25 s boot, only to then spend a minute on mouse clicks reorganising their applications. They'd rather have it all just come back the way they left it, which happens with Hibernate and allows a full power off, including the PC power supply.
  11. Is this really significant? The most popular UNIX-style filesystems have traditionally spread files around as a deliberate strategy, so that large files can be stored in contiguous chunks and small files stored near their containing directories (folders), unless the disk is very full. By doing this they greatly reduce the need to run defragmentation programs. Whilst some speed-up in benchmarks would be obtained for sustained sequential read speed by compaction, I'm not at all convinced any really perceivable difference would be noticed by the end user in daily desk
  12. Hibernate works reliably with solid drivers; I hardly ever shut down Windows 7, because the regular OS updates mean frequent reboots anyway. Not only does it turn your machine off and reboot to a usable desktop faster, but Hibernate restores the desktop application setup to exactly where you left off, so you can resume next morning without the pain of re-logging in and restarting applications. Sleep is not a feature I need to invoke myself, as I have it kick in automatically on idle. What would be useful is if Defraggler were able to disable all power-saving suspends until it finished, so you
  13. Yes, I would like an option to Hibernate rather than Shutdown. I don't tend to use the "Shutdown" feature though, as when I did try it, on powering on, Windows would start and then shut down again, requiring an additional reboot for some reason.
  14. I regularly use "Defrag Freespace" and it is not doing that for me (I think someone else reported similar), on Windows 7 64 bit. The point, though, is that compaction is already done by a full defrag; there's an option to "Move Large Files" to the end of the disk by a selection criterion, but there's no analogous way of selecting files to be preferentially moved to the beginning of the disk, ahead of the mass left after compaction. I can imagine some people wanting maximum sequential read speed on some large files, for example, or localising a folder and its small files close together on disk via such a f
  15. If it's fairly simple to do based on code already existing for the defrags, I would occasionally find it useful to be able to multi-select drives for analysis and then view the drive maps, rather than having to select a drive, choose Analyze, and then look at the map. It would be more orthogonal if the UI worked the same way for all drive actions, allowing queuing.