
Rob Defraggle

Experienced Members
  • Posts

    98
  • Joined

  • Last visited

Posts posted by Rob Defraggle

  1. Running Defraggler deletes all of my system restore files in Windows 7 Ultimate. Can someone advise how to change the settings in Defraggler so it does not delete those system restore files?

    It generally doesn't. A frequent question is "why is my disk filled with large fragmented files?", and the answer is almost always the System Volume Information folder used for restore points.

     

    Was your disk fairly full, and not defragged with Defraggler before? Defraggler tends to do more defrag work, and it compacts files, unlike the Windows defrag.

  2. Thank you for the explanation and helpful interest by the way :D

     

    So, for interest, I'll republish the listings for both the Win7 and Vista system disks for comparison.

    I never ran Contig on the Win7 system partition, as I haven't had any anomaly with the $MFT that seemed to need fixing.

     

    The 0xC0000 offset must be what romanov meant by the "first cluster" of the $MFT, so Defraggler's Full Defrag layout policy sounds seriously compromised if it is filling in very many small holes at the beginning of the Full Defrag (as it appears to be doing).

     

    Win7 64 bit upgrade - fresh NTFS made during installation

    Filename: $MFT
    Path: C:\
    
    Size: 150 MB (157,286,400)
    
    State: Not deleted
    
    Creation time: 06/11/2009 01:05
    
    Last modification time: 06/11/2009 01:05
    
    Last access time: 06/11/2009 01:05
    
    Comment: No overwritten clusters detected.
    
    38400 cluster(s) allocated at offset 786432
    6 cluster(s) allocated at offset 4085789

     

    Vista 32bit

    Filename: $MFT
    Path: V:\
    
    Size: 178 MB (187,105,280)
    
    State: Not deleted
    
    Creation time: 19/10/2007 23:20
    
    Last modification time: 19/10/2007 23:20
    
    Last access time: 19/10/2007 23:20
    
    Comment: No overwritten clusters detected.
    
    4 cluster(s) allocated at offset 786432
    1 cluster(s) allocated at offset 1233057
    1 cluster(s) allocated at offset 1224516
    6 cluster(s) allocated at offset 4458165
    

     

    Although the Win7 $MFT is on a later, slower part of the disk, the scan was noticeably faster than that of V:

     

    I have of course tried multiple passes of full defrag and freespace defrag, plus runs of Contig, in the hope of getting a reasonable $MFT that Defraggler likes, rather than it relocating the $MFT on every Full Defrag run.
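    As a quick sanity check on the listings above, the numbers do hang together. The 4 KiB cluster size used here is my assumption (the NTFS default for volumes like these), not something Recuva reports:

```python
# Sanity-check the Recuva $MFT listings.
# Assumption: 4 KiB clusters, the NTFS default (not stated in the listings).
CLUSTER = 4096

# The Win7 main run starts at cluster offset 786432, which is romanov's 0xC0000.
print(hex(786432))              # -> 0xc0000
print(786432 * CLUSTER)         # byte offset of the run: 3221225472, i.e. 3 GiB in

# The 38400-cluster main run alone accounts for the whole reported file size.
print(38400 * CLUSTER)          # -> 157286400, matching "150 MB (157,286,400)"
```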

  3. Ah OK, sneaky, I missed the 'not deleted' file option.

     

    Filename: $MFT

    Path: V:\

    Size: 178 MB (187,105,280)

    State: Not deleted

     

    Creation time: 19/10/2007 23:20

    Last modification time: 19/10/2007 23:20

    Last access time: 19/10/2007 23:20

     

    Comment: No overwritten clusters detected.

     

    4 cluster(s) allocated at offset 786432

    1 cluster(s) allocated at offset 1233057

    1 cluster(s) allocated at offset 1224516

    6 cluster(s) allocated at offset 4458165

     

     

    For the last Contig run to fix V:\$MFT I didn't save the actual output; it was similar to the previous one:

     

    Processing V:\$Mft:

    Scanning file...

    Scanning disk...

    File is 45679 physical clusters in length.

    File is in 1699 fragments.

     

    Moving 45679 clusters at file offset cluster 4 to disk cluster 13894015


     

    My thinking was that I could engineer a free spot for the cluster allocation, either by creating and then deleting a file that occupied roughly the right place, or, once things are tidy, by subtracting a good number of clusters from the $Bitmap allocation found, or something similar. As you can see, those files sit in goodly free space after the full Defrag runs; I had a good clean-up of this partition V:\

     

    If the 4 cluster(s) allocated at offset 786432 are the $MFT and the 1 cluster allocated at offset 1233057 is the $Bitmap, as shown by the Defraggler drive map, then the numbers seem plausible. Shame I didn't log my last Contig run; it seems to fit romanov's explanation anyway.

     

    It's confusing if Contig is lying somewhat in its output: it reports $MFT and $MFT::Bitmap as 1 fragment each, yet Defraggler then sees 2 fragments for a supposedly defragmented file.
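    The first-cluster idea is easy to check with arithmetic; the 4 KiB cluster size is again my assumption, not something in the Contig output:

```python
# Assumption: 4 KiB NTFS clusters.
CLUSTER = 4096

size_bytes = 187_105_280            # reported $MFT size on V:
total_clusters = size_bytes // CLUSTER
print(total_clusters)               # -> 45680

# Contig reported 45679 physical clusters, exactly one short of the total,
# which is consistent with it discounting the first cluster of the $MFT
# and would also explain the 1-fragment vs 2-fragments discrepancy.
print(total_clusters - 45679)       # -> 1
```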

     

    Suffice to say, I'm probably best off ditching V: and re-initialising the filesystem before re-using it. Though it's a real weakness in the $MFT allocation during full Defrag if this happens after plenty of effort to free up space and to defrag both files and the free list.

  4. Still present with 2.03.282

     

    According to the Contig tool the $MFT is in 1 piece and has been moved (maybe it is fibbing and discounting the first cluster of the $MFT), but at least it allocates the rest in 1 piece, though I'd prefer it nearer the main file blocks, with the $Bitmap next to it.

     

    You could use Recuva (as above) to see the MFT cluster allocation

    I've installed and tried to use Recuva, but it just seems to want to look for deleted files. If it can report on $MFT cluster allocation, it is not obvious to me how to do it, even after looking through the program's feature documentation. Could you explain how, please?

     

    If I knew which clusters were free, I think Contig could defrag to a specific location if set manually on the command line, but I've no safe way of knowing where to relocate it at the moment, so I let it decide automatically.

  5. I am a software engineer, and I usually don't complain, as I understand that what sometimes seems like a small enhancement can amount to a huge modification for the application programmers.

     

    However, I find I must say that the defragment operation of version 2.03 is 3 to 5 times slower than version 2.02.

     

    There should be very little difference in speed from version to version, as this is handled primarily by the OS API. Possibly there is a loop somewhere in the code that misses a limit and runs up to the language maximum (since it appears to be slow for each non-contiguous piece defragged). In my experience it can sometimes be a global type or value that causes this sort of thing, or maybe object logic; I'm not sure.

    In 2.03 they have sped up the defrag a lot on my FAT filesystem partition; for me it's been the best release so far, but perhaps you have discovered a regression. I suspect it may be slower when compacting many small files during a full defrag (I noticed this after some software uninstalling on an NTFS partition), but I don't think I could easily repeat the comparison with an older version.

     

    A bug I found and reported in the Bug Report section of the forum caused massive fragmentation of the $Mft. To provide more info, I was asked to create a debug logfile to upload and submit in the Bug Reporting part of the forum.

  6. Very likely a non-zero "large file size" threshold turns the feature on, and the default of zero turns it off; re-using a value in this way to mean 2 slightly different things is quite common.

     

    You know when you freshly format an NTFS volume and you get a blank bit at the start (fastest part of the disk) then pagefile, then MFT, and the MFT reserved zone, then finally the remaining free space? The idea being, windows has the pagefile/mft/reserved mft zone at the beginning of the disk

    With Windows 7, and as far as I can see in Vista, it isn't done that way any more.

     

    A pagefile, when used heavily, is likely to need fast random access rather than maximum sustained sequential read/write speed, and similarly the $Mft. Rather than locating the pagefile on the fastest part of the disk, they seem to place it around the centre of the free space, after most system and user files, until users fill the disk with rubbish from YouTube and iTunes.

     

    If you can, it's probably faster to do a full backup to another disk and remake and restore the NTFS volume fresh, rather than move files on the same disk twice.

  7. USB 2.0 (if it's operating at that speed) is slower than IDE / SATA therefore defragging files will take time.

    Personally I wouldn't bother defragging an external drive because the slow USB speed would mean it would take the same amount of time to transfer a fragmented file as one that's not fragmented

    Good point! The "Quick Defrag" would suffice to catch any small, heavily fragmented files, the cases where defragging really helps a lot.
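    A rough back-of-the-envelope model illustrates why fragmentation matters less over a slow bus. All the figures here are my own illustrative assumptions (roughly 30 MB/s effective USB 2.0 throughput, 12 ms of seek and rotational delay per fragment), not measurements:

```python
# Rough model: read time = transfer time + one seek per fragment.
# All constants are illustrative assumptions, not measurements.
MBPS = 30e6      # assumed effective USB 2.0 throughput, bytes/second
SEEK = 0.012     # assumed seconds of seek + rotational delay per fragment

def read_time(size_bytes, fragments):
    return size_bytes / MBPS + fragments * SEEK

size = 700 * 1024 * 1024                 # a 700 MB file
contiguous = read_time(size, 1)
fragmented = read_time(size, 50)
# The 49 extra seeks add ~0.6 s to a ~24.5 s transfer: a few percent at most.
print(f"{contiguous:.2f} s vs {fragmented:.2f} s")
```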

  8. - Hibernation is a deep sleep. Standby is a light sleep. Sure, hibernation is much closer to being "off". But since my system can fully load windows in less than 25 seconds (or less) why do I want to waste time with hibernation when it takes 15 seconds (or more) to fully waken? I'd rather just shut it down or use standby. If it isn't pretty instant, I have no use for it. And hibernate is not instant

    Because with hibernation the machine really is turned off, but with the state saved so it can be restored on boot. For most people this is faster than restarting the desktop and all those services.

     

    Sane people don't want a 25 s boot only to then spend a minute on mouse clicks reorganising their applications. They'd rather have it all come back the way they left it, which is what Hibernate does, while allowing a full power-off including the PC power supply.

  9. The files are spread all over the drive, however. The files need to be consolidated so the free space is contiguous, because with most of the free space blocks having some kind of data in it, the computer speed will suffer

    Is this really significant? The most popular UNIX-style filesystems have traditionally spread files around deliberately, so that large files can be stored in contiguous chunks and small files stored near their containing directories (folders), unless the disk is very full. By doing this they greatly reduce the need to run defragmentation programs.

     

    Whilst compaction would yield some speed-up in sustained sequential read benchmarks, I'm not at all convinced the end user would notice any real difference in daily desktop use once the small fragments are removed.

  10. Hibernate works finicky & even causes some systems to lock up trying to come out of it!

    Plus, it takes as long or longer than a reboot, on some systems!

    Why use hibernate if rebooting is just as quick, & it causes crashes?

     

    Hibernate works reliably with solid drivers; I hardly ever shut down Windows 7, since the regular OS updates mean frequent reboots anyway.

     

    Not only does it turn your machine off and reboot to a usable desktop faster, but Hibernate restores the desktop application setup to exactly where you left off, so you can resume next morning without the pain of logging back in and restarting applications.

     

    Sleep is not a feature I need to invoke manually, as I have it kick in automatically on idle.

     

    What would be useful is if Defraggler could disable all power-saving suspends until it finished, so you could leave the system unattended without altering the Power Management policy to give the defrag time to finish.

  11. Defraggler has the following options for freespace: "defrag freespace" and "defrag freespace (allow fragmentation)"

     

    I regularly use "Defrag Freespace" and it is not doing that for me (I think someone else reported similar), on Windows 7 64-bit.

     

    The point is, though, that compaction is already done by full defrag; there's an option to "Move Large Files" matching a selection criterion to the end of the disk, but there's no analogous way of selecting files to be preferentially moved to the beginning of the disk, ahead of the mass left after compaction. I can imagine some people wanting maximum sequential read speed on some large files, for example, or wanting to localise a folder and its small files close together on disk via such a feature.

     

    As it stands, you'd need to reuse a separate partition on the early part of the disk; and it might be very difficult to create such a space if the Windows C: drive is installed in the most common way.

  12. I second the suggestion to run the filesystem checker, not so much for bad blocks but to eliminate the possibility that an inconsistency is causing Defraggler's troubles.

     

    Do not expect to see just 1 blue block and 1 white block; there are files allocated in the freespace area which are not compacted, as well as things like the System Volume Information.

     

    In an old Vista filesystem, I found that defragging with Defraggler causes a serious amount of fragmentation in the $Mft, which I can then defragment with the Contig tool.

  13. Try running the disk checker tool available from the drive properties in Windows "Computer" (it may need to be scheduled for the next reboot).

    On rare occasions where Defraggler crashes this has fixed the problem for me.

  14. Very possibly the Defraggler % includes System Volume Information. The best measure of fragmentation % is not as obvious as it first appears, and the Windows defrag shows a much lower figure than Defraggler does too; a suggestion on this is here: Less Strict Fragment Definition, Display & Reporting (end users are confused by a high fragmentation % and many red blocks).

    3rd-party defraggers have an incentive to be pessimistic, to appear more beneficial (compare with noisy personal firewalls in the security field).
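    Part of the gap may simply be down to how the percentage is defined. This sketch uses made-up numbers, and neither formula is claimed to be what Defraggler or the Windows defrag actually computes:

```python
# Two plausible definitions of "fragmentation %" over the same file list.
# The file list is invented purely for illustration.
files = [                       # (size in MB, number of fragments)
    (18000, 3),                 # one big fragmented file, e.g. restore data
    *[(1, 1)] * 97,             # 97 small unfragmented files
    *[(1, 2)] * 2,              # 2 small files split in two pieces each
]

frag_files = sum(1 for _, frags in files if frags > 1)
by_count = 100 * frag_files / len(files)

frag_size = sum(size for size, frags in files if frags > 1)
by_size = 100 * frag_size / sum(size for size, _ in files)

print(f"by file count: {by_count:.0f}%")    # -> 3%
print(f"by size:       {by_size:.0f}%")     # -> 99%
```

The same disk can honestly be reported as 3% or 99% fragmented depending on which definition a tool picks.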

  15. No, I don't think it's normal. For me the strength of Defraggler is the Quick Defrag and the GUI-based file-level defrag. In my experience the full defrag runs considerably quicker than the Win 7 defrag, despite its liking for compacting data. I have seen cases where the defrag seems to get stuck re-defragging certain files, and there are reports on the forum of long times for a full defrag.

     

    I partition my disk to make even a full defrag fairly quick, so I'm not sure what performance to expect with TB-scale filesystems.

  16. nope my problem is that my files (songs, reg keys, program files, downloads etc.) get split in two equal parts and one part is at the end and one part in the front..

     

    has this to do with move large files to the end of disk?

     

    p.s. I just unticked the option to move large files to the end of the drive while defragging the drive...

    I hope this works... but apart of that I can't think of anything...

     

    thanks for the comments people, you're great =D

     

    I think we've not been following you. If you're saying 2 areas of the disk are used, with some files moved to the end, then the "Move Large Files" option would be the likely cause.

     

    If you're saying many of your files are each stored in 2 chunks, one at the beginning and one at the end of the disk, then that would be very puzzling.

     

    The comments in replies were explaining the 4% fragmentation figure.

  17. Yes, I have given that a try. That is the way I unmount all of my flash drives and removable media. The issue I'm having is that whenever I have defraggler running (or even just open) any flash drive/removable media that I try to dismount gives the "volume is in use" alert, or whatever the wording is. After closing defraggler, the drive ejects without a hitch. This is a bit annoying because if I have defraggler defragmenting a drive then I have to wait for it to finish before I can eject my flash drives.

    What I was telling you is that if you tell the popup to eject the disk anyway, it does so in Windows 7.

     

    If you try using the eject-drive control in the notification area, you don't get a different pop-up, and you don't get the chance to safely unmount without closing Defraggler.

     

    You didn't mention your Windows version in your original post, but if you have 7 it should work for you too.
