Posts posted by markmpm

  1. To quote Raxco, makers of PerfectDisk (amongst other things): 'One would think that with multi-gigabyte drives finding contiguous free space would not be a problem. The Microsoft file allocation algorithm is proprietary, but simple testing demonstrates that extreme file fragmentation can occur even when there is ample contiguous free space on the disk.'

     

    If the disk is only used for backups then fragmentation isn't really a problem, unless you're really tight for space. The additional time taken to read a fragmented backup when you need it is insignificant.

     

    Thanks, Augeas. That's a good practical view of the issue. I will eventually get lots of fragmentation, but I'll just do a big long defrag maybe quarterly, and not worry over it.

  2. Just because you have plenty of contiguous free space doesn't necessarily mean it will get used the way you want.

    Windows isn't particularly smart when it comes to copying files, hence the need to occasionally defrag the file system.

     

    As for prevention, I doubt this would be possible, but defragging free space should help to minimise fragmentation.

    If you use your shared drive for other files, you might be better off having a dedicated partition purely for backups.

     

    Richard S.

     

    This drive is being used only for the backups I mentioned. Anyway, the point is that there is ample contiguous free space both before and after these backups. I would expect NO fragmentation at all. But I get a lot of it.

     

    My understanding is that the following algorithm is fundamental to most file systems, including NTFS:

    ==============================================================================================================

    START given an object (a file or fragment) to be stored,

    IF one or more available contiguous spaces are large enough to hold the entire object, use the smallest of them and you're DONE.

    ELSE use the largest available contiguous space. Use all of it, storing as much of the object as will fit in the space.

    This leaves a fragment still to be stored. GO TO START.

    ==============================================================================================================
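    For what it's worth, here is that algorithm as a minimal Python sketch, assuming free space is tracked as a simple list of contiguous extent sizes. That representation is a simplification of my own; the real NTFS allocator is proprietary and more involved:

    ==============================================================================================================

    def allocate(size, free_spaces):
        """Place an object of `size` clusters into `free_spaces` (a list of
        contiguous free-extent sizes). Returns the fragment sizes written."""
        fragments = []
        remaining = size
        while remaining > 0:
            # Best fit: the smallest free extent that holds the whole remainder.
            candidates = [s for s in free_spaces if s >= remaining]
            if candidates:
                chosen = min(candidates)
                free_spaces.remove(chosen)
                if chosen > remaining:
                    free_spaces.append(chosen - remaining)  # leftover free space
                fragments.append(remaining)
                remaining = 0
            else:
                # No single extent is big enough: fill the largest completely
                # and loop again with what is left of the object.
                chosen = max(free_spaces)
                free_spaces.remove(chosen)
                fragments.append(chosen)
                remaining -= chosen
        return fragments

    print(allocate(10, [4, 50, 8]))  # ample contiguous space -> [10], one fragment
    print(allocate(10, [4, 3, 8]))   # no extent big enough   -> [8, 2], fragmented

    ==============================================================================================================

    By that logic, a backup file should land in one piece whenever any single free extent can hold it.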

    So, why do I get fragments?

    Maybe it's because of the Acronis software, or because the data is coming in over the LAN, or both?
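
    One plausible mechanism, though this is a guess on my part and not a known fact about Acronis: when a program streams a file in chunk by chunk without declaring its final size, the file system has to extend the allocation as each chunk arrives, and each extension can land in a different free extent, even when a single contiguous run would have fit the whole file. A writer that preallocates the full size up front gives the allocator a chance to reserve one contiguous run. A rough Python sketch of the idea (copy_with_preallocation is a hypothetical helper of mine, not anything Acronis exposes):

    ==============================================================================================================

    import os
    import shutil

    CHUNK = 1024 * 1024  # 1 MiB; an arbitrary chunk size for illustration

    def copy_with_preallocation(src_path, dst_path):
        total = os.path.getsize(src_path)
        with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
            # Declare the final size before writing any data, so the file
            # system can try to reserve one contiguous run instead of
            # growing the file extent by extent as chunks arrive.
            dst.truncate(total)
            dst.seek(0)
            shutil.copyfileobj(src, dst, CHUNK)

    ==============================================================================================================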

  3. Maybe we need to state the obvious here:

    Get an additional 1TB drive. Move half of your content over to it.

    Then any defrag program you choose will work OK.

    What would be surprising is any defrag program NOT simply choking-and-puking on what you described above.

  4. A 2TB eSATA drive on my computer is shared on my LAN.

     

    Several computers on the LAN use Acronis backup software, backing up to this drive. They back up one at a time, never concurrently. Each creates a single backup file of 300 MB to 32 GB.

     

    Although Defraggler's graphic shows ample contiguous free space both before and after the backups, each file ends up fragmented, sometimes into many fragments. How can this be? How can I prevent it?
