Move to end of disk and leave space for expansion


Ian Murphy


I've been trying to get a big disk defragmented and have been having a bit of a nightmare. The disk contains virtual machines with dynamic disks, plus a backup directory for an SQL server, which changes every day.

 

Now, the problem is that the virtual machine disks are huge. I have configured it to move the big (>10 GB) files to the end of the disk. However, if you stop Defraggler and restart it, it finds that the last file on the disk has extended by a few MB, and it starts moving the 200 GB of virtual disk files in front of it to make a space. It never finishes, and ends up constantly moving the last file on the disk rather than defragmenting the remaining virtual disks.

 

Would it be possible to add a way to leave a gap for expansion?

 

Now I realise that this generates another problem. On the next defrag the gap will have reduced due to expansion and so the logic would say to move it again to increase the gap size, which would defeat the purpose.

 

Maybe a facility which allowed the sysadmin to specify the position on the disk for a file. I'm thinking specifically of defragmenting servers, where you always have big files. Or maybe a "don't touch the first fragment of this file" marker. At least that would allow us to defragment these files down to a couple of fragments.

 

Anyone got any good ideas? What other techniques exist out there? Everyone has got to be hitting the same problem.

 

Ian



I haven't used the "move large files to end" option, but...

 

> Now, the problem is that the virtual machine disks are huge. I have configured it to move the big (>10 GB) files to the end of the disk. However, if you stop Defraggler and restart it, it finds that the last file on the disk has extended by a few MB, and it starts moving the 200 GB of virtual disk files in front of it to make a space. It never finishes, and ends up constantly moving the last file on the disk rather than defragmenting the remaining virtual disks.

In the interim, when that happens, stop it, add an exclusion for the file it won't get past, then restart it.

> Would it be possible to add a way to leave a gap for expansion?

I would think a "move to end" option without this would be of limited utility.

> Now I realise that this generates another problem. On the next defrag the gap will have reduced due to expansion and so the logic would say to move it again to increase the gap size, which would defeat the purpose.

 

> Maybe a facility which allowed the sysadmin to specify the position on the disk for a file. I'm thinking specifically of defragmenting servers, where you always have big files.

Pssst. Why are you using NTFS? Go to ext3/4 and never think about it again. ;)

> Maybe a "don't touch the first fragment of this file" marker. At least that would allow us to defragment these files down to a couple of fragments.

 

> Anyone got any good ideas? What other techniques exist out there? Everyone has got to be hitting the same problem.

I'd think the simplest solution to this would be similar to the one used in thermostats to keep them from constantly cycling on and off: make your target significantly "safer" than your threshold. For example, when the trailing free space drops below, say, 5% of the file's size (or perhaps just when it reaches zero and the file starts fragmenting), trigger a move and/or defragmentation that leaves it with, e.g., 15% to spare. It won't be moved again until it's eaten up the difference.

You could apply this not only to the last file on the disk, but to each one ahead of it. So it would target a certain amount of free space after each file, regardless of whether the end of that space is marked by another big file or the end of the disk.
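To make the thermostat analogy concrete, here's a minimal sketch of that hysteresis rule. The thresholds (5% low-water, 15% high-water) and function names are just illustrative assumptions, not anything Defraggler actually exposes:

```python
# Hypothetical hysteresis rule: only move a file when its trailing free
# space falls below a low-water mark, and when moving it, leave a larger
# high-water gap so it isn't touched again until the file has grown
# through the difference.

LOW_WATER = 0.05   # assumed: trigger a move when the gap < 5% of file size
HIGH_WATER = 0.15  # assumed: after a move, leave a gap of 15% of file size

def needs_move(file_size: int, trailing_gap: int) -> bool:
    """True when the free space behind the file is below the low-water mark."""
    return trailing_gap < file_size * LOW_WATER

def target_gap(file_size: int) -> int:
    """Gap (same units as file_size) to leave behind the file when moving it."""
    return int(file_size * HIGH_WATER)
```

So a 200 GB virtual disk with only 2 GB of slack behind it would be moved once (2 < 10), leaving a 30 GB gap, and then left alone until that gap shrinks back under 10 GB.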

 

I have a really cool (I think :P) idea growing off this--a particular AI-driven proactive defragmentation--but... I can't just offer it up for free!


> I've been trying to get a big disk defragmented and have been having a bit of a nightmare. The disk contains virtual machines with dynamic disks, plus a backup directory for an SQL server, which changes every day.

 

I am also working with virtual machine images, and just posted a new suggestion to make use of a second spindle, where available, for temporary storage. This should make a significant difference in performance by reducing head seek activity.

 

> Would it be possible to add a way to leave a gap for expansion?

 

Now that's an interesting thought, but for one small problem, I think: you're assuming that the OS will make use of the gap to expand the file. I suspect NTFS will not.

 

I'm intrigued at the problems in defragging, but not yet enough so to begin writing my own ;)

 

One possibility you may wish to consider: I understand that Ghost does an excellent job of packing files into contiguous chunks, free from gaps. You could do a Ghost backup from the source drive to a different target, then back again. Yeah, I know, two moves where one should serve. But on the move back, both drives should spend most of their time simply stepping to adjacent tracks, which is far faster than random seeks.
