
Defraggler should use more buffer memory


xiau.ling


Hi all!

 

I only have one suggestion: Defraggler should use more buffer memory. I think it's unacceptable that it took about 8 minutes to defragment about 270 files - 87 megabytes! I made a new system disk from a hard drive I had previously used only for storage. It made noises I had never heard before: scratching, tearing, clacking. That was far too much seeking for 87 megabytes. (I registered here while it was still running. 87 megabytes!)

 

Wouldn't it be easier to sort the file pieces if they were read into a big buffer (lots of [smaller] files read in at once) and then written out sequentially, one after another, without that much seeking? I wouldn't mind if it took two or three hundred megabytes of RAM (or more!), just don't shorten the lifespan of the hard drive with unnecessary seeks.
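A back-of-the-envelope model of that suggestion (all the numbers below are assumptions, not measurements of Defraggler): moving each small file individually pays roughly two seeks per fragment (one to read it, one to write it), while batching many files into a RAM buffer pays one read seek per fragment plus a single sequential write per batch.

```python
# Rough seek-cost model for the buffering suggestion above.
# AVG_SEEK_S and the fragment count are assumed values; SEQ_MBPS is the
# sequential speed reported in the post.

AVG_SEEK_S = 0.013      # ~13 ms average seek + rotational latency (assumed)
SEQ_MBPS = 80.0         # sequential throughput from the post

def move_individually(n_fragments, total_mb):
    seeks = 2 * n_fragments          # one read seek + one write seek per fragment
    return seeks * AVG_SEEK_S + total_mb / SEQ_MBPS

def move_batched(n_fragments, total_mb, batch_mb=256):
    batches = max(1, -(-total_mb // batch_mb))   # ceil division
    # within a batch: one seek per fragment to read, one seek to write it all out
    seeks = n_fragments + batches
    return seeks * AVG_SEEK_S + total_mb / SEQ_MBPS

print(round(move_individually(500, 87), 2))  # dominated by the seek term
print(round(move_batched(500, 87), 2))       # one 256 MB batch roughly halves it
```

Even in this crude model the transfer time for 87 MB at 80 MB/s is about one second; everything else is seeking, which is exactly what the buffer would cut down.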

 

The hard drive is a Western Digital WD4000AAJS, not a high-speed model, but reliable. This is the first partition of the drive, with 9.3 GB of 30 GB used. All SMART data is OK, so the drive should be fine. It can create TrueCrypt partitions at 80 MB/s, so the sequential speed is also fine.

 

Thank you for reading this post!

Xiau.Ling

Hungary


I only have one suggestion: Defraggler should use more buffer memory. I think it's unacceptable that it took about 8 minutes to defragment about 270 files - 87 megabytes!

Not sure if I understand that post correctly. Are you referring to the fact that at the end of defragmenting, when only small files are left, Defraggler moves every single file individually instead of reading, say, 100 MB of small files and moving those in one go? I fully support this request. When dealing with large files, Defraggler can move 10 blocks of data in a minute, but when collecting all those tiny files it takes 10 minutes for just one block.

 

But I think Defraggler uses the Windows API routines for moving clusters. Maybe those are limited to accessing only one file at a time. Just a guess... :huh:


When dealing with large files, Defraggler can move 10 blocks of data in a minute, and when collecting all those tiny files it takes 10 minutes for just one block.

 

 

What you mention is only half of my problem. During the process, as fragmentation gets lower, more and more light blue blocks become visible; Defraggler sometimes relocates half of the hard drive. Files get moved that weren't even listed in the fragmented files list. That is fine if our goal is to create the largest possible contiguous free space on the drive. But what if I deleted some files during the day and created others? Gaps appeared in the used space, and Defraggler wants to close them even if that takes about 15 minutes of moving files. Fragmentation was about 1%: ~57 fragmented files, ~30 megabytes. I had deleted 3 gigabytes earlier, and Defraggler is still working while my hard drive makes those spooky noises. It is not worth the trouble to move that many files. If Defraggler is scheduled daily, it slowly kills the hard drive.

To defragment the mentioned 30 megabytes, it moved the whole of NetBeans and OpenOffice and more (none of which were listed). After 20 minutes the number of fragmented files is still 40. Something is not right in the core.


  • 2 weeks later...
I only have one suggestion: Defraggler should use more buffer memory. I think it's unacceptable that it took about 8 minutes to defragment about 270 files - 87 megabytes! [...] I wouldn't mind if it took two or three hundred megabytes of RAM (or more!), just don't shorten the lifespan of the hard drive with unnecessary seeks.

 

This is an interesting request. You mean something like Nero Burning ROM's buffer for CD/DVD burning plus the software buffer? Nero uses a buffer to keep the drive fed, for buffer under-run protection. It would be very interesting to see an adjustable buffer in Defraggler, say from 0 MB for low-memory systems up to 500 MB for high-end machines. It would dramatically speed things up if small chunks were read into memory and then written back out.

 

The only downside I can think of is: what if there were a power failure and 500 MB held in memory were lost? Perhaps it could copy things via memory to another section of the drive first; once the copy succeeds, delete the source, and only then move those files back in a contiguous manner. Also, I recently had Defraggler lock up on a laptop I was working on. This was not Defraggler's fault: the laptop's hard disk had bad sectors, so every time Defraggler tried to read or write that area, the laptop froze! I worked around it by marking off the bad sectors into a third partition and then deleting that partition once I was back in Windows.

 

So now they have the C: Windows partition and a D: Multimedia partition with all their music, pictures, and videos. I tested with Defraggler when I was done, and it ran perfectly after quarantining those 5 GB of bad-sector areas! I made sure to back up all I could before repartitioning, and it took about 50 reboots, because in some areas I had to copy one file at a time to identify the bad sectors (the machine would lock up, so I would have to reboot).

 

Everything works great now, although I recommended they burn their important data to permanent media and be ready to get a new hard drive if that one gives out!


  • 4 weeks later...

I was looking to start a topic like this, but I'll add here to continue the flow...

 

I was wondering why DF doesn't use the big areas of free space on my drive to create a copy of the LARGE files I have - one of them has to be 285 MB+, and DF can't seem to defrag anything over 50 MB. At least it can't for me, and I'm using version 1.15.163, so I'm up to date...

 

I have a few files that are over 100 MB; one of them is in 31 fragments and is very slow to access :(

 

If DF created an unfragmented copy of these large files and then deleted the fragmented original, it would then be able to move files around to make space for the big ones at the end of the drive, as it's supposed to according to the options...

 

At the moment I'm seeing lots of little files all over the drive, including the end of the drive, where I've asked for all large files to be moved...

 

Nick


I was wondering why DF doesn't use the big areas of space on my drive to create a copy of the LARGE files I have [...] At the moment I'm being shown lots of little files all over the drive, including the end of the drive, where I've asked for all large files to be moved...

 

 

I have also noticed that large files take a long time when highlighting files to be defragmented.

These files are about 4 GB. Another thing: when I make an ISO image of a DVD with Nero, it will not defragment completely, even after trying for a long time.

 

Prior versions did defrag these large ISO files.


  • 4 weeks later...

The suggestion makes sense, but unfortunately it is not possible. Defraggler doesn't actually do the disk I/O itself. Windows NTFS has a special call, created especially for defragmenting, to move a block of clusters from one place to another. All you can do is move a block from one place to another.

 

The API is safe, so the author of a defragmenter doesn't have to worry about disk corruption; NTFS takes care of that.

 

The problem is that you are limited to what the API provides and that is move-block.

 

If you use the contig.exe utility from Sysinternals in verbose mode, it tells you which block it is moving - essentially it displays one line of log per call to the API.
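For reference, the call being described here is `DeviceIoControl` with the `FSCTL_MOVE_FILE` control code, which takes a `MOVE_FILE_DATA` structure (file handle, starting VCN, target LCN, cluster count). The toy model below, with a plain list standing in for the volume's cluster map, illustrates why a defragmenter built on this primitive works one block move per call; it is only an illustration, not Defraggler's code:

```python
# Toy model of the NTFS move primitive: the "disk" is a list of cluster
# owners, with None marking a free cluster. Each call relocates one run of
# a file's clusters, mirroring one FSCTL_MOVE_FILE call.

def move_block(disk, file_id, start_vcn, target_lcn, count):
    """Move `count` clusters of `file_id`, starting at its `start_vcn`-th
    cluster, to the free run beginning at `target_lcn`."""
    src = [i for i, owner in enumerate(disk) if owner == file_id]
    src = src[start_vcn:start_vcn + count]
    if any(disk[target_lcn + k] is not None for k in range(count)):
        raise ValueError("target clusters are not free")
    for k, lcn in enumerate(src):
        disk[lcn] = None
        disk[target_lcn + k] = file_id

# File "A" is split into two fragments, with a free run at the front:
disk = [None, None, None, "A", "B", "A", "A", None]
move_block(disk, "A", 0, 0, 3)        # one API-style call per contiguous move
print(disk)   # ['A', 'A', 'A', None, 'B', None, None, None]
```

Defragmenting a badly scattered small-file area then means thousands of such calls, which matches the one-log-line-per-call output contig.exe shows in verbose mode.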

 

Ian

