
a possible solution to speed problems ?


Bethrezen


hi all

 

In a previous post on this subject I highlighted a pretty nasty performance issue with Defraggler: even on a relatively empty and relatively un-fragmented drive, Defraggler wanted hours to complete a job that should realistically have taken no more than 20 to 30 minutes. I see from the change logs for subsequent versions that you have tried to address this issue, and I have seen a slight increase in performance, but sadly Defraggler is still running extremely slowly on my system.

 

I've been pondering why that might be, because with Auslogics Disk Defrag I see quite good performance. Even on a heavily fragmented disk (something on the order of 50% fragmentation) it never seems to take more than an hour or so to get the job done, and that's doing a full defrag and optimise, including moving system files to the start of the drive for faster operation. It has me puzzled why Defraggler is so slow in comparison.

 

Thinking about it, the thought occurs that it might be something to do with the way the two apps process files. Defraggler seems to try to process entire files at once, which isn't a problem on a high-powered system. My system, however, only has 1 GB of RAM, so when processing very large files, such as those found in the World of Warcraft folder, the whole thing chokes and Defraggler moves with all the speed of a glacier. Auslogics Disk Defrag, by contrast, seems to process files cluster by cluster, which makes for much better performance on a low-powered system because the RAM isn't being loaded with more data than it has capacity to handle.

 

I'm no programmer, so I could be wrong; admittedly these are only visual observations from watching the two programs operate. But if you want to improve the speed of Defraggler further and get it operating at a reasonable level on all systems regardless of their hardware, it might be a good idea to have a look at the way Defraggler operates, as there might be a better, more efficient way to do things, i.e. processing smaller chunks of data sequentially rather than trying to process an entire file at one time. That's fine for small files, but not for large multi-gigabyte files, not on a lower-end system.
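The chunked-processing idea above can be sketched in a few lines. This is purely illustrative (it is not Defraggler's actual code, and the function names and chunk size are invented for the example), but it shows why reading a file in fixed-size chunks keeps memory use flat regardless of file size, while reading the whole file at once forces a low-RAM machine into heavy paging:

```python
CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB per pass; tune to available RAM

def copy_whole_file(src, dst):
    """Naive approach: load the entire file into memory at once.
    On a 1 GB machine, a multi-gigabyte file forces heavy paging."""
    with open(src, "rb") as f:
        data = f.read()          # whole file resident in RAM at once
    with open(dst, "wb") as f:
        f.write(data)

def copy_in_chunks(src, dst, chunk_size=CHUNK_SIZE):
    """Chunked approach: at most chunk_size bytes are resident at a
    time, so memory use stays flat no matter how large the file is."""
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            chunk = fin.read(chunk_size)
            if not chunk:
                break
            fout.write(chunk)
```

Both functions produce an identical copy; the only difference is peak memory use, which is exactly the trade-off described above.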


If RAM is too small and the Pagefile takes up the slack,

that might double or treble the time taken with the larger files that exceed the size of RAM.

 

It might possibly be very much worse if Windows is managing its Pagefile,

in which case its boundaries would change and Defraggler might find that the shift-n-shuffle it had planned must be re-done.

 

Just possibly, fixing an absolute size for the Pagefile might give it stable boundaries and allow Defraggler to complete what it planned.


I don't think that's the issue. Both programs use Windows defragmentation APIs. I think the issue must lie somewhere else.

If your reply was to me then I think you have misunderstood me.

 

My key phrase was

Defraggler might find that the shift-n-shuffle it had planned must be re-done

 

I am sure that the API will somehow do the job it is told to do with a specific file,

and that somehow it will resolve the conflict if the O.S. wishes to expand the PageFile and occupy the space that Defraggler has stipulated for use by the current file,

but I really doubt that the API will have been told by Defraggler what the future plans are.

 

Assume E represents empty clusters, P represents Pagefile clusters, and other files are represented by lower-case letters.

 

Possible scenario :-

a a a a E a a a E E E E E b b c c c d d d f f f f f

a a a a a a a E E E E E E b b c c c d d d f f f f f

a a a a a a a c c c E E E b b E E E d d d f f f f f

a a a a a a a c c c d d d b b E E E E E E f f f f f

etc. The plan is coming together; another 50,000 files to go and the job is done.

 

Alternatively :-

a a a a E a a a E E E E E b b c c c d d d f f f f f

a a a a a a a E E E E E E b b c c c d d d f f f f f

a a a a a a a c c c E E E b b E E E d d d f f f f f

a a a a a a a c c c P E E b b E E E d d d f f f f f

The plan falls apart,

The Pagefile has increased during or after the processing of file "c" and grabbed space that Defraggler intended for file "d" but had not yet "committed" to the API.

This thwarts the plan, and Defraggler may have to undo some of its existing work,

hence one or more files may be temporarily relocated to make space available before they get a further relocation to a final home.
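The scenario above can be sketched as a toy simulation. This is purely illustrative (the cluster map, the plan format, and the function are invented for the example, not Defraggler's actual logic), but it shows how a precomputed move plan breaks the moment another writer, such as a growing Pagefile, grabs a destination cluster:

```python
E, P = "E", "P"  # E = empty cluster, P = Pagefile cluster

def apply_plan(disk, plan):
    """Try to carry out a precomputed move plan.

    `disk` is a list of cluster contents; `plan` is a list of
    (src, dst) cluster indices computed in advance.  If something
    else (e.g. the Pagefile) has taken a destination cluster since
    the plan was made, the plan is invalid: return False so the
    caller can replan -- the "re-done shift-n-shuffle" above.
    """
    for src, dst in plan:
        if disk[dst] != E:       # someone took our target cluster
            return False         # plan invalidated, must replan
        disk[dst] = disk[src]    # move the cluster contents
        disk[src] = E            # source cluster is now free
    return True
```

Note that any moves made before the conflict is detected have already happened, which is exactly why one or more files may need a temporary relocation before reaching their final home.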

 

I could of course be wrong,

Defraggler may blindly stumble from file to file without making any plans beyond the one file it is currently thinking about.

