
Sector Skew for best performance


TTwrs

Recommended Posts

Defraggler does a good job of putting files in sequential sectors on a disk, but pulling data off sequential sectors does not necessarily equate to maximum performance, because of the time it takes to transfer the data from disk to memory. Oftentimes a given 'sector skew' (i.e., NOT sequential sectors) will actually allow faster performance, depending on both the disk AND the computer it's used with. Since Defraggler is capable of analyzing disk performance, have you given any thought to having Defraggler lay the data down for maximum performance, instead of minimum fragmentation, by including a calculated skew? As an option, perhaps?
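To illustrate the idea, here is a minimal Python sketch of how a 'calculated skew' might fall out of measured timings. Everything here is hypothetical: the function name, the timing numbers, and the notion that Defraggler could compute it this way are my assumptions, not anything from the product.

```python
import math

def interleave_factor(sector_time_us: float, host_time_us: float) -> int:
    """Hypothetical: how many physical sectors should separate consecutive
    logical sectors so the host is ready when the next one arrives."""
    return max(1, math.ceil(host_time_us / sector_time_us))

# Example: a 3600 RPM disk with 26 sectors per track (illustrative numbers).
rotation_us = 60 * 1_000_000 / 3600   # one full rotation, in microseconds
sector_time = rotation_us / 26        # time for one sector to pass the head
print(interleave_factor(sector_time, host_time_us=1800))  # -> 3, i.e. 3:1 skew
```

A real implementation would have to measure the host-side transfer time per sector, which is exactly the kind of data Defraggler's benchmark could, in principle, supply.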


I agree. Sort of...

 

Sequential (non-fragmented) is lots faster than a badly fragmented drive.

But faster still is moving all System files together, then user files.

 

This eliminates Windows having to locate a system file in one sector, then jump across multi-GB user videos/files/games etc. to find the next System file on another sector!

If the placement were optimized, that is to say, if instead of just making files sequential Defraggler put the SYSTEM files together, THEN the user files, it would make a system fly!


WinApp, the processor can handle the hard drive input fine. Usually. Unless it's some super-fast SSD or something.

 

But the slowdown occurs because on an HDD the files get spread out. HDDs have a limited read speed, which is greatly affected by file placement.

 

Moving all system files together, followed by user files, followed by free space, would bring drive speed back.
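For what it's worth, here is a toy Python sketch of that ordering policy. The SYSTEM_ROOTS list and the is_system() test are invented for illustration; a real defragmenter would use far richer signals than path prefixes.

```python
from pathlib import PureWindowsPath

# Invented heuristic: treat anything under these roots as a "system" file.
SYSTEM_ROOTS = {"Windows", "Program Files", "Program Files (x86)"}

def is_system(path: str) -> bool:
    parts = PureWindowsPath(path).parts
    return len(parts) > 1 and parts[1] in SYSTEM_ROOTS

def placement_order(paths: list[str]) -> list[str]:
    # Stable sort: system files first, then user files;
    # free space would then be consolidated after both groups.
    return sorted(paths, key=lambda p: (not is_system(p), p.lower()))

files = [r"C:\Users\me\video.mkv", r"C:\Windows\System32\ntdll.dll",
         r"C:\Games\big.pak", r"C:\Windows\explorer.exe"]
print(placement_order(files))   # the two Windows files come out first
```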


I vaguely recall it was easy to implement with DOS 3.3(?) when formatting a 720 KB 5.25" floppy disc.

I forget the numbers, but I think there were 26 sectors per track, with identities in a sequence such as 1,9,17,2,10,18,3,11,19 etc.

The CPU could process the data held in one sector during the time that two sectors crept past the head,

and it needed three rotations to process an entire track.

If it took too long to process one sector, it missed the next ID and had to wait for a complete rotation for the next attempt.
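As an aside, that ID pattern is easy to reconstruct. Here is a small Python sketch; note it assumes a 24-sector track, which reproduces the quoted 1,9,17,2,10,18... sequence exactly (with 26 sectors the arithmetic comes out slightly differently):

```python
def interleave_ids(sectors: int, factor: int) -> list[int]:
    """Lay logical sector IDs around a track so that consecutive IDs
    sit `factor` physical positions apart (classic N:1 interleave)."""
    ids = [0] * sectors
    pos = 0
    for logical in range(1, sectors + 1):
        while ids[pos] != 0:            # slot already taken: step forward
            pos = (pos + 1) % sectors
        ids[pos] = logical
        pos = (pos + factor) % sectors
    return ids

print(interleave_ids(24, 3))
# -> [1, 9, 17, 2, 10, 18, 3, 11, 19, 4, 12, 20, ...]
```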

 

The modern CPU has a lot more agility, and a modern computer has an awful lot more complex silicon support,

and disc drives themselves have megabytes of RAM cache.

 

I cannot see the need for skew, and even if there were such a need,

it would be met by the HDD manufacturer BEFORE the media held any data.

 

With fingers crossed and a partition image backup ready in case of need,

I am beginning to expect that a standard defrag will be free of data loss.

 

ReSizing or shifting the boundaries of a partition with a Partition Manager is, in my view, a danger to avoid.

 

A low level format that preserves original data - what can I say.

Link to comment
Share on other sites

I agree with Alan here.

 

Modern drives would probably see diminishing returns from this, since they already include a cache buffer that can range from 512 KB to 32 MB or more.

 

On top of larger drives with faster rotation rates, the cache temporarily stores the data that's needed, so it is arguably many times faster than any sector skew...


  • 4 weeks later...

Dare I even bring up native command queuing and variable spindle speed? :D

 

While it's true that hard drives are capable of a wide range of speeds, they all have a max throughput and a max RPM.

The outer tracks of a platter trace larger circles than the inner tracks at the same RPM, and with zoned recording they hold more sectors, so more data passes under the head per revolution.

 

Therefore, a 15,000 RPM Seagate drive would have far faster data throughput on the outer edge of the drive than on the smaller central rings.

It would never make sense to locate data on the slower part of the drive while leaving the faster part unused.
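To put rough numbers on that (the sector counts below are illustrative, not from any real Seagate datasheet): at a fixed RPM, throughput scales with sectors per track, and zoned recording puts roughly twice as many sectors on the outer tracks as on the inner ones.

```python
RPM = 15_000
SECTOR_BYTES = 512

def track_throughput_mb_s(sectors_per_track: int) -> float:
    # bytes per revolution * revolutions per second
    revs_per_second = RPM / 60
    return sectors_per_track * SECTOR_BYTES * revs_per_second / 1_000_000

print(f"outer zone: {track_throughput_mb_s(1600):.0f} MB/s")  # ~205 MB/s
print(f"inner zone: {track_throughput_mb_s(800):.0f} MB/s")   # ~102 MB/s
```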


  • 4 weeks later...

Back in the dim and distant past, when I started using AT MFM-type disk drives, I remember a product by a "Mr Steve Gibson" called "SpinRite" that used to do all sorts of "magical things" to the low-level formatting (the physical arrangement of data on the hard disk surface). I suspect, however, that a lot of the details earlier posters worry about - i.e. optimising the layout so that the time taken for the disk heads to physically reach the "next" sector is just right for the system to be ready to "read" that sector - is now "automagically" handled by low-level system/hardware stuff. The measurements that the disk benchmarking provides are likely coming from those processes anyhow!

