
Wipe Free Space - Multi-Pass? Stupid Thing?


Alan_B


Am I correct in understanding that Wipe Free Space is governed by the 1/3/7/35 pass options under Tools => Drive Wiper => Security?

 

I consider anything more than a single pass to be a self-evident waste of time, because:

 

If all files are securely deleted, then all free space is already wiped,

and over one year most of it will never have received a file, so a weekly wipe at 35 passes tends towards overkill!

 

If a file has been deleted without being securely deleted, the sectors it used become part of free space.

When a new file is created it uses sectors within free space,

and I believe the O.S. is likely to avoid the slower-access end of the disc,

and will therefore allocate sectors with faster access, such as those of the newly deleted file.

A subsequent Wipe Free Space will NOT wipe the sectors that have been overwritten by the new file.

Any old deleted file that has been overwritten by a file containing random noise-pattern data has had the equivalent of a single-pass random wipe.

Any old deleted file that has been overwritten by a file containing non-random information has had the equivalent of a single-pass NON-random wipe.

I suggest you need luck for the deleted file's sectors to still be available as free space when a 35-pass wipe runs.
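To make concrete what a single pass means, here is a minimal sketch of one way a free-space wipe can work (an illustration only, not CCleaner's actual code; "wipe.tmp" and the D: drive are hypothetical): fill the volume's free space with one huge file of zeroes, flush it to disc, then delete it.

import os

def wipe_free_space(volume_root, chunk_mb=64):
    # Hypothetical temp-file name; a real tool would also fill the small
    # tail of remaining space with ever-smaller chunks.
    path = os.path.join(volume_root, "wipe.tmp")
    chunk = b"\x00" * (chunk_mb * 1024 * 1024)
    try:
        with open(path, "wb", buffering=0) as f:
            try:
                while True:
                    f.write(chunk)        # claim free clusters, zeroing them
            except OSError:
                pass                      # disk full: free space is exhausted
            os.fsync(f.fileno())          # force the zeroes onto the disc
    finally:
        os.remove(path)                   # release the (now zeroed) space

wipe_free_space("D:\\")                   # hypothetical target volume

Every deleted-file sector that is still free gets exactly one pass of zeroes; sectors already claimed by live files are untouched, which is the point above.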


  • Moderators

Hi Alan,

 

Am I correct in understanding that Wipe Free Space is governed by the 1/3/7/35 pass options under Tools => Drive Wiper => Security?

I don't think so. I understand that WFS uses one pass of zeroes, but I've never run it to check.

 

If all files are securely deleted, then all free space is already wiped,

Not necessarily. There will be edit copies, user and prefetch defrags, system logs etc. that won't be overwritten by secure deletion.

 

If a file has been deleted without being securely deleted, the sectors it used become part of free space.

True, but true of secure deletions also.

 

When a new file is created it uses sectors within free space, and I believe the O.S. is likely to avoid the slower-access end of the disc, and will therefore allocate sectors with faster access, such as those of the newly deleted file.

I don't know what algorithms Windows uses for space allocation (unlike my former IBM life). A partitioned or multi-platter drive will have its start, centre and end clusters not necessarily at the physical start, middle or end of the disk.

 

A subsequent Wipe Free Space will NOT wipe the sectors that have been overwritten by the new file.

True, a WFS won't overwrite any live files, one hopes.

 

Any old deleted file that has been overwritten by a file containing random noise-pattern data has had the equivalent of a single-pass random wipe.

I'm not sure what you mean: I guess so.

 

Any old deleted file that has been overwritten by a file containing non-random information has had the equivalent of a single-pass NON-random wipe.

I think I see what you mean: any deleted file that has been overwritten by any live file is truly wiped.

 

I suggest you need luck for the deleted file's sectors to still be available as free space when a 35-pass wipe runs.

Nope, I don't know what this means.

 

PS I have run WFS on a small FAT flash drive just to see what happens, and the overwriting seems to be zeroes with a small unintelligible header. I don't think I'm going to dig too deeply.
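For anyone who wants to repeat that check, something like this is all I did in spirit: scan a raw dd-style image of the wiped drive and report whatever is not zero ("flash.img" is just a hypothetical image file name).

def nonzero_regions(image_path, block=4096):
    # Print the offset of every block that still contains non-zero bytes.
    with open(image_path, "rb") as f:
        offset = 0
        while data := f.read(block):
            if any(data):                          # any byte other than 0x00
                print(f"non-zero data at offset {offset:#x}")
            offset += len(data)

nonzero_regions("flash.img")

The few non-zero blocks at the front would be the filesystem structures, presumably the "small unintelligible header" above.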


  • Moderators

Anything more than one pass, or even running WFS more than once, is a waste of time! Just run Recuva afterwards and see how much WFS is able to remove with only one pass.


When a new file is created it uses sectors within free space, and I believe the O.S. is likely to avoid the slower-access end of the disc, and will therefore allocate sectors with faster access, such as those of the newly deleted file.

I don't know what algorithms Windows uses for space allocation (unlike my former IBM life). A partitioned or multi-platter drive will have its start, centre and end clusters not necessarily at the physical start, middle or end of the disk.

I do not know either, but it seems reasonable that the faster part of the disk should be preferred.

 

I never defrag until I decide to create a FULL partition image,

and I observe that when a defrag analysis shows the clusters in use,

all the odd files that are not within the main body remain close to it;

there never seem to be any written to the slow remote end, as would happen if new files were allocated free sectors in simple sequence, or at random.


  • Moderators

I wouldn't think so (we're way off topic here). When I worked on simple IBM mainframes, space allocation was same track, same cylinder, then cylinder +/- 1, 2 or 3. Whether anything like that is used in Windows I don't know.

 

I don't think that files are consciously written to the fast part of the partition. If clusters were selected in order from the free-space bitmap, then file allocation would (possibly) tend to be from the low cluster number area.
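To illustrate that guess (and it is only a guess at Windows' behaviour, with made-up cluster numbers): a first-fit allocator scanning a free-cluster bitmap from cluster zero would naturally reuse low-numbered clusters, including ones a deletion has just freed.

def allocate(bitmap, count):
    # bitmap[i] is True when cluster i is free; first-fit scan from cluster 0.
    got = []
    for i, free in enumerate(bitmap):
        if free:
            bitmap[i] = False
            got.append(i)
            if len(got) == count:
                return got
    raise OSError("disk full")

bitmap = [False] * 10
for i in (2, 3, 7, 8, 9):        # clusters 2 and 3 were just freed by a delete
    bitmap[i] = True
print(allocate(bitmap, 2))       # -> [2, 3]: the freshly freed clusters go first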

 

In any event all this fast/slow stuff is, or could be, just imaginary. When you run a defrag analysis the clusters all appear in a nice linear fashion. It isn't the truth; it's just a listing of the logical cluster numbers. My disk has two platters. How many sides are used? Two, three, four? Does the logical cluster allocation go from start to end of one platter, then start to end of the next? In that case I could have three or four areas of fast access, and three or four areas of slow access. Or does cluster allocation go down the cylinders first, in which case I would have one fast and one slow area? If the disk is partitioned it's worse: cluster number one could be near the centre of the disk with the slowest access, and cluster 10 million could have the fastest.
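To see why the linear display proves nothing, consider the classic LBA-to-cylinder/head/sector arithmetic (idealised, since real drives use zoned recording and remapping): the same logical sector lands somewhere quite different depending on the geometry.

def lba_to_chs(lba, heads, sectors_per_track=63):
    # Idealised mapping: logical block address -> (cylinder, head, sector).
    cylinder, rest = divmod(lba, heads * sectors_per_track)
    head, sector = divmod(rest, sectors_per_track)
    return cylinder, head, sector + 1    # sectors traditionally number from 1

print(lba_to_chs(1_000_000, heads=4))    # two platters, four surfaces
print(lba_to_chs(1_000_000, heads=2))    # one platter, two surfaces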

 

The fatuity of all this shows if you have an SSD. Look at the defrag analysis: clusters all displayed in linear fashion, yet we know that SSD clusters are physically allocated anywhere and everywhere.

 

What a defrag does is defrag the data runs in the MFT. The disk sectors just tag along behind.
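By way of illustration (made-up run values, not real MFT records): a file's extents are held in its MFT record as (start cluster, length) runs, and a defrag collapses many runs into one.

# Illustrative (start_cluster, length) data runs for one file:
fragmented = [(1000, 16), (5000, 8), (220, 40)]   # three extents -> three seeks
defragged  = [(1000, 64)]                          # one extent -> one seek

# Same amount of data either way; only the run list in the MFT got shorter.
assert sum(n for _, n in fragmented) == sum(n for _, n in defragged)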

 

Oh yes, multi-pass a stupid thing? Well, foolish and unnecessary.


Perhaps I have been deceived by the sales hype from all the defragmenter suppliers,

but my beliefs have some consistency.

 

I read, and believed, a user of Ghost who said there was no need for a defragmenter.

All you had to do was make a backup image of the fragmented partition, and when the image was restored there would be zero fragmentation.

 

PerfectDisk showed that this was true for Acronis also;

it also showed that Acronis placed the metadata at the end of the files and immediately before free space.

When PerfectDisk defragged and optimised, the metadata was shifted out towards the centre of free space,

and when I asked the developers they gave me a link to Microsoft showing that this was the Microsoft recommendation.

 

I would like to believe that when any defragger optimises the disc and all the free space is at one end, then all the files are at the faster end.

 

My secondary GPT drive reports 129 heads.

Does that mean 65 platters, each with two sides, where one of the 130 heads failed Quality Approval?!

 

The more I learn the more confused I become. Thank you so much Microsoft.

 

NB Macrium does NOT defrag the disc, I am very pleased to say.

 

Macrium creates a 6 GB intelligent-copy FULL image backup file of the 16 GB of data on my System Partition.

A daily incremental backup file is typically 120 MB, because Windows shifts files each day to new sectors; Firefox alone contributes 40 MB of that.

I know that defragging will cause the next incremental to be 3 GB, because although the files are the same they have been shifted about,

which is why I only defrag just before making the next FULL image backup.
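My understanding of why that happens (a toy illustration, nothing like Macrium's real format): an imager that diffs the disc in fixed blocks sees a "changed" block wherever data has moved, even when no file content changed at all.

import hashlib

def block_hashes(data, block=4096):
    # One SHA-256 digest per fixed-size block of the disc image.
    return [hashlib.sha256(data[i:i + block]).hexdigest()
            for i in range(0, len(data), block)]

old = b"A" * 4096 + b"B" * 4096 + b"C" * 4096   # three files before a defrag
new = b"B" * 4096 + b"C" * 4096 + b"A" * 4096   # same files, shuffled by defrag
changed = sum(a != b for a, b in zip(block_hashes(old), block_hashes(new)))
print(f"{changed} of 3 blocks changed")          # -> 3 of 3, content identical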

 

When an unwanted Microsoft Update invaded me and totally and permanently trashed Windows 7

(it would boot, but never lived more than 2 minutes before it froze),

the Macrium Rescue disc restored the previous day's image of a working system and all was well.

 

The next incremental backup image, relative to the existing previous incremental, was about 200 MB;

but had Macrium "defragged" whilst restoring, the incremental would have been 3 GB.

 

One thing I am sure of and agree with: multi-pass is unnecessary.

 

Regards

Alan


  • Moderators

Hi Alan,

 

We're way off track now. I would think that your (very modern GPT) drive has 3 or 4 platters, and thus 6 or 8 heads max. I read that some disk manufacturers (WD used to, Maxtor may still) display disk capacity in a virtual cylinder/head/sector form, with confusing results. Maybe this is what you are looking at.
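The numbers give the game away: the ATA standard caps reported CHS geometry at 16383 cylinders x 16 heads x 63 sectors, so the "geometry" a tool shows for any big modern drive is pure fiction, which would explain a figure like 129 heads.

# Maximum capacity addressable through real CHS fields:
cylinders, heads, sectors, sector_bytes = 16383, 16, 63, 512
print(cylinders * heads * sectors * sector_bytes / 1e9)   # ~8.45 GB, however big the drive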

 

As for me I don't use Macrium and I have never defragged, so I'm an unbeliever really. I'm not at all bothered about fast sectors of disks. I can wait a few milliseconds more. I've wasted several hundreds of lifetimes of disk latency in frock shops.

