I typically use CCleaner's wipe MFT capability from time to time. This last run left me with 317,000 files named .z.z.z....z.z.zzz......zz.z...., and they're resident in the $MFT itself. How do I get rid of those?
I have a pro-level tool that does the job. But what about consumer, layperson-level tools?
An alternative is to format the disk and then re-copy everything back, thus starting over with a fresh $MFT. But that's not really practical.
I'm also looking at the "problem" as a speed and tidiness issue, not a security one. Having the CPU and disk subsystem load and sort through all of that is a lot of unnecessary busywork.
So far I've investigated a number of utilities, but no go.
The MFT is something specific to the NTFS file system. It doesn't contain the files themselves; it's more or less an extension of the directory file. The directory file can't hold much info about each file, and therefore the MFT contains the rest of the info about those files.
When one uses CC's free-space wipe feature, it first cleans the MFT and then wipes the free drive space. And that takes A LOT of time.
When I want to REALLY clean the MFT and the free space on a disk, I use "Cleandisk Security". This program does the opposite of what CC does: it first cleans the free space on the disk (by writing lots of files to it). As a result, the MFT shrinks in size, and then "cleaning" the MFT takes (much) less time.
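Roughly, the "fill the free space" trick boils down to something like the sketch below. This is my illustration only, not CC's or CDS's actual code; the drive letter, temp folder name and chunk size are all made up.

```python
import os
import shutil

DRIVE = "D:\\"                          # target data volume (assumption)
WORK_DIR = os.path.join(DRIVE, "_wipe_tmp")
CHUNK = b"\x00" * (64 * 1024 * 1024)    # 64 MiB of filler per file

os.makedirs(WORK_DIR, exist_ok=True)
created = []
try:
    i = 0
    # Keep writing big files until the volume reports (almost) no free space.
    while shutil.disk_usage(DRIVE).free > len(CHUNK):
        path = os.path.join(WORK_DIR, f"fill_{i:06d}.bin")
        created.append(path)
        try:
            with open(path, "wb") as f:
                f.write(CHUNK)
        except OSError:                 # hit "disk full" a bit early - stop filling
            break
        i += 1
finally:
    # Delete the filler files again so the volume is usable afterwards.
    for path in created:
        if os.path.exists(path):
            os.remove(path)
    os.rmdir(WORK_DIR)
```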
@winapp2.ini And that will "smash down" the MFT? By virtue of the $MFT having to sacrifice its unused entries?
@willy2 I'm not interested in actually cleaning or changing the information in unused MFT entries. I want to get rid of them and reduce the $MFT size. Having 200,000+ entries is too much; there is notable performance degradation.
Yes, it will reduce the size of the MFT (temporarily).
But there's another problem with this solution. As a result of the method used, both CC and Cleandisk Security wipe the System Restore Points (SRPs). Does your PRO tool wipe these SRPs as well? I would assume it does, because, as far as I know, there's no "Reduce MFT size" or "Clean MFT" API. One has to use a workaround to produce the desired outcome.
Yes, you'd have to kill the SRPs in order to maintain consistency. You're correct: there are no APIs that I know of that support this operation. Reducing the number of $MFT records has to be done offline.
Both CC and CleanDisk Security (CDS) write a LOT of large files to disk in order to "clean" the free space, effectively overwriting all the old files. Windows monitors that process, notices the disk is filling up, and at some point decides to kill (one or more) SRPs and to shrink the MFT as much as possible in order to make room for the new files. Reducing the size of the MFT and killing the SRPs are effectively unintended consequences of the way these two programs work.
When CC and CDS notice that a disk is nearly completely full (the user then gets a "disk full" warning from Windows), they delete all those temporary files. The end result is that the drive/disk is "cleaned", but one's SRPs are gone as well.
That's why I am curious whether or not your PRO tool operates the same way.
It should in theory. At the very least it will prune out all the "old" entries in the MFT, but I don't know if it'll shrink the amount of space allocated to it (I think it should).
You can't reduce the number of entries in the MFT. All records have a unique sequence number which is used as part of the indexing between records, so removing one would make all that follow invalid. Maybe you could chop off the records after the last live record, but those aren't used anyway, so they wouldn't hinder access speed. But I doubt whether any deleted MFT records noticeably hinder a binary search anyway.
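For anyone wondering why, NTFS stores cross-references as 64-bit file references: per the documented layout, the low 48 bits are the MFT record number and the high 16 bits are that record's sequence number. A small sketch of how such a reference packs together (the numbers are just examples):

```python
# Pack/unpack a 64-bit NTFS file reference: low 48 bits = MFT record number,
# high 16 bits = sequence number (bumped each time the record slot is reused).

def unpack_file_reference(ref: int) -> tuple[int, int]:
    record_number = ref & 0xFFFFFFFFFFFF   # which slot in the $MFT
    sequence_number = ref >> 48            # reuse counter for that slot
    return record_number, sequence_number

def pack_file_reference(record_number: int, sequence_number: int) -> int:
    return (sequence_number << 48) | (record_number & 0xFFFFFFFFFFFF)

# Example: a reference to record 1234, sequence 7.
ref = pack_file_reference(1234, 7)
assert unpack_file_reference(ref) == (1234, 7)
# Moving record 1234 to a different slot would leave every stored reference
# like this one pointing at the wrong place, which is why deleted records are
# reused in place rather than compacted away.
```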
The best way, as Keetah says, is to unload, format, restore. So what's your magical pro tool that removes deleted entries?
Willy, loading lots of files doesn't shrink the MFT, just the MFT zone. It doesn't even stop MFT expansion.
My uber pro kit is internal and in-house. It is both a hardware and software solution. Its name isn't speakable in this forum without getting a reprimand, as are the names of many of our custom-made utilities.
I can tell you it shadows the existing metafiles, all of them, into RAM. It then simulates a format by erasing the existing on-disk table and repopulates the table from the top down, sometimes moving a file, sometimes adjusting a record. It isn't all that efficient and consumes a lot of time, hence its unspeakable name. There's nothing really proprietary about it other than the time needed to code it, I suppose. I classify it as brute force.
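In outline it amounts to something like the toy model below: keep only the live records, repopulate the table from the top down, and patch the cross-references so they point at the new slots. To be clear, this is only my sketch of the shape of the job against an in-memory list, not the actual in-house tool, and real NTFS has far more cross-links ($MFTMirr, indexes, etc.) than this shows.

```python
def compact(records):
    # records: list of dicts like {"in_use": bool, "name": str, "parent": int}
    live = [i for i, r in enumerate(records) if r["in_use"]]
    remap = {old: new for new, old in enumerate(live)}   # old slot -> new slot
    compacted = []
    for old in live:
        r = dict(records[old])
        r["parent"] = remap[r["parent"]]   # fix the cross-reference
        compacted.append(r)
    return compacted

# Slot 0 is a root dir; slot 1 is a deleted record left behind; slot 2 is live.
mft = [
    {"in_use": True,  "name": "rootdir",  "parent": 0},
    {"in_use": False, "name": "old.tmp",  "parent": 0},
    {"in_use": True,  "name": "file.txt", "parent": 0},
]
print(compact(mft))   # two records left; file.txt's parent still points at rootdir
```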
There was a discussion on the MYDEFRAG (JK Defrag) forums a while back about this, and the author had cited lack of documentation as the reason for not developing the feature. I guess the whole forum and website is a goner nowadays.
I'm investigating something much more elegant, something from Paragon. I wanted to play with it a while ago but couldn't find the time until now. It's Hard Disk Manager Professional, and I consider it semi-pro level; the cost is about $100.00 for a toolkit that does a bunch of partitioning things and more.
But it has a seemingly magic bullet that truncates and pushes all the MFT records to the top of the table.
In my preliminary test I had an MFT that was 63MB in size, and after running HDMpro against it the MFT was down to 37MB. Recuva had found 24,363 unused entries, ripe for overwriting; after HDMpro, Recuva came up with 20 records. I assume those 20 records were created during the reboot afterwards. So apparently HDMpro had squeezed out the unused entries and gained back 26MB of space. Or, better thought of, your CPU and file system code have 26MB less data to sort through.
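Those figures roughly add up if you assume the usual 1 KiB per MFT file record (some newer large-sector volumes use 4 KiB records, but 1 KiB is the common default):

```python
RECORD_SIZE = 1024                 # bytes per MFT file record (assumed default)
unused_records = 24_363            # the count Recuva reported before compaction

reclaimable_mib = unused_records * RECORD_SIZE / (1024 * 1024)
print(f"{reclaimable_mib:.1f} MiB")   # ~23.8 MiB, in the same ballpark as the
                                      # 63MB -> 37MB shrink reported above
```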
I can guess and say fractions of a second on a USB disk if you eliminate 500,000 entries.
I suspect, but will need to verify, that a properly compacted and de-slacked MFT will improve defrag operations, not only the initial reading of the map but throughout the job. I think we could gain several minutes.
Again - no malice - but doesn't it seem like the effort is not worth the reward?
I mean, you can run CC or DF, as examples, and you tend to see some benefit; obviously this is relatively short-lived and directly related to how often you perform those tasks.
I think I'm just not understanding how 'cleaning' it will affect my PC experience.
If I'm pushing around thousands of small files quickly, it does indeed make a difference. I'll need to benchmark to see what the exact results are.
The effort is rather small. With Paragon doing the compaction and halving the size, it takes about 1-2 minutes on a 750,000-record MFT. If it's a system disk you need to go offline; if it's a data disk, you can do it from within Windows.
And the slower the interface (like USB), the bigger the improvement.
I'm going to guess that working with a smaller $MFT is beneficial to SSDs.
At the moment I'm "developing" an exact procedure to do it all from a 1-click script.
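For what it's worth, a wrapper script can at least verify the before/after MFT size using the stock fsutil tool (run from an elevated prompt). A sketch, with the drive letter assumed and the compaction step left as a placeholder; the exact label fsutil prints may vary a little between Windows versions:

```python
import subprocess

def mft_size_bytes(volume: str) -> int:
    # Parse "Mft Valid Data Length" out of `fsutil fsinfo ntfsinfo <volume>`.
    out = subprocess.run(["fsutil", "fsinfo", "ntfsinfo", volume],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        if "Mft Valid Data Length" in line:
            return int(line.split(":", 1)[1].strip(), 0)   # value is hex (0x...)
    raise RuntimeError("MFT size not reported by fsutil")

before = mft_size_bytes("D:")          # drive letter is an assumption
# ... run the MFT compaction tool against D: here (placeholder) ...
after = mft_size_bytes("D:")
print(f"MFT: {before / 2**20:.1f} MiB -> {after / 2**20:.1f} MiB")
```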
There may be none, I don't know. It may only manifest itself in certain cases. One of my systems had 1,800,000 orphaned entries, or some even bigger number. With a mechanical disk there's either seek delay or just plain latency as the CPU works through those, especially if some are at the bottom of the tree or way out on a branch. Lots of empty holes to jump over.
With today's SSDs, garbage collection is still pretty dumb, and the disk is going to happily keep those orphaned entries safe and sound. If they're updated frequently, one "spot" or branch is going to drag all the nearby branches with it when wear leveling kicks in at the block level.
Take, for example, the analogy of a 2MB text file where you only update one line. On an SSD you still have to push the other 1.9999MB around; on a spinner you can precisely pluck and place a single entry at will. You can thank the outmoded block-writing methods inherent in flash memory for that.
Having fewer dead spots in the MFT allows more of it to fit in the SSD's cache RAM, and it's smaller to boot. And fewer blocks are erased by metafile activity.
Hope that makes some sense. It is truly late here and it's bedtime.
Perhaps someone should build a tool that rebuilds the $MFT from itself, by reading the individual links, pruning the dead ones, and then reconnecting the live links. I would think it would only be useful for people who deal with many, many thousands of files, but it might be useful to run something like that on an older machine and see the effects.
Whilst I have no experience of the 'MFT compactors' mentioned (I might try the trial version of Paragon's s/w), I find it very doubtful whether these dead records in the MFT can be removed. There are just too many critical internal links in the records to be changed safely. Of course 'compact' has more than one definition: a defrag of the extents, plus a chop of the slack space at the end of the file, is one, and quite realisable too.
In any event, file access is done, in the main, by file name, and that means reading through the folder-to-file chain. No more records are accessed to get to a file if the MFT has a million deleted records in it than if it has none. Only software such as Recuva reads all the MFT records sequentially. The point is that although it is very nice to have a clean MFT, there would be an immeasurably small increase in access speed. What software manufacturer would risk working on a proprietary, undocumented system, on the most vital of system files, for the smallest of gains? I would say you can count Piriform out.
Paragon killed all the orphaned links. I set up a 2GB test partition, filled it to 1.8GB, and then added and deleted 500 small files.
Recuva found all 500 entries and marked them as recoverable. After running HDMpro, Recuva couldn't find any entries. Repeating the test with 500,000 files, I could see the MFT expand and then contract back.
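For anyone who wants to reproduce it, the create-then-delete step was nothing fancier than the sketch below (the path and file size are assumptions; the counts are from the test):

```python
import os

TEST_DIR = r"E:\mft_test"        # the 2GB test partition (drive letter assumed)
COUNT = 500                      # bump COUNT for the larger 500,000-file run (bigger volume)
PAYLOAD = os.urandom(4096)       # 4 KiB of junk per file

os.makedirs(TEST_DIR, exist_ok=True)
paths = []
for i in range(COUNT):
    p = os.path.join(TEST_DIR, f"junk_{i:06d}.bin")
    with open(p, "wb") as f:
        f.write(PAYLOAD)
    paths.append(p)

# Deleting the files leaves COUNT freed (recoverable) records in the $MFT,
# which is what Recuva counts before the compaction pass.
for p in paths:
    os.remove(p)
```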
Granted, I'll need to look through everything to be sure, but it seems to do what it claims to do. As far as I know, no other company is making that claim.