Really shrinking the $MFT

At first I thought this would be interesting...

http://www.faqs.org/patents/app/20120131072

and it was, but by the time I reached the end I didn't think it was quite what it set out to be, and there are a few questions hanging over it.

I'm satisfied with the performance gains on a mechanical disk. On older systems the Start Menu populates much faster, and operations on small files are noticeably quicker, especially in my case since that's mostly what I've been working with lately. I didn't need a stopwatch to give it a thumbs up.

And cutting it down to size, from 420MB to 250MB, is a good thing. Regardless of how the database is structured, the smaller the better: less data to transfer over the bus, less searching, less skipping over empty holes, less processing, and more of the table can stay in memory. System housekeeping tasks seem just a little snappier.
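
If you want to check the figure on your own volume before and after, here's a rough Python sketch that parses the output of fsutil fsinfo ntfsinfo. It needs an elevated prompt, and the exact label and number format vary a little between Windows versions, so treat the regex as an approximation:

```python
# Rough sketch: report the current $MFT size for a volume by parsing the
# output of "fsutil fsinfo ntfsinfo". Run from an elevated prompt on
# Windows; the label/number format varies slightly between versions.
import re
import subprocess

def mft_size_bytes(drive="C:"):
    out = subprocess.run(
        ["fsutil", "fsinfo", "ntfsinfo", drive],
        capture_output=True, text=True, check=True,
    ).stdout
    # Typically a line like: "Mft Valid Data Length :   0x000000001a2c0000"
    match = re.search(r"Mft Valid Data Length\s*:\s*(0x[0-9A-Fa-f]+|[\d,]+)", out)
    if not match:
        raise RuntimeError("could not find the MFT size in fsutil output")
    value = match.group(1).replace(",", "")
    return int(value, 16) if value.lower().startswith("0x") else int(value)

if __name__ == "__main__":
    print(f"$MFT valid data length: {mft_size_bytes('C:') / (1024 * 1024):.2f} MB")
```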

Taking the MFT down to size is one of the hidden performance benefits of reformatting. Now I can appreciate, a little, why some people are so keen on doing that activity. I guess.

Indeed, I asked my colleague to look into how it all works. Entries in the table that no longer point to any valid file on the disk are removed, and the table is rewritten without those entries.

Understand this is different from defragging the MFT. Defragging simply brings the separate fragments (strewn across the disk) together into one area. Getting rid of the dead entries is a whole other animal. As far as we can tell, the Paragon tool is the only utility (usable by non-technical people) that can do this.

I'm tired of seeing 700,000 entries in Recuva! I'm done looking for utilities to compact the MFT!

I agree, it would be nice to see a clean MFT without all those spurious entries. However, a compacted MFT would force all new file allocations - all those temp files that are created and deleted continuously - to the end of the MFT, so there would be quite a lot of skipping to and fro anyway, since every lookup starts at the root directory in logical record 5.
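
For anyone who wants to watch where those new allocations actually land, here's a rough sketch using fsutil file queryfileid (Vista and later). It assumes the usual NTFS convention that the low 48 bits of the 64-bit file ID are the MFT record number and the high 16 bits are the sequence number, so treat that split as an assumption if your output looks different:

```python
# Sketch: show which MFT record a newly created file occupies, using
# "fsutil file queryfileid" (Vista and later). Assumes the low 48 bits
# of the NTFS file ID are the record number and the high 16 bits are
# the sequence number.
import os
import re
import subprocess
import tempfile

def mft_record_number(path):
    out = subprocess.run(
        ["fsutil", "file", "queryfileid", path],
        capture_output=True, text=True, check=True,
    ).stdout
    file_id = int(re.search(r"0x[0-9A-Fa-f]+", out).group(0), 16)
    return file_id & 0xFFFFFFFFFFFF  # strip the sequence number

if __name__ == "__main__":
    fd, path = tempfile.mkstemp(suffix=".tmp")  # lands on the temp drive, usually C:
    os.close(fd)
    try:
        print(f"{path} occupies MFT record {mft_record_number(path)}")
    finally:
        os.remove(path)
```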

I've downloaded a trial of Paragon Disk Manager Pro but it won't install under Sandboxie, and I don't want to do a proper install/uninstall with all the dregs left behind (clogging up the MFT). I'll try to install it on an older XP box later.

I guess my point is that the conceptual approach to compacting the MFT involves moving one record into another slot, and that is too complex and dangerous. Perhaps there's a different and smarter way to do it, such as moving a file to a different partition, then moving it back again. Repeat ad infinitum. That would close up the MFT and NTFS would do all the work safely. If your source is willing to talk then I'd be interested.
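
Just to make the idea concrete, here's a minimal sketch of that bounce-to-another-partition trick, with hypothetical paths. Note it only frees the file's old MFT record for reuse rather than shrinking the table, and I wouldn't point it at anything important without a backup:

```python
# Minimal sketch of the bounce idea above. Paths are hypothetical; the
# scratch folder must be on a different volume or the move is just a
# rename and nothing is freed. This only releases the file's old MFT
# record for reuse; it does not shrink the table on its own.
import shutil
from pathlib import Path

def bounce(src: Path, scratch_dir: Path) -> None:
    """Move src to another volume and straight back again."""
    scratch_dir.mkdir(parents=True, exist_ok=True)
    parked = scratch_dir / src.name
    shutil.move(str(src), str(parked))  # cross-volume move = copy then delete
    shutil.move(str(parked), str(src))  # the file comes back in a fresh record

if __name__ == "__main__":
    bounce(Path(r"C:\Data\example.bin"), Path(r"D:\mft_bounce"))
```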

By the way the link above is wrong in so many ways, starting with the description. No wonder the patent hasn't been granted.

After removing 290,000 records from another system and letting it sit quiescent for a few minutes, the amount of memory in use was 12MB less than before.

The Paragon tool seems to do a lot of work in memory, because over a 2-minute period there isn't a lot of disk access, and the whole job is done in about 3 minutes or so.

I don't know how much more I'm going to look into this. But I'll be happy to pass along any questions and see what answers come back.

Well, I installed Paragon Drive Manager Pro on the XP box and ran an MFT defrag as a test. Then I requested a Compact MFT, and to my great annoyance, after it had accepted the request, a message said that it wasn't available in the trial version. So I can't test it.

I'm just curious how it does it. That's the question.

I was able to run a Compact MFT with the Total Defrag trial version.

FWIW, I tried it on all of my drives, and the results were slim on most of them, except my system drive, where it trimmed the $MFT file from 460.75MB to 156.25MB. My assumption is that this is because there was a larger difference between deleted and actual files on my system drive than on my storage drives, where for the most part I don't delete much.
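
A crude way to sanity-check that hypothesis is to compare how many records the table has room for (assuming the default 1KB FILE record size) against a count of live files and folders. Hard links, extra attribute-list records, and access-denied folders all skew the numbers, so it's a ballpark at best:

```python
# Ballpark check: records the table has room for (assuming the default
# 1KB FILE record) versus a count of live files and folders. Hard links,
# extra attribute-list records and access-denied folders all skew this.
import os

def live_entries(root):
    count = 0
    for _path, dirnames, filenames in os.walk(root, onerror=lambda e: None):
        count += len(dirnames) + len(filenames)
    return count

if __name__ == "__main__":
    drive = "C:\\"
    mft_bytes = int(460.75 * 1024 * 1024)  # plug in the fsutil figure for your drive
    approx_records = mft_bytes // 1024     # assumes the default 1KB FILE record size
    print(f"~{approx_records:,} records vs ~{live_entries(drive):,} live entries")
```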

@winapp2,

so you have noticed marginal speed benefit and minimal space gain?

how long did the process take?

Indeed, I too loaded a trial of Total Defrag and ran it against a 100MB partition. The (small) MFT had no spare records to start with, so I loaded a modest folder with 50 or so files, then another, and then deleted the first, leaving a space of 50 deleted records in the centre of the MFT. I then ran TD's Compact (no truncate) and, using WinHex and Recuva, I could see that the MFT had been compacted, with no restart required.
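
For anyone who wants to repeat the experiment, this is roughly the setup as a Python sketch. The drive letter is an assumption; use a throwaway test partition:

```python
# Sketch of the test above: two folders of ~50 small files each on a
# scratch NTFS partition, then delete the first, leaving a run of freed
# records in the middle of that volume's MFT.
import shutil
from pathlib import Path

def make_files(folder: Path, count: int = 50) -> None:
    folder.mkdir(parents=True, exist_ok=True)
    for i in range(count):
        (folder / f"file_{i:03d}.txt").write_text(f"test file {i}\n")

if __name__ == "__main__":
    root = Path(r"T:\mft_test")  # hypothetical small test partition
    first, second = root / "first", root / "second"
    make_files(first)            # records allocated for these files...
    make_files(second)           # ...then for these, right after them
    shutil.rmtree(first)         # frees the first block of records
    # Now run the compaction tool and inspect the MFT with WinHex/Recuva.
```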

I will dig a little deeper later. I know that you (as an application) can't modify the MFT directly; edit it with a hex editor and a minute later the changes are backed out by NTFS.
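
You can still read the live MFT from user space, though, as long as you open the raw volume read-only with admin rights. This sketch parses the NTFS boot sector to find the table and checks the first FILE record's signature and flags, using the standard on-disk offsets; nothing in it writes to the volume:

```python
# Read-only peek at the live $MFT: open the raw volume (needs admin),
# parse the NTFS boot sector for the MFT's starting cluster, and check
# the first FILE record's signature and flags.
import struct

def read_first_mft_record(volume=r"\\.\C:"):
    with open(volume, "rb") as vol:
        boot = vol.read(512)
        bytes_per_sector = struct.unpack_from("<H", boot, 0x0B)[0]
        sectors_per_cluster = boot[0x0D]  # typical small values; huge clusters encode differently
        mft_lcn = struct.unpack_from("<Q", boot, 0x30)[0]
        cluster_size = bytes_per_sector * sectors_per_cluster
        vol.seek(mft_lcn * cluster_size)
        record = vol.read(cluster_size)[:1024]  # default FILE record size is 1KB
    if record[:4] != b"FILE":
        raise RuntimeError("unexpected record signature")
    flags = struct.unpack_from("<H", record, 0x16)[0]
    return {"in_use": bool(flags & 0x0001), "is_directory": bool(flags & 0x0002)}

if __name__ == "__main__":
    # Record 0 is $MFT itself, so in_use should come back True.
    print(read_first_mft_record())
```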

With only 170 records, the process took just a few minutes.

If it can lock the volume, e.g. an external USB drive, then it will process right away. C:\ requires a reboot.
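
That matches how volume locking works: the tool has to take an exclusive lock before it can rewrite the MFT in place. Here's a sketch of the same check using FSCTL_LOCK_VOLUME via ctypes (run elevated; the drive letter is an assumption). It succeeds on an idle USB drive and gets refused on the system volume, which is why C:\ has to wait for a reboot:

```python
# Sketch: try to take an exclusive lock on a volume with FSCTL_LOCK_VOLUME,
# the same kind of lock a compaction tool needs before rewriting the MFT
# in place. Run elevated; succeeds on an idle external drive, gets
# refused on the system volume while Windows is running.
import ctypes
from ctypes import wintypes

GENERIC_READ, GENERIC_WRITE = 0x80000000, 0x40000000
FILE_SHARE_READ, FILE_SHARE_WRITE = 0x00000001, 0x00000002
OPEN_EXISTING = 3
FSCTL_LOCK_VOLUME = 0x00090018
INVALID_HANDLE_VALUE = wintypes.HANDLE(-1).value

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.CreateFileW.restype = wintypes.HANDLE
kernel32.CreateFileW.argtypes = [
    wintypes.LPCWSTR, wintypes.DWORD, wintypes.DWORD, wintypes.LPVOID,
    wintypes.DWORD, wintypes.DWORD, wintypes.HANDLE,
]
kernel32.DeviceIoControl.restype = wintypes.BOOL
kernel32.DeviceIoControl.argtypes = [
    wintypes.HANDLE, wintypes.DWORD, wintypes.LPVOID, wintypes.DWORD,
    wintypes.LPVOID, wintypes.DWORD, ctypes.POINTER(wintypes.DWORD), wintypes.LPVOID,
]
kernel32.CloseHandle.argtypes = [wintypes.HANDLE]

def try_lock(drive_letter="E"):
    handle = kernel32.CreateFileW(
        rf"\\.\{drive_letter}:", GENERIC_READ | GENERIC_WRITE,
        FILE_SHARE_READ | FILE_SHARE_WRITE, None, OPEN_EXISTING, 0, None)
    if handle == INVALID_HANDLE_VALUE:
        raise ctypes.WinError(ctypes.get_last_error())
    try:
        returned = wintypes.DWORD(0)
        ok = kernel32.DeviceIoControl(
            handle, FSCTL_LOCK_VOLUME, None, 0, None, 0,
            ctypes.byref(returned), None)
        return bool(ok)  # False on C:, where something always has files open
    finally:
        kernel32.CloseHandle(handle)

if __name__ == "__main__":
    print("locked" if try_lock("E") else "lock refused (volume in use)")
```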

On an older Dothan Core 1.7GHz machine, I noticed a faster Start menu response; the icons and sub-folders there populated noticeably quicker. For a little more performance I used Disktrix Ultimate Defrag to bring all the $Metafiles close together. The two work well together.

I was able to knock 4 seconds off of loading Orbiter Spaceflight Simulator.

About 3-5 seconds off Photoshop CS2.

A modern hexcore i7 or similar would bury any of this latency with a sledgehammer, I'd imagine.

@winapp2,

so you have noticed marginal speed benefit and minimal space gain?

how long did the process take?

I haven't, except for one place*, but that's presumably because my system drive is solid state.

I left the room to grab a drink while it restarted and did its thing, and by the time I got back it was finished.

* After trimming the $MFT files across my drives, the program "Search Everything" appears to populate a bit more quickly.