
does anyone defrag their SSD


mta



I know the 'expert' advice is that you shouldn't defrag an SSD, and they even add there is no need to.

There's also TRIM and wear leveling to help spread the load over all the NAND chips (or whatever the current ones are using).

And they say that, with the limited number of write cycles, a defrag, or any excessive I/O operation, will shorten the life of an SSD. But I was wondering: does anyone here defrag their SSDs?

 

I assume their basic reasoning is that even a fragmented super-fast drive will still be super fast, and the small speed gain from defragging doesn't warrant the downside of shortening the lifespan.



I have never and will probably never defrag an SSD.

My understanding is :-

 

When Windows defrags a file it causes the file to be held in a contiguous sequence of LBAs.

http://en.wikipedia....l_Block_Address

 

The effect for an HDD is that when the file is written/read, the mechanism needs a single rotation at 5400/7200/10,000 RPM to access every sector on a complete track.

Were the file fragmented, the same quantity of LBAs would require many rotations plus many head seeks.
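To put rough numbers on that, here is a back-of-the-envelope sketch; the 7200 RPM, 9 ms seek and 50-fragment figures are illustrative assumptions, not measurements:

```python
# Rough illustration of why fragmentation hurts an HDD but not an SSD.
# The RPM, seek and fragment figures are typical textbook values, not measurements.

rpm = 7200
rotation_ms = 60_000 / rpm            # one full rotation: ~8.3 ms
avg_seek_ms = 9.0                     # typical desktop HDD head seek

# Contiguous file: roughly one seek plus one rotation to sweep the track.
contiguous_ms = avg_seek_ms + rotation_ms

# Same file in 50 fragments: each fragment costs a seek plus (on average)
# half a rotation of latency before its sectors pass under the head.
fragments = 50
fragmented_ms = fragments * (avg_seek_ms + rotation_ms / 2)

print(f"one rotation    : {rotation_ms:.1f} ms")
print(f"contiguous read : {contiguous_ms:.1f} ms")
print(f"fragmented read : {fragmented_ms:.1f} ms")
```

Even with generous assumptions, the fragmented read costs tens of times more mechanical delay than the contiguous one.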

 

The effect on an SSD is nothing at all.

The SSD has many banks each of which has many pages.

When a file is sent to the SSD, an internal controller will write the contents of each LBA to a page in one bank and the contents of the next LBA to a page in a different bank.

The actual page and bank are totally unrelated to the LBA and are governed solely by what the controller firmware chooses in order to equalise wear and extend SSD life.

The controller can write the data very quickly.

The SSD flash cells take very much longer to absorb new values (but even so, a much shorter time than one rotation of an HDD).

(I remember using UV-erasable EPROM, flash's ancestor, several decades ago - it took 15 minutes to erase under ultraviolet light and a few minutes to write 1 kilobyte with 50-volt pulses.)

The effect is that many pages are simultaneously being written / absorbed in parallel.

I believe reading back from flash is also a slow process (but faster than the initial write/absorb),

but again many pages each from a different bank can be read back in parallel.

 

Each LBA value that Windows uses is remembered by the SSD controller and held in a look-up table against the Bank/Page location in which that cluster was written,

and when Windows reads the file back, the controller looks up where it placed the contents of each LBA.

 

Sequential LBAs mean nothing to an SSD.
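To make the look-up table idea concrete, here is a toy Python sketch; the bank/page layout and the simple round-robin placement are invented purely for illustration, and real controller firmware is vastly more sophisticated:

```python
# Toy Flash Translation Layer: LBAs are just keys into a mapping table;
# the physical (bank, page) chosen for each write is the controller's
# business and has nothing to do with whether the LBAs are sequential.
# The round-robin placement below is purely illustrative.

class ToyFTL:
    def __init__(self, banks=8, pages_per_bank=4):
        # Free-page list ordered so that consecutive writes land in different banks.
        self.free = [(b, p) for p in range(pages_per_bank) for b in range(banks)]
        self.map = {}                      # lba -> (bank, page)

    def write(self, lba, data):
        bank_page = self.free.pop(0)       # next free page, spread across banks
        self.map[lba] = bank_page
        # (real firmware would also program the flash cells here)

    def read(self, lba):
        return self.map.get(lba)           # where did we put that LBA?

ftl = ToyFTL()
for lba in [100, 101, 102, 103]:           # a "contiguous" file as Windows sees it
    ftl.write(lba, b"...")
print([ftl.read(lba) for lba in [100, 101, 102, 103]])
```

Windows sees four consecutive LBAs; the data lands in four different banks, which is exactly why sequential LBAs buy nothing.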



Yes, it is pointless physically defragging an SSD, as is wiping anything in any form. But a defrag does more than rearrange clusters on an HDD or SSD. It also consolidates the cluster-run info in the MFT; in plain English, it replaces many separate cluster runs in the MFT record with one large one. The point of this is that it reduces the many logical I/Os issued by the file system to one. This is where much of the speed increase after a defrag is gained, before you even get near the HDD or SSD.
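A toy illustration of that consolidation, with made-up numbers (real MFT data runs are encoded rather differently):

```python
# Simplified picture of the cluster-run ("extent") lists held in an MFT record.
# A run is (starting cluster, length). A fragmented file needs many runs, a
# defragged file needs one, so the file system issues one logical I/O, not many.

fragmented_runs = [(1000, 16), (5200, 8), (90, 32), (7770, 8)]   # 4 runs, 64 clusters
total = sum(length for _, length in fragmented_runs)

defragged_runs = [(200_000, total)]        # one contiguous run of 64 clusters

print(len(fragmented_runs), "logical reads before defrag,", len(defragged_runs), "after")
```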

 

What is needed on an SSD is a half defrag, where the cluster-run info is defragged in the MFT and the LBA mapping tables in the Flash Translation Layer are updated accordingly. There's no need to move any data, except to rewrite the MFT, which happens anyway. Of course there would need to be some work by Microsoft and perhaps the SSD manufacturers for this to come about. Anyway, I bags the patent, and a small royalty on every OS or upgraded SSD sold. Or both.


Some while ago I saw that Diskeeper claimed to have special skill at defragging an SSD,

and I thought it was mostly snake oil.

I thought that Windows might need fewer CPU cycles to instruct a disk to read a contiguous sequence of LBAs instead of looking up a random sequence,

but with GHz clock rates that takes little time.

I will concur that a defragged MFT can be read much faster,

but would have thought that each cluster run in the MFT would keep the disk busy with another 100 or more LBAs to access.

 

I have just used the search phrase

diskeeper SSD

I found this of interest

http://www.tomshardw...efrag,6848.html



Good find, Alan: that was written nearly four years ago, and hasn't really taken the world by storm. I have read that there's some gain to having contiguous pages of file data in the same SSD block, but how an application would manage that I don't know, nor how long it would last.

 

I think that many if not most people (present posters excepted) don't realise that the storage device is abstracted from the file system, and what they are seeing, and what defraggers are seeing, is a logical picture of the drive built entirely from the cluster bitmap and LBA addresses held in the MFT. It's those LBAs in the MFT that are defragged, not the storage device. As the defragger modifies the LBAs, the clusters on the storage device are moved to match, and on an HDD will more or less match the LBAs held in the MFT. But on an SSD the pretty map shown by the defragger bears no relation to the underlying turmoil on the SSD. Defraggers never actually look at the drive; they can only ask it nicely to read or write something at a particular LBA.

 

SSDs are even further abstracted, as an erased page (cluster) is simply unmapped from the LBA mapping tables and thrown into the pool of available pages. The LBA doesn't point to anything. So when Recuva, WinHex, or any application looks at an empty cluster, the SSD's FTL just sends back a default cluster of zeroes without looking at the flash. This stuff is priceless.
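Roughly, as a sketch (the mapping table and the read_flash stub are invented for illustration; drives that implement deterministic read-after-trim typically return zeroes like this, though behaviour varies by model):

```python
SECTOR = 512

mapping = {2048: ("bank3", "page17")}       # only LBA 2048 is still mapped

def read_flash(bank, page):
    return b"\xAB" * SECTOR                  # stand-in for real flash contents

def read_lba(lba):
    if lba not in mapping:
        # TRIM'd / never-written LBA: nothing to look up, so the controller
        # just hands back a manufactured blank sector without reading flash.
        return bytes(SECTOR)
    return read_flash(*mapping[lba])

print(read_lba(4096)[:8])                    # b'\x00\x00\x00\x00\x00\x00\x00\x00'
```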

 

On writing this I wondered, what happens when a defragger requests a page move on an SSD? Is it a delete and a rewrite, or is there the equivalent of a move command? And if so, does the SSD FTL just update the mapping tables without actually moving the data, as a move would be pointless? Bang goes my patent, and my royalties.



Some interesting info here on how Windows handles SSD defragging on Windows 8 compared to Windows 7.

 

In Windows 7 - we turned off defrag for SSDs as you mention in your entry; but in Windows 8, we have changed the defrag tool to do a general optimization tool that handles different kinds of storage, and in the case of SSD's it will send 'trim' hints for the entire volume;

 

http://windowssecrets.com/forums/showthread.php/147433-SSD-Defrag-on-Windows-8?highlight=drive+defragment

 



Thanks for the link, Hazel. I was unaware MS had changed their auto-defrag feature under W8.




The volume TRIM is more equivalent to a Wipe Free Space, as files will remain in just as many fragments as before. However, the TRIM'd clusters may well be background-erased ready for use, so it could help stressed drives.
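A small sketch of that distinction, with invented page names: the volume-wide TRIM only unmaps the LBAs the file system says are free and queues their old pages for background erasure, while fragment counts are left exactly as they were:

```python
# Volume-wide TRIM as described above: every LBA the file system considers
# free is unmapped and its old page queued for background erase. File
# fragment counts are untouched; only the pool of ready-to-use pages grows.

mapping = {10: "pageA", 11: "pageB", 500: "pageC"}   # live + stale LBAs
free_lbas = {11, 500}                                # file system says these are free
erase_queue = []

for lba in free_lbas:
    page = mapping.pop(lba, None)
    if page is not None:
        erase_queue.append(page)            # garbage collector erases these at idle

print("still mapped:", mapping)             # {10: 'pageA'}
print("queued for erase:", erase_queue)
```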


It is worth noting that even if a brand-new empty SSD were populated with files that were fully defragged as far as BOTH the SSD's internal organisation AND the Windows host are concerned,

when a file is deleted nothing IMMEDIATELY happens to the SSD contents other than a "mental note" that the corresponding pages hold redundant data.

 

At some stage the writing of a new file will involve over-writing redundant data - BUT that cannot happen immediately.

FIRST the redundant data has to be erased, but the SSD is incapable of erasing a solitary PAGE;

instead it can only erase ALL the pages held by one (or more) BANKS.

Unless every page in a bank holds redundant data, the SSD controller has to copy all the non-redundant data to pages in other bank(s) before it can erase the whole bank.

 

CONSEQUENCE :-

Any file that was originally in a "perfect" state will suddenly have some of its pages of data scattered across other pages in multiple banks.
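A toy sketch of that scatter effect, using the bank/page wording from the post above (the usual datasheet term for the erase unit is a block; the page contents and destinations are invented):

```python
# Erasure is per-bank (per erase block), not per-page. To reclaim a bank that
# is only partly redundant, the controller first copies its live pages into
# spare pages in other banks, then erases the whole bank. Live data gets
# scattered as a side effect.

bank = {0: "live-A", 1: "stale", 2: "live-B", 3: "stale"}   # page -> contents
other_banks_free_pages = [("bank7", 5), ("bank2", 9)]

relocated = {}
for page, contents in bank.items():
    if contents.startswith("live"):
        dest = other_banks_free_pages.pop(0)
        relocated[contents] = dest            # live data copied elsewhere

bank.clear()                                  # now the whole bank can be erased

print(relocated)   # {'live-A': ('bank7', 5), 'live-B': ('bank2', 9)}
```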


My understanding of defrag and TRIM in Windows 8 is that it is not much different from Windows 7, excepting that :-

 

If the user chooses to defrag then Windows 7 will waste time and SSD life by doing what it ought not to do,

BUT with this Windows 8 benefit :-

The Windows 8 claims indicate that it does NOT restrict itself to sending a single TRIM hint for each LBA of a file that has just been deleted.

Windows 8 will give "reminders".

 

Far more importantly,

With 25 GB used and 30 GB Free space on my 55 GB SSD,

If I wish to restore a partition image I first need to burn a Linux Boot CD to perform a "Secure ATA Erase" so that all pages in all banks are empty,

If that stage is omitted then the SSD will still be holding 25 GB of ancient data when the image restore throws another 25 GB onto it.

According to Murphy's law the new data will not replace the ancient data,

so Windows might see 30 GB of free space,

but the SSD is left holding 25 GB of ancient data that will never be used plus 25 GB of restored data,

so it has only 5 GB of slack space in which to do its re-organisation magic - horribly chaotic and slow.

 

Windows 7 will never issue a TRIM command to purge ancient data which it knows nothing about.

Windows 8 MAY possibly issue a TRIM to absolutely every LBA which it considers redundant,

so after some days the 25 GB of ancient data could be disposed of.


The Windows 8 claims indicate that it does NOT restrict itself to sending a single TRIM hint for each LBA of a file that has just been deleted.

 

I don't understand what this means, Alan.

 

I don't want to go too far off topic here, but have you ever issued a secure ATA erase? How? Is your example hypothetical?

 

When you restore your partition (sans SAE) the data will be written back without any reference to what was there before, as any data write would do. The file system tells the storage device - the SSD - to write this data at lba nnnx. So lba nnnx gets written, then lba nnny etc, depending on how the image has been taken. Much of the old data will be overwritten. Of course as this is an SSD the data writes will be to new erased pages, the lba mapping updated, and the old pages marked as invalid, ready for erasure by the Garbage Collector.
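A minimal sketch of that write path, with invented page names, showing why the restored LBAs invalidate their old pages while any LBA the restore never touches keeps its old page:

```python
# Write path during an image restore: same LBAs as before, but every write
# lands in a fresh page; the page that used to back that LBA is flagged
# invalid for the garbage collector. LBAs the restore never touches keep
# their old pages, because the SSD has no idea what the file system thinks.

mapping = {0: "pageA", 1: "pageB", 9: "pageZ"}     # pre-restore state
invalid = set()
fresh_pages = iter(["pageN1", "pageN2", "pageN3"])

def write_lba(lba, data):
    old = mapping.get(lba)
    if old is not None:
        invalid.add(old)                            # old copy now garbage
    mapping[lba] = next(fresh_pages)                # data goes to an erased page

for lba in (0, 1):                                  # restore rewrites LBAs 0 and 1 only
    write_lba(lba, b"restored")

print(mapping)   # {0: 'pageN1', 1: 'pageN2', 9: 'pageZ'} -- LBA 9 untouched
print(invalid)   # {'pageA', 'pageB'}
```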


From the first post in Hazel's link

http://windowssecret...rive+defragment

In Windows 8, when the Storage Optimizer (the new defrag tool) detects that the volume is mounted on an SSD - it sends a complete set of trim hints for the entire volume again - this is done at idle time and helps to allow for SSDs that were unable to cleanup earlier - a chance to react to these hints and cleanup and optimizer for the best performance.

 

Yes, I have burnt a Linux Boot CD to hold an OCZ ToolBox which takes a few seconds to issue the special Secure ATA Erase,

and to my relief it worked.

 

Starting from scratch, Windows plus applications etc. might occupy the first 25 GB of the LBA memory space with very few gaps, as seen by Windows.

A partition image could then be created.

 

After updating and replacing applications etc there could be many more gaps, and some of the previously unused 30 GB might become occupied.

Then something bad happens and you decide to restore the image of "happier times".

The image writes to the first 25 GB of LBA memory space, but the top 30 GB are left as they were.

Therefore the SSD will not mark any of that 30 GB for Garbage Collection.

 

I would guess that before restoring the image there might be data totalling 1 GB in the last 30 GB of space,

and it is that 1 GB in the last 30 GB that is not marked for Garbage Collection.

As you said, "Much of the old data will be overwritten", and this will be marked for G.C.

 

After a 40 year career in engineering I have seen too many examples of Murphy's Law in action.

According to Murphy the worst possible things only happen at the most inconvenient time,

and somewhat more than 1 GB (e.g. 25 GB) may be held in the top 30 GB before the image restore, and never get marked for G.C.



Ah, but you made the mistake, in 'The image writes to the first 25 GB', of thinking that the LBAs will be mapped to the first 25 GB of physical flash, which isn't true.

 

When the SSD is new the lba's will map to the physical pages, more or less. As data is written it will start at page 1 and continue sequentially. As more data is written, and existing data updated, new pages will be used until all pages are used once. This could happen in a few days, or even hours, depending on pc use.

 

More data writing will use pages flagged as invalid, i.e. the data held there has been updated and written elsewhere, or deleted and TRIM'd. Garbage collection will erase these pages ready for writes. Also the wear levelling will, after extended use, swap static live data with heavily-used cells, freeing up the low-use cells and reducing stress on heavily-used cells. So live data could be anywhere, physically.

 

So you restore an image, and write to, say, LBAs 1-1m, which held the original 25 GB. The SSD has already had some use, possibly a great deal, so the LBA-to-physical-page mapping is, well, anyone's guess. The data will be loaded all over the disk. It will not 'write to the first 25 GB of LBA memory space, but the top 30 GB are left as they were'.

 

If any part of the original 25 GB of data were still live before the restore, then the LBAs would not have changed, but the physical pages would, for reasons given above. The restore will reuse the same LBAs, but the physical pages will be different, as writes use empty pages.

 

Thus any live data on pages mapped to the restore range of lba's will be flagged as invalid and emptied for reuse.

 

Any pages for live data written since the image was taken and outside of the restore range of lba's would not be flagged as invalid, as the SSD is unaware of what the file system is doing.

 

This is of course a huge simplification of the complex SSD controller components.


Ah, but you made the mistake, in 'The image writes to the first 25 GB', of thinking that the LBAs will be mapped to the first 25 GB of physical flash, which isn't true.

...

The data will be loaded all over the disk. It will not 'write to the first 25 GB of LBA memory space, but the top 30 GB are left as they were'.

Sorry, this is where a misunderstanding occurred.

 

All along I have held the view that, so far as Windows is concerned, the initial installation of both Windows and applications on the system drive will occupy the first 25 GB of LBA memory space,

and similarly it is the same first 25 GB of LBA memory space AS SEEN BY WINDOWS that will be written when an image is restored.

 

I fully understand that if an image is restored immediately after it has been created,

the SSD controller will store the data in the NEXT 25 GB of flash space, and because the same LBA numbers were sent by Windows, it will look up what previously corresponded to them and mark the old data as garbage.

 

So far as I am concerned LBA memory space is a range of numbers, each of which will :-

Always correspond to the same Track and Sector locations on an HDD

Never correspond to the same Bank and Page on an SSD, but is just used as an index so that when Windows wants to read the contents of the LBA the SSD controller will know what to access.

 

When an image is restored, the same LBA memory space will be written, and the SSD will write to the next 25 GB worth of banks and pages,

and ALSO mark as garbage each of the first 25 GB pages for which the same LBA numbers have been re-used.

 

I do believe that with luck most of the old stuff will be marked and queued for garbage collection,

but I do not trust luck when it is needed.


I use SSDs in some of my configurations. I do not defrag them. I do not use any fancy-schmancy optimization utilities either. The very newest drives are becoming better at consolidating free space (which is where the slowdowns can come from). And Samsung drives' controllers are aware of NTFS metafiles and optimize themselves accordingly. Perhaps when SSDs mature into a viable consumer product across the board, the defragging notion will finally disappear once and for all.

 

In the meantime, I tend to recommend leaving some free space, maybe 25%, to allow the disk to do its background thing of consolidating and preparing contiguous free space. Lack of contiguous free space *is* a problem, because cost-effective NAND flash can be read in much smaller units than it can be written.

 

Well, all that aside, flash technology as we know it won't be around much past 2020; by that time we'll have run into the fundamental laws of physics and a new technology will need to be developed. Flash SSDs will top out at about 4-6 TB and be 50% over-provisioned due to the increasing unreliability of the ever-shrinking storage element. You'd be like this dog (see the attached image) if you truly understood how precarious your data is on one of these things, and more horrified to know that even now ECC and spare over-provisioning is at 40%! Be aware that the new TLC chips, while allowing for 256 GB and higher drives at the consumer price point, have lifespans as short as 500 write cycles. The manufacturers are relying more and more on the controller to shuffle data to lesser-used blocks. And keeping track of all this requires even more spare over-provisioning, bringing us back to lower densities in the end.

(attached image)

Instead you want to stick with a small, low-density SSD, like SLC, and keep mechanical drives for long-term storage of bulk material.

 

I can't wait to see the data recovery industry boom coming in as little as 3 years from now!


@MTA, I try it on USB flash disks, & I would try it on an SSD drive if I had one.

 

Just to see what happens. Because if the drive fails, oh well, I get another one.

 

I experiment to gain knowledge. You can learn a lot by doing the forbidden.

 

* Not recommended that others try it. I just have time to kill, & I like to learn.



@SF

Couldn't agree more; you learn a lot from experience - even more from mistakes.

(I seem to have opened a spirited debate at least.)


