Posts posted by Augeas

  1. I think that the SSD defrag rationale has changed over the past few years. We've seen that Win 8/10 runs what Microsoft describes as a 'Traditional' defrag on SSDs under certain conditions, and we also know that excessive fragmentation causes excessive I/Os as NTFS ploughs through the MFT. So an occasional defrag won't hurt. Does any manufacturer forbid defrags?

    As for longevity, TLC SSD has an erase/write limit of a little over 300 cycles. Before anyone panics: if I write 1 gb a day on my minute 120 gb device (and that is far more than I do) then the SSD should start slowing down in about 110 years.
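
    Back-of-envelope, and assuming a cycle limit of around 335 with no write amplification (both assumptions, not vendor figures), the sum works out like this:

        # Rough endurance estimate for a small TLC SSD. The cycle count and the
        # 1 gb/day figure are assumptions; write amplification is ignored.
        capacity_gb = 120
        pe_cycles = 335          # "a little over 300" erase/write cycles
        writes_per_day_gb = 1

        total_writes_gb = capacity_gb * pe_cycles            # total host writes the NAND can absorb
        years = total_writes_gb / writes_per_day_gb / 365
        print(f"{total_writes_gb} gb endurance ~= {years:.0f} years at 1 gb/day")
        # -> 40200 gb endurance ~= 110 years at 1 gb/day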

  2. Or to try to answer Don's original post, dump Defraggler. If you're on Win 8 or 10 (and have sys restore enabled, and have more than 10% fragmentation) then you'll get the Windows defrag described above. Mercifully Win 10 has sys restore off by default so those defaulters (me included) won't get the defrag.

    Why were you running a defrag against your SSD in the first place? What was the fragmented percentage?

  3. As TRIM came out with Win 7 in 2009, and SSDs before that, your audience might be limited. Zero filling a non-TRIM SSD might improve write performance but not for the reasons above (and also falsely claimed in the OCZ forums some years ago).

    An empty SSD block contains ones not zeroes, and no software on earth can erase NAND flash blocks. What zero filling does is to allocate a dummy page of zeroes to the LBAs instead of a physical page. The freed pages are subject to garbage collection, which erases them. Thus a pool of erased blocks is available for writes, and performance increases.

    Zero filling a TRIM SSD is a waste of time, effort and the life of the SSD.
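
    For anyone curious what 'zero filling' amounts to in practice, it's nothing more exotic than writing a big file of zeroes across the free space and then deleting it. A minimal sketch, with an assumed scratch-file name and chunk size (and, again, pointless on a TRIM-enabled drive):

        # Sketch of zero filling a volume's free space. Illustrative only: the
        # scratch-file name and chunk size are assumptions, and this will briefly
        # fill the target volume completely.
        import os

        def zero_fill_free_space(target_dir, chunk_mb=64):
            path = os.path.join(target_dir, "zerofill.tmp")   # hypothetical scratch file
            chunk = bytes(chunk_mb * 1024 * 1024)             # a buffer of zeroes
            try:
                with open(path, "wb") as f:
                    while True:
                        f.write(chunk)                        # keep writing zeroes...
                        f.flush()
                        os.fsync(f.fileno())
            except OSError:
                pass                                          # ...until the volume is full
            finally:
                if os.path.exists(path):
                    os.remove(path)                           # free the LBAs again

        # zero_fill_free_space("D:\\")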

  4. Sort of, but with live files, and NTFS knows exactly where they are. NTFS doesn't care a bit about deleted files, and I can see no way of performing a recovery to the source disk: the mechanics of it are horrendous. (Well, I occasionally run a single file recovery to the source disk, but that's another matter.) It won't happen without a new file system, and I can't see anyone spending millions to enable this.

  5. That isn't how Recuva, or recovery software in general, works. Recuva will copy files it is recovering to a separate drive, so you will need an additional 3tb+ drive to hold the recovered files. In Advanced Mode go to Options/Actions and at the bottom of the pane check Restore Folder Structure. Then run your recovery.

    Although Recuva can handle the massive disks that some use today it's very difficult for a human to handle the millions of files involved. Beyond backup, beyond recovery.

  6. If you have 50,000 files on your SSD then 500 fragments is just a rounding error. If you have one file with 500 fragments, that's another matter.

    If you're on Win 8 or 10, and if you have Sys Restore enabled, and fragmentation is greater than 10% (whatever that means), then Windows Defragger will defrag your SSD once a month. Before anyone says this is an Optimise, it is, in Microsoft's words, a 'Traditional Defrag'. This is apparently to manage the fragments in volsnap files. Does Defraggler replace Defragger? (I'm not a Defraggler user.)

    Unless you have many fragments in one file (say 100+) and you use that file frequently, then forget it.

  7. I don't know what benefit, other than a very slight one, renaming the $R files would bring. I don't know the circumstances under which the two software apps run, let alone what they do internally. Folders exist as records in the MFT, and the owning folder for a file is found by linking back from an offset held in the file's MFT record. As this either physically exists or it doesn't, I can't see why one piece of software can find it and another can't.

    The recycler folder is extremely unlikely to have been deleted so it still should exist in the MFT, and the link back value in the file's MFT record should be valid, so Recuva should find it. I know I have previously run Recuva and seen the recycler folder, so I know Recuva can find it. I run Recuva in Advanced Mode with the top three boxes checked in Options/Actions, if that's any help.
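
    To illustrate the link-back idea (this is not Recuva's actual code, and read_mft_record is a made-up helper standing in for real MFT parsing), rebuilding a path is just a walk up the parent references until you hit the root record:

        # Sketch of rebuilding a deleted file's path from the parent-directory
        # reference in its MFT record. read_mft_record() is hypothetical and is
        # assumed to return (name, parent_record_number).
        ROOT_RECORD = 5          # the root directory is MFT record 5 on NTFS

        def build_path(record_number, read_mft_record):
            parts = []
            while record_number != ROOT_RECORD:
                name, parent = read_mft_record(record_number)
                parts.append(name)
                record_number = parent
            return "\\".join(reversed(parts))

        # If the recycler folder's record still exists, the walk succeeds and the
        # full path (e.g. \$Recycle.Bin\<SID>\$Rxxxxxx.doc) can be shown.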

  8. As far as I know there's no way of wiping the MFT without also wiping free space. Do you mean the Wipe MFT component of Drive Wiper, or what? If so then the notice that Wipe MFT has completed may not necessarily coincide with the actual event.

    Wipe MFT fills the MFT by allocating enough small files to fill it (and overwrite what was there before) and then deleting them. This is an NTFS-heavy process. I can't see that the cluster size would have much, if any, effect, except by reducing the number of I/Os required.
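
    The general technique, as I understand it, looks something like this (not CCleaner's actual code; the folder name and file count are illustrative):

        # Sketch of "fill the MFT with tiny files, then delete them". Tiny files
        # are resident in their MFT records, so the old record contents get
        # overwritten.
        import os

        def churn_mft(target_dir, count=100_000):
            work = os.path.join(target_dir, "mft_fill_tmp")    # hypothetical work folder
            os.makedirs(work, exist_ok=True)
            names = []
            try:
                for i in range(count):
                    p = os.path.join(work, f"f{i:07d}.tmp")
                    with open(p, "wb") as f:
                        f.write(b"0")          # data small enough to live inside the MFT record
                    names.append(p)
            except OSError:
                pass                           # volume or MFT exhausted
            finally:
                for p in names:
                    os.remove(p)               # delete them all, leaving overwritten records
                os.rmdir(work)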

  9. In a normal scan Recuva doesn't look at the disk but gets data from the Master File Table. All files are listed in the MFT, and although the records flagged as deleted in the MFT are reusable it is common for there to be more files listed in the MFT than are on the disk. For example, fill a disk with 1,000 files of 100 mb each, delete them, then fill it with 500 files of 200 mb each. The disk will be full, but because only 500 of the deleted MFT records are reused by the new files, the MFT, and Recuva, will still show 500 deleted files addressing 500 x 100 mb of data. I would expect the deleted files to have a state of overwritten, as there is no real free space available, but I don't know the exact circumstances that show free space in your case.
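
    A toy simulation of that record reuse, assuming the new files grab the lowest-numbered deleted records first, shows where the 500 x 100 mb comes from:

        # 1,000 deleted 100 mb files, then 500 new 200 mb files that reuse 500 of
        # the deleted MFT records (a simplifying assumption about allocation order).
        records = [{"size_mb": 100, "deleted": True} for _ in range(1000)]

        for rec in records[:500]:
            rec.update(size_mb=200, deleted=False)   # record recycled for a new live file

        still_deleted = [r for r in records if r["deleted"]]
        print(len(still_deleted), "deleted records still listed, addressing",
              sum(r["size_mb"] for r in still_deleted), "mb, on a disk that is now full")
        # -> 500 deleted records still listed, addressing 50000 mb, on a disk that is now full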

  10. If a scan takes 11 hours then it's a deep scan. No date information of any sort is stored in deep scan found files. A normal scan has a date last modified (and has had for years) which is as near to what you want as you will get. Depending on how files were deleted, this is the deleted date.

  11. That certainly is an old thread. Well, it makes me feel old.

    Files sent to the recycler (which is just a system folder) are renamed with $I and $R prefixes. The $R file is the complete file and the $I file (which is 544 bytes, if I remember correctly) holds the file name and folder. You can recover and read the $I file with Notepad.

    You can recover the $R file as well. I don't know if it is the original file untouched, so try renaming and opening it. You could also try copying it back to the recycler as the old thread describes, at your own risk.
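
    If you'd rather read a recovered $I file programmatically than in Notepad, here's a hedged sketch of the fixed 544-byte layout used up to Windows 8 (Windows 10's version adds a path-length field and a variable-length path, so it won't fit this); the file path in the example is hypothetical:

        # Parse a recovered $I file: 8-byte version, 8-byte original size,
        # 8-byte deletion FILETIME, then the original path in UTF-16.
        import struct
        from datetime import datetime, timedelta

        def parse_i_file(path):
            with open(path, "rb") as f:
                data = f.read()
            version, size, filetime = struct.unpack_from("<QQQ", data, 0)
            # FILETIME is 100 ns ticks since 1601-01-01
            deleted = datetime(1601, 1, 1) + timedelta(microseconds=filetime // 10)
            original_path = data[24:].decode("utf-16-le").rstrip("\x00")
            return size, deleted, original_path

        # print(parse_i_file(r"C:\recovered\$IABC123.docx"))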

  12. Recovering a 'simple text file' is no easier, or more difficult, than recovering the most securely triple-encrypted file. A text file is merely one where the bit sequence can be translated into a pattern recognisable by a human being. It means nothing special to Recuva, or the disk, or the operating system. To them it's just a string of bits.

    As stated above, Recuva will copy faithfully whatever is in the deleted file's clusters. What's recovered is what is on the disk. So Recuva did what it should have done. That the end result is not what was expected is unfortunate, but Recuva can't conjure up what isn't there.

    I'm still puzzled by the file being empty yet being in json format. Either it is empty, or it contains some JSON text.

    Now you say that you reinstalled Chrome. I'm not even sure if you're looking at the new bookmarks file, presumably empty or nearly so, or what.
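
    A quick check on the recovered copy would settle whether it really is empty or holds some JSON (the path below is just an example):

        # Report the recovered file's size and try to parse it as JSON.
        import json, os

        path = r"C:\recovered\Bookmarks"     # hypothetical recovered file
        print("size on disk:", os.path.getsize(path), "bytes")
        try:
            with open(path, encoding="utf-8") as f:
                data = json.load(f)
            print("valid JSON, top-level keys:", list(data))
        except (json.JSONDecodeError, UnicodeDecodeError) as e:
            print("not valid JSON:", e)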

  13. Recovery software does not guarantee that any file can be recovered in its entirety. Recovery is a copying operation, and Recuva will copy exactly what is in the deleted file's clusters to another location. Recuva in advanced mode will show the cluster number, so if you use a hex editor you can go to the start sector on the disk and see that what you have recovered is exactly the same as the deleted file: or you can take my word for it. A state of excellent means that no clusters from a live file are overwriting your deleted clusters at the time Recuva was run.

    We've no idea how the file was deleted nor what has happened on your pc since, nor what the file system is, so it's not possible to say why you are not seeing what you want to see. I don't even know what should be seen in a Chrome bookmarks file.

    A brief few minutes on Google shows that json files have no file signature, so how did you know you were looking at a json file if there were no contents?
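
    The hex editor comparison mentioned above can also be done in a few lines, if you have administrator rights; the drive letter, cluster number, cluster size and file path are examples only:

        # Read the deleted file's first cluster straight from the volume and
        # compare it with the recovered copy. Needs to be run as administrator,
        # and raw volume reads must stay cluster-aligned.
        CLUSTER_SIZE = 4096              # 4096 is typical; check your volume's value
        start_cluster = 1234567          # the cluster number Recuva reports

        with open(r"\\.\C:", "rb") as vol:           # raw volume handle
            vol.seek(start_cluster * CLUSTER_SIZE)
            on_disk = vol.read(CLUSTER_SIZE)

        with open(r"C:\recovered\Bookmarks", "rb") as f:   # the recovered copy
            recovered = f.read(CLUSTER_SIZE)

        print("identical first cluster:", on_disk[:len(recovered)] == recovered)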

  14. That info is being plucked from the internet somewhere. If you really only have the first fragment of the file, as described above, then there's no practical way to retrieve and join the other fragments without professional help. Other recovery software might make a wild guess, but that's all it is. If it's a commercial album then what you want is available elsewhere, easily and cheaply.

  15. A deep scan runs a normal scan first, so any files and folders listed are from the normal scan. A deep scan does not access the MFT so file names and folders can't be extracted. A file found by the deep scan will have a file name of [001234].ext, no path info and will always be in an excellent state.

    However a deep scan can, and will, only retrieve the first extent of a fragmented file, as there is no information available to link file fragments together. The other fragments may still exist but there is no way to link them without significant skill or professional help. This may be your problem.
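
    For illustration only, a deep scan boils down to hunting for known signatures in raw clusters; nothing in the raw data says where a fragmented file's later extents live, which is why only the first extent comes back. A sketch, with a few example signatures:

        # Toy signature scan over raw bytes read from a volume. The signature
        # table and cluster size are examples, not Recuva's internals.
        SIGNATURES = {b"\xff\xd8\xff": "jpg", b"\x89PNG\r\n\x1a\n": "png", b"ID3": "mp3"}
        CLUSTER_SIZE = 4096

        def deep_scan(raw):                      # 'raw' = bytes read from the volume
            for offset in range(0, len(raw), CLUSTER_SIZE):
                for magic, ext in SIGNATURES.items():
                    if raw[offset:offset + len(magic)] == magic:
                        yield offset // CLUSTER_SIZE, ext   # cluster number and guessed type

        # for cluster, ext in deep_scan(open(r"\\.\D:", "rb").read(10 * 1024 * 1024)):
        #     print(f"[{cluster:06d}].{ext}")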

  16. What? This isn't really a suggestion, is it? A CC normal deletion just runs the equivalent of a shift/del, so the file is flagged as deleted in the MFT and the data remains until overwritten. A secure erase will overwrite the data before deletion, one pass is quite sufficient.
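
    For what it's worth, a one-pass overwrite is simple enough to sketch (fine for a hard drive; on an SSD, wear-levelling means the old pages may survive anyway). The function name is mine, not CCleaner's:

        # Overwrite a file's contents with one pass of random data, then delete it.
        import os

        def secure_delete(path):
            with open(path, "r+b") as f:
                remaining = os.path.getsize(path)
                while remaining > 0:
                    chunk = min(remaining, 1024 * 1024)
                    f.write(os.urandom(chunk))    # one pass over the existing clusters
                    remaining -= chunk
                f.flush()
                os.fsync(f.fileno())
            os.remove(path)                       # then the ordinary delete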
