Everything posted by Augeas

  1. The point of Recuva is to recover previously deleted files, and the point of NTFS, which I assume we're dealing with here, is to ensure the integrity of live metadata and live user files. The two don't always go together. Recuva reads the MFT and lists all records for files that have the deleted flag set. It doesn't select or exclude any files apart from those options chosen by the user. What you see is what there is in the MFT. If any deleted record has been reused by NTFS then the deleted file's information has gone and can't be shown.

     With an SSD the process is the same but the outcome is different. Although the deleted file list is still shown, very little data is recoverable, due to the way the SSD's controller handles deleted pages. Recovered files will contain zeroes, so using Recuva or any file recovery software on an SSD is likely to be futile.

     However I have noticed a difference between running Recuva when I had an HDD and when I moved to an SSD. On the HDD the list of files included many recognisable user files, and there was a good chance of recovering many of them. With the SSD I see a large list of what appear to be system files, and a very short list (fewer than 20) of user files. This is puzzling, but correlation isn't necessarily causation. The only quick explanation I can see is that a larger number of dynamic file allocations and deletions are taking place that reuse the deleted user file records in the MFT, wiping out user deletions. When I moved to an SSD I also moved from Win 8 to Win 10. I don't believe that the SSD is relevant here, as it knows nothing of either NTFS or the MFT. NTFS version 3.1 has been the same on disk since Windows XP, so is Win 10 (or Firefox, or both of them) now upping dynamic file allocation? Is it Win updates? I don't know.

     The point is that Recuva is doing what it always has done, reading the MFT and listing the deleted file records. That it isn't showing what the user might want it to is frustrating, but just how it is. (Put 'as far as I know' before every sentence.)
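     In outline (a sketch of my own, not Recuva's code, and assuming a raw copy of the $MFT with the standard 1 KB record size), the live/deleted distinction is just a flag bit in each FILE record header:

         import struct

         RECORD_SIZE = 1024        # standard MFT record size
         FLAG_IN_USE = 0x0001      # bit set while the record describes a live file
         FLAG_DIRECTORY = 0x0002   # bit set when the record describes a directory

         def deleted_records(mft_bytes):
             """Yield (record number, is_directory) for records marked as deleted."""
             for offset in range(0, len(mft_bytes) - RECORD_SIZE + 1, RECORD_SIZE):
                 record = mft_bytes[offset:offset + RECORD_SIZE]
                 if record[:4] != b"FILE":                 # unused or damaged record
                     continue
                 flags = struct.unpack_from("<H", record, 0x16)[0]
                 if not flags & FLAG_IN_USE:               # in-use bit clear = deleted
                     yield offset // RECORD_SIZE, bool(flags & FLAG_DIRECTORY)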
  2. Because - probably - a deep scan runs a normal scan first, and a normal scan reads the MFT where the file names are held. The file names are listed, but the clusters which held the file's data should contain zeroes, or more correctly a read request will return zeroes (who knows what the clusters actually contain; they are inaccessible). It is quite usual for recently deleted files not to be found. A file deletion leaves the MFT record marked as available for reuse by any activity, and even opening Recuva writes a few files. I have a sneaky suspicion that NTFS reuses available MFT records held in memory first, so a recently deleted file's record is very exposed to reuse. As you say, running a deep scan on an SSD is pretty much pointless, there's next to nothing to return.
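     A quick way to see this for yourself (my own rough check, nothing to do with Recuva): measure how much of a recovered file is just zero bytes. On a TRIMmed SSD the answer is usually nearly all of it.

         def zero_fraction(path, chunk_size=1 << 20):
             """Return the fraction of the file at 'path' that is zero bytes."""
             total = zeros = 0
             with open(path, "rb") as f:
                 while True:
                     chunk = f.read(chunk_size)
                     if not chunk:
                         break
                     total += len(chunk)
                     zeros += chunk.count(0)
             return zeros / total if total else 1.0

         # e.g. zero_fraction(r"C:\Recovered\photo.jpg") near 1.0 means the
         # clusters had already been trimmed and nothing useful came back.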
  3. I wonder how CC knows - if it does - that these ZZZ files are CC's files and not user files? I could create a file called ZZZ.ZZ and put whatever in there. Is there a file signature?
  4. You must have a different Recuva from me then. However willing the moderators are here, they don't write any of Piriform's code, nor do they deny much either.
  5. I don't know what you mean by step 5, Recuva (free) only has three stages. Recuva does not change any attributes on any file on the source drive, so I've no idea what is happening with your drive.
  6. Nobody can say whether you can, or will, recover any deleted files. All you can do is try. A deep scan runs a normal scan first, so when you chopped the deep scan you would have seen the results from the normal scan. This scans the MFT, which is very fast. Running a deep scan on the recycler is not feasible, as the directory information is held in the MFT, not at file level, and a deep scan looks for clusters containing files, not directories. Files sent to the recycler are renamed to $Ixxx.ext and $Rxxx.ext. The data part is held in the $R file. You could run a normal scan with $R in the filename box, or just look for $R files. A deep scan will not list the files under this or any name, as filenames are held in the MFT. (I have seen files deleted from the recycler return to their original names; I don't really know what rules the recycler follows.)
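     For what it's worth, the original name and deletion time live in the matching $I file, so a recovered $R file can be mapped back to its old name. A rough sketch of my own, assuming the newer Windows 10 'version 2' $I layout (older Windows versions use a fixed-length path field instead):

         import struct
         from datetime import datetime, timedelta, timezone

         def parse_i_file(path):
             """Read original path, size and deletion time from a recycle bin $I file."""
             with open(path, "rb") as f:
                 data = f.read()
             # Assumed layout: 8-byte version, 8-byte original size, 8-byte FILETIME,
             # 4-byte path length in characters, then the path in UTF-16LE.
             version, size, filetime, name_len = struct.unpack_from("<QQQI", data, 0)
             original = data[28:28 + name_len * 2].decode("utf-16-le").rstrip("\x00")
             deleted_at = (datetime(1601, 1, 1, tzinfo=timezone.utc)
                           + timedelta(microseconds=filetime / 10))
             return original, size, deleted_at

         # $IABC123.docx pairs with $RABC123.docx, which holds the actual data.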
  7. FAT32 is a beefed-up version of FAT16. However it needs four bytes to hold a cluster number (in the FAT tables) instead of two, and the directory entry only has a two-byte field for a file's first cluster, so the extra two bytes are held elsewhere in the entry (the address of the start of the file is held in two separate halves). When a file is deleted the additional two bytes of the address, the high end, are wiped by the file system for some reason, and as a result the address of the file is corrupted. This is why you get the overwritten file message: Recuva is looking in the wrong place. It isn't possible to find the right place, except by guessing. A deep scan looks for clusters with a recognisable file signature, so can be useful in cases such as this. However a text file has no file signature, so is not identified by Recuva. It may be possible to find this file with a hex editor, but as it's only a test it isn't worth bothering.
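     To make the 'two separate halves' concrete, here's a sketch of my own that assumes a raw 32-byte FAT32 directory entry:

         import struct

         def first_cluster(entry):
             """Reassemble the starting cluster from a raw 32-byte FAT32 directory entry."""
             deleted = entry[0] == 0xE5                        # 0xE5 marks a deleted entry
             high = struct.unpack_from("<H", entry, 0x14)[0]   # high word, zeroed on delete
             low = struct.unpack_from("<H", entry, 0x1A)[0]    # low word, as in FAT16
             size = struct.unpack_from("<I", entry, 0x1C)[0]   # file size in bytes
             return (high << 16) | low, size, deleted

         # With the high word wiped, any file that started beyond cluster 65,535
         # gets an address pointing at the wrong place, hence the 'overwritten' message.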
  8. Because your drive is an SSD, and WFS is pointless on an SSD. And WFS three passes on an SSD is three times more so.
  9. That's nothing to worry about, it's only 500,000 live and deleted files. On my 120 gb C drive SSD my MFT is 472 mb, and I am only using 36 gb. Win 10 install allocates and deletes a lot, a very large lot, of files. Remember that large files, and large directories, will use multiple MFT records so the total file count is probably under 500k. WFS will not touch the MFT, unless it's an entire disk erase. Windows does not reduce the size of the MFT, nothing does, apart from a reformat. When a disk is nearly full then NTFS will allocate files within the MFT Zone, which is not the same as reducing the size of the MFT.
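     The arithmetic, roughly (each MFT record is 1 KB):

         mft_size = 472 * 1024 * 1024      # a 472 MB MFT
         record_size = 1024                # one MFT record is 1 KB
         print(mft_size // record_size)    # about 483,000 records, live plus deleted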
  10. You could read through http://kcall.co.uk/ntfs/index.html although it is heavy going. The part headed MFT Records, or MFT Extension Records, describes the index clusters (called Folder Entry in Defraggler) for a file; the principle is the same for a folder. Microsoft sometimes calls directories indexes, and we call them folders. It is confusing. The MFT is a file which holds one or more 1k records for every file on the drive, including itself. A folder, or directory, consists of one or more records in the MFT. Large files, or large folders, may have separate index clusters allocated which hold the addresses of the many MFT records used by the file or folder. Yes, I have noted your signature, and agree entirely. We're a long way from Recuva Suggestions, but it's fun, sort of.
  11. Yes it does. Use a hex editor to find a directory record in the MFT. You will see that file names are held in ascending name order in the $Index_Root attribute. Delete one of those files and the remaining file names are moved up to fill the gap, and an EOF marker overrides what was the last file name. Larger directories will have MFT extension records, and one or more index clusters. The principle is the same. This is easy to observe with a small directory, and I have. There is no process I know of that can flag a file within a directory as deleted. Show me an MFT directory record containing a deleted file.

      Yes it would. Recuva doesn't look at the directory records, so it doesn't matter what's in them. Recuva looks for deleted FILE records in the MFT, and lists those. Deleted files are not listed in directory records in the MFT. Directory information for a deleted file is found by following the directory chain-back address in the deleted file's record. (As far as I understand from my experience using Recuva.) If you run Recuva you will see many deleted files listed that have no directory information - the directory records no longer exist, but Recuva finds the files.

      This is a misunderstanding of the structure of the MFT and what has been said here. No MFT records are shuffled anywhere, nor has that claim been made. The shuffling is of file names and associated info within the MFT records for the directory, not of the MFT records themselves. My three other replies stand.

      These are unallocated MFT records. They are not the same as file names held within an MFT directory record. I'm beginning to wonder whether the terms 'MFT' and 'Directory' are being confused?
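      To illustrate the chain-back idea: each file's record carries the MFT number of its parent directory, so a path can be rebuilt by walking parents back to the root (MFT record 5). A toy sketch of my own, assuming the records have already been boiled down to (parent record number, name):

          ROOT = 5   # MFT record 5 is the root directory

          def build_path(records, record_no):
              parts = []
              while record_no != ROOT and record_no in records:
                  parent, name = records[record_no]
                  parts.append(name)
                  record_no = parent
              joined = "\\".join(reversed(parts))
              return joined if record_no == ROOT else "?\\" + joined  # parent gone or reused

          records = {40: (5, "Users"), 41: (40, "Me"), 200: (41, "notes.txt")}
          print(build_path(records, 200))   # Users\Me\notes.txt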
  12. These Folder entries are the index clusters for large folders as described in my previous post, and indeed as described in https://www.ccleaner.com/docs/defraggler/defraggler-settings/defraggler-options-advanced-tab. However the topic is Recuva suggestions, not Defraggler, and Recuva cannot amend the size of a directory, which includes these index clusters.
  13. You're quite welcome to disagree. An NTFS directory is a record in the MFT. A large directory may have several MFT records. An exceptionally large directory may have a separate index cluster allocated. I don't know whether we are using the same definition for directory. By the way I have just edited my big response, please use the later version. More by the way, no, I have never used Defraggler.
  14. I don't know, I've never seen this. All I get is an increasing percentage to 100% when the scan is finished.
  15. Yes, it could be an Option in Options/Actions. The scan would still take as long as before, as all records in the MFT have to be scanned to see whether they are recoverable or not. A directory is a record, or number of records, in the MFT. NTFS will reduce the size of the directory on file deletion, shuffling the live entries up to overwrite the deleted entry. There are no references to deleted files in an NTFS directory. The MFT is heavily protected and any writes to it with, for instance, a hex editor are backed out in seconds. There is no concept of a 'vacant' directory entry. See above. Directory size is maintained by NTFS. See above. See lots of above. I couldn't say what that was doing. Some older software did things that are not legit or possible today. (All the above is 'generally speaking'. There can be exceptions.)
  16. WFS would only increase the storage capacity if it wiped live files - wouldn't it?
  17. Well, this is heresy, but here goes: Remove the multiple pass overwrite option on secure file deletion and Drive Wiper, leaving one zero-byte pass only, saving decades of lost time and tons of CO2 in wasted energy. Surprisingly for a tech and science dominated field, the multipass myth has achieved unquestionable god-like status. It was thoroughly debunked over twenty years ago. Chances of being adopted - close to zero.
  18. The Ignored Files count includes live files, zero-byte files, system files etc., which you would not normally want to recover. Also if there is any file name or path in the aptly named Filename or Path box then this will restrict the results, possibly down to zero. The free and paid versions have the same recovery facilities.
  19. If by 'Exited the list' you mean closed the program, yes, you will have to do a rerun. If you mean that you have, for example, typed a search word in the File/Path box, then clearing that should restore the original list. Scan results are held in memory to avoid overwriting data, so when the program closes the results are gone. If your storage device is an SSD then post back here first.
  20. Google it, and you will know as much as I do.
  21. How should I know? I doubt if it was supported years ago as 64k+ clusters were only introduced in NTFS in late 2017. Why not try Windows defragger?
  22. This seems to indicate that, at the time it was written, not much supported clusters over 64k. It seems a pretty serious change. https://dfir.ru/2019/04/23/ntfs-large-clusters/ And the About Defraggler page states that defragging NTFS clusters greater than 64k isn't supported. https://support.piriform.com/hc/en-us/articles/360048065892-What-Defraggler-can-and-can-t-do
  23. Good to hear that copies come to the rescue. As for disabling TRIM, that advice sounds as if you disable TRIM after discovering that files have been accidentally deleted. This would be ineffective, as TRIM is an asynchronous command issued on file deletion, and disabling it afterwards is shutting the stable door when the nag has well and truly galloped off down the road. For this method to work you'd have to have TRIM disabled permanently, which is possible if not recommended (although I'm not fully convinced of its worth these days).
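      (For completeness, the switch involved is Windows' delete-notification setting: running 'fsutil behavior query DisableDeleteNotify' from an elevated prompt shows whether TRIM notifications are being sent, and 'fsutil behavior set DisableDeleteNotify 1' turns them off. But as above, that only helps if it was set before the files were deleted.)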