
File directory issue



I hope this bug hasn't been reported before.

 

About once a month I defragment my 4 GB SD card (FAT32). Windows assigns this SD card the F: drive letter. It works fine except for one thing: DF noticed that a directory file was fragmented, so I ordered DF to defragment that file directory (multiple times), but to no avail. See attachment.

 

Another thing that stands out: when I look at my C: drive, fragmented file directories are NEVER zero bytes in length. But DF assumes that this file directory is zero bytes in length. Odd. Then it's no surprise that DF can't defragment it.

System setup: http://speccy.piriform.com/results/gcNzIPEjEb0B2khOOBVCHPc

 

A discussion always stimulates the braincells !!!


Guest Keatah

Defragmenting a memory card is pretty much a no-no. It uses the same kind of memory and memory controller as an SSD, and the bigger cards get, the more SSD-like they are. The controller does internal wear leveling, garbage collection, and its own version of TRIM. The whole nine yards. And data will fragment itself as you use the card anyway.

 

SD cards have fewer spare blocks than SSDs and are built with lower-quality NAND. These cards are disposable and become obsolete as speeds and capacities evolve over time, which is another mark against using top-of-the-line materials in them.

 

The best recommended action is to occasionally format the card in the camera (or, in your case, the audio playback device) in which it's being used. If that's not possible, just do it in the computer.

 

The real way to "naturally" prolong the life of current-tech SSDs and memory cards is to always keep some free space on them. This gives the previously mentioned wear-leveling activity room to take place: some space to shuffle things around, you see. The internal controller does all of that on its own. And like SSDs, memory cards won't slow down if and when they get fragmented.
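A toy sketch of that point (purely illustrative Python; real controller firmware is far more elaborate and invisible to the OS): with a pool of free blocks to choose from, a wear leveler can always steer the next write at the least-worn block, which is exactly why free space prolongs the card's life.

```python
class ToyWearLeveler:
    """Toy model of flash wear leveling (illustration only)."""

    def __init__(self, num_free_blocks):
        # Only the free pool matters here: these are the blocks the
        # controller can rotate writes through.
        self.erase_counts = [0] * num_free_blocks

    def write(self):
        # Steer each write at the least-worn free block so wear spreads
        # evenly. More free blocks = more choices = longer card life.
        block = min(range(len(self.erase_counts)),
                    key=self.erase_counts.__getitem__)
        self.erase_counts[block] += 1
        return block

# A card with 20 free blocks absorbing 1000 small writes:
leveler = ToyWearLeveler(num_free_blocks=20)
for _ in range(1000):
    leveler.write()
print(leveler.erase_counts)  # wear spreads evenly: every block erased 50 times
```

Shrink the free pool and the same 1000 writes concentrate on fewer blocks, wearing each one out proportionally faster.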


  • Moderators

To defrag an SD/flash drive: cut everything off of it (paste to the desktop or another temporary location), then re-add it (cut/paste) back to the drive.

 

ADVICE FOR USING CCleaner'S REGISTRY INTEGRITY SECTION

DON'T JUST CLEAN EVERYTHING THAT'S CHECKED OFF.

Do your Registry Cleaning in small bits (at the very least Check-mark by Check-mark)

ALWAYS BACKUP THE ENTRY, YOU NEVER KNOW WHAT YOU'LL BREAK IF YOU DON'T.

Support at https://support.ccleaner.com/s/?language=en_US

Pro users file a PRIORITY SUPPORT via email support@ccleaner.com


Guest Keatah

Please understand that moving data around on a solid-state disk only reorganizes the upper directory structure. There is a second layer that you and your OS do not see. While a file may appear to be contiguous after a lengthy defrag, the second layer will show the file is still scattered all over the place. You need special pro-level manufacturer tools to see that layer.

 

If you have to defrag a solid-state disk, for psychological reasons, then Nergal's method of copy/paste might work. But you must do a full format to get any "chance", which adds more write ops anyhow. You have to wipe the first-layer directory completely and make the card think it's blank, so that it doesn't rely on past history to fine-tune the wear leveling. And then, depending on the controller's logic, the second layer might follow through. Maybe not. Basically you're cheating the system for no advantage whatsoever.

 

Any freeware or commercially sold defragger can only show you the first layer, which is nothing more than the table of contents of a book. Really! The second layer, the real live map of the data handled by the controller, is something you and the OS will never, ever see. Data can be fragmented as it is actually written: as the controller starts committing it to storage, it might come across some blocks that are close to end of life; it will skip those and only come back, 500 files later, to use them, causing fragmentation by default. Or one of those blocks might have been part of the first-level directory that suddenly became available; even though it sits at the end of the "map", there's no reason why the controller won't use it, fragmenting your file again.
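The two-layer idea can be sketched in a few lines (a hypothetical Python model, not any vendor's actual mapping): the OS addresses logical blocks, while the controller's translation table quietly maps them onto whatever physical blocks its wear leveling prefers.

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

# The controller's pool of physical blocks, in whatever order its wear
# leveling prefers (simulated here with a shuffle).
physical_pool = list(range(1000))
random.shuffle(physical_pool)

# "Write" one file across logical blocks 0..9. To the OS and to any
# defragger, this file is perfectly contiguous.
ftl = {logical: physical_pool.pop() for logical in range(10)}

logical_view = sorted(ftl)                        # first layer: what you see
physical_view = [ftl[lb] for lb in logical_view]  # second layer: the chips

print("logical :", logical_view)   # 0..9, contiguous
print("physical:", physical_view)  # scattered across the array
```

The defragger reports a contiguous file (the logical view) while the physical placement is scattered, and no user-level tool can reorder the second layer.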

 

The overwhelming need to organize and straighten up your files in the name of performance and reliability is a grand illusion for SSDs, to be sure. Please let the controller do its job!

 

Defragging works on mechanical disks for a number of reasons: there is latency stemming from rotational movement and from head seek movement. It is to everyone's and everything's advantage to pack the data close in and have it all organized. Some of the more advanced defraggers will place files in the order in which they are loaded, eliminating head movements between groups of files; all the drive has to do is wait for the data to rotate around and under the heads, and the heads only move once the entire track has been read sequentially. A mechanical disk benefits from an advanced defragmenter.
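Some back-of-envelope arithmetic shows why those mechanical latencies matter (the 7200 rpm and 9 ms figures below are assumed, typical-desktop-drive ballpark numbers, not measurements of any particular drive):

```python
# Rough access-time arithmetic for a mechanical disk.
rpm = 7200            # assumed spindle speed
avg_seek_ms = 9.0     # assumed average head-seek time

ms_per_rev = 60_000 / rpm           # one full platter rotation: 8.33 ms
avg_rotational_ms = ms_per_rev / 2  # on average you wait half a turn
access_ms = avg_seek_ms + avg_rotational_ms

print(f"rotation: {ms_per_rev:.2f} ms/rev, "
      f"avg rotational latency: {avg_rotational_ms:.2f} ms")
print(f"random access: ~{access_ms:.2f} ms per fragment")
```

Every extra fragment of a file can cost another seek plus rotational wait, which is the whole payoff of defragmenting a spinning disk; flash has no moving parts, so this entire cost vanishes.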

 

One more thing about the life of solid-state disks. In some of the newest flash chips, the number of write/erase cycles a single cell can endure is a shockingly low 700. That's seven hundred! The controller does all the work of figuring out which blocks and bits will be flipped the least, and relies on heinously complex algorithms to spread the data out in what would appear to be a nightmarish wear-leveling scheme. It works. It is a patchwork answer to a race-to-the-bottom trend.

 

That is the cheap garbage you buy at Animal Direct and Big Box. Real enterprise-class gear has NAND (it could be NOR or another technology; I use the term generically here) that can handle 100,000 erase cycles minimum, oftentimes testing to several million cycles.

 

I would argue that having more free space on the card/disk is the best way to optimize performance. Allow the controller room to work, letting it garbage-collect, TRIM, and consolidate free blocks, setting them aside for good write performance.

 

Having said all that, CCleaner on an SSD is a good thing. A great thing! It will do more for your SSD's performance and lifespan than any other utility. It eliminates unneeded files, giving the controller room to work its magic.

 

 

--EDIT ADDED--

I tended to come at this from the SSD viewpoint, but as SD card capacities continue to grow, they are adopting more and more SSD characteristics in their logic. They have to.

 

And one more thing, if it wasn't clear: I can assure you that the file map you see in your defragmenter program is not what is actually in the chips. Never, ever. This is not some trick, but a necessity for longevity and efficiency.

 

I bet you'd be horrified to learn that a 250 MB .PST (Outlook email) file is normally fragmented into thousands of parts when placed onto a solid-state disk. I once pieced together one that was in over 7,500 fragments, while the top-layer map the OS and your defragger see indicated it was in 11 parts. Down in the memory array it's all over the place!

 

There is no way to get a contiguous file unless you can instruct the controller to use each block sequentially regardless of wear count. Maybe on a new card that hasn't seen much activity, initially, yes. But you don't know the wear-level thresholds. And neither does your OS. Only the controller does.


  • Moderators

wow, tl;dr much ;)

 

However, yes, I left out a step (by the way: not copy, CUT).

For best results, the format should take place between the cut and the paste.
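For what it's worth, the cut → format → paste sequence can be sketched like this (Python, with temporary folders standing in for the card and the staging area; on a real card the format step is done through Windows or the device itself, and all file names here are made up):

```python
import pathlib
import shutil
import tempfile

# Stand-ins for the SD card and a temporary staging folder on the desktop.
card = pathlib.Path(tempfile.mkdtemp(prefix="card_"))
staging = pathlib.Path(tempfile.mkdtemp(prefix="staging_"))

# Pretend the card already holds some data.
(card / "music").mkdir()
(card / "music" / "track01.mp3").write_bytes(b"\x00" * 1024)

# 1. Cut everything off the card to the temporary location.
for item in list(card.iterdir()):
    shutil.move(str(item), str(staging / item.name))

# 2. Format the card (simulated: the card is now empty; on real hardware
#    you would format F: in Windows or in the device at this point).
assert not any(card.iterdir())

# 3. Paste everything back; the fresh FAT lays the files out cleanly.
for item in list(staging.iterdir()):
    shutil.move(str(item), str(card / item.name))
```

Whether this buys anything on flash media is exactly what Keatah disputes above: the controller's second layer places the blocks as it pleases regardless.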

Edited by Nergal

 



Guest Keatah

Ehh... Cut, copy, paste, duplicate, reproduce, remove: to the second layer it's all the same. Each block carries its write history with it. And files aren't going to be defragged much, if at all.


@Andavari: Right. And that FAT32 issue is precisely why Piriform needs to look into this problem.

Are you saying that Keatah's arguments against defragging are ONLY applicable to NTFS but NOT to FAT32?


@Alan_B: Nope. The point that needs to get across is that this DF error occurs on a FAT32 drive/card. I agree that defragging an SD card or an SSD (sharply) reduces the lifespan of such a device, but I didn't touch on that discussion at all in my first post. That's why I am so baffled by Keatah's "rant".



Did you take notice of this point in Keatah's well-reasoned dissertation?

Defragging works on mechanical disks for a number of reasons: there is latency stemming from rotational movement and from head seek movement. It is to everyone's and everything's advantage to pack the data close in and have it all organized. Some of the more advanced defraggers will place files in the order in which they are loaded, eliminating head movements between groups of files; all the drive has to do is wait for the data to rotate around and under the heads, and the heads only move once the entire track has been read sequentially. A mechanical disk benefits from an advanced defragmenter.

How many r.p.m. does your card rotate at? :)


DF also fails to display the proper size of a directory file. I came across such a file, and I know that directory isn't empty: it contains 7 files, each at least 14 MB in size. But DF says that the directory file is zero bytes in length. See attachment.



We seem to have different opinions on what Folder Entries contain.

You have a folder that contains at least 98 MB of files.

 

The Folder Entry holds NONE of that 98 MB.

All it holds is data about that 98 MB:

specifically, it lists the names, sizes, and date/time stamps of the files and subfolders.

 

When you move a 10 GB file from one folder to another, you do not perform 10 GB of writing plus 10 GB of deleting;

you just rewrite a few kB within the directory structure.
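That point can be made concrete: a FAT32 short directory entry is just 32 bytes of metadata. Here is a sketch in Python (simplified: long-file-name entries and real timestamps are omitted, and the file name and values are made up) that packs one entry and reads back its size field:

```python
import struct

# A FAT32 short directory entry: 32 bytes of pure metadata -- an 8.3 name,
# attributes, timestamps, the starting cluster, and the file size. None of
# the file's actual data lives here.
entry = struct.pack(
    "<11sBBBHHHHHHHI",
    b"TRACK01 MP3",   # 8-char name + 3-char extension (hypothetical file)
    0x20, 0, 0,       # attributes (archive), NT reserved, creation tenths
    0, 0, 0,          # creation time/date, last-access date
    0x0012,           # high word of the first cluster number
    0, 0,             # last-write time/date
    0x3456,           # low word of the first cluster number
    14_680_064,       # file size in bytes (a 14 MB file)
)

size = struct.unpack_from("<I", entry, 28)[0]
print(f"entry is {len(entry)} bytes; the file it describes is {size:,} bytes")
# Moving the file rewrites 32-byte entries like this, not the 14 MB of data.
```

So a "zero-byte" directory, as DF reports it, is suspicious: the directory's own cluster chain holds these entries and is never truly empty for a folder with files in it.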


True, but does defragging have any relevance to a Folder Entry?

 

I noticed years ago that when Windows booted up with the external USB drive connected, it took longer to reach the stage at which I could log in,

and that after logging in I could do a directory listing of the external drive without any USB delays or external-drive LED flashing,

which tells me that Folder Entries are only read once on start-up, and thereafter the information is held in, and retrieved from, RAM.


Guest Keatah

There's a lot more going on than reading directory contents: there's volume mounting, controller init, and other stuff happening. Directory reading is only a small part of that delay.

 

Defragging and compacting a directory only shows performance gains when you're working with hundreds of thousands of entries.

 

The contents of the directories are held in RAM. But many operations invoke a complete re-read and lots of partial re-writes. More often than you'd guess, I'd guess.

