Please understand that moving data around on a solid state disk means you're only reorganizing the upper directory structure. There is a second layer that you and your OS do not see. While a file may appear to be contiguous after a lengthy defrag, the second layer will show the file is still scattered all over the place. You need special pro-level manufacturer tools to see that layer.
If you have to defrag a solid state disk for psychological, OCD reasons, then Nergal's method of copy/paste might work. But you must do a full format to get any "chance," which adds more write ops anyhow. You have to wipe the first-layer directory completely. You have to make the card think it's blank so that it doesn't rely on past history to fine-tune the wear leveling. And then, depending on the controller logic implementation, the second layer might follow through. Maybe not. Basically you're cheating the system for no advantage whatsoever.
Any freeware or commercially sold defragger can only show you the first layer, which is nothing more than the table-of-contents page of a book. Really! The second layer, the real live map of the data handled by the controller, is something you and the OS will never ever EVER see. Data can be fragmented as it is actually written. As the controller starts committing it to storage, it might come across some blocks that are close to end of life; it will skip those and only come back, 500 files later, to use them. Thus causing fragmentation by default. Or one of those blocks might have been part of the first-level directory that suddenly became available. If the controller is at the end of its "map," there's no reason it won't use it. Thus fragmenting your file again.
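To make the two-layer idea concrete, here's a toy sketch in Python of a hypothetical flash translation layer. Everything here (the class, the random page-picking rule, the page counts) is invented purely for illustration; real controllers use far more complex, proprietary logic:

```python
import random

class ToyFTL:
    """Hypothetical flash translation layer: maps logical pages (what the
    OS sees) to physical pages (where data actually lands on the chip)."""

    def __init__(self, physical_pages=1024, seed=42):
        rng = random.Random(seed)
        self.free = list(range(physical_pages))
        rng.shuffle(self.free)           # stand-in for the controller's own placement rules
        self.map = {}                    # logical page -> physical page

    def write(self, logical_page):
        # The OS addresses the logical page; the controller commits the
        # data to whatever physical page suits its wear-leveling goals.
        self.map[logical_page] = self.free.pop()

ftl = ToyFTL()
for lp in range(8):                      # the OS writes a "contiguous" 8-page file
    ftl.write(lp)

print("logical :", list(range(8)))
print("physical:", [ftl.map[lp] for lp in range(8)])
```

The logical view is perfectly sequential, so a defragger would call the file contiguous, while the physical placement is scattered across the array.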
The overwhelming need to organize and straighten up your files in the name of performance and reliability is a grand illusion for SSDs, to be sure. Please let the controller do its job!
Defragging works on mechanical disks for a number of reasons: there is latency stemming from rotational movement and head seek movement. It is to everyone's and everything's advantage to pack the data close in and have it all organized. Some of the more advanced defraggers will even place files in the order in which they are loaded, eliminating head movements for groups of files. All the drive has to do is wait for the data to rotate around and under the heads. The heads only move once the entire track has been sequentially read. A mechanical disk benefits from an advanced defragmenter.
One more thing about the life of solid state disks. In some of the newest flash chips, the number of write/erase cycles a single specific cell can take can be a shockingly low 700. That's seven hundred! The controller does all the work of figuring out which blocks and bits are going to be flipped the least, and relies on heinously complex algorithms to spread the data out in what would appear to be a nightmarish wear-leveling scheme. It works. But it is a patchwork fix for a race-to-the-bottom trend.
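A crude sketch of the block-picking side of wear leveling, assuming the controller simply hands out the least-worn usable block. The function, the erase-count numbers, and the selection rule are all made up for illustration; no real controller is this simple:

```python
ERASE_LIMIT = 700                     # the per-cell cycle limit cited above

def pick_block(free_blocks):
    """free_blocks: list of (erase_count, block_id) tuples.
    Skip blocks near end of life; return the least-worn survivor."""
    usable = [(count, blk) for count, blk in free_blocks if count < ERASE_LIMIT]
    if not usable:
        raise RuntimeError("no usable blocks left -- drive is worn out")
    return min(usable)                # tuple comparison: lowest erase count wins

# A freshly formatted block (3 erases) beats nearly-dead ones (650, 699):
print(pick_block([(650, 0), (12, 1), (699, 2), (3, 3)]))  # -> (3, 3)
```

Notice the side effect the post describes: a block with a low erase count gets picked even if it sits nowhere near the rest of your file, so physical fragmentation is the normal outcome, not a defect.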
That is the cheap garbage you buy at Animal Direct and Big Box. Real enterprise-class gear has NAND (could be NOR or another technology; I use the term generically here) memory that can handle 100,000 erase cycles minimum, oftentimes testing to several million cycles.
I would argue that having more free space on the card/disk is the best way to optimize performance. Give the controller room to work, letting it garbage-collect, trim, and consolidate or separate out free blocks and set them aside for good write performance.
Having said all that, CCleaner on an SSD is a good thing. A great thing! It will do more for your SSD performance/lifespan than any other utility. It eliminates unneeded files, giving the controller room to work its magic.
--EDIT ADDED--
While I have come at this mostly from the SSD viewpoint, SD cards are adopting more and more SSD characteristics in their controller logic as their capacities continue to grow. They have to.
And one more thing, if it wasn't clear: I can assure you that the file map you see in your defragmenter program is not what is actually in the chips. Never ever ever. This is not some trick, but a necessity for longevity and efficiency.
I bet you'd be horrified to learn that a 250MB .PST (Outlook email) file is normally fragmented into thousands of parts when placed onto a solid state disk. I once pieced together one that was in over 7,500 fragments, while the top-layer map the OS and your defragger see indicated it was in 11 parts. Down in the memory array it's all over the place!
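For the curious, a "fragment" at the physical layer just means a run of consecutive pages; every break in continuity starts a new one. A minimal counter (page numbers below are made up purely for illustration):

```python
def count_fragments(physical_pages):
    """Count runs of physically consecutive pages in a file's layout."""
    if not physical_pages:
        return 0
    frags = 1
    for prev, cur in zip(physical_pages, physical_pages[1:]):
        if cur != prev + 1:           # a gap: the file jumps elsewhere on the chip
            frags += 1
    return frags

# Three runs: 10-12, 50-51, and a lone page 7.
print(count_fragments([10, 11, 12, 50, 51, 7]))  # -> 3
```

Run the same counter over the logical map and the physical map of one file and you get two very different numbers, which is exactly the 11-versus-7,500 gap described above.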
There is no way to get a contiguous file unless you can instruct the controller to use each block sequentially regardless of wear count. Maybe on a new card that hasn't seen much activity, initially, yeah. But you don't know the wear-level thresholds. Neither does your OS. Only the controller does.