Ah, but the mistake in 'The image writes to the first 25 GB' is assuming the LBAs will be mapped to the first 25 GB of physical flash, which isn't true.
When the SSD is new the LBAs will map to the physical pages more or less in order. As data is written it starts at the first page and continues sequentially. As more data is written, and existing data is updated, new pages are used until every page has been used once. This could happen in a few days, or even hours, depending on PC use.
Further writing will reuse pages flagged as invalid, i.e. pages whose data has been updated and written elsewhere, or deleted and TRIMmed. Garbage collection erases these (at block granularity) ready for new writes. Also, after extended use, wear levelling will swap static live data into heavily-used cells, freeing up the low-use cells and reducing stress on the heavily-used ones. So live data could be anywhere, physically.
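To make that concrete, here's a toy sketch (names and structure invented for illustration, nothing like a real controller): writes always consume an empty page, an update flags the old page as invalid, and garbage collection reclaims invalid pages for reuse.

```python
# Toy flash translation layer (FTL): a hypothetical, hugely simplified
# model of LBA -> physical page mapping, invalid-page flagging, and
# garbage collection.

class ToyFTL:
    def __init__(self, total_pages):
        self.mapping = {}                     # LBA -> physical page
        self.invalid = set()                  # pages holding superseded data
        self.free = list(range(total_pages))  # empty pages, handed out in order

    def write(self, lba):
        if lba in self.mapping:
            # Updated data is written elsewhere; the old page is now invalid.
            self.invalid.add(self.mapping[lba])
        self.mapping[lba] = self.free.pop(0)  # writes always use an empty page

    def garbage_collect(self):
        # Erase invalid pages so they can be written again.
        self.free.extend(sorted(self.invalid))
        self.invalid.clear()

ftl = ToyFTL(total_pages=8)
for lba in range(4):
    ftl.write(lba)        # brand-new drive: LBA n lands on page n
ftl.write(2)              # update LBA 2: new page used, old page 2 invalid
print(ftl.mapping[2])     # -> 4 (not 2)
print(ftl.invalid)        # -> {2}
ftl.garbage_collect()
print(ftl.invalid)        # -> set()
```

The point of the sketch: after even one round of updates, the LBA number tells you nothing about the physical page.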
So you restore an image, writing to, say, LBAs 1-1m, which held the original 25 GB. The SSD has already had some use, possibly a great deal, so the LBA-to-physical-page mapping is, well, anyone's guess. The data will be scattered all over the disk. It will not 'write to the first 25 GB of LBA memory space, but the top 30 GB are left as they were'.
If any part of the original 25 GB of data was still live before the restore, the LBAs would not have changed, but the physical pages would have, for the reasons given above. The restore reuses the same LBAs, but the physical pages will be different, as writes go to empty pages.
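A hypothetical continuation of the same toy model (again, invented for illustration): the restore rewrites the same LBAs, but every write lands on a fresh physical page, so the restored data ends up physically elsewhere.

```python
# Hypothetical sketch: restoring an image reuses the same LBAs, but the
# FTL sends every write to an empty page, so the physical placement
# after the restore differs from the original.

mapping, invalid = {}, set()
free = iter(range(1000))              # empty pages handed out in order

def write(lba):
    if lba in mapping:
        invalid.add(mapping[lba])     # old physical copy becomes invalid
    mapping[lba] = next(free)

for lba in range(10):                 # original image: LBAs 0-9
    write(lba)
original_pages = dict(mapping)

for lba in [3, 7, 42]:                # everyday use updates some data
    write(lba)

for lba in range(10):                 # restore the image to the same LBA range
    write(lba)

# Same LBAs, different physical pages:
print(all(mapping[l] != original_pages[l] for l in range(10)))  # -> True
```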
Thus any live data on pages mapped to the restored range of LBAs will be flagged as invalid and the pages emptied for reuse.
Any pages holding live data written since the image was taken, at LBAs outside the restored range, will not be flagged as invalid, as the SSD is unaware of what the file system is doing.
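One last run of the toy model (hypothetical names, same simplifications) to show that last point: a restore only rewrites its own LBA range, so data written at other LBAs since the image was taken keeps its physical pages and stays live.

```python
# Hypothetical sketch: the restore invalidates only pages mapped to the
# LBAs it rewrites. Newer data at other LBAs is untouched; the SSD has
# no idea what the file system considers current.

mapping, invalid = {}, set()
free = iter(range(1000))

def write(lba):
    if lba in mapping:
        invalid.add(mapping[lba])
    mapping[lba] = next(free)

for lba in range(10):        # image range: LBAs 0-9
    write(lba)
write(42)                    # a new file written after the image was taken
page_for_42 = mapping[42]

for lba in range(10):        # restore: rewrites LBAs 0-9 only
    write(lba)

print(mapping[42] == page_for_42)   # -> True: still live, untouched
print(page_for_42 in invalid)       # -> False: never flagged invalid
```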
This is of course a huge simplification of the complex workings of an SSD controller.