I haven't used the "move large files to end" option, but...
In the interim, when that happens, stop it, add an exclusion for the file it can't get past, then restart it.
I would think a "move to end" option without this would be of limited utility.
Pssst. Why are you using NTFS? Go to ext3/4 and never think about it again.
I'd think the simplest solution to this would be similar to the one used in thermostats to keep them from constantly cycling on and off: make your target significantly "safer" than your threshold. For example, when the trailing free space is less than, say, 5% of the file's size (or maybe just when it reaches zero and the file starts fragmenting), trigger a move and/or defragmentation that leaves it with, e.g., 15% to spare. It won't be moved again until it's eaten up the difference.
You could apply this not only to the last file on the disk, but to each one ahead of it. So it'd target a certain amount of free space after each file, regardless of whether the end of that space is marked by another big file or the end of the disk.
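The thermostat-style hysteresis described above can be sketched in a few lines. This is just an illustration of the trigger/target split; the 5% and 15% figures and the function names are placeholders, not from any real defragmenter:

```python
# Hysteresis sketch for the move/defragment trigger described above.
# TRIGGER_PCT and TARGET_PCT are illustrative values, not from any tool.

TRIGGER_PCT = 0.05   # move when trailing free space drops below 5% of file size
TARGET_PCT = 0.15    # after a move, leave 15% of the file's size free behind it

def needs_move(file_size: int, trailing_free: int) -> bool:
    """Fire only when the gap behind the file shrinks below the trigger."""
    return trailing_free < file_size * TRIGGER_PCT

def gap_after_move(file_size: int) -> int:
    """How much free space to leave behind the file when relocating it."""
    return int(file_size * TARGET_PCT)

# A 10 GB file with only 100 MB of trailing space is below the 5% trigger
# (512 MB), so it gets moved and ends up with a 1.5 GB cushion; it won't be
# touched again until that cushion shrinks back under 512 MB.
```

Because the target (15%) sits well above the trigger (5%), each file has to consume the whole cushion before it's moved again, which is exactly what keeps the defragmenter from thrashing.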
I have a really cool (I think) idea growing off this, a particular AI-driven proactive defragmentation, but... I can't just offer it up for free!