Everything posted by Rob Defraggle

  1. A drawback of strict ordering by age of writing is that it will cause greater fragmentation within the compacted area, or leave holes in the unconsolidated free area, because files which fit neatly around immoveable files would no longer be packed around those blocks. Perhaps a better proposal would be "age bands": < 1 month, 1-4 months, and the "fossils". Assuming young files see the most churn (rewriting, replacement and deletion), they'd need to be moved less, as they'd tend to be allocated in about the right place already. There'd also be more chance of reducing shifts, as (hopefully) each band would contain many candidate file sizes, allowing one or two files to be moved to fill a hole rather than shifting the whole lot. Though I'm not really sure this intelligence is actually implemented, I do think most of the shifting is caused by compaction of large files without enough small files to fill the gaps.
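As a very rough sketch of the "age band" idea, classifying files by how recently they were written; the 30/120 day boundaries and the example folder path are my own assumptions, not anything Defraggler actually does:

```python
import os
import time
from collections import defaultdict

def age_band(path, now=None):
    """Return 'young' (<1 month), 'middle' (1-4 months) or 'fossil' (older)."""
    now = now or time.time()
    age_days = (now - os.path.getmtime(path)) / 86400
    if age_days < 30:
        return "young"
    if age_days < 120:
        return "middle"
    return "fossil"

# Example: bucket everything under a folder into the three bands.
bands = defaultdict(list)
for root, _dirs, files in os.walk(r"C:\Example"):
    for name in files:
        path = os.path.join(root, name)
        try:
            bands[age_band(path)].append(path)
        except OSError:
            pass  # skip files we cannot stat
```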
  2. I've just managed to get rid of a "light blue box" (caused by a folder in a FAT filesystem sitting in an inconvenient spot at the end of the disk, where large files are moved to) by renaming the folder, creating a replacement, moving the files in, then deleting the original and emptying the Recycle Bin. The highlight view has an "open containing folder" option, so it was quite quick to do. Because small files come in many varying sizes, the defrag can pack them around immoveables at the beginning of the disk without showing fragmentation in most cases. That's why the light blue boxes show up in the free area and at the end of the disk, but not in the densely compacted parts containing small files. Where you have many of these, I suspect it is likely the result of having had a full (or highly fragmented) disk at some point in the past. For stubborn system files you may even need to back up the files, re-format the partition and copy them back at the file level, which is a lot of trouble for small benefit.
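For what it's worth, the folder-swap workaround could be scripted; a minimal sketch assuming a hypothetical folder D:\Stuck (the Recycle Bin step still has to be done by hand):

```python
import os
import shutil

stuck = r"D:\Stuck"       # hypothetical folder pinned in an awkward spot
temp = r"D:\Stuck_old"

os.rename(stuck, temp)    # 1. rename the original folder out of the way
os.mkdir(stuck)           # 2. create a replacement with the original name
for name in os.listdir(temp):
    # 3. move the contents into the new folder
    shutil.move(os.path.join(temp, name), os.path.join(stuck, name))
os.rmdir(temp)            # 4. remove the now-empty original folder
# 5. empty the Recycle Bin by hand if anything was deleted via Explorer
```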
  3. Buffering (requesting disk blocks before they're required) avoids such pauses when changing fragments; if you skip frames, the cause is some other latency-inducing factor. A rough calculation shows a 2 hour, 11 GB file requires a disk transfer rate of only about 3 MB/s; even my 10 year old UDMA33 drives handle that with ease. To find out what the light blue boxes contain, choose "Analyse", then click on a light blue box in the coloured map and look in the "Highlighted" tab, which tells you the file names. Doing that I see, for instance, that one of my light blue boxes contains $MFT & $Bitmap, which are NTFS internals used for bookkeeping. If you list the files which are not compacted (or defragged), it'll be clearer what's happened.
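The arithmetic behind that transfer-rate figure, for anyone who wants to check it (sizes treated as decimal gigabytes; the exact answer comes out even lower than 3 MB/s, which only strengthens the point):

```python
size_bytes = 11 * 10**9            # 11 GB video file from the example
duration_s = 2 * 60 * 60           # 2 hour running time
rate_mb_s = size_bytes / duration_s / 10**6
print(f"Required sustained rate: {rate_mb_s:.1f} MB/s")   # about 1.5 MB/s
```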
  4. Good point, a reddish-blue for a partly defragmented area seemed logical, and perhaps it would be distinguishable from the MFT colour. But not all of it: I have had similar with large AVI files, patches distributed as exes, and ISOs. The SVI info would generally be excluded by a tuneable chunk size, as it appears some care is taken over its block allocation. You hit this particularly if you have any non-NTFS FAT32 filesystems in use, as the folders aren't relocatable by the defrag it seems, and tend to annoyingly end up inflating the fragmentation figure by splitting large files. Only if the default is set to a sensible value like the 50 MB used by Quick Defrag. Not compacting these large chunks would probably help too, so the folders and small files can occupy a denser area, hopefully making some seeks redundant (more folders & files read in a short space of time on the same cylinder). Yes, which is the reason I suggested a tuneable; I was actually expecting more negative reaction from the "no fragmentation is acceptable" camp. It does not take most people too long, though, to realise that chasing 0% fragmentation and compacting causes a lot of I/O work for no perceptible performance gain. I agree; I'd suggest a change of buttons: call "Quick Defrag" something like "File Defrag" and default to it, and change "Defrag" to "Thorough Defrag" or "Compacting Defrag". As non-Advanced users probably benefit from the scheduled Windows Defrag utility anyway, most of the performance gain would be realised via the quick option.
  5. Did a scheduled defrag run after you defragged with the large file option enabled? I had large files compacted "forward" within the partition, and then moved back when defragging interactively.
  6. First, thanks for entering into the discussion; hopefully answering your points will make it clearer why I think a tunable chunk size will in practice help end users, without preventing an expert from reviewing every single fragment.

In fact, considering how to improve "computer literacy" and be more useful to the expert, more information could be conveyed by using red where small fragments are present, and a purplish blue where files are not contiguous but are in large chunks, to draw the attention of those who want even huge files to be contiguous despite the heavy overhead over time of maintaining such a layout (compacting). The colour coding would be very similar to the allocation density idea, with heavily used blocks in darker blue and sparse blocks shown in light blue.

The frequent recommendation to remove System Restore points, and other tips to get to 0% (or near) fragmentation, can't really be the best use of user time, hard disk bandwidth or electrical power. System Restore is a useful feature when you do discover you need it, even if that is relatively rare. PCs are meant to aid productivity; there's all sorts of things chosen for the end user by the hardware and software designers.

A tunable chunk size allows the expert to turn off filtering when desired and drill down to every fragment, but avoids drawing attention on every run to insignificant levels of fragmentation. Those who wish to have all fragmentation reported can choose that when they want it. At present I cannot turn off this misleading reporting, but have to check the file lists and simply ignore the block colouring and global defrag %, because the info presented is not really useful; only the file size and fragment count are.

The current situation, rather than educating the user, effectively misleads: the slightest level of fragmentation results in huge areas of red blocks and an alarmingly high "fragmentation %", which suggests it's a problem, rather than something adding a few milliseconds to access of a large file that may take the disk a minute to read at maximum speed. The Windows Defrag utility avoids alarming the user, but is inflexible: you can't selectively defrag a small number of heavily fragmented small files (in a cache, say), which can be expected to be re-read and would benefit from being contiguous, without defragging the whole filesystem.

Effectively the current reporting underplays the advantages of Defraggler's file-based defragmentation feature, by steering new users towards performing a full defrag (by saying the file system is 30% fragmented). In actual fact, they most likely don't need the full file system defrag at all, but can just use Quick Defrag and manually select a few extra files (because Win 7 & Vista have a regular scheduled defrag by default).

Whilst I do understand your point, I think there's no need in practice to calculate exactly. As you say, one doesn't notice the improved layout with modern disks; what you would notice is reduced defrag time. Rather than dynamically benchmark, just recognise that when reading large amounts of data, the seek time becomes insignificant. The contiguous pieces cannot be read back in one transfer anyway, and whilst reading large files back there's likely to be lots of other seeking in service of other tasks. The fragments add milliseconds, which the OS can anticipate anyway via read-ahead for sequential access, to files that take many seconds to read.

On the forum, there's a good sample of confused posters, who are probably a minority of those who become alarmed. Rather than being steered towards the unique strength of Defraggler (file defragmentation), they're becoming concerned about increased fragmentation and anomalous percentages being reported.
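To make the "tunable chunk size" reporting concrete, here's a minimal sketch of the kind of calculation I have in mind; the 50 MB threshold echoes the Quick Defrag default, and the example file data is made up, so this is not Defraggler's actual code:

```python
CHUNK_MB = 50   # fragments at least this large are treated as harmless chunks

def significant_fragments(fragment_sizes_mb):
    """Keep only the fragments small enough to be worth worrying about."""
    return [s for s in fragment_sizes_mb if s < CHUNK_MB]

def fragmentation_percent(files):
    """files: mapping of file name -> list of fragment sizes in MB."""
    flagged_mb = sum(sum(significant_fragments(f)) for f in files.values())
    total_mb = sum(sum(f) for f in files.values())
    return 100.0 * flagged_mb / total_mb if total_mb else 0.0

# An 11,000 MB video in four large pieces contributes nothing, while a 12 MB
# cache file shredded into 30 tiny pieces is counted in full.
example = {"video.avi": [2750.0] * 4, "cache.dat": [0.4] * 30}
print(f"{fragmentation_percent(example):.2f}% significant fragmentation")
```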
  7. Doesn't Quick Defrag, or selecting the fragmented files via the checkbox in the file list after analysis and then defragging them, do this already? If you want Quick Defrag to tackle even the largest fragments, with little to be gained by defragging them, then increase the option from the default 50 MB to an even larger value.
  8. A recent example forum thread illustrating my point: Defraggler is Fragmenting
  9. What I can see is very similar to what I see on my system, and do not worry about. I was partly guessing when I answered, though it seemed likely, so it's interesting to have your screenshot seem to confirm it, though the second one may not be sorted by most fragmented files. Defragging System Restore points appears pointless to me: most likely you won't use them and they'll just be deleted, and if you do use them, you read them once. The ones I've seen are in large chunks, so proportionately there's minuscule performance to be gained, whilst the defrag of those files would be expensive.
  10. Defraggler counts any file in more than 1 piece as fragmented; this suggestion, Less Strict Fragment Defintion, Display & Reporting, is aimed at improving Defraggler for your usage. System Restore points are one particular cause of inflated fragmentation figures. Basically: run a Quick Defrag, then look at the remaining fragmented files in the file list, double-click on the fragments column to sort the fragmented files to the top, and if they're large and in a relatively small number of pieces, it's nothing to worry about.
  11. I suppose it might simply be able to show a count of the total number of files to process, subtracting those known to be compacted at the beginning and end of the disk (when the move-large-files option is used). Then every time progress is made, and perhaps when large blocks of files are skipped, the count can be lowered accordingly. It seems to me Defraggler usually tends to accelerate as it goes, though I have seen it get "stuck" at times on certain "Defragmenting %" values for a reason that's not clear to me, but it appeared to involve multiple passes and re-defragging certain areas. So a problem might well be inaccurate prediction of progress.
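A minimal sketch of that counting idea, with made-up figures (not how Defraggler actually reports progress):

```python
class ProgressEstimate:
    """Count-down progress: total files minus those already known compacted."""
    def __init__(self, total_files, already_compacted):
        self.remaining = total_files - already_compacted
        self.total = self.remaining

    def done(self, count=1):
        """Call when 'count' files have been processed or skipped in a block."""
        self.remaining = max(0, self.remaining - count)

    def percent(self):
        if self.total == 0:
            return 100.0
        return 100.0 * (self.total - self.remaining) / self.total

progress = ProgressEstimate(total_files=120_000, already_compacted=45_000)
progress.done(30_000)                          # e.g. a block of files skipped
print(f"{progress.percent():.0f}% complete")   # 40% of the remaining work
```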
  12. If you think CP/M is still relevant... The page clearly mentions NTFS and other modern filesystems; perhaps you should give that more weight. I would.
  13. Won't that be hard to predict up front? It's only after the first files are defragged and shifted up that you know the size of the holes available, and which files stored later on the disk may fit them well (rather than shifting every single file by a few blocks). Isn't Quick Defrag & Defrag fairly clear already about what to expect in speed terms?
  14. That is what I think the OP was suggesting: that "stable" files be prioritised to the beginning of the disk to minimise the work done compacting. The problem, of course, is that these "stable", frequently used files are not so stable if you're installing the OS & application updates like you're supposed to. Also, as you have pointed out, having only a few small holes leads to files being allocated in many small, widely scattered pieces when initially stored. I also like allowing small files to be densely stored by separating out large files to the end of the partition (as large files are mostly performance non-critical sequential access, less frequently used, and use a proportionately large number of disk blocks). Why would you want to lose the locality of small files stored in a folder, putting a newly modified one at the end of the disk just because it was recently updated? Personally I think that if file system optimisation is a concern, then using disk partitions is more effective than heuristics in a defrag tool, and it much reduces defrag times as well, minimising the shifting around of data files.
  15. Except that would have been grammatically clumsy, and also technically inaccurate in general (there's more than one kind of fragmentation besides the one you're considering). If you had put a search into Wikipedia for Extent (file systems), you would have found it explained. So I don't think your criticism is very fair; I said precisely what I meant, and it's the way to avoid fragmentation.
  16. OK, but with Windows 7 (and Vista), isn't the system's scheduled defrag already giving that (relatively high performance)? I'm using Defraggler mainly for the Quick Defrag, on small cache files which otherwise tend to be stored in a horribly large number of pieces and can be expected to be re-accessed, so reducing the seek time makes sense. The new boot time defrag feature is another reason, as is the free space defrag feature in Advanced. Someone else may make "having a fragmentation-free file system" their goal, and then the recipe is going to require advice about restore points, making contiguous space for page files, multiple passes and other tricks to get to 0% fragmentation. I don't think performance numbers would support such an extreme focus; after all, the PC is still going to be opening and reading thousands of individual small files, each requiring seeks to be located on disk, no matter how well laid out the large files are.
  17. What I meant by "all in the same extent" was that the whole file is stored contiguously in one piece; "1 fragment" seemed misleading as there'd be no fragmentation, so using "extent" seemed clearer. The term "extent" is generally used for filesystems where ranges of blocks are stored, rather than a list of every block used to store a file, so I'm surprised you had trouble finding a simpler explanation. BTW, Alan B's suggestion of using the boot time defrag is good, but it might not work if you don't have a large contiguous area in which to create the page file, and it may need to be redone when an automatically managed pagefile grows dynamically.
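A small illustration of the block-list versus extent distinction, with made-up block numbers:

```python
# Block-list view of a 6-block file scattered across the disk.
block_list = [1000, 1001, 1002, 5000, 5001, 5002]

def to_extents(blocks):
    """Collapse a sorted block list into (start, length) extents."""
    extents = []
    for b in blocks:
        if extents and b == extents[-1][0] + extents[-1][1]:
            start, length = extents[-1]
            extents[-1] = (start, length + 1)       # extend the current run
        else:
            extents.append((b, 1))                  # start a new extent
    return extents

print(to_extents(block_list))          # [(1000, 3), (5000, 3)]: two fragments
print(to_extents([2000, 2001, 2002]))  # [(2000, 3)]: whole file in one extent
```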
  18. I agree with the OP that there are diminishing returns on coalescing large fragments into one extent; see Less Strict Fragment Defintion, Display & Reporting. I don't think it makes sense, though, to say that 5 GB files can have 10 fragments and, say, 500 MB files 5, nor is there much value in specifying this exactly. What matters is that the fragments are in good-sized chunks, so that the additional seek to access each fragment costs a small amount of time compared to the time taken to read that chunk of data. So having some "chunk" size defined, above which a fragment is left alone, ought to suffice. It appears the Windows 7 (and likely Vista) defrag has this notion already, judging from its reporting, so I'm using that for the full defrag and using Defraggler for the Quick Defrag and free space defrag only.
  19. A simple "cookbook" is an attractive idea, but what makes sense is going to depend on your computer configuration and how you use it; there are trade-offs. There are also going to be differing opinions depending on the goals of the "expert": a perfect layout with every file (no matter how large) in one extent, or simply high system performance for minimal maintenance effort. For example, an optimal routine for my system appears to be:
     o Regular weekly scheduled run of the Windows 7 defragger
     o Run Defraggler "Quick Defrag" regularly to catch new files (perhaps daily)
     o Occasionally run the free space defrag (perhaps monthly)
     o Separate out a cache with lots of small files, and a very large database file, into a small filesystem which also contains the pagefile
     o Separate out large archive files like media or downloads into a filesystem at the end of the disk
     o Move large files to the end of partitions to avoid them being regularly shuffled about when the defrag compacts the used area after deletions
This works well for me, resulting in good performance for little effort, with the scheduled defrag mostly going unnoticed, but it benefits from the disk partitioning scheme I planned. If everything were in one huge C: drive, moving the large files could be less attractive. I stopped using Defraggler for the full defrag because it was tending to work too hard (coalescing even very large fragments, so the defrag costs were larger than the fragment overheads), and I had an issue with the scheduled defrag not operating with the same options as the interactive defrag (which meant large files were moved back to the start of the disk).
  20. In short, I suggest saner, more pragmatic reporting of fragmentation, and indeed toleration of fragments in large chunks (as Quick Defrag does already). Fragmentation has a large overhead when files are split into very many small pieces, not when they are merely in several large chunks requiring a small number of extra disk seeks to read. At present (unlike the Windows 7 defragger), Defraggler highlights every additional fragment, no matter how small the proportionate overhead that extra fragment will really cause. The way it is currently reported causes end users to waste time and resources defragging hyper-thoroughly, hitting diminishing returns, to be rid of an alarmingly high fragmentation %. They are also regularly posting questions in the forum, concerned about the high reported fragmentation %.
  21. Use the "no page file" setting, clean up the drive (e.g. Recycle Bin & temporaries found by CCleaner), turn off hibernation to remove hiberfil.sys, and remove restore points before doing the defrag. Then, once you have a large contiguous area available, re-enable the paging file, but this time pre-allocate it to a fixed size by setting the min & max size to the size you need; it should then all be in the same extent.
  22. The "fragmented" new files will be re-written pointlessly, as will the "location" metadata for the files, causing extra erase cycles on the flash, with especially small files causing a write amplification effect (a 4 KiB write often requires a read and rewrite of a whole 512 KiB flash erase block). SSDs tend to slow down as blocks are re-written, rather than fresh unused ones being used (hence the TRIM command, which tells the SSD controller that blocks are unused and allows it to build lists of empty flash blocks). Many OS writes can be buffered and deferred, allowing writes to be coalesced into full flash page sizes, which may not be true when re-locating a file. Buy an SSD so you can forget about defragging: there's no spinning disk or disk arm to move, so no long seek times.
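The back-of-the-envelope arithmetic for that write amplification figure, using the sizes quoted above (real controllers buffer and coalesce, so the worst case is rarely hit):

```python
write_kib = 4                     # a small 4 KiB write, as in the example
flash_block_kib = 512             # whole flash erase block read and rewritten
amplification = flash_block_kib / write_kib
print(f"Worst-case write amplification: {amplification:.0f}x")   # 128x
```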
  23. On the FileHippo download page, 1.21 is still available, which is a recent release that works under Win 2000; it has bug fixes and some feature improvements. It's well known that the Windows block allocation policies are rather crude, so there's nothing unusual about small files being fragmented (more modern Windows may do a bit better). You can analyse, then click on file list entries to show the layout of fragmented files, as well as go through the blocks to find the reason why some files are fragmented (split by unmoveable system files). The $MFT file is not defraggable, and having a large reserved area for it seems to be a feature of older Windows versions (pre-Vista). You may be able to free space that's wasted after Windows updates; the free program WinDirStat can help track down where to concentrate. On an old XP machine I looked at, 30% of the disk was taken up by such files that the owner knew nothing about. If a 3 GB file is fragmented into 100 pieces, each chunk is still about 30 MB, so it is unlikely to be a performance issue; having it in 2 chunks is really not a performance problem at all. I'm guessing that as you install Ubuntu onto the hard drive, you really want to free up space at the end of the drive to shrink the Windows partition, which is why "unmoveable" files are so bothersome, rather than the actual (small) level of fragmentation, which defrag algorithms may be ignoring as not worth the bother to solve. To avoid issues like that, I prefer to install Windows C: into a moderately sized prepared partition, with other partitions for data or installs of large disk-hog programs like games, rather than allowing it the whole disk.
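The rough arithmetic behind that 100-piece example, with ballpark assumptions for seek time and sustained transfer rate (not measured values):

```python
file_mb = 3000                              # 3 GB file from the example
pieces = 100
chunk_mb = file_mb / pieces                 # ~30 MB per fragment
seek_ms = 10                                # assumed extra seek per fragment
transfer_mb_s = 80                          # assumed sustained read rate
read_time_s = file_mb / transfer_mb_s       # ~37.5 s to read the whole file
seek_overhead_s = pieces * seek_ms / 1000   # ~1 s of extra seeking in total
print(f"{chunk_mb:.0f} MB chunks, {read_time_s:.1f} s read, "
      f"{seek_overhead_s:.1f} s extra seeking")
```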
  24. Yes, I have used the Win7 64-bit Defrag and Defraggler on a 32-bit Vista C: drive; that did seem to improve it, but the $MFT is still rather heavily fragmented, and I am not sure it really makes much difference to Defraggler, since the other Windows' C: is still a mounted, live NTFS filesystem.