Everything posted by Rob Defraggle

  1. See the thread Less Strict Fragment Defintion, Display & Reporting - End users are confused by high fragmentation % and many red blocks. You probably shouldn't worry about it. Use the file list and the check-file option, or see whether it's the usual suspects (pagefile & System Volume Information) that are being counted in the fragmentation figure.
  2. Windows since Vista has had a scheduled defrag that the lazy & incompetent would have to find and turn off; so as people move off XP when it reaches End of Life, you'll find less and less benefit to manually defragging (fewer fragments even if Defraggler reports a high fragmentation %). I agree parallel defrag is a benefit and should help even on a single-core system, as defrag speed ought, for most of the run, to depend mainly on the number of disk arms rather than on CPU time.
  3. Ideally parallel defrag would support one queue per physical disk, with drive letters queued for defrag assigned to the appropriate disk's queue; then the user doesn't have to think about serial vs parallel but simply chooses which drives to defrag (a rough sketch of the idea appears after this list).
  4. If there is a way, I haven't found it. In general it would not make sense, because files that defrag would compact to the start of the disk could be sitting in the area the large files are to be moved into; that is likely to cause fragmentation and increase the total amount of work, since the hole left behind will later need re-compacting and the "end of the drive" will have moved, meaning the large files are likely to be shifted again.
  5. This is my preferred method too, and I turn off Hibernation too, so hiberfil.sys is removed & recreated after defragging. If you don't have plenty of RAM, you can create a new pagefile in another filesystem (drive partition); setting min & max to the same size forces Windows to pre-allocate the space in one piece.
  6. I don't think the Background or Foreground priority makes much difference, because it concerns the CPU, on an operation that is basically I/O bound for most of its running time; so it would seem like cluttering the UI to offer things like CPU affinity rather than leaving it to the OS to schedule as it sees fit around other tasks. If the OS offered I/O priority, and soaking up only "background idle I/O capacity", then prioritisation would be more interesting. No, it's not silly: server admins of older OSes have long known that it's faster to serialise I/O-heavy jobs, as it minimises disk seeking and memory contention. Some would create one-job-wide batch queues if it wasn't convenient to run the tasks from one master control script (see the sketch after this list). On the desktop such tasks tend to increase latency, especially if they cause memory shortages and hence a high level of paging. It's just far more complicated than slowing things down by "using the CPU".
  7. Whilst I prefer another browser too, unless you can uninstall IE completely (which seems to be hard due to MS decisions) you still need to install the updates. I don't buy the "market share" argument as the reason for many serious exploits; it is a consequence of MS's past attitude of prioritising "ease of use" over security, even in obvious target situations like email clients and browsers. It was not unforeseeable; they were warned! The fact that very many Windows users have reason to distrust MS's updates compounds the issue. But this seems like a discussion for another place, so don't try to draw me further; the point was that the OS files do get changed, quite regularly. If you want to benefit from a reduction in long seeks, then partitioning the disk and storing as much data as you can outside of the system partition helps these issues considerably, at the cost of having to choose another drive via dialogs more frequently.
  8. Have you tried using the eject drive option from "Computer", right-clicking on the drive? IIRC this method generates a pop-up warning; if you confirm it should continue, it then successfully and safely unmounts the drive. This works for me under Win 7.
  9. So if it's definitely not restore stuff, then have a look after an analysis at what is contained in the red blocks. In the past I have had system files that were immovable, or FAT32 folders that fragmented large files even at the end of the drive, which is usually empty. Of course something has changed the layout or modified those files. There's going to be a reason, but if it's not the usual suspects, then you'll need to dig for more details. I haven't seen unfragmented files be moved or fragmented for no reason.
  10. You have a large file with another file that follows it on disk; you then append new data to the first file, it needs new extents to grow, and so it is now fragmented (a toy illustration appears after this list). I agree it is likely to be System Restore; if the files are large and in large chunks, then this fragmentation does not matter a bit. If you aren't reading back a fragmented file, you don't suffer any performance penalty. It's small files in very many small fragments that cause poor performance if many of them have to be read in by the system or applications, or if something like the $Mft becomes fragmented into 1700 pieces.
  11. I wouldn't even bother doing that purge; they'll be automatically culled if space is tight. Actually I have noticed with heavily fragmented filesystems that free space increased somewhat after defragging, presumably because less housekeeping information was needed to record which blocks the fragments' extents were using.
  12. It ought to be faster even with a single core, because the disks can operate in parallel unless Windows serialises things itself (modern disk transfers use DMA, so they are interrupt-driven and use little CPU time). If your defrags are taking a very long time, I suggest considering partitioning the disk, which significantly reduces the amount of compaction work and allows greater use of the "Move Large File to End" option without paying a significant performance penalty.
  13. Thanks for the explanation. I should say this is significant: the MFT increased from 2 to 1700 fragments according to Defraggler. Contig counts $Mft and $Mft::Bitmap as different files.

      Microsoft Windows [Version 6.1.7600]
      Copyright © 2009 Microsoft Corporation. All rights reserved.

      T:\Win\Contig>Contig

      Contig v1.6 - Makes files contiguous
      Copyright © 1998-2010 Mark Russinovich
      Sysinternals - www.sysinternals.com

      Contig is a utility that defragments a specified file or files.
      Use it to optimize execution of your frequently used files.

      Usage: Contig [-a] [-s] [-q] [-v] [existing file]
        or   Contig [-f] [-q] [-v] [drive:]
        or   Contig [-v] -n [new file] [new file length]

        -a: Analyze fragmentation
        -f: Analyze free space fragmentation
        -q: Quiet mode
        -s: Recurse subdirectories
        -v: Verbose

      Contig can also analyze and defragment the following NTFS metadata files:
        $Mft $LogFile $Volume $AttrDef $Bitmap $Boot $BadClus $Secure $UpCase $Extend

      T:\Win\Contig>Contig -v V:\$Mft

      Contig v1.6 - Makes files contiguous
      Copyright © 1998-2010 Mark Russinovich
      Sysinternals - www.sysinternals.com

      ------------------------
      Processing V:\$Mft:
      Scanning file...
      V:\$Mft is already in 1 fragment.
      ------------------------
      Processing V:\$Mft::$BITMAP:
      Scanning file...
      V:\$Mft::$BITMAP is already in 1 fragment.
      ------------------------
      Summary:
           Number of files processed   : 2
           Number of files defragmented: 0
      All files were either already defragmented or unable to be defragmented.

      // Run Defraggler Defrag on V:

      T:\Win\Contig>Contig -v V:\$Mft

      Contig v1.6 - Makes files contiguous
      Copyright © 1998-2010 Mark Russinovich
      Sysinternals - www.sysinternals.com

      ------------------------
      Processing V:\$Mft:
      Scanning file...
      Scanning disk...
      File is 45679 physical clusters in length.
      File is in 1699 fragments.
      Moving 45679 clusters at file offset cluster 4 to disk cluster 13894015
      File size: 187105280 bytes
      Fragments before: 1699
      Fragments after : 1
      ------------------------
      Processing V:\$Mft::$BITMAP:
      Scanning file...
      V:\$Mft::$BITMAP is already in 1 fragment.
      ------------------------
      Summary:
           Number of files processed   : 2
           Number of files defragmented: 1
      Average fragmentation before: 850 frags/file
      Average fragmentation after : 1 frags/file

      T:\Win\Contig>

      Debug log attached: Defraggler64.exe.2_2_2532011-02-17_12-07.txt
  14. What was put at the end of the drive? You can find out by clicking on the blocks in the drive map. If you used the "Move Large Files to end of drive" option, then uncheck it and defrag again; the data should get compacted to the front.
  15. I run the command shell "as administrator", then run Contig with something like "Contig -v V:\$Mft". That defragged both $Mft and $Mft::Bitmap. I couldn't copy & paste the text output directly, as presumably MS don't consider text command support important enough to implement it.
  16. I have noticed the $Mft seriously fragmented in an old Vista 32 NTFS system filesystem, via the Drive Map. I have found, running under Win 7 64, that a Defraggler Defrag has probably caused this fragmentation: when $Mft is the only fragmented file, it relocates the $Mft file into holes on an apparently first-fit basis. Though that does not explain the very high number of fragments (1,600+ on a 182 MB file), as there tend to be only 5 or so holes to fill, mostly shared with 0 or 1 file, and only one has about 20 smallish non-fragmented files in it. The tool Contig can relocate the $Mft into 2 fragments; there's discussion in the thread Defragging the MFT. So at present it's repeatable, and can be undone.
  17. After having the Win 7 defragger consolidate the file allocations, the holes have moved; doing a Defrag resulted in part of $Mft being put in the first hole, then allocated interspersed with other files as they were being compacted. $Mft is now in 1692 fragments after a Defraggler Defrag. So the good news is that this is repeatable for me. I presume it's a property of the $Mft causing this, as some report success defragmenting it within Defraggler by checking its box.
  18. Yes, I didn't doubt that the 200 MB is what's supposed to happen. Contig defragged the $Mft, moving it into the middle of the free space in 2 fragments, as shown by Analyse. But guess what: running a Defraggler full Defrag afterwards re-fragments it, putting it back into the 5 or so holes (in the generally compacted beginning of the disk, with just a few other files held in them) where it was before Contig defragmented it. This seems like a real bug, as it's deciding to relocate a 182 MB, 2-fragment file but apparently storing the relocated blocks on a first-fit basis, where the fragmentation HAS to be greater than before. So it actually seems like Defraggler can indeed cause seriously increased fragmentation of the $Mft. I found that a Defrag Freespace left $Mft in 256 pieces afterwards, partially moved back; running the Defrag again, I got back to the very high fragmentation level. I can't exclude V:\$Mft either, as the exclude dialog doesn't have a way to select it, or let me type $Mft into it, that I can find. The odd thing is that in most of the blocks, according to the Drive Map, $Mft was the only file present, and in another block there are only about 20 files, all non-fragmented. This is reproducible on my Vista filesystem at the moment; Defraggler 1.21 & 2.0.2 are affected. Hopefully, if the hole left behind is filled in by something else, the $Mft, which is now in 2 fragments, won't be moved back and split up. I'm afraid I've never installed or used Recuva, so I'm going by what the Drive Map tells me, rather than using that tool to look at the cluster allocation.
  19. You do know that you can highlight multiple drives while holding the Ctrl key, then choose "Defrag" or "Quick Defrag", with them processed in a queue one after another? From your last post it seems that might help you; though it doesn't defrag the physical drives in parallel, the Quick Defrag is likely to be fast enough to be done by the time you return to the machine.
  20. Nice theory, but on my original Vista 32 filesystem (which has run Vista since before SP1) the $Mft is claimed to be 182,720 KB and is in 1,673 fragments. It appears to be in (on average) 109 KB pieces, nowhere near 200 MB chunks, which would indeed be very satisfactory; and there are only 4 regions containing it on the Drive Map, with no obvious reason why it shouldn't be relatively contiguous. Yes, the Vista filesystem seems noticeably more sluggish for some things than the fresher Win 7 one held on the same disk. Checking it via the Highlighted tab from the Drive Map and doing "Defrag Checked" does not successfully defrag the file; it does try, but pops up "No files were defragmented". I have managed to defrag other internal NTFS files this way (I think $UsnJrnl). Running an FS check did find some unallocated blocks marked as allocated in the bitmap and fixed those, but there was no change to the situation. I have downloaded Contig 1.6 and will give it a try to see if it can improve that MFT file. Thanks to the OP for posting about $MFT!
  21. When I'm defragging a file, it's often painfully obvious that "gathering" the scattered sectors is rather slow (perhaps taking several seconds for even 10 MB), suggesting the read/write head movements are the bottleneck, not the bandwidth to/from main memory (some back-of-the-envelope numbers appear after this list). The moves appear possibly to be synchronous, for data-safety reasons, when reallocating an extent. On RAID 0, 1, 10 or 0+1 setups with multiple disk arms, one would not see a performance improvement if disk access were controller-bandwidth limited, but you do; the extra disk arms really do help. Given that many motherboards have more than one SATA controller, or it's possible to use PCI Express cards, wouldn't they see a benefit from simultaneously defragging 2 disks? In the past, with PATA, one did see benefits with multiple UDMA hard disks despite the read/write speed having less headroom; now only the fastest SSDs are saturating 3 Gbit/s SATA controllers. Disks have just never been able to sustain peak transfer rates. If your disk is not able to read fragmented files fast enough to max out the controller throughput, then you would see a benefit; if the disk can max out the throughput despite the fragmentation, then why bother defragging in the first place?
  22. I have had Defraggler crashing similarly; scheduling a filesystem verification on boot is what fixed it.
  23. Actually, due to the SVI growing during a defrag, the reported fragmentation can indeed rise after a defrag; it drops significantly later when the restore points are removed (which happens automatically as new ones are regularly made). The OP (marie-helene) probably (IMO) need not worry about this fragmentation %: it is generally high due to very large files in a relatively small number of big pieces, which are not even used heavily, rather than small, heavily used files fragmented into very many small pieces, which is what reduces disk read performance. If you examine the fragmented file list after an Analyse, you can check. Running the Quick Defrag to tidy up new files should maintain good performance, with an occasional Advanced -> Defrag Freespace to consolidate that and (hopefully) improve future block allocation. The Win 7 defragger is scheduled to run regularly enough by default for files that aren't dealt with by Quick Defrag. The thread Less Strict Fragment Defintion, Display & Reporting - End users are confused by high fragmentation % and many red blocks discusses the issue you've observed, and tries to explain why the high % reported is unimportant.
  24. Another thread illustrating the point about the effect of the current fragmentation reporting: defrag result higher and higher. Incidentally, a benefit of "chunking" could be to speed up compaction: rather than needing to shift huge, very variably sized fragments, you'd have many more standard-size large-file chunks plus smaller oversize last fragments within a more constrained range. Holes could more often be filled by relocating chunks, rather than shifting GBs of files one after the other as tends to happen currently (sketched after this list).
  25. I wouldn't want to do what the OP suggests, for a number of reasons:
      1) I bought a good amount of memory for my PC; most of it is used by Win 7 to cache disk blocks, and those pages can be released and re-read from disk when necessary, so there is little paging. In my usage Win 7 does not perform worse than Linux, which does very little writing & reading to/from the pagefile; if Win 7 did, it would crawl. I suspect this "no brainer" optimisation could actually slow a system down! I moved my pagefile out of the C: drive into a freed partition and saw no difference (despite it being at the beginning of the disk).
      2) I installed into a sanely sized C: partition, so all of Windows and the most common small applications have locality and a highish transfer rate. I can store most of the big, rarely used crapola in another partition entirely. It is not as convenient as under Linux or UNIX, but it is doable.
      3) Demand paging and access to small files & folders mean random-access speed is more important than sequential transfer rate. There are OS features to do pre-fetching and benefit from sequential reads, so it's actually those files which would most logically be placed on the fastest part of the disk, but that requires M$'s cooperation in the OS to allocate or move them (good luck with that!). If you want performance, buy a good SSD!
      4) After regular updates, imposing the ordering on Windows and program files would cause lots of recompacting.
      All in all, I really doubt you'll see a large performance boost. It is easy to compare: you can install into multiple partitions and have most program files on a different drive, with user data mounted elsewhere. I have most software on a 2nd disk, rather than on the system drive, which really does improve performance. Provision of scientifically benchmarked numbers could change my mind.
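
A minimal Python sketch of the per-physical-disk queue idea from item 3 above. The physical_disk_of() and defrag_volume() helpers are hypothetical stand-ins, not part of Defraggler; the point is only that volumes sharing a spindle are processed serially while separate disks run in parallel.

    # Sketch only: one defrag queue per physical disk, one worker per queue.
    import queue
    import threading

    def physical_disk_of(drive_letter):
        # Hypothetical lookup of the physical disk a volume lives on
        # (on Windows this would come from WMI or an IOCTL); hard-coded here.
        return {"C": 0, "D": 0, "E": 1}[drive_letter]

    def defrag_volume(drive_letter):
        # Placeholder for the real per-volume defrag work.
        print(f"defragmenting {drive_letter}:")

    def defrag_drives(drive_letters):
        # Assign each queued drive letter to its physical disk's queue.
        queues = {}
        for letter in drive_letters:
            queues.setdefault(physical_disk_of(letter), queue.Queue()).put(letter)

        def worker(q):
            # Volumes on the same spindle are defragged one after another...
            while not q.empty():
                defrag_volume(q.get())

        # ...but each physical disk gets its own worker, so disks run in parallel.
        workers = [threading.Thread(target=worker, args=(q,)) for q in queues.values()]
        for t in workers:
            t.start()
        for t in workers:
            t.join()

    defrag_drives(["C", "D", "E"])   # C: and D: share disk 0, so they queue; E: runs alongside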
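
On the "one job wide" batch queues mentioned in item 6: a rough illustration with made-up job names. A single worker means the I/O-heavy jobs never compete for the same disk arm, which is the whole point of serialising them.

    # Sketch only: serialise I/O-heavy jobs through a one-job-wide queue.
    from concurrent.futures import ThreadPoolExecutor

    def nightly_backup():          # made-up example job
        print("running backup (disk-heavy)")

    def log_rotation():            # made-up example job
        print("rotating logs (disk-heavy)")

    def search_reindex():          # made-up example job
        print("rebuilding index (disk-heavy)")

    # max_workers=1 makes this a batch queue that runs one job at a time,
    # so each job gets the disk to itself instead of fighting over seeks.
    with ThreadPoolExecutor(max_workers=1) as batch_queue:
        for job in (nightly_backup, log_rotation, search_reindex):
            batch_queue.submit(job)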
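
A toy cluster map for the append scenario in item 10 (this is not how NTFS actually allocates, just an illustration): file A is boxed in by file B, so growing A forces it into a second extent.

    # Toy example: appending to a boxed-in file creates a new fragment.
    disk = ["A"] * 10 + ["B"] * 2 + [None] * 8   # A in clusters 0-9, B right behind it

    def extents_of(disk, name):
        # Collapse the cluster map into (start, length) runs; each run is one fragment.
        runs = []
        for i, owner in enumerate(disk):
            if owner == name:
                if runs and runs[-1][0] + runs[-1][1] == i:
                    runs[-1] = (runs[-1][0], runs[-1][1] + 1)
                else:
                    runs.append((i, 1))
        return runs

    def append(disk, name, clusters):
        # Grow the file using the first free clusters available.
        for i, owner in enumerate(disk):
            if clusters == 0:
                break
            if owner is None:
                disk[i] = name
                clusters -= 1

    print(extents_of(disk, "A"))   # [(0, 10)]            -> one fragment
    append(disk, "A", 4)           # B blocks contiguous growth, so A spills past it
    print(extents_of(disk, "A"))   # [(0, 10), (12, 4)]   -> now two fragments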
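
Back-of-the-envelope numbers for the point in item 21 about head movements being the bottleneck. The seek time and transfer rate below are assumed round figures for a desktop hard disk, not measurements.

    # Rough arithmetic: reading 10 MB contiguously vs in 100 scattered fragments.
    seek_ms = 12.0          # assumed average seek + rotational latency per repositioning
    transfer_mb_s = 100.0   # assumed sustained sequential transfer rate

    file_mb = 10.0
    fragments = 100

    transfer_ms = file_mb / transfer_mb_s * 1000       # ~100 ms to move the data itself
    contiguous_ms = seek_ms + transfer_ms              # one seek, then stream: ~112 ms
    fragmented_ms = fragments * seek_ms + transfer_ms  # reposition per fragment: ~1300 ms

    print(f"contiguous: {contiguous_ms:.0f} ms, fragmented: {fragmented_ms:.0f} ms")
    # The repositioning dominates; the SATA link is nowhere near saturated in either case.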
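
A small, purely illustrative sketch of the "chunking" idea in item 24: fixed-size chunks can be dropped into any hole at least one chunk long, whereas relocating the file as a single run needs one hole big enough for the whole thing. The hole and chunk sizes are invented for the example.

    # Illustrative only: fixed-size chunks make better use of existing holes.
    holes_mb = [300, 700, 1200, 500]   # free-space holes left after compaction
    file_mb = 4096                     # one large file to relocate
    chunk_mb = 256                     # hypothetical standard chunk size

    # Moving the file as one piece needs a single hole of at least 4096 MB: none qualify.
    holes_fitting_whole_file = [h for h in holes_mb if h >= file_mb]

    # Moving it as 256 MB chunks lets every hole absorb part of the file.
    chunks_total = file_mb // chunk_mb
    chunks_placed = sum(h // chunk_mb for h in holes_mb)

    print(f"holes that fit the whole file: {len(holes_fitting_whole_file)}")  # 0
    print(f"chunks placed in holes: {chunks_placed} of {chunks_total}")       # 8 of 16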