Talk:File system fragmentation
This article is rated Start-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
The contents of the Can't extend page were merged into File system fragmentation on April 10, 2015. For the contribution history and old versions of the redirected page, please see its history; for the discussion at that location, see its talk page.
Bittorrent, etc. preallocation does not always prevent fragmentation
Consider copy-on-write filesystems, which use immutable allocated blocks (like ZFS). This allocation will just waste time and will not change the characteristics of the filesystem. —Preceding unsigned comment added by 129.65.102.1 (talk) 17:49, 27 February 2008 (UTC)
Specific File System
Wasn't there an attempt at a file system that attempted to avoid that by writing data in layers, from top to bottom, letting the old data drop? Could someone tell me its name please? Thanks! 84.129.170.44 (talk) —Preceding undated comment added 04:23, 5 April 2009 (UTC).
Reasons for duplicating "defragmentation" article
For the record, I created this article as I was dissatisfied with the current defragmentation article that:
- Approaches the problem from the inverse perspective, talking about how to solve the problem without saying what exactly the problem is.
- Uses imprecise terminology, occasionally saying "partition" instead of "file system", etc.
- Includes the misconception that fragmentation only takes place on the level of individual files.
- Spreads the common myth of Unix file systems not needing defragmentation, citing only unreliable sources (the majority of "reliable" research on file systems does indicate that fragmentation is a problem, and I will cite sources as necessary)
I have attempted to mitigate these somewhat, but ultimately decided to write this article. I don't know if I can get it into a good enough shape to be merged with "defragmentation" (if at all), but I will try, and I will cite genuine research in the process. It may or may not be considered a "rewrite" at this point. Any criticisms and comments are very welcome. -- intgr 03:53, 14 December 2006 (UTC)
"Related file fragmentation"?
While I myself added this to the article, I'm not sure it is fair to consider "related file fragmentation" a type of fragmentation. While research dealing with fragmentation very often also touches the topic of keeping related files together (e.g., files in a single directory), I don't think I can recall any instances where it's actually referred to as "fragmentation" per se.
However, consider when an archive is unpacked. As all files are decompressed in order, they will likely be laid out sequentially. But as time goes on, and files are added and deleted, the directory tree into which the files were decompressed becomes less and less "contiguous", i.e., can be considered "fragmented". -- intgr 14:21, 19 December 2006 (UTC)
- http://www.kernelthread.com/mac/apme/fragmentation/ talks about this as "User-level data fragmentation" -- intgr 10:40, 21 December 2006 (UTC)
Mac OS X
I struck the Mac OS X note, since it isn't what actually happens. Mac OS X/HFS+ do not defrag at idle. What happens is that when a fragmented file is opened, it is defragged (unless the system has been up less than some specific time, and I forget what that time is). Thus, there's no "at idle" about it. Now if there's a separate "at idle" process, by all means put the claim back in (but please reference it). Thanks. :) --Steven Fisher 18:58, 19 December 2006 (UTC)
- Thanks for correcting, I should have looked it up before mentioning; I've heard this myth from several people and it turns out not to be true indeed. :) -- intgr 19:32, 19 December 2006
Merge with defragmentation
As the article has been tagged with {{merge|Defragmentation}}, does anyone have ideas how to do that (e.g., what to merge, what not to merge, and what to throw away)? -- intgr 14:32, 30 January 2007 (UTC)
Don't Merge
Although it would seem logical to merge the "Defragmentation" and the "File system fragmentation" articles, the first will naturally focus on practical aspects of dealing with the problem, and the second on a more theoretical understanding of the root cause of the problem. Combined into one article, there is a danger that it will get overly complex -- or that important material will be deleted to keep the article simple.--69.87.193.53 19:22, 18 February 2007 (UTC)
I totally disagree...defragging is the way by which the natural order is kept...extra work that users have to do is good for their character. —The preceding unsigned comment was added by 68.249.171.240 (talk)
- Don't merge. I agree with 69.87.193.53. File system fragmentation is an issue for OS/filesystem designers, while defragmentation is an issue for system administrators. They are as different as Dirt and Cleaning. --N Shar (talk • contribs) 04:18, 13 March 2007 (UTC)
I also disagree, merging the two articles would lead to one gigantic overly complex article. As stated earlier, information would probably be cut for the sake of simplicity, resulting in an incomplete article overall. Just link from the fragmentation article to the defragmentation article. --Rollerskatejamms 13:07, 13 March 2007 (UTC)
Merge Defragmentation#Causes of fragmentation with File system fragmentation
A much better method; it merges only the needless copy of File system fragmentation#Cause.
Spitfire (talk) 01:35, 25 September 2008 (UTC)
posix_fallocate
There is a POSIX fallocate API for allocating a predetermined size for a file (or part of it). It has two motivations: to make sure there is enough free space for the file (so write operations will not fail because of lack of space), and so the file can be preallocated in a single extent on disk, if possible, in an optimal manner.
Some tools already use it to decrease fragmentation. —Preceding unsigned comment added by 149.156.67.102 (talk) 21:50, 7 December 2009 (UTC)
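For illustration, a minimal C sketch of how a downloading tool might call posix_fallocate to reserve space up front; the file name and size here are made-up examples, not anything from the comment above:

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical example: preallocate 700 MiB for a download target. */
    const off_t size = (off_t)700 * 1024 * 1024;
    int fd = open("download.bin", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    /* posix_fallocate() returns 0 on success or an errno value on failure.
       Where the file system supports it, the space is reserved up front,
       which lets the allocator pick a contiguous extent when possible. */
    int err = posix_fallocate(fd, 0, size);
    if (err != 0) {
        fprintf(stderr, "posix_fallocate: %s\n", strerror(err));
        close(fd);
        return EXIT_FAILURE;
    }

    close(fd);
    return EXIT_SUCCESS;
}
```

As noted in the earlier comment, on copy-on-write filesystems such as ZFS this reservation does not translate into a contiguous on-disk extent, so it may not reduce fragmentation there.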
Infinite fragmentation
User:Codename Lisa recently changed some content in this article and introduced a concept of "infinite fragmentation", which I reverted. He/she started a discussion at User talk:Intgr#File system fragmentation, which I have moved here:
- Hi.
- I am a little shocked that you deleted a sentence that had a source associated with it and called it original research!
- Here is what the source says:
It’s also somewhat of a misconception that fragmentation is not a problem on SSDs. If an SSD gets too fragmented you can hit maximum file fragmentation (when the metadata can’t represent any more file fragments) which will result in errors when you try to write/extend a file. Furthermore, more file fragments means more metadata to process while reading/writing a file, which can lead to slower performance.
- And...
Additionally, there is a maximum level of fragmentation that the file system can handle. [... Read on]
- Also, as you are aware, statements in the lead need not have a footnote as long as the body has one. (WP:LEAD)
- Best regards,
Codename Lisa (talk) 17:17, 30 November 2015 (UTC)
Sorry, I was perhaps too blunt. I was mainly ticked off by the weird term "infinite fragmentation" (that it appears you made up, hence "original research") and the claim that filesystems can't "sustain" some level of fragmentation.
The only source you're citing is a post on the personal blog of Scott Hanselman. Looking at his publications, he doesn't appear to be an expert on file systems. I don't claim that he's incompetent, maybe just makes oversimplifications, which I think are too misleading for an encyclopedia.
You (and/or this source) are confusing the fragmentation within the SSD device itself (in the flash translation layer, or FTL) and fragmentation of file systems. Those are two separate layers; fragmentation occurs at both layers, but FTL fragmentation is of no concern to the file system. The difference here is that SSDs must make guarantees about how much user data they're capable of storing. Perhaps it's possible for very fragmented SSDs to violate that guarantee. It sounds dubious to me, but that's not really relevant. It doesn't apply to file systems because FSes don't make such guarantees.
On most regular file systems, file data and overflow extent (fragment) mappings are allocated from the same pool of storage. What this means is, if your files are very fragmented, you'll run out of disk space slightly quicker. But the filesystems will "handle" and "sustain" it fine, nothing will break. Out of space is a well-defined state for file systems to be in. File systems don't make any guarantees about how much of the disk space will be available for file content. The "limits" of file system fragmentation are bound by disk space. -- intgr [talk] 09:07, 1 December 2015 (UTC)
- Hi.
- "he doesn't appear to be an expert on file systems". Skipping over the questionable "appear" part, I didn't appeal to his expertise (even though 16 published books go a long way to establish expertise). Scott Hanselman, in this case, is a secondary source in good standing for relaying information on behalf of the storage team.
- "You (and/or this source) are confusing the fragmentation [...]". The subject of discussion is file system fragmentation and the source is quite clear that it is Windows Defragmenter that is involved, that the operation is on the volume level, and that the fragment limit is an issue on both traditional and SSD storage. Furthermore, judging the source from a technical standpoint without another source is exactly original research. Finally, I have a purely unofficial FYI to add: Frankly, I think the last two paragraphs are bits and pieces of random facts with highly technical words, sewn together like a Chimera, to frighten.
- Best regards,
- Codename Lisa (talk) 06:34, 2 December 2015 (UTC)
- @Codename Lisa: Fair enough, you're welcome to ignore my explanation (original research) for how fragments are allocated on file systems. No, it wasn't meant to frighten.
- Instead, let's start with you finding an actual reliable source.
- Yes, he has published 16 books on the subject of programming languages -- not file systems. The reason why I pointed out that he's not an expert on file systems is that this source fails WP:USERGENERATED. There's an exception for blogs published by experts, but it does not apply to this source: "Self-published material may sometimes be acceptable when its author is an established expert whose work in the relevant field has been published by reliable third-party publications"
- -- intgr [talk] 08:11, 2 December 2015 (UTC)
- Intgr, I am not comfortable with where this discussion is going. First, you call it OR, even though there is a source. Then, you use the Chimera sentence, showing that you didn't even read the source. And now you are attacking the author while ignoring that he is merely a secondary source; the people who need to have expertise here are the Windows Storage team. S.H. has enough qualifications for being a secondary source; i.e., he has an in-depth, developer-grade understanding of computers.
- It appears you have first decided that the contribution must be wrong, and now you are trying different methods of attacking it. If this discussion is to be useful, you must abandon your presumptions, read the source that I gave you, and judge it objectively.
- Best regards,
- Codename Lisa (talk) 18:30, 2 December 2015 (UTC)
@Codename Lisa: Ok, in some respects you are right, I have been unreasonable in my communication and I apologise for that. In other points, I think you're misrepresenting what I said and inverting burden of proof. But quarreling about that won't move this discussion forward, let's stop arguing about who-said-what-when.
I looked into this and it appears that NTFS indeed has issues with hitting file fragmentation limits. If I understand correctly, this is what you're talking about. These pages go into some depth about what's going on: [2] and [3]. Based on these, I think it's fair to state in the article that NTFS has a limit on the amount of fragments per file. But that doesn't extend to generalizations like "file systems cannot sustain unlimited fragmentation", or the more extreme "no file system can sustain unlimited fragmentation" — if by "not sustaining" you mean something other than degraded performance. Can you find any sources about fragmentation limits on other common file systems, such as FAT, ext4, UFS, ZFS, XFS?
I believe these pages even confirm that it is not an inherent limitation of all file systems, but an NTFS implementation detail: "A heavily fragmented file in an NTFS file system volume may not grow beyond a certain size caused by an implementation limit in structures that are used to describe the allocations", "In the newer versions of Windows, NTFS will stop fragmenting compressed and sparse files before the attribute list reaches 100% of its maximum size. This should put the issue to rest once and for all."
There are books written on the design and administration of Windows and many of them talk about NTFS. Can you find any that cover this issue? I gave it a shot ([4], [5], [6]. [7], [8]) and sadly they barely cover fragmentation at all. For the sake of argument (please bear with me): if it applies only to NTFS, it's not documented in reliable secondary sources, and the primary source that I linked says that it's a rare occurrence, is it really important enough to cover it in the article? -- intgr [talk] 23:15, 2 December 2015 (UTC)
- Hi.
- Thanks. Now I am more comfortable with where this discussion is going. The fragmentation is caused by the file system but not in the file system. Fragmentation occurs on data clusters and blocks. (Feel free to contradict this by deleting the three quarters of the article that say it. I didn't add them anyway.) Here is a little bit more reading: https://technet.microsoft.com/en-us/library/cc781134%28v=ws.10%29.aspx.
- Best regards,
- Codename Lisa (talk) 01:56, 3 December 2015 (UTC)
- @Codename Lisa: Erm? I'm confused. What does this have to do with fragmentation limits? I'm also not sure what I'm supposed to be reading from your link. -- intgr [talk] 02:11, 3 December 2015 (UTC)
- @Intgr: Read a section called "§Physical structure". And if I didn't respond immediately to your next message, I will within, say, two hours or so. Best regards, Codename Lisa (talk) 02:14, 3 December 2015 (UTC)
- @Codename Lisa: Clusters/blocks are part of the on-disk layout of the file system. How can you say that they're "not in the file system"? I read it and I still don't get it, what does that have to do with fragmentation limits? -- intgr [talk] 02:44, 3 December 2015 (UTC)
- Huh? No, they are not. Sectors are not part of the file system. They exist even when a file system is not instantiated. (Hint: A sector-by-sector backup app can create a backup image of a partition even when it does not understand its file system. Useful for backing up Linux from Windows or vice versa.) Clusters are logical groupings of sectors. The file system abstracts fragmentation, i.e. apps that interface with the file system don't have to deal with fragmentation; they just read and write their data. Best regards, Codename Lisa (talk) 03:07, 6 December 2015 (UTC)
- @Codename Lisa: Why the hell did you change the subject? Or do you agree with my position wrt "unlimited"/"infinite fragmentation" and you're just looking for the next thing to argue about? -- intgr [talk] 08:12, 7 December 2015 (UTC)
- Whoa! Here comes the H word! Frankly, I don't understand your position. Do you even have any? You ask questions and when I answer them, you say that I changed the subject!
- What I did in the article was (1) adding a source for the claim that SSD performance is not as much impacted by fragmentation, (2) stating why it is so, and (3) adding new information on other negative consequences of fragmentation besides performance, i.e. the limit. You somehow don't seem to like items #2 or #3. First, you call it OR, even though there is a source. Then you use a Chimera sentence without reading the source. Next you attack the source's author even though his qualifications are enough for a secondary source. And finally this NTFS distraction. I asked "so what?" and you ignored it.
- I said it once, I say again: Unless you study the source and judge its content objectively, the discussion is going nowhere. And this time, I am not going to continue a discussion that goes nowhere. Also, you seem to be stumbling about for another source, like a person who religiously believes SSDs need no defragmentation at all, and now, having read something contrary, seek to either disprove or silence it. Tell me intgr: Am I wasting my time here?
- Best regards,
- Codename Lisa (talk) 09:25, 7 December 2015 (UTC)
- @Codename Lisa: Yes, I used the "H word"; I asked nicely twice, how is this "How NTFS Works" article (or the question whether "data clusters and blocks" are part of the file system) relevant to our discussion about the limits of fragmentation — but I didn't receive an answer. If it is relevant, please explain how. So far it seems to me like a distraction.
- My position is stated in this message. As far as I can tell, you didn't really address most of it. To move the discussion forward, please state clearly, what parts of it do you agree with, and what do you disagree with. The most important part being:
Based on these [sources], I think it's fair to state in the article that NTFS has a limit on the amount of fragments per file. But that doesn't extend to generalizations like "file systems cannot sustain unlimited fragmentation", or the more extreme "no file system can sustain unlimited fragmentation"
-- intgr [talk] 09:57, 7 December 2015 (UTC)
- "And finally this NTFS distraction. I asked "so what?" and you ignored it." - "So what?" is not an argument. You have a source for the fact that NTFS has limits for the number of fragments per file (do we agree on that?). But in the article you want to state that every file system has this limit. Why do you think such a generalisation is appropriate, or necessary? Please offer some justification, more than "so what?". -- intgr [talk] 10:14, 7 December 2015 (UTC)
- I am not playing a game of Invent Your Own Irrelevant Problem, especially not one in which using seven filthy words has a bonus score. Neither the source nor I said anything about NTFS. You first said it, for no relevant reason. I answered you out of courtesy only. But from here on out, I refrain from participating in this discussion any further unless a genuine concern comes up. Best regards, Codename Lisa (talk) 10:09, 8 December 2015 (UTC)
- @Codename Lisa: Please understand this: we're having this argument because we disagree with each other. In order to get anywhere with this discussion, we have to understand exactly where the disagreement lies. To that end, we have to state clearly what we disagree with. If you don't understand why I bring something up then you can ask me why I did so — like how I asked you why you bring up the clusters and blocks thing (but let's take one thing at a time). If you just ignore the things that you disagree with me about, or that we're talking past each other about, then we have no basis for a discussion, and that's where things bog down.
- I brought up the NTFS sources because I thought they are relevant to understanding the issue that the Scott Hanselman article talks about.
- So if I understand you correctly (please state yes or no), you disagree that the "maximum level of fragmentation" in your source was talking about the NTFS fragmentation limit that I pointed out? That's why I did not state it as fact in my earlier comment: "I looked into this and it appears that NTFS indeed has issues with hitting file fragmentation limits. If I understand correctly, this is what you're talking about". A simple "no, I don't think that's the same thing because [...]" would have sufficed.
- Because the Scott Hanselman source is somewhat vague, I grant it's difficult to pin down what file systems it's talking about. Perhaps they're not talking about just NTFS. But it keeps repeating "Windows" so many times, and bases this on "[talking] to developers on the Windows storage team". This appears to be advice for people running Windows systems. (Do you agree/disagree?)
- If yes, is it possible that they're talking about restrictions that are specific to Windows file systems? (Do you agree/disagree?)
- If yes, isn't the "no file system can sustain unlimited fragmentation" claim from your edits too much of a generalization? (Do you agree/disagree?) -- intgr [talk] 14:10, 8 December 2015 (UTC)
- P.S. I forgot to say this: So what if it is restricted to NTFS? (I doubt it, but what if?) If SSD drives with NTFS volumes need defragmentation, then it is wrong to say "SSD drives don't need defragmentation at all", which was what the article said before I edited it. (Or was it another article? Must check the diffs.) Best regards, Codename Lisa (talk) 02:08, 3 December 2015 (UTC)
- @Codename Lisa: This article said "File system fragmentation has less performance impact upon solid-state drives" (which you moved and slightly rephrased), and that's correct as far as I'm concerned. I didn't revert/delete/dispute that part. -- intgr [talk] 02:21, 3 December 2015 (UTC)
- Okay, so it was another article after all. All is well. Best regards, Codename Lisa (talk) 03:12, 6 December 2015 (UTC)
What is this?
It is hard to figure out what this discussion is about. Beginning with the title, "Infinite fragmentation", there is no such thing: Try imagining how one might approximate unbounded fragmentation: Even if 128 GB of RAM were fragmented into 128 billion single bytes of storage, it is far from infinite.
The subject of SSDs not suffering from fragmentation appears to have two points-of-view. Of course a fragmented file cannot be read into memory with a single i/o operation ("read 123 clusters beginning with disk cluster 765 into address 1000"); instead many i/o operations are needed, at least one per fragment, though the i/o would not suffer from seek delays. But the on-disk overhead of tracking all those fragments affects SSDs, hard disks, thumb drives, floppies, etc. equally.
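To illustrate the "one I/O operation per fragment" point, here is a small C model; the run list below is hypothetical and not tied to any particular file system:

```c
#include <stdio.h>

/* Illustrative model only: count the device reads needed for a file laid
   out as a list of on-disk runs (fragments). One read request is needed
   per run, whether the medium is an SSD or a hard disk. */
struct run { unsigned long start_cluster; unsigned long cluster_count; };

int main(void)
{
    /* Hypothetical layout: a 123-cluster file stored in 3 fragments. */
    struct run runs[] = { {765, 100}, {2048, 20}, {9000, 3} };
    unsigned reads = 0;
    unsigned long clusters = 0;

    for (unsigned i = 0; i < sizeof runs / sizeof runs[0]; i++) {
        reads++;                            /* one I/O per contiguous run */
        clusters += runs[i].cluster_count;
    }
    printf("%lu clusters read in %u I/O operations\n", clusters, reads);
    return 0;
}
```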
NTFS has what I think is an elegant way of describing stream cluster locations. If the data content is less than about 800 bytes (the actual value depends on the length of the filename and the number of aliases, security descriptor complexity, and number of streams in the file), the data is stored in the FRS (the $Mft structure which describes all aspects of the file) in the place where, for a larger file, the "run list" would be. The run list is a list data structure where each element corresponds to a "fragment": that is a starting cluster number and the number of consecutive clusters. The run list is cleverly compacted so that most elements are 3 to 5 bytes long. This means that 200 to 230 fragments can be held in one FRS (which are 1024 bytes long). I once examined a file (an exchange server database) with more than 22,000 fragments. It took only 101 extension FRSs to describe it all. 100 K of filesystem overhead to describe a 75 GiB file. Of course, had the file been defragmented, the overhead would have been 1 K.
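For illustration only, the arithmetic above expressed as a tiny program; the 220 runs-per-FRS figure is an assumption within the 200–230 range mentioned, not an exact NTFS on-disk constant:

```c
#include <stdio.h>

/* Rough, illustrative arithmetic (not the actual NTFS on-disk format):
   estimate how many 1 KiB file record segments (FRSs) are needed to hold
   a run list, using an assumed runs-per-FRS figure. */
static unsigned frs_needed(unsigned fragments, unsigned runs_per_frs)
{
    if (fragments == 0)
        return 1;                     /* the base FRS always exists */
    return (fragments + runs_per_frs - 1) / runs_per_frs;
}

int main(void)
{
    /* ~22,000 fragments at ~220 runs per FRS comes out to roughly 100
       extension FRSs, i.e. on the order of 100 KiB of metadata. */
    printf("%u FRSs\n", frs_needed(22000, 220));
    return 0;
}
```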
NTFS manages all allocations by cluster units. There is no possibility of block fragmentation. 4 K (8 sectors) is the usual cluster size for NTFS due to NTFS-provided compression limitations. Any discovered bad sectors are managed by the cluster they are in.
All filesystems are inherently fragmentation limited: For a file of N clusters, the upper bound is N fragments. It is that simple. Likewise the filesystem's upper bound of freespace fragmentation is the number of free clusters. This is true of NTFS, FAT, UFS, ext2-4, etc. Of these, FAT is unique in that it does not provide a cluster count representation, but reasonable drivers look for sequential cluster numbers in the allocation table with the goal of treating it as a [cluster number, cluster count] structure.
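A minimal sketch, assuming a simple in-memory cluster chain, of the run coalescing a driver might do as described above (illustrative only, not actual FAT driver code):

```c
#include <stdio.h>

/* Walk a cluster chain and coalesce consecutive cluster numbers into
   [start, count] runs; each run corresponds to one fragment. */
static void print_runs(const unsigned *chain, unsigned len)
{
    unsigned i = 0;
    while (i < len) {
        unsigned start = chain[i];
        unsigned count = 1;
        while (i + count < len && chain[i + count] == start + count)
            count++;
        printf("[start=%u, count=%u]\n", start, count);
        i += count;
    }
}

int main(void)
{
    /* Hypothetical chain: two contiguous runs, i.e. a file in 2 fragments. */
    unsigned chain[] = {100, 101, 102, 103, 250, 251, 252};
    print_runs(chain, sizeof chain / sizeof chain[0]);
    return 0;
}
```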
—EncMstr (talk) 09:26, 3 December 2015 (UTC)
- @EncMstr: This discussion is about some claims that I removed from the article, which have since been re-added by Codename Lisa in slightly different formulation.
- From your message: "For a file of N clusters, the upper bound is N fragments" — that's not really relevant to this discussion. We're talking about limits in the sense that the filesystem will fail ("not sustain" in Codename Lisa's formulation) when there are too many fragments per file or per file system. I was skeptical, but it turns out that NTFS has such a failure mode, see [9]. -- intgr [talk] 10:17, 3 December 2015 (UTC)
@EncMstr: Well, it appears that Codename Lisa is out. Can you please chime in, do you agree with the changes I made to the article? -- intgr [talk] 22:46, 10 December 2015 (UTC)
TL;DR
@Intgr: TL;DR. It is obvious that the conversation above has led nowhere. Please explain to me: Why did you delete this:
However, regardless of the performance issue, no file system can sustain unlimited fragmentation. For each fragment, there needs to be one additional piece of metadata that records its location and affiliation to its file. Each piece of metadata itself occupies space and requires processing power and processor time. When the maximum fragmentation limit is reached, write requests will fail.<ref name=":0" />
Also this:
Hence Microsoft engineers recommend regularly defragmenting NTFS even when used on SSDs.<ref name="hanselman" />
...fails verification. Period. You did read the blog posts I hope? Fleet Command (talk) 15:13, 12 December 2015 (UTC)
- @FleetCommand: Thank you for joining this discussion, it badly needs some new blood. Firstly, I didn't delete it all outright, I rewrote it to conform to my best understanding of the relevant issues. I sense some negativity from you, but please have some patience and give me benefit of the doubt.
- The claim that I object to most is "regardless of the performance issue, no file system can sustain unlimited fragmentation". First, it's not really clear what it means — the word "sustain" is not a technically accurate term. Obviously, a file system can run out of space faster if it needs to write out more metadata about file fragments.
- But while discussing with Codename Lisa, I got the impression that that's not what it means, but rather, a rewording of this quote from the Hanselman blog article: "If an SSD gets too fragmented you can hit maximum file fragmentation (when the metadata can’t represent any more file fragments) which will result in errors when you try to write/extend a file."
- The assumption underlying this claim appears to be that the Hanselman source talks about all file systems. But I believe that's not true — the article repeats lots of times that it's talking about Windows, and includes quotes from "developers on the Windows storage team".
- Now compare that Hanselman quote with the introduction in this Microsoft knowledge base entry about NTFS: "A heavily fragmented file in an NTFS file system volume may not grow beyond a certain size caused by an implementation limit in structures that are used to describe the allocations."
- Is it unreasonable to connect these dots? The quote in Hanselman's, from a member of the Windows storage team, appears to match up to a specific limitation in NTFS — the primary Windows file system. Then maybe the quote in Hanselman's blog didn't talk about all file systems having this limitation, but just NTFS?
- If this fragmentation limit was a common problem in file system design and affected all file systems, then there should be more and better sources to support the claim. I spent hours trying to find other sources about limits of fragmentation in file system design. I couldn't find anything better than this TechNet article about NTFS.
- The text "For each fragment, there needs to be one additional piece of metadata that records its location and affiliation to its file. Each piece of metadata itself occupies space and requires processing power and processor time." is correct and accurate. I removed it because it's not directly related to the NTFS limitation I was describing. In retrospect, I probably should have moved it instead of deleting.
- Lastly, "When the maximum fragmentation limit is reached, write requests will fail" was not deleted, but merely rephrased as "very fragmented files on NTFS can reach this limit, whereby writes to the file will fail even if there is free space available". The reference also wasn't deleted.
- "fails verification. Period. You did read the blog posts I hope?" - Yes, I read the Hanselman blog article, and that was my attempt at summarizing the article. Re-reading it now, I agree, I did a bad job. We can revisit that after we sort out the core issue here. -- intgr [talk] 01:16, 13 December 2015 (UTC)
- "Is it unreasonable to connect these dots?" Yes, not only is it unreasonable, it is also forbidden by policy. Abandon anything that has NTFS in it. Windows supports FAT12, FAT16, FAT32, NTFS, ReFS, ExFAT and EFS. (I excluded file systems that cannot be defragged.) Abandon all other assumptions too, such as "Obviously, a file system can run out of space faster [~snip~]" and "If this fragmentation limit was a common problem [~snip~]". You are invoking false dichotomies while there are a zillion other options. These assumptions are so dangerous that if you take them for granted and discuss, it can quickly alienate the other party.
- "the word "sustain" is not a technically accurate term". "Sustain", "support", "handle", "allow", "permit", "function properly with". Take your pick. Looking at the very top of the discussion, "handle" is chosen by S.H. I advise "support".
- "I got the impression that that's not what it means". What did you think it meant?
- Addendum: I re-wrote sentences so that there is no strong claim that there definitely is a hard maximum fragmentation limit for absolutely every file system. I wrote "if the maximum fragmentation limit is reached". File systems that have no such limit (if there are any) do not qualify. Fleet Command (talk) 15:57, 13 December 2015 (UTC)
- @FleetCommand: Thank you, that's much better. So now we've established that this fragmentation limit does not necessarily affect all file systems.
- I agree that my reasoning earlier was based on synthesis, and thus discouraged by Wikipedia policies. But I believe now synthesis is no longer necessary:
- Now my concern is: you're basing this on one source (the Hanselman blog) that is unspecific about which file systems are affected by a fragmentation limit. (I find that the Hanselman source does not seem to pass WP:RS: it's not published by Microsoft as stated in the citation, but is the personal blog of Scott Hanselman, and it quotes anonymous "developers on the Windows storage team").
- You say "Abandon anything that has NTFS in it" — but why? In this discussion, I provided two reliable, although primary, sources that state that a fragmentation limit exists in NTFS:
- Microsoft knowledge base entry: "A heavily fragmented file in an NTFS file system volume may not grow beyond a certain size caused by an implementation limit in structures that are used to describe the allocations"
- Microsoft TechNet blog entry "The Four Stages of NTFS File Growth, Part 2" (this one was used in edits to the article that you reverted): "A file’s attribute list has a hard limit of how large it can grow. This cannot be changed.", "So it is possible to hit a point where a file cannot add on any additional fragments.", "What these messages are trying to tell us is that the attribute list has grown to its maximum size and additional file fragments cannot be created."
- I'm not trying to claim that NTFS is the only file system affected — I never claimed that. But these sources provide a good basis for making the claim that NTFS has a fragmentation limit, right?
- First thing, first: You didn't answer my one question.
- 'So now we've established that this fragmentation limit does not necessarily affect all file systems.' No! On the contrary, I made a point of being completely silent about whether it does or does not.
- 'You say "Abandon anything that has NTFS in it" — but why?' Because we have insufficient data. You struggle to make up for it by bringing in sources from the Core team and Microsoft Support, but you cannot fill this void without resorting to argumentum ad ignorantiam or other fallacies which have not been tried yet. As long as you don't have a source that says "File System X has no fragmentation limit" or a source that says "all file systems have a hard fragmentation limit", any attempt to read between the lines is non-scientific prejudice.
- '[~snip~] does not seem to pass WP:RS'. I investigated. Hanselman seems an author of good standing; we can trust him acting as a secondary source. Wikipedia articles cite the notoriously biased Paul Thurrott! Hanselman is in much better standing in comparison, and in this specific post he is receiving critical review from another author in good standing. Fleet Command (talk) 13:03, 14 December 2015 (UTC)
- @FleetCommand: It seems to me that we're mostly in agreement, just talking past each other. Sorry that I'm writing such long messages, I am hoping it makes my position clearer so the misunderstanding can be identified.
- "First thing, first: You didn't answer my one question." - You mean this question: "What did you think it meant?"? It was about this quote from the article: "regardless of the performance issue, no file system can sustain unlimited fragmentation." I found that the original was vague enough that it could mean multiple things. I didn't answer because it's a moot point now anyway: you rewrote this sentence in two edits and I agree with your new version, so I don't see any reason to continue discussing this.
- "Hanselman seems an author of good standing; we can trust him acting as a secondary source" - You've done nothing to convince me that it meets the criteria set in WP:RS (your argument is a red herring: RS doesn't require "good standing" and comparing it to another bad source has no bearing on RS). But I don't want to argue about this; for now I agree that the Hanselman source can be kept in the article.
- "I made a point of being completely silent about whether it does or does not." - You're misunderstanding me here. I agree entirely with this point and that's exactly what I was trying to say. When I said "does not necessarily affect all file systems" I meant: it might or it might not affect them all, we don't know for sure.
- "As long as you don't have a source that says "File System X has no fragmentation limit" or a source that says "all file systems have a hard fragmentation limit", any attempt to read between the lines is non-scientific prejudice."
- I think we're talking past each other here.
- When I bring up NTFS, I'm not trying to state either of these two things. I'm not trying to claim anything about other file systems — precisely due to the absence of evidence about other file systems. I'm trying to make a separate claim that NTFS is one of the file systems affected by a fragmentation limit, backed up by the TechNet and MS Knowledge Base sources.
- This is reflected in my edits that you reverted. Let me clarify what I meant when I made the changes:
- "File systems may have a limit to the number of file fragments they're capable of storing."
- ^ This says that we know there exist some file systems for which a fragmentation limit applies. Not claiming that it affects all file systems. Not claiming that NTFS is the only one it affects. Not claiming that we know which file systems are affected. This is supported by the Hanselman article too. Do you agree with this statement?
- "Under rare conditions, very fragmented files on NTFS can reach this limit, whereby writes to the file will fail even if there is free space available."
- ^ This is a separate claim from the first. This is entirely supported by the TechNet source. There is no unpublished synthesis. We know for a fact that NTFS is affected by a fragmentation limit, and this explains when this occurs in NTFS and what happens when the limit is exceeded. Did you read the TechNet article to be able to say that "we have insufficient data"?
- It probably can be rephrased to be clearer, but I believe it doesn't commit any logical fallacies. -- intgr [talk] 15:37, 14 December 2015 (UTC)
Allow in-place modification
Although the phrase goes back 10 years to the initial article, what does fragmentation have to do with "allow in-place modification", mentioned at the very beginning of the article? DGerman (talk) 01:12, 29 November 2016 (UTC)
External links modified
[edit]Hello fellow Wikipedians,
I have just modified 2 external links on File system fragmentation. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
- Added archive https://web.archive.org/web/20041117233607/http://www.eecs.harvard.edu/vino/fs-perf/papers/keith_a_smith_thesis.pdf to http://www.eecs.harvard.edu/vino/fs-perf/papers/keith_a_smith_thesis.pdf
- Added archive https://web.archive.org/web/20110519215817/http://video.google.com/videoplay?docid=6866770590245111825&q=reiser4 to http://video.google.com/videoplay?docid=6866770590245111825&q=reiser4
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}}
(last update: 5 June 2024).
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—InternetArchiveBot (Report bug) 16:31, 30 September 2017 (UTC)