Talk:Oracle ZFS/Archive 1
This is an archive of past discussions about Oracle ZFS. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Storage capacity
I'm removing this: "Sun believes that this capacity will never be reached, meaning that this filesystem will never need to be modified to increase its storage capacity. Although today such an assertion seems reasonable, a number of similar statements made in the past have been proven famously wrong." The quotes given on the only referenced Sun page reference Moore's law and offer plenty of market spin ("According to Bonwick, it has to be. "Populating 128-bit file systems would exceed the quantum limits of earth-based storage. You couldn't fill a 128-bit storage pool without boiling the oceans.""), but it is never stated that "Sun believes that this capacity will never be reached." Unsigned comment by 24.248.74.254 on 18:57, 9 June 2005 (UTC)
Regressing to stub
Thus far, Sun has not:
- Released the file system yet
- Given any hard technical details
- Committed to shipping ZFS in 2005
- Committed to the ZFS feature set
So, since Wikipedia is not a crystal ball, I think it would be best to remove most of the hype in this article and turn it back into a stub. —Ghakko 16:34, 18 July 2005 (UTC)
FYI, regarding the release date: when I was in a Solaris 10 training class, the trainer mentioned that the ZFS developers plan on it being completed in Oct or Nov. However, it has not been decided yet whether ZFS will be released on its own, or whether it will be released with the first major update of Solaris 10 next spring. Also, the current version doesn't support ZFS as a boot device, and when it is made public it probably won't support it either. That support will come later with an OBP update. amRadioHed 01:38, 7 October 2005 (UTC)
- ZFS is released, stub notice removed.
Comparison with other filesystems
Once the ZFS specs become clear, suggest the page at Comparison_of_file_systems is updated with ZFS details. --Oscarthecat 07:58, 7 November 2005 (UTC)
It's escaped.
Source and binary for ZFS have been released; see [1].
Requested move
Since the moniker for ZFS is now actually inaccurate (see [2]) and the other articles that expanded to ZFS were all redlinks, save for this one, this article should occupy the 'main' ZFS name. --moof 18:06, 22 November 2005 (UTC)
Destubbify
I propose that the stub tag is long since obsolete and it should be removed. I propose to do so Monday, 11/28/05 unless there are serious objections. Georgewilliamherbert 20:13, 24 November 2005 (UTC)
- Also, the Request for Expansion... no notes here in the discussion page or the RFE central page on what they were looking for, and the article seems expanded to me. I propose also removing that tag on 11/28/05 barring serious objections. Georgewilliamherbert 20:24, 24 November 2005 (UTC)
Technobabble
Some terms in this article are in dire need of clarification. e.g. "automatic length and stride detection" - sounds more like something from a triple-jump event, also "deadline scheduling" --OscarTheCattalk 10:03, 23 February 2006 (UTC)
- I agree. The terms "volume management", "storage pool", "transactional object model", "block pointer", "target block", "metadata block", and "synchronous write semantics" need to be better explained. -P
- In true encyclopedic manner, these should be linked to separate articles where they are properly explained, but certainly not here. --Puellanivis 00:54, 23 August 2006 (UTC)
- Some of these terms are industry generic, some are ZFS specific. The ZFS-specific ones should probably be explained here. The generic ones may either deserve WP pages or Wiktionary pages... Georgewilliamherbert 05:24, 23 August 2006 (UTC)
Advertisement for Sun?
Is it just me or does this whole article read like an advertisement? Ken 18:24, 24 April 2006 (UTC)
- There are a few mentions, but it seems OK. I've removed one superfluous mention by re-writing two sentences. Mindmatrix 18:30, 24 April 2006 (UTC)
Stranded Storage
Sun's marketing states "stranded storage". I'd like to happily pretend this means I can mirror data onto a drive, unplug and walk offsite with the drive, walk back with the drive a week later, plug it in, and have it automatically sync up. But any real information on this would be appreciated. Myren 01:42, 7 June 2006 (UTC)
Looks like Apple's interested in porting ZFS too
See http://www.osnews.com/story.php?news_id=14473
Linux
Is there any info available about the status of the effort to port this to Linux? Is this an internal Sun thing, or is there a project surrounding it? The article doesn't make this very clear... --Quasar 15:07, 9 July 2006 (UTC)
- I believe that it would have to be integrated into the Linux kernel for support. That is impossible, seeing as the licenses of the Linux kernel and ZFS (GPL and CDDL) are not compatible. It's very unfortunate! —msikma <user_talk:msikma> 08:13, 24 August 2006 (UTC)
- You can read about the project at http://code.google.com/soc/opsol/appinfo.html?csaid=1EEF6B271FE5408B It has links to the project home page and a blog detailing his progress. --NapoliRoma 13:08, 24 August 2006 (UTC)
"advert" for Sun.
Not yet added to the page is that ZFS lacks transparent encryption, a la NTFS, and that presently only n+1 redundancy is possible. n+2 redundancy (RAID level 6) is also in the development branch only, via the OpenSolaris distribution. These omissions in the production branch of Solaris, as of the current Solaris 06/06 release, do diminish ZFS's attractiveness in a lot of the situations at which the new FS is targeted. Adding this to the talk page because I've never contributed to Wikipedia before, and some double-checking is needed anyway. I've been evaluating ZFS for some weeks now, so I'm fairly sure I'm correct :) My point being that backing up huge FSs (24TB on the latest 4U Sun 4600 box) is an absolute b*tch, and n+2 redundancy and transparent encryption are nearly standard. The workaround I had, of using PGP command line to encrypt on an x86 OpenSolaris build, doesn't fly because PGP command line is SPARC-only as of writing, and loading OpenSolaris (for the n+2) on a SPARC box of large capacity negates all the nice support contracts.
By the by, I think the capacity specs ("2^64 — Number of devices in any zpool" etc.) are redundant technicalia in the main entry. The quotes given to explain what these specs mean are more descriptive.
One last point: there is no context given for the development of ZFS in the main entry, e.g. how to stripe huge volumes for throughput, historic limitations of other FSs (NTFS, I'm looking at you, hanging a quad-core Xeon when a folder gets more than 20,000 objects!), or how the choice of copy-on-write, which gives the snapshots, is probably influenced more by backup difficulties, and so on.
Someone could make a point or two about how general-purpose CPUs make a fair alternative to proprietary embedded OSes such as those used by Adaptec and LSI Logic (on Intel XScale), and how this means less data abstraction when things go badly wrong, i.e. with the embedded controllers you've no chance to talk to your files, save as volumes via the embedded BIOS, should you get a corrupted drive.
Last, but not least, given Sun's recent product launches, there ought to be a comparison to RAID systems of similar capability (capacity, throughput), such as those from DataDirect, and the whole interconnect problem with such a large array (InfiniBand? FC? 10GbE? Switching latency? And hence maybe even a touch upon the forthcoming Niagara II Sun chip, which has 10GbE on-die)... in other words, how do you take advantage of all the "specifications goodness" :)
Will leave this for more seasoned contributors to do with as they see fit, but will revisit sometime soon.
Kind regards to all.
Unicode Support?
Does ZFS use Unicode natively, or even support it? I think that might be an interesting tidbit to add to the article. --Saoshyant 14:13, 11 September 2006 (UTC)
NetApp lawsuit
NetApp has sued Sun for patent violations in ZFS. I think a heading regarding these legal issues should therefore be added. Laxstar5 14:36, 6 October 2007 (UTC)
- Perhaps a "Controversy" section? Although I'm not really sure that the patent lawsuit is germane to the article; given the sue-happy nature of business in the US, if every tech article had a list of the patent lawsuits associated with it, they'd all be 100% larger than they are, and 50% less informative. Let's just note the facts. Rubicon 03:43, 8 October 2007 (UTC)
Checksums?
Sun's FAQ [3] indicates 64-bit checksums but the article indicates 256-bit checksums. Anyone have a justification for 256? --Treekids 21:45, 1 October 2007 (UTC)
- The link you provided, being from Sun, is likely accurate. I've updated the article. Rubicon 05:17, 5 October 2007 (UTC)
- Shoot. According to [4], ZFS does use 256-bit checksums. Now I'm upset. Looks like marketing hasn't been speaking with development. Reverting to 256. -Rubicon 07:56, 5 October 2007 (UTC)
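For the record, the 256-bit figure is easy to sanity-check against the checksum algorithms ZFS actually ships: SHA-256 (one of the selectable checksums) is 256 bits by definition, and fletcher4 also yields 256 bits (see the sketch under "Size of (Fletcher-based) checksum?" below). A quick, illustrative check in Python:

 import hashlib
 # SHA-256 digests are 32 bytes = 256 bits, regardless of block size
 print(hashlib.sha256(b"any block contents").digest_size * 8)  # 256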
Boiling the oceans
Perhaps we could remove the boiling the oceans quote since it really does not add anything to the article 66.68.63.11 06:34, 2 June 2007 (UTC)
- It adds to the history and understanding of the article: this quote is one of the notable things in ZFS's history, and it helps to put its architectural limitations (or lack thereof) in terms more people can understand. Almost all of us who are used to scientific notation still don't have a "feel" for numbers like 18.4 × 10^18.... Hga 11:32, 2 June 2007 (UTC)
- Along the same lines, is it really necessary to express all the storage mentioned in the article as a power of two? I don't recall being overwhelmed with joy that the 120 GB hdd I bought two years ago had a capacity of a touch less than 2^37 bytes. Of course, when speaking of things like 'the number of entries in a directory', it's much neater (to my eyes) to say "2^48" than it is to say "281,474,976,710,656" - but do we really need to show, for example, 16 EB in terms of powers of two of bytes? Rubicon 07:20, 9 October 2007 (UTC)
- I find it amusing that while I can have a 16 EiB file, the filename limit is still 255 characters. Needless to say, the latter is significantly more relevant to me. Superm401 - Talk 11:11, 21 February 2008 (UTC)
- Could this be for POSIX compliance (NAME_MAX)?.--NapoliRoma (talk) 13:55, 21 February 2008 (UTC)
- Given that the math has been done within the article demonstrating that the boiling the oceans quote is factually inaccurate, really, shouldn't it be yanked? I came to this article to learn about ZFS, and I found that portion—right at the top of the article as it is—baffling and unhelpful. (As is the whole 2^x business, which is meaningless to me and surely the great majority of readers.)--WaldoJ (talk) 18:04, 14 September 2008 (UTC)
- It shouldn't be removed, but its inaccuracy should be clarified (the German entry has some math on it, saying that it would take at least a 156-bit filesystem to boil the oceans). 78.53.101.166 (talk) 14:55, 14 October 2008 (UTC)
limitations need citing and impartiality
The article states:
"ZFS lacks transparent encryption, a la NTFS, and presently only n+1 redundancy is possible. n+2 redundancy (RAID level 6) is only in the development branch—via the OpenSolaris distribution[1]. These omissions in the production branch of Solaris (as of Solaris 06/06 current release) diminishes ZFS's attractiveness in several situations at which it's targeted."
I think that this could be written more impartially, but still convey the same facts by being rewritten to read:
"Transparent encryption is still in the process of being implemented for ZFS. [5] Some features, including N+2 redundancy (RAID level 6), which are available in OpenSolaris and Solaris Express, are not yet available in Solaris 10."
Readers can draw their own conclusions about whether those limitations diminish ZFS's attractiveness.
At a minimum, the situations in which ZFS's attractiveness is diminished need to be explicitly listed. Better yet, a published article backing up this claim could be cited.
(For the record, I am a ZFS developer.)
Mahrens 06:02, 12 September 2006 (UTC)
Citation needed.
Will some expert kindly provide a source for the sentence quoted below, so that I can reach my goal of paring down the Citation Needed references on the following page? http://en.wikipedia.org/w/index.php?title=Category:Articles_with_unsourced_statements&from=Z
The quota model and other useful management capabilities suggest the possibility of per-user filesystems, rather than simple home directories.[citation needed]
Sincerely, GeorgeLouis 06:27, 29 October 2006 (UTC)
128 bit?
Seeing as all the limits are 2^64 or less, I wonder why the 128 bit denomination. Anyone care to explain? It's probably something worth mentioning in the article. -Anonymous —The preceding unsigned comment was added by 83.138.218.1 (talk) 19:50, 19 December 2006 (UTC).
- I think you're right. I don't see anything 128-bit'ish about this filesystem. According to the article, it certainly can't store more than 2^78 bytes in a disk array, and it can only fill that if you create 16384 filesystems. There's a conflict between the specs and the claims in the Capacity section. It seems like "128-bit" is at best half-true marketing hype from Sun. If there's any basis for this number at all, it should be addressed in the article. Either way, someone who knows should definitely explain this. There are a lot of skeptics. -- Bilbo1507 00:13, 26 June 2007 (UTC)
- (Responding to myself.) I happened upon this discussion. This really needs to be explained in the article. I'd write it, but I don't feel qualified to do so. Is there someone willing to write it who is familiar enough with ZFS to write a robust explanation and cite sources? http://linux.slashdot.org/comments.pl?threshold=0&mode=nested&commentsort=0&sid=238977&cid=19569673 -- Bilbo1507 19:28, 5 July 2007 (UTC)
- Is it something like what MS did with NTFS? NTFS is a 64 bit filesystem but the first few implementations were 48 bit. So is it like 128 bit design but current implementations are limited to 64 bits? --soum talk 19:47, 5 July 2007 (UTC)
- Apparently, POSIX defines no 128-bit interfaces yet, 64 being the max available. Sun could've written their own libraries, but that would've broken POSIX compliance. So while I think it's nice that Sun can say they have a 128-bit, POSIX-compliant FS, given the situation, they could've just as easily said a 256-, 384-, or 512-bit FS. I'm not going to rock the boat and say "ZFS isn't 128 bits!", but for all practical purposes, ZFS is (for the time being) a 64-bit FS. Presumably, once POSIX catches up, it should be a (relatively) simple matter to scale up to 128 bits, given that ZFS's block pointers allocate a 128-bit address. [6] -Rubicon 08:26, 5 October 2007 (UTC)
- Personally, I don't know why we limit anything nowadays to 32, 64, 128 or any other number of bits... why choose any fixed word, dword, qword or whatever size at all?
To have a file or memory allocation subsystem of any size, one could use start/end delimiters (alternatively, escape sequences) and process them from memory/disk. Doubling any delimiter, in case a data byte equals the delimiter, can be used to make the scheme fully data-transparent. Any structure on the stream can be referenced through this logical means, providing effectively endless storage to be accessed/referenced. Even logical addresses themselves can be made extensible to any size needed in this way, without any theoretical need to touch code at all. So we could write this ghost FS today and use it forever. ZFS, like any other modern FS, is looking more and more like a database - using logs, user/OS isolation, two-phase commit, compression, blocks/clusters, redo logs and similar, reinventing the wheel only on a lower level. At the same time my feeling is that we are only increasing limits but still keeping them around to be hit one day. Remember, 640K ought to be enough for everyone. -- Anonymous 18:55, 1 June 2009 (UTC) —Preceding unsigned comment added by 89.201.145.160 (talk)
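(The delimiter-doubling scheme described above is classic byte stuffing, not anything ZFS actually does. A minimal Python sketch of the idea, for illustration only:)

 DELIM = b"\xff"  # arbitrary delimiter byte chosen for this sketch

 def frame(payload: bytes) -> bytes:
     # double every delimiter byte inside the data, then terminate
     # the record with a single (undoubled) delimiter
     return payload.replace(DELIM, DELIM * 2) + DELIM

 def unframe(framed: bytes) -> bytes:
     # strip the terminator, collapse doubled delimiters back to one
     assert framed.endswith(DELIM)
     return framed[:-1].replace(DELIM * 2, DELIM)

 assert unframe(frame(b"data \xff with delimiter")) == b"data \xff with delimiter"

A real reader would additionally have to scan for an odd-length run of delimiters to find the true end of a record, which hints at why fixed-width pointers won out in practice.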
Apple / Solaris
The listed operating systems for ZFS are as follows: "Supported operating systems Solaris, Mac OS X v10.5". OS X 10.5 hasn't even been released yet. Other OSes, e.g. FreeBSD, have ports in progress; however, they are not listed in the description as being supported. Suggestion: remove OS X 10.5 from the list until the release of OS X 10.5 with ZFS has been confirmed.
- Done. Please sign your posts. Chris Cunningham 11:28, 29 December 2006 (UTC)
ZFS support seems to be gone from build 9a410. It's no longer in the GUI and there appear to be no command-line binaries anymore, fwiw. Jgw 02:09, 5 May 2007 (UTC)
Lead
The lead says "It is notable..."; however, that sounds PoV (as it implies deliberately drawing attention to some way of interpretation). Please fix it to be something neutral. --soumtalk 07:06, 14 March 2007 (UTC)
- I made the change. I am open for a discussion if anyone is opposed to it. --soumtalk 15:06, 20 March 2007 (UTC)
ZFS administration gui
http://blogs.sun.com/talley/entry/manage_zfs_from_your_browser —The preceding unsigned comment was added by 71.7.147.153 (talk) 04:19, 18 April 2007 (UTC).
Z-FS Disambiguation...
I was doing research on several NAS devices that claimed to support Z-FS... it turns out that it is not ZFS but instead a filesystem by a company named Zetera trademarked Z-FS. The only details on the FS I can find are in a marketing blurb here:
http://www.zetera.com/index.php?option=com_content&task=view&id=4&Itemid=7
Should there be a disambiguation note about this, since I have seen this cropping up on several different NAS devices? Especially since several of them omit the hyphen and actually list it as ZFS in some of their manuals/dialog boxes.
Does anyone know which came first (trademark-wise): ZFS from Solaris or Z-FS from Zetera? (More for curiosity than anything.)
Aelana 18:39, 19 April 2007 (UTC)
- Done. --soum (0_o) 20:08, 19 April 2007 (UTC)
Confusing use of "file system"
The article sometimes uses the term "file system" to mean "a file system type", and sometimes to mean "an instance of a file system". This makes the article quite confusing. Especially confusing is this sentence: "Unlike a traditional file system, which resides on a single device and thus requires a volume manager to use more than one device, ZFS is built on top of virtual storage pools called zpools". Does "reside" mean the same as "use" here? If not, what is the difference? If yes, the sentence is contradictory: it says that traditional file systems reside on a single device, but sometimes not. I do not know the correct technical terms that distinguish the type from the instance. Please update the article with the correct unambiguous terms. -Pgan002 18:33, 30 April 2007 (UTC)
- Been looking at the Storage pools section of the article, and, yes, it is probably confusing for someone who's never dealt with something like LVM. Should I rewrite it, maybe, with more of a layperson in mind? -Rubicon 05:33, 5 October 2007 (UTC)
On Portal:Free software, ZFS is currently the selected article
Just to let you know. The purpose of selecting an article is both to point readers to the article and to highlight it to potential contributors. It will remain on the portal for a week or so. The previous selected article was OpenDocument. Gronky 12:24, 28 May 2007 (UTC)
- The selected article box has been updated again; ZFS has been superseded by Emacs (to mark the release of GNU Emacs v22). Gronky 13:20, 5 June 2007 (UTC)
ZFS in OS 10.5
It only seems logical to me that if ZFS is the default fs in OS X, then it must be able to act as a root partition. The statement in the article that says otherwise is outdated, yes? --70.91.110.41 21:15, 6 June 2007 (UTC)
It's nothing but a rumor until Apple makes it official. Since when did wiki become a rumor site?
Marc Hamilton has backed away from his original 'confirmation' in the comments section of his blog. He says he has no knowledge of Apple's product plans.
- Here is a link to confirmation that it will NOT be included in Leopard. [7] Should we remove the rumor then?--147.160.136.10 17:42, 12 June 2007 (UTC)
- It was not a rumor - It's widely known that Apple's OS team had ZFS running. That said, it's clearly not in Leopard, per yesterday and today's Apple announcements. We could speculate why or what's going on, but that's pointless, and Wikipedia is not supposed to speculate. It's not in Leopard, and so it should be off the list of supported OSes. Georgewilliamherbert 17:51, 12 June 2007 (UTC)
- Annnd it's back. See [8] in which Apple clarifies that it's in there but not available as a default filesystem or bootable filesystem. Sort of. The clarification isn't entirely clear, but they do say that ZFS is on the system. Georgewilliamherbert 00:44, 13 June 2007 (UTC)
Storage Units
Reading [this], ZFS' ultimate limitation is stated in terms of ZB, not ZiB; I therefore infer that where the article states EiB, it would be more accurate to use the term EB. Can't find supporting evidence for the 'max size of single file', 'max size of an attribute', etc. I've never liked the practice of using binary prefixes - always seemed like a way for a storage vendor to scam a customer into thinking they were getting more storage than they actually were. Does anyone have any references for the numbers given in the article - and proper storage units? Rubicon 07:09, 9 October 2007 (UTC)
ZFS in the release version of Mac OS X 10.5
I've tried to update the info on the current Mac OS X implementation, but I was not sure what of the older rumors/announcements to leave in. I hope I've found a good compromise, using as much as possible of the original text. Do you agree? Xnyhps (talk) 14:38, 30 December 2007 (UTC)
ZFS vs. zFS; google hits
In my last edit summary, I mentioned that zFS (as in Z/OS) had "fewer than 200 Google hits". I don't know what mutated search I did then, because I just tried it again and got more like 17,000 hits. This still seems to me to be fewer than would justify having it specifically mentioned in the lede, though.--NapoliRoma (talk) 21:58, 17 February 2008 (UTC)
Casablanca?
No comment on what was discussed, but Linus Torvalds and Jeff Bonwick have been talking http://blogs.sun.com/bonwick/entry/casablanca Robmbrooks (talk) 10:17, 21 May 2008 (UTC)
- The Linux compatibility part in the main article is wrong: the CDDL allows combining CDDL code with code under any other license, and the GPL does not forbid linking a GPL work against CDDL'd code. Because of this compatibility with all other licenses, Sun will not sue people who use ZFS (as long as they follow the CDDL), and Sun is happily waiting for people who would like to sue Sun over a CDDL/GPL combination, in order to defend against the attempt to forbid it. —Preceding unsigned comment added by 87.158.110.240 (talk) 17:50, 14 September 2008 (UTC)
Linux section: OR?
I'm concerned that the Linux section suffers from original research. The fact that ntfs-3g (a FUSE filesystem) performs "well" in a single benchmark is used to suggest that the ZFS port could perform "excellently". This is a bit of a leap of faith, logically. -- 87.194.117.25 (talk) 14:34, 24 May 2008 (UTC)
Now it reads: "This shows that reasonable performance is possible with ZFS on Linux after proper optimization." Are there any data supporting that? ZFS and NTFS are very different. I think this comparison is inappropriate and should be removed. At the very minimum, it should be worded a lot more carefully. 85.177.245.75 (talk) 15:09, 10 March 2009 (UTC)
Storage Pools
Can the information on storage pools be clarified or described in non-tech terms? I understand it this way: all drives show up as one device. If I have a computer with a 500GB internal drive formatted with ZFS (I'll call it Zelda), and I plug in a 500GB FireWire ZFS drive, will it automatically expand the capacity of the Zelda volume to 1TB? Will any drive I add or remove merely change the total available disk space under one volume? --24.249.108.133 22:29, 16 July 2007 (UTC)
- See this; o.p. has the wrong idea about how zpools, vdevs, etc. function. -Rubicon 05:38, 5 October 2007 (UTC)
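To make the point above concrete: in ZFS, usable capacity adds up across top-level vdevs, while redundancy is provided inside each vdev, so attaching a second disk as a new top-level vdev grows the pool rather than mirroring it. A toy Python model of the capacity arithmetic (illustrative only, not ZFS code; real pools also lose some space to metadata, padding and reserved slop):

 def mirror_capacity(disk_sizes):
     # an n-way mirror stores only one copy's worth of data
     return min(disk_sizes)

 def raidz_capacity(disk_sizes, parity=1):
     # raidz/raidz2/raidz3 give up roughly 'parity' disks to redundancy
     return (len(disk_sizes) - parity) * min(disk_sizes)

 def pool_capacity(vdev_capacities):
     # data is dynamically striped across top-level vdevs, so these add up
     return sum(vdev_capacities)

 # a pool made of one 2-disk mirror plus one 4-disk raidz1, 500 GB disks:
 print(pool_capacity([mirror_capacity([500, 500]),
                      raidz_capacity([500, 500, 500, 500])]))  # 2000 (GB)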
The section about devices "... designated as volatile read cache (ARC)" is confused. I have replaced it with a description of SSDs used as L2ARC devices ("Readzilla"). The volatile ARC does in fact exist, but does not require any special device - it is just (kernel) memory - while the write log (or "Logzilla", used to speed up the synchronous writes needed for some POSIX semantics) requires SSDs with good write performance and endurance. - Henk Langeveld (talk) 00:58, 25 December 2009 (UTC)
Solaris implementation - Out of Date
In the fourth entry under Solaris implementation issues, it says:
New vdevs can be added to a storage pool, but they cannot be removed. A vdev can be exchanged for a bigger new one, but it cannot be removed to reduce the total pool storage size, even if the pool has enough unused space. The ability to shrink a zpool is a work in progress, currently targeted for a Solaris 10 update in late 2007.
Well...it is 2008. Did they or didn't they implement this new feature? This isn't the only item with this problem. Maybe someone who uses the latest release can go through and update this section.
12.207.190.29 (talk) 04:07, 31 January 2008 (UTC) KRowe (not registered)
- I wanted to add that there is a new version of OpenSolaris (2008/05) with a better implementation of ZFS. It should be noted and updated. --HeffeQue (talk) 18:09, 7 May 2008 (UTC)
- It's Dec 2009 - removing vdevs is not there yet. Henk Langeveld (talk) —Preceding undated comment added 01:04, 25 December 2009 (UTC).
Invalid Citation Removed
Whoever used this citation: it is invalid. Wikipedia is not a place for you to cite sources that rely on personal opinion. Blogs are not valid citations.
The name originally stood for "Zettabyte File System", but is now an orphan acronym.[1]
--Ramu50 (talk) 22:50, 15 June 2008 (UTC)
This citation is originally in References #5 [9]
- Hi Ramu50, the blog in question was written by Jeff Bonwick, the person who designed (and named) ZFS. I agree with you that blogs should be treated with suspicion, but I'd consider this one to be a valid source.--NapoliRoma (talk) 01:28, 16 June 2008 (UTC)
Oh, I didn't know Jeff Bonwick was the person who designed ZFS. I am still learning about CoolThreads, Logical Domains, ZXTM, etc. But blogs usually contain personal viewpoints that don't represent an official position; even though he might be the director of the project, the other project contributors at Sun Microsystems still need to be respected nevertheless. --Ramu50 (talk) 03:31, 16 June 2008 (UTC)
Its status as an orphaned acronym is also stated in the OpenSolaris ZFS FAQ.
"Originally, ZFS was an acronym for "Zettabyte File System." The largest SI prefix we liked was 'zetta' ('yotta' was out of the question). Since ZFS is a 128-bit file system, the name was a reference to the fact that ZFS can store 256 quadrillion zettabytes (where each ZB is 270 bytes). Over time, ZFS gained a lot more features besides 128-bit capacity, such as rock-solid data integrity, easy administration, and a simplified model for managing your data."
Legios (talk) 02:53, 3 March 2009 (UTC)
- Okay, I didn't get any feedback, so I've changed ZFS back to an orphan acronym. In doing so, I changed the citation from Jeff Bonwick's blog to Sun's ZFS FAQ.
- Legios (talk) 06:19, 6 March 2009 (UTC)
Supported operating systems
PC-BSD supports ZFS since version 7. —Preceding unsigned comment added by 91.32.78.194 (talk) 13:59, 3 October 2008 (UTC)
- This is due to PC-BSD being based on FreeBSD 7. I don't think it really warrants mentioning. Legios (talk) 01:58, 17 March 2009 (UTC)
vandalism?
last good version: http://en.wikipedia.org/w/index.php?title=ZFS&oldid=243119716
Newer revisions read like copypasta and don't have a neutral standpoint. Too much business talk and exaggeration. 78.53.100.250 (talk) 11:56, 14 October 2008 (UTC)
Questionable: A modern hard disk devotes a large portion of its capacity to error detection data.
In my opinion, this is not a useful statement, for two reasons: First, "large" is a very subjective measure; it would be better to give a concrete number, such as "5%". Second, the assertion itself is incorrect, as modern hard disks use data coding and modulation schemes where it is not possible to simply distinguish between "data bits" and "error correction bits". The data are written as a stream of symbols, and some maximum-likelihood detection algorithm is used when reading them back. There is no one-to-one relationship between bits and symbols. — Preceding unsigned comment added by 91.113.10.245 (talk) 10:04, 15 August 2011 (UTC)
Multiple ZFS implementations, on-disk format changes
The article currently has references to ZFS on-disk format versions (pool versions) past 28.
- 29 (OpenSolaris Nevada b148): RAID-Z/mirror hybrid allocator
- 30 (OpenSolaris Nevada b149): ZFS encryption
- 31 (OpenSolaris Nevada b150): improved 'zfs list' performance
- 32 (OpenSolaris Nevada b151): one MB block support
- 33 (OpenSolaris Nevada b163): improved share support
Not only does "OpenSolaris" no longer exist, the source code to Oracle Solaris 11 which includes these feature additions is closed and the open ZFS implementations, which are based on illumos, do not support them. The illumos ZFS implementation has also begun to diverge from the Oracle one, as new enhancements and fixes are added. So far the version 28 on-disk format has been preserved in this implementation. ZFS "feature flags" are in development for illumos which will replace the one-way on-disk format upgrades: http://blog.delphix.com/csiden/2012/01/11/illumos-meetup-january-2012/ Triskelios (talk) 15:44, 31 January 2012 (UTC)
Data integrity
When you discuss ZFS and silent corruption, there are many Unix sysadmins who refuse to believe that silent corruption exists. It doesn't matter how many research papers you show them; they refuse to believe that silent corruption is a problem. But if you mention ECC RAM, the sysadmins agree that silent corruption can occur in RAM, and then they finally understand that silent corruption can also occur on disks. That is the reason ECC RAM must be mentioned when trying to convince sysadmins that silent corruption is a problem on disks. So, please don't remove the part about ECC RAM. — Preceding unsigned comment added by 217.73.15.6 (talk) 09:21, 15 August 2011 (UTC)
- The question is whether an article about ZFS should be used as an instrument to educate system administrators. The way it is currently written in the article, one wonders why ECC memory is specially mentioned, knowing that data corruption occurs everywhere. 91.113.10.245 (talk) 10:18, 15 August 2011 (UTC)
- So are you saying that wikipedia articles are not for education or to teach people about facts? — Preceding unsigned comment added by 213.114.154.48 (talk) 19:53, 15 August 2011 (UTC)
ZFS on FreeBSD
http://lists.freebsd.org/pipermail/freebsd-current/2007-April/070544.html On the announce page it says ZFS works on i386 systems and will soon work on amd64. So apparently it's working on i386 but not on amd64 yet. Any takes on this? (82.77.127.35 (talk) 13:19, 16 October 2008 (UTC))
- According to the ZFS Tuning Guide on the FreeBSD Wiki, it takes more tuning to get it working well on i386, and I believe amd64 is the recommended platform anyway. At least it was when I was switching to it from gvinum, as I reinstalled as amd64 to get it. Douglaswth (talk) 22:57, 16 October 2008 (UTC)
- I see someone edited the entry related to ZFS on FreeBSD. I think the new text describes better the current situation. Thanks! 89.136.52.242 (talk) 20:10, 18 October 2008 (UTC)
Mixing vdev types
According to the latest ZFS admin guide at http://www.opensolaris.org/os/community/zfs/docs/zfsadmin.pdf, mixing vdev types in a zpool is quite possible (I should test this under FreeBSD 8-CURRENT, but I don't have enough spare disks handy right now) but not suggested, because depending on the mix there may be no guarantee of integrity in case of the loss of some disk (if you mix a stripe with a mirror and lose a disk on the stripe, you lose the whole pool).
An example in the admin guide says that if you mix a mirror and a raidz you can't have the higher guarantees of a mirror; you're limited to the guarantees of the vdev with the lowest protection level.
This is on page 99 of the admin guide. —Preceding unsigned comment added by 80.74.176.55 (talk) 12:04, 4 December 2008 (UTC)
That is correct, mixing vdev types is definitely supported. This "limitation" should be removed. --Mahrens (talk) 03:27, 11 June 2009 (UTC)
max capacity?
In the History section the following is written:
The name originally stood for "Zettabyte File System", the original name selectors happened to like the name, and a ZFS file system has the ability to store 340 quadrillion zettabytes (256 pebi-zebibytes exactly, or 2^128 bytes). Every ZiB is 2^70 bytes.[6]
with the source for this information being [10], although the source states that ZFS can store 256 quadrillion zettabytes instead of the 340 quadrillion zettabytes written in the article. Should this be corrected, or am I mistaken/overlooking something?
Originally, ZFS was an acronym for "Zettabyte File System." The largest SI prefix we liked was 'zetta' ('yotta' was out of the question). Since ZFS is a 128-bit file system, the name was a reference to the fact that ZFS can store 256 quadrillion zettabytes (where each ZB is 2^70 bytes). Over time, ZFS gained a lot more features besides 128-bit capacity, such as rock-solid data integrity, easy administration, and a simplified model for managing your data.
btw another source for the 256 quadrillion zettabytes [11] (page 22) Tdomhan (talk) 09:25, 14 August 2009 (UTC)
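For what it's worth, both numbers check out; they just read the units differently, and the FAQ's "256 quadrillion" is only exact if "quadrillion" is itself taken as binary (2^50, i.e. "pebi", matching the article's "256 pebi-zebibytes"). A quick Python verification (illustrative only):

 total = 2 ** 128  # bytes addressable by a 128-bit file system

 # decimal units: 1 zettabyte = 10**21 bytes, 1 quadrillion = 10**15
 print(total / 10 ** 21 / 10 ** 15)        # ~340.28 -> "340 quadrillion ZB"

 # binary units, as in Sun's FAQ: "ZB" = 2**70 bytes, "quadrillion" = 2**50
 print(total // 2 ** 70 == 256 * 2 ** 50)  # True -> "256 quadrillion ZB"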
Automate archiving?
Does anyone object to me setting up automatic archiving for this page using MiszaBot? Unless otherwise agreed, I would set it to archive threads that have been inactive for 30 days and keep the last ten threads.--Oneiros (talk) 19:34, 9 January 2010 (UTC)
- Done--Oneiros (talk) 00:50, 13 January 2010 (UTC)
Data Integrity
There is a sentence speaking of the speed of ZFS, saying: "While it is also faster than UFS or DragonFly BSD's HAMMER file system, it can be seen as the successor to UFS.[23][24]". I would like this sentence removed, because it links to www.phoronix.com, and Phoronix has quite a bad reputation and is not credible. See here, for instance, for how bad Phoronix is at benchmarks: http://blog.xen.org/index.php/2011/11/29/baremetal-vs-xen-vs-kvm-redux/ — Preceding unsigned comment added by 213.114.151.137 (talk) 00:06, 24 November 2012 (UTC) PS. ZFS is slower in some benchmarks and faster in others, so you should not say ZFS is faster than UFS or HAMMER. — Preceding unsigned comment added by 213.114.151.137 (talk) 00:10, 24 November 2012 (UTC)
Netapp lawsuit
The article probably should make some mention of the Netapp lawsuit since that's significant as long as it's ongoing regardless of whether it has any merit. For example, I've seen some suggestion it may be one of the reasons why Apple abandoned ZFS (Sun weren't willing to indemnify them in case they lost, and they would obviously have been a high profile target). And it seems likely it would be a factor discouraging others outside of Sun from adopting ZFS as long as it remains an issue. Edit: Actually from what I can tell the lawsuit has largely failed [12] but there's still likely merit for brief mention in this article Nil Einne (talk) 13:40, 25 January 2010 (UTC)
The only substantiation I have ever found in my research on the subject of why Apple withdrew from the ZFS project is a quote on a mailing list. One person basically said "hey, if any Apple employees are listening, is it because of the lawsuit between Sun and NetApp?" and an Apple employee (@apple.com email address) acquiesced to the affirmative, but without saying 'yes'.
I updated the above url, to pick up from archive.org, since Oracle has obliterated sun.com as we knew it.
Smuckola (talk) 07:36, 2 August 2012 (UTC)
Linux -> GNU/Linux ?
We are speaking about Linux as a whole system, so we should use GNU/Linux instead of Linux, shouldn't we? The kernel case is well explained afterwards. —Preceding unsigned comment added by Naparuba (talk • contribs) 14:16, 24 March 2010 (UTC)
Perhaps not, since GNU/kFreeBSD also supports ZFS; the GNU part is not essential here, the kernel is! So whether it is GNU or BSD userland doesn't really matter; ZFS is a kernel-level implementation and thus a kernel feature. Sub.mesa (talk) 13:30, 15 January 2011 (UTC)
Deduplication edit
I replaced some info in this section due to a dead link and accompanying unverifiable info. If the info and citation that I have inserted in their place are not appropriate, I apologize in advance, as this is not my area of expertise. However, I did try my best with sourcing info that seemed to fit best.--Soulparadox 18:02, 9 June 2012 (UTC)
FreeNAS
I removed the different headings for FreeNAS 7 and FreeNAS 8, as they were redundant and the FreeNAS 8 entry was wrong anyway. NanoBSD is not a separate OS from FreeBSD; it is simply a tool (on FreeBSD) for building FreeBSD images for embedded systems. See: http://www.freebsd.org/doc/en_US.ISO8859-1/articles/nanobsd/index.html — Preceding unsigned comment added by 66.30.48.132 (talk) 20:07, 25 June 2012 (UTC)
Sun/Oracle build numbers
Some people recently tried to introduce confusion by publishing the false claim that OpenIndiana is based on Oracle's build 151. Note that all other table entries mention Sun build numbers for good reason. --Schily (talk) 12:30, 22 September 2011 (UTC)
Let me clarify build numbers: the last OSS code from Sun uses the label "b148", but it contains only approx. 10% of the deltas for b148. This is because the label is incremented just after a new build release tag (e.g. the one for b147) is added. --Schily (talk) 14:02, 5 October 2011 (UTC)
Two questions about the comment "OpenIndiana creates a name clash with naming their code b151a" in the Comparisons table.
1. Why does this say OpenIndiana is the one who created the naming clash rather than Oracle? Is it because Oracle used the b151a designation first? I'm not making a comment here; this is a real question, because I don't know who got there first.
2. Why doesn't the EON NAS (v0.6) line contain the same comment? They actually list b151a as their Sun/Oracle build number. Is it not a conflict like OpenIndiana's use of that designation is? Is this different because their b151a is actually compatible with the closed-source Oracle b151a? Daviesow (talk) 00:32, 10 August 2012 (UTC)
Structure of zpools, datasets
This article says a lot about the features which are implemented (probably too much) and which operating systems use which version of this code (definitely too much), but it says almost nothing about the actual underlying data structures and algorithms (e.g., how directories are represented, or how uberblock updates work) beyond mentioning ARC and the Merkle tree structure created by block checksums. By contrast, the btrfs article is full of such information. I'm sure that there have been numerous papers published about this (at conferences like FAST if nowhere else) so there should be plenty of WP:RS to refer to. 121a0012 (talk) 05:33, 15 September 2012 (UTC)
Hardware raid on ZFS
I think this is a suboptimal heading. "Hardware RAID on ZFS"? ZFS might run on top of hardware RAID; there is never a case where a hardware RAID runs on top of ZFS. And "Software RAID on ZFS"? It doesn't really make sense: ZFS is a software RAID system, so you should not run a software RAID on top of ZFS (which is software RAID). Better would be "ZFS and hardware RAID" and "ZFS redundancy modes".
I made some comments; there are errors in this part: "Hardware RAID on ZFS". For instance, ZFS does not need multiple devices to guarantee data integrity; just read the link. — Preceding unsigned comment added by 213.114.157.231 (talk) 16:42, 31 March 2013 (UTC)
RAM requirements
I wrote a piece on ZFS and RAM requirements. Why was it deleted? — Preceding unsigned comment added by 213.114.157.231 (talk) 12:18, 20 September 2013 (UTC)
- Hello there! In a few words, almost all of the information you provided was already available in sections Cache management and Deduplication. Your addition was repeating the description of how ZFS performs file system caching (Adaptive Replacement Cache etc.), and the need for a lot of RAM in case data deduplication is used (together with RAM recommendations etc.). Please have a look at those two sections, and it should be clearly visible. Thank you. -- Dsimic (talk) 12:31, 20 September 2013 (UTC)
OS X
Hi everyone. I did some work on the "OS X" section in bits and pieces when I have had time. It's still a big mess, and the writing is completely mixed between mine and that of previous contributors. It needs to be rewritten or restructured, and I intend to do so, but for now, I cleaned it up to be more current and clearer, sentence by sentence. I'm a contributor to the MacZFS project, so I wanted to keep this current. I appreciate any feedback on keeping it objectively encyclopedic and unbiased, as I am not yet an expert at Wikipedia. I will keep reading the principles of editing here. You're all doing a good job, and I thank you.
Some anonymous user deleted a portion, making a vague claim implying that maybe they have received CDDL source code from Ten's Complement. It's good if you got a *private* release, but the deleted portion of the article had stated that there was no *public* release. And, in turn, it seems that if anybody has privately received code, the recipient(s) have not made that code public. Google searches reveal nothing. Furthermore, to be clear, it is not the commercial sale of a license which grants the right to the source code, but rather any transmission at all of a CDDL-covered binary which grants it. That includes betas or anything else. Because we allow anonymous editing, I can't have dialog with this person. That source code needs to be published, or given to someone who will publish it, so please contact me about that. — Preceding unsigned comment added by Smuckola (talk • contribs) 05:49, 9 August 2012 (UTC)
Smuckola (talk) 06:35, 2 August 2012 (UTC)
Around seven hundred words of history, including ZFS-OSX in the OpenZFS announcement context, have been moved to the uppermost history section of the article.
That leaves, under the heading of BSD, a more digestible view of options for users of OS X. (Note to self, consistency: in the OpenZFS wiki, ZFS-OSX is positioned under Darwin.)
Outstanding:
- attention to external links, presence and/or placement of which are not to Wikipedia guidelines
– some of those are probably from earlier editions by me (sorry).
--Grahamperrin (talk) 23:16, 22 September 2013 (UTC)
History (OpenZFS and feature flags)
Hey Grahamperrin! Regarding section "History", please have a look at an excerpt, associated with one of your recent edits (r574103369): "... and pool version 5000 – an unchanging number..." Sorry, but I'm unable to find a reference anywhere about 5000 as the version number. Shouldn't that be 1000 instead? Please advise. Thank you. -- Dsimic (talk) 23:00, 22 September 2013 (UTC)
- Ok, I went bold :) and did the change after reading some more papers, referencing the PDF where it's stated to be 1000. Please correct me if that's wrong, otherwise ZFS-OSX's pool version should probably be also changed to 1000. What's with FreeBSD and the pool version for feature flags? Thank you. -- Dsimic (talk) 00:08, 23 September 2013 (UTC)
- The PDF presentation that mentions version 1000 was before the feature flags implementation was put back into the main source. It was bumped up to 5000 when it was actually committed to the Illumos source repo. Header file with the recognized pool versions: https://github.com/illumos/illumos-gate/blob/master/usr/src/uts/common/sys/fs/zfs.h#L338 and source comments describing how the new feature flags work: https://github.com/illumos/illumos-gate/blob/master/usr/src/uts/common/fs/zfs/zfeature.c#L89 -- IvanRichwalski (talk) 07:33, 16 November 2013 (UTC)
- Thank you for the explanation! Just updated the article with this additional information, please check it out. Well, it's quite confusing, and there's no doubt why OpenZFS was created as an umbrella project. :) -- Dsimic (talk) 15:27, 16 November 2013 (UTC)
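To spell out the mechanism for later readers: instead of bumping a monotonically increasing pool version (which locks the pool to whichever vendor bumped it last), feature-flag pools report the fixed version 5000 plus a set of named, reverse-DNS-namespaced features. A toy Python sketch of the resulting compatibility check (illustrative only; the real logic, and feature names such as com.delphix:async_destroy, live in the illumos sources linked above):

 SPA_VERSION_FEATURES = 5000  # fixed marker meaning "consult feature flags"

 supported = {"com.delphix:async_destroy", "com.delphix:empty_bpobj"}

 def can_import(pool_version, active_features):
     # pre-feature-flags pools use plain version numbers (28 was the last
     # open one); feature-flag pools are importable if every feature they
     # actively use is also supported by this implementation
     if pool_version < SPA_VERSION_FEATURES:
         return pool_version <= 28
     return active_features <= supported  # set containment

 print(can_import(28, set()))                              # True
 print(can_import(5000, {"com.delphix:async_destroy"}))    # True
 print(can_import(5000, {"org.example:future_feature"}))   # False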
Honestly
This article is a steaming pile of crap. It appears to have been mostly written by the various people involved in the open (and closed) source cockfights around this project and is of nearly zero value to an uninvolved reader. Someone not using his real name (talk) 21:24, 23 December 2013 (UTC)
- Well, it is quite messy. You can't beat the fact that this article provides tons of information, but many parts of it are quite biased towards ZFS being "the only and the best", probably the best thing since sliced bread. And, for some reason, the article's Mac OS X part is absurdly overdetailed and quite unreadable. — Dsimic (talk) 21:30, 23 December 2013 (UTC)
- Might someone at least clarify the relationship between OpenZFS and OpenIndiana as they relate to ZFS? Someone not using his real name (talk) 21:35, 23 December 2013 (UTC)
- Well, you've just deleted a section providing a better insight. It wasn't an advertisement. — Dsimic (talk) 21:40, 23 December 2013 (UTC)
I can't fix it...
In the last sentence in section "Features" -> "ZFS and hardware RAID", a phrase is given as "disks that do not respond in time (like green hard drives)". AFAIK, energy efficient or "green" drives are no more likely to drop out than regular consumer-class 5400 or 7200 drives, so this is not an appropriate way of wording the idea. The entire sentence should probably be reworded, but I unfortunately don't have the creativity to do so. DraugTheWhopper (talk) 16:35, 29 March 2014 (UTC)
- Your assumption is incorrect, sorry. Some drives save energy by powering down in a generally unstoppable way, causing them to sometimes time out from an array. The debacle is about as infamous as the 4k issue, namely with the popular WD Green series. — Smuckola (Email) (Talk) 16:40, 29 March 2014 (UTC)
- But is that actually what's happening? I was strongly under the impression that it wasn't spin-down causing problems, but rather extended non-communicative error recovery. I know some people claim it's a problem, but I haven't seen official statements or lab tests. Besides, its still a poorly worded sentence, as "green" and "TLER/CCTL/ERC-enabled" are not, strictly speaking, opposites. Case in point: WD Red is a bit of a hybrid between WD RE and WD Green: Power saving and variable spindle speed from the Green, and ?NASWare? and TLER from the RE. --DraugTheWhopper (talk) 18:07, 29 March 2014 (UTC)
- Well, your followup question is different, and I'd have to defer to experts on that. But the originally popularly known fact as of a few years ago, was that WD Green drives (one of the most popular 'green' drives) would time out in a way that was related to energy savings. I'm not sure but other models may have done so as well. This could originally be alleviated by obtaining software that can toggle the TLER setting. Anyway, I just edited the section in question, so does that address your request for increased precision in prose? — Smuckola (Email) (Talk) 19:00, 29 March 2014 (UTC)
- I'm still skeptical of the technicalities, but yes, the wording is better now. Thanks! On a side note, is it appropriate to have a comma trailing after "consumer grade"? DraugTheWhopper (talk) 19:13, 29 March 2014 (UTC)
Hello there! Please, have a look at this excerpt from the hdparm man page, which explains pretty well the exact behavior of WD Green HDDs:
- -J Get/set the Western Digital (WD) Green Drive's "idle3" timeout
- value. This timeout controls how often the drive parks its
- heads and enters a low power consumption state. The factory
- default is eight (8) seconds, which is a very poor choice for
- use with Linux. Leaving it at the default will result in
- hundreds of thousands of head load/unload cycles in a very
- short period of time. The drive mechanism is only rated for
- 300,000 to 1,000,000 cycles, so leaving it at the default
- could result in premature failure, not to mention the
- performance impact of the drive often having to wake-up before
- doing routine I/O.
- WD supply [sic] a WDIDLE3.EXE DOS utility for tweaking this setting,
- and you should use that program instead of hdparm if at all
- possible. The reverse-engineered implementation in hdparm is
- not as complete as the original official program, even though
- it does seem to work on at least a few drives. A full power
- cycle is required for any change in setting to take effect,
- regardless of which program is used to tweak things.
- A setting of 30 seconds is recommended for Linux use.
- Permitted values are from 8 to 12 seconds, and from 30 to 300
- seconds in 30-second increments. Specify a value of zero (0)
- to disable the WD idle3 timer completely (NOT RECOMMENDED!).
In a few words, it's all about WD Green drives going into a low-power state, somewhat similar to laptop HDDs, which requires some time for the HDD to wake up later, making it quite possible for them to drop out of a RAID. However, that sleep timeout is configurable, which makes WD Green drives perfectly suitable for RAID configurations – at least in Linux, which performs periodic background flushes to HDDs. Those flushes, when combined with an increased sleep timeout, would keep WD Green HDDs from entering the low-power state. — Dsimic (talk | contribs) 19:48, 31 March 2014 (UTC)
ZIL Mirroring
There is conflicting information as to whether data loss may occur if an external ZIL fails. Specifically, RackTop Systems claims that no data loss occurs if the ZIL fails.[2] Eatnumber1 (talk) 18:41, 3 January 2013 (UTC) Update: Inability to import a filesystem occurred with zpool versions < 19 when a separate log device died at reboot. With versions >= 19, import is possible with a corrupted log, but some transactions (at most 5 seconds' worth, from the last sync period) may be lost. There's no problem with any version when the log dies during regular operation. — Preceding unsigned comment added by 141.52.58.15 (talk) 09:17, 20 May 2014 (UTC)
- ^ Jeff Bonwick (2006-05-04). "You say zeta, I say zetta". Jeff Bonwick's Blog. Retrieved 2006-09-08.
- ^ http://www.racktopsystems.com/zfs-to-mirror-or-not-to-mirror-the-zil/.
Vandalism revision
Hey, I was wondering in what way my recent edit providing a non-primary source for the description of OpenZFS as the open-source alternative could be construed as vandalism? At the worst, it's not an amazing quality source, but it certainly isn't vandalism. Paulcd2000 (talk) 01:07, 12 March 2014 (UTC)
- Hello there! Please have a look at your edits – in addition to the reference (which is good), why did you introduce words such as "userspaaace", "atomic current" and "spaaace"? Sorry, but such changes automatically qualify your edits as a case of vandalism. — Dsimic (talk | contribs) 12:41, 14 March 2014 (UTC)
- ... I'm an idiot. I knew having the xkcd string replacement script would come back to bite me at some point. Sorry about that. Paulcd2000 (talk) 17:04, 15 April 2014 (UTC)
- No worries. Which string replacement script are you using? — Dsimic (talk | contribs) 02:44, 17 April 2014 (UTC)
- I don't know anything about an automatic script, but I'm guessing it has something to do with "Substitutions that make reading the news more fun" http://xkcd.com/1288/ . --70.177.113.174 (talk) 04:52, 14 July 2014 (UTC)
- Funny stuff. :) — Dsimic (talk | contribs) 10:14, 15 July 2014 (UTC)
Description of OpenZFS
Hey Schily! Sorry for going back and forth on the line that describes OpenZFS. I've ended up putting together another condensed description of OpenZFS – it leaves pretty much no room for different interpretations, and I hope you'll find it good enough. In a few words, OpenZFS isn't a fork, as it produces no actual source code and serves only as an umbrella project that brings together other projects (or companies) that deal with ZFS directly. Of course, I'm more than open to further discussion. — Dsimic (talk | contribs) 13:28, 25 August 2014 (UTC)
- The text is better now, but it could still be enhanced. The existing code is, and must be, a fork of the Sun original, because only the code from Sun grants, via the CDDL, royalty-free access to the related patents. Whether all patents are valid is another question, as the basic idea of ZFS is based on my master's thesis, which was written between late 1988 and May 1991 and published in May 1991. This is the reason why all the patents from NetApp that were used when NetApp tried to sue Sun over ZFS are invalid as well.
- We have another problem with OpenZFS, as OpenZFS does not really bring people together. There is no repository of its own, and Illumos is a Solaris ON fork that meanwhile deviates too much from the OpenSolaris original to be able to serve as a master copy. The most important information that would be needed by ZFS developers is not available: the explanation of the extension system. It thus is not possible to check whether the people at Illumos have found a way that allows enhancements from different vendors to co-exist. Schily (talk) 14:04, 25 August 2014 (UTC)
- That sounds really interesting; I'll dare to ask: have you managed to monetize the concepts from your master's thesis? :) You're right that OpenZFS produces no source code, and that's what I was saying all along, if you agree. Instead, OpenZFS aims to bring different projects/companies/people together, including handling the mess created around ZFS pool versions and feature flags (which leads to the extensions you refer to), and time will tell how successful the OpenZFS initiative turns out to be. That said, OpenZFS is more of a soft-skills management movement, so to speak, which aims to get something done. — Dsimic (talk | contribs) 14:28, 25 August 2014 (UTC)
- The development of my "WormFS" was payed by H.Berthold AG that went bankrupt in August 1993. That prevented a commercial use of the original. Schily (talk) 16:33, 25 August 2014 (UTC)
- Too bad it's in German. :( Wait a second, are you Jörg Schilling? — Dsimic (talk | contribs) 21:26, 25 August 2014 (UTC)
- German was the language of science until 1945, and this did not change because of the German scientists... if the content is interesting, people still read German. The people from Sun in California had no problems with reading the text ;-) I have been told that the thesis has been read not only by the ZFS people but by filesystem people in general, as it includes the only descriptions of the VFS interface. You see, I did more than some people would like to have in the WP article. Schily (talk) 12:05, 26 August 2014 (UTC)
ZFS on Linux articles
Here are a couple of useful NEW links that someone might be able to use as references.
- The State of ZFS on Linux - https://clusterhq.com/blog/state-zfs-on-linux/
- File systems, Data Loss and ZFS - https://clusterhq.com/blog/file-systems-data-loss-zfs/
Sbmeirow • Talk • 18:36, 11 September 2014 (UTC)
- Unfortunately, blogs (in general) aren't considered to be reliable references. However, IIRC there's a recent LWN.net article that describes the current state of ZFS on Linux. — Dsimic (talk | contribs) 07:30, 14 September 2014 (UTC)
Max file name length
255 bytes is the same as 255 ASCII characters; this should be changed to be less misleading to users. — Preceding unsigned comment added by Bumblebritches57 (talk • contribs) 09:14, 28 February 2015 (UTC)
- Hello! Yeah, but what about the Unicode crap (read: multibyte characters) people want to use in their file names? — Dsimic (talk | contribs) 07:48, 1 March 2015 (UTC)
- That is a fairly good point; maybe we can say that it supports 255 ASCII characters, and then expand on that in parentheses? Bumblebritches57 (talk) 05:50, 2 March 2015 (UTC)
- Sounds good, please go ahead and I'll review it later. — Dsimic (talk | contribs) 05:59, 2 March 2015 (UTC)
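Since the 255-byte limit keeps coming up: the distinction matters because the limit is counted in bytes, while UTF-8 encodes non-ASCII characters as two or more bytes each, so 255 bytes equals 255 characters only for pure-ASCII names. A minimal sketch of the difference (Python; the example file names are made up):

    # Character count vs. byte count of UTF-8 file names.
    # ZFS limits names to 255 bytes, which equals 255 characters
    # only when every character is ASCII (one byte each).
    names = ["report.txt", "bericht_über_zfs.txt", "файловая_система.txt"]
    for name in names:
        raw = name.encode("utf-8")
        print(f"{name}: {len(name)} characters, {len(raw)} bytes")
    # The Cyrillic name takes roughly two bytes per character, so a
    # 255-character name of that kind would exceed the 255-byte limit.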
Size of (Fletcher-based) checksum?
If anyone knows the EXACT size of the (Fletcher-based) checksum that is stated in section ZFS#ZFS_data_integrity, please add it to the article. Thanks! • Sbmeirow • Talk • 18:32, 11 September 2014 (UTC)
The exact size is specified in the source code; check the OpenZFS source code. — Preceding unsigned comment added by 213.89.27.171 (talk) 14:57, 17 August 2015 (UTC)
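Until someone digs up a citable source: the checksum field in a ZFS block pointer is 256 bits wide, and Fletcher-4 fills it with four 64-bit running sums computed over the data taken as 32-bit little-endian words. A minimal sketch of that scheme (Python; the zero-padding of unaligned input is an assumption for illustration only, since real ZFS blocks are already 32-bit aligned):

    import struct

    def fletcher4(data: bytes):
        # Four 64-bit accumulators over 32-bit little-endian words;
        # the result is a 4 x 64-bit = 256-bit checksum.
        a = b = c = d = 0
        mask = (1 << 64) - 1
        data += b"\x00" * (-len(data) % 4)  # pad for illustration only
        for (word,) in struct.iter_unpack("<I", data):
            a = (a + word) & mask
            b = (b + a) & mask
            c = (c + b) & mask
            d = (d + c) & mask
        return (a, b, c, d)

    print(" ".join("%016x" % x for x in fletcher4(b"hello, zfs")))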
Block Pointer Rewrite
What is Block Pointer Rewrite? It is mentioned in the article, but there is no explanation. I can only find unreliable sources such as https://news.ycombinator.com/item?id=8646590 . --2003:71:CF36:C782:B966:D05B:EDD0:9AD4 (talk) 17:38, 1 February 2016 (UTC)
External links modified
Hello fellow Wikipedians,
I have just added archive links to 2 external links on ZFS. Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:
- Added archive https://web.archive.org/20110205111337/http://download.oracle.com:80/docs/cd/E19963-01/821-1448/gavwq/index.html to http://download.oracle.com/docs/cd/E19963-01/821-1448/gavwq/index.html
- Added archive https://web.archive.org/20090508081240/http://www.opensolaris.org:80/os/community/zfs/version/15/ to http://www.opensolaris.org/os/community/zfs/version/15/
When you have finished reviewing my changes, please set the checked parameter below to true to let others know.
An editor has reviewed this edit and fixed any errors that were found.
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—cyberbot II (Talk to my owner | Online) 15:02, 9 January 2016 (UTC)
- Done, all fine. — Dsimic (talk | contribs) 10:29, 2 February 2016 (UTC)
External links modified
Hello fellow Wikipedians,
I have just added archive links to 14 external links on ZFS. Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:
- Added archive https://web.archive.org/20140304135249/http://doc.freenas.org/index.php/ZFS_Scrubs to http://doc.freenas.org/index.php/ZFS_Scrubs
- Added archive https://web.archive.org/20150905142644/http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide to http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
- Added archive https://web.archive.org/20070928185125/http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide to http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#RAID-Z_Configuration_Requirements_and_Recommendations
- Added archive https://web.archive.org/20091126062301/http://bugs.opensolaris.org:80/bugdatabase/view_bug.do?bug_id=6854612 to http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6854612
- Added archive https://web.archive.org/20121111135134/http://mail.opensolaris.org:80/pipermail/onnv-notify/2009-July/009872.html to http://mail.opensolaris.org/pipermail/onnv-notify/2009-July/009872.html
- Added archive https://web.archive.org/20081230170058/http://www.opensolaris.org:80/os/community/zfs/docs/ondiskformat0822.pdf to http://opensolaris.org/os/community/zfs/docs/ondiskformat0822.pdf
- Added archive https://web.archive.org/20110823190119/http://opensolaris.org/jive/thread.jspa?messageID=417776 to http://opensolaris.org/jive/thread.jspa?messageID=417776
- Added archive https://web.archive.org/20110127082517/http://bugs.opensolaris.org:80/view_bug.do?bug_id=4852783 to http://bugs.opensolaris.org/view_bug.do?bug_id=4852783
- Added archive https://web.archive.org/20090527011500/http://ivoras.sharanet.org:80/blog/tree/2009-05-21.zfs-v13-in-7-stable.html to http://ivoras.sharanet.org/blog/tree/2009-05-21.zfs-v13-in-7-stable.html
- Added archive https://web.archive.org/20110515061128/http://hub.opensolaris.org:80/bin/view/Community+Group+zfs/faq/ to http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq#HWhatdoesZFSstandfor
- Added archive https://web.archive.org/20060515222240/http://mail.opensolaris.org:80/pipermail/zfs-discuss/2006-April/002119.html to http://mail.opensolaris.org/pipermail/zfs-discuss/2006-April/002119.html
- Added archive https://web.archive.org/20071224112557/http://synesius.wordpress.com:80/2007/11/18/zfs-beta-seed-v11-will-not-install-on-leopard1-1051/ to http://synesius.wordpress.com/2007/11/18/zfs-beta-seed-v11-will-not-install-on-leopard1-1051/
- Added archive https://web.archive.org/20091102050530/http://zfs.macosforge.org:80/ to http://zfs.macosforge.org/
- Added archive https://web.archive.org/20091009134153/http://www.opensolaris.org:80/os/community/distribution/links/ to http://www.opensolaris.org/os/community/distribution/links/
When you have finished reviewing my changes, please set the checked parameter below to true to let others know.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—cyberbot II (Talk to my owner | Online) 20:53, 11 February 2016 (UTC)
External links modified
Hello fellow Wikipedians,
I have just added archive links to one external link on ZFS. Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:
- Added archive http://web.archive.org/web/20130401115009/http://mail.opensolaris.org/pipermail/zfs-discuss/2007-April/010356.html to http://mail.opensolaris.org/pipermail/zfs-discuss/2007-April/010356.html
When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—cyberbot II (Talk to my owner | Online) 22:12, 28 February 2016 (UTC)
External links modified
Hello fellow Wikipedians,
I have just added archive links to 5 external links on ZFS. Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:
- Added archive http://web.archive.org/web/20121127160745/http://doc.freenas.org:80/index.php/ZFS_Scrubs to http://doc.freenas.org/index.php/ZFS_Scrubs
- Added archive http://web.archive.org/web/20090727183027/http://bugs.opensolaris.org:80/bugdatabase/view_bug.do?bug_id=6854612 to http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6854612
- Added archive http://web.archive.org/web/20091223182826/http://mail.opensolaris.org:80/pipermail/onnv-notify/2009-July/009872.html to http://mail.opensolaris.org/pipermail/onnv-notify/2009-July/009872.html
- Added archive http://web.archive.org/web/20081230170058/http://www.opensolaris.org:80/os/community/zfs/docs/ondiskformat0822.pdf to http://opensolaris.org/os/community/zfs/docs/ondiskformat0822.pdf
- Added archive http://web.archive.org/web/20090629081219/http://bugs.opensolaris.org:80/view_bug.do?bug_id=4852783 to http://bugs.opensolaris.org/view_bug.do?bug_id=4852783
When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—cyberbot II (Talk to my owner | Online) 09:55, 7 March 2016 (UTC)
External links modified
Hello fellow Wikipedians,
I have just added archive links to one external link on ZFS. Please take a moment to review my edit. If necessary, add {{cbignore}} after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}} to keep me off the page altogether. I made the following changes:
- Added archive http://web.archive.org/web/20121001091103/http://www.informit.com/store/product.aspx?isbn=0137000103 to http://www.informit.com/store/product.aspx?isbn=0137000103
When you have finished reviewing my changes, please set the checked parameter below to true or failed to let others know (documentation at {{Sourcecheck}}).
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—cyberbot II (Talk to my owner | Online) 01:34, 21 March 2016 (UTC)