Talk:Hard disk drive/Archive 3
This is an archive of past discussions about Hard disk drive. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Archive 1 | Archive 2 | Archive 3 | Archive 4 | Archive 5 | → | Archive 10
Criticism of speed
Shouldn't the article contain a criticism section about the fact that hard disk speed is a major bottleneck in overall computer performance and that no good replacement has been developed?
BMW Z3 12:49, 2 August 2006 (UTC)
- Maybe, but hard disks are fast enough now that they aren't nearly as bad as they used to be...and cheap enough that if you need more speed, you can just RAID them together. What sort of alternative did you have in mind? -lee 04:22, 6 August 2006 (UTC)
- I don't see how that's a "criticism"; it's just the result of economics. We want massive amounts of storage but generally cannot afford for that storage to take its form as faster semiconductor memory. Hard drives provide cheap mass storage at the cost of speed, but this is generally an acceptable trade off due to various optimizations and tricks in modern VFS layers. You could obtain 100 GB of flash memory or DRAM, but it would be cost prohibitive and bulky and there aren't many reasons you would need the lower latency and higher throughput of solid state memory in a mass storage capacity. -- uberpenguin @ 2006-08-06 15:18Z
- You say "100 GB of flash memory". Could you do that by connecting up 100 1 GB memory sticks? It might seem a silly idea and too compartmentalised, but just wondering. Comparing hard disks to flash memory and DRAM would be a good section for this article. ie. comparison section, not criticism section. Carcharoth 10:58, 8 August 2006 (UTC)
- The biggest problem would be finding a way to connect 100 memory sticks at once (though if you did some creative USB plumbing, it could happen). Your 100GB Flash drive wouldn't last very long if you ran a swapfile or heavy-duty server loads on it, though, because of Flash's limited write durability (it'll literally start burning out after a while), so I don't know how useful it'd be. -lee 03:51, 12 August 2006 (UTC)
- You say "100 GB of flash memory". Could you do that by connecting up 100 1 GB memory sticks? It might seem a silly idea and too compartmentalised, but just wondering. Comparing hard disks to flash memory and DRAM would be a good section for this article. ie. comparison section, not criticism section. Carcharoth 10:58, 8 August 2006 (UTC)
Manage Disk Space Link?
Is the link at the bottom of the page titled "Manage Disk Space" really needed? It seems a little commercial and irrelevant to me. —Preceding unsigned comment added by Inklein (talk • contribs) 17:28, August 7, 2006
- Good call. I deleted the link. --ElKevbo 22:52, 7 August 2006 (UTC)
Non-linear increase in chance of damage?
Around paragraph 11 or 13 under "Mechanics and magnetics" reads: "In CSS drives the sliders carrying the head sensors (often also just called heads) are designed to reliably survive a number of landings and takeoffs from the disk surface, though wear and tear on these microscopic components eventually takes its toll. Most manufacturers design the sliders to survive 50,000 contact cycles before the chance of damage on startup rises above 50%. However, the decay rate is not linear—when a drive is younger and has fewer start-stop cycles, it has a better chance of surviving the next startup than an older, higher-mileage drive (as the head literally drags along the drive's surface until the air bearing is established). For example, the Maxtor DiamondMax series of desktop hard drives are rated to 50,000 start-stop cycles. This means that no failures attributed to the head-disk interface were seen before at least 50,000 start-stop cycles during testing." [bold italics added] It seems to me that while it may well be true that the increase is not linear, the explanation given does not show it to be - only that it increases. Anybody know? Or am I misreading? --Fitzhugh 04:15, 11 August 2006 (UTC)
Why is it called "C:\" Drive?
Is there any historical reason why the hard drive of a computer is called the "C" drive? I used to think that since it is the primary drive on many hardware and software configurations it would get the "A" name, but it got stuck with "C". Even the keyword "C Drive" redirects to this article. Maybe somebody knows?
Also, why have I never seen a "B" drive? I'm sure some people set their systems up with a "B" drive, but it is not the default configuration on most computers.
This info would be useful since many people have not really thought about it, and it would make interesting trivia.
--AverageAmerican 05:33, 29 August 2006 (UTC)
- I'm sure I have seen systems with B drives, particularly one with a 5.25 inch and a 3 inch drive. Originally IBM PCs came with 1-2 floppy drives and a hard drive, but the system could take two floppies, which is useful for copying disks. The floppies were used to boot off and install software, so I guess they got lumbered with A and B and the hard drive got C. WolfKeeper 06:13, 29 August 2006 (UTC)
- CP/M -> MS-DOS -> Microsoft Windows... The single-letter-per-disk-volume is just arbitrary convention. CP/M is the earliest major OS I know of to use the scheme, though it may have already been around by then. *nixes don't use a single letter to refer to drive volumes. -- uberpenguin @ 2006-08-29 06:45Z
- Back in the days when there were no PC hard drives (hard to imagine, isn't it?), your two floppy diskette drives were known as "A" and "B". Lucky folks added a hard drive to these systems and this third drive was, of course, then known as "C".
- You might want to see the current sequence of the comic strip User Friendly, where Sid has just dragged out an original IBM PC.
- Do you happen to know whether that nomenclature was around before CP/M? -- uberpenguin @ 2006-08-29 13:31Z
- Not in the worlds (DEC-derived operating systems and Unix) that I inhabited. DEC tended to use xxn: or xxxn: where "xx" or "xxx" was a device type mnemonic (for example, "RK" for cartridge disks, "RP" for disk "pack" drives, and "RB" for Massbus-connected disk drives, all typically derived from the model name of the I/O device). Unix, of course, uses its combination of physical names (at "mount" time) and completely arbitrary mount point names the rest of the time.
- Also, not in IBM, as far as I know, but I'm far from an expert in that area.
- I'm not aware of any IBM system that used the scheme. Definitely not their midranges or later OS/360. I've never used DOS/360, but I know it has nothing to do with the PC concept of DOS (i.e. CP/M-like), so I doubt it names volumes like that either. -- uberpenguin @ 2006-08-29 13:47Z
- All of these reasons seem to be valid; however, does anybody have maybe a link, or some reference? Of course, all the stuff we Wikipedians put on Wikipedia should have some sort of citation. I'm sure this information is worthy of writing onto this page. --AverageAmerican 21:02, 29 August 2006 (UTC)
- Nope, this is just knowledge from experience. Feel free to google around if you want to find some sort of reference. In any case, I don't really think it's necessary for this article. The article is about hard disks as devices, not how various OSes name volumes. -- uberpenguin @ 2006-08-29 21:13Z
- Also possibly of interest, many systems will treat a single floppy drive as both A and B, but not all, as I found out when trying to install MS Office onto a computer with a "B" drive but no "A" drive. Rich Farmbrough, 11:46 6 September 2006 (GMT).
Cycle rating
"For example, the Maxtor DiamondMax series of desktop hard drives are rated to 50,000 start-stop cycles. This means that no failures attributed to the head-disk interface were seen before at least 50,000 start-stop cycles during testing." Is this true? It implicitly contradicts the earlier statement "Most manufacturers design the sliders to survive 50,000 contact cycles before the chance of damage on startup rises above 50%.". Rich Farmbrough, 11:46 6 September 2006 (GMT).
Moore's Law
Does the capacity of a hard disk coincide with Moore's law? 71.213.27.179 17:05, 11 September 2006 (UTC)
- Answered my own question, Kryder's law is the Moore's law equivalent 71.213.27.179 17:10, 11 September 2006 (UTC)
- That article was just deleted, as the term was essentially invented by a Wikipedia editor. There is no official name for this law, though it has been said to be identical to Moore's law. I am going to update the Moore's law article with the deleted info. — Omegatron 23:24, 27 October 2006 (UTC)
possible 1 TB drive
LaCie has a hard drive they claim is 1 TB. [1]
- I've seen the 500 GB model of this drive; it's actually two 250 GB drives in a case, running in either JBOD or RAID-0 mode (I'm not sure which, since the RAID controller in the case obscures that). I would imagine this "1 TB" drive is the same sort of setup, only with two 500 GB drives. -lee 17:23, 22 September 2006 (UTC)
Heavy reorganisation
I've just heavily reshuffled this article. An awful lot of redundancy has been exposed, which now needs to be reduced. The technology section was folded in from two or three separate parts of the article, one of which was almost impossibly technical. This needs to be addressed somehow. I've also moved the history section out mostly to its own article; that article could do with some choice pictures from here. Chris Cunningham 19:15, 19 September 2006 (UTC)
- I noticed the POV tag on the Manufacturers section; what all would you suggest changing? (The "deathstar" bit about the 75GXP can certainly go, but the rest I'm not so sure of.) -lee 17:28, 22 September 2006 (UTC)
- It's mostly gone now anyway. What remains still doesn't seem to be particularly important to the article; lots of hardware sections are dominated by a handful of manufacturers, and corporate scandal is hardly unique to hard disk companies. In an article which is already rather too long, this can probably be excised. Chris Cunningham 15:14, 23 September 2006 (UTC)
- I re-wrote the technology section. Is it OK? I admit I was the one who wrote much of the original so maybe I was biased to my stuff (I suspect I wrote the parts you say were 'impossibly technical' :) about dipoles and grains etc). I removed a lot, because I think it should go in the Disk read-and-write head and Hard disk platter articles, or even more general ones. I put the part about grains in the platter article. I kept what I thought was an overview of the technology: Platters, r/w heads, magnetic surface, binary encoding, & advantages of HDD. Other articles can go more in depth. And is it still too technical/convoluted? BlankAxolotl 01:16, 26 September 2006 (UTC)
- Your work is excellent. I was about to comment on how much better the technology section reads now. You should certainly add the more esoteric information to the sub-articles; it's all good work, if a little too in-depth to be presented here. Chris Cunningham 10:50, 26 September 2006 (UTC)
More work for the editors
Sorry if I'm piling on more work for you guys, but I think there needs to be a section that addresses current technologies and another one for future directions. For current technologies, I'd like to see a little blurb on perpendicular recording, the SATA interface, and maybe current limitations. For future directions, maybe a link to holographic storage, and other theoretical ways to overcome physical or quantum limitations.
And something I can't seem to find the answer for: why have 10K RPM drives been limited to 36 GB and 74 GB for so long, much like their SCSI cousins? I think the only breakthrough was from Western Digital with their 150 GB 10K RPM drive -- which is still far below the capacity of 7200 RPM drives. Perhaps another physical limitation? Isn't there a fear of the platters flying apart at a certain rotational velocity?
Problems with the recent re-organisation
I have to say that I don't think the technology section is as readable as it was two days ago. The sub-headings are too short, it goes to some unnecessary lengths to clarify things and it contains an unneeded "advantages" header which should either be implicit in the text or explicitly balanced with a counter-header to read properly. Would there be objections to reverting the section structure and incorporating the better edits made? Chris Cunningham 10:08, 29 September 2006 (UTC)
- change it to whatever you think is good! I'm no english major... :) BlankAxolotl 22:46, 29 September 2006 (UTC)
- Thanks. I've reordered it in much the same way as it was previously while using the new version's text. It may be that the part about exactly how data is recorded is a little technical; I'm going to work on trying to reduce the length of the article again, so I might try to move things which relate to ferromagnetic data recording in general to a more appropriate article. Chris Cunningham 10:23, 5 October 2006 (UTC)
- Just some random comments on my cursory read of the article (sorry if this is the wrong place for my comments). (a) The article states that "Modern disks can perform around 50 random access or 100 Sequential access operations per second.". This is inaccurate. In the case of random I/O, the number of IOPS is mostly limited by seek and rotational latency combined. For example, on some disks you can have 250 random reads per sec with a 4 ms seek + random latency on average. On sequential I/O, ordered sequential writes are actually batched & optimized by the controller (or other elements in the storage stack) as larger operations, so the number of sequential operations is not really meaningful. (b) The article states that FC-AL is the cornerstone of storage area networks. FC-AL is actually obsolete and replaced with switched fabric (FC-SW). (That said, you can still see FC-AL in the internal implementation of some storage arrays.) --AdiOltean 08:44, 22 October 2006 (UTC)
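A rough illustration of the random-IOPS point above (a minimal sketch with assumed, illustrative seek times and spindle speeds, not the specifications of any particular drive):

```python
# Rough estimate of random-read IOPS for a rotating disk:
#   IOPS ~= 1 / (average seek time + average rotational latency),
# where average rotational latency is half a revolution.
# The seek times and spindle speeds below are assumed, illustrative values.

def random_iops(avg_seek_ms: float, rpm: int) -> float:
    rotational_latency_ms = 0.5 * 60_000 / rpm  # half a revolution, in milliseconds
    return 1000 / (avg_seek_ms + rotational_latency_ms)

print(round(random_iops(8.5, 7200)))    # ~79 IOPS for a typical desktop drive
print(round(random_iops(4.0, 15000)))   # ~167 IOPS for a fast enterprise drive
```

This is consistent with the point above that faster drives can sustain well over 50 random operations per second.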
Is my hard drive defying physics?
Hard drives act as gyroscopes. They can be pretty strong ones -- my 7200 rpm external hard drive offers significant resistance when I try to move it while it's on. According to common sense, and to the basic laws of physics, for the hard disk/gyroscope to exert a force that I have to spend energy to overcome, the hard drive will have to spend energy. So, my question is: if, when I turn the hard drive, it spends energy on resisting my force, wouldn't that slow the disk down? And wouldn't a small decrease in speed screw up any data that is being read or written at the time? I've moved the disk around while transferring data to it, and the data came out fine. How? Am I misunderstanding something, or is my hard drive choosing to ignore physics? Twilight Realm 19:15, 29 October 2006 (UTC)
- The platter speed is controlled by a servo mechanism, so whilst twisting the drive would slow it slightly for various reasons, the servo mechanism will just adjust the torque and keep it within the specified speed range that the drive head can successfully accept and decode. Thus any extra energy needed comes from the power supply. WolfKeeper 19:39, 29 October 2006 (UTC)
- You're doing all the work! Gyroscopes don't spend kinetic energy producing their effects because they don't actually "create" any forces; rather, they redirect external forces. When you put a force on the disk in one axis, it reacts by trying to precess (move) in another axis 90° away from the imposed force. So you're not only applying the initial force, you're also resisting the precessional force. Or your cables would get very twisted very fast.
- You can see this with a toy gyroscope or the gyroscope demo at the Exploratorium. No matter what you do to move the gyroscope, it doesn't slow down noticeably. (Well, except for the additional friction loads that you impose on the gyroscope's bearings.)
- Gyroscopic forces in the ideal case do no work. If (without loss of generality) the hard drive spin axis is up and you apply a torque about the east/west axis, then the hard drive's response is to twist about the north/south axis, at right angles to the direction you torqued it. Work is the dot product of the applied force and motion, or of the applied torque and twist. Since the motion is at right angles to the force, the dot product is zero. zowie 06:05, 30 October 2006 (UTC)
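For anyone who wants a feel for the magnitudes involved, here is a minimal sketch; the mass, radius and twist rate are assumed ballpark values for a 3.5-inch platter stack, not measurements of any real drive:

```python
import math

# Ballpark gyroscopic figures for a spinning platter stack (all values assumed).
rpm = 7200
omega = rpm * 2 * math.pi / 60        # spin rate in rad/s (~754 rad/s)

mass = 0.04                            # kg, assumed platter-stack mass
radius = 0.0475                        # m, roughly a 3.5-inch platter radius
moment = 0.5 * mass * radius ** 2      # solid-disc approximation, kg*m^2

angular_momentum = moment * omega      # kg*m^2/s

# Twisting the drive about an axis perpendicular to the spin axis at, say,
# 1 rad/s produces a reaction (precession) torque of L * twist_rate.
twist_rate = 1.0                       # rad/s, assumed hand-twist rate
reaction_torque = angular_momentum * twist_rate

print(f"L = {angular_momentum:.3f} kg*m^2/s")
print(f"reaction torque at 1 rad/s twist = {reaction_torque:.3f} N*m")
```

The torque comes out small in absolute terms, but it is quite noticeable when applied through the shell of an external enclosure, which matches the "significant resistance" described above.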
Capacity section, images
The recent reorg seems to have introduced a rather large and detailed section on the differences between the 1000/1024 bases for disk size prefixes. This is important to note, I suppose, but not in such heavy and fervent detail. It should be seriously cut back. I don't think it needs its own subsection.
The article now also has a large number of large images. They're interfering with the section edit links. They should be moved, shrunk or removed. Chris Cunningham 18:05, 30 October 2006 (UTC)
Seagate/Maxtor
In the article, it states that Seagate bought out Maxtor in December 2006. Is this true?
Yes. References:
lifetime
How long does a typical drive last? What can I do to extend the useful lifetime of a hard drive?
The "MTBF levels up to 1 million hours" is misleading without further explanation.
An old year 2000 FAQ mentions:
- "If I purchase a drive with an MTBF of 1,000,000 hours (114 years), can I expect the drive will operate without failure for 1,000,000 hours?"
- "No, because the drive will reach end-of-life before reaching 1,000,000 hours. For example, a continuously operated drive with a five-year useful life will reach end-of-life in less than 45,000 hours. But, theoretically, if the drive is replaced with a new drive when it reaches end-of-life, and the new drive is replaced with another new drive when it reaches end-of-life, etc., then the probability that 1,000,000 hours would elapse before a failure occurs would be greater than 30 percent in most cases."
Is that still true today (in 2006)? If so, it seems to indicate that "end-of-life"/"useful life" (currently neglected in this article) is far more important than "MTBF".
The Hard disk article currently claims that "While the disk is spinning, the heads are supported by an air bearing and experience no physical contact or wear."
To maximize disk lifetime, should I leave my computer turned "on" 24/7, and make sure my screen saver is set to "never" turn off the hard drives ?
--70.177.117.132 06:20, 19 November 2006 (UTC)
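- The "greater than 30 percent" figure in the quoted FAQ is consistent with the usual constant-failure-rate (exponential) model, assuming drives are replaced at end-of-life as the FAQ describes. A quick sanity check of the arithmetic:

```python
import math

# Under a constant-failure-rate (exponential) model, the probability of seeing
# no failure over time t is exp(-t / MTBF). This only checks the arithmetic of
# the quoted FAQ; it says nothing about any particular drive.
mtbf_hours = 1_000_000
t_hours = 1_000_000  # total elapsed time, replacing drives at end-of-life

p_no_failure = math.exp(-t_hours / mtbf_hours)
print(f"P(no failure in {t_hours:,} hours) ~= {p_no_failure:.2f}")  # ~0.37, i.e. > 30%
```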
- Drives are rated for a certain number of "touchdowns" across their life, and drives meant for frequent start-stop operation (in laptops, iPods, etc.) tend to be rated for many more touchdowns than drives meant for operation in a big server farm. On the other hand, the spindle bearings wear continuously as the drive spins, so there's a tradeoff between the life limitations caused by the heads touching down and the life limitations caused by the spindle bearings constantly wearing. Check the specs for your particular drive for the exact numbers.
why are all cylinders the same size
I am a member of a Google group named "Sun Solaris Helpdesk 24x7" and the following question was posted:
hi, i didn't really get what my tutor told me about the cylinders. my question is, if sectors are the smallest part of the memory (512 bytes), and many tracks of sectors form a cylinder, then how could the size of the cylinder be 1.99 mb. It <http://mb.it/> should vary on each track. can anyone explain me please.
I attempted to answer it, thus:
If you are thinking in terms of the physical size of the cylinders, then yes, they get physically larger (measured in linear terms such as millimeters or inches) as they get further from the spindle. However, if you measure the cylinders in ANGULAR terms (degrees or radians) then they are all the "same size." Likewise (obviously) for the spin rate: the outer cylinders move much faster under the head (measured in mm/sec or miles/hour or any other linear units you like) but all cylinders turn the same number of RPMs.
So, since the sector reads and sector writes are controlled by TIME rather than distance, the sectors near the spindle are physically shorter than those further from the spindle. Nevertheless, each sector subtends the same angle and each sector takes the same amount of time to read or write.
This means that the sectors near the spindle are the ones with the highest areal bit densities while those further from the spindle have lower areal bit densities. This sounds as if there is "wasted space" in the outer (lower density) sectors, but since the read operation (and I guess the write operation in some sense) is a time-based integrator, the amount of integration time is exactly the same for all sectors.
P.S. I was unable to surf to your link <http://mb.it> -- if you really want the group to see it, you need to fix it somehow.
This apparently was not clear enough, resulting in this follow-up question:
it's really nice to hear from you, thank you for your kind reply. but still i'm not clear on how the cylinders are exactly formed, i.e. are the cylinders near the spindle the same size as the cylinders on the outer track (1.99 mb)? if so, how? doesn't the size differ according to the size of the track? if u could help me? thanks in advance.
So, I tried again, and the following seems to have satisfied the questioner:
All the cylinders on a disk are the same "size" in terms of bytes stored. If the inner cylinders hold 1.99 Mbytes each, then the outer cylinders hold 1.99 MBytes each. The reason for this is that each one is the same "size" when measured in terms of time. Years ago "all" disks turned at 3600 RPM, so let's work with that number even though these days many disks turn at 7200 RPM (or perhaps higher). An inner cylinder takes 1/3600 second to rotate once, and an outer cylinder takes 1/3600 second to rotate once.
To keep things simple, let's pretend that your disk has only one platter and thus only one track in a cylinder. If your disk has 4096 sectors in a track, any given sector anywhere on the disk passes under the read/write head in 1/(3600*4096) second. That is ~67.8 nanoseconds no matter how far the sector is from the spindle. During that brief time period, 512 bytes of data (4096 bits of data plus some error correction codes, etc.) pass under the read/write head. So, any given bit of information anywhere on the disk is under the read/write head for less than 67.8/4096 nanoseconds (less than 17 picoseconds).
During that 17 picoseconds, the read/write head must build up enough charge in some sort of internal capacitor (I guess that is how they work at some level) to permit a reliable decision about whether the bit is a one or a zero. ===>That is the true physical limitation that controls the entire design.<=== The fact that more physical disk material passes under the read/write head during 17 picoseconds when the head is positioned over an outer cylinder has nothing to do with anything. The read/write head still has the same tiny fraction of a second to "integrate" (build up charge or whatever) and decide if it has just passed over a 0 or a 1.
I realize that I took some liberties with my descriptions and examples, and I realize that none of the above is polished enough to go straight into the main article. Still, I think an example or two with some actual numbers plugged in could make things clearer for many readers. In particular, the question of "why don't the physically larger outer tracks hold more data?" should be answered in the article (as I have tried to do).
[I just realized that I slipped from RPMs to revolutions per second somewhere in the midst of my example. Sorry about that. The time to read/write one bit should be on the order of 1 nanosecond rather than 17 picoseconds. Still, the basic idea is the same...]
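With the RPM slip corrected, the same example works out as follows (using the same assumed round numbers from the discussion above: 3600 RPM, 4096 sectors per track, 512-byte sectors):

```python
# Per-revolution, per-sector and per-bit timing at constant angular velocity.
# 3600 RPM, 4096 sectors/track and 512-byte sectors are the illustrative
# assumptions used in the discussion above, not real drive parameters.
rpm = 3600
revs_per_second = rpm / 60                  # 60 revolutions per second
sectors_per_track = 4096
bits_per_sector = 512 * 8

time_per_rev = 1 / revs_per_second                  # ~16.7 ms
time_per_sector = time_per_rev / sectors_per_track  # ~4.07 microseconds
time_per_bit = time_per_sector / bits_per_sector    # ~1 nanosecond

print(f"revolution: {time_per_rev * 1e3:.2f} ms, "
      f"sector: {time_per_sector * 1e6:.2f} us, "
      f"bit: {time_per_bit * 1e9:.2f} ns")
```

So the head gets roughly a nanosecond per bit, regardless of how far the track is from the spindle, which is the point the example is making.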
I am assuming that what I have written can be used freely even though it was previously posted on the group website mentioned above.
This explanation sounds reasonable. But I have heard several other explanations of "the" physical limitation. These explanations that try to explain why hard drives use constant angular velocity and CDs use constant linear velocity all seem reasonable, but it is not clear to me which one is the real explanation. Using constant linear velocity packs the most information on a given area, so that's why it is used on CDs. But why don't hard drives use it? Some possible explanations:
- If we keep the disk spinning at a constant RPM (as a few very recent computer CD drives do), it requires complex read electronics to handle the varying bit rate of CLV. It's cheaper to use simple constant bit rate electronics and CAV and throw in another platter to make up for the reduced data capacity.
- If we vary the speed of the disk (as all audio CD players do) so the read electronics see a constant bit rate, every time you move the arm to a different cylinder, you have to adjust the spindle speed. It takes a long time to speed it up or slow it down to exactly the right data rate. People prefer fast hard drives over really slow hard drives, even if the really slow hard drives store 10% more data.
- If we vary the speed of the disk (as all audio CD players do) so the read electronics see a constant bit rate, every time you move the arm to a different cylinder, you have to adjust the spindle speed. Constantly adjusting the spindle speed puts additional stress on the spindle motor, making the hard drive wear out much faster, and no one wants that.
- there is no good technical reason. Hard drive manufacturers started out using CAV because it was simple and it worked. CLV is technically better, but people are reluctant to try something new when what they already have works adequately.
- ... or perhaps some other explanation is the "real" explanation.
--68.0.120.35 21:35, 24 November 2006 (UTC)
- Let's be clear about this: Hard drives are CAV because nobody wants to build a servo that can ramp the speed of the disk platter stack up and down dozens of times per second as the head comb seeks in and out. Even most modern high-speed CD and DVD drives are CAV. But for hard drives, it doesn't matter with regard to density, because disks use what is known as zoned recording. As you move out to the tracks farther from the spindle, the disks record more and more blocks per track, keeping the bit density more or less constant (although the bit rate at the head climbs by the ratio of the diameters).
- Really? Hard disk drives use variable bit-rate to pack more data onto the outer cylinders? That's news to me (and, if true, should *definitely* go into the article). Do you have a reference ? --68.0.120.35 08:45, 28 November 2006 (UTC)
- See page 115 of this (rather long) IBM PDF document. [2]. But, in brief:
- Zoned Bit Recording
- Because track lengths at the inner edge of a disk platter differ from those at the outer edge, before zoned bit recording the bit density was higher on the inner tracks than on the outer tracks. Sector lengths were the same at the inner and outer edges, so less data could be stored at the outer edge. Also, peak-detection read channels could not cope with the higher data rate at the outer edge (because of the higher linear velocity of the platter surface at the outer edge), thus limiting the amount of data that could be effectively stored. Zoned bit recording takes advantage of the differing lengths of the tracks and keeps the areal density constant across the platter.
- Zones
- The Ultrastar 2XP platters are divided into eight zones. Since the areal density remains the same, the number of sectors per track increases toward the outer edge of the platter. Zone 1 is located at the outer edge, and Zone 8 is located at the inner edge.
- The main advantage of using zoned bit recording is increased capacity without increased platter size. However, a side effect of having higher density at the outer edges is a higher data rate off the platter due to the higher linear velocity at the outer edge. The maximum data rate at the inner edge is 10.3 MB/s and at the outer edge the maximum is 15.5 MB/s. Also, more data is physically stored toward the outer edge; almost 60% of all data resides in Zones 1 and 2.
- Regarding the terminology: "sectors" are the basic unit of storage, though they don't necessarily have to be 512 bytes. (Practically speaking, only special server-class drives had anything other than 512 bytes.) A "track" refers to all the sectors at a given radius. Well, since strictly speaking written servo isn't necessarily precisely circular, a track would really be sectors identified by the servo subsystem as being at a given track, which is something of a circular definition. "Cylinder" used to mean all tracks down the disk stack at the same radius, but it's now pretty much synonymous with "track".
- I'm not familiar with all the reasons that the developers of compact disc selected CLV, but likely it was simplicity for the playback circuitry: when the data rate is constant, one doesn't have to do any weird buffering due to variable data rate. Furthermore, since CDs weren't designed to be random-access devices, as playback is sequential, CLV didn't harm performance. In the case of HDDs, if the spindle speed were CLV, seek times would be pretty miserable. (Note that with embedded servo, servo frequency and number is still constant across the disk.)
- Since HDDs are CAV, that means that the linear velocity at the OD is greater than that at the ID. Put another way, if data rate is maintained as being constant, then the BPI is much higher at the ID, which limited capacity. In general, one wants BPI to be fairly constant across the disk to maintain optimal recording characteristics. Since for a given unit of time, the arc length at the OD is longer, commensurately data rate is set higher at the OD than at the ID. Older HDDs before ZBR had constant data rate (simpler electronics because of the technology level), and thus wasted potential density at the OD. The additional complexity of ZBR was worth the capacity increase.
- A given design point has a certain areal density, which is equal to the TPI times BPI. TPI is fixed across the disk, so to get a constant AD, one needs a constant BPI. Still with TPI so high these days, aerodynamic effects are prompting investigation for relief, such as variable TPI (ick!) and sealed HDDs filled with a lower viscosity gas.
- Not sure if that added any significant explanation.... GMW 07:29, 9 January 2007 (UTC)
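To put rough numbers on the ZBR point, here is a minimal sketch of how data rate scales with track radius at constant spindle speed and roughly constant linear bit density. The RPM, radii and bits-per-millimetre below are assumed round values, chosen only so the output lands near the Ultrastar 2XP figures quoted above; they are not taken from any datasheet.

```python
import math

# At constant angular velocity (CAV) with roughly constant linear bit density,
# the data rate under the head scales with track radius:
#   data_rate = BPI * linear_velocity = BPI * 2*pi*r * (RPM / 60)
# All numbers below are assumed, illustrative values.
rpm = 7200
bits_per_mm = 5500  # assumed linear density along the track

def data_rate_mb_per_s(radius_mm: float) -> float:
    linear_velocity_mm_s = 2 * math.pi * radius_mm * rpm / 60
    return bits_per_mm * linear_velocity_mm_s / 8 / 1e6  # bits/s -> MB/s

for label, radius in [("inner zone", 20.0), ("outer zone", 30.0)]:
    print(f"{label} (r = {radius:.0f} mm): ~{data_rate_mb_per_s(radius):.1f} MB/s")
```

The output (roughly 10.4 MB/s inner and 15.6 MB/s outer) sits near the quoted figures, and the 3:2 ratio simply reflects the ratio of the assumed radii: more sectors fit on the longer outer tracks, and they pass under the head faster.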