Wikipedia:Reference desk/Archives/Science/2015 March 22
Science desk
< March 21 | << Feb | March | Apr >> | March 23 >
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
March 22
Iridium satellites - flash, etc
There were to be two bright Iridium flares tonight (-6 and -8 magnitude) in about the same part of the sky, 90 seconds apart. See this and this. I tried to photograph them. There were some thin clouds blocking a lot of the stars, but I did see the second one through the clouds. I missed the flares in the photograph, but I'm wondering if I got the trails. I couldn't see many stars so I got what I thought was Jupiter in the frame, but I think I was aiming too far to the left.
Are Iridium satellite trails solid, or do they have small flashes periodically? Along the bottom is something making regular pulses. I didn't see any airplanes. Over to the bottom left there are two, in parallel, with one flashing twice as often as the other. I can't identify the stars well enough to tell if this matches the paths of the Iridium satellites. Can anyone tell? Bubba73 You talkin' to me? 02:02, 22 March 2015 (UTC)
- The trail of an Iridium flare (I've seen several) is solid, because it's caused by a single reflection event. It's also brighter than Jupiter (as those magnitude figures demonstrate), and lasts no more than a few seconds, which doesn't seem consistent with your photo.
- A rapidly spinning satellite could give a pulsed trail something like this, but most satellites don't spin rapidly because they wouldn't be able to function: nearly all the ones I've seen while using astronomical telescopes or with the naked eye (it's surprising how many there are if you look out for them) have a steady appearance that fades out slowly and smoothly when they pass into the Earth's shadow.
- This looks to me very much like aircraft lights – they may have been too faint to see with the naked eye at the time. {The poster formerly known as 87.81.230.195} 2.218.13.204 (talk) 03:39, 22 March 2015 (UTC)
- I know I missed the flares - I saw one of them visually and they were out of the frame. I was wondering if the path of dots was the satellite before or after the flash. It must have been an airplane I didn't see. Bubba73 You talkin' to me? 04:06, 22 March 2015 (UTC)
- However, the cropped color version doesn't show any color, as airplane lights should. Bubba73 You talkin' to me? 04:21, 22 March 2015 (UTC)
- And if these were regular running lights, they would be streaks because of the time exposure. Therefore, if they are on an airplane they must be white strobes. Do any airplanes have such lights? Bubba73 You talkin' to me? 16:21, 22 March 2015 (UTC)
- Planetarium software such as Stellarium can show the location of satellites at a given date/time/location. Stellarium should be able to auto-point to a named satellite, but I can't seem to get it to work. LongHairedFop (talk) 12:47, 22 March 2015 (UTC)
- I have the wonderful Stellarium program, but I haven't used it for satellites. Bubba73 You talkin' to me? 16:21, 22 March 2015 (UTC)
- I used Stellarium to show last night, and I changed my location from just my city to my actual location to within a fraction of a minute, but it didn't show the Iridium satellites. Maybe there is an option I don't know about. Bubba73 You talkin' to me? 17:08, 22 March 2015 (UTC)
- Put your mouse over the bottom menu; there should be a stylised satellite icon to the left of the time-control cluster. This turns on satellite highlighting, and the short-cut key is Ctrl-Z. To pan to a satellite, open the Config menu (F2), select the Plug-ins tab, and then Satellites from the list on the left of the dialog. Ensure that "Load at Start up" is checked; if not, check it and then restart Stellarium. After restarting (if needed), press the Configure button, and a second dialog will open. Select the Satellites tab, and scroll down the list to the Iridium satellites. Double click on a satellite's name to pan to it, and then follow it as it moves. Iridium flares don't seem to be simulated. HTH LongHairedFop (talk) 21:45, 22 March 2015 (UTC)
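- The same lookup can also be scripted instead of done through the GUI. Here is a minimal sketch (an illustration, not anything from this thread) using the Python Skyfield library to compute where a satellite is at a given time; the TLE file name, satellite name, observer coordinates, and time are placeholders you would substitute with your own:
 from skyfield.api import load, wgs84
 ts = load.timescale()
 # "iridium.txt" is a placeholder path; current TLEs for the Iridium
 # constellation can be downloaded from CelesTrak and saved locally.
 satellites = load.tle_file("iridium.txt")
 by_name = {sat.name: sat for sat in satellites}
 sat = by_name["IRIDIUM 4"]  # name as it appears in the TLE file
 observer = wgs84.latlon(32.0, -81.0)  # placeholder observer lat/lon
 t = ts.utc(2015, 3, 22, 1, 30, 0)  # predicted flare time (UTC)
 alt, az, distance = (sat - observer).at(t).altaz()
 print("Altitude %.1f deg, azimuth %.1f deg" % (alt.degrees, az.degrees))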
- Thanks, fascinating. Having the Iridium names in very dark blue on a black background isn't a good choice. I got it to follow Iridium 4, which is the one Heaven's Above said it was. It was following it, but it wasn't up at the right time. I checked my location and date/time, and it still didn't match. Then something went wrong and it quit following I4. I put the time and date in and watched the sky and it showed other satellites going by, but not the Iridiums. I'll try again the next time a clear night and an I-flare coincide. Bubba73 You talkin' to me? 03:09, 23 March 2015 (UTC)
Life expectancy pie chart
I would think our current life expectancy could be divided up into a pie chart, starting with a certain number of years our hunter/gatherer ancestors had, then adding in some years for water and sewage treatment, modern food processing (such as pasteurization of milk), vaccines, emergency medical treatment, etc. So, is there a chart somewhere that attempts to break down how many years each of these technologies adds to our lives? StuRat (talk) 03:48, 22 March 2015 (UTC)
- For something in the ballpark see Spiegelhalter's work on microlives and related literature. However, my understanding is that all this work is valid only for small perturbations of one factor at a time around the mean while other factors are kept constant. So one cannot really add these numbers up to say something akin to current life expectancy = base life expectancy + 10 years due to sanitation + 5 years due to vaccines etc, and in fact I'd be highly skeptical of the scientific validity of any "popular" source presenting such a chart (even the idea of microlives is "intended for popular rather than scientific consumption"). To make an analogy: we can surely measure the marginal change in a computer's performance (however one measures it) from adding 1GB of RAM vs a faster hard-drive vs a speedier processor, but cannot conclude from that that computer performance is 50% due to processor + 30% due to RAM etc, since those factors are neither independent nor additive without the small-perturbation assumption.
- But then again, maybe someone has put in the rigorous thought and work required to come up with such a chart and some refdesk responder will find it, but beware of clickbait! Abecedare (talk) 04:32, 22 March 2015 (UTC)
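- To make the non-additivity concrete, here is a toy model in Python (the numbers and functional form are invented for illustration, not real epidemiology). Each intervention scales an overall hazard rate, and the one-at-a-time gains do not sum to the combined gain:
 def life_expectancy(sanitation, vaccines, trauma_care):
     # Invented interacting model: each factor scales a hazard rate.
     base_hazard = 1.0 / 30.0  # rough hunter-gatherer baseline
     hazard = (base_hazard * (1 - 0.4 * sanitation)
               * (1 - 0.3 * vaccines) * (1 - 0.2 * trauma_care))
     return 1.0 / hazard  # LE = 1/hazard in this toy model
 baseline = life_expectancy(0, 0, 0)
 marginals = [life_expectancy(1, 0, 0) - baseline,
              life_expectancy(0, 1, 0) - baseline,
              life_expectancy(0, 0, 1) - baseline]
 combined = life_expectancy(1, 1, 1) - baseline
 print("sum of marginal gains: %.1f years" % sum(marginals))  # ~40.4
 print("actual combined gain:  %.1f years" % combined)  # ~59.3
- Since the two disagree, pie-chart slices built from one-at-a-time estimates wouldn't sum to the whole.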
- These stats need careful handling. Most actuarial tables list median remaining years based on current age. For example, the average remaining years of a newborn may be 72 years. The remaining years of a 60 year old might be 25 years. It would not be surprising if 50 year old hunter/gatherers had a median remaining lifespan the same as today. Infant mortality is the largest source of the disparity in lifespan. The human lifespan has not improved much; to wit, the oldest humans seem to reach a rather constant age. --DHeyward (talk) 08:10, 22 March 2015 (UTC)
- Yeah: I'd expect that if you managed to survive to 50 in those times, your odds of a couple more decades should be about the same as they are now. Keyword "if", of course. Today that is no longer such a keyword, and as a result life expectancy shoots up without maximum lifespan rising nearly as much. Double sharp (talk) 15:05, 22 March 2015 (UTC)
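- A toy cohort life table in Python (made-up mortality rates, not real data) shows the mechanism DHeyward describes: heavy infant mortality drags life expectancy at birth down to the mid-30s while leaving remaining life expectancy at age 50 nearly untouched:
 def remaining_life_expectancy(qx, from_age):
     # qx[i] is the probability of dying between ages i and i+1.
     alive, total_years = 1.0, 0.0
     for q in qx[from_age:]:
         alive *= 1 - q
         total_years += alive  # each surviving year adds ~1 year lived
     return total_years
 # 30% first-year mortality, then a low flat hazard until old age.
 qx = [0.30] + [0.01] * 49 + [0.05] * 40
 print("LE at birth:  %.1f years" % remaining_life_expectancy(qx, 0))  # ~35
 print("LE at age 50: %.1f more years" % remaining_life_expectancy(qx, 50))  # ~17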
- Not quite an answer to your question, and not exactly the fruit of intensive searching ;), but life expectancy gives some clues. IBE (talk) 09:51, 22 March 2015 (UTC)
- NB I did actually spend some time searching for a relevant WP article on life expectancy (at birth and at age 10, or something like that). I've seen it before, and I'm sure it's around somewhere, so if someone could dig that up, I'd really appreciate it myself. IBE (talk) 09:56, 22 March 2015 (UTC)
- I'm struggling with how/why a pie chart would be appropriate. Pie charts typically illustrate proportions which "add up" to a total, which your data would not do. Surely some kind of regression analysis would be more suitable? Matt Deres (talk) 11:25, 22 March 2015 (UTC)
- Indeed. For example, you owe nearly a sixth of your life expectancy to not playing Russian roulette as a kid. :) Wnt (talk) 14:32, 22 March 2015 (UTC)
- And we all owe nearly all of our life expectancy to not jumping off buildings. XD Seriously, though, maybe this would be a better idea as a graph plotted in years than a pie chart? That way you could list exactly how many years each advance or activity adds to your life expectancy. You could even have facetious negative entries for "playing Russian roulette", "cordless bungee jumping", and "neglecting that human and piscine responses to severe environmental hazards are similar and usually death". Double sharp (talk) 15:05, 22 March 2015 (UTC)
- The pie chart makes perfect sense, given the way the question is phrased. It seems clear to me that Stu wants a chart that starts with a hunter-gatherer lifespan, and assumes that various developments, historically sequenced, always add to that, as if independently. This seems to involve no assumption of independence as a medical/statistical fact, because it looks rather like a whim of the OP, for interest's sake. Then it makes sense. IBE (talk) 16:46, 22 March 2015 (UTC)
- I see what you're saying, though that would only be useful if all the items were positive or you only included positive items; the pie chart could not show any decreases. That, in turn, would make all the other numbers suspect. Just as a theoretical, let's say that the LE of a hunter-gatherer was 30 years and the switch to simple agriculture dropped the LE to 25; how would that be shown? And if the changes to food preparation (say, the shift to "safe" beer from "unsafe" water) then boosted it back to 32, would it show as an addition of 2 or of 7? Done as a line graph, all that would show correctly and would allow for easy comparisons with other populations (which you'd really need to do because the effects of these things were hardly homogeneous across the world). Matt Deres (talk) 16:49, 23 March 2015 (UTC)
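- Matt's hypothetical as data (his numbers): the negative step is exactly what a pie chart cannot show, and the "+2 or +7" ambiguity is just a choice of reference point for the deltas:
 timeline = [("hunter-gatherer baseline", 30),
             ("simple agriculture", 25),
             ("safe beer for unsafe water", 32)]
 base = timeline[0][1]
 prev = None
 for event, le in timeline:
     if prev is None:
         print("%s: LE %d" % (event, le))
     else:
         print("%s: LE %d (%+d vs previous, %+d vs baseline)"
               % (event, le, le - prev, le - base))
     prev = le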
- If my assumption that hunter-gatherers had the lowest lifespan is incorrect, then start with early agricultural society, or whoever did have the lowest lifespan. And temporary drops in lifespan, such as those due to the Black Plague, need not be noted, only items which contribute to lifespan now. StuRat (talk) 17:16, 24 March 2015 (UTC)
Windshield wipers
Why is it that windshield wipers are included for rear windows when they are close to or actually vertically oriented (as on SUVs) but not when they are slanted (as on passenger cars)? It would seem to me that accumulation of precipitation is less likely when the window has close to an infinite slope. Thanks! DRosenbach (Talk | Contribs) 13:51, 22 March 2015 (UTC)
- Experience? I'd have to say that on the rare occasions I lease a sedan, a rear wiper hasn't seemed necessary. Greglocock (talk) 14:06, 22 March 2015 (UTC)
- See these results for a simple search for 'rear windshield wipers sedan'. [1] [2] [3] [4] [5] [6] [7]. There are a lot more if you aren't satisfied. Perhaps part of the confusion comes from the examples given. AFAIK and as per our article, windscreen wiper#Rear wipers, many hatchbacks and station wagons do have rear window wipers nowadays. As Greglocock has hinted at, of passenger cars it's most commonly the sedan (automobile) that doesn't. Nil Einne (talk) 14:17, 22 March 2015 (UTC)
- As laid out in this patent's Description, a near-vertical rear window accumulates dust. -- Scray (talk) 14:23, 22 March 2015 (UTC)
- Thanks for that confirmation! I've now improved our article, since I'm sure DRosenbach and I aren't the only ones to have wondered about this. --Steve Summit (talk) 15:03, 22 March 2015 (UTC)
- (ec) I believe [warning -- this is unfounded speculation] that it ends up having to do with aerodynamics. A sedan's rear window is, yes, more horizontal, and nominally catches more rain falling down. But (a) raindrops are at least transparent, and anyway (b) the slipstream of air tends to carry them away fast enough that vision out the rear window isn't badly impeded.
- For an SUV or something with a more vertical rear window, on the other hand, there's much less of a slipstream to carry the water away; instead there's an eddy where stuff can collect. And the "stuff" that collects includes not only rainwater, but dust and other road gunk. So the rear window gets filthy pretty fast.
- I find that when I'm driving something with a vertical rear window, I need the rear window washer just about as much as the rear window wiper. --Steve Summit (talk) 14:30, 22 March 2015 (UTC)
- Yes, it has to do with the non-aerodynamic shape of those vehicles, which causes turbulent flow of air rather than laminar flow. This in turn causes dust to get caught in those eddies and slam into the rear window and stick, whereas an aerodynamic sedan has nice laminar flow, and dust hits at a shallow angle and is immediately blown back off. If you watch the rear window of an SUV while driving it in the rain, you may even see the rain pile up in the middle of the window, since its weight is countered by the updraft. StuRat (talk) 01:20, 23 March 2015 (UTC)
Angular momentum
What is the total angular momentum of the universe? And is it the same as before the big bang?--109.146.20.31 (talk) 15:31, 22 March 2015 (UTC)
- I suspect you can't measure the angular momentum of the entire universe, because you would need a frame of reference outside the universe to compare any possible rotation to, and we don't have such a frame of reference, because if we did it would automatically be part of the universe too! So I think the answer is that the question isn't meaningful, though I'm hoping that a real expert (disclosure: I'm not really a physicist) will come along and check my reasoning! RomanSpa (talk) 15:53, 22 March 2015 (UTC)
- If the universe were rotating in a normal, three-dimensional sense, we would expect there to be an axis of rotation, where distant stars would experience no unusual force relative to us, and an equator, where we or they or both are being pushed away by "centrifugal force". I think. Of course, rotating relative to what is another question, one which makes the situation harder to grasp.
- I wonder though if the universe/matter in it could be rotating in regard to the alleged compactified dimensions? Would that mean anything? Wnt (talk) 16:11, 22 March 2015 (UTC)
- The total angular momentum of the observable universe, i.e. that part of the universe that we can see, is usually assumed to be zero (or near enough as to make no difference). This is in keeping with the cosmological principle. One doesn't have to be outside the universe to measure the angular momentum. In principle one can measure the angular momentum relative to our location, though in practice it isn't an easy thing to do, and as far as I know everyone who has tried to do it directly has gotten values consistent with zero. This paper [8] argues that sensible space-time metrics require global rotation less than 1 revolution per 200 trillion years. At that rate, the total angular momentum would be ~10^82 J·s. More recently [9], observational studies of galaxies suggested there might be a slight deficit of clockwise vs. counterclockwise rotation, which would be consistent with a small (but non-zero) global angular momentum. However, I would say the jury is still out on whether the universe has any net rotation or not. Dragons flight (talk) 18:32, 22 March 2015 (UTC)
- I corrected an error above where I said billion (10^9) instead of trillion (10^12). Dragons flight (talk) 01:57, 23 March 2015 (UTC)
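- For a sense of where a figure like 10^82 J·s comes from, here is a crude order-of-magnitude sketch (not the paper's calculation; the mass and radius are rough textbook values and the uniform-sphere assumption is a gross simplification):
 import math
 M = 1e53  # kg, rough mass of the observable universe
 R = 4.4e26  # m, rough radius of the observable universe
 period_s = 200e12 * 3.156e7  # one revolution per 200 trillion years, in s
 omega = 2 * math.pi / period_s  # rad/s
 I = 0.4 * M * R**2  # uniform sphere, I = (2/5) M R^2
 print("L ~ %.1e J*s" % (I * omega))  # ~8e84: within a few orders of
                                      # magnitude of the quoted ~10^82,
                                      # as expected for inputs this rough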
- Found two interesting blog posts on Michael Longo's work. [10] [11] He came up with a statistically significant effect, but some people think it could be observer bias. Now why that bias is different in the southern hemisphere than the northern... I dunno, maybe they grew up watching their toilets swirl in opposite directions. :) Wnt (talk) 19:08, 22 March 2015 (UTC)
- Doesn't the universe have to have some angular momentum? After all, the rotation of the planets around the Sun comes from the rotation of the protoplanetary disk, which in turn comes from the rotation of the galaxy (possibly with some intermediary rotations in the local arms of the Milky Way). The galactic rotation then comes from the rotation of the Local Group, and eventually it all must come from the rotation of the universe, right? That is, a system with absolutely no rotation shouldn't be able to change to one with rotation, without an external torque acting on it, which I suppose is another possibility. StuRat (talk) 23:07, 22 March 2015 (UTC)
- Not necessarily. If two discs spin with opposite polarity, they would have a net angular momentum of zero. It just requires that every spin has a matched spin in the opposite direction, relative to when they started spinning. That's all that is required for conservation of angular momentum. --Jayron32 23:22, 22 March 2015 (UTC)
- True, but is a universe that starts with different parts spinning in different directions as simple to explain as a universe that starts with a single spin? StuRat (talk) 01:10, 23 March 2015 (UTC)
- If the universe starts with any inhomogeneities at all, which it must since galaxies and stars did form via gravitational collapse (i.e. structure formation), then it follows that you expect small random fluctuations in angular momentum which would get amplified as gas clouds collapse. Asking why any primordial inhomogeneities exist is a tricky thing, but it is no more tricky in the angular momentum case than in the mass density case. Dragons flight (talk) 02:07, 23 March 2015 (UTC)
- I asked this same question a few years ago: could the expansion of the universe be caused by its complex rotation in a higher dimension? The answer (at the math desk, I believe) was that a non-solid spherical body would expand in all three dimensions if rotated properly in some higher dimension, although I don't remember the details. The problem with this, as I see it, is that it wouldn't explain inflation or the current increasing rate of expansion. μηδείς (talk) 22:11, 23 March 2015 (UTC)
Black holes / white holes
Bearing in mind the infinite warping of spacetime at the centre of a BH, is it not possible that the matter could leak out into another region of the universe (e.g. by means of a white hole)?--109.146.20.31 (talk) 15:34, 22 March 2015 (UTC)
- See white hole. One idea is that a black hole is a white hole, and the matter just comes out really slowly. It becomes less and less obvious to me that anything ever "really" enters a black hole at all... A "white hole" that looks like a garbage dump for unknown aliens tossing their apple cores and radioactive waste into some black hole close to them... has yet to be discovered. Wnt (talk) 16:13, 22 March 2015 (UTC)
- Yes, all black holes are white holes, see Hawking radiation. μηδείς (talk) 19:10, 22 March 2015 (UTC)
- But note that the "really slowly" is a bit of an understatement, being on the order of 10^100 years. StuRat (talk) 23:13, 22 March 2015 (UTC)
Gravity and waves
Gravitational fields are generally steady and do not fluctuate. However, if a local gravitational field were suddenly to change, would gravity waves propagate out from the location? If so, of what form are these waves and at what speed do they travel? If at 'c', would they be electromagnetic waves? --109.146.20.31 (talk) 15:41, 22 March 2015 (UTC)
- In the case of any sudden change of the gravity field, gravity waves will be emitted. They will propagate at a speed equal to that of light. No, they are not electromagnetic waves. Ruslik_Zero 18:21, 22 March 2015 (UTC)
- You're looking for gravitational waves, not gravity waves. --Wrongfilter (talk) 18:56, 22 March 2015 (UTC)
- The correct answer is, we don't know. Gravitational waves are predicted by a theory which is well-proven in other ways, but we have yet to actually observe them happening. It is still possible that Newton was correct and that gravity acts immediately across distance. See gravitational wave for details. GoldenRing (talk) 03:58, 24 March 2015 (UTC)
Human teeth self-sharpening
Could someone drop a scholarly reference on how human teeth stay relatively sharp, including incisors? I can't see any, aside from Yahoo Answers. Specifically, is there any self-sharpening mechanism involved, as in beavers and some other animals? Neither human tooth nor human tooth sharpening seem to mention natural sharpening so far. Brandmeistertalk 16:10, 22 March 2015 (UTC)
- Human teeth only stay relatively sharp because western civilisations tend to eat softer food these days. If you started trying to grind hard grain with your teeth, they would eventually wear down.--109.146.20.31 (talk) 16:27, 22 March 2015 (UTC)
- I doubt it's the primary cause. By analogy with a food-cutting kitchen knife, incisors and some other teeth should become blunt in a couple of years (which seemingly doesn't occur). Brandmeistertalk 18:39, 22 March 2015 (UTC)
- The food-cutting knife is a bad analogy. Knives are made from steel, which is not only softer than tooth enamel but more malleable. Tooth enamel is more like glass than steel. I don't believe teeth "self-sharpen" like bird beaks or something. Human teeth, I believe, essentially don't grow after they are done; if you "blunt" your teeth, that's it, they'll be blunt, and they won't ever "sharpen" themselves, as anyone with a chipped tooth will testify. Vespine (talk) 22:35, 22 March 2015 (UTC)
- I'm not saying that human teeth work this way, but note that it is possible for something to self-sharpen without growing. This can be done by having a hard, sharp "blade" or "point" in the center, protected by softer materials on one or more sides. The softer materials wear away more quickly, revealing more of the blade or point. Think of a pencil, where the wood protects the lead and is then removed to expose more lead, when you sharpen the pencil. Even a piece of sandpaper can thus sharpen a pencil, and chewing could do the same to teeth with this design. And, as in the pencil, teeth are eventually worn down to beyond the point of sharpening. (Of course, a pencil lead is quite soft, so the analogy ends there.) StuRat (talk) 22:56, 22 March 2015 (UTC)
- You may be interested in this paper which tries to explain why human front teeth met edge-to-edge until about 200 years ago and then changed to the overbite which is normal today. The introduction of the table fork seems to be the answer. [12] Alansplodge (talk) 16:21, 23 March 2015 (UTC)
- I would weigh in to say that teeth definitely wear other teeth away, and even more so does porcelain from prosthetic crowns wear opposing enamel. Because posterior teeth (premolars and molars) are generally rubbing against each other on their occlusal surfaces, they tend to reduce the cusps until the biting surface becomes flat (as can be seen in bruxers, those who grind), while anterior teeth (incisors and canines) do not meet edge to edge in what we would term a physiologic occlusal scheme (biting edge to edge is termed a malocclusion, class III to be specific -- see the edge-to-edge bite of the pic at the top of the page on bruxism). To summarize, as a dentist, I never considered how and why teeth remain sharp. I can tell you that I personally have an edge-to-edge bite (it is my grimace gracing the top of the bruxism page) and my incisal edges are therefore worn down from their original forms, and yet I'm able to rip into and chew my food with no appreciable deficiency. Maybe it is, as 109.146 suggests above, because we eat soft foods. DRosenbach (Talk | Contribs) 15:42, 24 March 2015 (UTC)
Time it takes for a TV to turn on
Back in my day, it took quite a few seconds for a TV to come on - because it takes time for tubes to get hot and start working. But it takes our solid-state LED TV more than 10 seconds to come on - why? Bubba73 You talkin' to me? 16:23, 22 March 2015 (UTC)
- With our TV the screen comes on in about 2 seconds, but it's something like 10 seconds before all the controls work so you can change channels, zoom to allow for a different shape picture, etc. Evidently the TV has an embedded computer that handles these things and it takes some time to boot and get all the processes running. --65.94.50.15 (talk) 18:14, 22 March 2015 (UTC)
- If you are controlling your TV through a digital cable box you will notice quite a delay over on-air transmission. That delay was one of the reasons my parents got rid of cable. μηδείς (talk) 19:09, 22 March 2015 (UTC)
- Yes, we seem to have gone backwards in this regard. Originally you had to wait for the tubes to warm up, but you would get the sound almost immediately and then the cathode ray tube would slowly get brighter. Still using CRTs, some then added an "instant on" feature that would burn some energy to keep the tube warm all the time, in standby mode. But these days, electronic TVs seem to need quite a while to "boot". Even worse, some give you no indication they are on until the boot process has completed, causing people to hit the on/off button repeatedly waiting for something to happen. StuRat (talk) 22:45, 22 March 2015 (UTC)
- Not to criticize anyone, but I can't help reflecting on what it says about Western culture that we are discussing the fact that it takes a few more seconds to turn on our television. But that would take the thread into the realm of WP:Reference desk/Humanities, so I'll stop there. ―Mandruss ☎ 22:58, 22 March 2015 (UTC)
- Vacuum tube TVs took quite a few seconds for the picture and sound to work. A book I once read on the FBI in the time of J. Edgar Hoover said that he called a tech in and asked that his TV be fixed. The tech checked the TV and found everything to be working fine, and asked what the problem was. The answer was that Hoover did not want to wait for the picture and sound to come on. The tech modified the set so that it was always on, and when the on-off switch was operated it merely gated the already-present picture and sound to appear, at a considerable waste of electricity and tube life. Later, coincidentally, the "instant-on" feature was added to most TVs, also at a considerable waste of electricity. Today, even with solid state electronics, sets consume some power when supposedly turned off, as a portion of the electronics is kept energized. This is sometimes called "vampire power" or standby power. In the 1950s, when a radio or TV was turned off, it generally drew zero power: the switch actually interrupted power flow. Edison (talk) 01:44, 23 March 2015 (UTC)
- Hey, that electricity wasn't wasted—it was performing a useful function by enabling the TV to come on instantly! By the way, in my experience in the days of vacuum tubes the time to warm up was about 30 seconds; I'd call that more than "quite a few". --65.94.50.15 (talk) 04:33, 23 March 2015 (UTC)
- An LED TV contains a computer inside. The computer has to boot up in the same way as a desktop or laptop computer. It is optimized to boot as quickly as possible, but it still takes some time. Looie496 (talk) 14:24, 23 March 2015 (UTC)
- Also of note, many modern HDTVs have noticeable "input lag" which older CRTs did not. Besides the "boot-up" time issue with computers, there's also signal-processing time; older CRTs had much faster response times. This article explains some of it. Some video game systems have lag-time compensation built in to account for this problem, but it can still screw up multiplayer games. --Jayron32 14:29, 23 March 2015 (UTC)
- A television's DTV DSP is very similar to a GPU (graphics processor) in that it can have a complex graphics pipeline, perhaps including multiple full frame buffers. Each frame buffer necessarily delays output by one full frame; a state-of-the-art DTV DSP might actually have 48 or 96 frames (1 to 2 seconds or more!) of buffer - if we include the incoming analog signal, mixed signal, decompressor/decoder group of pictures, and the rendering pipeline, the video lag can easily manifest in the range of several seconds. This is a severe usability regression; but arguably is intended to improve your experience. For the majority of the time you watch the video, you aren't transitioning (starting or stopping playback or changing channels); and while your video is in steady state, it is arguably playing at a higher quality because of these digital features.
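- As a quick back-of-the-envelope check on those buffering numbers (illustrative arithmetic, not any particular TV's datasheet): each full frame buffer delays output by one frame period, so at 48 frames per second, 48 and 96 buffered frames give exactly the 1 to 2 seconds mentioned above:
 def pipeline_latency_ms(buffered_frames, fps):
     # Each buffered frame adds one frame period of delay.
     return buffered_frames * 1000.0 / fps
 for frames in (2, 48, 96):
     print("%2d frames at 48 fps -> %4.0f ms of video lag"
           % (frames, pipeline_latency_ms(frames, 48)))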
- However, in the imperfect world we live in, the folks who make DTV DSPs are severely constrained by cost, talent, and intellectual property. Few consumers want to pay big bucks to put a faster CPU in their television's embedded processor. Few people want to pay big bucks to patent-trolls and large corporate intellectual-property owners - especially when license terms may include royalties per view. Imagine if each time you changed channels - it was nearly instantaneous but also deducted a few cents from your account! Instead, television DSP engineers often must choose bargain-basement, royalty-free or low-cost alternatives to standard technologies. Next time you read a press release about a "royalty free" codec like VP9, read it with a skeptical eye - somehow somebody is paying for the technology, either up front (e.g. in the sale price of the electronic device/computer/TV) or per-use (e.g. out of your monthly television or internet service subscription fee) - or perhaps, as in some cases, the technology company bills the movie studio up-front, just in case the movie or TV show is ever distributed in a proprietary digital form at any time in the future! Regarding the talented engineers who work on DTV systems: which qualified engineers want to go work in a low-margin commodity industry dominated by companies without name-recognition, especially when the major designers and manufacturers of these systems lay off their entire engineering staff every couple of years? Invariably, the intellectual property owners will acquire a bunch of young kids from a hot new video engineering start-up, bilk the patent system, sell a lot of televisions, and repeat again when the next marketing buzz-word catches on. Anybody with experience and talent at optimizing a very complex graphical computer system can get a better job in a more stable industry.
- So: there is a sort of regression in user-interface speed, but this is Moore's law playing out in reverse of its normal pattern: the massive economic feedback loop is actually driving lower performance, even though the DTV processors are getting more features, higher transistor counts, more megaflops and more RAM. The money in the business dictates the user's perceived performance. When consumers start basing their television purchase decisions on boot time, this problem will start to be addressed.
- Nimur (talk) 15:11, 23 March 2015 (UTC)
- In response to Mandruss's scathing critique of Western Civilization, Steve Jobs famously said something along the lines of (not verbatim): "improving the boot time on the Macintosh would save lives. A 10-second improvement added up to many lifetimes over the millions of users booting their computers multiple times a day." I'm sure there's a better source with a verbatim quote. μηδείς (talk) 21:05, 23 March 2015 (UTC)
- Steve Jobs was a smart guy. If he said that, I'm sure it was tongue-in-cheek. (BTW, I can do "scathing", and that weren't it.) ―Mandruss ☎ 21:09, 23 March 2015 (UTC)
- I can think of more direct ways reduced boot time could save lives. In an emergency, if you need to boot your iPhone to call 911, then seconds could literally mean lives. And with people on VoIP, the same can be true of booting their home PC. (I now have VoIP from the phone company, and have to wait about 5 seconds for it to boot each time I lift the handset.) StuRat (talk) 21:21, 23 March 2015 (UTC)
- My comment was about time to turn on a television, not about boot times in general. I apologize for missing Medeis's non sequitur. ―Mandruss ☎ 21:40, 23 March 2015 (UTC)
- TV boot time could also cost lives, if the sky looks strange so you boot up your TV to check the weather, only to find out a tornado is headed your way too late to take cover. StuRat (talk) 21:51, 23 March 2015 (UTC)
- The quote was certainly not tongue in cheek by Jobs; it was reported in the press, and it was meant to inspire his company to perform better by making a very valid point. Every time 230,000,000 people totally waste 10 seconds, that's about the equivalent of an entire 72-year lifespan lost. The concept of the man-hour is not invalid, and those 10 seconds represent hundreds of "man-lives" wasted a year. The same thing applies to waiting for the TV to warm up. It's why cars have radios. Of course, knowing my old HP running Windows 95 would take three minutes to boot, it was a good opportunity to roll a joint and light up. But had I wasted that time passively it would have been like contributing to the stillbirth of several thousand people a year. It's the same reason you should always carry a good novel on the train or to the doctor's office. μηδείς (talk) 21:58, 23 March 2015 (UTC)
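- For what it's worth, the arithmetic in the comment above checks out (assuming one 10-second wait per person and a 72-year lifetime):
 SECONDS_PER_YEAR = 365.25 * 24 * 3600
 wasted_seconds = 230000000 * 10  # person-seconds lost
 print("%.2f 72-year lifetimes" % (wasted_seconds / (72 * SECONDS_PER_YEAR)))  # ~1.0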
- Confronted by non sequiturs and inane arguments for which I have no rebuttal, I concede defeat. ―Mandruss ☎ 22:02, 23 March 2015 (UTC)
- Mandruss, I did not insult you or resort to ad hominem--your argument is not with me, and it's not to me you concede. Here's the exact quote of Jobs at wikiquote, where it is sourced: "If it could save a person’s life, could you find a way to save ten seconds off the boot time? If there were five million people using the Mac, and it took ten seconds extra to turn it on every day, that added up to three hundred million or so hours per year people would save, which was the equivalent of at least one hundred lifetimes saved per year." μηδείς (talk) 00:58, 24 March 2015 (UTC)
- That 10 seconds waiting for a device to boot is not like 10 seconds of non-existence. That time could be spent planning what to do, grabbing a cup of coffee, stretching, etc. StuRat (talk) 02:55, 24 March 2015 (UTC)
- Jobs didn't say lives, he said the equivalent of lifetimes spent, time as you spend it during your life. And I also mentioned the point that I would use the wasted time to do something else, but I would rather not have to. Who here is in favor of longer boot times for electrical appliances? Raise your hands. μηδείς (talk) 21:52, 24 March 2015 (UTC)
- Never mind the time wasted waiting for the TVs to warm up, or the computers to boot. How many man-years of life are wasted in front of the things after they have warmed up? Mitch Ames (talk) 13:40, 27 March 2015 (UTC)
- The LCD computer monitors I've used go blank for a second or more even when changing video resolutions, so it's not a boot time or a warm-up time. It's not a seconds-long video pipeline, since the screen responds much more quickly than that when I move the mouse cursor. It happens even when the connection is digital (DVI), so it's not re-syncing to a new refresh rate. -- BenRG (talk) 06:22, 24 March 2015 (UTC)
- Are you able to investigate the root cause further? What device is generating the video output? Who makes your HDMI PHY? What software drives the PHY, what software runs on the video processor, and what software implements the display driver on the host computer? What company designs and manufactures these parts, and what company writes the software for them? Is the issue isolated to HDMI or to all video outputs? Does it manifest on all display devices, or only certain LCD monitors?
- In all seriousness, these are questions that we can actually answer... but we probably can't fix the symptom: it would be prohibitively expensive to buy the necessary (proprietary) specifications, tools, and software required to fix this. For example: try to figure out where to sign the necessary paperwork to get access to the official HDMI specification: How do you license HDMI 2.0? (I did mention, earlier, that intellectual property can be a severe hurdle for manufacturers - for an independent hobbyist, this is a non-starter! It costs $15,000 US to gain access to the forum from whence you may ask questions about licensing fees and specification access levels. Pause for a moment to contemplate the value of a free resource like Wikipedia, in context: consider the open-market price for access to a person who knows important information.)
- On a hunch, BenRG's symptom sounds like the compounding of several items: peripheral device discovery is probably handled very slowly by the host operating system (it is a rare event to plug a device in, so it is probable that the software is designed such that the polling loop is low-rate, or the management daemon is launched only on demand, or the hardware interrupt is low priority); and the device itself - including the port management peripheral and perhaps even the cable - may require firmware load, firmware boot, and physical layer calibration after boot.
- Where to start? Well, if you're a Windows user, start reading the developer documentation at MSDN's DDK webpage; if you're an Apple user, start reading IOKit Fundamentals; if you're a Linux user, O'Reilly's Linux Device Drivers text is a good starting point. Optimizing performance on hardware peripherals is a lot messier than optimizing pure software: much more of the hard work is spent researching where the performance problem even manifests, and often there is no solution available (except to change the hardware). Nimur (talk) 14:30, 24 March 2015 (UTC)
- On a related note: as several people above have already invoked Apple history without citing any sources... I happened to be involved in a discussion about device latency on a different mailing list - with my friend. He worked on the ASR33 circuit and software - before I was alive! That sent me off to read the Red Book (1978 edition) to review the device driver software (about thirty lines of code, page ~118) for the circuit that Woz built. It is stunning how very few parts and how very little software used to be involved in the manufacture of computer peripherals. Today, it is very normal for a peripheral device to have many dozen CPUs with separate software environments running on several hundred million transistors. Your HDMI cable probably has more compute horsepower than the Apple II. Faster computers, in themselves, do not make for less user-latency. Nimur (talk) 14:38, 24 March 2015 (UTC)