
Wikipedia:Reference desk/Archives/Science/2015 September 1

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


September 1


Do devices compete against each other in an electrical installation?


In an electrical installation (in a factory, or anything more serious than a home), how can we guarantee that each plug gets enough energy? If different devices get connected, and each one pulls a different amount, could a device get greedy and leave others underpowered? --Scicurious (talk) 15:13, 1 September 2015 (UTC)[reply]

Voltage droop is real; it can happen if the demanded current is greater than the circuit can supply. In residential or commercial electric installations, a voltage droop implies overcurrent and should probably trip a circuit breaker before any damage occurs.
It's hard to recognize these effects because electric utilities are so massive - but power engineers need to design the distribution lines so that the right amount of current flows to meet demand, even as the demand changes on a minute-by-minute basis. This is load balancing, and it occurs at the macro and micro levels. In a small residential installation, your voltage should stay pretty constant and circuits should trip a breaker if the utility cannot supply all of the demanded current on any specific circuit.
"How do they do it?" Well, the power station actually oversupplies power, exceeding peak demand - and burns off the excess energy as waste heat at electrical substations. This is because with today's technology, it is much easier to build devices that burn energy at quickly-varying rates, compared to devices that can produce large amounts of energy at quickly-varying rates! The ideal and perfect world would see a constant load - in other words, the demanded energy would be perfectly predictable, and the power station would produce exactly that amount of energy at exactly that rate, and no energy would be wasted. Alternately, if we could dial the power-station output on a second-by-second basis, we could produce the exact electrical demand (accounting for every individual watt each time one person flips a switch in their home)... and the grid would be wasting nothing... But our technology today doesn't let us rapidly dial the rate of power production in coal, nuclear, and especially solar plants. These production sites are very slow to change their rate of energy production - so engineers must design them to exceed peak demand, and burn off the excess. On a much slower time-scale, power engineers can nudge the average production levels by turning on individual power stations, but those decisions work on time-scales that are usually measured in hours, days, and weeks.
Nimur (talk) 15:20, 1 September 2015 (UTC)[reply]
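To make the voltage-droop arithmetic concrete, here is a minimal back-of-the-envelope sketch that treats a branch circuit as an ideal source behind some wiring resistance. The resistance, breaker rating, and appliance currents are made-up illustrative values, not figures for any real installation.

```python
# Toy illustration of voltage droop on a single branch circuit.
# The wire resistance, breaker rating, and appliance currents are made-up values.

SUPPLY_VOLTS = 120.0        # nominal source voltage
WIRE_RESISTANCE_OHMS = 0.4  # assumed round-trip resistance of the branch wiring
BREAKER_LIMIT_AMPS = 15.0   # assumed breaker trip threshold

appliance_draws_amps = [2.0, 5.0, 9.0]  # currents pulled by plugged-in devices

total_current = sum(appliance_draws_amps)
droop = total_current * WIRE_RESISTANCE_OHMS  # Ohm's law: V = I * R
outlet_voltage = SUPPLY_VOLTS - droop

print(f"total draw: {total_current:.1f} A, voltage at the outlet: {outlet_voltage:.1f} V")
if total_current > BREAKER_LIMIT_AMPS:
    print("overcurrent: the breaker trips and the whole circuit goes dark")
```

With these assumed numbers the outlet sags by a few volts and the total draw exceeds the breaker rating, so the circuit is cut rather than left to run underpowered.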
Load balancing seems to be an issue for power stations. And it seems to be about matching production to the power demand. Sometimes, though, you cannot just obtain more; you have to distribute what you have. Isn't there a local solution without tripping a breaker (which would stop the flow altogether, as I understand it)? Couldn't they just limit the flow at the point of service (at the plug) to a specific voltage × a specific current? It's not just about protecting against damage, but about distributing the resource. It would be similar to ensuring a minimum quality of service in a network. Quality of service talks about computers and telephone networks. Is there a similar concept for electrical installations (which are also networks)? — Preceding unsigned comment added by Scicurious (talk • contribs) 15:32, 1 September 2015 (UTC)[reply]
Generally, in the realm of electronic engineering, it is better to shut down completely rather than to operate "out of specification." This improves safety and reliability.
We could have an electric grid that allowed for brownouts - that's what happens in some poorer parts of the world. WP:OR: I have been in places where the lights in our home brightened and dimmed because the not-quite-utility-company was supplying random amounts of power at arbitrary voltages somewhere between 100 and 300 volts. It worked for lightbulbs - although it was probably none-too-good for the average bulb lifetime. But it also blew up the power supply on a computer, fried the television once, and it put a higher voltage than you would like to find on the water pipes, the coaxial cable that ran to the rooftop antenna, and probably anything else! (Have you ever had to manually re-tap your home's transformer? That's a device you don't see in American households too often, but in some places it's an hourly chore!)
In places that value safety, stability, and properly-functioning electronics, electrical engineers would rather simply shut down the power instead of delivering an unknown and uncontrollable quantity. Nimur (talk) 15:38, 1 September 2015 (UTC)[reply]
I get why production cannot vary as fast as demand, and the reasons for oversupply. But surely there's something better than dumping excess to heat. Are there any cases where excess power is dumped into a flywheel or Pumped-storage_hydroelectricity or any form of energy storage that might offer some return of usable power? I know those things exist, just not sure if they are used to vent off excess power and store some of it. SemanticMantis (talk) 15:40, 1 September 2015 (UTC)[reply]
It'd be great if there were - but this all comes back to scale. Your intuition deceives you because electrical energy is completely invisible and therefore hard to compare qualitatively to other forms of energy. A utility substation might be seeing fluctuations on the order of kilowatts or even megawatts, on a minute-by-minute basis. How large and massive must a flywheel be to store a megawatt-hour of energy? For reference, compare the energy budget on the regenerative braking system of a Toyota Prius... that thing recovers a couple of kilowatts by capturing the kinetic energy of a compact car traveling at highway speeds. It is unreasonable and unsafe to emplace a mechanical flywheel whose rotational inertia is commensurate with a thousand Prius-es flywheeling at variable rates.
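To put rough numbers on that scale argument, here is a quick kinetic-energy comparison; the car mass and highway speed are ballpark assumptions, not manufacturer figures.

```python
# Back-of-the-envelope comparison: kinetic energy of a moving car
# versus one megawatt-hour of grid-scale storage.
# Mass and speed are ballpark assumptions.

CAR_MASS_KG = 1400.0      # roughly a compact car
HIGHWAY_SPEED_MS = 30.0   # about 108 km/h

car_kinetic_energy_j = 0.5 * CAR_MASS_KG * HIGHWAY_SPEED_MS ** 2
one_mwh_j = 1e6 * 3600.0  # 1 MWh expressed in joules

print(f"car kinetic energy : {car_kinetic_energy_j / 1e6:.2f} MJ")
print(f"1 MWh              : {one_mwh_j / 1e6:.0f} MJ")
print(f"ratio              : {one_mwh_j / car_kinetic_energy_j:.0f} cars' worth of motion")
```

Under these assumptions, one megawatt-hour corresponds to the motion of several thousand such cars, which gives a feel for why purely mechanical storage at substation scale is a hard proposition.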
Instead of trying to invent a new machine to solve this problem, the current state of the art is to address it using statistical and computational modeling. It is vastly more efficient to throw math at the problem: statisticians can model and predict load, accurately guessing how much energy to produce. The guesses aren't always correct; but using higher-order models, they can do one better: for a given set of controllable parameters, the statistics can tell you how often the guesses will be wrong! Then, they can dial in a cost-to-error ratio, and hand that over to the bean-counters, who will decide how much energy to overproduce, which all flows back into the efficiency of the utility, and manifests as the price charged for electricity.
This explains why a statistician may have a better chance of getting a high-paying job at a power company than an electronics engineer!
Nimur (talk) 15:45, 1 September 2015 (UTC)[reply]
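A minimal sketch of the statistical approach described above - forecast the load, estimate the typical forecast error, and schedule generation with a safety margin chosen from that error. The demand figures and margin rule are invented for illustration.

```python
# Toy load-forecasting sketch: schedule generation as the forecast
# plus a margin derived from the spread of historical demand.
# All demand figures are invented for illustration.
import statistics

historical_demand_mw = [820, 845, 860, 910, 870, 835, 880, 905]  # past hours
forecast_mw = statistics.mean(historical_demand_mw)
error_spread_mw = statistics.stdev(historical_demand_mw)

K = 2.0  # margin multiplier: higher K wastes more energy but risks fewer shortfalls
scheduled_generation_mw = forecast_mw + K * error_spread_mw

print(f"forecast demand      : {forecast_mw:.0f} MW")
print(f"forecast error (1 sigma): {error_spread_mw:.0f} MW")
print(f"scheduled generation : {scheduled_generation_mw:.0f} MW")
```

The choice of K is exactly the cost-to-error trade-off mentioned above: it sets how much overproduction is accepted in exchange for fewer shortfalls.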
The scale issue goes both ways - if a megawatt-hour is a lot to store, then it's also a lot to lose! I understand the benefits of statistical load modeling, throwing math at the problem, etc. But it seems clear from your responses that there is still a lot of loss. For example, here [1] "T&D losses amounted to 239 million MWh, or 6.1% of net generation." - though I'm not sure what portion of that is loss from oversupply vs. other sources. I also understand that the infrastructure and maintenance costs of storage must be weighed against the cost of producing more power. But at some point, saving the overproduction seems like it might make sense, even if flywheels are probably not the right tool for the job.
E.g. CA and NV are looking into Energy_storage#Gravitational_potential_energy_storage on a pretty large scale - at present it seems the plan is to sink power in when costs and demand are low during the night. But conceptually the thing (or several similar smaller things) could be plugged in to the grid as a dump for excess power from many locations. And it looks like people have built 2GWh storage towers for thermal energy storage, so it's not as though we can't store relatively large amounts of energy if we're willing to pay some cost. I'm not claiming this is an obviously useful thing, I know a lot of smart people work on this, etc. I just thought it was an interesting issue, based on your description of energy loss due to necessary oversupply. Indeed, economics drives the whole decision process, so maybe if prices of electricity go up enough, capturing oversupply will be more viable. SemanticMantis (talk) 16:28, 1 September 2015 (UTC)[reply]
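For a sense of scale on gravitational storage, here is a rough sizing of a pumped-hydro reservoir holding one gigawatt-hour; the head height and round-trip efficiency are assumed round figures, not parameters of any particular project.

```python
# Rough sizing of a pumped-hydro reservoir: E = m * g * h * efficiency.
# Head height and round-trip efficiency are illustrative assumptions.

TARGET_ENERGY_J = 1e9 * 3600.0  # 1 GWh in joules
HEAD_M = 300.0                  # height difference between reservoirs (assumed)
ROUND_TRIP_EFFICIENCY = 0.75    # assumed; published figures are typically ~70-85%
G = 9.81                        # m/s^2
WATER_DENSITY = 1000.0          # kg/m^3

mass_kg = TARGET_ENERGY_J / (G * HEAD_M * ROUND_TRIP_EFFICIENCY)
volume_m3 = mass_kg / WATER_DENSITY

print(f"water required: {mass_kg / 1e9:.1f} million tonnes "
      f"({volume_m3 / 1e6:.1f} million cubic metres)")
```

Roughly a million and a half cubic metres of water lifted 300 m stores a gigawatt-hour under these assumptions - large, but well within the range of existing pumped-storage plants.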
Update: if anyone aside from me is interested in recapture/storage of oversupply, these [2] [3] are the current best refs I have found on the topic. The first one specifically discusses pumped-water storage, while the second discusses storage as part of a portfolio of techniques, and primarily pushes for "Energy sector coupling" to deal with oversupply. SemanticMantis (talk) 17:40, 1 September 2015 (UTC)[reply]
Sorry, but there is no overproduction and burning off the remainder as heat. Matching supply to demand in power systems is more sophisticated than that (and in some sense flywheels are used!). What happens is the following: Generator output for most generators can be varied quite quickly, almost quickly enough to match demand. What cannot be done quickly is to switch on a generator from cold. So there are enough generators switched on so that their maximum output is well above the current demand, even though some of them will run at less than maximum output. Having a generator switched on but on idle is inefficient, so companies will try to keep their number minimal. For some generator types, such as hydro, running below capacity may mean that water is spilled. When a switch is flicked on and momentarily the demand outstrips the supply, the first thing that happens is that the frequency (remember this is AC) of the whole system drops: to some extent the rotational energy stored in the generators of the entire power system is converted into electrical energy. Or you could think of the rotors as flywheels whose stored energy is being released. This won't last very long, but the frequency drop is picked up immediately by control logic at the generators which now increases generator output. All the operators have to do is to ensure that the currently switched-on generators have enough spare capacity to cope with the predicted demand. 86.151.3.229 (talk) 23:21, 1 September 2015 (UTC)[reply]
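The initial frequency dip described above can be estimated with the standard aggregate swing-equation approximation; the inertia constant and system size below are assumed round numbers, not data for any particular grid.

```python
# Initial rate of frequency decline after a sudden generation shortfall,
# using the aggregate swing-equation approximation:
#     df/dt = -deltaP * f0 / (2 * H * S_base)
# The inertia constant H and system size are assumed round numbers.

F0_HZ = 50.0              # nominal grid frequency
H_SECONDS = 5.0           # aggregate inertia constant (s), assumed
SYSTEM_BASE_MW = 30000.0  # total rated capacity of online generators, assumed
SHORTFALL_MW = 600.0      # sudden mismatch: demand exceeds generation by this much

rate_of_change_hz_per_s = -SHORTFALL_MW * F0_HZ / (2 * H_SECONDS * SYSTEM_BASE_MW)
print(f"initial frequency decline: {rate_of_change_hz_per_s:.3f} Hz/s")
# => about -0.1 Hz/s here; governor action then arrests and recovers the frequency
```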
Very interesting, thanks. Can you find a reference that addresses your first sentence? Most of the stuff I can see about storage of oversupply is indeed focused on things like hydro and wind electric generation. SemanticMantis (talk) 23:27, 1 September 2015 (UTC)[reply]
What is the process by which "draw more current" becomes "frequency drop"? DMacks (talk) 01:22, 2 September 2015 (UTC)[reply]
The increased electrical load causes the generator's turbine to feel more "torque". That would be the Biot–Savart law and the magnetic moment effect it creates on the dynamo. In response to this change, the control system at the power plant - a thermal regulator, or governor - can add a little more thermal energy to speed the turbine back to normal. This is called the frequency reserve, and it is only able to adapt to small variations in load. Here's a presentation from Argonne National Laboratory: Load Participation in Ancillary Services. From this presentation, you can see some charts of the magnitudes and time-scales involved in this type of regulation. The time-scale for thermal plant regulation ("spinning the generator faster or slower") is around 1 to 20 minutes - logically, think about how long it takes a machine to regulate steam pressure or increase the thermal input by increasing the fossil fuel supply. For events faster than that timescale, the excess power is burned - wasted as thermal exhaust - either on site or remotely. (In a sense, it doesn't matter if that burn-off happens prior to, or after, transduction of the fossil-fuel energy into electrical energy - it's still wasted as thermal exhaust!) For events slower than that timescale, the power is usually sourced from a remote generator (this is called the spinning reserve, because it's a power station somewhere else that's already operating). For events much slower than that timescale - say, hours or days - the utility operator will bring additional generators online (this is called the non-spinning reserve). The spinning reserve is literally and actually wasting its power until it is needed. Here's a white-paper from Department of Energy: The role of electricity markets..., which talks about demand-following and the real-time electric utility marketplace - sometimes called the "real time" market or the "five minute" market. As demand increases, prices rise, and remote sites sell power to the places that need the power. As prices drop... the power is wasted.
Here's a research proposal from Oak Ridge National Laboratory, proposing the use of certain industries - particularly aluminum electrolysis and smelting - including a case study: Providing Reliability Services through Demand Response: A Preliminary Evaluation of the Demand Response Capabilities of Alcoa Inc. The idea is simple - if we're going to be burning energy as waste heat, why not find an industry that wants to burn power, and can do it at an instantaneously-variable rate? The theory goes something like this: a tightly-integrated utility-company could remotely control how much electricity the aluminum plant is consuming on a minute-by-minute, second-by-second basis, and use the industrial plant to perform short-timescale load-leveling on the scales of thousands of megawatts.
All of these resources came from direct perusal of the Department of Energy's website, http://smartgrid.gov - a website dedicated to informing the public about emerging technologies that will improve efficiency for the generation, transmission, and distribution of utility electricity in the United States.
Nimur (talk) 23:10, 3 September 2015 (UTC)[reply]
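As a rough illustration of the cascade of reserves described above, here is a toy dispatch loop in which a sustained shortfall is met first by governor (frequency) reserve, then spinning reserve, then non-spinning reserve. The capacities and response times are invented and are not taken from the linked documents.

```python
# Toy cascade of reserves responding to a sustained shortfall.
# Capacities and response times are invented for illustration only.

RESERVES = [
    # (name, capacity in MW, minutes until fully available)
    ("frequency reserve (governors)",   150.0,  1),
    ("spinning reserve (online units)", 500.0, 10),
    ("non-spinning reserve (startup)", 2000.0, 60),
]

def cover_shortfall(shortfall_mw: float) -> None:
    remaining = shortfall_mw
    for name, capacity, minutes in RESERVES:
        used = min(remaining, capacity)
        if used > 0:
            print(f"after ~{minutes:>2} min: {name} supplies {used:.0f} MW")
        remaining -= used
        if remaining <= 0:
            return
    print(f"unserved load: {remaining:.0f} MW (load shedding / brownout)")

cover_shortfall(400.0)
```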
I gather that few of you (excepting perhaps 86.151.3.229) have ever received a paycheck as any level of engineer for an electric utility. Edison (talk) 03:39, 2 September 2015 (UTC)[reply]
Hey, I'm just asking some subquestions, discussing an interesting topic, and searching for references. Nobody here has claimed to be an engineer working for electric utilities. Is there some error here you wish to correct, or some reference you can share that might shed light on some of these issues? If so, I'd appreciate it :) SemanticMantis (talk) 03:49, 2 September 2015 (UTC)[reply]
I am not claiming to be an electric utility engineer; but in the spirit of full disclosure, some of my previous research was sponsored by a major energy conglomerate who owned and operated 60 gigawatts of regulated electric power in the United States. I do not presently receive financial compensation from that corporation except in the form of dividends paid to common-stock shareholders. It ain't much, but it pays for gas!
I don't wish to participate in a fruitless discussion about credentials, mine or those of anyone else. That isn't how Wikipedia works. Besides, how can you verify if any of my statements about my experience or credentials are true? Instead of a fallacious appeal to authority, or argument attacking my (lack of) experience as a utility engineer, it would be much more productive to address any specific factual error in any statements I have made.
If I am wrong about any detail, please inform me what that detail is, and cite a source. If you convince me that I am mistaken, I will graciously apologize for my error. I have already been wrong at least once this week! Nimur (talk) 23:10, 3 September 2015 (UTC)[reply]

Are sperm alive?


The classical definition of life is growth, nutrition, movement, sensitivity, and reproduction.

  • Sperm grow inside the nutsack.
  • They eat fructose which makes up 70% of seminal plasma according to the semen article.
  • They move/swim
  • They sense and locate the egg
  • And finally they reproduce (or at least try)

So are they considered to be a form of life?

62.37.237.16 (talk) 18:08, 1 September 2015 (UTC)[reply]

Life is a characteristic that distinguishes things that have signaling and self-sustaining processes from those that do not. Void burn (talk) 00:45, 4 September 2015 (UTC)[reply]

Alive yes, but a complete organism, no. The same is true for every other cell in your body, too. It gets trickier with plants and simple animals, where one cell can indeed form a new organism on its own. StuRat (talk) 18:51, 1 September 2015 (UTC)[reply]
Not really. They can't reproduce on their own. Alternation of generations is a plant thing, where both the diploid and haploid forms of the plant can live independently and divide. Of course, this is a debate over human classifications (see also: are viruses alive). Nature doesn't care what we call things. But, it is an important distinction that gametes can't live and reproduce independently. This is still true in plants; the diploid sporophyte produces spores, which grow into haploid gametophytes that produce gametes, which, after fertilization, produce a new sporophyte. --71.119.131.184 (talk) 00:11, 2 September 2015 (UTC)[reply]
Oh, please. Humans can't reproduce on their own; they need sperm and eggs. This is something a first-year university student should have mastered, assuming he didn't do so in 9th grade high school, when the concept was first presented. μηδείς (talk) 21:01, 2 September 2015 (UTC)[reply]
I think the IP's point is that in humans (or really any animal), the sperm and ovum don't undergo mitosis by themselves. So the sperm and ovum aren't generally considered a separate generation. This compares to the situation in almost all multicellular plants, where the haploid spores do undergo mitosis to produce gametophytes and are generally considered separate generations from the sporophyte. (The article you linked to more or less says the same thing BTW.) Nil Einne (talk) 13:02, 3 September 2015 (UTC)[reply]
The classical definition of "life" is flakey in the extreme - and not one of the many, many efforts to define the word have met with universal acceptance. If the list is "growth, nutrition, movement, sensitivity, and reproduction" then consider fire...it grows, it consumes nutrients (wood, paper and other combustibles), it follows sources of nutrition and is sensitive to temperature, humidity and the oxygen content of the air, and can it can shoot out hot embers and start new fires (ie, reproduce) - it also excretes ashes, converts oxygen to CO2 and so forth. Is fire "alive"? Well, not according to biologists. The standard definition also fails in other serious ways...my 2 year old grandchild cannot reproduce (well, not yet, anyway) - and I know plenty of people who have had their reproductive apparatus disabled in one way or another and so cannot reproduce...are they no longer alive? Clearly that definition implies that "life" only applies to collective objects and not individuals - and arguably, not to parts of individuals. Is my hair "alive"? Are my fingernails "alive"? Same question with sperm.
In so many ways, trying to seek knowledge from the definitions of words is not teaching you anything except the meaning of those words. Knowing whether sperm fit the current definition of "life" tells you nothing whatever about sperm - only something about how the word is currently defined. This is just like when the formal definition of a "planet" changed and Pluto ceased to be a member of that club...does this change Pluto? Does it teach us anything new about it? Not really.
A very large fraction of the questions we get on the science reference desk are like that. So don't sweat it...sperm are what they are, and it doesn't matter a damn whether we label them "alive" or not.
SteveBaker (talk) 01:39, 2 September 2015 (UTC)[reply]
IOW, if "reproduction" is included in the definition of life, would one classify a post-menopausal 80 year old nulliparous woman as dead? 😀 - Nunh-huh 02:01, 2 September 2015 (UTC)[reply]
Depends. Do her pupils dilate? If so, she's good to go, but her children remain unborn and the only thing standing between her and the grave is far fewer heartbeats than a typical 60-year-old has left. So, I think it's fair to call her "done for", "fading away", "on her last legs" or "with her dying breath". InedibleHulk (talk) 02:29, 2 September 2015 (UTC)[reply]
Pining for the fjords, actually... - Nunh-huh 14:06, 2 September 2015 (UTC)[reply]
Then again, Susannah Mushatt Jones turned 80 and childless on July 6, 1979, and she's still alive as can be, aside from the blindness, deafness, lameness and muteness. She's so old that PokeMyBirthday.com doesn't work for her, but using 1999 as a guideline, it seems her father, Callie, filled her mother, Mary, with 500 million tiny organisms on August 30, 1898. On October 21, 1898, little Susie's heart pumps for the first time (as the Presbyterian Synod of Illinois closed forever and Captain Light went to jail). InedibleHulk (talk) 02:38, 2 September 2015 (UTC) [reply]
(Random rant, not an answer) Personally I am inclined to define life in terms of the complexity of homeostasis that is present in a system. I think instinctively, we think of something as alive if we bash at it and it avoids us, or heals, or dies but is replaced by progeny, but in any case tries to maintain itself as it was, often taking in food, water, air, sunlight etc. to do so. The more complicated the neural-like network of biochemical or other feedback mechanisms that exist, the more you think of something as alive. Wnt (talk) 22:11, 2 September 2015 (UTC)[reply]
Hmmm - but does fire fall under your homeostasis definition? Fire is a tough test case for most definitions - we don't think of it as being a life-form, yet it does fit with most definitions of life. I wonder whether the localized reversal of entropy in the vicinity of the organism is something that should figure into the definition - that seems to nicely exclude fire - but it also includes things like crystal growth, so it needs more caveats. SteveBaker (talk) 18:21, 3 September 2015 (UTC)[reply]
If you add one more criterion to the definition of life, you can exclude fire. Life is all the things you said before (it consumes, it grows, it reproduces, etc.) Of course, fire is all of those things. Here's where life is different. Life is antientropic. That is, living systems self-organize into states of lower entropy; they take higher-entropy material and give it more order. Fire, on the other hand, always increases entropy. Add a little thermodynamics to the mix, and it all becomes clearer. Of course, life is not the only self-organizing system possible, but when you couple it with the other criteria of life (nutrition, growth, reproduction, stimulus-response) it closes out fire nicely. --Jayron32 19:19, 3 September 2015 (UTC)[reply]
Fire isn't always homeostatic - a tossed cigarette either dies out or spreads to engulf a whole forest. But under the right circumstances, it can be somewhat self-regulating for some period of time, such as a candle flame. And indeed, in language we say that such a flame "dies" when it loses its homeostasis. I would say that this is a real kind of life, but a very low order of it. A "real" living thing has tens of thousands of biochemical reactions (broadly defined), all feeding back into one another to preserve the parameters under which they all exist. A single physical self-regulation based on the amount of paraffin that climbs a wick or something is not really much compared to that. Wnt (talk) 00:36, 4 September 2015 (UTC)[reply]

When athletes train explosive power


When athletes say they are training "explosive power", they are just training power, right? Or is there such a thing as "explosive power"? --YX-1000A (talk) 18:13, 1 September 2015 (UTC)[reply]

Well, there is longer-term power (endurance), useful for things like marathons. Being extremely musclebound isn't so useful there, but is for events like weightlifting. StuRat (talk) 18:49, 1 September 2015 (UTC)[reply]
Well, power is work done divided by the time taken. Clearly, the same person running a 100 meter dash in 10 seconds is more powerful than the same person running a marathon in 2.5 hours. A marathon is about 420 times the distance of a 100 meter dash, and 2.5 hours is 900 times longer than 10 seconds. Assuming the same mass, that means the marathoner does 420 times as much work, but takes proportionally more than twice as long to do it (900 ÷ 420 ≈ 2.1); meaning the marathoner is less than half as powerful as the sprinter. --Jayron32 20:12, 1 September 2015 (UTC)[reply]
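That arithmetic, written out under the (very rough) simplifying assumption that mechanical work scales linearly with distance covered:

```python
# Jayron32's comparison, assuming work done scales linearly with distance.
# (A very rough model: real running energetics are more complicated.)

sprint_distance_m, sprint_time_s = 100.0, 10.0
marathon_distance_m, marathon_time_s = 42195.0, 2.5 * 3600.0

distance_ratio = marathon_distance_m / sprint_distance_m  # ~422x more work
time_ratio = marathon_time_s / sprint_time_s              # ~900x more time

relative_power = distance_ratio / time_ratio
print(f"marathoner's average power is ~{relative_power:.2f}x the sprinter's")
# => roughly 0.47, i.e. less than half, as stated above
```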
That usage of "power" is as a technical term in physics. I doubt that athletes and trainers have it in mind at all. --65.94.50.17 (talk) 21:34, 1 September 2015 (UTC)[reply]
Each muscle contains a mix of different types of fibers, as discussed in Skeletal striated muscle#Type distribution. Fast-twitch fibers come into play in "explosive" movements, whereas slow-twitch fibers play a more important role in slow sustained movements. There is some evidence that different types of training can alter the proportions of fast-twitch and slow-twitch fibers; however the story is not all that clear. Looie496 (talk) 19:10, 1 September 2015 (UTC)[reply]

Black hole star


What are the possibilities of a 'black hole' star bursting, rather than following its star-life procedure, and rather than accreting extras from its surrounding? -- Space Ghost (talk) 18:26, 1 September 2015 (UTC)[reply]

Not quite sure what you are asking. Some possibilities:
1) If you refer to a current black hole, the only way we know for mass to leave from inside the event horizon is extremely slowly, due to "evaporation" from Hawking radiation (a rough estimate of just how slowly is sketched below). So, 0% chance of an explosion.
2) If you refer to a star with a mass large enough to one day become a black hole, then a supernova would occur first, blowing most of it away, but leaving a black hole at its core. So, the chances of there being an explosion are 100% or darn close to it. (Perhaps some type of near miss with another star could rob it of enough mass so it no longer explodes?). StuRat (talk) 18:45, 1 September 2015 (UTC)[reply]
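For a sense of just how slow Hawking evaporation is, here is the standard order-of-magnitude formula t ≈ 5120·π·G²·M³/(ħ·c⁴) applied to one solar mass, using textbook constants.

```python
# Order-of-magnitude Hawking evaporation time for a one-solar-mass black hole:
#     t ~ 5120 * pi * G**2 * M**3 / (hbar * c**4)
import math

G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.055e-34      # reduced Planck constant, J s
C = 2.998e8           # speed of light, m/s
SOLAR_MASS_KG = 1.989e30
SECONDS_PER_YEAR = 3.156e7

t_seconds = 5120 * math.pi * G**2 * SOLAR_MASS_KG**3 / (HBAR * C**4)
print(f"evaporation time: ~{t_seconds / SECONDS_PER_YEAR:.1e} years")
# => on the order of 1e67 years, vastly longer than the current age of the universe
```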
I'm trying to make sense of it in my head, i.e., what if the 'Big Bang' occurred from a black hole - and you clarified that it didn't.
1) If not, what colour would the primordial atom have been, since we know the 'four fundamental forces' were combined...? Note that all the fundamental forces are producible artificially...
2) Population III stars are the earliest we have, and the 'laws of physics' govern, we assume, most of the galaxies/universe; what if there were Population IV stars which had only the fundamental forces?
3) What are the possibilities for a star to burst without accreting its surroundings, at any stage of its lifecycle? What are the possibilities of a star being cracked at any stage of its lifecycle? The only thing I can think of is 'sunspots', but the word and what it does is irrelevant...
Space Ghost (talk) 20:15, 1 September 2015 (UTC)[reply]
Funny you mention that, because there is an actual, somewhat out-there hypothesis that posits our universe could be inside the event horizon of a black hole in another universe: see holographic principle. Again, this is getting into the really speculative physics. Anyway, for your other questions: "What color was the universe when it was a singularity?" is kind of a meaningless question, in my view. It's like asking "What's north of the North Pole?" Light as we know it probably didn't even exist, and certainly eyes and brains to perceive colors didn't. The thing about the Big Bang is our current theories of physics can't describe the universe before the electroweak epoch. In physics, "singularity" is just a fancy term that means "our math blows up here". So, any attempts to reason about what the universe was like before then are highly hypothetical. Maybe there's a multiverse, maybe the universe formed from expansions of branes; right now, we don't know. For your second question, the fundamental forces are what govern the universe right now. That's what "fundamental" means: unlike "composite" forces, like friction, drag, thrust, etc., they aren't reducible to other phenomena (below their unification energy, anyway). Finally, I'm not sure what exactly you mean by "burst" or "cracked". I take it you aren't a native speaker? If you're talking about novas and supernovas, core collapse supernovas don't involve any accretion of material from the star's surroundings. Massive stars tend to shed large amounts of material, rather than accumulate it. As far as "cracked" stars, the only thing I can think of are starquakes, which are really amazing things that happen on neutron stars because their insane gravity crams the star's matter into what is basically a solid object. Normal stars are not solid, but huge balls of gas and plasma, so physicists don't usually talk about stars "cracking" like a solid object might. Sunspots are simply cooler regions of a star's atmosphere that are formed by interactions in the star's magnetic field. The star doesn't "open up". The "spot" simply appears dimmer because it's cooler than its surroundings (but still quite hot!). --71.119.131.184 (talk) 22:35, 1 September 2015 (UTC)[reply]

I understand, thanks guys. -- Space Ghost (talk) 18:38, 2 September 2015 (UTC)[reply]

Some more thoughts regarding the color of the universe... Even after the point at which our current theories start working, the universe was opaque to light for around the first 380,000 years of its existence. It was so hot that the universe was filled with a plasma that absorbs light, much like the plasma that "regular" stars are composed of. There were plenty of photons, but they were constantly being absorbed and re-radiated. If you were able to time travel to then, I guess you would just see a bright white light. You'd also be instantly vaporized by the heat, but that's beside the point. Most of the "light" bouncing around at that time would have been in the gamma and X-ray range anyway. It's worth remembering that the visible light that we see is only a tiny part of the electromagnetic spectrum. Our eyes evolved to detect this portion because it doesn't interact strongly with Earth's atmosphere, but it does interact with most solid and liquid matter, so it's useful for detecting where that matter is. Longer and shorter wavelengths tend to pass through neutral matter (think radio waves, or X- and gamma rays), and some parts of the spectrum are strongly absorbed by the Earth's atmosphere or by water. See: visible spectrum, optical window.
Once the universe cooled enough, protons, neutrons, and electrons began binding to form neutral atoms, at which point the universe became transparent to photons. Those photons are still around, as the cosmic microwave background (CMB). So this radiation is the oldest "light" that still exists. In some sense you could say this was the "color" of the universe at that point, but when these photons were emitted, they were mostly in the visible and near-infrared range (the glow of a roughly 3,000 K plasma). Over billions of years, they've been redshifted by the expansion of the universe, and are now in the microwave spectrum, so you can't see them with your eyes. Nonetheless, some of them are hitting you whenever you're exposed to natural light. It might have occurred to you that this means at some point the CMB was in the visible spectrum. I'm not sure if it would have been visible to the eye at all during that time; a little cursory searching didn't turn up anything talking about it. It may not have been, since the CMB is very faint compared to starlight. As for the universe now, some scientists calculated the color of the universe today...turns out it's a beige that they named cosmic latte. I don't know about you, but I kind of wanted a more exciting color. --71.119.131.184 (talk) 20:42, 2 September 2015 (UTC)[reply]
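To put numbers on that redshift, here is a small Wien's-law calculation using the standard round figures of roughly 3,000 K at recombination and a stretch factor of about 1,100.

```python
# Wien's displacement law applied to the CMB, before and after redshift.
# Recombination temperature and redshift factor are standard round figures.

WIEN_B = 2.898e-3           # Wien's displacement constant, m*K
T_RECOMBINATION_K = 3000.0  # temperature when the CMB was released
T_TODAY_K = 2.725           # measured CMB temperature now
STRETCH_FACTOR = 1100       # approximate 1 + z since recombination

peak_then_nm = WIEN_B / T_RECOMBINATION_K * 1e9
peak_now_mm = WIEN_B / T_TODAY_K * 1e3

print(f"peak wavelength at emission : ~{peak_then_nm:.0f} nm (near-infrared/visible)")
print(f"peak wavelength today       : ~{peak_now_mm:.1f} mm (microwave)")
print(f"stretch factor              : ~{STRETCH_FACTOR}x")
```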
I am happy with your comments and I 'thank you' too for making me understand in the simplest/summarising manner; however, I believe I should still leave room for speculation.
In regard to the colour: 'good' is always contrasted with 'bad'/'evil', so I presumed 'light' is contrasted with 'darkness'; that's about it.
Can you make me understand two more things, in simple terms please? Other users are welcome too…
1) View the post stating 'Big Bang' – 'Protogalaxies' - 'Protostar' on this page [4].
2) View the post stating 'Stellar evolution' (coming soon).
I'll provide a link here anyway...
Space Ghost (talk) 19:41, 3 September 2015 (UTC)[reply]

Eye muscles


I just had my eyes checked and the doctor said that I need to exercise my eye muscles more. He pointed at some group of muscles near the lens but I didn't catch what their names were. If those muscles aren't exercised, the patient will suffer from nearsightedness. No medical advice and such please. I just want to know what those muscles are. --Lenticel (talk) 23:48, 1 September 2015 (UTC)[reply]

Check Extraocular muscles. ←Baseball Bugs What's up, Doc? carrots→ 00:02, 2 September 2015 (UTC)[reply]
Note that our article on myopia mentions eye exercises. One 'alternative medicine' eye exercise is the Bates method. Unfortunately, I don't see an article on the Chinese eye exercises. In any case, a better description of the method of exercise recommended might help to identify the specific muscles more accurately. Wnt (talk) 00:16, 2 September 2015 (UTC)[reply]
  • Are there exercises meant to deal with farsightedness? The eye exercise article is not very clear on which exercise is meant to deal with what problem (or I simply don't understand the jargon). μηδείς (talk) 01:01, 2 September 2015 (UTC)[reply]