Wikipedia:Reference desk/Archives/Science/2009 June 13
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
June 13
What is this bug?
In the past two days, I've seen two of these little guys crawling around. Because I was too terrified to get a picture, I've done this awful MSPaint approximation. They're about 3/4 of a cm long, and the spots look like little fuzzy bits, and are arranged in a rectangle, with the darker orange on the outside and the beige on the inside. There is no shell or wings or anything, and they're not shiny. I'm in Toronto, Canada. I've never seen anything like it in my life -- what are they? And should I be worried? --✶♏ݣ 01:23, 13 June 2009 (UTC)
- I don't know, but that's a great drawing, maybe you could sell it for $10 000 on eBay! ~~ Ropata (talk) 03:36, 13 June 2009 (UTC)
- Oh, Ropata, that one made me laugh! Thanks, Ouro (blah blah) 06:44, 13 June 2009 (UTC)
- That drawing and your description make me think these might be ladybird (ladybug) larvae. There are many species, of which the Harlequin ladybird is causing a little concern in the UK following its invasion from mainland Europe over the last few years. Here [1] is an image of a Harlequin ladybird larva. What I don't know is how common these beetles are in Canada. You have little need to be worried on a personal level as they eat aphids and other small creatures, unfortunately including the larvae of other ladybird species. Richard Avery (talk) 07:18, 13 June 2009 (UTC)
- Coccinella septempunctata is Canadian and has a pic of the larva in the article. 71.236.26.74 (talk) 07:28, 13 June 2009 (UTC)
- I second ladybug larvae. They are rather horrific looking but very peaceful (as long as you aren't an aphid). --98.217.14.211 (talk) 13:04, 13 June 2009 (UTC)
Yes, it looks like it's probably some kind of juvenile ladybug. Frightening -- I hate ladybugs. (And all bugs. But ladybugs hold a special fury for me due to their invasion of 2003, and the fact that they tend to land on people.) Thanks guys. I guess I should be on the lookout for another ladybug invasion this summer. --✶♏ݣ 23:39, 13 June 2009 (UTC)
Carnivoran trait
Why do dogs, cats and bears have ridges on the roofs of their mouths? --Lazar Taxon (talk) 01:43, 13 June 2009 (UTC)
- According to Answers.com,
- Those ridges are called rugae. They are meant to help break down the food you are chewing, as food is pressed up against them during mastication. —Preceding unsigned comment added by Ropata (talk • contribs) 06:20, 13 June 2009 (UTC)
- It's not only carnivores; rodents also have palatal rugae. Rockpocket 18:36, 13 June 2009 (UTC)
Walking dinosaurs
When I naively look at pictures of dinosaurs, I think I see a pattern. There seem to be two types of dinosaurs.
1. Dinosaurs that walk/move on two legs
2. Dinosaurs that walk/move on four legs.
It seems to my untrained mind that all the herbivore dinosaurs walk on four legs while all the carnivore dinosaurs walk on two legs.
This seems really, really odd to me: if you are a carnivore, then you need all the speed in the world to catch your prey. Therefore you must run faster by using all four of your legs; running on two legs is really stupid.
No carnivore today would run on two legs, because its prey would run away faster than it could give chase.
I approached this question and came up with three ideas. I want everyone to criticize my ideas and suggest new ones.
Three Basic Ideas.
1. Carnivore dinosaurs run on two legs because they have a long thick tail. Because they evolved long thick tails, they are unable to run on four legs, so they must run on two legs to balance their tails.
2. Carnivore dinosaurs are blind, therefore they run on two legs. As they are blind they must hunt with their nose (sense of smell), so their front legs must be short as they move around with their noses to the ground.
3. Carnivore dinosaurs only hunt in shallow streams. As they must move in shallow streams, the fastest way to move around is on two legs rather than on four legs. So they hide underwater, slowly move closer to their prey, and pop up at the last minute to run at their prey on two legs. This makes a lot of sense to me, as I have practised running on four legs in a shallow stream and it is very difficult.
122.107.207.98 (talk) 08:39, 13 June 2009 (UTC)
- First, I doubt that your hypothesis is right. Many birds (cladistically two-legged dinos) are herbivores. And why would you assume that a 4-legged animal is necessarily faster than a 2-legged one? Humans are excellent long-distance runners (well, trained humans ;-), ostriches can run at over 70 km/h. Take a look at some kangaroos occasionally. Not all carnivores run their prey down in a sprint - there are endurance hunters, scavengers, and lurkers as well. And of course, to catch prey it is enough to be faster than the slowest prey - in fact, if anything the prey would have an incentive to evolve for speed ("The fox is running for its next meal, but the hare..."). --Stephan Schulz (talk) 09:19, 13 June 2009 (UTC)
Running on two legs does not make sense because it actually increases the amount of air friction as the dinosaur chases its prey. If it ran on four legs, it would minimize the air friction, have a streamlined aerodynamic form, and get a better grip on the ground. 122.107.207.98 (talk) 11:23, 13 June 2009 (UTC)
- Both aerodynamics and traction are only a small part of the equation. In fact, among primarily walking/running animals, I'm hard-pressed to think of a two-legged slow mover - all the slow animals seem to have four legs. Walking on two legs requires constant balancing, and therefore a more active metabolism. --Stephan Schulz (talk) 12:04, 13 June 2009 (UTC)
- Two monks (the biped kind) were walking through a jungle when a fierce carnivorous dinosaur jumped out at them. One of the monks calmly changed his sandals for a pair of Nike trainers that he happened to have. His companion said "Brother, what is the use of running shoes? Surely you realize that you cannot run faster than the fierce carnivorous dinosaur!" The first monk replied "I know that. I just have to run faster than you." Cuddlyable3 (talk) 11:28, 13 June 2009 (UTC)
- Still laughing...still laughing...still laughing. Vimescarrot (talk) 15:19, 13 June 2009 (UTC)
- Air resistance is pretty negligible at the 10 to 20 mph these guys might have been able to reach - so that's irrelevant. There are super-fast 2-legged animals (ostriches, certainly) and super-fast 4-legged ones (cheetahs, for example). There are also 2-legged animals with massive endurance (humans possibly have the longest walking endurance of any land animal) and 4-legged ones that are almost as good (horses, for example). I don't think speed enters into the equation here - and as others have pointed out, you can be a carnivore and only eat things that are already dead!
- I suspect the reason for a two-legged gait being better is that your head can be up higher while still having a strong neck, massive jaws and teeth. Getting your head up higher allows you to see further and pick up scent from the air better - that's great if you're hunting live animals, or if you're hoping to find that Stegosaurus that died of old age earlier today. But if you are the brutally-killing kind of carnivore then you need to get up on top of your prey to sink teeth in and use your weight to push it to the ground - and for that, it helps to have your principal weapons and body weight held as high up as possible. If you look at the way quadruped carnivores like lions and tigers work in modern times - they jump on top of their prey and sink their teeth into the back of the neck. They can do that with four legs because their prey animals are pretty small - and at those small sizes, jumping is a definite possibility. Something weighing a couple of hundred pounds can jump fairly high - but a 7 ton T-rex isn't remotely going to be able to do that! It's an odd fact that almost all animals from the flea to the elephant can only jump about a foot into the air from a standing start (see the rough scaling sketch below). Getting a foot higher is useful if you're a lion taking down a gazelle - but if you're a 30' tall carnivore trying to take down one of those gigantic herbivores and you can only jump 1 foot - then it's pretty much useless. Hence if you need all the height you can get, a two-legged gait makes a lot of sense.
- Someone should also explain about bird-hipped dinosaurs versus the lizard-hipped kind. I forget the details - but I bet that matters.
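As an aside on the jump-height point above, here is a crude scaling sketch in Python; the muscle fraction and work-per-kilogram figures in it are assumptions chosen only to show how body mass cancels out of the estimate, not measured values for any real animal.

```python
# A crude scaling sketch (all figures assumed, purely illustrative) of why a
# standing jump tops out at roughly the same height regardless of body size:
# available muscle work grows with body mass, but so does the weight to lift.
g = 9.8                    # m/s^2
muscle_fraction = 0.10     # assumed fraction of body mass that is jumping muscle
work_per_kg_muscle = 30.0  # assumed J of work per kg of muscle in one push-off

for mass in (0.02, 2.0, 200.0, 7000.0):  # kg: mouse-ish up to T. rex-ish
    energy = muscle_fraction * work_per_kg_muscle * mass  # scales with mass...
    height = energy / (mass * g)                          # ...so mass cancels here
    print(f"{mass:8.2f} kg -> roughly {height:.2f} m")
# Every size comes out near ~0.3 m (about a foot); real animals deviate, but
# the point is that being huge buys no extra standing-jump height.
```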
- As for the two different types of dinosaurs, see ornithischian (bird-hipped) and saurischian (lizard-hipped). To generalise, ornithischian dinosaurs were usually four-legged and herbivorous, while saurischians included the sauropods (long-necked four-legged dinosaurs, usually herbivorous, some omnivorous) and the theropods (two-legged, carnivorous or omnivorous). The late-Cretaceous dromaeosaurid (theropod) dinosaurs included the ones that developed the most "advanced" brains, and are rather similar to today's birds. ~AH1(TCU) 22:26, 13 June 2009 (UTC)
The way I see it is they evolved to be two-legged to sharpen their hunting technique. What is the key thing dinos do in the movies? They jump really high and pin their prey beneath them. Four-leggers, on the other hand, rarely hunt this way. In fact the more efficient hunters tend to hide in the bushes and make extremely long-range jumps, rather than the high jumps used by dinos. Could you imagine anything that walks on four legs making this kind of jump? Drew Smith What I've done 09:19, 14 June 2009 (UTC)
- The OP is incorrect to claim that "if you are a carnivore then you need all the speed in the world to catch your prey", because for success a predator needs other characteristics that may be incompatible with having maximum speed. Some useful abilities for a predator are manoeuvrability, having size and colouration that are appropriate for lying in ambush or stalking close to prey, finely tuned senses of sight and smell, a cooperative pack strategy, a combination of muscles + teeth + claws sufficient to triumph against the prey, and the sense not to take on losing battles. Mating between predators happens only after some delicate negotiation. Cuddlyable3 (talk) 12:03, 14 June 2009 (UTC)
How hot is boiling water?
I was watching QI last night when Hugh Fearnley-Whittingstall mentioned that water on a "rolling-boil" was much hotter than water that was simmering. I have never really understood the difference between boiling and simmering, because in both cases the water is turning into vapour and therefore presumably reaching 100C, and I would not have expected that it would go over 100C (at least not by very much). Now I am wondering whether, on a rolling boil, so much heat is being put into the bottom of the pan that the water actually is getting much hotter than 100C (as HFW states) before it gets a chance to vapourise. I read Boiling_water#Boiling_in_cookery but it does not seem to give a definite answer. Can anyone help clarify what actually happens, please? Frank Bruno's Laugh (talk) 09:21, 13 June 2009 (UTC)
- Were you not, in fact, watching Have I Got News For You? 80.41.126.158 (talk) 11:53, 13 June 2009 (UTC)
- Yes, actually you're right, I was! I watched both last night and I guess I merged the two together. This is what happens when you waste your life watching panel shows! :) Frank Bruno's Laugh (talk) 12:03, 13 June 2009 (UTC)
- Pressure at the bottom of an open saucepan of water is higher than atmospheric due to the weight of water above. This raises the boiling temperature slightly. With heat applied from below there is continual vaporisation of water at the base from which bubbles of vapour rise and are replaced by new water. This process absorbs considerable heat energy due to the latent heat of vapourisation that water requires. Superheated steam could be generated only by such a huge inflow of heat that vapour is generated faster than can be replaced by liquid. Cuddlyable3 (talk) 11:14, 13 June 2009 (UTC)
- Thanks. Based upon my scuba experience, I believe that, very roughly, 10 m of water exerts the same pressure as 1 atmosphere. Assuming that the water in the pan is maybe 10 cm deep, that would seem to indicate only a 1% rise in pressure at the very bottom of the pan. In comparison, according to Pressure cooker, raising the pressure to 2 atmospheres would lead to a boiling point of 122C, so I am guessing that the depth of water will have an incredibly small impact on the potential water temperature. Is that correct? I had a quick read of Latent heat of vaporization, but I'm not sure I really understand. It seems, as you say, that a lot of energy is required to actually vapourise the water (5 times more than is required to raise it from 0 to 100C?), but how does that actually affect the temperature of the water (and of the vapour that is formed), and how does that transfer to the food? I am assuming, probably incorrectly, that the food absorbs heat primarily from the water, and that the water itself is still around 100C. Is the vapour much hotter, and does that heat also transfer to the food? Thanks again for your help. Frank Bruno's Laugh (talk) 11:30, 13 June 2009 (UTC)
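To put a rough number on the depth question above, here is a minimal Python sketch using the Clausius-Clapeyron slope near 100 °C with approximate textbook constants; it is an idealised estimate, not an exact treatment.

```python
# A rough sketch (textbook constants, idealised) checking the estimate above:
# how much does ~10 cm of water overhead raise the boiling point at the
# bottom of the pan?  Uses the Clausius-Clapeyron slope near 100 degrees C.
R = 8.314        # J/(mol K), gas constant
M = 0.018        # kg/mol, molar mass of water
L = 2.26e6       # J/kg, latent heat of vaporisation (approx.)
P0 = 101325.0    # Pa, 1 atm
T0 = 373.15      # K, boiling point at 1 atm

depth = 0.10                        # m of water in the pan
dP = 1000.0 * 9.8 * depth           # extra hydrostatic pressure, ~980 Pa (~1%)

dP_dT = L * M * P0 / (R * T0 ** 2)  # Clausius-Clapeyron slope, Pa per K
dT = dP / dP_dT
print(f"extra pressure {dP:.0f} Pa -> boiling point rise of about {dT:.2f} K")
# Comes out around 0.3 K, so pan depth really is a tiny effect, consistent
# with the ~1% pressure estimate and far smaller than a pressure cooker's.
```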
- When you keep heating water from, say, 25 °C, the temperature rises roughly uniformly, approximately given by the equation
Heat supplied = constant × (change in temperature).
This goes on till you reach the boiling point of water, which at atmospheric pressure is 100 °C. When you heat the water further, the temperature does NOT rise. It remains at 100 °C, and the heat is absorbed in converting the water into steam. Only when all the water has been converted to steam does the temperature rise again. The extra heat supplied at a constant 100 °C to turn liquid water into steam is called the latent heat of vaporization. I just thought I would explain this; I'm not sure about the other parts of the question. I presume the entire body of water would be at a constant 100 °C. Rkr1991 (talk) 12:34, 13 June 2009 (UTC)
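A quick numeric sketch of that heating curve, using standard textbook constants for water, shows how lopsided the two stages are; the 25 °C starting point is just the illustrative value used above.

```python
# A minimal numeric check (standard textbook constants) of the point above:
# bringing water to the boil takes much less energy than boiling it away.
c = 4186.0   # J/(kg K), specific heat of liquid water
L = 2.26e6   # J/kg, latent heat of vaporisation
m = 1.0      # kg of water

heat_to_reach_boil = m * c * (100 - 25)  # warm 1 kg from 25 to 100 degrees C
heat_to_boil_away = m * L                # turn that 1 kg of 100 C water to steam

print(f"25 -> 100 C : {heat_to_reach_boil / 1000:.0f} kJ")   # ~314 kJ
print(f"boil it away: {heat_to_boil_away / 1000:.0f} kJ")    # ~2260 kJ
print(f"ratio       : {heat_to_boil_away / heat_to_reach_boil:.1f}x")
# The temperature sits at ~100 C for that whole second stage, which is why a
# rolling boil is not meaningfully "hotter" than a gentle one.
```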
- (ec) No, it is credible, not incredible, that the depth of water has a small impact on the boiling temperature. Absorbing the latent heat of vaporisation does not raise the water temperature; it just changes the water's state from liquid to vapour. As long as the vapour bubbles disconnect from the heat source (the base) as fast as they are formed, they are not hotter than the boiling temperature. Since the thermal capacity of liquid water is higher than that of the expanded vapour, you are correct to assume the food absorbs heat primarily from the liquid (it's all called water). Cuddlyable3 (talk) 12:41, 13 June 2009 (UTC)
- The variations in air pressure due to weather (pressure, humidity, etc), and your current altitude, probably affect the boiling temperature of water much more than the 10 cm of water pressure. Impurities in the water (such as trace quantities of salts and carbonates) also probably affect the boiling temperature much more than the 10 cm of water pressure. You can easily probe the water with a thermometer to find out what temperature it is when various levels of boiling activity become prominent. I know that my water is not receiving heat uniformly, because I can see nucleation sites forming preferentially tracking the shape of my electric stove's heater coil (even if I slosh the water around or move the pan) - so even this heat conduction effect seems to be more important than the static water pressure. In truth, the thermodynamics of a "simple pot of water" are very complicated - a huge number of competing phenomena, including water convection, cohesion, heat conduction, heat radiation, phase change and latent heat of vaporization, bubble formation and fluid interaction, etc. All of these require elaborate statistics and calculus (fluid mechanics) to solve "exactly" (if such a thing can be solved exactly). The true temperature of the water when boiling is approximately 100 degrees Celsius, but at best this is a macroscopic average over the entire pot of water. Nimur (talk) 15:30, 13 June 2009 (UTC)
- Incidentally, that's a "roiling boil" (i.e., boiling sufficiently to roil the water), not a "rolling bowl". -Arch dude (talk) 15:00, 13 June 2009 (UTC)
- Actually I think that both are correct [2], not sure if usage varies by location. In the video that I linked at the bottom of this section, HFW does seem to be saying "rolling" (not sure if you can view the video), so maybe "rolling" is more prevalent in the UK. Frank Bruno's Laugh (talk) 11:06, 14 June 2009 (UTC)
- Although the first time I mentioned it in my opening question you are right that I did say "bowl"! :) Luckily I used the correct spelling later in the question, so hopefully I don't look like a complete dunce! :) Frank Bruno's Laugh (talk) 11:09, 14 June 2009 (UTC)
- As most writers above seem to agree, a rolling boil is within a gnat's whisker of 100 °C. WP says that simmering starts somewhere between 82 and 94 °C. So on average HFW appears to be right. --Heron (talk) 17:19, 13 June 2009 (UTC)
- At normal pressures - the water can't get above roughly 100 degC without turning to steam - so the energy supplied by the stove top can't increase the temperature any further without first boiling all of the water away. I suppose any steam bubbles sticking to the bottom of the pan could get a little hotter than 100 degC before they floated up to the surface - but not by much because the surface tension of the water will tend to keep at least a thin layer of water between the pan and the bubble.
- But I do know that in a water-cooled car engine, if there is no detergent in the water, bubbles of steam will form in some of the narrower passageways through the engine block and the temperature of the metal around them can spike quite high. That's why it's undesirable to fill your radiator with pure water without either "water-wetting-agents" or antifreeze. That's because the bubbles get trapped inside those narrow tubes and can't easily rise to the surface as they would in a pan of water on the stove.
- What I suspect is being implied in the original post here is not that the water is literally hotter when it's boiling - but that things in the pan get cooked faster by 100 degC water that's boiling than by 99.999 degC water that's just a fraction short of boiling. That is certainly a true statement - but it's about heat energy - not temperature. The reason is that when a steam bubble hits whatever is being cooked - which is still at less than 100 degC (presumably) - then the steam will rapidly condense back into water. When it does that - the steam gives up its "latent heat of condensation" and a LOT of energy is transferred to your food as steam at 100.00001 C condenses into water at 99.99999 C. Transferring energy into the food cooks it quickly - even though the water keeps the temperature at a solid 100 degrees. (A rough numerical comparison is sketched below.)
- People who get almost-boiling water spilled on themselves don't suffer too much injury - but it doesn't take much steam at almost the exact same temperature to cause severe burns.
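To put rough numbers on the condensation argument above, here is a minimal per-gram comparison using textbook constants; the 37 °C target and 99 °C starting point are just illustrative choices.

```python
# A rough per-gram comparison (textbook constants) of the energy handed over
# to ~37 C skin or food by condensing steam versus merely very hot water.
c = 4.186    # J/(g K), specific heat of liquid water
L = 2260.0   # J/g, latent heat of condensation

water_99C = c * (99 - 37)          # 1 g of 99 C water cooling to 37 C
steam_100C = L + c * (100 - 37)    # 1 g of steam condensing, then cooling to 37 C

print(f"1 g of 99 C water : {water_99C:.0f} J")    # ~260 J
print(f"1 g of 100 C steam: {steam_100C:.0f} J")   # ~2500 J, roughly 10x more
# Same temperature, vastly more energy delivered - which is why steam scalds
# so much worse and why condensing bubbles cook food efficiently.
```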
- Whoa! It doesn't take much boiling water either. Where I live, "they" now advise people that water heaters should be set no higher than 50°C, on the grounds that 60°C is hot enough to be a significant safety risk to a person who doesn't react fast enough. I'm willing to take that risk, but that's not the point: an equivalent mass of steam may be worse than 100°C water, but the water can be bad enough. --Anon, 20:40 UTC, June 13, 2009.
- As Heron pointed out, not all the water in a simmering pot is boiling - this is connected to what Nimur discusses further up. So the overall temperature of the water is cooler than in a pot at a rolling boil - where just about all the water is boiling. This isn't a matter of 0.00001°C, as discussed in the article Heron linked to. 80.41.126.158 (talk) 19:03, 13 June 2009 (UTC)
Thanks for all of the above. I'm tempted to go and buy a thermometer now and check the temperature at different stages of boiling (still confused, because I always consider "simmering" in terms of "bring to the boil, then simmer", suggesting that the water is boiling - but I guess there are different stages of simmering too). BTW the original comments from HFW can be found here @ 27:20 (for the next 7 days and, I think, only for people on a UK IP address). Frank Bruno's Laugh (talk) 10:56, 14 June 2009 (UTC)
Suicide Question
Hello, I'm not seeking medical advice, just a curiosity: what's the best method of committing suicide if you are locked in a room with absolutely no tools (only your body)? (Not my case - if I'm writing this I have electricity and I could finish myself off that way.) DST
- It's good that you don't seek medical advice because Wikipedia won't give it. The best method is not to entertain a morbid curiosity. Cuddlyable3 (talk) 11:20, 13 June 2009 (UTC)
It's a scientific question, not morbid curiosity; I stated that before. —Preceding unsigned comment added by DSTiamat (talk • contribs) 11:26, 13 June 2009 (UTC)
- There are plenty of ways to kill yourself under those circumstances. I'm not sure what you mean by "best". The most efficient? The least painful? Both are hard to gauge. I'm going to assume that simply dying of starvation and dehydration wouldn't be an option, for whatever reason -- so that implies that there's a kind of a time limit in which you need to get this done. In any case, under those circumstances, it's definitely going to get really ugly, and it's going to take some serious, serious will to die. I'm pretty sure most of us couldn't do it, because it's just too goddamn hard.
- But there are options. If you have clothes, you could hang yourself or try to choke yourself with them -- but perhaps you're naked, what with the "absolutely no tools" parameter in place, and unfortunately, that's about as easy as it gets. You could chew through the arteries in your arms and bleed to death. You could bash your head against the wall or against a sharp corner if there's some sturdy furniture around. You could try to throw yourself down and land on your head so you break your neck -- again, probably easier if you had some furniture around, so you could jump off it. You could probably break your own neck with your hands. You could crush your own larynx and choke to death. You could simply punch yourself over and over again until you do some real damage.
- I mean, as long as your body still obeys your commands, there are things you can do to damage yourself very badly. But none of these methods are easy, and all of this would be extremely difficult to do, not only because of the physical challenges involved but because this kind of stuff really goes against the grain: the instinct for self-preservation is going to make the kind of extended effort this kind of thing requires very, very difficult. That's why people who commit suicide generally do it in a way that's either fairly peaceful or fairly quick -- if all you need to do is jerk your finger or jump once, that's relatively easy. Repeatedly smashing your fist into your own body as hard as you can in the hopes of rupturing some internal organs or cracking a rib so it'll puncture a lung -- for example -- is a different story.
- But even if you were to attempt something like this, it's hard to guarantee success; if you hesitate at the wrong moment, or don't have the willpower to go through with it, or just make a mistake, you could just incapacitate yourself instead of dying, and take too long to die. It's not a problem if no one is coming to stop you, of course, but if that was the case, you could just stop eating and drinking and let dehydration take care of it.
- So what's the best method? I don't know. Personally, I wouldn't recommend any of them, because I'm not that kind of crazy, but rest assured that if you're locked away in that cell and 24 hours from now the alien invaders are going to march in and drag you to the Brain Probe to find the secrets of Earth's defense network from the deepest recesses of your mind, there are things you can try and, if you have the willpower and a basic understanding of what parts of your body are vital, and you're a little lucky, you can do it. Probably. -- Captain Disdain (talk) 12:11, 13 June 2009 (UTC)
- We have an article called Guantanamo suicide attempts, which details the successful and unsuccessful suicide methods of several prisoners locked in a room with no meaningful tools. Many of them attempted to strangle or hang themselves with their own clothing. Many managed to fashion a knife or sharp object. Many have undetermined or unreported methods of suicide. Nimur (talk) 15:18, 13 June 2009 (UTC)
- Just wait for a week or so and you will die from dehydration. You can speed this up by vigorous exercise, though you won't be in the mood for dancing for very long.--Shantavira|feed me 16:29, 13 June 2009 (UTC)
- If you'd like to volunteer for a Darwin Award you could try to verify the urban myth that you can obstruct your airways with your tongue, commonly referred to as "swallowing one's tongue". There are retching reflexes that should prevent that, but there's suspicion that running out of air might possibly interfere with that function. Another unreliable method would be to try to create a blockage in the superficial temporal artery and hope for that to disrupt the major carotid arteries. 71.236.26.74 (talk) 21:07, 13 June 2009 (UTC)
- Hold your breath. Pinching your nose may help. In the meanwhile, if you change your mind, stop the procedure and start breathing again. --pma (talk) 22:51, 14 June 2009 (UTC)
- I'm not sure if you're joking here, but I'm nearly certain you will pass out and begin breathing again if you try to commit suicide by holding your breath. Diogenes of Sinope, a Greek philosopher, is famously said to have killed himself by holding his breath, but the claims are dubious. —Pie4all88 T C 21:36, 18 June 2009 (UTC)
- In any kind of suicide, you would want a technique that guarantees that you won't be left alive but brain-damaged or mutilated. I would guess that in the conditions described, an effective way may be to bite and chew open the blood vessels in your wrists. 78.147.253.196 (talk) 10:37, 17 June 2009 (UTC)
Cancer rates in mammal species including humans
Are the rates across different species similar, or do they differ? Do humans have the greatest rates of cancer? 89.243.85.18 (talk) 11:16, 13 June 2009 (UTC)
- See the section on whale cancer above. They seem to differ, and differ significantly. --Stephan Schulz (talk) 11:23, 13 June 2009 (UTC)
- I was hoping for more information than just for whales and cows. 78.151.102.179 (talk) 12:25, 14 June 2009 (UTC)
- It was 1/3 for humans in Ireland a few years ago. Not deaths, just 'adults that will have cancer at some time in their life'. No link sorry. ~ R.T.G 12:12, 13 June 2009 (UTC)
- I'm afraid I couldn't find any reliable figures even for cats and dogs where you'd have thought the statistics would be best and you can be sold insurance and treatment. Dmcq (talk) 12:34, 14 June 2009 (UTC)
I get the impression that it's way higher for humans. Perhaps the chemicals formed during high-temperature cooking (not boiling) are to blame. 78.147.253.196 (talk) 10:33, 17 June 2009 (UTC)
- I've heard statements like this from a number of people, and historically this topic is rife with speculation and conspiracy theories. Consider this though: the risk of cancer in any organism increases as that organism ages because errors and mutations build up over time, as much due to internal reasons (oxidative damage or copy errors during mitosis, for example) as external ones (environmental carcinogens), and since fewer humans die of disease or predation than most other animals, it's often cancer that does us in. Further, since more and more conditions are treatable and/or preventable now than, say, in our grandparents' time, cancer is more and more often the cause of death rather than things like chronic obstructive pulmonary disease, diabetes, or measles. For that reason, it's tempting to think that the rate of cancer is increasing, when it's much closer to the truth to say that deaths from other causes are generally decreasing. Thus, the observed increase in cancer rates is more a reflection of the overall increase in the average human life expectancy than the fact that we like to cook our meat. (Caveat: don't take this to mean that environmental carcinogens don't do anything, they just aren't quite the bugbear that people often make them out to be). – ClockworkSoul 16:44, 19 June 2009 (UTC)
Air hunger / winded when struck in lungs
Sometimes when I get struck in the lungs, such as being punched in the lungs, I get winded/air hunger. I understand that this only happens when there is too much carbon dioxide. How does getting punched trigger too much CO2? Is it because the lung is contracted, so the pressure increases and the body thinks it detects a high level of carbon dioxide? Does anyone know at roughly what level above the normal atmospheric concentration air hunger typically triggers? --58.111.132.138 (talk) 11:45, 13 June 2009 (UTC)
- The Wikipedia article on air hunger defines it as the sensation or urge to breathe, and says that it is usually from a build-up of carbon dioxide, but that doesn't sound like what you are describing. We humans have three mechanisms that trigger breathing. One is based on stretch receptors that sense how often we expand our chest. This is the mechanism that most people must fight against to hold their breath, and it is often neglected in favour of citing a second mechanism, in which breathing frequency is controlled by the amount of carbon dioxide in the blood. This second mechanism is important for long periods of breath holding and for unconscious people (as in pearl divers and ICU patients, respectively), and controls our overall breathing rate more than it does each individual breath. It also keeps us breathing when we sleep. The third, and by far the weakest, mechanism is based on the amount of oxygen in the blood. This so-called "anoxic drive" is really only important in that it is something to be avoided in patients who get oxygen therapy for long periods of time and at high concentrations. The air hunger that you describe sounds like the first one, and likely has nothing to do with carbon dioxide. When one is struck in the stomach or chest, the muscle that controls most of our breathing, called the diaphragm, can be suddenly stretched in a way that causes it to spasm or be temporarily paralyzed. This is often called "losing one's wind" or "having the wind knocked out of oneself", and is similar in cause if not duration to a hiccup. When this happens, we must use our far less effective chest wall muscles to exchange air for the short duration of the spasm, and the inability to breathe deeply enough to stretch those stretch receptors, in addition to being just plain scary, leads to the distinctly uncomfortable "air hunger" that I believe you are describing. Tuckerekcut (talk) 14:05, 13 June 2009 (UTC)
- Wikipedia really does have an article on everything, including getting the wind knocked out of you. Maybe we should redirect/rename that article to something more scientific. It is a common name for a mild (or severe) injury to the thoracic diaphragm, the muscle responsible for inflating and deflating the thoracic cavity (and the lungs, by proxy). When injured, this muscle may temporarily spasm or experience paralysis. Naturally, the severity of this injury ranges from mild to life-threatening. We can not give medical advice or diagnose your injuries, so if you are worried about repeated trauma, you should consult your medical doctor. Nimur (talk) 15:22, 13 June 2009 (UTC)
What is the nature of the big bang?
There is a theory that for the Big Bang to make sense in a universe, it would have to be equal parts implosion (sorry, define that as equal parts shrinking internally), both internal and external elasticity, accounting for all angles of everything. Supposedly that would be more difficult to prove/disprove than the bang itself. Is that a preposterous thing to consider? ~ R.T.G 18:21, 13 June 2009 (UTC)
- No, not preposterous, but too vague. Dauto (talk) 19:36, 13 June 2009 (UTC)
- For every action there is an equal and opposite reaction, for all matter there is, possibly, an anti-matter. If the bang/fusion were so all-encompassing, it could be very precise. That would give it a definite center. That could make it a skin or a layer. Perhaps there is a light which has been receding? ~ R.T.G 21:08, 13 June 2009 (UTC)
- You still don't get it, do you? Just making a long string of fanciful scientific-sounding words does not a meaningful statement make. Just answer me the following question: how are the law of action and reaction and the existence of antimatter related to your vague idea of an expanding shell around a shrinking center? Be specific and use logic. Dauto (talk) 21:23, 13 June 2009 (UTC)
RTG, your recent questions at this refdesk resemble Chomsky's "Colorless green ideas sleep furiously", in that the string of words can be interpreted to have some metaphorical meaning in some technical sense, but is in plain-speak nonsensical. I would echo the advice you have been previously given that you will be better off spending your time reading some basic high-school physics text, instead of wasting it on idle and pseudo-scientific speculation. You can find a list of useful physics textbooks here. If you don't wish to borrow or buy one of those books, you can read some of the free texts listed here or even browse Wikipedia - although personally I would recommend against such a slapdash approach. Hope this helps. Abecedare (talk) 21:29, 13 June 2009 (UTC)
- I am suggesting that there would need to be something in that center. We have anti-everything else in theory and practice so where is the anti-light? ~ R.T.G 21:38, 13 June 2009 (UTC)
- I still can't tell exactly what you're talking about in regards to a centre. I also cannot tell why you're saying there has to be something there, nor what that has to do with light. That seems to all be nonsense. As for "anti-light", you may be confused as to what an antiparticle is. An antiparticle has the opposite electric charge to its non-antiparticle companion. Photons have no electric charge, so they have no anti. -- Consumed Crustacean (talk) 21:48, 13 June 2009 (UTC)
- I concur that RTG isn't clear on antiparticles, but your definition doesn't hold, either. Antineutrons have been known for 50 years and have no charge (just like their standard counterparts). We don't expect "anti-light" because light isn't composed of hadrons, of which antiparticles are a subset. — Lomn 23:43, 13 June 2009 (UTC)
- Yeah, that's true enough. Antineutrons aren't elementary though; the antiquarks composing the antineutrons do have electric (and color) charge, which is flipped. -- Consumed Crustacean (talk) 23:53, 13 June 2009 (UTC)
- (after ec) Surely you meant fermions (and their collections) and not hadrons! An electron is not a hadron but has an anti-particle (the positron); ditto for other leptons and quarks.
You are right though that gauge bosons such as photons do not have an anti-particle in the standard model. Abecedare (talk) 00:01, 14 June 2009 (UTC) (struck out misleading statement. Abecedare (talk) 06:36, 14 June 2009 (UTC))
- That's not right either, on two accounts. First, some elementary fermions are neutral - we call them neutrinos - and they may turn out to be their own anti-particles or not. We don't know that yet. Second, some gauge bosons have anti-particles which are distinct from themselves. The anti-W+ is the W-, for instance. Even some chargeless bosons may not be their own antiparticles, such as some of the gluons. The particle-antiparticle thing is a symmetry of nature and some particles simply are their own "reflection" through that symmetry. A good analogy is given by looking at gloves and socks in a mirror. The right-hand glove turns into the left-hand glove in a mirror and vice versa. The right-foot sock and left-foot sock are identical. Socks are their own antiparticles. Gloves are not. Dauto (talk) 01:19, 14 June 2009 (UTC)
- Fun fact: RTG first wrote "insulting" before he changed it to "preposterous".
- RTG you're just mentioning a bunch of concepts and stringing them into sentences with no meaning and then asking us for debate.
- Do you have any reference questions for the reference desk? APL (talk) 21:45, 13 June 2009 (UTC)
- Big Bang provides a good overview. It's not really within the remit of a Reference desk to consider the merits or otherwise of theories. We just provide references to such theories. If one is lacking, then there is little we can do to assist you. Is there a specific subject or reference you are interested in? Rockpocket
- I don't think RTG is trolling, floating ideas to prod debate, or asking bad-faith questions. I think he has merely never studied or read a book about the subject and has many incorrect impressions, is all. RTG, I recommend A Brief History of Time, by Stephen Hawking — the book, not the documentary. If you can stick with it then I recommend taking some related courses at your local community college. Tempshill (talk) 05:06, 14 June 2009 (UTC)
- APL and Pocketrocket are right. I thought some of you were being awfully rude but this is not a study group or a discussion center. I am just saying stuff like "What if that line is curved?" I have read some Stephen Hawking, Tempshill, and it is very good, but to go to college with that sort of thing is a matter of mathematics, not just idle curiosity. I am just making personal amusement here, please forget about it. ~ R.T.G 13:02, 14 June 2009 (UTC)
Running a Race
Is it a better idea to run the day before a 5k race or is it a better idea to rest the day before a 5k race? --99.146.124.35 (talk) 19:44, 13 June 2009 (UTC)
- A 5 kilometer race is not an extremely strenuous run, but the answer depends entirely on how you have been training. An optimal training regimen will vary based on a lot of things, like your age, weight, and diet (not to mention your health and medical conditions - consult a doctor, etc). Even if you are training for strenuous run conditioning, it's often best to run no more than every other day. This gives your muscles time to recuperate and generally gives better performance (endurance and sprint capability). Our article about running may help give some ideas about run conditioning training. Nimur (talk) 21:26, 13 June 2009 (UTC)
- The article supercompensation describes the theory behind timing one's training exercise to maximise fitness "on the big day". It also cautions that there is no one right period for the effects described. Therefore an answer to the OP's question should not be given by Wikipedia. The supercompensation theory may be useful only if the phases described have been actually observed and timed in an individual, ideally by a skilled sports coach. Cuddlyable3 (talk) 21:33, 13 June 2009 (UTC)
- I think the standard answer is that the day before a race, it's best to run just enough to keep yourself loose. It matters less for a 5K than for longer distances, but it will still help. Looie496 (talk) 02:16, 14 June 2009 (UTC)
about death
Will we be able to overcome death? Can we find out the answer? Here I'm not talking about the death of the universe with the change in entropy. —Preceding unsigned comment added by 119.154.10.21 (talk) 21:53, 13 June 2009 (UTC)
- Have you read death? As far as I know when we die we are no more, just like before we were born. Other people believe all sorts of things like that one has an immortal spirit. Even if the universe was eternal I see no way a person could be immortal and have a meaningful existence. Either they could accumulate more and more experiences without any bound or they would stay their original limited selves and forever in effect keep repeating the same old things and not develop. If they develop without bound they would not be the original person. I suppose it would be reassuring though not to have death looming as an immediate anxiety. Dmcq (talk) 23:35, 13 June 2009 (UTC)
- Personally I'm a bit sad about the fact that one day I must die, but the thought that everybody must die as well, greatly cheers me up. Up to now, natural death is the safer way we have to get rid of the worst people around. --pma (talk) 23:43, 13 June 2009 (UTC)
- Much more serious (but possibly exceeding the horizon of the science desk) is the question: "Will we be able to overcome birth?" Other people, as implied by user:Dmcq above, speculate idly that there may be life after birth, yet evidence from any prenatal curly position is not conclusive. --Cookatoo.ergo.ZooM (talk) 23:59, 13 June 2009 (UTC)
- Now that, Cookatoo, if I may be allowed the intimacy of your first name, is funny! // BL \\ (talk) 00:04, 14 June 2009 (UTC)
- I wonder if, by "overcome", the OP is referring to preventing or indefinitely postponing death, rather than surviving it in some form? If so, the work of the Methuselah Foundation may be of some interest to you. The reason we die - assuming disease or misfortune doesn't get us first - is due to the cellular and physiological effects of aging. If you can overcome aging, then you can overcome death (theoretically; in reality you would also have to overcome cancer, or else that would get you eventually). Outline of life extension provides some strategies towards this goal.
- The good thing about death is that it provides the ultimate I told you so moment of vindication, for either those who do, or do not, believe in an afterlife. The bad thing is that there is no-one of the other persuasion around to gloat over. Rockpocket 00:32, 14 June 2009 (UTC)
- Ray Kurzweil has been forecasting that we will all be able to transfer our minds into computers by 2050, and our intelligences will live on, immortal. Tempshill (talk) 04:34, 14 June 2009 (UTC)
- The theory behind that goes something along the lines of plugging into the minute electrical pulses along our spines and transferring them into a computer. In reality this would prove to be extremely dangerous, and may not even produce results.
- How does one "plug" into a spine? Seems to me (I'm no expert, not by a long shot) that this would be very delicate and dangerous. One false move and the spine would be damaged, thus paralyzing, or even killing the subject.
- It seems like it may be futile because those electrical pulses probably don't contain our "soul". More accurately, they don't hold our memories, our self-awareness, or anything that could identify them as an individual. Even if you could get them into a computer, what's to say it'll actually do anything?
- Which presents another problem. Taking the electric pulses would kill the body. So if it doesn't work you've got a dead guy, and possibly a fried computer. Ultimate "oh shit" moment, no? Drew Smith What I've done 04:50, 14 June 2009 (UTC)
- Attaching electrodes to the spines of living people and sending electrical pulses along them has already been done, so it is not necessarily deadly, and it would seem conceptually reversible (the only example I know of, though, was a program using the technique to install experimental orgasm controllers in women, which had some success). Measuring an electrical pulse could be done without erasing it, so that would seem soluble too. Which then leaves the question of the "soul", which of course has no scientific answer anyway. YobMod 06:59, 14 June 2009 (UTC)
- No, sending the pulses isn't deadly, but if you take them all out, and leave nothing (which I'm assuming you would have to do to create "existence" on the computer), wouldn't the body die? And I was using "soul" in a non-religious, non-scientific way. I meant the essence of who you are. If you kill the body but don't capture the "soul" it was all for nothing. Drew Smith What I've done 09:27, 14 June 2009 (UTC)
- Drew, have you ever had an ECG recording? If so, did you leave your irreplaceable essence in the ECG machine ? Cuddlyable3 (talk) 11:33, 14 June 2009 (UTC)
- Drew, impulses in the spine are "taken away" quite often, in certain cases of local anaesthesia (an epidural) and all cases of general anaesthesia, which has a death rate of 5-6 per million [3]. --Mark PEA (talk) 17:16, 15 June 2009 (UTC)
- The synaptic gap is a complex electrochemical signaling pathway, but it can be modeled as a high-impedance voltage source. You can connect many electrical loads in parallel to a single voltage source without any trouble - that is an operating principle of an ECG or EEG. (In other words, when the electrical pulse "goes into the cardiogram machine", it also "goes into the heart muscle" in parallel, with negligible loss of power.) There is no need to worry that the neural electric signals will get "sucked into the machine" forever. Take a look at parallel circuit. A more realistic problem, as has been pointed out, is that we don't really know what signal we are looking at when we probe a nerve, and we can say with pretty good certainty that the electric pulses on the spinal nerve are not "the soul" (they are probably not even related to any cognitive process at all). The spine is primarily used to relay control messages and sensory input between the brain and the rest of the body. Nimur (talk) 14:29, 14 June 2009 (UTC)
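A toy voltage-divider calculation illustrates the "parallel loads" point above; the source voltage and impedances below are assumed order-of-magnitude values, not measurements of any real nerve or recording machine.

```python
# A toy calculation (assumed, order-of-magnitude values) of the "parallel
# loads" point above: a small biopotential behind a source impedance, with a
# high-input-impedance recording amplifier connected across it.
V_source = 1e-3    # V, ~1 mV surface signal (assumed)
R_source = 100e3   # ohm, electrode/tissue source impedance (assumed)
R_amp = 10e6       # ohm, amplifier input impedance (typical order for ECG/EEG)

V_measured = V_source * R_amp / (R_amp + R_source)  # simple voltage divider
I_drawn = V_source / (R_amp + R_source)              # current taken from the body
P_drawn = V_measured * I_drawn

print(f"measured {V_measured * 1e3:.3f} mV of {V_source * 1e3:.3f} mV")
print(f"current drawn {I_drawn * 1e9:.2f} nA, power {P_drawn * 1e12:.3f} pW")
# The recorder sees ~99% of the signal while drawing only a fraction of a
# nanoamp and of a picowatt, so measuring a signal does not "use it up".
```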
- Sticking one's personality into a computer seems like a bit of egotism to me. Unless you killed the original person there would then be two people starting off with the same memories. Dmcq (talk) 11:43, 14 June 2009 (UTC)
- Sounds like a great way of improving the methodology of psychology research. --Mark PEA (talk) 17:16, 15 June 2009 (UTC)
- Is it more egotistical than, say, getting vaccinated or publishing an autobiography? —Tamfang (talk) 06:29, 27 June 2009 (UTC)
- 2050 is a LONG time away in computer technology. If you look back 40 years and see where computers were in 1970 and use that to extrapolate to 2050 - I have no problem believing we could have computers of reasonable size and cost that would have the power to run a neural network simulation of an entire human brain. This issue came up in the Ref desk a few years ago - and I calculated that if Moore's law continues then a suitable computer could be purchased for a million dollars (present value) by about 2035 (a back-of-envelope version of this extrapolation, with assumed figures, is sketched at the end of this comment) - it would have the capacity to simulate the brain - but perhaps not in "realtime" (so the simulated human would think rather slowly). By 2040, everyone could afford one - and by 2050, an affordable machine would be cheap and faster than a real human brain. So the computer technology side of things is a done deal...it'll happen so long as Moore's law continues (which, admittedly, can't be forever). Since we know the real brain packs that much computational power into that small a volume of space - I have little doubt that we could do it in the space of (say) a 10' x 10' x 10' cube...which is plenty small enough. So people who have a million dollars put away (which is a surprisingly large number of people) could be "retiring" into a computer in 25 years. The problem of thinking slowly is not so great - but once the brain is inside the computer - the underlying computer hardware can be upgraded over the years until their thought rates catch up with (and ultimately exceed) those of living, biological people.
- The trickier question is how and when do we put your brain into the computer? Some kind of non-invasive brain-scanning machine that could somehow figure out all of the interconnections and such would be ideal. If it could do the job in a matter of hours, while we sleep perhaps, then you could make a 'backup copy' of your brain every night - and when you die - you'll only have lost the memories formed in the last day...which are probably best forgotten anyway. However, I suspect that people might want their brain scans to be taken while they are in the prime of their mental acuity - and that starts a complicated moral/ethical matter of when to stop recording...since any memories made after that point would be lost.
- But that assumes that this 'brain recording' process is non-invasive and fast (and cheap!). If it's non-invasive but slow (or ruinously expensive), you might have to go get your brain scanned once a year or once a decade. If it's not non-invasive (like maybe you have to slice the brain into thin slices in order to get the data) then clearly you can't do this until you're already dead. Now the moral/ethical part gets even tougher...would people choose to commit suicide at a young age when their brain is functioning at its peak in order that their electronic "afterlife" would be more productive? I don't know - but I don't think it particularly complicates things.
- The business of the lack of a "soul" is something I completely discount. This is a scientific matter - not religious claptrap. Everything that makes our brains tick is due to the configuration of the neurons - and that can be completely simulated given enough computing horsepower.
- Another issue relates to "quality of life" for these electronic people. Can we interface cameras, microphones, taste and touch sensors into these electronic gizmos? Would robotic bodies be an appropriate thing to attempt? Would we instead build a virtual world for these virtual people to inhabit? I could certainly imagine the latter...something like "The Matrix" - an elaborate multiplayer computer game in which the inhabitants are driven by people who are now inhabiting these computers.
- For the electronic brain - there would be no reasonable difference between the living person and the electronic version - their thought patterns can be identical - they will feel that they are still alive, they'll have all of their memories - and the means to form new ones - their dearest friends and relatives would recognize them by the things they say - the way they think. The electronic person would be able to hold down a job - which would be necessary in order to pay for electricity, software backups and replacement computer parts.
- Some very exciting things become possible - you could choose to run your brain at varying clock rates. If you're bored - you can "fast-forward" to the next interesting thing. You could pause your brain for years, decades, centuries and "time travel" to the future. You could send an empty computer on a slow rocket to the nearest star - and when it gets there, transmit your brain pattern to it. You'd be able to travel around the universe at the speed of light...but in zero time as far as you are concerned. One moment you're here on earth...then ZZAPP! and you're in orbit around Alpha Centauri. Of course, when you come back with your new memories and holiday snaps - 8 years have gone by here on earth - but your (dead) friends and relatives can fast-forward through that if they miss you - and for you, it seems like no time has passed.
- Being able to make copies of yourself is an interesting and difficult thing. If the computers involved are merely simulating a human brain using mathematical models of biological neurons - then there would probably be no way for your two copies to be merged back together again...so you couldn't (for example) have a copy go on vacation while the original stays home - then merge them back together again and have memories of both things...that (I think) makes it less interesting to make copies...but it's obviously possible - and more ethical/moral dilemmas are bound to happen as a consequence.
- There is also the possibility of running your electronic brain faster than realtime. This would be one way to make a living - being able to think at (say) ten times the speed of a living human would give you some significant advantages in the job market. Electronic people with more money would be able to upgrade their computers more frequently - and have faster models than poorer people. Rich people would be able to hold down better jobs...not much changes there, I guess.
- In the long term - there might be trillions of electronic people - and only billions of biological people. It might be that we come to think of these brief biological years as a kind of childhood...but the need to look after these trillions of computers - provide power, spare parts, upgrades...that would soon come to dominate the "work" of mankind. But with robotic maintenance workers - maybe even that isn't a problem.
- For people who come to tire of life in this electronic form - suicide is a simple matter. But the ability to "fast forward" your life would probably mean that these people don't choose to kill themselves - but merely slow down their brains more and more to travel further and further into the future - in the hope that something better comes along. In this form, "fast-forwarding" people would consume virtually zero resources - their minds could be archived for years on "backup" memory (like the hard drive on your PC) - and only reloaded once in a while as requested before they slept. Much of mankind might be in backup memory form for most of the time - waking up only once a century to see what's new.
- A lot depends on the technology of the brain scanning machine. That's really the only technological hurdle...we have brain scanners that work at larger scales - I could imagine them getting more and more precise - until they could resolve individual neural connections. There is active research in these areas. I wouldn't be surprised to see this happen by 2050...and the closer we get to it - and the more people who realise that the long-sought goal of immortality is within our grasp - the more that research will be pushed.
- I think this can happen - and within the lifetimes of many of the people reading this page. Sadly - the math says that I probably won't live long enough to make use of it...darn! But I think there is every possibility that if you are (say) 30 years old or younger today - then you could wind up living as long as you want...but not in your biological body.
- Nice to meet you, Mr. Kurzweil; it's an honor to work with you here on the refdesk. Wish you hadn't been pseudonymous for all this time. Tempshill (talk) 16:43, 14 June 2009 (UTC)
- Ouch! I'm pretty sure that accusing someone of being Ray Kurzweil falls under the WP:NPA rules! :-( 17:58, 14 June 2009 (UTC)
Don't kid yourself. Memory banks might extend the life of your computer, not your soul. ~ R.T.G 20:15, 14 June 2009 (UTC)
- Well Steve, I'm optimistic for the future that you envision, because it seems like progress to me; but I really wonder if Moore's Law is actually even relevant on the technology roadmap to making this happen. I see more important obstacles than hardware capacity standing between our present society and a computerized, brain-in-robot-body utopia.
- Without doubt, computers are rapidly accelerating (even the last few years of thermal nightmares, clock-rate growth-pains, and economic downturn have not been able to slow the pace by much). Disk storage is so huge it's laughable (at the lab on Friday, they just threw out several sets of 300 GB disks because they're "outdated" and not cost-effective!) And with a sizable server room, we can stuff petaflops and zettabytes and gigabytes of bandwidth.
- But what about software? Every day I work with brilliant physicists who still don't know the meaning of "pthread" - so when we provide them a rack of 400 CPUs, they use them one-at-a-time (not always, but more often than we'd like). We're at a crossroads of computer interfacing technology - it's becoming harder and harder to stay at the cozy abstraction layer we have been in for a decade or more, because harnessing these new computer technologies means getting down and dirty with the hardware model. 300x faster FFTs, you say, but what's NUMA again? And we're solving "conventional" programming challenges - number crunching type problems. Faster number-crunching machines do not immediately solve all problems - even problems that are already formulated as number-crunching problems!
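To make the parallelism point concrete, here is a toy sketch (not anyone's real lab code - the function name and job sizes are invented) of the same embarrassingly parallel workload run one core at a time and then spread across every available core with Python's multiprocessing module:

```python
from multiprocessing import Pool
import math
import time

def crunch(n):
    # Stand-in for a per-event number-crunching job.
    return sum(math.sin(i) for i in range(n))

if __name__ == "__main__":
    jobs = [200_000] * 16

    t0 = time.time()
    serial = [crunch(n) for n in jobs]      # one core, one job at a time
    t1 = time.time()

    with Pool() as pool:                    # every available core
        parallel = pool.map(crunch, jobs)
    t2 = time.time()

    print(f"serial:   {t1 - t0:.2f} s")
    print(f"parallel: {t2 - t1:.2f} s")
```

The speed-up is roughly the core count for something this trivially decomposable; the hard part in practice is restructuring real scientific codes so that they decompose this cleanly at all.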
- You're sort of skipping the step where a crew of bio-/genetico-/computer-/software-/electronics-engineers take the primitive state of understanding of human neurology, and translate it into an engineering schematic for the hardware and software of the computer of 2050. To do this, we need several key accomplishments - first, we need to really really understand the biology behind the brain. How much do we have to replicate and simulate? Individual cells? Neural blocks? Neural connectivity schemes? Or maybe just a "low-pass filtered" version of some kind of psychoneural model. For all we know, we might need to simulate every neuron's intricate internal cellular chemistry, down to the femto-mole concentrations of neurotransmitters - or worse, we might need to simulate the genetic code and protein folding arrangement. We don't even know what we need to simulate, because the best efforts of a hundred years of psychology and neuroscience still haven't answered the basic questions about what exactly a thought is made of.
- Once we have the elaborate biology of brain function worked out, we then have the arduous task of generalizing it (so that it applies equally well to all human brains, not just a few test cases). The Human Genome Project has shown us that this is non-trivial. (And, they only have some 4 billion "bits" worth of information! The whole human genome for one individual fits uncompressed on a DVD-R!) I can't really comprehend how much "state" you need to save if you want to recreate the finite state machine that is a 45-year old human - there are hundreds of thousands of words, millions of sentences, who knows how many images, sounds, sensory perceptions - all stored in the brain. What if there is a different packing format for every individual? You might have to start with a physics/chemistry simulator of the basic genetic code, and "press play" to see how the neuro-chemistry unfolds, given those sets of genes; and then you have to generate a custom-built interpreter for that individual's mind.
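For scale, here is the rough arithmetic behind the "fits on a DVD-R" remark, assuming the commonly quoted figure of about 3.2 billion base pairs at two bits per base:

```python
# Rough arithmetic only; the genome length is an approximate, commonly quoted figure.
base_pairs = 3.2e9            # approximate haploid human genome length
bits = base_pairs * 2         # four possible bases -> 2 bits per base pair
gigabytes = bits / 8 / 1e9
print(f"{gigabytes:.1f} GB uncompressed")   # ~0.8 GB, well under a 4.7 GB DVD-R
```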
- Now mainstreaming this so that every person can afford this technology is the next challenge. Imagine the issues surrounding reliability and stability. When you have (as SteveBaker predicted) a trillion individuals, what sort of bit-error-rate is acceptable? What mean-time-between-failure is acceptable for a project with a million-year lifespan? What tradeoffs in reliability are acceptable if it means that we can mass-produce this technology and allow more people to store their brains (albeit with more risk)? Thus ensues a complex moral/ethical debate.
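A quick illustration of why the bit-error-rate question bites at this scale - the per-mind failure probability below is purely hypothetical:

```python
# Invented numbers, purely to show the effect of scale on reliability requirements.
population = 1e12            # "trillions of electronic people"
p_fail_per_year = 1e-9       # hypothetical chance that a given mind's host fails in a year
print(population * p_fail_per_year, "catastrophic host failures per year")   # 1000.0
```

Even a heroically small per-individual failure rate turns into a large absolute number of lost or damaged minds once the population is large enough.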
- I don't know if we have 25 or 50 years before the technology is available. I think it's so difficult to guess because we only have half of the technology (the computer side) in a form which we can quantize and measure. The biology, neurology, and psychology are so far from mature that it could be centuries - or never - before it's ready. We don't even know. For all our best guesses, we might have already surpassed the human cognitive ability with a mainstream computer - but without the software to emulate human behavior, we can't run a comparative benchmark suite. Most of the time, when I evaluate a nifty new image recognition program or semantic natural language processor, it doesn't require a hardware upgrade - but new abilities are granted to my almighty, infinitely programmable handheld computing device.
- Anyway, we'll see what the future holds. Nimur (talk) 03:47, 15 June 2009 (UTC)
- But what about software?
- You're sort of skipping the step where a crew of bio-/genetico-/computer-/software-/electronics-engineers take the primitive state of understanding of human neurology, and translate it into an engineering schematic for the hardware and software of the computer of 2050.
- Well, the 2035 estimate assumes that present neural network simulation software would be employed to completely brute-force simulate the brain by replicating the actions of every single neuron. Biologists understand what an individual neuron does - you apply these stimuli - some time later, it fires or not and some signals head outwards...there is some recovery curve or other. Sure there will be questions of blood flow and such like - but our brains seem pretty resilient to changes in those kinds of parameters. There would presumably need to be careful measurement of the performance of various kinds of neuron - but it's not that tough. Doubtless, advances in understanding the brain at the middle levels (higher than "neuron" but lower than "the frontal lobe does this and the temporal lobe does that") would result in massive simplifications in software and reductions in hardware - but the brute-force approach ought to work - and it's easy to crunch the math to predict when we might get there.
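As a sketch of the kind of simple per-neuron rule that this brute-force approach assumes is sufficient, here is a crude leaky integrate-and-fire neuron; every constant is invented and it makes no claim about the real biology:

```python
# A toy leaky integrate-and-fire neuron: integrate input, leak toward rest,
# fire and reset when a threshold is crossed.  All constants are made up.
def simulate_neuron(input_current, dt=0.001, tau=0.02, threshold=1.0):
    v, spike_times = 0.0, []
    for step, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)     # leak plus integration of the input
        if v >= threshold:              # fire...
            spike_times.append(step * dt)
            v = 0.0                     # ...and reset (a crude "recovery curve")
    return spike_times

print(simulate_neuron([60.0] * 100))    # constant drive -> a regular spike train
```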
- I can't really comprehend how much "state" you need to save if you want to recreate the finite state machine that is a 45-year old human - there are hundreds of thousands of words, millions of sentences, who knows how to count how many images, sounds, sensory perceptions - all stored in the brain.
- Well, we know how many neurons there are - and how many connections each one has to its neighbours. That tells you how much storage space you need for the network - and it's do-able with another 25 years of Moore's law expansion for a $1,000,000 hardware outlay. The rate at which neurons fire gives you a way to measure the amount of CPU time you need - it's huge - but it's very parallelisable...this allows us to use a lot of simple CPUs to provide the horsepower we need. It's do-able.
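The back-of-the-envelope storage figure, using the commonly quoted ballpark of roughly 86 billion neurons with a few thousand synapses each (textbook estimates, not measurements):

```python
# Ballpark figures only - the point is the order of magnitude, not the exact number.
neurons = 8.6e10                     # ~86 billion neurons
synapses = neurons * 7_000           # a few thousand connections per neuron
bytes_per_synapse = 8                # target id + length + sign, packed
petabytes = synapses * bytes_per_synapse / 1e15
print(f"~{petabytes:.1f} PB just to store the wiring")   # a few petabytes
```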
- What if there is a different packing format for every individual?
- The naive plan that I propose would entail simulating all of the connections between that individual's neurons. That's all of the memories (although perhaps not the short-term memory) - all of the learned algorithms - the lot. We number all of the neurons - and for each one make a list of all the other neurons it connects to - the lengths of those connections and whether they are excitatory or inhibitory. This connection pattern is obviously unique to that individual - and will change day by day and hour by hour as memories are formed. We'd need to understand how new connections are formed.
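One possible shape for the per-neuron record in this naive plan - the field names are invented; the point is that the whole scheme is just tables of numbers:

```python
# A hypothetical record format for the connection-list scheme described above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Synapse:
    target: int         # index of the neuron this connection reaches
    length: float       # connection length (affects signal delay)
    excitatory: bool    # True = excitatory, False = inhibitory

@dataclass
class Neuron:
    index: int
    synapses: List[Synapse] = field(default_factory=list)

# A two-neuron fragment of a "connectome", purely for illustration:
n0 = Neuron(0, [Synapse(target=1, length=0.3, excitatory=True)])
n1 = Neuron(1, [Synapse(target=0, length=0.3, excitatory=False)])
```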
- You might have to start with a physics/chemistry simulator of the basic genetic code, and "press play" to see how the neuro-chemistry unfolds, given those sets of genes; and then you have to generate an custom-built interpreter for that individual's mind.
- No - that's truly impossible...if it can't be done at the level of neurons - it's certainly impractical to look to yet lower levels.
- Imagine the issues surrounding reliability and stability. When you have (as SteveBaker predicted) a trillion individuals, what sort of bit-error-rate is acceptable?
- Real neurons are incredibly unreliable - they die of old age - diseases kill them. The brain has enough redundancy and plasticity to survive that. Our software simulation will have that same property. So small errors in these gargantuan computers would be survivable...perhaps even necessary to an accurate simulation! We need to occasionally make mistakes or we won't be human. However, wholesale failure of an entire computer could be serious - like a car wreck. Fortunately, we can make backup copies. You could have the exact state of your software brain backed up (onto "hard disk" - or whatever we have in 2050) on a regular basis. A "car wreck" scale of computer failure would result in you losing a few days of memories - but not much worse.
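The "backup copy" idea is nothing exotic in software terms - a periodic checkpoint of whatever state tables the simulator keeps, sketched here with pickle and a placeholder brain_state:

```python
# A minimal checkpoint sketch; brain_state stands in for the simulator's real tables.
import pickle

def checkpoint(brain_state, path="brain_backup.pkl"):
    with open(path, "wb") as f:
        pickle.dump(brain_state, f)

brain_state = {"neurons": [], "simulated_time": 0.0}   # placeholder state
checkpoint(brain_state)    # run, say, once per simulated day
```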
- Thus ensues a complex moral/ethical debate.
- I don't doubt the moral/ethical issues...they are exceedingly complex. However, we'll have a decade between the technology first seeming "real" and the first people actually using it - and to start with, it's going to take immense computing resources (perhaps a warehouse full) for "early adopters" - so only the very rich (or perhaps very accomplished/valuable) individuals will be able to have it. It'll be another decade after that before the average person would be able to afford it. That's plenty of time for the ethical issues to be worked out.
- SteveBaker (talk) 04:29, 15 June 2009 (UTC)
- I wonder how quickly the brains will have to be imaged and at what resolutions to extract the data with enough fidelity. APL (talk) 06:31, 15 June 2009 (UTC)
- That's a serious concern. I used to think that brains were like computers in that they had software and hardware - and (as with a computer) if you turn it off, the software just vanishes and everything the machine was doing at that instant was lost. However, brains are a bit more complicated. It seems that only short-term memory is represented as electrical signals zipping around some kind of closed-loop pathway - but everything that is more than a few minutes old is retained by physically rewiring the neurons. That being the case (and perhaps the jury is still out on this one), we'd only have to replicate the connectivity of the neurons to have the brain be able to 'reboot' successfully with only short-term memory loss. I don't know enough brain biology to be certain of that - but I believe it to be true. SteveBaker (talk) 13:34, 15 June 2009 (UTC)
- I can't really comprehend how much "state" you need to save if you want to recreate the finite state machine that is a 45-year old human - there are hundreds of thousands of words, millions of sentences, who knows how to count how many images, sounds, sensory perceptions - all stored in the brain.
- Well, we know how many neurons there are - and how many connections each one has to it's neighbours. That tells you how much storage space you need for the network
- What about the key issue of plasticity (both neural and synaptic)? Brains physically change. The brain you had this morning is not the same brain, in terms of the number of neurons and the connections each one has, that you have now. Neurons are born and die, they grow and shrink, they fire and wire. It's very likely that it is those very changes that encode the part of you - your memories, what you sensed, learned, forgot and how you felt - that was formed today.
- So even if we could scan a brain and somehow make sense of what all the neurons in the present synaptic configuration meant at that given moment, it's likely that would only provide us with a snapshot in time, analogous to the information a screen-shot gives you about a movie. To understand the movie you need to watch frame after frame after frame, in the correct order. Likewise, you might only be able to understand how the plasticity of the brain encodes the information present by doing scan after scan after scan and comparing the changes. But even if you scanned your brain every day from birth until the day of your death, would you still be able to understand its plasticity well enough to predict how it would wire in a completely novel situation? Say, for example, if you found yourself face to face with a great white shark while going for a dip in the sea, how could you possibly predict how your brain would wire in response to that? What frame of reference would you use? Going back to the movie analogy, consider a good twist ending. You can watch every single frame, and - up to that final scene - still not fully predict how it will end and be totally amazed when something unexpected is revealed.
- I predict we will be able to artificially model human brains one day, but never be able to recreate the unpredictable wonder that is human consciousness. Predictable movies are boring movies, so I'll enjoy every second of the original on the big screen and pass on the straight-to-DVD, cash-in sequel, thanks. Rockpocket 06:17, 15 June 2009 (UTC)
- To be clear: You're proposing a paranormal aspect to human consciousness? APL (talk) 06:31, 15 June 2009 (UTC)
- No, I'm not. I'm suggesting an unpredictable aspect to human consciousness. Rockpocket 07:10, 15 June 2009 (UTC)
- But surely accurately modeling the brain would entail modeling even the unpredictable aspects? It's not as if our brains suddenly break the laws of physics every time we have an unexpected thought? APL (talk) 17:53, 15 June 2009 (UTC)
- What about the key issue of plasticity (both neural and synaptic)? Brains physically change. The brain you had this morning is not the same brain, in terms of the number of neurons and the connections each one has, that you have now.
- Our software simulation would have to allow dynamic reconnection of simulated neurons - but that's not a hard thing to do. Bear in mind that we already have software that simulates neural networks - they can easily accommodate on-the-fly rewiring...creation and deletion of neurons. I don't think this is an obstacle. Remember - I'm not talking about building some custom computer chip which mimics the wiring of your brain. I'm talking about using a general purpose computer to simulate those connections using tables of data. Naively: You'd have a big array of neuron data - with one entry for each neuron. Each neuron gets a number and inside its data structure would be a long list of the numbers of all of the other neurons it connects to (the synapses). When the neuron 'fires' you'd loop through the list of connections and apply an excitatory or inhibitory signal to each of the synapses - changing the data structure for the neurons you just changed the state of. Over and over again...very fast. Adding new connections to our software brain is a simple matter of changing the list of synapses that this neuron connects to. We would (of course) need to understand what causes new connections to form - at the neural level. That may take some biology that we don't understand yet - but we have 40 years to figure it out and when you consider the state of our understanding of the brain back in 1970, I think we have time.
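A stripped-down version of that update loop - neurons as rows in a table, synapses as lists of (target, weight) pairs, and "rewiring" as nothing more than editing those lists; the thresholds and weights are invented:

```python
# Toy spike propagation over a table of neurons; growing a synapse is just a list append.
neurons = [
    {"potential": 0.0, "synapses": [(1, +0.6), (2, -0.4)]},   # (target index, weight)
    {"potential": 0.0, "synapses": [(2, +0.5)]},
    {"potential": 0.0, "synapses": []},
]
THRESHOLD = 1.0

def step(fired):
    """Propagate one round of spikes; return the neurons that fire next."""
    next_fired = []
    for idx in fired:
        for target, weight in neurons[idx]["synapses"]:
            neurons[target]["potential"] += weight
            if neurons[target]["potential"] >= THRESHOLD:
                neurons[target]["potential"] = 0.0
                next_fired.append(target)
    return next_fired

def grow_synapse(src, dst, weight):
    """Grow a connection: 'plasticity' in this toy model is just a list append."""
    neurons[src]["synapses"].append((dst, weight))

print(step([0, 1]))          # neuron 2 receives -0.4 + 0.5 = 0.1 -> no spike
grow_synapse(2, 0, +0.9)     # on-the-fly rewiring
```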
- Say, for example, if you found yourself face to face with a great white shark while going for a dip in the sea, how could you possibly predict how your brain would wire in response to that?
- Well, I presume that each neuron is mindlessly following a set of rules that some piece of biochemistry makes it follow. Perhaps the concentration of chemicals along a frequently used pathway results in a stimulus for neurons to grow their dendrites in that direction - resulting in shorter connections where the information flow is densest. I'm no biologist - so I don't know whether that's what happens - but the answer is one of chemistry and cell growth - not some complicated question about sharks and swimming. The neurons are just simple switching machines...not unlike the 'gates' in an electronic circuit - except for this rewiring business.
- I predict we will be able to artificially model human brains one day, but never be able to recreate the unpredictable wonder that is human consciousness.
- Unpredictability comes from complexity - as well as from randomness. Computer software can already be so complex as to seem unpredictable by any reasonable measure. Our "brain computer" would be similar in size and complexity to the entire Internet. Chaos theory can easily ensure that degree of seeming randomness because the processes we'd be simulating would suffer from 'sensitive dependence on initial conditions'. I'm sure that if we could do this at all, the person involved would be 100% convinced that they were still "alive". The extent to which we could make things seem "normal" for them is a tricky question...the immediate lack of the normal senses of sight, sound, touch, smell, taste, proprioception, etc would be a big problem. The simulated brain would need to learn how to connect to its new senses...cameras, microphones, etc. We could perhaps plan for this in advance by planting electronic devices into the living brain which people would learn to use before death - and which could continue to function in precisely the same way in the simulated brain. We know that blind people can be given "sight" by connecting electronics directly to nerves in the retina - and even to other parts of the body.
- 13:34, 15 June 2009 (UTC)
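Sensitive dependence on initial conditions, in miniature: two copies of the logistic map started a trillionth apart diverge completely within a few dozen iterations (a standard textbook example, not a brain model):

```python
# The logistic map in its chaotic regime: a tiny difference in starting state explodes.
def logistic(x, r=3.9):
    return r * x * (1.0 - x)

a, b = 0.4, 0.4 + 1e-12
for _ in range(60):
    a, b = logistic(a), logistic(b)
print(abs(a - b))   # of order 0.1-1.0: the initial difference of 1e-12 has exploded
```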
I think the system identification problem is far tougher than the "system simulation" problem discussed above. I am curious to know what technology can possibly be used to "scan the brain" at a sufficient spatial, temporal and functional resolution. Is there one available today that simply needs to be "grown", or do we need to come up with something(s) qualitatively new? Abecedare (talk) 07:38, 15 June 2009 (UTC)
- I guess you could freeze the brain and take thin slices. If everything necessary to reproducing the brain survived such a process I don't suppose it would be too difficult to automate scanning the slices and generating an electronic replica. It sounds about possible with today's technology. Doing it without killing a person sounds a lot more difficult but it might be possible with nanomachines going around in the blood and reporting back to scanners around the head. There's also the possibility of replacing bits of the brain gradually with prostheses, so that one day, without a visible transition, a person is entirely implemented in silicon. Dmcq (talk) 11:56, 15 June 2009 (UTC)
- I agree that the 'scanning' problem is a tough one. It's the hardest because (in a sense) the other part of the problem is already solved. We know we'll be able to build computers big enough - and we know that simulating something as fundamentally simple as a neuron can be done. We know that small errors in that simulation aren't critical because we know that the brain still functions after considerable damage, under changes in chemical environment and so forth. Scanning is hard because we haven't even started to look at that technology.
- We have brain scanners that can look at large-scale structures - perhaps (I don't know) a really precise PET scanner or CT scanner could image such small structures given 40 years of technology advances. There is certainly plenty of research into ever more precise brain imaging...it's not unreasonable to assume that we could do that. Nanotechnology (meaning nanotechnology robots and computers) would certainly make the destructive imaging of the brain very do-able - but I'm doubtful that this technology will come to fruition.
- The idea of replacing the brain a bit at a time is an intriguing one. We know that you could chop out a sizeable chunk (say 1%) of someone's brain and that they'd still be "themselves" - able to think, function, etc. If we replaced that 1% with some computer circuitry representing a neural simulation of about the same number of neurons - and did so in such a way that the living brain could make use of it just as if it were biological neurons - then perhaps brain plasticity would result in that 1% being used for thinking and memory storage in the future. Do this 1% at a time and after 100 such operations - the person is living entirely inside the computer. My concern is that the rate at which this might happen could be too slow. If it took the person a year for their brain to adapt to the new circuitry (which seems reasonable) - it might take 100 years to replace the brain at 1% per year. You'd have to do it in small steps or else there might be massive memory loss at each stage.
- SteveBaker (talk) 13:45, 15 June 2009 (UTC)
- For our purposes we need to be able to measure not only the structure of the neuronal network but also the synaptic strengths that are thought to encode memory, and perhaps other neuronal and synaptic characteristics. None of the technologies listed above are likely to ever be able to do this - the reason being a fundamental mismatch, not just technological barriers that can be resolved by throwing enough money at the problem. For instance, the spatial resolution of an MR scanner is not only (at least) nine orders of magnitude worse than what would be required to visualize a neuron - the contrast mechanisms (T1/T2 relaxation times in anatomical scans, or the haemodynamic response in fMRI) are essentially bulk-tissue phenomena and hence will never be scalable down to neuron resolution. The contrast mechanism in CT scans (X-ray attenuation coefficient) is even more limited, while PET measures the degree of absorption of the injected radionuclide, which again is a bulk effect. Finally, slicing and freeze-drying a brain may preserve the structural properties of the tissue, but many of the functional properties are destroyed.
- In short, I think a fundamentally new enabling technology is needed in order to identify and record the brain state. Personally I think most computer scientists underestimate the difficulty of this "wet sciences" problem. I have no doubt that it will eventually be solved, but I'll be (pleasantly) shocked if it happens in a 30-50 year time-frame, even in a laboratory setting. Abecedare (talk) 19:10, 15 June 2009 (UTC)
- Awesome information (albeit a bit depressing!). Thanks! I guess we computer folk had better get a move on with the nanotech robots then. :-( SteveBaker (talk) 00:42, 16 June 2009 (UTC)
- Just wanted to thank everyone for this interesting and informative discussion about mankind's oldest concern in the modern age. And to think, all this from a poorly-worded question from an anon! —Pie4all88 T C 21:51, 16 June 2009 (UTC)