Wikipedia:Reference desk/Archives/Science/2010 November 8
November 8
Fastest airship
For the greatest speed, would it be better to make a practical airship as large as possible, or as small as possible? 92.29.116.53 (talk) 01:06, 8 November 2010 (UTC)
- I don't think it would be so much size as shape, and how smooth the edges are. For maximum speed, you would want to put the cabin inside. --The High Fin Sperm Whale 01:12, 8 November 2010 (UTC)
- Our Airship article says:
"The disadvantages are that an airship has a very large reference area and comparatively large drag coefficient, thus a larger drag force compared to that of airplanes and even helicopters. Given the large flat plate area and wetted surface of an airship, a practical limit is reached around 80–100 miles per hour (130–160 km/h). Thus airships are used where speed is not critical." Drag coefficient then says that "airships and some bodies of revolution use the volumetric drag coefficient, in which the reference area is the square of the cube root of the airship volume."
- Clearly, all else being equal, a larger airship will have greater drag and will require greater thrust to maintain the same speed as a smaller airship. So you'd want a smaller airship for speed, down to the limit of no longer having enough lift to carry the same propulsion system (though you could probably get away with carrying less fuel, too, depending on your purposes). WikiDao ☯ (talk) 01:34, 8 November 2010 (UTC)
- Given equal volumes and engine powers, a long thin airship can fly faster in still air than a short fat airship. Cuddlyable3 (talk) 08:44, 8 November 2010 (UTC)
While it's true that a smaller airship would have less drag, it would also only be able to support a smaller, less powerful engine. The Zeppelins and similar airships were big things, but they could have been built smaller. I'm unclear about the best ratio of power to drag, so the question is still open. I'm imagining an airship built to cross the Atlantic with the greatest speed, no expense spared. 92.15.3.137 (talk) 11:18, 8 November 2010 (UTC)
- An airship would be able to cross the Atlantic from North America to Europe much faster than the reverse by using the jet stream, presuming your specific design was capable of high-altitude flight. Googlemeister (talk) 14:38, 8 November 2010 (UTC)
There must be some optimum size. For example, while a small aircraft could be faster than a large aircraft, a six-inch model aircraft will not do. 92.29.125.32 (talk) 11:04, 11 November 2010 (UTC)
- The answer is: the bigger the higher the maximum speed. (Assuming the most aerodynamic shape at any one size.) From above:
- Drag coefficient then says that "airships and some bodies of revolution use the volumetric drag coefficient, in which the reference area is the square of the cube root of the airship volume."
- So, while the reference area (hence drag) rises with the square, the mass rises with the cube. Therefore larger airships can carry progressively more powerful engines in proportion to their surface area, and so achieve a higher maximum velocity. -84user (talk) 23:08, 11 November 2010 (UTC)
Thanks, so that's why Zeppelins got bigger and bigger over the years. 92.29.120.164 (talk) 14:39, 12 November 2010 (UTC)
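To make the scaling argument above concrete, here is a minimal sketch. It assumes, purely for illustration, that installed engine power scales with lift (i.e. with envelope volume) and that drag uses the volumetric reference area quoted from Drag coefficient; the constants RHO_AIR, CD_VOL and POWER_PER_M3 are invented for the example.

```python
# Rough sketch of 84user's scaling argument, assuming engine power scales
# with lift (i.e. with envelope volume V) and drag scales with the
# volumetric reference area V**(2/3).  All constants are illustrative.
RHO_AIR = 1.2      # kg/m^3, sea-level air density
CD_VOL = 0.025     # assumed volumetric drag coefficient (illustrative)
POWER_PER_M3 = 50  # W of installed power per m^3 of lift (assumed)

def max_speed(volume_m3):
    """Speed at which available power equals drag power: P = 0.5*rho*Cd*A*v^3."""
    area = volume_m3 ** (2.0 / 3.0)           # volumetric reference area
    power = POWER_PER_M3 * volume_m3          # power grows with the cube (volume)
    return (2.0 * power / (RHO_AIR * CD_VOL * area)) ** (1.0 / 3.0)

for v in (1e3, 1e4, 1e5, 2e5):                # envelope volumes in m^3
    print(f"{v:>9.0f} m^3 -> {max_speed(v):5.1f} m/s")
# Speed grows only as V**(1/9), so doubling the volume buys roughly 8% more speed.
```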
Vaccinations of the Chilean miners
I'm curious about a line in the article about the Chilean mining accident saying the group was vaccinated against tetanus, diphtheria, flu, and pneumonia. Particularly flu and diphtheria; these diseases are caught from other people, and the group had already been isolated for three weeks by the time the vaccines were sent down, so if the diseases were present, wouldn't everyone have already been exposed? Or were the vaccinations a precautionary measure intended primarily for after the miners were rescued? Mathew5000 (talk) 02:10, 8 November 2010 (UTC)
- It's common to use a DPT vaccine to immunize against both diphtheria and tetanus at the same time, although exact protocols vary from country to country. Physchim62 (talk) 02:35, 8 November 2010 (UTC)
- As for flu, it can be acquired through contact with a surface, and the miners were in contact with the "world above", including family members living in less than ideal conditions in Camp Esperanza. Physchim62 (talk) 02:41, 8 November 2010 (UTC)
- Thanks. On the first point you're probably right although if they were given a DPT vaccine I'd expect news sources to mention all three diseases, whereas none of them mention pertussis. On the second point, I think you are correct again as I found a news article in Spanish [1] that explains, in connection with the vaccines, the concern about infection on the supplies they were sending down in the shaft, although they did apparently take “las precauciones de asepsia” before anything went down. Mathew5000 (talk) 07:36, 8 November 2010 (UTC)
- You can get versions of the "DPT vaccine" that don't include the pertussis component, as is mentioned in our article, and (according to this report) these are the ones that are used for the maintenance vaccinations of adults in Chile (the triple DPT vaccine being given at age 2–6 months). The same report mentions that diphtheria can be transmitted by "indirect contact with contaminated elements", although this is rare. So my guess is that the medical team were more worried about tetanus infection (an obvious risk for people working in a mine), and gave the DT vaccine either because that was the vaccine they were used to using in Chile or because they thought there was a potential risk of diphtheria infection. Physchim62 (talk) 13:11, 8 November 2010 (UTC)
- Thank you very much, Physchim62! —Mathew5000 (talk) 09:09, 9 November 2010 (UTC)
gravity
Is gravity repulsive? —Preceding unsigned comment added by Ajay.v.k (talk • contribs) 03:32, 8 November 2010 (UTC)
- Yes, I find it disgusting. How dare it not allow me to fly at will! HalfShadow 03:33, 8 November 2010 (UTC)
- And have you even seen some of those equations that general relativity vomits out? Physchim62 (talk) 03:48, 8 November 2010 (UTC)
- No, gravity always causes an attraction between two masses – it might be a very small attraction, but it is always an attraction, never a repulsion. Physchim62 (talk) 03:48, 8 November 2010 (UTC)
- Unless you happen to have some Negative mass. DMacks (talk) 04:58, 8 November 2010 (UTC)
I just want to say, I think this is a very good question, because I was wondering what it would be like if the laws of gravity were reversed and if there was just a whole different way of looking at gravity. If gravity repelled, for example. So anyway, OP, if you could, like, tell a little more about what got you to ask that question, I would be interested. AdbMonkey (talk) 04:59, 8 November 2010 (UTC)
- The fact that gravity is an attraction only (and never a repulsion) makes it unlike the other fundamental forces. For this and other reasons, no quantum theory of gravity exists; and gravity can be described with general relativity (while other interactions like electrostatic force can not). Nimur (talk) 05:18, 8 November 2010 (UTC)
- Is there a fundamental flaw in the theory that gravity is a repulsion between nothingness and masses? Cuddlyable3 (talk) 08:39, 8 November 2010 (UTC)
- Some kinds of nothingness are very gravitationally attractive to masses. And I can't think of any kinds of nothingness that aren't – "nature abhors a vacuum". WikiDao ☯ (talk) 23:02, 8 November 2010 (UTC)
- Black holes have a heck of a lot of somethingness. Red Act (talk) 23:53, 8 November 2010 (UTC)
- Could you be thinking of virtual particles, Cuddlyable3? That would be, in a sense, "masses" {emerging from/arising out of/being "repulsed" by?} "nothingness", right...? WikiDao ☯ (talk) 05:21, 10 November 2010 (UTC)
General relativity doesn't even consider gravity to be an attraction. For example, the article on Newtonian gravity uses the word "attraction" 11 times, but the article on general relativity doesn't use it once. "Attraction" as used when discussing Newtonian gravity refers to a kind of action at a distance, which general relativity rejects. In reality, mass causes a curvature of spacetime in a purely local manner. Rather than being attracted to that distant massive object, other objects in that vicinity instead just travel along locally straight lines on that curved spacetime. When discussing the forces between particles, "attraction" can be a local phenomenon, in the form of an acceleration effected in a local manner via gauge bosons. But general relativity doesn't even consider gravity to be an acceleration, a complete theory of quantum gravity doesn't exist, and the gauge boson that would be involved in gravity, the graviton, has never been observed, so it's far from clear that that same form of "attraction" mechanism would also apply in any sense to gravity. Red Act (talk) 11:44, 8 November 2010 (UTC)
I just thought maybe there was some theory that a center point in the universe created the repulsion, so that gravity was actually repulsion, but, um, I would not know. AdbMonkey (talk) 14:30, 8 November 2010 (UTC)
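For reference, the Newtonian form of the point made above (attraction only, for ordinary positive masses) is

```latex
\mathbf{F} \;=\; -\,\frac{G\,m_1 m_2}{r^2}\,\hat{\mathbf{r}},
\qquad G \approx 6.674\times10^{-11}\ \mathrm{m^3\,kg^{-1}\,s^{-2}},
```

where the minus sign means the force on each mass points toward the other; the sign could only flip if one of the masses were negative, which is the speculative case linked above.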
Glucose test
Why does glucose react with Benedict's solution? —Preceding unsigned comment added by 173.48.177.117 (talk) 04:53, 8 November 2010 (UTC)
- We have an article about Benedict's solution, which explains exactly what sorts of chemicals it reacts with (and the gory chemical details of exactly why those are the ones). We have an article about glucose, with a whole bunch of different types of diagrams...see if you can find one there that has the general functional group type with which Benedict's reacts. DMacks (talk) 04:56, 8 November 2010 (UTC)
- As a further hint, compare the oxidation states of an aldehyde versus a carboxylic acid, and copper(I) oxide versus copper(II) oxide. You might want to check reducing sugar. John Riemann Soong (talk) 09:13, 8 November 2010 (UTC)
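For completeness, one common way to write the net reaction behind the positive test (the open-chain aldehyde form of glucose reducing alkaline Cu(II), which is actually held as a citrate complex in Benedict's solution, to brick-red Cu(I) oxide) is

```latex
\mathrm{RCHO} \;+\; 2\,\mathrm{Cu^{2+}} \;+\; 5\,\mathrm{OH^-} \;\longrightarrow\; \mathrm{RCOO^-} \;+\; \mathrm{Cu_2O}\!\downarrow \;+\; 3\,\mathrm{H_2O}
```

which is exactly the aldehyde-to-carboxylate and Cu(II)-to-Cu(I) oxidation-state change hinted at above.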
Can an airship use siphons rather than fans?
I had my own airship question, which I'll file separately to make sure I don't take away from the previous question today.
Is it possible to get good efficiency from an airship by not using fans external to the airship, but simply having siphons that take in air from the front and push it out through a nozzle at the rear? A single chamber that uses some fibers to pull open a cavity at the center of the ship, then allows it to contract, should be enough in concept, with one-way baffles at front and rear. Of course, multiple chambers separated by flexing partitions would allow the ship to take in and discharge air more continuously, without needing to change its overall shape. The exact form of the nozzle at the rear strikes me as rocket science, about which I'm best off saying as little as possible...
I understand that energy may be wasted if the air is significantly compressed or expanded in the process, since this involves changes in temperature; but in general it seems like such a system should convert the entire energy expended into propulsion. Of course, the real appeal is that one dreams of riding a zeppelin that moves effortlessly and silently among the clouds. Wnt (talk) 12:14, 8 November 2010 (UTC)
- You seem to visualise an airborne Jellyfish. Cuddlyable3 (talk) 14:06, 8 November 2010 (UTC)
- Sounds to me even more like a low-intensity jet engine. I don't see why it wouldn't be feasible. TomorrowTime (talk) 17:11, 8 November 2010 (UTC)
- I doubt it would be efficient. Turbulence in airflow is a lossy process. You can help overcome turbulence by keeping the airflow laminar - that means you need smooth surfaces and continuous air streams. The apparatus described above sounds like it would be "pulsating" - this would incur a huge amount of loss. Every time airflow impinged on a baffle or a valve, it would lose energy; the engine or mechanism used to drive the system would have to compensate by adding more energy. We have a great diagram of thermodynamic efficiencies for various engine concepts - you'll have a very hard time beating a turbofan in terms of specific impulse. They are among the most efficient devices ever built by humans for extracting kinetic energy out of chemical combustion. Nimur (talk) 18:33, 8 November 2010 (UTC)
- The 1930 Omnia Dir did use directable air "jets" at each end (1932 article with diagrams and photos), in addition to a normal external propeller, for low-speed manoeuvring. -84user (talk) 23:49, 11 November 2010 (UTC)
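As a rough way to quantify why "all the energy goes into propulsion" cannot quite hold even before turbulence and valve losses are counted, here is a minimal sketch of the ideal momentum and energy bookkeeping for any air-breathing reaction jet; the mass-flow and speed figures are invented for the example.

```python
# Ideal (loss-free) momentum/energy bookkeeping for a reaction jet.
# Even with no internal friction, kinetic energy left in the exhaust
# stream is wasted; that is the Froude propulsive efficiency.
def jet_performance(mass_flow_kg_s, v_ship, v_jet):
    """v_ship and v_jet are airspeeds relative to the airship (m/s), v_jet > v_ship."""
    thrust = mass_flow_kg_s * (v_jet - v_ship)                 # N: momentum added to the air per second
    useful_power = thrust * v_ship                             # W: work done pushing the ship
    jet_power = 0.5 * mass_flow_kg_s * (v_jet**2 - v_ship**2)  # W: energy given to the air
    return thrust, useful_power / jet_power                    # second value = 2 / (1 + v_jet/v_ship)

# Illustrative numbers: 50 kg/s of air, ship cruising at 20 m/s.
for v_jet in (25, 40, 80):
    thrust, eta = jet_performance(50, 20, v_jet)
    print(f"v_jet={v_jet:3d} m/s  thrust={thrust:6.0f} N  propulsive efficiency={eta:.0%}")
# A gentle, high-mass-flow jet (small v_jet - v_ship) is efficient but weak;
# a fast jet gives more thrust at lower efficiency.
```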
Brown sugar
I wondered what made brown sugar different than regular sugar, so I looked it up. Now I'm a bit confused. It seems, from what I read, that to make sugar you cut down the cane, process it somehow, and this gives you sugar crystals and molasses. Then, to make brown sugar, you add the molasses back into the sugar. So why bother separating them in the first place? The brown sugar article mentions being able to better control the proportion of molasses to sugar, but is this the only reason? It seems overly complicated just to maintain consistency. Dismas|(talk) 12:52, 8 November 2010 (UTC)
- Well, other than quality control, which tends to be rather important nowadays, our article also mentions enabling the use of beet sugar while keeping the taste of cane-sugar brown sugar that consumers in many countries expect. It's likely cheaper anyway. White sugar refineries produce large quantities of refined white sugar cheaply for the variety of markets which use sugar, and diverting some of that production to make brown sugar by adding back some molasses before crystallisation is easier than setting up a separate production line. Highly refining the sugar also makes it easier to remove unwanted impurities other than molasses. This also agrees with the cost in most countries AFAIK (at least in NZ): white sugar is the cheapest, brown sugar costs more, but less refined sugars cost even more. (Well, in NZ we also get "raw sugar", which tends to be the same price as brown sugar, but I'm not sure what it really is; it tends to be less brown and also far less sticky than brown sugar, so I would guess it has less molasses; it's also more granulated.) See [2] for an example of how white sugar is produced. The recent LoGiCane [3] sold in Australia [4] and NZ [5] would be another example where something is added back that was removed, although it isn't uncommon in other areas either. Nil Einne (talk) 13:25, 8 November 2010 (UTC)
Counter toxicity
There has been a lot in science journals and whatnot about the drug salinomycin killing off cancer stem cells, more than 100 times more effectively than anything else available at present; it is also said to kill only cancer cells without disturbing other cells. The drug is currently produced cheaply for livestock, to kill off their parasites. The tests were done on mice, and the major drawback of this drug is that it seems to be very toxic to humans, with effects ranging from possible long-term heart problems to muscle problems to being possibly fatal. My question is: would it be possible to ever come up with drugs or something else that counteracts the toxicity and could in the future make it possible for humans to use salinomycin to fight cancer? Is it possible to have a counteractive drug against drugs that are toxic, or is that a dead end; in other words, once something toxic is taken in, are there no drugs that can be taken as well to alleviate the toxic effects? —Preceding unsigned comment added by 71.137.248.238 (talk) 14:48, 8 November 2010 (UTC)
- There are many drugs that are given together with other drugs that prevent or counteract side-effects of the first one. Whether it's feasible for a specific case depends on how related (at a biochemical level) the desired effects are to the undesired ones. For example, if the drug hits two chemical receptors, a specific agent could possibly be found that prevented the drug from affecting one (preventing the undesired effect when the drug would hit it) while still allowing it to affect the other (leading to desired effect). Or else one could alter the drug itself to be more specific to the target that has the desired effect. On the other hand, if the side-effect and desired effect are both part of the same biochemical pathway, it becomes hard to stop one effect specifically without also stopping the other. Medicinal chemistry and chemical biology are two fields that study how exactly a chemical exerts its effects--what biochemical binding happens, and how the structure of the drug does or does not affect it--and therefore can study how to alter a drug to be more specific or design a related compound that protects against or rescues the "other" biochemical effects. DMacks (talk) 17:48, 8 November 2010 (UTC)
- Every drug has a therapeutic window, some narrower than others, and salinomycin apparently has some troubles there. Often it is possible to improve a drug to widen the window, because (as in this case) toxicity may be in one tissue (the heart) while the benefit is in another (the breast tumor). Or they could affect different proteins in the same cell. Through trial and error (most often) or perhaps by identifying the desired and undesired targets and trying to do rational drug design, it is possible to modify the drug so that it won't sit as well in the wrong place, or more perfectly fits the right one (see lock-and-key model (enzyme)). Alternatively a change in the drug might affect whether cancer cells can get rid of it with P-glycoprotein, or whether it penetrates the blood-brain barrier, or how rapidly it is broken down in the liver (since sometimes the breakdown process causes the toxicity), and any number of such idiosyncratic considerations.
- But mostly, people try a lot of different related compounds based on what they can synthesize and hope they get lucky. See high-throughput screening. Also drug discovery and combinatorial chemistry may be interesting. Oh, and last but not least, consider personalized medicine using pharmacogenetics to screen out the patients the drug is most likely to harm. Wnt (talk) 22:23, 8 November 2010 (UTC)
Question
If I have a bag of sand with some marbles in it and I shake the bag of sand, do the marbles end up at the top or the bottom of the bag? —Preceding unsigned comment added by Mirroringelements (talk • contribs) 14:59, 8 November 2010 (UTC)
- The Brazil nut effect applies when the particles are of similar density. The density of loose sand is about 1,500 kg/m3. The density of a glass marble is about 2,600 kg/m3. Shaking the bag will tend to move the centre of gravity of the whole bag & contents downwards when the bag settles. This is best achieved with the densest particles at the bottom. Axl ¤ [Talk] 09:51, 12 November 2010 (UTC)
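A quick way to check Axl's centre-of-gravity argument is to compare the two settled arrangements directly. This sketch assumes, purely for illustration, a one-litre column 10 cm tall in which the marbles occupy the bottom (or top) centimetre; only the densities come from the post above.

```python
# Compare the centre of mass of a settled 1-litre column (cross-section 100 cm^2,
# height 10 cm) with the denser marbles at the bottom versus at the top.
# Densities from the post above: loose sand ~1500 kg/m^3, glass ~2600 kg/m^3.
SAND, GLASS = 1500.0, 2600.0      # kg/m^3
AREA = 0.01                        # m^2 cross-section (assumed)
H_MARBLE, H_SAND = 0.01, 0.09      # m: marbles fill 1 cm of height, sand 9 cm (assumed)

def centre_of_mass(layers):
    """layers is a list of (height, density) tuples from bottom to top."""
    z, moment, mass = 0.0, 0.0, 0.0
    for h, rho in layers:
        m = rho * AREA * h
        moment += m * (z + h / 2.0)   # each layer's mass acts at its mid-height
        mass += m
        z += h
    return moment / mass

marbles_bottom = centre_of_mass([(H_MARBLE, GLASS), (H_SAND, SAND)])
marbles_top    = centre_of_mass([(H_SAND, SAND), (H_MARBLE, GLASS)])
print(f"marbles at bottom: CoM = {marbles_bottom*100:.2f} cm")
print(f"marbles on top:    CoM = {marbles_top*100:.2f} cm")
# The marbles-at-the-bottom arrangement has the lower centre of mass,
# i.e. the lower potential energy that the shaken bag settles toward.
```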
Gigantism and evolution
So I was reading about Robert Wadlow, and I was wondering if his condition could be passed on to his offspring. Is it possible that some giant animals today exist because an ancestor had a disease that caused excessive growth, and those traits were selected for? ScienceApe (talk) 15:17, 8 November 2010 (UTC)
- If by "disease", you mean "genetic abnormality", then yes. But you might want to read about formal definition of disease, as compared to genetic mutation; usually the term "disease" refers to an acquired condition. Most biologists consider the inheritance of acquired traits to be a defunct theory - that means that if the disease that caused a particular trait (like gigantism) was caused by a virus or infection, it is not something that the offspring will inherit. There are a few possible exceptions to this: epigenetics is the modern study of heritable traits by mechanisms other than chromosomal DNA; but I am not aware of any known conditions related to human growth that have such an explanation. A quick search on Google Scholar for epigenetic gigantism turned up Beckwith–Wiedemann syndrome - and that article has a section on genetics that may indicate a developmental condition; but there is stronger evidence for a "random" genetic mutation. Nimur (talk) 18:26, 8 November 2010 (UTC)
- "Most biologists consider the inheritance of acquired traits to be a defunct theory" -- Most, not all??? If anyone considered it to be a live and suitable theory, I would certainly worry about where they got their degree from... Have they ever tried to look into how sperm and eggs are produced? --Lgriot (talk) 09:23, 10 November 2010 (UTC)
Gravitational constant G change
If the gravitational constant were a different value (pick an arbitrarily different one) instead of the measured 6.674×10⁻¹¹ m³ kg⁻¹ s⁻², how would the universe be affected? NW (Talk) 17:59, 8 November 2010 (UTC)
- You have to precisely specify what this means, as explained by Michael Duff here (see Appendix C for specifically the issue of change in G). One way of making this question meaningful is to multiply G by the square of a mass as is suggested here. Count Iblis (talk) 18:52, 8 November 2010 (UTC)
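To make Count Iblis's point explicit: only dimensionless combinations of constants have an observer-independent meaning, and the usual dimensionless number built from G uses an elementary particle mass, for example the electron's:

```latex
\alpha_G \;=\; \frac{G\,m_e^2}{\hbar c} \;\approx\; 1.75\times 10^{-45}
```

A "change in G" is physically meaningful only to the extent that it changes a dimensionless quantity like this one; rescaling G alone can be absorbed into a change of units.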
Is it really possible?
Did (s)he really pull the eyes out like that? Or is it photoshopped? Thanks. —Preceding unsigned comment added by 85.222.86.190 (talk) 18:12, 8 November 2010 (UTC)
- Ask yourself what that 'action' would do to the optic nerve and the muscles around the eyeball. You may like to check the anatomy of the human eye. Then think about the pain that would be generated by the 'action' in the photograph. I think you know the answer. Richard Avery (talk) 19:20, 8 November 2010 (UTC)
- Kim Goodman can extend her eyeballs by 12mm, which is the world record.[6] That's only about a tenth of the distance implied by the photoshopped image. It's not real. Red Act (talk) 19:52, 8 November 2010 (UTC)
- Marty Feldman's face was notable for his bulging eyes, a condition caused by Graves' disease. There are lots of images of him here. Cuddlyable3 (talk) 20:41, 8 November 2010 (UTC)
Sour things
Why is it that when you eat something sour, your eyes involuntarily squint? Lexicografía (talk) 18:54, 8 November 2010 (UTC)
- Something to do with it being astringent perhaps. 92.24.186.80 (talk) 20:44, 8 November 2010 (UTC)
- Because pretty much all of the holes in your head are connected. Your eyes are connected to your nose via the Nasolacrimal duct. Your nose is connected to your mouth via the pharynx. So, when you eat something which would burn your eyes if you put it directly into them, it still burns a little because there are ways it can get there. --Jayron32 05:18, 9 November 2010 (UTC)
Lighting circuits
A couple of times, I've turned off the relevant lighting circuit before starting work changing the light fitting, only to discover that the house's main trip goes at some point during such work. I was under the impression that turning off the lighting circuit would isolate it from doing exactly that. Is something going wrong here? 92.18.72.181 (talk) 19:46, 8 November 2010 (UTC)
- Definitely time to call in a licensed electrician to figure out what's going on. If breakers/fuses are set up properly (assuming normal codes), disconnecting there will remove power from the downstream circuits and (as you say) isolate them--there would be no power, and nothing you do would affect breakers upstream of the one you pulled. I've seen all sorts of scary miswirings that can give your results: breaker on the neutral with the hot unswitched, more than one circuit wired into the same switch/junction box (i.e., you only pulled one of the feeds to it), etc. Same (or even more) goes for just turning off a wall switch...there could still be a hot wire into the fixture (before heading out to the switch) or the switch could be on the neutral wire, and jiggling the hot might short it against the junction box or some other connection. Once you're in the nonstandard situation you have symptoms of, I don't think wikipedia can recommend a solution due to potential risks. DMacks (talk) 20:01, 8 November 2010 (UTC)
- The OP seems to be in the UK where the mains voltage is 240V AC and not to be messed with. Do not rely on turning off one lighting circuit before working on a light fitting. Most domestic light switches break only one wire and leave the other wire live. Turn off the house's main switch, and if you are sensible like me you will additionally remove the main fuses and check every bare wire with a neon Test light that you know works. Cuddlyable3 (talk) 20:34, 8 November 2010 (UTC)
- Didn't even notice the likely UK of the poster. In that case you also get the "fun" of a possible ring circuit, in which you maybe even can't "just turn off" one circuit (again depends on local switching topology). DMacks (talk) 21:07, 8 November 2010 (UTC)
- A correctly-wired ring circuit has both ends connected to a single (usually 30 amp) fuse or breaker, and no lighting should be connected to it. I agree that there appears to be some illegal wiring in the house, and strongly recommend that the OP take the advice given above. Dbfirs 21:43, 8 November 2010 (UTC)
- I've done this one (I'm in UK). You cut off the lighting circuit (i.e. "live" wire) on the MCB, then you work on the circuit only to find that suddenly all the power goes off. It's because the neutral floats (I've seen 0.8V), and when you touch the neutral to earth, the RCD trips (because the power going down the neutral wire is not the same as the power going down the live wire). It's a pain, all you can do is disconnect the neutral at the box as well. Ronhjones (Talk) 22:01, 8 November 2010 (UTC)
- First, if the OP is uncertain she/he should ask an electrician, but I think this could be normal, as you indicate. I am from Sweden, so this may not apply to the OP. If it is the Residual-current device that the OP calls "the house's main trip", it could be due to a low voltage difference between the neutral wire and the protective earth. I do not think it is correct to say that the "neutral floats", since it is still connected to the system. What happens is that there is current flowing through either the protective earth or the neutral on the path to the connection between the protective earth and the neutral (PEN); see Earthing system. This introduces a small voltage difference between PE and N due to voltage drop, and if you connect them, e.g. by cutting a cable, it will result in enough current to trip the Residual-current device. The voltage between PE and N can be due to neutral return currents through the ground or to voltage drops along the neutral due to currents from other parts of the installation. Gr8xoz (talk) 22:47, 8 November 2010 (UTC)
- Can this still happen in an installation with "Protective Multiple Earthing" where the neutral is bonded to earth within the house? Dbfirs 10:12, 12 November 2010 (UTC)
- ... (later) ... I see from our article on Earthing systems that the TN-C-S earthing system is used throughout North America, Australia, the UK and the rest of Europe etc. I recall an old (pre-PME) installation which had a constant 6 volts AC between neutral and earth, in fact I was able to light a torch bulb across these terminals, but I thought that such installations were long gone! Dbfirs 12:47, 12 November 2010 (UTC)
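For readers following the RCD explanations above: the device does nothing more than compare the live and neutral currents and trip on the imbalance, so even a small neutral-to-earth contact on an otherwise "dead" circuit can fire it. Schematically (the 30 mA figure is the usual rating for domestic personal-protection RCDs, not something specific to the OP's installation):

```latex
I_\Delta \;=\; \lvert I_{\text{live}} - I_{\text{neutral}} \rvert \;>\; I_{\Delta n}
\quad (\text{typically } I_{\Delta n} \approx 30\ \mathrm{mA})
```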
Dna Code
Hello, DNA is made up of the four nucleotides (G, A, T, C); that's twice as good as binary. What sort of proteins could be made using only two or more of the existing ones? Is this even possible? Or am I totally understanding things wrong? Is there any fossil record of a simpler form of DNA to show how DNA evolved to a base of four? Slippycurb (talk) 20:44, 8 November 2010 (UTC) —Preceding unsigned comment added by Slippycurb (talk • contribs) 20:31, 8 November 2010 (UTC)
- If there are only two nucleotides, then our existing codon triplet would only allow for 8 different encoded amino acids, or else a codon would have to be four or more bases (rather than three) to give more encoding possibilities (for example, a codon quintet would be needed to encode our existing 20-ish amino acid choices). Fewer choices would limit the structural variations possible (fewer combinations of polarity, pKa, hydrophobicity, steric bulk, etc.) and also possibly the redundancy/tolerance for mis-pairing during reading or replication. Our Genetic code article is probably a good place to read about these ideas, and also some possible evolutionary history (especially the "Theories on the origin of the genetic code" section, and maybe also the Nucleic acid analogues article). DMacks (talk) 21:02, 8 November 2010 (UTC)
- (ec) Proteins are made of chains of amino acids. Base-pairs (a nucleotide and its complementary partner) are grouped in threes, which are called codons. Each codon encodes an amino acid. The translation process starts at a start codon, then attaches to the growing protein the amino acid corresponding to the current codon, until the stop codon is reached. Our Genetic_code#RNA_codon_table has a list of the codon-to-amino-acid mapping, and our introduction to genetics has a layman's summary of the process. Coming back to your question, it is believed that originally only the first base-pair of each codon was used; the rest were padding. Then the second was used, and finally the third. The first base-pair makes the biggest change in the coded amino acid, normally from hydrophobic to hydrophilic (water-repelling to water-attracting). The second and third make finer changes, and normally encode an amino acid that will cause only a slightly worse version of the protein. CS Miller (talk) 21:08, 8 November 2010 (UTC)
- If you don't use G and U, you can't start a protein, and GUG start codons are rare anyway; properly you need A, U, and G. And if you don't use U and A you can't stop a protein normally, because all the stop codons contain them. (You could just stop it at the end of the RNA, but then you get non-stop decay...)
- On the other hand, sequences using two nucleotides more than others are important. There are wide variations in GC-content between different organisms, sometimes over surprisingly short intervals of evolution. As purines, A is like G; as pyrimidines, T and U are like C; and DNA methylation and deamination make transitions between these more common than any other mutation. As a result, you can run across proteins whose coding sequences are composed 70% or more of just two nucleotides. In extreme cases I think you can see evolutionary divergence of the protein as it has tried to reconcile itself to a constant stream of mutations pushing it toward a certain composition, but that's not established that I know of.
- I think that most people would agree that the RNA world hypothesis involves the establishment of four nucleotide bases well in advance of the invention of proteins (to permit the level of catalytic activity needed for such aspirations), though there's no hard evidence. Wnt (talk) 21:57, 8 November 2010 (UTC)
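Putting numbers on DMacks's codon arithmetic above, here is a minimal sketch; the only assumption is that a usable code needs at least 21 code words (20 standard amino acids plus a stop signal).

```python
# How many code words ("codons") can an alphabet of B bases spell with
# codons of length L?  The standard code uses B=4, L=3 -> 64 codons for
# ~20 amino acids plus stop, leaving plenty of room for redundancy.
NEEDED = 21  # 20 standard amino acids + at least one stop signal

for bases in (2, 4):
    for length in range(1, 7):
        words = bases ** length
        enough = "yes" if words >= NEEDED else "no "
        print(f"{bases} bases, codon length {length}: {words:4d} words  (covers 21? {enough})")
# With 2 bases, triplets give only 2**3 = 8 words; 5-base codons
# (2**5 = 32) are the first length that covers the standard repertoire,
# exactly as noted above.
```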
baseline characteristics and confounder adjustment in a paper
Hi all, I am going to do a presentation reviewing a clinical trial in a few days' time. In this trial, the two groups of subjects (control and exposure) differ from each other at baseline in terms of age and smoking status etc. But at the end of the paper, the authors say that they found an association between their outcome measure and exposure independent of confounders, so my question is: do I need to talk about the different baseline characteristics if the authors adjusted for such confounders? Hope I have explained my question clearly. Thanks, RichYPE (talk) 22:16, 8 November 2010 (UTC)
- I've never had anything to do with clinical trials, but it might help if you at least tell us who your audience is and in what capacity you are presenting. Are you lecturing, or being graded on this, or is this in a clinical capacity? You obviously know that ideally trial groups should be randomized. If they can't be, then you can do your best to filter out the confounding factors, but no matter how carefully you do that, it will make your trial weaker. There's really no such thing as a PERFECT trial; there are strong trials and weak trials, and the more aspects of your trial that you can't get "just" right, like randomization, the more weakly the trial's results should be treated. You might also want to point out that typically, just one single trial, no matter how strong, isn't usually enough to draw strong conclusions. Of course, it doesn't always happen that way, particularly and unfortunately in the pharma industry. Vespine (talk) 21:55, 9 November 2010 (UTC)
Wrong answer?
Read question 10 a) ii): http://www.tqa.tas.gov.au/4DCGI/_WWW_doc/006624/RND01/PH866_paper03.pdf The solution is given here (you have to scroll down below the examiner's comments): http://www.tqa.tas.gov.au/4DCGI/_WWW_doc/006665/RND01/PH866_report_03.pdf Is the solution correct? It seems wrong to me (the right-hand rule tells me otherwise). --115.178.29.142 (talk) 22:50, 8 November 2010 (UTC)
- Looks okay to me. With your thumb in the direction of the current, your fingers point up on the left (inside the coil) and down on the right (outside) for magnetic field A. This diagram agrees. B obviously goes the opposite way. Clarityfiend (talk) 03:53, 9 November 2010 (UTC)
- But the solution has it going down on the left (inside the coil) and up on the right (outside) for magnetic field A. 220.253.253.75 (talk) 04:58, 9 November 2010 (UTC)
- Oh, heck. I need to get my eyes checked. It's wrong. Who comes up with these "solutions"? Sarah Palin? Anyway, you're supposed to look at it upside down because you're in Tasmania. Yeah, that's it. Clarityfiend (talk) 05:46, 9 November 2010 (UTC)
Ultimate fate of photon
What ultimately happens to photons after an arbitrarily long journey of many billions of light years? Can they travel unchanged indefinitely, or do they decay, scatter or something? Thanx. —Preceding unsigned comment added by 85.222.86.190 (talk) 23:33, 8 November 2010 (UTC)
- They get redshifted due to the metric expansion of space. Red Act (talk) 23:47, 8 November 2010 (UTC)
- Beyond that, no time passes from their point of reference, so nothing can happen to them. — DanielLC 01:16, 9 November 2010 (UTC)
- Yeah, whether even the redshifting is a "real" change in the photon itself is just a matter of perspective. If my understanding is correct, during a cosmological redshift, the photon's wavelength as measured by cosmological proper distance increases, but the wavelength as measured by comoving distance stays the same. Red Act (talk) 02:47, 9 November 2010 (UTC)
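The redshift mentioned above is usually written in terms of the scale factor a(t) of the expanding universe:

```latex
1 + z \;=\; \frac{\lambda_{\text{obs}}}{\lambda_{\text{emit}}} \;=\; \frac{a(t_{\text{obs}})}{a(t_{\text{emit}})}
```

so the photon's wavelength is stretched by the same factor as the space it has crossed, which is consistent with the comoving-wavelength remark above.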
The lifespan of the photon is zero. Neutrino oscillations proved that neutrinos do have a "lifespan" and so the photon sits alone as the only known particle with zero lifespan. Hcobb (talk) 03:06, 9 November 2010 (UTC)
- What does "zero lifespan" mean exactly...? WikiDao ☯ (talk) 03:10, 9 November 2010 (UTC)
- In the photon's own frame of reference it is created and destroyed in the same instant. Hcobb (talk) 03:13, 9 November 2010 (UTC)
- So, in the frame of reference of photons created at about 10 seconds after the Big Bang, the Age of the Universe is... 10 seconds? WikiDao ☯ (talk) 04:15, 9 November 2010 (UTC)
- Sort of. Remember that one of the postulates of special relativity is that light cannot be used as a frame of reference; if it is, then all sorts of unresolvable paradoxes are introduced. One of them is that the photon does not exist in its own frame of reference, that is, it has a zero lifespan: it exists in OUR frame of reference, but in its own it wouldn't exist for any measurable time. Another perspective on the same paradox is that, from light's frame of reference, the entire universe happens simultaneously, that is, all events occur in the same instant. Don't try to wrap your mind around these things; unlike some unintuitive paradoxes such as the twin paradox, which actually occur, these are real physical impossibilities, so we generally don't even ponder what life is like in light's frame of reference. For all intents and purposes, it doesn't exist. --Jayron32 05:12, 9 November 2010 (UTC)
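The "zero lifespan" language above comes from the special-relativistic proper-time formula, which vanishes in the limit of a clock moving at the speed of light (and is one way of seeing why a photon cannot define a valid rest frame):

```latex
\Delta\tau \;=\; \Delta t\,\sqrt{1 - \frac{v^2}{c^2}} \;\to\; 0 \quad \text{as } v \to c
```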
Now imagine a substance so strange that it slows a beam a light down by a large enough fraction that you'll notice the difference. What does that say about the lifespan of a photon? I'd suggest sitting down with a glass of water while you think about it. Hcobb (talk) 06:48, 9 November 2010 (UTC)
- The fact that the local speed of light in a medium is slower than it is in a vacuum doesn't change the nature of the speed of light. The speed of light in water is still invariant, and still presents the same limits in water as does the speed of light in a vacuum. Slow light covers some of this. That the photons slow down in water doesn't change the fundamental nature of the photons. --Jayron32 06:59, 9 November 2010 (UTC)
- That's not actually true: only the vacuum speed of light is the speed of light with all the special properties (invariance, causality restrictions, etc.). See Cherenkov radiation, for instance. What actually happens is that photons traveling through water are continually destroyed and recreated (coherently) with tiny gaps during which "the photon" makes no progress because it doesn't exist. --Tardis (talk) 14:09, 9 November 2010 (UTC)
- Well, sort of. If you want to claim that the photons are absorbed and re-emitted, you have to do a path integral that sums (or, maybe better, averages) over all possible Feynman diagrams, with no answer to the question "but which one of these diagrams really happened?". Not very human-friendly; we are intuitive realists. Unless you have a very good reason why not, when dealing with light in matter, you should almost always use the wave formulation — electric and magnetic fields rather than photons. --Trovatore (talk) 03:57, 10 November 2010 (UTC)
- Please pardon my lie-to-children: it's just a way of describing how one can "slow light down" while still allowing photons to always travel at c. This is useful because one would suppose them to always be "in vacuum" between interactions, even though of course they're delocalized and (via the path integral) "interacting" with everything around them continuously. --Tardis (talk) 15:57, 10 November 2010 (UTC)
- I utterly despise lies to children; I think they're reprehensible. They're OK if prepended with something like "roughly speaking". --Trovatore (talk) 18:48, 10 November 2010 (UTC)
- I was wondering if this is the case with neutrino oscillations: do they only happen when neutrinos pass through matter, or have they also been shown in vacuum? Graeme Bartlett (talk) 21:48, 9 November 2010 (UTC)
- Is the solar wind between the Earth and the Sun enough of a vacuum for you? Hcobb (talk) 22:59, 9 November 2010 (UTC)
- But this vacuum is almost nothing compared to the thousands of kilometers of sun plasma they pass through after leaving the sun's core. Graeme Bartlett (talk) 08:07, 10 November 2010 (UTC)