Wikipedia:Reference desk/Archives/Science/2011 December 12
Science desk
< December 11 | << Nov | December | Jan >> | December 13 >
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
December 12
Started with the Big Bang
I am not an expert of any kind. This question is supposed to be: what findings could discredit the whole Big Bang theory? (you can answer if you like) And then I wondered, is it more productive to try to disprove scientific theories and findings rather than to find more proof for them? (please correct any grammatical errors or spelling) MahAdik usap 00:01, 12 December 2011 (UTC)
- The fundamental evidence of the big bang is that everything we can see in space is moving away from everything else. In other words, space is expanding. Also, looking into the past (if that doesn't make sense, ask about viewing things many light years away), space has always been expanding. So, if it has always been expanding, it must have been much smaller in the past. At some point, it was really small and it got bigger - that is the big bang. To disprove it, you have to show that at some point in the past, space wasn't expanding. Then, it could have been larger at some point in the past, not smaller. You could get into the specifics of the big bang and try to disprove some small detail. The details are always being examined and theories about the details change over time. -- kainaw™ 00:06, 12 December 2011 (UTC)
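As a rough, back-of-the-envelope illustration of the "run the expansion backwards" reasoning above, the reciprocal of the Hubble constant gives an order-of-magnitude timescale for how long ago everything was close together. The sketch below assumes a Hubble constant of about 70 km/s/Mpc purely for illustration; a real age estimate has to account for how the expansion rate has changed over time.

```python
# Minimal sketch: if space has always expanded at roughly today's rate,
# the time since everything was "in one place" is about 1/H0.
# The H0 value below (~70 km/s/Mpc) is an illustrative assumption.

KM_PER_MPC = 3.086e19        # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7   # seconds in one year

H0 = 70.0                           # Hubble constant in km/s per Mpc (assumed)
H0_per_second = H0 / KM_PER_MPC     # the same constant in units of 1/s
hubble_time_years = 1.0 / H0_per_second / SECONDS_PER_YEAR

print(f"Hubble time ~ {hubble_time_years:.2e} years")  # roughly 1.4e10 years
```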
- Scientists are always trying to disprove theories. You can't prove them - there is no amount of evidence that could make you 100% certain it's right. Scientists do experiments to see if the theory correctly predicts the results. If it gets it wrong, you've disproven the theory. If it gets it right, then nothing really changes. If, after lots of attempts to disprove a theory, no-one has succeeded, then you conclude that it is probably right. --Tango (talk) 00:28, 12 December 2011 (UTC)
- Does that mean you can disprove gravity? MahAdik usap 00:48, 12 December 2011 (UTC)
- Been there, done that, with general relativity.
- As for the OP's question, if a new theory was invented that explains the cosmic microwave background, the redshift of distant galaxies, and the current composition of the universe to a greater precision than the Big Bang theory, the Big Bang theory would be disproved. What might that theory look like? Well, if scientists knew, they wouldn't be supporting the Big Bang theory. --140.180.15.97 (talk) 00:58, 12 December 2011 (UTC)
- Science is never really static; it's a balance between induction and deduction - coming up with new ideas and finding ways in which they might be disproved or replaced with better ideas. Some fields of science are predominantly about disproving things. By pinpointing what is impossible you can reliably construct a suitable enough explanation for what is possible. The concept of the scientific method and subsequent peer review is one example of that. If a conclusion cannot be observed or replicated in other experiments, its reliability goes down.
- Newton's theory of gravity itself is one example: it is now known to be incorrect, but it was reliable enough to be used until the advent of the theory of general relativity (which is also imperfect and is being challenged by quantum mechanics). Even the modern theory of evolution is very different from Darwin's original theory. That's why the highest level of reliability in science is and always will be theoretical, because it never presupposes something to be infallibly true. Do not confuse the usage of "theory" in scientific terminology with its colloquial usage, though: the former has a great deal of reliability, the latter almost none.-- Obsidi♠n Soul 01:22, 12 December 2011 (UTC)
Regression to the mean
It seems to me that Darwin's theory of evolution, as he formulated it, doesn't work because any variation would quickly be destroyed due to regression to the mean. How did Darwin explain away this problem, considering that he had no idea about basic Mendelian inheritance, let alone the Hardy-Weinberg equilibrium or modern genetics? --140.180.15.97 (talk) 02:14, 12 December 2011 (UTC)
- It's the "mean" itself that drifts. I don't know if he formally addressed the issue, but he first carefully outlined how his theory already works in terms of artificial selection (pigeon and dog breeding, for example). He then proposed that natural selection could replace the hand of man to impart a similar, albeit slower or more subtle, influence. There were several such "problems" even after Darwin's time, but overall his theory fit the available evidence well enough that the "problems" didn't present absolutely unambiguous refutations, just "loose ends" that needed working out. We're still discovering more loose ends and working them out to this day. In fact it's more correct to call our current understanding the Modern evolutionary synthesis than Darwinian evolution, just because we have built so much since Darwin's time, but there is no question that it is Darwin's work that lies at the foundation. Vespine (talk) 03:08, 12 December 2011 (UTC)
- Regression to the mean doesn't really apply here. For it to apply, a single animal must be born with exceptional mutations - very exceptional. Then that animal's offspring will regress to the mean because it simply wouldn't be possible for such exceptional mutations to be maintained. But that was never Darwin's theory. He didn't claim that a fish was swimming around and suddenly gave birth to a monkey. He suggested that very small, almost impossible to detect, genetic variations could occur with each offspring. If one of those variations were to give the offspring a better shot at reproducing, then it would be passed on to that offspring's own offspring, who may pass it on in turn, and so on. Then, as Vespine noted, the mean will shift to include the mutation. -- kainaw™ 05:01, 12 December 2011 (UTC)
- But the mean shifts by a smaller amount for minor mutations than for exceptional mutations. If a fish had a mutation that made it 10% stronger, for example, it doesn't matter how fit the fish is reproductively. Even if it had a 100% chance of reproducing, its mate will almost certainly not have the mutation, so their children will only be 5% stronger, and their grandchildren 2.5% stronger, etc. Within a few generations, the mutation will be forgotten, unless the natural selection is so strong that it can combat the exponential decay of the mutation. Of course, we know now that DNA can remember mutations from the beginning of life to the end of time, but I thought scientists in Darwin's time believed the child's traits were simply the average of its parents' traits. --140.180.15.97 (talk) 06:33, 12 December 2011 (UTC)
- Is this a question? In any case, there is no 'mean' here, except in as much as there is a 'genetic mean' - an (imaginary) 'average' phenotype - but that is exactly where evolution is operating, if you want to study it from a population perspective. It is true enough that Darwin didn't understand the finer points of the mechanisms of inheritance - and thus found some of his conclusions difficult to reconcile with what was understood at the time regarding the subject - but that turned out to be a problem with the contemporary understanding of such mechanisms, rather than with Darwin's theory. 'Darwinism' survived (and evolved) not because it revealed all the answers, but because it presented a 'fitter' explanation of what was evident from studies of nature than was previously available. That is how science works, and why dogmatism doesn't... AndyTheGrump (talk) 06:46, 12 December 2011 (UTC)
- My question is exactly whether or not Darwin found his conclusions "difficult to reconcile with what was understood at the time regarding the subject". Did he regard regression to the mean as a problem that's difficult to reconcile with his theory, or did he explain it in some way? Obviously the modern evolutionary synthesis deals with regression very well, but that's not what I'm interested in. --07:13, 12 December 2011 (UTC) — Preceding unsigned comment added by 140.180.15.97 (talk)
- Sorry, but your issue is based on an incorrect premise. To wit: "If a fish had a mutation that made it 10% stronger ... even if it had a 100% chance of reproducing, its mate will almost certainly not have the mutation, so their children will only be 5% stronger, and their grandchildren 2.5% stronger, etc." This is incorrect as genetics doesn't work like this - to keep it simple, for a basic single-gene trait, if the parent with the 10% stronger gene (however you want to interpret that in real life) bred, any offspring that got the gene would also be 10% stronger, and therefore more likely to survive as well, and also pass on the 10% stronger mutation. Meanwhile, those that didn't get the gene would be weaker and more likely not to survive and reproduce. There are no 5% stronger fish. Genes don't get diluted down in the way you propose. (Of course I'm simplifying things greatly, ignoring dominance and recessiveness, etc, but for brevity's sake this is entirely valid.) Thus the trait doesn't regress to the mean, but rather the 10% stronger fish survive and reproduce until that gene and the phenotype it codes for become more prevalent in the population. This regression to the mean/dilution of traits argument has long been a creationist tool for those that want to go beyond their more basic nonsense and confuse those with slightly higher levels of reasoning. In terms of Darwin, IIRC he indeed understood this argument and knew it was wrong as he'd done extensive experimentation to prove that traits were not diluted/averaged, but of course he didn't have the knowledge of genetics to explain why it was wrong. I'll have a look when I get home to see if I can find exactly where he addressed it. --jjron (talk) 07:47, 12 December 2011 (UTC)
- OK, couldn't find exactly what I hoped, but here's a little further information. FWIW at the time this concept was referred to as blending inheritance, which I probably should have remembered. In Bully for Brontosaurus (Chap 23, "Fleeming Jenkin Revisited"), the late Darwin expert Stephen Jay Gould writes "Darwin had pondered long and hard about problems provoked by blending ... As for recurrent, small scale variation, blending posed no insurmountable problem, and Darwin had resolved the issue in his own mind long before reading Jenkin. A blending variation can still establish itself in a population under two conditions: first if the favourable variation continues to establish itself anew so that any dilution by blending can be balanced by reappearances, thus keeping the trait visible to natural selection; second, if individuals bearing the favoured trait can recognize each other and mate preferentially - a process known as assortative mating in evolutionary jargon. Assortative mating can arise for several reasons, including aesthetic preference for mates of one's own appearance and simple isolation of the favoured variants from normal individuals. Darwin recognised both recurrent appearance and isolation as the primary reasons for natural selection's continued power in the face of blending." It's probably worth reading the whole essay though if you can get your hands on it.
- Incidentally there's also a bit more discussion of this in the blending inheritance article, including statements such as "Darwin himself also had strong doubts of the blending inheritance hypothesis, despite incorporating a limited form of it into his own explanation of inheritance published in 1868" and "Moreover, prior to Jenkin, Darwin expressed his own distrust of blending inheritance to both T.H. Huxley and Alfred Wallace."
- In a quick look in Darwin's epochal On the Origin of Species, in the "Variation Under Domestication" chapter, he writes: "If it could be shown that our domestic varieties manifested a strong tendency to reversion ... so that free intercrossing might check, by blending together, any slight deviations of structure, in such case, I grant that we could deduce nothing from domestic varieties in regard to species. But there is not a shadow of evidence in favour of this view ...". He then goes on to a fairly lengthy discussion of almost straight Mendelian inheritance based on his extensive personal research into pigeon breeding. A few pages later he writes: "There can be no doubt that a race may be modified by occasional crosses, ... but that a race could be obtained nearly intermediate between two extremely different races or species, I can hardly believe.", which seems to cast considerable doubt on the concept of blending. Anyway, that's a bit more to your question directly based on Darwin. Yes, he had thought about it, and no, it didn't lead to his theory not working. --jjron (talk) 11:14, 12 December 2011 (UTC)
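To make the blending-versus-particulate contrast discussed above concrete, here is a minimal, purely illustrative simulation. The allele label "A", the 10% advantage, the 1% starting frequency, and the 200-generation horizon are all invented assumptions, not data from any real organism. Under the blending picture from the question the advantage halves every generation, while under single-locus particulate inheritance with selection the beneficial allele is never diluted and its frequency climbs instead.

```python
# Purely illustrative contrast between "blending" and particulate inheritance.
# All numbers (10% advantage, starting frequency, generation count) are
# assumptions chosen for illustration only.

# (a) Blending picture from the question: the advantage is averaged with a
#     non-carrier mate each generation, so it halves every generation.
advantage = 0.10
blended = [advantage / 2**g for g in range(8)]
print("blending:", [round(x, 4) for x in blended])  # 0.1, 0.05, 0.025, ...

# (b) Particulate inheritance: a single dominant beneficial allele "A".
#     Carriers (AA and Aa) are assumed to have fitness 1 + s, others 1.
#     The standard single-locus selection recursion shows the allele
#     frequency rising toward 1 rather than decaying toward 0.
s = 0.10      # fitness advantage of carriers (assumed)
p = 0.01      # starting frequency of allele A (assumed)
trajectory = [p]
for _ in range(200):
    q = 1.0 - p
    mean_fitness = (p**2 + 2*p*q) * (1 + s) + q**2 * 1.0
    p = p * (1 + s) / mean_fitness   # frequency of A in the next generation
    trajectory.append(p)

print("allele frequency every 50 generations:",
      [round(trajectory[g], 3) for g in range(0, 201, 50)])
```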
- From a strictly historical point of view, it's worth noting that on the fine points of how heredity worked, Darwin was very confused, very wrong. He didn't realize quite how wrong he was when he published Origin, but by the time Descent of Man came out, he was aware that this was something to address if his theory was to have any standing. Darwin's pangenesis theory was his attempt at this. It was very confused and this was really not Darwin's strong spot (it was partially a "particulate" theory of heredity, like later Mendelism, but also had aspects of blending, as well as odd quasi-Lamarckian feedback loops — just a big mess of ideas). Darwin was not worried so much about regression as he was about limited variation, which pangenesis tried to get around.
- The problem of regression to the mean was heavily studied by later evolutionists, led primarily by Francis Galton, Darwin's younger cousin. Galton was the one who coined the term "regression to the mean" and formulated it statistically, though originally he called it "regression to the mediocre," which points to Galton's interests. Galton had his own not-very-good theory of heredity that was derived from Darwin's, and those of the Galtonian school — Karl Pearson and the other "biometricians" — saw themselves as distinctly non-Darwinian in the sense that they didn't believe that gradualism actually worked. This is part of what set the stage for decades of clashing between the biometricians and the Mendelians later, neither of which saw themselves as strictly Darwinian. In fact, it was entirely common for both camps to give Darwin credit for showing that evolution had occurred, but to hold that natural selection was foolishly wrong. Julian Huxley called this period (from Darwin's death through the 1930s) the "eclipse of Darwinism". Note that this period was not anti-evolution at all — just anti-Darwinian, in the sense that even the gradualists and the "saltationists" (e.g. the Mendelians and the mutation theorists) thought that natural selection by itself wasn't enough. There's a very amusing note in Karl Pearson's biography of Galton (written in the 1920s, I believe) where he says that it's unfortunate that today everybody knows how wrong Darwin was. It comes as quite a shock to read such a statement from such an eminent scientist if you're not aware of what the debates were (and what Pearson's role was in them).
- The resolution for this is well known today — it's the modern evolutionary synthesis, which combines all of the good insights from the biometricians and the Mendelians, and shows how, together, they provide a pretty great way of understanding why natural selection was right all along.
- It's an interesting history, I think, and it also throws a little light on how misleading the "Darwin came up with this and everybody saw he was right, except the Church" narrative really is. Recommended reading (a very short, very well-written, very intelligent little volume): Peter J. Bowler, The Mendelian Revolution: The Emergence of Hereditarian Concepts in Modern Science and Society (Baltimore: Johns Hopkins University Press, 1989), esp. chapters 2-4 if you're curious about this period. Big figures in this debate included Francis Galton, Karl Pearson, August Weismann, Walter Frank Raphael Weldon, and esp. William Bateson. --Mr.98 (talk) 16:47, 12 December 2011 (UTC)
- True to an extent, but a lot of Darwin's 'confusion' and 'errors', such as with pangenesis, came about largely in his attempt to respond to critics, and to make allowances and corrections for 'problems' that had been identified with his original work. Of course, as we know today, many of the identified problems were not actually problems at all, but with the limited knowledge then available they seemed far more significant at the time, especially when a number of very prominent biologists around the world remained ardent anti-evolutionists and there was quite a lot of tension about. Part of the reason Darwin took so long to publish (there's pretty strong evidence he sat on his theory for something like 22 years before being brought to publish) was that he was essentially trying to have all bases covered, and have pre-prepared responses to any likely criticisms. This is also why, if anyone's going to read The Origin of Species, they should make sure they read the first edition, as later editions had many (in hindsight ill-considered) changes he made in an attempt to counter the critics. --jjron (talk) 10:18, 13 December 2011 (UTC)
- I tried to limit my discussion of Darwin's confusion and errors to his discussion of heredity. He really had no clue how heredity worked and struggled to come up with a model that would work with his theory of evolution. I don't blame Darwin for this; his critics didn't know how it worked either, and it took a long time to fully flesh out. Heredity is not an insignificant part of evolution, obviously. You can black box that sort of thing if you want to, but I don't blame the biologists for saying, "hold on, how is this supposed to work on a cellular level?" immediately afterwards. Again, it's to Darwin's credit that he convinced them that evolution had occurred in the first place. It is to his lasting legacy, and the reason he is so celebrated today, that his mechanism turned out to be quite on the nose, despite his black boxing, and then fudging, of the hereditary mechanism. --Mr.98 (talk) 20:54, 13 December 2011 (UTC)
- Darwin did explicitly address this issue. His argument was that it was the mean itself (say the mean height of giraffes for example) that drifted over many generations. This position is independent of the mechanism of inheritance (of which Darwin had no understanding). --catslash (talk) 13:44, 17 December 2011 (UTC)
Jehovah's Witnesses' denial of blood and the Hippocratic Oath
If a doctor is bound by the Hippocratic Oath to preserve life and must not play god, but has to give a blood transfusion to an unconscious Jehovah's Witness, what should he do? Which rule has preference? — Preceding unsigned comment added by 88.9.111.78 (talk) 03:11, 12 December 2011 (UTC)
- This is ultimately a legal question; such decisions are generally not left for individuals to make on the spot. Jehovah's_Witnesses_and_blood_transfusions and Criticism_of_Jehovah's_Witnesses#Legal_considerations go into some detail. Vespine (talk) 03:38, 12 December 2011 (UTC)
- I remember reading a newspaper article about a pregnant Jehovah's Witness who rejected a transfusion urged by a doctor and died. Here's a more recent incident, where a minor/teenager in a car accident did the same thing. On the other hand, Canada's Supreme Court ruled that a minor's rights weren't violated when she was given a transfusion against her will. Clarityfiend (talk) 05:18, 12 December 2011 (UTC)
- This should actually go to the humanities desk. Clarityfiend (talk) 06:17, 12 December 2011 (UTC)
- It would probably depend on national/state law and on the doctors' knowledge of the patient's wishes. Many medical facilities require the consent of the patient (or possibly next-of-kin) for medical procedures, and would if possible seek consent in any circumstances. In the UK if it's not possible to ask for consent in an emergency the patient can be treated if the doctor judges it's necessary, but if the patient has explicitly refused the treatment it should not be given (subject to numerous conditions and requirements). In the UK you can refuse treatment even if that means you will die, but other nations may have different laws. The UK practice is described in detail here: [1][2] --Colapeninsula (talk) 10:57, 12 December 2011 (UTC)
- I've been told that British doctors no longer take the Hippocratic oath, although clearly they are under similar professional obligations. The latter could, of course, have a specific policy for this issue in a way the oath would not. Grandiose (me, talk, contribs) 11:05, 12 December 2011 (UTC)
- The Hippocratic oath is mostly about how an apprentice should respect his master. Nobody has actually taken it in centuries. --Tango (talk) 12:17, 12 December 2011 (UTC)
- That depends heavily on how you interpret "taken" -- my sister's graduating class, just a few years ago, recited the oath, complete with Apollo, Asclepius, and the rest. A caveat was provided that the oath was more symbolic than literal -- for example, "belief in Apollo the healer" was specifically noted as neither implied nor required -- but the impression was given that this remains not uncommon at US medical schools. I believe the abortion clause also got talked around some in the explanatory text, but I can't recall the specifics there (as that's one of the more highly-charged and relevant bits). — Lomn 15:00, 12 December 2011 (UTC)
Soybean and male infertility
Wikipedia says soybean has no effect on male fertility, which is sourced to a 2010 journal article. But a 2008 BBC news piece, citing another journal article, claims soybean is responsible for male infertility. Which is true, and what is the latest consensus among the scientific community? --Foyrutu (talk) 09:05, 12 December 2011 (UTC)
- The BBC article you link to discusses a single study. Our article specifically mentions:
- Because of the phytoestrogen content, some studies have suggested that soybean ingestion may influence testosterone levels in men. However, a 2010 meta-analysis of 15 placebo controlled studies showed that neither soy foods nor isoflavone supplements alter measures of bioavailable testosterone or estrogen concentrations in men
- In other words, it doesn't dispute that a few studies may suggest there is a possible connection. But studies do that all the time with a large variety of things; it doesn't mean there is anything to worry about, although it may suggest to scientists there's something to look into. In fact, from the BBC article itself, it seems all the study found was a correlation between soya consumption and low sperm count. The BBC article doesn't even mention whether those conducting the study attempted to correct for possible confounding factors (since it only involved 99 participants this would seem difficult to do), and I'm too lazy to look it up. But heck, the BBC article itself mentions there are reasons to take the study with care.
- What our article makes clear is that a 2010 meta-analysis of 15 studies suggests there is no effect on testosterone or estrogen concentration. And from the description in our article, it sounds like these were high-quality studies where, rather than just looking for a correlation, soy consumption was varied and some possible effects which may affect fertility were studied (I haven't checked the article so I don't know if they checked anything other than testosterone or estrogen concentration). This in itself is generally significantly more reliable than simply looking for a correlation among the general population based on their existing soybean consumption. When you have a meta-analysis of these studies, even better (and incidentally that is something akin to what we require in medical cases, see Wikipedia:Identifying reliable sources (medicine)).
- Nil Einne (talk) 12:39, 12 December 2011 (UTC)
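For readers unfamiliar with what a meta-analysis actually does with its 15 studies: in the simplest (fixed-effect) form, each study's effect estimate is weighted by the inverse of its variance, and the weighted average is reported with a correspondingly narrower confidence interval. A minimal sketch follows; the effect sizes and standard errors in it are invented for illustration and are not the results of the soy studies discussed above.

```python
# Minimal sketch of fixed-effect (inverse-variance) meta-analysis pooling.
# The effect sizes and standard errors below are invented for illustration;
# they are NOT taken from the soy/testosterone literature discussed above.
import math

# (effect size, standard error) for each hypothetical study
studies = [(0.10, 0.20), (-0.05, 0.15), (0.02, 0.10), (0.00, 0.25)]

weights = [1.0 / se**2 for _, se in studies]   # inverse-variance weights
pooled = sum(w * effect for (effect, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI half-width)")
```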
Non-placebo effect?
Hi, is there any known effect in which a patient is given a real medication, but is told (or believes) that she is given a placebo, and as a result does not react to the medication? Gil_mo (talk) 10:26, 12 December 2011 (UTC)
- It's not quite what you are asking about, but see nocebo. --Tango (talk) 12:22, 12 December 2011 (UTC)
- I read about nocebo, thanks, I am asking about something else. Gil_mo (talk) 12:32, 12 December 2011 (UTC)
- Intentional deception of clinical trial subjects is frowned upon and done only when necessary due to ethics rules, so this isn't something that would be seen in medical research on a regular basis. I don't think it has a formal name, it's just an aspect of the placebo effect. This might be interesting reading: it more or less states that placebo effects generally only affect how patients feel, which only affects things like pain and nausea, so "not react" is difficult to interpret. SDY (talk) 14:45, 12 December 2011 (UTC)
- The standard in medication studies is the double-blind study. The people who prescribe the medications do not know which patient is receiving which treatment. Some are given real medication, some are given a placebo. Then the doctor who meets the patients doesn't know if the prescription is a medication or a placebo. So it is very possible for a doctor to tell a patient "this is probably a placebo" when it isn't. The doctor doesn't really know. -- kainaw™ 15:54, 12 December 2011 (UTC)
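As a minimal sketch of how that blinding is typically arranged (the subject IDs, group sizes, and kit labels below are all made up for illustration): an independent party randomizes subjects to drug or placebo and keeps the allocation key, while doctors and patients only ever see neutral kit labels.

```python
# Purely illustrative sketch of double-blind allocation. Subject IDs, group
# sizes and kit labels are invented; real trials use validated randomization
# systems, stratification, and regulated procedures for emergency unblinding.
import random

random.seed(42)  # fixed seed so this illustration is reproducible

subjects = [f"subject-{i:03d}" for i in range(1, 21)]
treatments = ["drug"] * 10 + ["placebo"] * 10
random.shuffle(treatments)

# Held only by the trial statistician until the study is unblinded:
allocation_key = dict(zip(subjects, treatments))

# What the doctor and the patient see: identical-looking, neutrally labelled kits.
blinded_labels = {s: f"kit-{i:04d}" for i, s in enumerate(subjects, start=1)}

print(blinded_labels["subject-001"])  # "kit-0001" -- the label reveals nothing about treatment
```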
- I don't have any sources for you, though I would guess in most cases the medicine would do as expected (if you poison someone's drink, they still get sick, even though they don't expect it). To be honest, I doubt anyone has done any real studies where they informed patients they were getting a placebo and then gave them something else; there would be no real point to this and it would be of questionable ethics (depending on how it was done). Though I would imagine that if you told people that something wasn't going to be effective and it had a subjective element, then it might have a lessened effect (for example, if you tell someone who is stopping smoking that nicotine patches don't help, I bet they would report a harder time; same thing if you told someone that Advil doesn't help with headaches). But this isn't really the same thing; here the patient is having their subjective state more strongly influenced by what they are being told/expecting than by what they observe; but nobody is telling them that the nicotine patch is a bandaid and then watching to see if they have an easier time quitting. Was there some specific context you were curious about? Phoenixia1177 (talk) 05:46, 13 December 2011 (UTC)
- No specific context, sheer curiosity. Your answer makes perfect sense. Gil_mo (talk) 06:26, 13 December 2011 (UTC)
un-work hardening
A friend told me that when plumbers buy flexible copper tubing it can be bent easily but if they leave it alone for a couple of years it will become too stiff to use. This seems to be the opposite of work hardening. Is there a term for this property? RJFJR (talk) 15:04, 12 December 2011 (UTC)
- I'd just say ageing... see (maybe) precipitation hardening. --Ouro (blah blah) 15:32, 12 December 2011 (UTC) - Strike that, not relevant. --Ouro (blah blah) 15:35, 12 December 2011 (UTC)
- Isn't it just called 'natural aging' when it happens at room temperature? Sean.hoyland - talk 15:50, 12 December 2011 (UTC)
- Is this even true? The only place I have used this in my house is running gas line to appliances (never for water). And there it is intended for devices which are only ever moved at several-year intervals. If it became brittle it would create serious issues. Rmhermen (talk) 16:37, 12 December 2011 (UTC)
- Natural aging via processes like precipitation hardening at ambient temperatures is certainly true for some alloys. Don't know about copper piping though or whether it would ever make it brittle over decades. Copper pipes approved for use as gas lines are commonplace, aren't they? I assume natural aging isn't an issue or at least someone who knows what they are talking about when it comes to copper pipes (not me) has already thought of it. Sean.hoyland - talk 18:26, 12 December 2011 (UTC)
- I'd expect a patina to form on the surface if the tubing isn't coated with something to prevent it. The patina is likely harder than the base copper, so might crack or flake off if it's a thin layer, when you try to bend the copper tubing, or might be stiff enough to prevent bending, if it's a thicker layer. Presumably most copper tubing only needs to be flexible when initially installed. After that, the need to bend it isn't likely to come up until it needs to be replaced, and then it can just be cut into sections and removed. StuRat (talk) 01:04, 13 December 2011 (UTC)
- No, copper tubing is used on appliance gas lines where they are occasionally moved for cleaning/repair (the appliances, not the tubing). You keep a couple of large loops of tubing to allow for the travel distance and are careful not to kink it. They never put the gas connection on the front of a stove, for instance. Rmhermen (talk) 19:28, 13 December 2011 (UTC)
Most dense ceramic
What is the most dense ceramic and how dense is it? ScienceApe (talk) 15:53, 12 December 2011 (UTC)
- I see a claim made that cerium oxide-stabilised zirconium oxide is the densest.[3] But it may be only a promotional claim. Rmhermen (talk) 16:41, 12 December 2011 (UTC)
- Tungsten carbide can be used as a ceramic material, and it's more than twice as dense. ~Amatulić (talk) 01:25, 13 December 2011 (UTC)
Elvis monkey
What is this new "Elvis monkey" Reuters speaks of, and can somebody please redirect the redlink in the subject line to the proper species article? Rgrds. --64.85.221.193 (talk) 17:25, 12 December 2011 (UTC)
- It's Myanmar snub-nosed monkey. One Reuters article isn't evidence that anyone other than one lazy journalist uses this term, so a redlink isn't appropriate. -- Finlay McWalterჷTalk 17:29, 12 December 2011 (UTC)
- A photo(shop) of it is here. It doesn't remotely look like Elvis. -- Finlay McWalterჷTalk 17:32, 12 December 2011 (UTC)
- Thank you kindly, Mr. McWalter. Turns out I already knew about that discovery, just didn't make the connection. Oh, well. Rgrds. --64.85.221.193 (talk) 17:45, 12 December 2011 (UTC)
Method of image charges
If a problem in electrostatics is solved by using the method of image charges, the total induced charge on the conductor is always equal to the image charge. Is there a simple reason for this? 65.92.7.9 (talk) 21:25, 12 December 2011 (UTC)
- It's a simple syllogism. If the induced charge were different from the one in the solution, then the solution wouldn't be a solution. Dauto (talk) 00:41, 13 December 2011 (UTC)
- See Gauss's theorem in case my brief explanation above isn't clear. Dauto (talk) 00:43, 13 December 2011 (UTC)
- Sorry, I don't understand, nor do I see the connection with Gauss' theorem. 65.92.7.9 (talk) 01:33, 13 December 2011 (UTC)
- Which do you find most simple? Each of these is a restatement of the derivation provided in our article:
- The solution to the boundary value problem, under the (unphysical) assumption of a non-conductive volume on the other side of the ground plane, is uniquely a point charge of value -q.
- The construction of a ground-plane sets up a problem with spatial symmetry; the equations that define electrostatics consequently result in charge symmetry.
- The integral of the surface charge is determined by the electrostatic field due to the test charge.
- These "simple" answers are a little more vague than the complete solution of the defining equations, which are presented in detail in our article. Nimur (talk) 03:32, 13 December 2011 (UTC)
- How do you conclude that the conductor's charge = image charge? I know that the image charge and induced charge produce the same electric field in the upper half plane, but couldn't it be that a certain charge distribution with total charge ≠ q produces the same E-field as a point charge q? 65.92.7.9 (talk) 13:54, 13 December 2011 (UTC)
- That's where Gauss's theorem comes in. If the two solutions produce the same field outside of the conductor, then they must have the same total charge inside of the conductor, because the total charge can be obtained from the field by a surface integration, according to Gauss's theorem. Dauto (talk) 15:02, 13 December 2011 (UTC)
- Oh, okay! I can see that for the spherical conductor example in the article, but I'm having trouble seeing what the Gaussian surface would be for the infinite plane conductor (first example). 65.92.7.9 (talk) 15:58, 13 December 2011 (UTC)
- Closed-path integrals around regions with infinite size can be computed as Cauchy integrals. It's not a very intuitive concept. To be mathematically rigorous, you must start using terminology like "holomorphic" and "simply connected" and "complete subset." The physicist in me does not like having to fall victim to such mathematical lingo. In any case, if you're willing to accept equations at face value, you can compute an integral for the gaussian surface around the infinitely-sized region underneath the grounding surface. If you want to understand why that integral is valid, you should start by reading Methods of contour integration. Nimur (talk) 20:20, 13 December 2011 (UTC)
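For the infinite-plane case one can also bypass the limiting and contour machinery and simply integrate the induced surface charge density, which the image solution gives directly. A sketch of that standard textbook computation, with the point charge q at height d above the grounded plane and ρ the radial distance along the plane:

```latex
\sigma(\rho) \;=\; \varepsilon_0 \, E_z\big|_{z=0}
          \;=\; -\,\frac{q\,d}{2\pi\left(\rho^{2}+d^{2}\right)^{3/2}},
\qquad
Q_{\text{induced}} \;=\; \int_{0}^{\infty} \sigma(\rho)\, 2\pi\rho \,\mathrm{d}\rho
          \;=\; -\,q\,d\left[\frac{-1}{\sqrt{\rho^{2}+d^{2}}}\right]_{0}^{\infty}
          \;=\; -\,q .
```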
- Oh, okay! I can see that for the spherical conductor example in the article, but I'm having trouble seeing what the Gaussian surface would be for the infinite plane conductor (first example). 65.92.7.9 (talk) 15:58, 13 December 2011 (UTC)
- Another way to see that is to solve the problem for a sphere of radius R and then take the limit R → ∞. Dauto (talk) 08:36, 14 December 2011 (UTC)
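And to spell out that limiting argument: for a grounded sphere of radius R with the charge q held a distance a outside its surface (so d = R + a from the centre), the standard image-charge result gives a total induced charge equal to the image charge q', which tends to -q as the sphere flattens into a plane:

```latex
q' \;=\; -\,q\,\frac{R}{d} \;=\; -\,q\,\frac{R}{R+a}
\;\xrightarrow[\;R \to \infty\;]{}\; -\,q .
```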