
Wikipedia:Reference desk/Archives/Science/2009 October 19

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


October 19

abu hureyra

what language is abu hureyra? what in the world does it have to do with the world we call earth? —Preceding unsigned comment added by Mia9013 (talkcontribs) 01:02, 19 October 2009 (UTC)[reply]

Tell Abu Hureyra is an earthen mound which is believed to be the site of a stone-age settlement that is probably the earliest known example of agriculture. That's what it has to do with the "world we call earth". The language is Arabic, but I don't know the actual English translation. --Jayron32 01:29, 19 October 2009 (UTC)[reply]
Tell is specifically an archaeological mound. Abu Hurairah was a specific person (that was his name); this archaeological site is named in his honor (though he probably has nothing to do with it). Literally, it means "Mound of the Father of the Kitten." (But this literal translation of people's names is unusual - it would be more appropriate to say "Mound of Abu Hreira"). This is a tough topic for an elementary-school project - being both a recent find and in Syria, it is difficult to research even at a university level (for political reasons); not much research work is published in English; and since the site was flooded during the construction of the Tabaqah Dam, it's probably never going to be excavated again. This site in particular seems to predate any other major finds in the Levant region - it's important because evidence of seeds and agriculture was found there and dated to around 11,000 BC. Nimur (talk) 04:41, 19 October 2009 (UTC)[reply]
This article [1] may be of help, but it is not really written for an 11-year-old pupil. --Cookatoo.ergo.ZooM (talk) 12:48, 20 October 2009 (UTC)[reply]

Ficin and adjuvants

Ficin is a reagent that is used in blood banking to enhance or modify certain immunological reactions. Is there any relationship between this enhancement and the effect enhancement produced by Immunologic adjuvants? SDY (talk) 02:47, 19 October 2009 (UTC)[reply]

electrophilic halogenation

So, going over last semester's stuff for a problem set... OK, why does the iron in, say, FeBr3 end up being electronegative enough to pull electrons from Br2?

Is this sort of related to how an alkane that is already partially fluorinated becomes even more reactive toward fluorinating agents? It seems kind of paradoxical to me how an electron donor ends up being an electron acceptor ... if Fe is so electronegative, why doesn't it just kick out a bromine? Basically, how can Br(+1) and Fe(II) be better than Fe(III) and Br(0)? John Riemann Soong (talk) 03:34, 19 October 2009 (UTC)[reply]

I think you are taking the wrong perspective. I am pretty sure the reaction proceeds via the formation of FeBr4-, a.k.a. the tetrabromoferrate(III) ion. There is no reduction of the iron atom; it's a coordination reaction with the bromine; if you want to think of it in redox terms, you are disproportionating the Br2 molecule into Br+ and Br-. The article Electrophilic halogenation actually covers the mechanism pretty well. --Jayron32 04:12, 19 October 2009 (UTC)[reply]
Oh, and the iron(III) bromide is an electron acceptor (Lewis acid) in this reaction. It really has little to do with electronegativity, and a lot more to do with the fact that iron(III) has empty d orbitals of an appropriate energy to accept electron pairs from bromine. --Jayron32 04:15, 19 October 2009 (UTC)[reply]
Aren't those d orbitals really high in energy? Why in the world would a halogen such as chlorine or bromine donate its bonding electrons to those orbitals? John Riemann Soong (talk) 04:32, 19 October 2009 (UTC)[reply]
Oh, I see. Is it that Br+, Br- is normally a bad combination compared to the covalent bond, but if Br- can coordinate to the Fe, the additional stabilisation drives the formation of Br+? John Riemann Soong (talk) 04:35, 19 October 2009 (UTC)[reply]
The 3d orbitals are actually very close in energy to the valence level. That's one of the properties of transition metals that helps them participate in coordination chemistry. The deal is, it's probably an equilibrium: Br2 + FeBr3 <--> Br+ + FeBr4-, and I wouldn't doubt if the K value for this were very small (i.e. it favors the left side a LOT). However, since the small amount of Br+ reacts with the aromatic ring, Le Chatelier's principle tells us that the equilibrium will keep generating Br+ no matter how little of it there was to start with, so the reaction will go to completion even if the initial equilibrium provided only a negligible amount of Br+. In fact, since the reaction generally takes place in non-protic solvents (water or other protic solvents will prevent the tetrabromoferrate(III) complex from forming), there is very little to stabilize the ions on the right side of the equilibrium, so it is likely a very small amount indeed. However, since you have a mechanism to quickly remove any Br+ as soon as it forms, the actual stability of the Fe-Br coordination bond, vis-a-vis the standard Br-Br covalent bond, is moot. It's stable enough to catalyze the reaction. The FeBr4- ion also provides the "base" (Br-) for the deprotonation step, so you only need catalytic (much less than stoichiometric) amounts of the iron(III) bromide anyway. --Jayron32 04:46, 19 October 2009 (UTC)[reply]
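To put a rough expression behind that Le Chatelier argument (the numbers here are illustrative only; no equilibrium constant is given in this thread):

    K = \frac{[\mathrm{Br^+}]\,[\mathrm{FeBr_4^-}]}{[\mathrm{Br_2}]\,[\mathrm{FeBr_3}]}

Even if K is tiny (say on the order of 10^-10), the arene keeps consuming Br+, so the reaction quotient Q stays below K and the equilibrium keeps shifting to the right until the Br2 (or the arene) runs out.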
Electrophilic halogenation indicates (and all organic texts I've seen agree) that the electrophilic species is "Br2 coordinated to FeBr3": the Lewis acid induces a large dipole along the Br-Br bond, making the bromine atom that is not directly bound to Fe carry a large partial positive charge. There is not a pure "Br+" ion, merely something that is positive enough to induce the reaction. The Fe-coordinated bromine atom of Br2 does not break off immediately, but rather helps pull off that bromine atom in an SN2-like process. DMacks (talk) 05:16, 19 October 2009 (UTC)[reply]

superacids and aromatic compounds

So .... if I use a superacid (magic acid) on an alkane, can I use it to alkylate an aromatic ring? Let's say I have pure benzene. The acid won't react irreversibly with the benzene (or interfere with the alkylation), while the carbocation will go on to perform an electrophilic aromatic substitution, right?

(Just trying to come up with an imaginative way to alkylate benzene on paper.) John Riemann Soong (talk) 05:01, 19 October 2009 (UTC)[reply]

If you can get a cationic carbon that has an open valence site (unsatisfied octet), or at least something very close to that, a pi bond will in principle be able to attack it and go down the Friedel-Crafts pathway. But "[super]acid won't react irreversibly with the benzene (or interfere with the alkylation)"[citation needed]. DMacks (talk) 05:07, 19 October 2009 (UTC)[reply]
Well, the way I see it, benzene will kick out the proton, because of the lack of a good nucleophile from the magic acid's conjugate base... and isn't an alkyl carbocation preferable to a benzene carbocation (which has lost lots of resonance stabilisation)? John Riemann Soong (talk) 05:37, 19 October 2009 (UTC)[reply]
But the very meaning of a strong acid is that it provides a large equilibrium amount of H+, and can therefore push protonation reactions far forward even if they are not normally favorable. DOI:10.1021/jo00342a015. DMacks (talk) 05:48, 19 October 2009 (UTC)[reply]

Is there any way to stop overalkylation? I'm supposed to come up with a total synthesis of ibuprofen ... from benzene. I don't know how to alkylate the benzene ring (I can then turn to the green chemistry synthesis yayyy) without overalkylating it at multiple sites. John Riemann Soong (talk) 05:34, 19 October 2009 (UTC)[reply]

What sorts of substituents have what effect on the rate of alkylation? DMacks (talk) 05:37, 19 October 2009 (UTC)[reply]
Well okay, I wanted to do the alkylation first so I could basically copy the green synthesis. But I guess I have to do the green synthesis first and then alkylate the ring...? My big worry is that I have this carboxylic acid, and that will also be alkylated in some fashion. I mean, I know carboxylic acids aren't very reactive, but then again an aromatic ring with a deactivating substituent might be less reactive than a carboxylic acid, right? John Riemann Soong (talk) 05:45, 19 October 2009 (UTC)[reply]
The end of the green synthesis does not have a "carboxyl on the ring" in a way that strongly deactivates it (check your textbook for the origin of the deactivation). If the carboxylic acid is alkylated also, what is the product there, and do you know a way to solve it by an additional reaction? On the other hand, suppose you put the alkyl group onto benzene in two steps: first as "something" and then "convert something into alkyl". What are functional groups that you know how to convert to alkyl? Especially think about ones that would also prevent additional alkylations. DMacks (talk) 05:54, 19 October 2009 (UTC)[reply]
Oh whoops. Yeah, convenient that the carboxylic acid is actually an additional carbon away. So .... is this alkyl chain (with the acid group) an o,p-director or an m-director? On one hand, that electropositive carbon might make the benzylic carbon electropositive (though it's got a pi cloud partially delocalised over it), but on the other hand, alkyl groups tend to be weak activators. Which effect wins?
Let me think about your other suggestions. I'm really not confident with protecting-group strategies, mainly because of my fear of side reactions. John Riemann Soong (talk) 06:03, 19 October 2009 (UTC)[reply]

Okay here's my synthesis plan for "alkylating first".

alkylating first

  • React benzene with an acyl chloride.
  • Reduce the carbonyl to an alcohol via Wolff-Kishner. This is a benzylic alcohol ... umm, it won't hydrogenate benzene rings, right? Is it compatible with aryl carbonyls?
  • Eliminate the alcohol to create a secondary-aryl alkene.
  • Hydrogenate the alkene. The alkene is, however, conjugated with the benzene ring. Is there any way to break this pi-cloud interaction with a strong enough catalyst without hydrogenating the benzene ring? John Riemann Soong (talk) 06:14, 19 October 2009 (UTC)[reply]
Oh hey, I guess this is the perfect reaction for the Clemmensen reduction? John Riemann Soong (talk) 06:20, 19 October 2009 (UTC)[reply]
Good idea! DMacks (talk) 06:28, 19 October 2009 (UTC)[reply]
W-K also reduces C=O to CH2. There's no practical difference in this case. Tim Song (talk) 06:57, 19 October 2009 (UTC)[reply]
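For reference, the sequence being converged on here, sketched generically (R stands for whatever acyl group is chosen; this is a generic sketch, not the specific ibuprofen route): Friedel-Crafts acylation followed by Clemmensen (or Wolff-Kishner) reduction. The intermediate ketone deactivates the ring, which is what prevents over-acylation.

    \mathrm{C_6H_6} \;\xrightarrow{\;\mathrm{RCOCl,\ AlCl_3}\;}\; \mathrm{C_6H_5COR} \;\xrightarrow{\;\mathrm{Zn(Hg),\ HCl}\;}\; \mathrm{C_6H_5CH_2R}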

tractionsystems

Can you tell me how the track is used as a return conductor, using earthing? —Preceding unsigned comment added by 122.252.241.54 (talk) 06:01, 19 October 2009 (UTC)[reply]

See Third rail. Red Act (talk) 07:29, 19 October 2009 (UTC)[reply]

radioactive labelling and fatty acid synthesis

I totally don't get this part at all. I guess I get fatty acid synthesis, with iterative aldol condensations, but I don't get why radioisotopic labelling ends up on so many carbons of a fatty acid. Basically it's like almost every carbon atom ends up labelled (but even-numbered atoms are labelled differently from odd-numbered ones). John Riemann Soong (talk) 06:29, 19 October 2009 (UTC)[reply]

"even-numbered atoms are labelled differently from odd-numbered ones" is the key. That pattern tells you which carbons come from which parts of which starting materials. DMacks (talk) 06:32, 19 October 2009 (UTC)[reply]

Okay, now I have to admit I don't really get fatty acid synthesis beyond the carbonyl/enol(ate) chemistry, at least not when I look at the fatty acid synthesis article. It seems that the most optimal reaction sites would lead to branched-chain fatty acids, not straight-chain ones. John Riemann Soong (talk) 06:45, 19 October 2009 (UTC)[reply]

So basically I'm labelling the CH3 group of acetyl-CoA. This time, the malonyl-CoA isn't labelled. Is the malonyl-CoA used only once? Once a fatty acid chain is going, is malonyl-CoA still needed? Are malonyl-CoA and acetyl-CoA used in alternate elongations? Basically I don't get how, after one elongation, the product is iteratively worked up to make it reactive for the next elongation. John Riemann Soong (talk) 07:01, 19 October 2009 (UTC)[reply]

You always use a malonyl-ACP to grow the chain. Expelling CO2 drives the reaction along. Tim Song (talk) 07:03, 19 October 2009 (UTC)[reply]
My instructions say "indicate the pattern of radioactive labeling if derived from 14CH3-12C-O2H enriched acetate". So how does that translate to radioactive sites on malonyl-ACP? I'm also at a loss as to how to figure out radioactive labelling sites when it comes to rings, steroids and unsaturated compounds. Help! John Riemann Soong (talk) 07:06, 19 October 2009 (UTC)[reply]
OK, how do I make unconjugated unsaturated fatty acids, and what would their labelling patterns be? Like say, arachidonic acid. My big problem is that if for example I keep the alkene bond, I get conjugated pi systems instead ... John Riemann Soong (talk) 07:09, 19 October 2009 (UTC)[reply]
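A small bookkeeping sketch for the labelling question (my own illustration, not from the thread; it assumes the standard textbook scheme in which the acetyl-CoA primer becomes the ω-end, every later C2 unit comes from malonyl-ACP, the CO2 added to make malonyl is lost again on condensation, and all the acetate - primer and extenders alike - carries the 14C label on its methyl carbon):

    # Toy bookkeeping for palmitate (C16) built from one acetyl-CoA primer plus
    # seven malonyl-ACP extender units. Malonyl-CoA comes from carboxylating
    # acetyl-CoA, and that added CO2 is lost again on condensation, so every
    # chain carbon ultimately derives from acetate. With acetate labelled on its
    # methyl carbon (14CH3-COOH), the label alternates along the chain.

    def label_pattern(chain_length=16):
        """Return {carbon number: origin}, counting C1 as the carboxyl carbon."""
        pattern = {}
        for c in range(1, chain_length + 1):
            # Even-numbered carbons derive from the methyl carbon of an acetate
            # unit (labelled); odd-numbered carbons from its carboxyl carbon.
            pattern[c] = "14C (from acetate methyl)" if c % 2 == 0 else "12C (from acetate carboxyl)"
        return pattern

    for carbon, origin in label_pattern().items():
        print(f"C{carbon}: {origin}")

That alternation is exactly the "even-numbered atoms labelled differently from odd-numbered ones" observation in the original question; desaturation later in the pathway removes hydrogens rather than carbons, so it doesn't change which positions carry the label.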

Engine braking - NA vs forced induction

Hi. Archive search brought up SOME results, but nothing exactly answering my question. Simply put: do you get less engine braking from a turbo-charged, super-charged or naturally aspirated car? Say for the following 2 scenarios:

  1. All 3 engines have identical torque curves and identical gear ratios (e.g. a 1.4L turbo vs. 1.4L SC vs. 2.0L NA). I would assume that the braking torque of the NA engine would be the "full" 2L worth of braking, whereas the turbo would only give the "1.4L worth" of braking that the un-forced engine would have given. Not sure what the SC engine would do? Is my intuition correct?
  2. Engines with different characteristics: my favourite examples are the Honda S2000 (NA) vs. a Seat Leon Cupra (turbo) - both 177kW but with 208Nm vs 300Nm respectively. At a gear ratio that gives them the same net torque at the wheels at the same road speed (but not at the same engine speed, obviously; the Honda would be revving much higher), which one would experience more braking torque?

Second question, is the braking torque the same as the full-throttle accelerative torque (both in the actual value and also the mechanism by which they are generated)?

Thirdly, our articles engine braking and maximum brake torque are in serious need of attention from someone knowledgeable. Regards. Fourthly, I'm expecting Steve to go to town on this question :) Zunaid 09:06, 19 October 2009 (UTC)[reply]

Typically, forced aspiration engines have a lower compression ratio than NA engines (since the charge is forced in, and therefore already somewhat compressed) and this lower compression ratio reduces the engine braking. Therefore TC engines lose engine braking effect owing to 2 factors - the engine is smaller, and the compression ratio lower. I don't know how SC engines would fit into this, but I would guess similarly - the only extra factor may be a slightly greater braking than the TC engine owing to supercharger losses. Engine braking torque is way lower than accelerative torque - it's caused by friction and pumping losses, whereas acceleration is caused by combustion - a series of explosions - and generates much more power. --Phil Holmes (talk) 13:47, 19 October 2009 (UTC)[reply]
Interesting question - and not one I've really thought about. On all of the reasonably modern turbo- and super-charged cars I've owned, the air injection system has gate valves that close when you take your foot off the gas - so the pump is essentially out of the loop when you're engine-braking. So I'd expect the results to be essentially identical to an otherwise identical non-supercharged engine. I guess on the supercharged motors, the supercharger itself eats about 5% of the horsepower (although it gives more than that back in return when it kicks in). That suggests that perhaps a supercharged engine would be able to provide about 5% better engine braking just because of that loss. Not so with turbos, though. SteveBaker (talk) 19:57, 19 October 2009 (UTC)[reply]
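As an aside, the gear-ratio arithmetic implied in scenario 2 above can be made explicit (engine figures as quoted; drivetrain losses ignored). To deliver the same wheel torque at the same road speed, the overall reduction ratios must satisfy

    208\ \mathrm{Nm} \times r_{\mathrm{S2000}} = 300\ \mathrm{Nm} \times r_{\mathrm{Leon}} \;\Rightarrow\; \frac{r_{\mathrm{S2000}}}{r_{\mathrm{Leon}}} = \frac{300}{208} \approx 1.44

so the S2000 has to turn roughly 1.44 times as fast at that road speed. Which car then sees more braking torque at the wheels depends on how each engine's closed-throttle friction and pumping losses scale with rpm, which the peak-torque figures alone don't tell you.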

PRECISE mechanism of HMG-CoA synthase (how in the world do you replace a methyl group with an oxygen atom?)

There seems to be a lot of hand-waving in this article. I need to do radioactive labelling of a huge amount of terpenes, starting with the labelling of the methyl group of acetate (which presumably becomes the methylene group of malonyl-CoA). So which atoms in 3-hydroxy-3-methylglutaryl-CoA would be labelled? How in the world do you replace a methyl group with an oxygen atom? John Riemann Soong (talk) 09:25, 19 October 2009 (UTC)[reply]

The three labeled carbons would be the two CH2 carbons and the CH3 carbon in HMG-CoA. Now, I don't know what you mean by "replacing a methyl with an oxygen". This reaction pathway does the opposite: it changes acetate to acetone, which requires the replacement of an oxygen with a methyl. So I am confused by your second question. --Jayron32 13:55, 19 October 2009 (UTC)[reply]
Go read the literature. What makes you think the exact reaction mechanism is known? It usually isn't. --Pykk (talk) 17:35, 19 October 2009 (UTC)[reply]

SMART Transmitters

I'd like to know whether SMART transmitters, used mainly in oil and gas plants, can be categorized under S.M.A.R.T. technology. I recently tried to search the internet for the meaning of SMART in such transmitters. These transmitters usually support the HART, FOUNDATION™ Fieldbus, Modbus, and/or Profibus protocols. I was even wondering why there is no Wiki article about SMART transmitters! I can write something, but will need someone to correct my language later.--Email4mobile (talk) 09:52, 19 October 2009 (UTC)[reply]

Do you know which company or contractor makes these SMART transmitters? It looks like a brand-name, but it could also be a generic description of field sensors with extra features (similar to "smart" phones). I suspect the contractor or operator's website would be the best place to look. For example, Omega manufactures a "SMART Transmitter" which is not specifically for oil and gas (it's a general purpose line of wireless transducers), but could certainly be used to instrument an E&P field site. This product does not use S.M.A.R.T. for hard-disk drives, though. Nimur (talk) 15:01, 19 October 2009 (UTC)[reply]
STS Sensor also makes a "Smart Transmitter" to wirelessly convey transducer signals. Again, not to be confused with the S.M.A.R.T. technology. Nimur (talk) 15:05, 19 October 2009 (UTC)[reply]
Also Honeywell [2] Nil Einne (talk) 15:52, 19 October 2009 (UTC)[reply]
SMART transmitters are a newer, standardized replacement for the pneumatic and electronic transmitters used in almost all industrial plants (as part of control loops); they use the 4-20 mA signal, with HART or another communication protocol superimposed on the DC signal. There are many manufacturers, among them Rosemount (Emerson), Honeywell and Yokogawa. Although I have studied them a fair amount, I never asked myself whether the word "SMART" means "S.M.A.R.T." or just "intelligent".--Email4mobile (talk) 17:25, 19 October 2009 (UTC)[reply]
Well, I've just read some information in the "Instrumentation Reference Book" and understood that the meaning is "intelligent". I think that should be it, not S.M.A.R.T. Thank you very much for your interaction.--Email4mobile (talk) 17:53, 19 October 2009 (UTC)[reply]
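For background on the 4-20 mA loop mentioned above: a transmitter maps its calibrated range linearly onto 4-20 mA (4 mA at the lower range value, 20 mA at the upper range value), and digital protocols such as HART are superimposed on that current. A minimal sketch of the scaling (my own illustration; the names are made up):

    def process_value_to_current_ma(pv, lrv, urv):
        """Map a process value onto the standard 4-20 mA "live zero" current loop.

        pv  -- measured process value (e.g. pressure in bar)
        lrv -- lower range value of the calibrated span (maps to 4 mA)
        urv -- upper range value of the calibrated span (maps to 20 mA)
        """
        fraction = (pv - lrv) / (urv - lrv)
        return 4.0 + 16.0 * fraction

    # Example: a transmitter calibrated 0-10 bar reading 2.5 bar outputs 8 mA.
    print(process_value_to_current_ma(2.5, 0.0, 10.0))   # -> 8.0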

Fainting

Why do things look blurry and yellow when you're about to faint, or when you nearly paint?--Mikespedia (talk) 10:34, 19 October 2009 (UTC)[reply]

I would recommend reading Syncope (medicine). The effects you are describing are known as a brownout, and are most likely caused by low blood pressure in the brain. Googlemeister (talk) 13:16, 19 October 2009 (UTC)[reply]
See also paint mask.--Shantavira|feed me 15:28, 19 October 2009 (UTC)[reply]

Sorry, it should be "or when you nearly faint".--Mikespedia (talk) 00:02, 20 October 2009 (UTC)[reply]

Fusion Reactor Image..

I'm trying to track down a (somewhat iconic) image of a fusion reactor firing; the image appears to be taken from above the reactor assembly, and has multiple bolts of electricity arcing across the metal. However, I'm stuck at work on a rather slow connection, my google-fu is weak, and I've been unable to find it (trawling through Wikimedia didn't produce it either). Can anyone help? Plus, which reactor did the image come from? Thanks! NeoThermic (talk) 12:59, 19 October 2009 (UTC)[reply]

Is this a fantasy photo you are looking for? As far as I know, there aren't any fusion reactors sitting around for us to take photos of. -- kainaw 13:11, 19 October 2009 (UTC)[reply]
Not at all. There have been many fusion experiments that would require some sort of fusion reactor to actually experiment with. Granted, we've not had any reactors that produce more power than is put in, but that's why they're all experimental reactors. NeoThermic (talk) 13:16, 19 October 2009 (UTC)[reply]
Here is the image of the Z machine which I think you are talking about. --Dr Dima (talk) 13:38, 19 October 2009 (UTC)[reply]
YES! Many thanks! That's the exact image I was trying to find! Thanks again! NeoThermic (talk) 13:51, 19 October 2009 (UTC)[reply]
Here it is in high-res, if you want it. It's a crazy image. --Mr.98 (talk) 13:57, 19 October 2009 (UTC)[reply]
That's a high energy X-ray and plasma physics experiment, not a fusion reactor. (Though, plasma physics is closely related, and research from this project often applies to fusion research. Still, it's not a reactor in the analogue of a fission reactor). You might want to read about tokamaks. Nimur (talk) 18:02, 19 October 2009 (UTC)[reply]
It's not that unrelated to fusion reactor research. You can use z-pinch (and even Z machine itself) to induce DT fusion, and they are working on prototype power plant designs. --Mr.98 (talk) 22:15, 19 October 2009 (UTC)[reply]

Bilingualism and Inner Monologue

Do bilingual people think in just their native language, or does it switch depending on usage/context? Does the brain change in some way to accommodate not just speaking another language but actually instinctively thinking in that language? TheFutureAwaits (talk) 13:54, 19 October 2009 (UTC)[reply]

My understanding from talking to a lot of people about this is that it varies. Some people only think in their native language. Some get accustomed to thinking in other languages. An American professor of mine (English as first language) told me that she generally thinks in German now because she finds it better at expressing what she's thinking about. I'm not sure all people "think" the same way, either (I don't generally think in "words", for example, unless I am trying to actually compose language in my head). --Mr.98 (talk) 14:00, 19 October 2009 (UTC)[reply]
My first language is Portuguese, but I've been speaking mostly English long enough that I think in English most of the time, to the point that I find myself thinking in English and translating it back to Portuguese whenever I have to speak Portuguese for short periods. Dauto (talk) 14:15, 19 October 2009 (UTC)[reply]
OR. It switches depending on context, where the dominant context is the people around one. There is a tendency to be confused when one "thinks" an expression in the other language that does not readily translate. Cuddlyable3 (talk) 15:13, 19 October 2009 (UTC)[reply]

Most language acquisition data shows that you don't "think" in a language; thought and cognition are independent of language. You can disable language centres and still think and rationalise perfectly fine (but your communication skills suffer). You can also severely impair cognition and have perfectly fine language. Cognition and language are separate. Now, cognition may be rapidly translated into a favourite language (like a native language), but this does not imply "thinking in a language". Notably, you think in a language only when you wish to communicate. Sure, talking to yourself is useful, but it is not necessary for cognition. John Riemann Soong (talk) 16:40, 19 October 2009 (UTC)[reply]

Go back and read the title. "Inner monologue" certainly does involve using language. --Pykk (talk) 17:30, 19 October 2009 (UTC)[reply]
I have to agree. I was thinking about this several times, and just before I answered it I was definitely thinking in English. In fact I'm one of those people who tends to think about all sorts of stuff all the time and lets their mind wander all over the place, and quite often I know it's in English. Technically this may simply be my mind analysing my thoughts in a manner I can understand, but that seems to me to be mostly a moot point. Perhaps I'm just a freak of nature, but I doubt it. This doesn't mean you need language to think, nor does it mean your thought process is limited if you don't have language. Nil Einne (talk) 19:02, 19 October 2009 (UTC)[reply]
Ah, but speech isn't a medium for thought. It's simply a byproduct of thought. John Riemann Soong (talk) 00:44, 20 October 2009 (UTC)[reply]
Consider feral children who never learn language, or deaf people before and after schools/institutes were established where sign languages were invented/learnt. Learning a language is something that has to be done within a certain window in a child's life or never, and it is necessary for certain types of reasoning. As Nil said, your thought process is limited if you don't have language. 86.140.149.215 (talk) 01:06, 20 October 2009 (UTC)[reply]
Did Nil really say that? Steven Pinker argues in The Language Instinct that language is a biological machine, independent of consciousness and cognition. Witness the stroke patient with Broca's aphasia who struggles with language even though he knows *exactly* what he wants to say in terms of ideas, but is at a loss for words. Or deaf people who never learnt a native language but mime out scenes enthusiastically to others and know how to pick locks and do complicated sums. Or a near-adult with mental handicaps who struggles with everyday tasks but talks with the sophistication of a British noble at tea. This is a case where, again, correlation does not imply causation. Simply because language is frequently correlated with thought doesn't mean language is necessary for thought. John Riemann Soong (talk) 03:00, 20 October 2009 (UTC)[reply]
British noble's teatime conversation. Cuddlyable3 (talk) 17:59, 24 October 2009 (UTC)[reply]
My wife (who is French) has spent more than half of her life in English-speaking countries and is pretty much fluent in English. She says she thinks in English when she's around English people - and in French when she's with French people. However, one thing is startlingly clear - she can't do mental arithmetic in English...you hear her muttering numbers in French - then translating the answer to English at the end. Presumably rote-learning of multiplication and addition tables is the cause of that. English word order occasionally trips her up and there are a few metaphors she messes up - also she ALWAYS screws up the names of the letters G ("gee") and J ("jay") - saying things on the phone like "No, that's 'jay' as in 'George'."...much to the confusion of the poor person on the other end of the line! Another peculiarity I've noticed is that when she's been on the phone with her French-speaking family, her English accent "goes away" and she sounds incomprehensibly French for about 15 minutes after. SteveBaker (talk) 19:36, 19 October 2009 (UTC)[reply]
Ah, I did the same thing the other day! But your wife has a better excuse: French and English J and G are exactly reversed! — Sebastian 03:42, 20 October 2009 (UTC)[reply]
Well, not exactly, no. The article you linked gives the French pronunciation of G as "jay" in the respelling-pronunciation column, but as "zhay" in the IPA column. If I remember correctly, the "zhay" is closer (I was going to say "correct", but not really, because the "ay" sound in English is a diphthong). --Trovatore (talk) 03:56, 20 October 2009 (UTC)[reply]
Oh, but of course that's not a problem with the IPA column of the article, but with my respelling as "zhay". --Trovatore (talk) 03:57, 20 October 2009 (UTC)[reply]
By "exactly" I meant to point to the amazing fact that two conditions apply: (1) the exchange is symmetric (J->G and G->J) and (2) the pronunciation is the same with the exactness of the phonemes of the language used (i.e. English. In English, neither initial /ʒ/ nor /e/ are phonemic). — Sebastian 06:35, 20 October 2009 (UTC)[reply]
Why would your wife struggle with arithmetic in English? If anything, I would think that in French it would be more difficult, because the language has a messed-up numeral system. English is my native language, and sometimes I have a bit of trouble with mental arithmetic because of nine of the most unnecessary words in the language ("eleven" through "nineteen"). Better to visualize an abacus or Comptometer -- I used to use the latter.

While there is definitely debate in the scholarly community as to the interdependence of language and thought (as the previous comments alluded to) what you are referring to, which I would describe as using language internally to mediate thought, is a well-recognized phenomenon, which psycholinguists often refer to as "inner speech."

When searching for an answer to your question, I googled "second language acquisition "thinking in a language"" and found a review of what looks to be a very comprehensive book on the topic. It is reviewed here:

http://linguistlist.org/issues/17/17-1309.html

The book is called: Inner Speech -- L2: Thinking words in a second language. By Maria C. M. de Guerrero.

(this is how I learned that thinking in a language is called "inner speech")

de Guerrero is a professor at the Inter American University of Puerto Rico, and is a prominent researcher on this topic.

I then searched for this title in SpringerLink (Since this book is published by Springer), which is an electronic research database that many libraries and universities subscribe to. I was able to read portions of this book, and Chapter 3 "Thinking Words in a Second Language" is especially informative on this topic. I would recommend searching for this book at your local library or school, either through the catalog or using an electronic database such as SpringerLink.

I'll summarize some of the main points from this chapter:

Bilingual people do think in their second language at times. Whether an individual does and how much they do seems to depend on a variety of complex factors, including:

1) the level of proficiency in the second language ("inner speech" in a second language increases and changes purposes from that of mere mental rehearsal to true private thinking as a person becomes more fluent in the language)

2) The content/subject of the thoughts (for example, if someone goes to school in a second language and studies medicine, then when they are thinking about medicine they may more naturally think in the language in which they learned about it. Or, if someone is thinking about an event that happened while they were using one language vs. another (e.g. when they lived in one country vs. another), they are likely to think in the language that the event occurred in)

3) the purpose that the thought fulfills (e.g. praying, doing mental math, planning/organizing, remembering life experiences) - some people display a preference for one language or another when thinking for different purposes.

As to your question about what happens in the brain when people are able to think in a second language, the best answer is probably found somewhat indirectly, by looking at how multiple languages are represented in the brain and how this is affected by proficiency in a language, since we know that people are better able to think in a second language when they are more proficient in it.

While some research shows that a second language will be represented in a different part of the brain unless you learn it before a certain age, most research shows that level of proficiency is more important - that is, for people proficient in a second language, that language is processed in the same part of the brain as the first language, regardless of the age at which the person learned it.

So it seems that the more proficient you are in a second language, the more likely you will be to think in that language in certain contexts, and the more likely your two languages will be to use the same part of your brain.

Here is another book chapter written by this Maria C. M. de Guerrero that is available via google books preview: It is called "Form and Functions of Inner Speech in Adult Second Language Learning" from the book "Vygotskian approaches to second language research"

http://books.google.com/books?id=QbB-CGkx4hwC&pg=PA83&lpg=PA83&dq=%22inner+speech%22+in+second+language&source=bl&ots=FPvpPtgPiO&sig=8o_JIQJopy_UnOQ1n19boMbu-VA&hl=en&ei=qcPcSri5IYGN8AaGqK23BQ&sa=X&oi=book_result&ct=result&resnum=3&ved=0CBgQ6AEwAg#v=onepage&q=%22inner%20speech%22%20in%20second%20language&f=false

This chapter discusses the nature of inner speech in second language learners, its purposes, and how these might vary with proficiency in the second language.

I found this source with a google search for ""inner speech" in second language"

Finally, here is a discussion of various viewpoints on the relationship between language and thought that other posts have referred to:

http://www.d.umn.edu/~dcole/hearthot.htm

It's called "Hearing Yourself Think: natural language, inner speech and thought" by David Cole, Head of the Department of Philosophy at the University of Minnesota. Coming from a philosopher, this naturally offers a somewhat different perspective on the issue, but nonetheless raises some of the main points of debate.

I found this with a google search for "second language acquisition "inner speech""

I hope this helps answer your question. --Kristin Good (talk) 20:42, 19 October 2009 (UTC)[reply]

I talk in my sleep, and according to my wife I have had Spanish dreams on a few occasions when I've been immersed, even for only a few weeks. I know this doesn't answer your question, but I think it supports the notion that the brain can "switch over" on a very deep level. - Draeco (talk) 01:08, 20 October 2009 (UTC)[reply]
I used to think to myself on occasion in Spanish. That was when I used Spanish everyday. Now that it has fallen into disuse, I only think in English.--Drknkn (talk) 04:07, 20 October 2009 (UTC)[reply]

Quantum mechanics free particle - follow-up question

Hi all:

I posted the following question a while back:

http://en.wikipedia.org/wiki/Wikipedia:Reference_desk/Archives/Science/2009_September_18#Energy_eigenfunctions_of_a_free_particle

and just had one further question on it.

I asked about the limit as - I believe I am meant to recover the spectrum of a free particle on an infinite line when I take such a limit.

Currently I have and with h the reduced Planck constant, and the wavefunction of a free particle on an infinite line would be for some constants A,B,C - however, for the limit, unless I'm mistaken I'd need - why does this occur in the limit? I can't see any mathematical reason why it would.

Thanks! Spamalert101 (talk) 14:38, 19 October 2009 (UTC)[reply]

First let me get a minor notation nitpick out of the way. The standard notation for the reduced Planck constant is ℏ. To answer your question: the first equation assumes a non-negative n, that is n = 0, 1, 2, 3, ... etc. The second equation assumes any integer value for n (even negative values). So you cannot compare the two equations without first performing some manipulation. To directly compare them, you must rewrite the second equation so that n is assumed to be a non-negative integer. Does that answer your question? Dauto (talk) 15:28, 19 October 2009 (UTC)[reply]
Yes, thanks, and I did know the standard notation - just not how to LaTeX it: should have tried the obvious I suppose! :) 82.6.96.22 (talk) 17:52, 19 October 2009 (UTC)[reply]
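The equations themselves did not survive in this archive, so purely as a guess at the kind of comparison being described (this reconstruction is mine, not the original formulas): for a particle in a box of length L,

    E_n = \frac{n^2 \pi^2 \hbar^2}{2 m L^2}, \qquad n = 1, 2, 3, \ldots

whereas with periodic boundary conditions, \psi_n \propto e^{2\pi i n x / L},

    E_n = \frac{(2\pi n)^2 \hbar^2}{2 m L^2}, \qquad n = 0, \pm 1, \pm 2, \ldots

Writing the second spectrum in terms of |n| makes the two directly comparable, and in the limit L → ∞ the level spacing of either one collapses into the continuous free-particle spectrum E = ℏ²k²/(2m).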

precise chemical composition of butterish products

So ... I need to somehow obtain the precise chemical composition (percentage breakdown by fatty acid, etc.), including the precise structural formula of every major constituent of the following products: Land o' Lakes butter, Crisco all-vegetable shortening, I can't believe it's not butter spray, ghee, liquid corn oil, partially hydrogenated soybean oil, whey, sweet cream buttermilk, and sweet cream butter.

Unfortunately, as you can see, manufacturers have this irritating tendency to give me vague mixtures that don't tell me what their exact chemical compositions are... help? Should I call the Better Business Bureau or something? John Riemann Soong (talk) 16:36, 19 October 2009 (UTC)[reply]

Or maybe the Butter Business Bureau? --Trovatore (talk) 03:26, 20 October 2009 (UTC)[reply]
GC-MS?!? --Jayron32 18:21, 19 October 2009 (UTC)[reply]
Food that is Generally recognized as safe (in the legal sense of that phrase) is not subject to detailed chemical or agricultural scrutiny. It may be impossible to get chemical contents from the manufacturers. Nimur (talk) 23:13, 19 October 2009 (UTC)[reply]
Google is your friend. Searching "fatty acid composition of butter" (no quotes) gives some good links, as does "fatty acid composition of soybean oil", etc. The article Fatty acid and links therein give you the structure of each component. You're not going to get the precise composition of Land o' Lakes butter, because it's a natural product, and is variable based on what the cows have been eating, etc. Likewise, with something like Crisco, while the ingredients list will indicate what they started with (e.g. "soybean oil") the partial hydrogenation will change the composition in a batch-to-batch fashion (but general changes happen as indicated in the article). Something like ghee is probably a complete mystery, as the cooking process it goes through produces a slew of chemicals that haven't been fully characterized (see Maillard reaction). In general, foodstuffs are poorly characterized at the chemical level. We grab products from nature and eat them, with only the most basic characterization. (i.e. if it's not listed on the Nutrition Facts label or the ingredients list, the producers likely don't know about it.) -- 128.104.112.179 (talk) 14:09, 20 October 2009 (UTC)[reply]

Are we getting taller

Now, it seems (and I think has been shown) that younger people tend to be taller than their parents. (From my experience, I'm not!) I was wondering: is it nutrition, evolution, genetics or some other factor that influences our height? Also, will this trend continue, so that in many years the average height may be 6 foot something? Cheers. AtheWeatherman 16:59, 19 October 2009 (UTC)[reply]

Parents live longer and start shrinking. Cuddlyable3 (talk) 17:08, 19 October 2009 (UTC)[reply]
Yes, in general over the past few hundred years people have been getting taller and taller (see this article). The usual reason given is nutrition, critically at young ages and even a mother's nutrition while the child is in the womb. However, it appears that Americans in particular may be getting shorter on average, particularly compared to Europeans. The reasoning given is that there is greater health and nutrition inequality in the US than Europe, with more poor people living on junk food. I'm not sure if immigration from "short countries" (ie Mexico) is also part of the story. TastyCakes (talk) 17:15, 19 October 2009 (UTC)[reply]
Over several generations we (in the west - it is just starting in Asia, although I believe there is a genetic component involved there as well) have gotten taller due to better nutrition. (That is somewhat based on folk wisdom - I'll try and find some reliable sources in a minute.) I've never heard of any significant general change over one generation - children are usually roughly the average of their parents' heights (after accounting for gender differences). --Tango (talk) 17:20, 19 October 2009 (UTC)[reply]
One of the links I put above claims that German children raised during the two World Wars were shorter than their parents, due to chronic food shortages. Anecdotally, I would say a lot of the males I know are taller than both their parents, and a lot of the females are taller than their mothers. I'd also say there is less of a genetic component than generally presumed. The type of food and amount of protein in the regions diet seems to be a big factor, and the reason many American (or Canadian) born Chinese are much taller than their relatives in China (even Hong Kong, which has been a "first world place" for more than a generation). I have also heard, again anecdotally, that people from the north of China are taller than those in the south because their staple food is wheat rather than rice (and presumably they eat more protein). TastyCakes (talk) 17:30, 19 October 2009 (UTC)[reply]
I believe Asians born and living outside Asia typically have a height somewhere between the average where their parents are from and the average where they grew up. That suggests the height difference is partly environmental and partly genetic (although it is also possible that the Asian sub-culture in other continents still has worse nutrition than other people there). --Tango (talk) 18:40, 19 October 2009 (UTC)[reply]
I'm not sure I agree that Asians have "worse nutrition", as much as they have different nutrition. Smaller meals with more carbohydrates (rice) and less meat is common in many Chinese families, wherever they live and however well off they are. Personally, I would put this as the reason behind Chinese people being somewhat shorter than average even when they're born and raised in the West, rather than genetics (I haven't found data one way or the other in my brief google search). That is also why I'd say Japanese people remain somewhat shorter than westerners. TastyCakes (talk) 19:28, 19 October 2009 (UTC)[reply]
little of everything, maybe. prenatal and childhood nutrition has a big effect; also reduction of childhood diseases. this has presumably plateaued in the middle and higher classes of the first world by now. but on a longer time scale, there is a tendency for species to evolve larger and larger in times of plenty, as larger individuals are better able to compete with their own species for territory, food, shelter, mates, etc. that's not exactly 100% relevant for humans, but I still can't shake the unscientific observation that more women will mate with a football player and more men with a supermodel than with a little person. also, i wonder about the effects of our unnatural diet on growth, both in terms of really huge doses of fat (and increasing body fat) and trace hormones in the food, the former probably more significant than the latter. certainly, it's caused a much earlier onset of puberty; however, that might be expected to cause people to be shorter, since that typically starts to terminate physical growth. on a tangential note, the average bust size of american women has gone from 34B in my youth to 36C now. Brassiere. Gzuckier (talk) 17:42, 19 October 2009 (UTC)[reply]
More attractive humans tend to mate with other more attractive humans, but I don't think they generally have fewer children. In fact, poorer families are usually larger, and there is some positive correlation between attractiveness and wealth. --Tango (talk) 18:43, 19 October 2009 (UTC)[reply]
Don't you mean 'I don't think they generally have more children' or 'I think they generally have fewer children'? Nil Einne (talk) 18:56, 19 October 2009 (UTC)[reply]
From anecdotal evidence it seems to me that pretty much every German male (and most females) in my generation (born late 60s/early 70s) is taller than both parents. There is certainly a drastic difference in a generation. Of course our parents' generation was directly affected by WWII and its aftermath, so they were likely exceptionally short. Somewhat jokingly, I have always favoured the "Dutch cheese" theory, according to which the Dutch fed growth hormones to their cows in the 70s which then got passed on to cheese-eating Europeans. I base this on (again anecdotal) perceived correlation between height and consumption of Dutch cheese in people's childhood. I know this is likely nonsense, but I have failed to find another explanation for why the Dutch are (or seem to be) that much taller than, say, the Belgians (I can't believe that nutrition was that much different).195.128.251.194 (talk) 22:52, 19 October 2009 (UTC)[reply]
One of the articles linked to above says that the average height of Dutch men is over six feet, and mentions it being due to high consumption of dairy products. For young men the average height will be even higher. Something I've noticed in the UK is that it is more and more common to see women well over six feet, and I wonder if the height difference between men and women will vanish in the future. I wish clothing manufacturers would catch up and start selling clothes for taller people - sales are being lost. 78.151.83.175 (talk) 00:06, 22 October 2009 (UTC)[reply]

Sharpless epoxidation, dihydroxylation

So, in each of these, the catalytic metal complex gets released by hydrolysis... but isn't hydrolysis SN2? Doesn't that reverse the enantiomeric stereochemistry of the intermediate? John Riemann Soong (talk) 17:27, 19 October 2009 (UTC)[reply]

Testicle sweat

Why does the sweat of my testicles smell quite differently than that from other parts of my body? --Belchman (talk) 17:45, 19 October 2009 (UTC)[reply]

It's explained in Sweat_glands#Apocrine sweat glands. --NorwegianBlue talk 18:00, 19 October 2009 (UTC)[reply]
Your perception may be biased. Do you have independent confirmation of this difference? 87.81.230.195 (talk) 00:32, 20 October 2009 (UTC)[reply]

Odds of skin color and features in children of multiracial couples

Excuse my relative ignorance of the subject, but I didn't find the answer to my question in the miscegenation or admixture articles. Assuming that in a multiracial couple there are a black and a white person, what will be the odds of the offspring's skin color and features, considering that the children can seem white, black, or mixed? Also, is it a regressive trait, or can the aforementioned couple have one child that looks white and another that looks black? Rachmaninov Khan (talk) 17:55, 19 October 2009 (UTC)[reply]

Human skin color isn't a simple matter of one or two genes; it's quite complicated. See Human skin color. Also, trying to define the terms "black", "white", and "multiracial" isn't nearly as straight-forward as you might think, either. "Race" is basically a social construct, not a scientific one. See Race (classification of human beings) Red Act (talk) 18:27, 19 October 2009 (UTC)[reply]
I'm not sure if I understand your last comment but [3] [4] [5] may interest you Nil Einne (talk) 19:19, 19 October 2009 (UTC)[reply]
I'm not sure whether the "your last comment" is in reference to the OP's last sentence, or to my last sentence. If it was in reference to my last sentence, my last sentence was basically a shortened paraphrase of the following sentence in the Race (classification of human beings) article: "The academic consensus is that, while racial categories may be marked by sets of common phenotypic or genotypic traits, the popular idea of 'race' is a social construct without base in scientific fact." See the article for more information; it's a very interesting article. Red Act (talk) 20:03, 19 October 2009 (UTC)[reply]
The "odds" cannot be easily calculated because the genetics are more complicated than just one gene, and people who label themselves "black" or "white" are often to some extent "mixed race" themselves. Personally, I object to all three labels, and to the concept of "race" (as explained by Red Act above). Nil Einne's second reference estimates the probability as one in a million, but this is just a rough estimate and should not be taken too seriously as an accurate calculation. Dbfirs 21:04, 19 October 2009 (UTC)[reply]
I was referring to the OP hence the indenting. Specifically the OP said "Also, is it a regressive trait, or the aforementioned couple can have one child that looks white and the other black ?" which I don't really understand but the references may be relevant. On the lack of a scientific basis for race I agree with you as I've often expressed on the RD Nil Einne (talk) 12:54, 20 October 2009 (UTC)[reply]
Yeah, I knew from the indenting that your sentence should be a response to the OP, but sometimes indenting accidentally gets off by an indent (I've done it a time or two), and I think a lot of people would find the last sentence of my post puzzling, so I figured I'd expand on the sentence just in case. I haven't been around the RD enough yet to know for sure that you wouldn't have found that sentence puzzling. Red Act (talk) 13:13, 20 October 2009 (UTC)[reply]

coffee and weight gain

I hope this doesn't count as "medical advice". A couple years ago I started a job where they had very good free coffee (I guess the costs were justified by the increased typing speed of caffeinated workers). I'd never been much of a coffee drinker before, but I developed a 1-2 cup a day habit which I thought of as being pretty moderate. I've also never been much of a weight watcher (I've been skinny most of my life) so was very surprised when I recently stepped on a scale for the first time in a couple years, and found I had gained about 15 pounds. I'm still not fat by most standards, but I'm a bit perturbed by this, and wonder if the steady coffee consumption might have anything to do with it (e.g. by causing appetite increase). Any thoughts? 66.127.54.181 (talk) 19:15, 19 October 2009 (UTC)[reply]

Correlation does not imply causation. Lots of people gain weight as they age - for many reasons. It might have been the coffee - but at such low rates of consumption, I kinda doubt it. It might have merely been the change of job. Maybe you don't walk around so much in your new job? Maybe you've changed your lunchtime eating habits? Maybe the building is a few degrees warmer so you're not burning energy that way? There are all sorts of other possibilities and it's almost impossible to guess which of them it is. SteveBaker (talk) 19:28, 19 October 2009 (UTC)[reply]
Well, my first thought was how you're taking your coffee. Plain black coffee has a fairly negligible calorie count (5 calories or so a cup, and I'm using the US convention where a food calorie is really a kilocalorie). Say you use a tablespoon of sugar, though -- that's 50 calories. Creamer appears to be another 20 or so. Two cups a day like that becomes an extra 150 calories, 5%-10% of your daily intake. So that could be a contributing factor. As for the extended effects... that gets trickier to answer. Lots of websites suggest that caffeine contributes to weight gain, but I don't find them to be all that reliable. The Mayo Clinic has a page on the links between caffeine and weight gain/loss, but it avoids making far-reaching statements. — Lomn 19:34, 19 October 2009 (UTC)[reply]
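To extend that arithmetic: using the common (rough) rule of thumb of about 3,500 kcal per pound of body fat, a steady 150 kcal/day surplus works out to

    \frac{150\ \mathrm{kcal/day} \times 365\ \mathrm{days}}{3500\ \mathrm{kcal/lb}} \approx 15.6\ \mathrm{lb/year}

so even a modest daily surplus of that size is enough to account for the kind of gain described over a couple of years, though as noted above many other factors could just as easily be responsible.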
Thanks. I drink the coffee with no sugar but with some creamer. Yeah, I figured the coffee and creamer didn't add many calories but I have generally noticed increased appetite and only today it occurred to me that coffee might have something to do with it. The Mayo Clinic page actually says coffee may promote weight loss. I also read somewhere recently that diet soda promotes weight gain by increasing appetite. Anyway I used to trust my metabolism and eat whenever it told me to. I'll have to start being a bit more careful (whether or not coffee is involved) but I think I can handle it. 66.127.54.181 (talk) 20:23, 19 October 2009 (UTC)[reply]
Caffeine inhibits cAMP phosphodiesterase, causing increased hydrolysis of glycogen, and that process happening in the liver eventually causes a higher blood glucose level. Low blood glucose is what makes you hungry, so keeping glucose up would suppress appetite - that's the theoretical background to weight loss caused by caffeine. I think that only works if you do not eat too much after the time of day when you had a high caffeine level; otherwise the glycogen will be replenished for the time when you are less active, and then - I guess - fat would be created from part of the glycogen. Icek (talk) 02:40, 20 October 2009 (UTC)[reply]
Of course for the same reason it's often recommended not to eat much before going to sleep (no matter how much caffeine you ingest). Icek (talk) 02:47, 20 October 2009 (UTC)[reply]
It seems the best thing to do in order to gain weight would be to try to sleep caffeinated. Then your blood glucose would be high, and not much of it is used, so fat cells could absorb much glucose and produce fat. Then in the morning, you have a low glycogen level which will make you hungry quickly. Icek (talk) 18:55, 20 October 2009 (UTC)[reply]
I think stimulants in general suppress appetite, right? On the other hand I believe it's well-established that sleep deprivation promotes obesity. So it could be a lot like the vicious cycle some of us are familiar with regarding alertness and ability to concentrate: We consume caffeine to enhance these, and it works, but then we can't get to sleep that night, and the next day it's worse. So we either up the dosage, or suffer the consequences.... Do I get more done with coffee, in the long run, than I would without it? Who knows. --Trovatore (talk) 02:54, 20 October 2009 (UTC)[reply]
Another factor: coffee, even decaf, can be something of a mild stomach irritant and increase stomach acid secretion, and that can serve to stimulate eating. Alcohol is similar. Gzuckier (talk) 02:35, 21 October 2009 (UTC)[reply]

Hidden variable theories and T-symmetry

I am not a scientist and pose this question at severe risk of it going way above my head lol, so I apologise in advance if it's the most stupid thing you've ever heard.

My interest is in philosophy and I've been wondering to what extent quantum mechanics and modern physics preclude the possibility of a deterministic existence. In previous discussion at the Reference Desk (thank you, by the way!) I've been pointed in the direction of the Bohm interpretation and hidden variable theories, which I realise are not by any means accepted, but are at least not technically impossible. My question is about the relationship of these theories to the idea of T-symmetry, elements of physics that would appear the same whichever direction time was moving. The way my mind was moving was, if these theories can account for quantum theory in a deterministic way, do they also account for it in a way that wouldn't break any physical laws if the Universe was played out in reverse, so to speak?

I realise that thermodynamic laws and similar would obviously not work in this way - but my understanding is that they are probabilistic on such an enormous scale that they are always demonstrated true. If we were able to account for asymmetrical laws in this way, would that help?

Like I say, I have an absolutely lay understanding of physics, I haven't even studied it for A-level (High School, if you're across the pond) and so am bound to be wrong about this. I just thought that it was no good speculating about things like this without any expert insight. Dan Hartas (talk) 19:36, 19 October 2009 (UTC)[reply]

I don't know much about hidden variable theories, but the Many-worlds interpretation is an alternative to the Copenhagen interpretation that dispenses with a lot of the problems of wave function collapse like being time irreversible. In MWI, states tend to decohere and branch apart as time goes forward, but there's nothing stopping previously non-interfering states from coming together except that it's statistically unlikely, just like with the second law of thermodynamics. The Bohmian mechanics article mentions that it's isomorphic to MWI and consistent with decoherence, so it sounds like it also doesn't have a problem with time symmetry. You might also look at Arrow of time and Entropy (arrow of time). Rckrone (talk) 22:18, 19 October 2009 (UTC)[reply]
(ec)All complex natural processes are irreversible. That has been expressed in the Second law of thermodynamics and the principle of entropy. This law is true not only on an enormous scale; it is true for any isolated system, large or small. The philosophical view Determinism cannot be disproven by logic because it can maintain that perception of logic is pre-determined. Thus identifying any reversible processes neither supports nor defeats the determinist proposition. Cuddlyable3 (talk) 22:23, 19 October 2009 (UTC)[reply]
A process where entropy increases is only irreversible in the statistical sense. There's no violation of the laws of physics if such a process were to undo itself, but it's unlikely to happen by chance (a probability of about e^(−ΔS/k)). For example, if after the process is complete we could take stock of the microscopic state and apply a CPT reflection, those exact starting conditions would theoretically result in the process playing out exactly in reverse, assuming deterministic physics. Having that additional information about the starting conditions, above the macroscopic thermodynamic state, changes the entropy considerations. Rckrone (talk) 00:58, 21 October 2009 (UTC)[reply]
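To put a rough number on "unlikely to happen by chance", here is a minimal Python sketch of the e^(−ΔS/k) estimate above; the entropy change used is an assumed, illustrative value, not anything from the thread.

    import math

    k_B = 1.380649e-23   # Boltzmann constant, J/K
    dS = 1.0             # assumed, modest macroscopic entropy change, J/K

    exponent = dS / k_B  # about 7.2e22
    # math.exp(-exponent) underflows to 0.0 for an exponent this large,
    # so report the exponent rather than the raw probability.
    print(f"reversal probability ~ exp(-{exponent:.2e})")   # ~ exp(-7.24e+22)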
On the subject of determinism, calling hidden variable interpretations deterministic is sort of questionable. The future depends on information that necessarily can't be known until after the fact. I'm not sure that really counts. More generally, determinism isn't a requirement for time symmetry, just as long as the theory is non-deterministic in the same way in both directions (which wave function collapse isn't, but quantum decoherence is).
One more unrelated point is that technically T-symmetry can be broken in the Standard Model, but CPT-symmetry holds. Rckrone (talk) 00:37, 20 October 2009 (UTC)[reply]
Chaos theory means that determinism doesn't exist as a practical matter. You can come up with mechanisms and other systems which are infinitely sensitive to their initial conditions. In theory - if you knew the initial conditions to literally infinite precision, then you could predict the outcome - but the slightest error (even an infinitely small error) - and you'd have no idea what the result might be. Imagine a metal-tipped pendulum suspended over a pair of magnets such that when you swing it, it ends up poised over one or other magnet. This is a chaotic system. If you plot the position from which you release the pendulum and color that point red if it ends up over one magnet and blue if it ends up over the other, and do that over a whole range of positions, the picture you'd get as a result is a fractal - there are places where releasing the pendulum from one point leaves it over the red magnet - and releasing it from another position that's INFINITELY close to the first results in it ending up over the blue magnet. Such systems are mathematically deterministic - but in the real world, are most certainly not. SteveBaker (talk) 01:40, 20 October 2009 (UTC)[reply]
OK, somebody's gonna ask, so might as well be me: What exactly do you mean by "infinitely close" in this context? --Trovatore (talk) 03:15, 20 October 2009 (UTC)[reply]
The region within which releasing the pendulum will leave it hovering over the red magnet - and the adjacent regions where releasing it will leave it hanging over the blue magnet - can (in places) become infinitely small - singularities if you like - meaning that any error whatever in where you place the pendulum bob before releasing it will change the outcome. In that sense, it is non-deterministic because it's impossible to place the bob with literally zero error. However, the math is simple enough - so maybe you could do a calculation to determine where the pendulum would end up if only you knew its precise starting point? After all, if you plug the numbers for a precise coordinate into the math, you do get a precise answer out - right? But only if you do all of the math to infinite precision...which you can't do. Hence there is literally no way to predict where the pendulum will end up if you release it from one of these infinitely chaotic regions. For any practical purpose whatever, it's indeterminate. Put another way - if you plotted a red or a blue dot on a piece of paper corresponding to where the pendulum would end up if you released it from that point - the result would be large areas of solid red, large areas of solid blue - and regions containing a fractal of infinite complexity...kinda like the Mandelbrot set. SteveBaker (talk) 15:42, 20 October 2009 (UTC)[reply]
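For anyone who wants to draw the red/blue picture described above, here is a minimal simulation sketch. The force law, damping and all constants are illustrative assumptions rather than anything stated in the thread, and how fractal the boundary looks depends on them.

    import numpy as np

    MAGNETS = np.array([[-1.0, 0.0], [1.0, 0.0]])   # two magnets beneath the bob
    K_PEND, K_MAG, DAMPING, HEIGHT = 0.5, 1.0, 0.2, 0.3
    DT, STEPS = 0.05, 2000

    def settle(x0, y0):
        """Release the bob from rest at (x0, y0); return the index of the magnet it ends up nearest."""
        pos, vel = np.array([x0, y0], dtype=float), np.zeros(2)
        for _ in range(STEPS):
            acc = -K_PEND * pos - DAMPING * vel                   # restoring force plus friction
            for m in MAGNETS:
                d = m - pos
                acc += K_MAG * d / (d @ d + HEIGHT**2) ** 1.5     # pull toward each magnet
            vel += acc * DT
            pos += vel * DT
        return int(np.argmin(np.linalg.norm(MAGNETS - pos, axis=1)))

    # Colour a grid of release points: 0 = "red" magnet, 1 = "blue" magnet.
    # (Coarse and slow in pure Python; shrink the grid for a quick look.)
    side = np.linspace(-2.0, 2.0, 60)
    basins = [[settle(x, y) for x in side] for y in side]

Near the boundary between the two colours, neighbouring release points flip outcome, which is the practical unpredictability being described.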
Trovatore is just picking on your wording. Distinct points in a metric space being "infinitely close" isn't meaningful since for any x and y, either d(x, y) is positive, or x = y. You probably want "arbitrarily close". The same goes for "infinitely small error". Rckrone (talk) 16:22, 20 October 2009 (UTC)[reply]
Well, no, it's more interesting than just a question of wording. Two real numbers cannot be "infinitely close" — either they are the same number, or they differ by more than 1/n for some natural number n. But the substantive question is, do the real numbers correctly describe spacetime?
It has been seriously proposed that spacetime is quantized at scales smaller than the Planck length — even that the universe is a cellular automaton or some such, living on some sort of discrete grid. I don't see any reason to think that's true, but I also can't think of any experimental way to refute it.
At the other extreme, why couldn't spacetime coordinates be correctly modeled by some structure of hyperreal numbers? In that case, two initial conditions really could be "infinitely close" to one another. --Trovatore (talk) 01:06, 21 October 2009 (UTC)[reply]
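The contrast being drawn here can be written compactly; this is a sketch of the standard statements, not something asserted in the thread:

    \text{Archimedean property of } \mathbb{R}:\quad x \neq y \implies \exists\, n \in \mathbb{N}\ \text{such that}\ |x - y| > \tfrac{1}{n}
    \text{A hyperreal infinitesimal } \varepsilon:\quad 0 < \varepsilon < \tfrac{1}{n}\ \text{for every } n \in \mathbb{N}

So in the reals two distinct initial conditions cannot be "infinitely close", but in a hyperreal model of spacetime coordinates they could be.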
Hmmm - interesting. We software engineers have a saying: "God made the integers - all else is the work of man." - with an ultimately quantized universe, that might actually contain a grain of truth. If the entire universe did turn out to be nothing more than a vast cellular automaton - I'd start to more strongly suspect that we are actually living in some vast computer simulation - think "The Matrix". In that case, everything would certainly have to be deterministic. SteveBaker (talk) 04:36, 21 October 2009 (UTC)[reply]
I thought you weren't into non-falsifiable hypotheses, Steve? If you have an experiment in mind that could falsify the claim that spacetime is granular, without specifying a particular bound for the granularity, I'd be interested to hear it.
This is actually a really interesting question for figuring out "which way your Occam's razor cuts", as someone once put it. Granular spacetime is ontologically more parsimonious, in the sense that it doesn't need the reals, and the reals are problematic for some folks -- each real number (even 0) being a completed infinite totality, in the sense that each real encodes infinitely much information all in one tidy little package.
But to me the idea that the universe is on some sort of discrete grid seems completely unmotivated, and has Occamish problems of its own — what grid, and where did it come from? Isn't the grid itself an entity that should not be multiplied beyond necessity?
On another note, I see no reason, if the universe were a cellular automaton, that it would need to be a deterministic one. What's wrong with a nondeterministic cellular automaton? --Trovatore (talk) 08:36, 21 October 2009 (UTC)[reply]
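To make the deterministic/nondeterministic distinction concrete, here is a minimal sketch of a one-dimensional cellular automaton whose update is deterministic except for a small random bit-flip probability; the rule number and noise level are arbitrary assumptions for illustration only.

    import random

    def step(cells, flip_prob=0.0, rule=110):
        """One update of an elementary CA; flip_prob > 0 makes it nondeterministic."""
        n = len(cells)
        out = []
        for i in range(n):
            neighbourhood = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
            bit = (rule >> neighbourhood) & 1
            if random.random() < flip_prob:
                bit ^= 1                     # the nondeterministic ingredient
            out.append(bit)
        return out

    cells = [0] * 40
    cells[20] = 1
    for _ in range(10):
        cells = step(cells, flip_prob=0.05)  # same start, different history on each run
        print("".join(".#"[c] for c in cells))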
The configuration of an iron pendulum over two magnets is analogous to the situation described in the article Lagrangian point. If the pendulum has any motion its history can be calculated from that motion; the accuracy of the calculation is limited by random thermal energy. It is not possible to place the pendulum exactly at either of the two stable points over each magnet, or on the metastable normal passing through the L1 point between the magnets, even though these starting positions are mathematically defined. Attempts to do so will be thwarted by thermal energy, which in the metastable case causes the pendulum trajectory eventually to deviate to one or the other magnet. However, a 2-D random thermal distribution is not like the Mandelbrot set fractal, in which there is no thermal contribution, so only an arbitrary limit is ever set to the resolution with which it is computed. From the article Chaos theory: "Systems that exhibit mathematical chaos are deterministic and thus orderly in some sense ... Sensitivity to initial conditions is often confused with chaos in popular accounts." Cuddlyable3 (talk) 17:41, 20 October 2009 (UTC)[reply]

Palladium

[edit]

Why is palladium considered to have five electron shells, even though the outermost shell is empty? Dogposter 20:00, 19 October 2009 (UTC)[reply]

  1. All elements have an infinite number of electron shells. They may only have enough electrons to fill a certain number of these shells in the ground state, but the shells are still there.
  2. That being said, palladium has the following electron configuration: 1s² 2s² 2p⁶ 3s² 3p⁶ 4s² 3d¹⁰ 4p⁶ 4d¹⁰. The expected ground state, based on its location in the periodic table, would be ... 5s² 4d⁸; however, the actual configuration occurs due to the slight lowering in energy caused by electrons which have paired spins. The point is that the difference in energy between an empty s orbital and the d orbital of the level below it (say, 4s & 3d, or 5s & 4d in this case) is very small - they are almost the same energy. Thus, small changes in energy can affect them, and the net result of the paired spins of the ...4d¹⁰ state more than compensates for the difference in energy for moving those two electrons from the 5s to the 4d orbital. So, you are technically correct that palladium only has 4 filled energy levels in the ground state. --Jayron32 20:51, 19 October 2009 (UTC)[reply]
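For readers who want to see where the "expected" configuration comes from, here is a rough sketch that simply fills subshells in the textbook Madelung order; it reproduces the naive ... 5s² 4d⁸ prediction and, by construction, not palladium's actual anomalous ground state.

    # Fill subshells in Madelung (n + l, then n) order until 46 electrons are placed.
    SUBSHELLS = sorted(((n, l) for n in range(1, 8) for l in range(n)),
                       key=lambda nl: (nl[0] + nl[1], nl[0]))
    LETTERS = "spdfghi"

    def capacity(l):
        return 2 * (2 * l + 1)   # each subshell holds 2(2l+1) electrons

    remaining = 46               # palladium
    config = []
    for n, l in SUBSHELLS:
        if remaining == 0:
            break
        e = min(capacity(l), remaining)
        config.append(f"{n}{LETTERS[l]}{e}")
        remaining -= e

    print(" ".join(config))   # ends "... 5s2 4d8"; the measured ground state ends "... 4d10" instead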
Wait, so because small changes can affect the electrons around palladium, they consider it to have 5 electron energy levels? Dogposter 21:18, 19 October 2009 (UTC)[reply]
What Jayron is saying is that small energy changes can easily knock electrons from 4d (or other levels) into the 5s. Because atoms in real materials are constantly in thermal equilibrium with their surroundings, and receiving photons and gaining and losing thermal energy, these electrons are regularly knocked into and out of the 5s "shell", even though that is not their "resting" shell. This is characteristic of many large atoms classified as transition metals. Other types of atoms, where the energy gaps are larger, do not have such a dynamic equilibrium of valence electrons. Nimur (talk) 22:51, 19 October 2009 (UTC)[reply]
That's true, but it's not exactly what I was saying. All atoms have 5 energy levels, even hydrogen. They all have an infinite number of energy levels. It's just that the ground-state electron configuration may not place any electrons in the fifth energy level. The second part of my explanation was about the fact that there is a "rubric" for determining the ground-state electron configuration of an element based on its placement in the periodic table. Based on that location alone, palladium would be expected to have the [Kr] 5s² 4d⁸ electron configuration; thus its valence level would be taken to be n=5, which would fit with it being located in the 5th period of the periodic table. HOWEVER, experimental evidence shows that palladium does not obey the expected result; its electron configuration is actually the non-standard [Kr] 4d¹⁰ configuration. The explanation for this is the additional stability generated by spin-spin coupling of electrons in THAT configuration compared to the expected configuration, which would NOT have that spin-spin coupling. To make a very long story short, palladium is described as having its valence level at n=5 since it is located in period 5, but experimental evidence shows this not actually to be so. Palladium does not actually have any ground-state electrons in the n=5 level, so technically its valence level is n=4. Still, the entire set of 5s-4d-5p energy levels is so close in energy as to make them all practically identical. The "fact" that palladium does not have any ground-state electrons in level n=5 isn't all that important, since it does not much affect the behavior of the element, chemically speaking. Such is the nature of transition metals; one can understand their chemistry much better by considering the whole set of ns/(n−1)d/np orbitals as roughly the same energy. That's how coordination chemistry works anyway. --Jayron32 04:38, 20 October 2009 (UTC)[reply]

mercury as an element

[edit]

I have these few questions which I will be glad to get positive answers to. They are as follows:

  1. Can a mobile phone torch or a normal battery torch be used to confirm that mercury (the element) is still active and good for industrial use?
  2. When exposed, how long does it take for it to become inactive and no longer good for industrial use?
  3. Is it really used in making old pendulum wall clocks, analogue telephones, and old wooden black-and-white televisions?
  4. What are its functions in these machines, since some of them still work when the mercury has been removed?
  5. What other related items is it used in?

Thanks...Emma —Preceding unsigned comment added by 41.217.1.3 (talk) 20:24, 19 October 2009 (UTC)[reply]

What do you mean by "mercury(element)"? Mercury is a chemical element, it can't be "active" or "inactive". --Tango (talk) 20:38, 19 October 2009 (UTC)[reply]
(edit conflict) See Mercury (element) for more info, especially the "applications" section for specific uses. Mercury doesn't "go bad" in the sense you seem to think it does. It is actually quite stable in the open atmosphere; it slowly evaporates, but not on any time scale that would make it disappear. When heated, it will react with oxygen in the air to form mercury oxide compounds, but it generally does not react that way at room temperature, so it's pretty stable stuff. Mercury in devices like, say, one of those old mercury tilt switches, will last pretty much indefinitely, or at least longer than the useful life of the device. --Jayron32 20:41, 19 October 2009 (UTC)[reply]
Two uses that the OP asks about aren't covered in the Mercury article. Mercury was used in the bob of a pendulum clock to provide temperature compensation - as the temperature rises, the length of the pendulum rod increases, making the effective length of the pendulum greater and slowing the clock down. The expansion of the mercury in the bob raises its centre of gravity, reducing the effective length of the pendulum and keeping the clock's speed accurate. An alternative way of doing this without using mercury is to make the pendulum rod out of two metals with differing expansion rates, arranged to cancel out thermal expansion (see Bimetallic strip). In electronics, apart from the simple tilt switches mentioned in the Mercury article, it was used in the Ignitron rectifier, to produce DC power from an AC source. However, these were big industrial-scale devices, and wouldn't have been used in a domestic telephone or television. Tevildo (talk) 21:11, 19 October 2009 (UTC)[reply]
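For a rough sense of how much that compensation matters: the period of a pendulum goes as the square root of its length, so a temperature rise of ΔT changes the period by about ½·α·ΔT. A back-of-the-envelope sketch (the expansion coefficient is a typical textbook value for steel, assumed here for illustration):

    alpha = 12e-6            # linear expansion coefficient of steel, per kelvin (typical value)
    dT = 1.0                 # temperature rise, K
    seconds_per_day = 86400

    fractional_period_change = 0.5 * alpha * dT     # period ~ sqrt(length)
    print(fractional_period_change * seconds_per_day, "seconds lost per day")   # ~ 0.52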
(edit conflict)... and in answer to Q5, you will find much larger quantities of mercury in old thermometers and especially old barometers. The amounts in your list at 3 will be minimal (or zero) and not worth trying to recover (except for ancient pendulum clocks that are worth much more with the mercury in place). Dbfirs 21:17, 19 October 2009 (UTC)[reply]
The OP might be asking about mercury battery - some of these questions seem like non sequiturs. Can you rephrase some of these questions? It might help us answer them better. Nimur (talk) 23:01, 19 October 2009 (UTC)[reply]
Restriction of Hazardous Substances Directive (the new electronics chemical safety standards) might be helpful (if not to the OP, maybe to some other responders). Mercury has been used in electronics, specifically in certain lights (I think some types of fluorescent bulbs and halogen bulbs used a mercury oxide coating inside the glass bulb). See also, Mercury-containing lamps and mercury-containing equipment from the US Environmental Protection Agency. Nimur (talk) 23:05, 19 October 2009 (UTC)[reply]
Mercury was also frequently used in switches and mercury-wetted relays - these were used in telephones and thermostats and such. In a regular switch, the metal-on-metal contacts would wear out - but if one of the contacts is liquid mercury, there is almost no friction and the switch lasts MUCH longer. I can't think why it would be used in an old black & white TV - but it's possible it was used in the manufacture of the tube or something. These days, the toxicity of mercury is causing a steep decline in the number of places it's used. SteveBaker (talk) 01:21, 20 October 2009 (UTC)[reply]
See Mercury switch (my thermostat and bedroom lights are still of this style). DMacks (talk) 18:36, 20 October 2009 (UTC)[reply]

DNA contains entire human genome?

[edit]

I was told that all humans possess exactly the same genes and that the only difference is in the sequences of nucleotides, which accounts for physical differences. So I was wondering: if a person like Shaquille O'Neal had a son with a woman, would the only thing we need to do to make the son look exactly like Brad Pitt be to give him different nucleotide sequences? —Preceding unsigned comment added by 139.62.166.238 (talk) 23:17, 19 October 2009 (UTC)[reply]

I don't understand - the nucleotide sequences are the genes. About 99.9% of human DNA (for a particular way of measuring it) is the same in all humans (see Human genetic variation); the other 0.1% accounts for all the variation we observe. Turning one person's DNA into another person's would probably be beyond our genetic engineering abilities (but if you have the other person's DNA you could just try and clone them - we don't have that ability yet, but we aren't far off). It also wouldn't be enough - physical appearance is determined by a combination of genetics and environmental effects, so you would end up with someone similar looking, but not identical. --Tango (talk) 23:39, 19 October 2009 (UTC)[reply]
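For a rough sense of scale of that 0.1% figure, here is a back-of-the-envelope sketch; the genome size is the usual approximate value, assumed for illustration:

    genome_size = 3.2e9          # approximate haploid human genome, base pairs
    differing_fraction = 0.001   # the ~0.1% mentioned above
    print(f"~{genome_size * differing_fraction:.1e} differing positions")   # ~3.2e+06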
As a gross comparison, you might get someone that would look like Jose Canseco and would hit like Ozzie Canseco. ←Baseball Bugs What's up, Doc? carrots 00:52, 20 October 2009 (UTC)[reply]
It's also worth mentioning that a lot of information is also extra-genetic - promoter sequences can determine how much of a gene is produced, methylated genes can be turned off and on, and some genes code for RNA products that can regulate gene expression, to name a few. ~ Amory (utc) 02:41, 20 October 2009 (UTC)[reply]
Just to clarify: what Amory mentioned in his response as "extra-genetic" does not mean "outside the genome"; he just meant "outside of traditional genes". As far as we know, all the information making up an organism is stored inside the genome and dynamically interacts with the environment. --TheMaster17 (talk) 10:17, 20 October 2009 (UTC)[reply]
Don't forget Mitochondrial DNA - it's not really relevant to this issue, but it is there in addition to the regular genome. --Tango (talk) 10:34, 20 October 2009 (UTC)[reply]
In my view and usage of the word, the genome encompasses all DNA molecules in a cell, regardless of where they are packaged (in this case in organelles). So the "genome of a human" would mean the nuclear chromosomes as well as the mitochondrial DNA. There is no reason why they should be treated differently: both contribute to the phenotype, both are copied into daughter cells in mitosis, and both determine the "genetic information" in an organism. But you are right that some people may forget that there is more DNA in a cell than just the "ordinary" chromosomes in the nucleus: animals have mitochondrial DNA, plants have DNA in chloroplasts as well, and bacteria have no nucleus but can harbor various kinds of plasmids, which are separate from their chromosomal DNA. And then there are viruses around that can copy themselves into and out of another stretch of DNA, so they may or may not belong to a given "genome" at a certain time point. --TheMaster17 (talk) 11:35, 20 October 2009 (UTC)[reply]
I think the extra-genetic thing needs further explanation. Amory mentioned promoters, and there are also other parts of the noncoding DNA which affect regulation and expression. This remains an area that we're only beginning to understand. However, all of this is still part of the DNA sequence (or nucleotide sequence, as the OP put it) and therefore is something we should be able to get a representation of via DNA sequencing (although current methods mean there are areas that are basically not accurately sequenceable; a read of human genome may help). However, Amory also mentioned DNA methylation; this isn't part of the DNA sequence proper but is instead part of the genome sometimes talked about as the epigenome. It's even less understood than non-coding DNA and is not something which traditional DNA sequencing tries to uncover. Epigenetic code is rather short but should be of interest, and if you don't already understand it, having a basic understanding of the genetic code should help. Nil Einne (talk) 12:49, 20 October 2009 (UTC)[reply]
Sorry I brought it up! At any rate, everyone's verbose explanations of my examples are correct, but as Nil Einne says, just because something is genetic doesn't mean there aren't other factors that lie outside of the DNA genetic strand. Hormones and steroids are good examples - the production levels of these compounds widely vary depending on your environment and experiences, and they can greatly affect how your genes are expressed. Your genetic code is all the information, but how that information is used, or even how much, often lies outside of the chromosomes. We are also increasingly finding out that tertiary structures of molecules such as RNA play a very large role in translation; whether you consider RNA folding based on structure to be genetic or extra-genetic is, I suppose, a matter of semantics. ~ Amory (utc) 17:51, 20 October 2009 (UTC)[reply]
To phrase things a slightly different way, we all have two copies of every gene (with some exceptions). Genes are present in the population in different "versions" called alleles. So -- to utterly simplify -- it isn't that someone has "the gene" for a certain characteristic, it's that they have a certain combination of alleles that promotes that particular characteristic. Top that off with a significant environmental influence for most complex traits and you have a unique individual. It's just more complicated than changing a few nucleotides here and there. --- Medical geneticist (talk) 14:33, 20 October 2009 (UTC)[reply]