User talk:SteveBaker/archive13
This is an archive of past discussions about User:SteveBaker. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
new WP:RDREG userbox
This user is a Reference desk regular.
The box to the right is the newly created userbox for all RefDesk regulars. Since you are an RD regular, you are receiving this notice to remind you to put this box on your userpage! (but when you do, don't include the |no. Just say {{WP:RD regulars/box}} ) This adds you to Category:RD regulars, which is a must. So please, add it. Don't worry, no more spam after this - just check WP:RDREG for updates, news, etc. flaminglawyerc 22:07, 5 January 2009 (UTC)
- Hmmm - we also have:
This WP:RD denizen is:
- ...with a bunch of silly options for checking different boxes.
- SteveBaker (talk) 23:35, 5 January 2009 (UTC)
learn how to spell
- it's -> its
- American's -> Americans
-lysdexia 17:31, 6 January 2009 (UTC) —Preceding unsigned comment added by 69.108.164.45 (talk)
- Informal communication - you knew what I meant. Meh. SteveBaker (talk) 17:59, 6 January 2009 (UTC)
Are we cool?
Hi Steve. Someone on the Ref Desk talk page raised a suggestion that I had tried to 'stifle debate' on a previous policy proposal of yours. I'm assuming that they meant this one. While I disagreed with your proposal at that time, I also aimed to treat you with courtesy and respect; I also had no intention of trying to close off the discussion. Please let me know if you think I was being a jackass; if you found my remarks offensive or my tone imperious, I apologize most sincerely.
I genuinely appreciate the effort that you put into the Ref Desk (both its content and its policies), and wouldn't want to lose your hard work or your insightful approach to policy.
And now I've gone and done it again. I think that your proposal is pretty reasonable, but I fear that it's vulnerable to gaming and ruleslawyering. As is always the case, your mileage may vary, and I may be worried about nothing. I think, though, that your proposal is aiming to codify something that can already be comfortably done – in the relatively rare circumstances where it might be necessary – using equal parts common sense and WP:IAR.
Postscript: I seem to keep addressing you as 'Steve' — if you're finding it presumptuous, rude, or plain annoying, let me know. Cheers! TenOfAllTrades(talk) 23:50, 7 January 2009 (UTC)
- huh! I didn't think you stifled debate at all. So - no, I'm not upset about anything. Weird. I have a tendency (typical of Asperger's people) of feeling happier when things are clear cut and concise - so I like extra rules. This is a fault - and I recognise it. So often I'll propose a rule (because I like that kind of thing) with the expectation that nobody else will be keen on it.
- And please - call me "Steve" - that's my name (well, no - actually, it's "Stephen" - but only my mother calls me that).
- So - I don't know why all of this seemed necessary - I thought we were getting on just fine (but again, that's another Asperger thing - I have no clue about all this interpersonal stuff!)...now I'm left wondering what *I* might have said to give the impression that I was upset?! Urgh! Give me computers any day!
- Don't worry; I didn't get the idea that you might be upset from anything that you said or did. DuncanHill asserted recently that I was trying to stifle the discussion at WT:RD on your last proposal, and I wanted to make sure that things were clear between us. You've always come across as a calm, cool guy, and I didn't want to be standing on your toes while you were being too polite to raise a fuss.
- I've occasionally said things on Wikipedia talk pages which come across as much more unpleasant than they ought to be. I've been upset, or I've misread someone else's remarks, or I've just phrased something badly. When I looked back at what I wrote earlier, I didn't think that I had said anything out of line, but I figured I should check my calibration.
- By the way — don't stop trying to find ways to improve our policies. I can tell that you're looking for ways to help people navigate Wikipedia as smoothly as possible. As an admin, I (perhaps unfortunately) tend to examine policy with an eye for ways in which it is likely to be abused; I don't want to discourage your efforts. Happy editing! TenOfAllTrades(talk) 03:31, 8 January 2009 (UTC)
- Yeah - it's VERY easy to misread intent in any online communication, so erring on the side of WP:AGF starts to become a habit! I think it's essential that Policy (with a big 'P') be as safe from abuse as it can be. But the WP:RD stuff is Guidelines - and for that stuff, ignore all rules definitely applies. What we need is a way to say to people "what you did is considered wrong by our community - please don't do it again" - and when someone ignores a guideline for perfectly logical, sensible reasons, we can say "pheh - it's only a guideline". It's not policy precisely because that kind of flexibility works when we all AGF and act together to tell people when they are doing wrong. There are very few - if any - unreasonable people working on the RD. The questioners however...yeah...well...that's a different matter.
- I don't know how Duncan got the impression that I was upset or that discussion was stifled...how the heck DO you stifle discussion on an open forum like the RD talk page anyway?! Certainly I didn't get that impression. My suggestion was discussed - the consensus to implement it clearly wasn't there - game over. Move on, try to find another way to make things run more smoothly.
- IMHO, the proposal was worthwhile. I've been outspoken in the past about people changing, redacting or otherwise messing with people's answers to questions. That's dangerous because (especially on places like the Science and Math desks) the tendency to want to correct a "wrong" answer is very strong - and the consequences of people editing and/or deleting each other's answers are potentially damaging to our ability to provide good answers. There have been many cases when I've come back to the RD after a couple of days away and seen a long series of replies to a question - all of which were flat out wrong - and I needed to step in and explain why. If any of those people wants to stick by their guns and is allowed to delete (or worse still, edit) my reply then that would be a disastrous thing for the lone truth-teller. On the other hand, there have been several occasions (one of which is personally very embarrassing - when I got Newton's law of cooling wrong!) when what I have said has been loud and impassioned AND flat out wrong...and it's just as important that I didn't delete the original (correct) answer in the process. So deletion and amendment of answers is something I'm passionate about wishing to disallow - that should be a line that we don't ever cross.
- The deletion (but NOT amendment) of questions and (perhaps) their entire thread of answers is a different matter. We have rules about medical/legal/homework questions that are actually very important to our mission at the RD - and whilst our respondents are almost all smart, reasonable people - our questioners are all over the map: vandals, trolls, annoying little kids who want to talk about sex, people with so little English skill that they are all but incomprehensible...all of these things are par for the course. I'm not even opposed to removing all of the answers (all or none though!) if the question itself is deleted. In this case we are "un-asking" the question - and replies to an un-asked question are not necessary - so they can go too. But I'm still passionate about not CHANGING what the questioner wrote - those words belong to them - rightly or wrongly. So it should be an all-or-nothing thing. Either the question stays or it goes. In general, it should stay - people can always choose not to answer it - and that works reasonably well in practice. The time pressures on the RD are what make this a pain to deal with. We need mechanisms to allow prompt, approximate decision-making - with Wikipedia consensus-making not getting in the way of the time pressures...yet still using consensus to decide (possibly after the fact) whether or not a given call was a good one. In that way we slowly build up "case law" about what we accept and what we delete - which can be used to help the fast/approximate decision-making do its thing. This is not all that different from the way that AfD works (for example), with rapid deletion of articles that are "obviously" unacceptable based on a kind of "case law" of what we accepted in the past - with an appeals process and a means for more difficult cases to get pushed through full-scale consensus building.
Nuclear battleships
Hi Steve, with respect to this, yes indeed all steel is contaminated to a certain degree since the onset of atmospheric nuclear explosions. This is due to the use of atmospheric air and atmospheric-derived oxygen in iron- and steel-making. The atmosphere was and is contaminated with fallout radionuclides.
I know this because it was one of the very first questions I dared to ask at SciRef. I read it in SciAm, probably in the '80s or early '90s (long before I dropped my subscription after SciAm turned into a fluff-rag; they used to have such excellent diagrams, plus Gardner and Hofstadter and hard science articles). The tickler for me was their piece describing how steel used in satellites came exclusively from old battleships, so as not to disrupt the sensitive on-board equipment. The difference is that old steel can be remelted without introducing new oxygen, while other alloying agents can be tossed in as solids.
You could find my thread by searching SciRef within the last 12-18 months, title I believe was "Battleships in Space" (or cap/locase variants). I also have a ref discussing how to pin down the age of corpse-teeth by radionuclide content from Nature journal, which I thought I'd added to an article but can't find now. Anyway the effect is real, sorry for being a little late bringing this up! Franamax (talk) 06:36, 8 January 2009 (UTC)
- Note that the radionuclide content in teeth comes from the atmosphere via food, not from the journal itself. :) Franamax (talk) 06:43, 8 January 2009 (UTC)
- Cool! Well, I was merely skeptical about this business - I pointed out (repeatedly) that I didn't know for sure and that it basically just seemed a little 'fishy'. Knowing that excluding atmospheric oxygen from the smelting process is the key goes some way to explaining what's going on - although it still sounds very wrong to me. Thanks for the correction! SteveBaker (talk) 13:32, 8 January 2009 (UTC)
- Hi Franamax - if you're still looking for the age-by-anthropogenic-radiocarbon paper, it's here: [1]. TenOfAllTrades(talk) 14:43, 8 January 2009 (UTC)
- Thanks TOAT, I have a Nature subscription, so I can search their archives (I do that early and often :) It needs to be incorporated somewhere into one of our various articles on "Effects of nuclear testing". I know I made some kind of change to something related at the time, and I discussed with someone creating an updated graph of atmospheric radionuclide levels, but bugger if I can find it now. There's a downside to getting involved in too many things, the loose ends get overwhelming... :(
- Steve, there's no correction involved, since you made no assertions. Think about it though - air is needed to smelt iron and oxygen is needed to make new steel, so unless you have a really major fractionation process to purify the air/oxygen, you have to live with the contaminants. That's normally not a problem, except when you're building an enclosure for a really sensitive neutron/alpha-particle/beta-particle/gamma-ray detector. You can either characterize the emissions from your "bad" steel and subtract from the data, or find better steel.
- As it happens, there is a source for "good" steel, you just have to cut it off a sunken battleship. Then you re-melt (or re-heat) it under controlled conditions and shape it to what you need. I imagine you'd still need to deal with the surface oxide layer (hot-form then pickle&oil maybe) but you avoid the many steps needed to fully purify the process air. It's one of those cool things that catch my eye and clutter up my brain. I'd love to find that reference, but I have no special access to SciAm, I dropped my subscription around the time teh interwebs was being inventitated. :) Franamax (talk) 02:42, 9 January 2009 (UTC)
A way too long reply
This is a little late but I know you always like to talk about science. This is an extension of the dialogue on "sexual reproduction would never work" - sorry I didn't respond there, but I got sidetracked. I still think your last post was way too reductionist in treating a base pair as a bit of information. I can see that I wasn't giving practical examples and was just being way too abstract. What I want to say is that not all DNA sequences are equal. Some bonding patterns are more robust and are less likely to suffer replication errors. Some bases, such as cytosine, are more susceptible to methylation, and cytosine is even more susceptible at a CpG site. There are weak sections of DNA between the genes that have been proposed to be there simply to accept damage, from light and potentials, in order to protect the sections that require high fidelity. The structure of a base pair and where it sits in a codon appear to be directly related to what amino acid it codes for - for example, if there is a uracil in the second position, the codon will always code for a nonpolar amino acid. You could argue that this has to do with how it interacts with a transcription agent, but you wouldn't have the support of the biochemists (check "Theories on the origin of the genetic code" on the genetic code page). Synthetic base pairs without meaning in the four-letter system can be introduced. Some of the proofreading proteins appear to be designed to remove natural base pairs in the wrong place and ignore unnatural ones (that's based on a PhD thesis defense I went to a few years back). These are just a few of the known examples where DNA behaves as more than just 00, 01, 10, or 11; I'm pretty sure in a pure "four-letter" system these other functions would be independent. I wish I could cite everything for you but I don't have time, and I suspect you already know much of this. I'm sure there are similar examples for things that happen at PN junctions, but I bet most of them just result in failures of the system's function. In contrast, these asymmetries in the nature of DNA "bits" have been incorporated into the system's function. DNA can be considered a digital system built on a substantially more complex system; a complex system which significantly influences the function of the digital system. I don't know how you would describe this system in "information theory" but I expect it gets a distinction from data that is stored without such operational conditions. I'm interested to see if you have anything more to offer on the subject. Personally, I am always bothered by reductionism in the treatment of things like DNA and neurons. People act as if they understand how "action potentials" combine, as if they knew how dendrites' "math" works. I'm referring to passive cable theory, which in many textbooks is treated like gospel. In reality the neuron is doing a ton more than this model describes - a good thing, or we probably wouldn't be having this conversation. I guess the normal audience at the reference desk usually just needs the intro-textbook explanation. Finally, your arguments carry a lot more weight when you don't make personal comments. I'm well aware of what I do and don't know in science. I don't know any more than the average layman about information theory, which I would guess is next to nothing; but I know a fair amount about DNA and PN junctions.--OMCV (talk) 02:09, 9 January 2009 (UTC)
- The whole "DNA is binary data" and "mish-mash of software" thing missed a few points. The "software combination" involved in meiosis, gamete formation and zygote formation is actually controlled by a pre-existing "software" mechanism, defined by the software itself, and recursive levels of pre-existing oversight. This includes selective imprinting of the maternal and paternal genomes, homologous recombination during meiosis, nucleosome patterning, histone acetylation, RNA interference, blah blah... Suffice to say that if we'd developed computer software within the context of such recursive machinery, such that the only way the software could exist was within its evolved framework, combining two programs would be a snap.
- Another missing point was the estimate of only a 1 in 100 chance that an embryo could develop. That may indeed be correct in any case. I spent a while searching for statistics on that, but the search gets clouded by "in vitro" results, so I bailed out. It's definite though that not all gamete combinations result in a competent embryo, so software conflicts indeed seem to occur. Franamax (talk) 02:59, 9 January 2009 (UTC)
OK there are lots and lots of long explanations and complicated jargon and all sorts of interesting stuff in those posts...but the sad part is that NONE of that matters - not at all! The fact is that at each point in the DNA strand there are only four possible things that can happen next - you have A,G,C,T...there isn't a Q or a Z. That means that like it or not - I don't give a damn how those bits are INTERPRETED by the biological machinery - I only care that there are only four options - that's two bits of data no matter how you slice it. This isn't a question about biology - it's a question about choice - there are four choices - that's two bits - bits are a measure of choice. If someone finds another four new base-pairs in the DNA strand of some weird creature (P,Q,R,S say) then you now have 8 choices and that's three bits per base pair.
I can try to explain - but you clearly don't understand fundamental information theory - and that's a problem.
Let me try one more time - how about this. If I type:
F:P()*%$)(Q$P N@
...that's 16 random bytes of data - 128 bits in total. This string of bits is meaningless gibberish - but it's still 128 bits. You can count them. On the other hand, I could write:
17th July, 1955.
...that's also 16 bytes of data - it's also 128 bits in total. It's obviously a date - and now it has some useful information content. But it's still not very important information. Now, if I embed those exact same 128 bits into a larger context:
Steve's Birthday is 17th July, 1955.
...then your mental processor (or your computer) can do more with it. Suddenly - those exact same 128 bits are more useful to you. But there are still only 128 bits - the raw information content doesn't change whether it's gibberish or useful data. If I take (again) that exact same data and put a couple of square brackets around it: [[17th July, 1955]] - then that same exact 128 bits will act as a link to a document - or in this case, a command to MediaWiki to open an edit window if you click on it.
The same is true of the DNA data - it doesn't matter a damn what you're telling me about how that data is processed, or what fancy processing happens, or that this letter combination here means something different in this context than in that context. It's STILL precisely 2 bits per base pair and cannot ever be anything other than that.
I don't doubt that the same bit pattern (or base-pair sequence) does different things in different contexts - that's not additional data in the sequence - it's other things external to that sequence that changed the MEANING of those bits. Just like my birthday - which means something quite different when it's presented without context, with my name in front of it, or inside double-square brackets.
This is the same thing with software and computer data of all kinds. If I take a string of 8-bit 'base-pairs' (bytes) and write:
i = i + 2
...and present it to a mathematician - he's going to say "Hey - that's not true!" - but present it to a C-language compiler and you'll produce a machine-code instruction to add two to the variable called 'i'. The MEANING of those bits depends on the context - but that has IN NO WAY CHANGED THE NUMBER OF BITS.
Furthermore - although "s49087()*&_#$&rY" is seemingly gibberish, and it's 128 bits, it's not very useful. It has fewer than 128 bits of USEFUL information in it...it's always possible that the useful content of your DNA base-pairs is less than 2 bits each - and that's almost certainly true because, of the 64 codons they form, several code for the same "STOP" command...that makes those STOP codons interchangeable - which effectively reduces your useful choice of codon to less than 64. But one thing information theory teaches us is that you can't ever GAIN bits. If something takes 128 bits to store - then AT MOST there can only be 2^128 different states - so you can't ever store more than 128 bits in that space. The same is true of your base-pairs. They can NEVER be worth more than 2 bits each - no matter how much fancy science you tell me about them.
So, I'm sorry - but all that you wrote above is utterly irrelevant - I don't even need to read it. There are only four kinds of base-pair in a DNA strand - so they are 2 bits per base-pair - and nothing that biologists can say will change that...it's pure information theory...mathematics. If some biologist tells you otherwise - then he doesn't know what he's talking about.
SteveBaker (talk) 04:52, 9 January 2009 (UTC)
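To make the counting concrete, here is a minimal C sketch of the point about the two 16-character strings (the strings are the ones quoted in the thread; the program itself is just an illustrative aside, not something anyone in the discussion wrote):
#include <stdio.h>
#include <string.h>
#include <limits.h>

int main(void) {
    /* Both strings are 16 characters long - the same number of bits,
       regardless of whether the content is gibberish or a date. */
    const char *gibberish = "F:P()*%$)(Q$P N@";
    const char *date      = "17th July, 1955.";
    printf("gibberish: %zu bits\n", strlen(gibberish) * CHAR_BIT);
    printf("date:      %zu bits\n", strlen(date) * CHAR_BIT);
    return 0;
}
Both lines print 128 - counting bits says nothing about whether the bits mean anything.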
- What he wrote isn't entirely irrelevant, Steve. It's worth reading. If you did, you'd note he mentioned DNA methylation, particularly at cytosines. We call nucleotides A, T, C and G, and think of them as the sole basis of information. However we often forget that this is a shorthand code for a molecular structure: C stands for cytosine. But, in many species, a significant number of what we just refer to as "C" are actually a different structure, 5-methylcytosine (we'll call it C'). So we actually have a common fifth possibility (there are other, rarer modifications; in some species, adenine also undergoes methylation). Then consider that this switch from C → C' is transient and bidirectional. So any given cytosine should probably be considered as having twice the potential for information storage. I agree with your wider point about how the increasing complexity of biological systems doesn't really impact the information capacity of DNA, but it's worth noting that the complexity actually stretches to the very structure (and hence information-coding potential) of DNA itself. Rockpocket 06:16, 9 January 2009 (UTC)
- (e/c) A - C - T - G - methylC - methylCpG. All represent distinct items of information; can you enumerate them in two bits? Methylated cytosine ≠ cytosine. Information theory ultimately has to account for the information actually conveyed. By the same token, analyzing the information content of triplets has to account for the redundancy of codons, so you can't say that a triplet conveys more than (roughly) 24 pieces of information in the context of protein coding. This is the reverse case: methylation status represents an extra bit of information. Put it another way: if you methylate all the C's in your genome, the total quantity of information is the same, but life will be radically different for you. You propose to say that no information has changed, because they're all still C's. Franamax (talk) 06:21, 9 January 2009 (UTC)
- (Of course I read it! I only meant that I didn't NEED to read it in order to explain the issue of context). OK - well if there are "other", functionally different base-pairs kicking around (let's call this methylated cytosine C') - then the original assertion that there are only four base-pairs was incorrect and I have been misinformed! So we do indeed have to increase the bit count from 2 bits to a little more than 2. Technically - the number of bits is the log-to-base-2 of the number of possible states. So with 5 base pairs the number of bits is log2(5) - don't worry that this is a fractional number of bits - that's a common thing (the digits of a decimal number, for example, contain log2(10) bits of information each - somewhere between 3 and 4 bits). But the CONTEXT in which these things appear is still irrelevant (in information-theoretic terms) to assessing how much data there is. Suppose there were some places in the genome where it didn't matter whether a particular codon contained a C or a C' - that wouldn't change the number of bits that were being encoded - only the efficiency with which the underlying machinery uses them. So we have to revise our idea about the total information content of a DNA strand to be (log2(5) x NumberOfBasePairs) instead of (2 x NumberOfBasePairs) - but that's the only thing that changes. SteveBaker (talk) 18:53, 9 January 2009 (UTC)
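To put numbers on the log2 arithmetic above, here is a tiny, purely illustrative C sketch (the 1000-base strand length is an arbitrary made-up figure, not anything from the thread):
#include <stdio.h>
#include <math.h>

int main(void) {
    /* Bits per symbol is just log2 of the number of distinct symbols -
       a fractional answer is perfectly normal. */
    printf("4 symbols : %.2f bits per base pair\n", log2(4.0));      /* 2.00  */
    printf("5 symbols : %.2f bits per base pair\n", log2(5.0));      /* ~2.32 */
    printf("10 symbols: %.2f bits per decimal digit\n", log2(10.0)); /* ~3.32 */

    long n = 1000;  /* arbitrary strand length, just for illustration */
    printf("%ld base pairs of a 5-symbol alphabet: at most %.0f bits\n",
           n, n * log2(5.0));
    return 0;
}
(Compile with -lm; the last line prints roughly 2322 bits.)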
- I still have a hard time separating the functional role that results from information beyond the A, C, T & G code - and that exists in all naturally occurring contexts - from the pure data-storage aspect. Even if it's not important for storing the data set, it seems important for describing what is stored in the system. But let's see if we can put one part of what I said in Steve's terms. Take the data string "F:P()*%$)(Q$P N@"; that would store fine in silicon, but let's say there is an alphanumeric data system like DNA that would store the "P()" with 99.99% fidelity while "Q$P" has only a 90.09% fidelity. As it turns out, every time the "$" is used the fidelity at that position and the adjacent bits drops considerably. In fact any effort to send you a "$" would make the surrounding information suspect. That seems significant to me in reducing the system to bits. I have comments on a few more sections but I'm trying to keep my reply tight. Have a good one.--OMCV (talk) 13:08, 9 January 2009 (UTC)
- The problem you're having is that the CONTEXT in which data occurs matters (both in DNA strands and in software and human speech and in all other systems that I can imagine) - and so do the PROCESSING RULES which are applied when you 'express' a gene or 'execute' a computer program in a given context with a given processor. But neither of those things alters the amount of information in the data itself - the total number of distinct outcomes that could possibly result from expressing/executing it. That's precisely WHY we insist on counting bits and not looking at end results.
- If we stick to that rule then the total "information content" of software and DNA can be directly and meaningfully compared without having to descend into all the gory details of how cellular biology works - and we don't have to wonder whether the answer for the computer data would be different on a Mac or a Linux machine. It's JUST information. That's what "bits" are - a simple measure of the number of possible combinations a particular kind of object can represent.
- You keep telling me that the same sequence of base-pairs produces different results depending on the surrounding context (and - according to you - this means it's "worth more" than digital data). I'm telling you that this is true of ANY binary data - synthetic or natural - and it doesn't affect the number of bits that the data can represent. Just because the binary pattern for a computer program can be pushed through a loudspeaker to make an annoying squawk - or put into a digital picture frame to produce a random-looking splatter of pixels - doesn't mean that I can claim that the binary data has any more bits hidden inside it - it's the exact same data - it's just been expressed as audio or light using a different processor. Similarly, a 2000-base-pair DNA sequence that happens to produce (I dunno) insulin when you play it forwards and (say) L-tryptophan when you play it backwards - or in the presence of an acidic environment or whatever the heck it is that cells do...that's irrelevant to the DNA sequence itself - it's still only 2000 choices between 4 (or maybe now it's 5) base pairs - so it's STILL only 4000 bits of information. The thing that "decided" to express it as one thing or the other added extra information of its own to make that happen. That extra information was either originally stored at some other location in the DNA - or is an environmental factor or something - but it wasn't in that 2000 base-pair sequence because it doesn't have any 'space' to store anything more than the 4000 bits we can 'see' by counting the base-pairs and multiplying by log2 of the number of kinds of base-pair.
- The business of sending me a '$' using DNA and it only coming out as a '$' 80% of the time is again, to do with the processing. You evidently have a faulty replay mechanism ("faulty" is the wrong word in a biological context...but not in a computing context). My car ignition key has three positions - and you can take the key out altogether. It's a 2 bit system (four states). The fact that my car fails to start 20% of the time when I put the key in and turn it all the way to the right is neither here nor there. It's still a 2 bit switch. Same deal with the '$' stored in your DNA.
- SteveBaker (talk) 18:53, 9 January 2009 (UTC)
- I think I understand how our ideas relate at this point. I'm surprised that "information theory" does not have language to describe when a "faulty replay mechanism" or a processing mechanism is intrinsically tied to a type of bit within the constraints of a specific system.
- So for the sake of review there is the "unique data set" and the "context". The "unique data" of DNA can be reduced to two bits per nucleotide; fine, I agreed with that a long time ago. The "context" is more complex and contains data that is not unique; clearly I have a hard time considering this data separately from the "unique data". After all, I'm a chemist and thus like empirical, real-world examples; first principles do little for me. After all, you were the one who said nothing can exist without a context, so it bothers me to do thought experiments without a context. Let the physicists have their theory and abstractions. Nonetheless I see the value: there is only one of four possible states of unique data per nucleotide, and DNA can be reduced or compressed to that abstraction.
- In silicon the "context" is usually divided between the hardware, which allows for binary states, and an "adjacent unique data set" that acts on the "unique data" we care about to achieve a "function". In DNA the context is again partly external hardware - the environment of the test tube or cell the DNA resides in, with buffer, small molecules, proteins, different forms of RNA; suffice it to say that the significant states far exceed binary. Even if the DNA is a four-state system, the context is already way more complex than a PN junction. The second portion of the context, which is intrinsic to DNA's function, is redundantly stored with each nucleotide - actually each of the four bases carries a slightly different portion of the overall context. The amount of data embedded in the nucleotide "function control system" is enormous and complemented (if that's the right word) by the ability to act on that data. This differentiation between adjacent and embedded context I find interesting. Thanks for your time, have a good one.--OMCV (talk) 04:17, 10 January 2009 (UTC)
- I think we're all coming at this from different viewpoints. Steve is taking the quite correct approach that if the choices are CTGA, it's a two-bit system, no more, no less. Others of us are saying that 1) there are more than just those four choices; 2) it depends on the context; 3) it's a "lossy" system. Steve is right that in terms of pure information capacity, 1) is the only determining factor - how many possible combinations can be made? C-C' and T-U substitutions are two salient examples of why there are more than just four possibilities. However, there are others, so the true information content of a DNA strand would need to consider quite a few possible chemical modifications. Then there's the base-pairing bit - two strands which are not paired in perfect fidelity. This delves into the context part, because depending on which strand is being read, the information content may or may not be different. Nevertheless, mispaired bases represent another contribution to the total number of combinations in a DNA strand of an arbitrary length.
- Not all these combinations are relevant, so I'll dare to suggest a synthesis that also addresses the lossy nature of DNA: we should instead be thinking of DNA as a communication medium (which Steve has been doing all along I think). Then we have to look at channel capacity, and it's a "noisy channel", where B is the "bandwidth" representing the total set of permutations, S/N is the signal-to-noise ratio defined by the context, C is the maximum rate through the channel, and R is the effective rate defined by the error-correction capacity of the system (the "context"). Steve would know more about that than me, but it's certainly more complex than just CTGA. Franamax (talk) 07:01, 10 January 2009 (UTC)
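For reference, the relation Franamax appears to be gesturing at is the Shannon-Hartley capacity formula; here is a minimal, purely illustrative C sketch with made-up numbers (nothing biological about them):
#include <stdio.h>
#include <math.h>

int main(void) {
    double B   = 3000.0;              /* "bandwidth" - a made-up figure            */
    double SNR = 100.0;               /* signal-to-noise ratio, as a plain ratio   */
    double C   = B * log2(1.0 + SNR); /* Shannon-Hartley channel capacity          */
    printf("C = %.0f bits per second; an error-corrected rate R can approach but never exceed C\n", C);
    return 0;
}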
- Yes, exactly. This is why it's important to stick to a rigorous counting system at each level of abstraction. Coming back to what I happen to understand the best - computers - we have a system where a RAM memory cell is strictly 1 bit. That's purely an implementation decision - it wouldn't make ANY difference if we had chosen to store voltages of 1, 0 and -1 rather than just 1 and 0 as we choose to do. We'd have had a 3-state logic system with about 1.58 bits per cell. But nobody other than the guy who built the computer would have to care about that. From everyone else's perspective, it's just bits. That's a liberating way to limit the complexity of reasoning about the system. Similarly, in a binary computer, we typically group our bits into bytes - collections of 8 bits - and arbitrarily choose to store numbers in the range 0..255 in those bytes. We now have a base-256 logic system built on top of our base-2 logic system. However, that's not the only way. In the 1970s, when microprocessors were new and we were all figuring out the best way to use them, many people decided to group their bits into 'nibbles' of 4 bits each and to store a decimal number with one digit in each nibble (this is called 'Binary Coded Decimal' - BCD for short). Since a nibble can store 4 bits - it has the ability to store things in the range 0..15 - but there are only 10 decimal digits - so about a third of that range was wasted. Now - when you ask "How much storage space is there in a BCD computer?" you have to answer on more than one level:
- At the hardware level, we have bits and nibbles - so you just add up the number of bits.
- At the BCD level though - the number 999 has three digits. So it consumes 3 nibbles. You can store anything from 0 to 999 in three BCD nibbles - and that's almost 10 bits...but at the hardware level, three nibbles is 12 bits - so a BCD digit has just about 3.3 bits of storage even though we stored it in a 4 bit chunk of memory.
- It might be that you use your BCD computer to compute company payroll. Suppose you have to store your worker's salaries in the machine. If the highest paid person gets $98,000 a year - it would be kinda stupid to use 5 nibbles to store that because next year you might give that person a 5% salary increase and now you need 6 nibbles...which would break all of your software. So you might choose to be super-safe and use 6 or more nibbles per salary "number" - but in truth, your salary information is only 5 digits - so you're wasting a nibble for future expansion. At this level of representation, we have even fewer bits per nibble than at the BCD level.
- What happened was that in choosing how we store information on our underlying hardware, we decided to waste about 0.7 bits per nibble for the sake of convenience. That doesn't change the storage capacity of the underlying hardware - only the way we choose to use it. However (and this is important) you can never GAIN information content - at each level of representation, you can only lose it or break even. The laws of information theory are a lot like the laws of thermodynamics - and actually, there is a solid science connection there. You can't get 'free energy' because of the laws of thermodynamics - and you can't get 'free information' for the exact same reason. (And I mean "exact"!)
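A quick, purely illustrative C sketch of the BCD bookkeeping described above (the 999 example is the one from the text; the function name is just made up for this snippet):
#include <stdio.h>
#include <math.h>

/* Pack a number in the range 0..999 into three BCD nibbles (12 bits of storage). */
unsigned int to_bcd(unsigned int n) {
    return ((n / 100) << 8) | (((n / 10) % 10) << 4) | (n % 10);
}

int main(void) {
    unsigned int packed = to_bcd(999);  /* 0x999 */
    printf("0x%03X: 12 bits of storage holding log2(1000) = %.2f bits of information\n",
           packed, log2(1000.0));
    return 0;
}
The 'lost' two-and-a-bit bits are the price of the convenient decimal representation - exactly the waste being described.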
- Forgetting the 'extra' base pairs for the moment, we know that the cell takes 3 base pairs (2 bits each - so 6 bits in total) and treats those as codons. Which are really like the instruction set of a computer - the parallels are kinda creepy! 6 bits means that there can only be 64 codons. However, (according to our genetic code article at least) this is kinda like BCD representation because some of those codons share the same meaning. The 64 codons only represent 'commands' for 20 amino acids plus 'START' and 'STOP' commands - 22 possible states - just over 4 bits of information. So there is waste and if we consider the information content at the codon level, 3 base-pairs is storing only just over 4 bits rather than the 6 bits they are theoretically capable of storing. That's just like BCD. Furthermore, not all codon sequences make sense - STOP, START, STOP, START, for example, is useless (presumably) - so at the level of "what represents a protein", you have even more states that don't do anything useful - and the amount of information content is reduced still further.
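In numbers, the codon bookkeeping in that paragraph looks like this (a trivial, illustrative C sketch using the figures from the thread; Rockpocket's correction further below adjusts the exact count of 'commands'):
#include <stdio.h>
#include <math.h>

int main(void) {
    printf("raw capacity : %.2f bits per codon (64 possible triplets)\n", log2(64.0));    /* 6.00  */
    printf("used capacity: %.2f bits per codon (~22 distinct 'commands')\n", log2(22.0)); /* ~4.46 */
    return 0;
}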
- So the intermediate conclusion is that at each level of representation, you count a different number of bits. However, each 'level' of representation tends to lose information compared to the level of representation below it.
- The relevance of that to processing is important. But what it does mean is that counting the number of base-pair possibilities puts an UPPER limit on the number of bits you can store in a DNA molecule. The true number of proteins or whatever that it actually codes for is going to be a lot less than that...but absolutely, certainly, without any doubt whatever...it can't code for any more. No matter what context or what processing you provide - that's a hard upper limit. Claiming that some fancy biological 'thing' can make more information come out of a DNA strand is PRECISELY the same thing as claiming that you've invented a perpetual motion machine. "PRECISELY" because information theory and thermodynamics are actually the same thing 'under the hood'.
- So yeah - there may be more bits per base-pair because there are these other base-pairs and mismatches and whatever - but you can (and should!) decide how many bits there are per base-pair and use that as the UPPER limit for the storage capacity of your DNA strand...and be aware that the actual, practical limit (which determines the number of possible unique individuals it could code for) is, by absolute fundamental thermodynamic necessity, a lot less than that. I've heard all sorts of weird ways in which DNA operates - where (for example) some sequences of codons are read backwards as well as forwards....or that a 'skip' in processing can result in 2 base pairs from one codon and one from the next being accidentally read as a different codon. That's all very interesting and amusing but it doesn't alter the answer to the "number of unique possibilities" question, because all of that is just a change of context or a change of processing. For any given strand, what you get with the 'correctly lined up' reading of the codons has a precise 1:1 correspondence with what you get for a particular misregistration reading of that same sequence. So that doesn't increase the number of unique individuals you can code for - although it does provide for some cunning ways of getting more proteins from a single strand of DNA.
- As for lossiness and bandwidth: Bandwidth and storage space are the same thing in information-theoretic terms. A communications path is just a time-ordered sequence of bits rather than a space-ordered sequence such as you get in a DNA strand or a RAM chip. The same exact rules apply. A lossy system (where some bits that you put into the system get changed, or deleted or extra bits get stuck in there) is still able to be useful - but you need some form of error correction (or merely error detection in most situations - if you can detect an error you can say "try again" and keep doing that until you get a good one - so error correction and error detection are essentially the same thing). In order to detect errors (and therefore to be able to correct them) you need some redundancy in the system. It's possible that the reason the cell uses 3 base pairs to store only 22 unique instructions is precisely that. Certainly, the fact that you have two copies is a classic redundancy. But what all fault-tolerant systems share in common is that you MUST waste some more bits in order to achieve that tolerance - and (I won't bother you with the math) you can predict from the number of bits you waste what the maximum possible degree of fault-tolerance is. But again - fault tolerance is not getting you "free energy" - you can only lose storage capacity.
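As a purely illustrative sketch of 'spend redundant bits, gain error detection' - a single even-parity bit, and emphatically not a claim about how cells actually do it:
#include <stdio.h>
#include <stdint.h>

/* Even parity over one byte: returns 1 if an odd number of bits are set. */
static int parity(uint8_t b) {
    int p = 0;
    while (b) { p ^= (b & 1); b >>= 1; }
    return p;
}

int main(void) {
    uint8_t data = 0x5A;
    int check = parity(data);          /* one extra bit 'wasted' on redundancy     */
    uint8_t received = data ^ 0x08;    /* simulate a single bit flipped in transit */
    if (parity(received) != check)
        printf("error detected - say 'try again' and have it resent\n");
    return 0;
}
One parity bit per byte buys detection of any single flipped bit, at the cost of one-ninth of the transmitted bits - the trade-off described above.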
- The other big issue is data compression. We all know that you can take a digital photograph and if it's a 1000x1000 pixel image, with Red, Green and Blue color data at a byte per color per pixel - then that requires 3,000,000 bytes to store accurately. However, you can compress it by storing it as a JPEG file - and now it takes maybe only a tenth of that. It sounds like we've broken the laws of thermodynamics - we got 'free information' storage...no different from 'free energy'. However, we haven't. These compression tricks fall into two classes - "Lossy" and "Lossless". Lossy compression schemes result in the reconstituted data being different from the data you started with. A JPEG photo and its original pristine photo look more or less the same - but if you look carefully, you'll see that the colors aren't quite the same and it's a bit more blurry and there are color fringes around some edges. That's because of the "no free lunch" part of information theory. If you use fewer bits - you lose something. There is also "lossless" compression (the "PNG" file format does that - as does Zipping something into a '.Z' file on your PC). But the thing about lossless compression is that it only works on things like our BCD numbers that are already wasting bits. So you could compress a bunch of BCD numbers back down to the point where no memory was wasted anymore. If you try to compress something losslessly and it doesn't have any wasted bits inside - it'll actually get larger - not smaller. This isn't a surprise - because the laws of thermodynamics are hard-and-fast rules, with no exceptions.
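The 'no free lunch' part of lossless compression comes down to simple counting; here is a trivial illustrative C sketch of the pigeonhole argument (n = 16 is an arbitrary choice):
#include <stdio.h>

int main(void) {
    int n = 16;
    unsigned long exactly_n = 1UL << n;        /* 2^n strings of exactly n bits         */
    unsigned long shorter   = (1UL << n) - 1;  /* 2^0 + ... + 2^(n-1) = 2^n - 1 of them */
    printf("%lu strings of %d bits, but only %lu shorter strings to map them onto\n",
           exactly_n, n, shorter);
    return 0;
}
So no lossless scheme can shrink every possible input - some inputs (the ones with no wasted bits) must come out the same size or bigger.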
- So I guess my conclusion here is that counting the bits in the base-pairs of your DNA strand imposes a HARD limit on the maximum amount of variation that strand can encode...which is why all of the other layers of encoding, context, interpretation, clever trickery, redundancy and compression - don't matter a damn. The number of base-pairs multiplied by the log-to-base-2 of the number of kinds of base pair is the upper limit...all of the other things you are telling me can only possibly reduce that number. When you try to tell me otherwise - it's PRECISELY like some idiot claiming to have invented a perpetual motion machine. You get to the point where you can tell such a person that their invention doesn't work BEFORE they even start to explain it. It's the exact same deal with the information content of DNA. It's thermodynamics.
- SteveBaker (talk) 13:39, 10 January 2009 (UTC)
- As another tangential (and perhaps not relevant to the wider point, but useful for future reference) correction on detail: the codon set that encodes the 20 standard amino acids actually includes start codons, but not stop codons. The amino acid methionine doubles up as a "START" in eukaryotes. Whether the methionine codon's "command" is "START" or not depends only on wider context, not on the information encoded in the codon itself. Incidentally, in prokaryotes there is a further quirk: the same start codon, ATG, codes for both methionine and N-formylmethionine. Which one, again, depends on context. I think this impacts your statement, "The 64 codons only represent 'commands' for 20 amino acids plus 'START' and 'STOP' commands - 22 possible states - just over 4 bits of information. So there is waste and if we consider the information content at the codon level, 3 base-pairs is storing only just over 4 bits rather than the 6 bits they are theoretically capable of storing." How does this take into account the exact same codon providing very different 'commands'? Doesn't that increase the number of bits of information stored in that codon? Rockpocket 20:46, 10 January 2009 (UTC)
- Oh - that's odd. Our genetic code article definitely says that UAA, UAG and UGA are "STOP" instructions and AUG is "START". But whatever - the principle remains. The idea that the same "instruction" codes for different things depending on the context is also not a novelty. Pretty much all modern computers do this too. I'm not going to research a particular example - so let me make up something. On the XYZ-2000 computer, the bit pattern 11001100 means "ADD" - unless it's preceded by the 01010101 code - in which case it means "SUBTRACT". This is a common enough thing and it does complicate the business of counting the number of bits represented by the concept of an "INSTRUCTION". On real Pentium chips, the shortest instructions (for the most common operations) are stored in a single byte (8 bits) - but there are instructions that are (IIRC) up to 7 bytes long depending on the context. The total number of "instructions" is easy enough to count though. So yes, the word "codon" has evidently gotten a bit messed up - you want to think of it as an "INSTRUCTION" but you also want it to represent 3 base-pairs, when in fact some codons depend on what came before, so they are REALLY 6-base-pair codons. That confusion is not there in computers - we have "bytes" that are the convenient group of 8 bits - and bytes and instructions are not necessarily correlated. So, again, fuzzy biological language is confusing the thought patterns! If we talk instead about 'base-pair-triplets' and 'actions' then we'd say that one action was typically stored in a single base-pair-triplet but some actions (such as the 'methionine' and 'N-formylmethionine' actions) require multiple base-pair-triplets to encode their full meaning. That's exactly like how the Pentium works...and I'd bet that the REASON is exactly the same. When they designed the Pentium, most existing PCs were using 80486's - and they wanted to make sure that all programs written for the '486 would still run on the Pentium. So instead of completely changing the mapping of bit patterns to instructions (so there would be a separate base-pair-triplet for 'methionine' and 'N-formylmethionine') they decided to change the rules for replaying instructions such that some instructions would depend on the previous context. This 'reverse compatibility' thing would be needed in lifeforms because when the RNA molecule learned (evolved) how to make proteins with N-formylmethionine AS WELL AS methionine - it still had to replay all of the rest of the methionine-only DNA correctly. So evolution had to pick a genetic code combination that didn't come up very often to use for this special new thing. Same deal with the design of the Pentium.
- It's weird - the more you guys try to persuade me that DNA is different to software - the more things I see that are chillingly similar. It's incredible that we worked out almost all of these computer techniques BEFORE we figured out how DNA works...yet amazingly the two mechanisms are virtually identical in every important respect! SteveBaker (talk) 21:08, 10 January 2009 (UTC)
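For what it's worth, the made-up XYZ-2000 prefix example is easy to sketch in C (the machine, the opcodes and the decoder below are all invented for illustration, not a real instruction set):
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical XYZ-2000: 0xCC means ADD - unless the previous byte was the
   0x55 prefix, in which case the very same 0xCC means SUBTRACT. */
enum { PREFIX = 0x55, OPCODE = 0xCC };

static void decode(const uint8_t *code, size_t len) {
    int prefixed = 0;
    for (size_t i = 0; i < len; i++) {
        if (code[i] == PREFIX) { prefixed = 1; continue; }
        if (code[i] == OPCODE)
            printf("%s\n", prefixed ? "SUBTRACT" : "ADD");
        prefixed = 0;
    }
}

int main(void) {
    uint8_t program[] = { OPCODE, PREFIX, OPCODE };  /* decodes as ADD, then SUBTRACT */
    decode(program, sizeof program);
    return 0;
}
The same byte yields two different 'commands' depending on context - but the number of bits in the program hasn't changed.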
- I'd disagree that the "total number of instructions is easy enough to count". It's easy to count the number of instructions the compiler emitted, but for any non-trivial piece of emitted code, inspecting the code to count the instructions quickly becomes intractable. Once you find the first jump instruction, you're in trouble. The jump is to an address specified in a register, and you have no a priori way of knowing what's actually in the register. You might know it first time through, but the same jump could be executed later to a different address - so you can never be sure whether or not the following code is instruction or data, and it's even conceivable that jumping into the middle of a multi-byte instruction could yield a valid fewer-byte instruction, or a valid overlapping multi-byte instruction. You would have to actually execute the code to figure out where all the instructions are, and then you run into the problem of figuring out whether the code will ever stop executing. Franamax (talk) 22:30, 10 January 2009 (UTC)
- That makes a lot of sense, thanks for explaining it. I actually like the comparison between DNA and software; it makes a lot of intuitive sense to me. I simply don't understand software well enough to know whether the analogy holds up, and you are doing a good job of explaining that. I'm beginning to think that the major difference between the mechanisms is not in how information is stored, but in the amount of "feedback" the code exerts on itself. The information the DNA holds is used to build hardware, which is directed back on itself to modify the code in incredibly complex ways. This, obviously, is how it manages to be self-replicating and thus evolves. My (limited) understanding of software is that this happens in a more limited way, but we are not quite at the same level. We can use software to build hardware, but have not quite perfected self-replicating machines yet. Rockpocket 21:45, 10 January 2009 (UTC)
- It's certainly true that software doesn't often change the way it is to be interpreted. We do occasionally use "self-modifying code" - but as a technique, it's very much frowned upon by professionals because the resulting mess is so incredibly hard to understand...which I think is what you're telling me about what happens when DNA modifies its own "execution environment"...it gets so complicated that you can't easily understand it. Sadly (for the biologists) evolution doesn't care about how hard the code is to understand! We certainly could use software to direct robots to build more robots - but there really isn't a market need to do that - so we build cars with them instead. Using software to write software happens quite a lot...in fact, what I was doing only last week involved writing a very repetitive set of closely-related bits of software - and I ended up writing a program to write the programs for me...but that's not the same thing either, because the software I was writing was unrelated (structurally and functionally) to the software IT was writing...this isn't reproduction. Quite a few software viruses are known to change their own software when they go on to infect another machine in order to slow down the ability of anti-virus software to identify them...that's much closer to what biological viruses do...except that (as I understand it)
- Thanks Steve. I completely agree with your treatment of the unique data, and I had never heard of a nibble, which is pretty cool. But rounding back to my original point: "functional software" can be accurately and completely described as software, while "functional DNA" must be described as software and hardware. "Software" and "DNA" are descriptions of things at different levels of abstraction.--OMCV (talk) 19:37, 10 January 2009 (UTC)
- (A nibble is 4 bits. Some people call 2 bits a 'nibblet'. Some people spell it nybble on the grounds that we don't talk about bites!)
- The distinction you are still making about DNA is 100% one of biologist's own making. The term "DNA" refers to a chemical - a molecule. Except that there are a bazillion variations on DNA (one for every creature on the planet give or take). It's unfortunate that there seems (to an outsider) to be no clear names for:
- The general class of molecules that are a double-helix of base-pairs.
- A specific molecule of that class.
- The data encoded on that one of those molecules.
- The language it's encoded in (I guess "genetic code" comes close).
- The 'computer' - or more technically the "interpreter" that processes it (Although "RNA" is close).
- But the main problem is that we don't have different names for the molecule and the data that's encoded on it. In computers, we hardly ever confuse the RAM chip (2) and the software (3) that's stored inside the RAM chip. In biology - there isn't a clear separation. If you sequence a DNA strand and store all of the A's, G's, T's and C's on a CD-ROM - the data on that CD-ROM is exactly the same thing as the data that's encoded on the molecule(3). The NAME for that thing would be your analog of software. So let's do that: DNAdata and DNAatoms are those two concepts. DNAdata is capable of being stored on a CD-ROM or printed on paper or memorized by some savant - OR you can store it on DNAatoms so that a biological creature can execute it.
- Analogy: If I take a copy of Microsoft WORD and store it on a flash drive or on a spinning magnetic disk or place it in RAM and execute it - it's still Microsoft WORD...it's the same software because the information content is identical no matter where we happen to have stored it. We can even Zip up the "WORD.EXE" file so that none of the bits are the same and we STILL call it "Microsoft WORD" because the fundamental information content hasn't changed. I have to put it into a RAM chip in order to execute it - but it's still the same program even when it's on disk someplace where I can't execute it.
- So when you copy the C/G/A/T stuff onto a CD-ROM, it's still the same information - and you could (in principle if not in practice) reconstitute a functioning DNA molecule from the data on the CD-ROM. The hardware is kinda disposable.
- The thing that IS a bit different from a RAM chip is that the entire physical structure of your storage medium is made up of the base-pairs. In a RAM chip, you can erase the data and you've still got a RAM chip...not so with DNA! But that's just a hardware detail from an information-theoretic perspective. I could take a very large number of Lego bricks in red, green, blue and yellow and use those four colors to define a 2 bit code. Then I could build a huge tower of bricks that would 'store' Microsoft WORD in Lego at 2 bits per brick. Later on, I could build a robot with a camera that would examine that tower brick by brick - looking at the colors - and copy that data into computer memory and run it. So we can make artificial Lego-DNAatoms and store software on it. At THIS point - is there truly any difference at all between the DATA that's stored on the DNA and the software that's stored in Lego?
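- To make the "2 bits per brick" idea concrete, here is a minimal sketch in Python - the color assignment and the example bytes are arbitrary choices for illustration, and the very same scheme would work with A/C/G/T bases instead of brick colors:
  # Any file's bytes can be turned into a tower of four brick "colors" at
  # 2 bits per brick, and then recovered again, unchanged.
  COLORS = ["red", "green", "blue", "yellow"]        # 4 colors = 2 bits per brick

  def bytes_to_bricks(data):
      bricks = []
      for byte in data:
          for shift in (6, 4, 2, 0):                 # 4 bricks per byte
              bricks.append(COLORS[(byte >> shift) & 0b11])
      return bricks

  def bricks_to_bytes(bricks):
      out = bytearray()
      for i in range(0, len(bricks), 4):
          byte = 0
          for color in bricks[i:i + 4]:
              byte = (byte << 2) | COLORS.index(color)
          out.append(byte)
      return bytes(out)

  tower = bytes_to_bricks(b"WORD")                   # "store" some data as bricks
  assert bricks_to_bytes(tower) == b"WORD"           # ...and read it back intact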
- You're probably going to say that it's because we 'execute' DNAdata directly from the DNAatoms strand without copying it into another storage medium - but that can be true of software too. I could build a little robotic RNA analog that would move up and down the tower of Lego bricks and execute the program (god-awfully slowly!) directly from the Lego. This was exactly the kind of thing that Alan Turing was thinking of when he came up with his "Turing machine" thought experiment...and the 'Church-Turing thesis' is essentially the observation that all such general-purpose machines (be they Lego plus a LegoRNA robot - or a Pentium IV chip) can compute exactly the same things.
- So, IMHO, the only reason there is this distinction between your view (DNA being hardware and software together) and my view of the universe is that you don't have two separate words for the DNAatoms and the DNAdata. If you did then you'd probably be agreeing with me.
- Once you mentally and linguistically separate the two concepts - it's possible to talk rationally about whether we should consider that 'data' to really be 'executable commands' or 'data' in the classic sense of numbers and words. As a computer geek I have to tell you that the line between data and executable is more than just blurry - it doesn't exist.
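- A tiny illustration of that last point - the very same string can be treated as data to be measured or as a program to be run (the string itself is a made-up one-liner, purely for illustration):
  program_text = "print('hello from a string of bytes')"

  print(len(program_text))    # treated as data: just count its characters
  exec(program_text)          # treated as code: it runs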
- I'm happy with that but if you ever see a hardware/software combination other than DNA capable of self-replication, even if it's not sexual, let me know. If you haven't seen it already I thought you might get a kick out of this. xkcd.--OMCV (talk) 22:01, 10 January 2009 (UTC)
- Maybe the distinction is that DNA actually does form the hardware sometimes. During the transcription process, DNA directly specifies an RNA strand. But the RNA strand is not just a passive carrier of information - it can adopt a topology and catalyze chemical reactions. A single strand of DNA also adopts a topology and this fact is being exploited, but I don't think that's used in the context of the genetic mechanism itself (that we know of yet anyway :). Franamax (talk) 22:16, 10 January 2009 (UTC)
if you ever see a hardware/software combination other than DNA capable of self-replication
- RNA virus
- pure-software "replication" exhibited by quine (computing) programs. The ones I've seen have a strict division between a passive data-carrying section and an "active" section (there's a minimal example sketched just after this comment).
- That division was, I believe, first proposed by von Neumann in his description of a Von Neumann universal constructor, before the structure of DNA was known.
- I see that self-replicating machine has a photo of a hardware machine (and its "daughter") that can construct all of its plastic parts from raw plastic filament feedstock.
- ...
- Any others I missed? Is there an article whose talk page would be better for discussing this topic?
All of these need a lot more "help" replicating than typical DNA-based living things. But then, many DNA-based living things need a bit of help -- some more than others. Humans need help producing essential amino acids, flowering plants need a bit of help with pollination, etc. --68.0.124.33 (talk) 16:53, 22 January 2009 (UTC)
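For what it's worth, the smallest pure-software "replicator" I know of is a two-line Python quine whose output is an exact copy of its own source - the string is the passive data-carrying section and the print is the active part. (It's left comment-free so that what it prints really is identical to what you see.)
  s = 's = %r\nprint(s %% s)'
  print(s % s)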
- Can RNA viruses really self-replicate? I thought they had to hijack a regular cell to do that work for them?
- Certainly pure software can do it - and you can find it in places like Conway's "Game of Life" and other 'alife' applications. But OMCV wants to see a 'hardware/software combination' - which suggests that you're supposed to replicate the hardware too.
- The image in self-replicating machine is the 'RepRap' (which is a project I happen to contribute to) - it can replicate its own plastic parts - but not motors, nuts and bolts, electronics and metal rods...and it requires a human to bolt together the "child" machine. So it's a VERY long way from "true" self-replication. The tricky part is that all animals take pre-built "parts" as "food" - we can't make all of our own amino acids - we extract some of them from our food. Can a RepRap machine be said to "eat" a cardboard box with an electric motor inside, 'excrete' the box and then incorporate the motor into its offspring? Tough call!
- For different concepts of "Food" we end up with different results. If you have to take your "food" in terms of raw atoms of pure elements - then humans can't "self reproduce" because they need pre-built amino acid molecules. If "food" can be a completed robot with everything ready to go - but no software loaded up - then a robot could easily reproduce by finding the USB port on the other robot - plugging in and downloading the contents of its own hard-drive onto the 'offspring'. Somewhere between those two ridiculous limits is what we would consider to be acceptable. SteveBaker (talk) 18:38, 22 January 2009 (UTC)
Flowchart
Hello. I include here a flowchart that I occasionally find useful, in the hopes that you might find it useful too. The difficulty is "Does the message contain really ridiculous misunderstandings and claims?", since what is 'ridiculous' can be hard to tell. That is where Poe's Law unfortunately raises its head. Anyway, I tend to find that if I ask whether someone is being ironic and they aren't, this either leads to them checking their text and seeing a mistake (good) or making it clear that they really do think some very strange things, giving you a good opening for ripping into these views if you so choose (fun!). Or, of course, explaining so that I see I was mistaken.
If you find you are able to tell who I am, please do not write that name on here. Sadly, I'm keeping a low easily-checkable profile with that username on this project. Thanks.
79.66.109.89 (talk) 20:03, 10 January 2009 (UTC)
- I've checked the flow chart and decided that you sent it to me ironically. Hence I should ignore it and therefore I must treat your messa...oh oh...No you don't catch me out that easily! :-) SteveBaker (talk) 20:12, 10 January 2009 (UTC)
At Wikipedia, the goal is to be impartial as explained by Wikipedia's Neutral Point of View. At least that was what I was led to believe somewhere along the line. Might have been something I read... who knows. Anyway... Distressed Wiki-Surfer, the Psychic article contains some information I believe you might find helpful on your personal quest to believe or not believe in parapsychological phenomena. As for your missing friend, I do sincerely hope you hear from him soon. In the meantime, if you know his basic information, 9 times out of 10 the police department in the area where he lives (at least in the US) can help you make sure he's ok. My Best Wishes to you. Operator873 (talk) 08:30, 21 January 2009 (UTC)
- To Operator873: Wikipedia is indeed required to be impartial - but that in no way forces us to tell lies. Please read WP:FRINGE and (easier reading): Wikipedia:Why Wikipedia cannot claim the earth is not flat which make it abundantly clear that we are NOT supposed to go around saying that psychic powers are real when the abundance of peer-reviewed, respected scientific journals says they are unambiguously NOT real. We are allowed to say things like "Psychics claim such-and-such (insert reference here) but mainstream science says this is all bullshit." - that's what "impartial" means in our terms...it most certainly DOES NOT mean that we give equal weight to the wild-assed opinions of nut-jobs as you are clearly doing. If you cannot abide by those rules - then go take your crazy theories someplace else because they aren't welcome here.
It was not my intention to endorse, promote, or qualify anything about the psychic community. I'm just not arrogant enough to assume that I have the scientific qualifications to make a definitive decision regarding something that has been in contention for the past 130 years in medical science.
If I have somehow offended you, I apologize. However, I do not believe public chastism is ever warranted. And I'm especially offended when anyone puts words in my mouth or beliefs in my head. I agree with you that psychic anything is hogwash. But do not assume that you are able to decide what I believe or the context in which I may or may not have that belief.
Thank you. Operator873 (talk) 06:40, 22 January 2009 (UTC)
If you don't believe in public chastism (I don't think that's really a word!) why did you start your post by publicly chastising people for a lack of impartiality? The first three sentences of your post did exactly that. For people who pride themselves on the veracity of their words - that's a NASTY accusation. Worse still - you were 100% incorrect. Being impartial does not mean that you have to say that something that is patently untrue is true - or even (in a weaker sense) fail to indicate that something that is known to be false might not be false. Let me ask you: To be impartial do I have to give equal weight to the flat earth theory and the round earth theory? Do I have to say that all of modern chemistry is of equal weight to phlogiston-theory? No - I do not. I can (and indeed, must) say - unambiguously - that the earth IS round and that there ARE more than four chemical elements. Similarly, I do not have to give equal weight to the existence and non-existence of psychic powers. Indeed to do so without some pretty damned impressive proof would be quite contrary to Wikipedia rules. That's not just my view - it's a Wikipedia core principle. If you doubt that - read the two articles I linked for your benefit (especially the second one - which is an excellent article).
Impartiality means presenting all versions of the truth with appropriate amounts of weight. I'm not allowed to say that the Israeli incursion into Gaza is either right or wrong - to do so would be a lack of impartiality - I have to present both sides of that debate with weight appropriate to the strength of the arguments in referenceable material.
Wikipedia is continually assaulted by all kinds of nut-jobs trying to get their pet pseudo-science into the encyclopedia and we have carefully thought out rules to avoid us turning into promoters of bullshit. In a very real sense, we are defenders of truth in a sea of misinformation. So we are careful about what we say and what we don't say. Telling our OP that psychic powers are unambiguously not real is NOT 'partial' - it's true! At least it's true to the standards that Wikipedia requires. If you say that there is possibly some tiny scrap of truth to psychic abilities then it is incumbent upon YOU to prove it by providing references to peer-reviewed scientific papers in recent and reputable journals that say so - and failing that we must take the mainstream viewpoint. If you are unable to do so (and, trust me - you ARE unable to do so) then you MUST NOT tell our readers that there might be truth in it.
Now I don't mind if you break the rules and say an untruth in the WP:RD - that happens a lot - and you'll be corrected soon enough. But I DO care (passionately, as it happens) when you start chastising people who are getting it right for "lack of impartiality" - and implying that somehow you know this because you've read it in some guideline somewhere - when clearly you are horribly ill-informed in such matters.
IMHO, you should retract that comment and apologize to those you accused of inappropriate behavior.
But in the meantime, it's necessary for me to say clearly that you are wrong. If you don't want me to say things like that in the public part of the RD - then you should not make your accusations of lack of impartiality on those pages (as you clearly did). I'm not supposed to edit your answer (nor would I desire to do so - I am a strong advocate of letting people stand or fall by what they say) - so the only way to correct the public record of your incorrect and grossly unfair accusation was to reply to it directly in the same forum it was made. If you are so concerned that the public arena stay clear of that kind of thing (and I could understand why) then you must reserve your (incorrect) claims of non-impartial posting for the RD Talk: page.
Hence: You were initially wrong to claim (in the context of Wikipedia's rules on fringe theories) that psychic powers might, just maybe, be real. I corrected you. You posted an unfair accusation against hugely intelligent people whom I respect greatly. I countered it. You posted that accusation in the wrong place...forcing me to reply to it in the wrong place. You made three mistakes and brought all of this down on yourself.
I hope that's clear - and I'm sorry if you misunderstood what was going on there.
SteveBaker (talk) 14:47, 22 January 2009 (UTC)
- I was going to address your concern, and perhaps a bit of misunderstanding on my part, one last time. Mr. Baker, I do not believe in anything psychic. I, however, do understand what you were trying to communicate to me through the examples given in both the RefDesk answers and in your note here. I'll keep in mind what you said and will endeavor to keep Wikipedia's intent at the forefront of any and every answer I am able to give, as I have been.
- Again, I apologize to you for striking what seems to be a very raw nerve in your patience. I'll again reiterate that it was not my intention to offend, chastise, scold or otherwise discipline any user at any time. My comment on the RefDesk itself was not directed towards the person posting the first answer. It was directed towards the person asking the question. Since this reader must be an avid believer in such nonsense, I didn't want to leave them feeling insulted. I often find a willing ear the easiest way to gain the attention of someone in dire need of direction.
- You can only lead a horse to water...
- My most humble apologies again, Mr. Baker. Operator873 (talk) 12:19, 25 January 2009 (UTC)
Planetary nebulae
Does palnet's get destroy in planetary nebulae? Youu siad when sun becomes a planetary nebulae, even Pluto will become too hot?--69.226.46.118 (talk) 21:33, 22 January 2009 (UTC)
- Dude - slow down and type straight...you're getting seriously hard to understand!
- I don't know what effect expelling the planetary nebula would have on Pluto - it's a hard question to answer. What I am saying is that the 'Helium flash' (or 'Helium Pulse') is for sure going to nuke anything remotely alive in the solar system. For a brief period of time (days - not years) the sun becomes 100,000,000 times brighter and, for a matter of seconds, it'll grow to 100,000,000,000 times brighter. The temperature of the surface of the sun (currently around 5,700 degrees) grows to some 100,000,000 degrees! The radiation from that single brief event is more than enough to turn all of the planets into radioactive cinders - boil oceans and blow atmospheres away. Lots of other things that are more or less debatable will happen too - but that brief flash is clearly enough to end all prospects of life of any kind continuing past that moment...everything else is mere detail! SteveBaker (talk) 00:34, 23 January 2009 (UTC)
- You said before the sun becomes a giant, the sun will be hot enough to evaporate the oceans on Europa. Tango keeps telling me that once Titan heats up to an Earth-like surface temperature, Titan's atmosphere will drain away quickly even without solar wind or ultraviolet light - but the PDF said Titan may keep some of its atmosphere. Is this true - before the sun becomes a giant (approx. 4 billion years), will all or most of Titan's atmosphere have been blown off?--69.226.46.118 (talk) 02:13, 23 January 2009 (UTC)
Earth and Jupiter
Everybody tells me a different thing, but I think you are the expert on science, so may I ask you this. When I look down at the Earth from space, would it even look blue? Because Earth's ocean usually looks navy blue from space, many people have been in space each year, and from TV news, Earth looks blue. If I go to space myself, will Earth still look that blue, or will Earth look transparent or brown when I'm looking at it? You said Jupiter is 25 times dimmer than Earth; I think we can still see Jupiter clearly, but if I'm orbiting around Jupiter, will I still see the colors or not - will I just not notice the orange and brown colors?--69.226.46.118 (talk) 22:01, 23 January 2009 (UTC)
- The earth's oceans do look blue - maybe dark blue from orbit. The land looks green or brown or whatever color it is. The ISS orbits at a height of only 300 miles - and there is much less atmosphere between it and the ground than there is between where you are standing now and the horizon. The earth is well lit (in daylight at least) - so things look pretty normal. Jupiter is much dimmer - the reason it looks fairly bright in the sky is twofold:
- Your eyes get dark-adapted.
- You are getting ALL of the light reflected from the entire planet hitting one tiny part of your retina. When you are closer to it (say, in orbit), only a small fraction of the surface would be visible and that light would be spread over your entire retina instead of concentrated in one place...so it wouldn't be as bright as you'd think.
- Astronauts reported that the earth looked INCREDIBLY bright from the moon - for exactly that reason. SteveBaker (talk) 23:47, 23 January 2009 (UTC)
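- To put rough numbers on the "how much dimmer" part, here's a quick back-of-the-envelope sketch - the orbital radii are approximate, and everything except plain inverse-square distance (clouds, albedo and so on) is ignored:
  # Rough inverse-square estimate of how much weaker sunlight is at each planet,
  # using approximate mean orbital radii in AU (Earth = 1).
  distances_au = {"Earth": 1.0, "Mars": 1.5, "Jupiter": 5.2, "Saturn": 9.5}

  for planet, d in distances_au.items():
      print(f"{planet}: sunlight is roughly {d * d:.0f}x dimmer than at Earth")

  # Jupiter comes out at roughly 27x ("about 25 times") and Saturn at roughly
  # 90x ("about 100 times") - in line with the figures quoted in this thread.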
Mars and Saturn
If I'm orbiting Mars and looking down, what did you mean by saying I can't see the ruby color - would the disc look blue-green or cyan to me, or would it look brownish yellow to my vision? I have watched TV shows about aliens from Mars, and Mars still looks scarlet to them. Plus Mars is only 1.5 times further away from the sun, so the light would be only 5 times dimmer - the light we get on a cloudy day. But you said if I'm orbiting a spaceship around Saturn, I can still see the full disc. You said the light we get on Saturn is 100 times dimmer - would that equal the light we get when lightning strikes our house at midnight? Is that dim enough that the disc looks almost black, or close to black?--69.226.46.118 (talk) 18:14, 26 January 2009 (UTC)
intensity light
You said if I'm on Mars I won't notice the vermilion color because my eyes are in vermilion light, and the color I'll see is just transparent light. Then why, on Earth on a foggy day, do I still notice white? I see the sky every day, and it always looks azure (light blue). The yellow (signal light) - I think, way inside, humans can notice the yellow colour. But on Saturn/Titan, the problem is the light we get is 1/100 that of Earth - that is only the light of a lightning strike hitting our house at night. If I orbit around looking down, would Saturn look almost black? When I descend in Titan's atmosphere, you said I won't see the orange color because the orange colour is in my eyes. I thought the color won't burn my eyes blind, just turn my color vision off? Does strong color that's not rich just burn off vision?--69.226.46.118 (talk) 02:00, 28 January 2009 (UTC)
Please ask questions on the WP:RD. SteveBaker (talk) 02:18, 28 January 2009 (UTC)
glad you posted that
re: a post on the Reference Desk about graphics programs. Blender eh? ... looks promising. I had done some Terragen years ago, and a little Vue d'Esprit and Lightwave (over my head) - thanks for the info Steve! Might have to get back into that a bit and maybe make some new wallpapers. Ched (talk) 04:20, 30 January 2009 (UTC)
- I should warn you: Blender is a bit odd. It uses some very strange GUI approaches - like you've never seen in any other application. Hence it takes some getting used to! Fortunately, if you head over to their web site - or even search on places like YouTube - you'll find more written, video and interactive tutorials than you could possibly manage to go through in a lifetime! Having said that, it's pretty capable - and whilst it's probably not as good as (say) Maya, it's not THAT far behind. You can save yourself several thousand dollars by choosing Blender - and for that much, you can live with a lot! But the odd GUI is definitely the hurdle to cross - I'd say that about half the people who use Blender will tell you that the GUI is just awful and is enough to persuade them to go out and buy Maya...the other half swear that the GUI is clever, slick and super-efficient for the job of making 3D models. Sadly, I'm one of the first 50% - I think it's just awful and I can only just bear to use it! My son, on the other hand, is doing a Computer Game developer course at UT Dallas - and he has both Maya AND Blender - and he actually prefers Blender!! SteveBaker (talk) 11:08, 30 January 2009 (UTC)
Of beans and noses
Hi. Re your comment on the refdesk about covering your body in antiperspirant, I'm afraid that some idiot might take that to mean that doing that is at once safe and desirable, and I do believe it's neither. It might be better to go back and rephrase that. Totally up to you. I'm sure this was the last thing on your mind, but it just set my "beans up the nose" flag, and I thought I'd tell you. --Milkbreath (talk) 15:28, 3 February 2009 (UTC)
- I thought that what I said covered that - but perhaps I should rephrase it. I've often wondered why people don't die from antiperspirant-induced-hyperthermia. SteveBaker (talk) 15:58, 3 February 2009 (UTC)
Speed of sound
Steve, I've left a question for you particularly in the speed of light and sound thread on RD/S. I'm interested in your thoughts on this; maybe I'm just missing the obvious. — Lomn 21:25, 4 February 2009 (UTC)
You said Saturn is about ten times further from the sun than we are - so the light there is 100 times dimmer than on earth - but that's still enough to see quite clearly and in full color. The lighting inside your house at night is probably around 100 times dimmer than the sun. In an earlier post you said Saturn is still bright enough to see in full color. I'm not sure what you mean by that. Is 100 times less sunlight the amount of light we get 20 feet away from a candle? Or did you mean the lighting inside our house in the dark at night? That's very dim - would it make Saturn look almost black if I'm in orbit? Or will the yellow, green, and blue light just be surrounding me, drifting around? --69.229.108.39 (talk) 23:32, 4 February 2009 (UTC)
Sunlight on Saturn is 100 times dimmer than sunlight on earth. But a candle seen from 20 feet is about 1/100th as bright as the sun here on earth. Since you can see in color (albeit dimly) by candlelight - you ought to be able to see Saturn OK. REMINDER: Questions & clarifications should be posted to the RD page please. SteveBaker (talk) 00:27, 5 February 2009 (UTC)
Did you really need so many exclamation marks?
I think I contribute usefully to the refdesks, in answers and questions. Factual errors need to be corrected within the question section, of course. But if I err in any other way, I would rather be told so on my talkpage than shouted at in public. Wishing you a calm day , I remain, sir, yours etc. BrainyBabe (talk) 14:05, 10 February 2009 (UTC)
- I posted the above before I had heard of your accident. Best wishes for your recuperation. BrainyBabe (talk) 15:54, 12 February 2009 (UTC)
Accident
Steve, I hope the insurance trouble gets sorted out with a minimum of frustration. Glad to hear you're still in one piece. -- Captain Disdain (talk) 07:30, 11 February 2009 (UTC)
- Hear hear! And perhaps next time you should consider purchasing a reasonably-sized car. /ducks --Sean 13:45, 11 February 2009 (UTC)
- By "reasonably sized" you mean "a MINI" - right? The way that car protected me is simply incredible. The police, ambulance and tow-truck guy all said that they were amazed that any car could survive a full-speed rear-ending from a huge pickup truck towing an RV trailer and have the occupant merely be able to open the door and step out. So my next car will also be a MINI Cooper'S. SteveBaker (talk) 02:49, 13 February 2009 (UTC)
- Welcome to the "My MINI got smashed up and I walked away from it" club. (Want to make a userbox?) Glad to hear you're doing well. -- Coneslayer (talk) 13:59, 13 February 2009 (UTC)
- Good idea! Look at User:SteveBaker/Userboxes, I made three. SteveBaker (talk) 14:50, 13 February 2009 (UTC)
- Awesome! I think mine falls between "minor" and "totaled". I got T-boned by a New Beetle at 50+ mph, at the right rear wheel. Spun around 270 degrees. The passenger side curtain airbag deployed. The damage was about $15,000 and three months in the body shop, but it wasn't totaled. (I was fine... a little sore for a few days, but I was packing up for a house move, and half the soreness was probably from that.) -- Coneslayer (talk) 14:56, 13 February 2009 (UTC)
- Wow! Mine got hit five times by four different cars - and I was spun around 180 degrees! They officially totalled mine last week. Anyway - I made another userbox for serious (but not totalled) wrecks. SteveBaker (talk) 15:15, 13 February 2009 (UTC)
"there are two completely different colors that we call 'yellow'"
- Steve, you said on the Science reference desk that there are "two completely different colors that we call 'yellow'". Is there a graphic that you can show me that has both colors on it that humans cannot distinguish? Can this even be demonstrated on a computer monitor? 216.239.234.196 (talk) 14:22, 11 February 2009 (UTC)
- No. Because of the way computers and printers' inks work, such a graphic is impossible. But let me explain for you:
- Well, here is the deal - our eyes have three types of 'cone' cell - one sees red light, another kind sees green and a third sees blue. This means that none of our cone cells directly see 'yellow' - what actually happens is that pure yellow light (such as might be emitted by an old-fashioned low pressure sodium street lamp) weakly stimulates both the red and green cone cells (both of which are weakly sensitive to pure yellow light)...and our brains say "Aha! A mixture of red and green...that's yellow!".
- Now - what happens when you put yellow up on your computer monitor? Well, computer monitors (and TV's and cinema film and camera film and...etc) cheat. Their designers know that humans only have red, green and blue cone cells - so they make these devices so that they only emit pure red, green and blue light. (Take a magnifying glass and look closely at a white part of your computer monitor and you'll see three separate dots for each pixel...one red, one green and one blue). So when your computer displays the color yellow, it emits some red AND some green...but no light of the pure frequency that we'd call yellow. Our eyes can't tell the difference because "true" yellow from a streetlamp stimulates both red and green cone cells - and "fake" yellow from our computer monitors is a mixture of red and green that stimulates our eyes in precisely the same way.
- But even in nature, there are "yellow" objects that are reflecting mixtures of red and green light and others that are emitting true yellow light and yet others that are emitting complicated mixtures of true yellow, red and green. And to our pathetic human visual system, these all look "yellow". I'm not talking about subtly different SHADES of yellow...under the right circumstances a pure yellow and a fake yellow can be precisely identical in appearance - although they contain completely different light frequencies if (for example) you split the light up into a spectrum using a glass prism.
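- Here's a rough numerical sketch of that "two yellows" point, using invented Gaussian cone-sensitivity curves (the real human curves are broader and messier - treat this purely as an illustration of the principle, not real physiological data):
  # Toy demonstration of metamerism: a single spectral line at 589nm ("true"
  # yellow, like a sodium lamp) and a mixture of 620nm red plus 530nm green can
  # produce the same cone responses - one perceived color, two different spectra.
  import math

  CONE_PEAKS = (565.0, 535.0, 445.0)      # rough "red", "green", "blue" peaks (nm)

  def sensitivity(peak, wavelength, width=45.0):
      return math.exp(-((wavelength - peak) / width) ** 2)

  def cone_response(spectrum):
      """spectrum: list of (wavelength_nm, intensity) pairs -> (L, M, S) triple."""
      return tuple(
          sum(i * sensitivity(peak, w) for w, i in spectrum)
          for peak in CONE_PEAKS
      )

  L, M, _ = cone_response([(589.0, 1.0)])   # the sodium-lamp "true" yellow

  # Solve a 2x2 system for how much 620nm light plus 530nm light gives the same
  # L and M response (the S response is essentially zero for all of these lights).
  rL, rM, _ = cone_response([(620.0, 1.0)])
  gL, gM, _ = cone_response([(530.0, 1.0)])
  det = rL * gM - rM * gL
  red_amount = (L * gM - M * gL) / det
  green_amount = (rL * M - rM * L) / det

  print(cone_response([(589.0, 1.0)]))
  print(cone_response([(620.0, red_amount), (530.0, green_amount)]))
  # The two triples agree in L and M exactly, and both have a near-zero S.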
- Another way to think about this would be if light was like sound. Imagine a piano keyboard with middle-C being "Red", the E above middle-C being "Green" and the G above that being "Blue". Each key sounds a single frequency. True yellow light is between Red and Green in frequency - so it's like a 'D' on the piano keyboard. So the yellow you get from a sodium lamp is like hitting 'D' on the keyboard - and what we "see" is more like a chord containing the notes 'C' and 'E'. If you think about how different a 'D' sounds to a 'C+E' chord...that's how different the color yellow and the "other" color yellow are! You can take a photo of a sodium street lamp and put it up onto your computer monitor and (with the right adjustments) it'll look EXACTLY like the real street lamp to you...but it's wildly different.
- But even that doesn't quite show the full magnitude of this misapprehension that our eyes give to us.
- Suppose you were color blind and lacked the 'green' cones (which is actually pretty common). You'd see the color 'green' as a color that weakly stimulates your blue and red cells...just like someone with normal vision sees yellow as a color that weakly stimulates red and green cells. But if someone who is not color blind is presented with red+blue - they see that vivid pinkish-purple we call "Magenta". In a very real sense, Magenta is to Green what Yellow is to Yellow!! So if we were somehow able to have a fourth kind of cone cell that perceived yellow directly - then the yellow from a sodium lamp would look as different from the yellow picture of that same sodium lamp as a bright pink/purple looks compared to green!!!
- So - there are actually many kinds of yellow that look exactly the same to us which are "really" as different as magenta and green.
- Interestingly, goldfish and certain shrimps have more kinds of cone cell than we do - the mantis shrimps have as many as 12 different kinds. For them, there must be HUGE differences in "color" (technically "hue") between all of those 'kinds' of yellow.
- Even more interesting, there is a lady in the UK who has been shown to be a tetrachromat - meaning that she has FOUR kinds of cone cell in her eyes instead of three. She was found by a genetic study of certain kinds of colorblindness that predicted that the daughter (it's never a man) of a mother with one sort of colorblindness and a father with a different sort would have a one in four chance of being a tetrachromat. Since colorblindness of any kind is pretty rare in women - this is an exceedingly rare thing. Anyway, the lady worked in a shop selling wool knitting yarns - and she remarked about how she felt that nobody else was able to match colors of yellow, orange and lime-green yarns as well as she could. So it's VERY clear that she really can see more 'colors' than you or I...and that there truly are many kinds of yellow.
- Incidentally - I always use yellow as the color in my example because of the kicker about the tetrachromat...but this is also true of 'baby-blue' (or "Cyan") which is a mixture of green and blue AND a color somewhere between green and blue. It's also somewhat true (although in a much more complicated way) of colors like magenta, violet, purple, indigo and so forth.
- I've been blown away by this realisation since it came to me a couple of years ago. As kids, many of us wonder what it would be like if there was a "new" color - and as adults, we've believed that this is an impossible thing. Yet, there is a little old lady in England who sees colors we cannot even imagine. What would you give to see the world through her eyes for just a few minutes?!
- As a final note - our blue cone cells are actually able to see all the way up into the ultra-violet - but UV light is filtered out by the lens of our eyes in order to protect the rod and cone cells from sunburn(!) so we really don't see into the ultraviolet at all. People who have undergone cataract surgery sometimes have that part of the lens that does the filtering removed. My mother is one of them. When that happens, you perceive UV light as if it were merely blue. Since bees can see into the UV, many flowers have evolved patterns of spots and lines specifically to help the bee home in on the nectar. In most cases, we can't see them...but after cataract surgery, many patients (my mother included) are amazed to find that many very plain flowers now have blue spots and stripes on them that nobody else can see!
- SteveBaker (talk) 19:05, 11 February 2009 (UTC)
- Fascinating stuff. I'd read about this once before, and it still amazes me. This has huge implications on the use of LED's for lighting. Apparently colored lights on colored objects don't behave the same for your Yellow #1 and Yellow #2...I never quite understood why, but now I get it. Guyonthesubway (talk) 15:01, 12 February 2009 (UTC)
- Yes - if you had some hypothetical object that could reflect 'true' yellow light strongly but not reflect much red or green - then it would look yellow in natural sunlight but almost black under red+green+blue 'fake white' light such as you get from 'white' LED's. In fact though, objects mostly reflect a range of frequencies close to their 'true color' - so most yellow objects are also able to reflect some red and some green - so they do still look yellow in LED light. Lighting designers are aware of these things though - and LED's that are designed for lighting have phosphor coatings that convert the light from the diode into a wider range of frequencies so that they more closely match natural sunlight. SteveBaker (talk) 18:51, 12 February 2009 (UTC)
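- To make that concrete, here's a toy calculation with invented spectra (simple step functions, nothing like real measured data): a surface that reflects strongly only in a narrow band around 580nm gets plenty of light back from a flat "sunlight-like" source, but almost nothing from a source that emits just three narrow red/green/blue spikes:
  # Toy illuminant/reflectance calculation. All spectra are invented step
  # functions, sampled every 10nm from 400 to 700nm, purely for illustration.
  WAVELENGTHS = range(400, 701, 10)

  def reflected_power(illuminant, reflectance):
      return sum(illuminant(w) * reflectance(w) for w in WAVELENGTHS)

  def sunlight_like(w):
      return 1.0                                   # flat "white" across the band

  def narrow_rgb(w):
      return 1.0 if w in (450, 540, 640) else 0.0  # three narrow spikes only

  def yellow_only_surface(w):
      return 1.0 if 570 <= w <= 590 else 0.05      # reflects mostly around 580nm

  print(reflected_power(sunlight_like, yellow_only_surface))  # plenty of light back
  print(reflected_power(narrow_rgb, yellow_only_surface))     # very little - looks dark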
- Wow, thank you for such a detailed explanation. A Quest For Knowledge (talk) 16:29, 16 February 2009 (UTC)
Hi
Just read about your prang - so glad you are not badly hurt, I'm amazed at how well the MINI protected you. DuncanHill (talk) 16:35, 12 February 2009 (UTC)
- Yeah - the accident-resistance of the MINI is well-known but it was still pretty amazing. I'm off work until Monday - then I'll be working half my normal hours for another week. Between the muscle relaxant and the vicodin - I'm only conscious for a few hours a day and for much of that time I'm in happy-happy-land. I scraped my forehead on the roof liner so I have this gigantic scabbed-over friction burn that makes me look like someone from Klingon high-command!
- Oh well - it'll all be OK soon and it looks like I'll at least get a new car out of it.
- Well take things easy & don't overdo the vicodin - we don't want you turning into House! DuncanHill (talk) 21:27, 12 February 2009 (UTC)
- That was my first thought when the Doctor prescribed it! Ah - the power of the media! SteveBaker (talk) 02:43, 13 February 2009 (UTC)
Just to be clear.
Keep up the good work Steve - every world needs a good character and you are ours here in the Ref Desk world. I wish you well and hope your motoring troubles are soon a memory. Richard Avery (talk) 23:13, 12 February 2009 (UTC)
- Well, I'm holding out hope that I'll get a new car out of it. The wreck also crunched a laptop and a camcorder into rubble - so when the great day of settlement comes around, there should be some new toys to play with! SteveBaker (talk) 02:44, 13 February 2009 (UTC)
Darwin redux
Hey, Steve -- this week's TIME Magazine (cover date February 23) has three pages (including artwork) entitled "Evolving Darwin", which I can report says many of the things you've been saying for a while now. (Article subtitle: "He recognized how life-forms adapt and survive. But only today are scientists uncovering many of evolution's deepest secrets")
With your deep and abiding interest in the material, I have no doubt you'll run out (yet tonight, probably) and pick up a copy :-) --DaHorsesMouth (talk) 03:07, 15 February 2009 (UTC)
- I'll check it out - thanks! SteveBaker (talk) 03:10, 15 February 2009 (UTC)
Happy SteveBaker's Day!
SteveBaker has been identified as an Awesome Wikipedian, |
Oooh! Thank you! Will there be cake? SteveBaker (talk) 14:52, 16 February 2009 (UTC)
Thanks!
The Reference Desk Barnstar
Thanks for answering my Guitar Hero Karaoke question on the Reference Desk! --Ye Olde Luke (talk) 06:53, 17 February 2009 (UTC) |
Marmite
You can get it on the internet! [2] DuncanHill (talk) 19:03, 19 February 2009 (UTC)
- Yes, I was kidding. You can actually get it in 'The British Supermarket' in Grapevine, TX - but that's a 400 mile round-trip now we've moved to the Austin area. I have resorted to mail-order from that store - but at about N times the normal price. Where 'N' is a large number. SteveBaker (talk) 19:32, 19 February 2009 (UTC)
The voice of reason
Keep up the good work. TenOfAllTrades(talk) 21:27, 28 February 2009 (UTC)
- Um...thanks! I'll try. SteveBaker (talk) 21:35, 28 February 2009 (UTC)
Your knife
Don't know if you're still monitoring your refdesk Q so I put this here. If you click "Maintenance" at the bottom here [3] and then on the pictures at the bottom of that page, it will get you to some instructions for taking it apart. Once the plastic parts are off you could try a cotton swab and "Liquid Nails Remover" or "Adhesive and Caulk remover", available at Ace Hardware (and the others too, I guess). Warning! That stuff will eat everything that's not nailed down. Try it in a small spot first, let it sit for just a little while, and see how it works. Good luck. BTW. Did you end up building that indoor garden or did you buy it? (Lisa4edit) 76.97.245.5 (talk) 03:09, 1 March 2009 (UTC)
- Thanks - it really says what people already said - which is that you can't get the side plates off without breaking them (which isn't so terrible because you can order spares) - there is some information about replacing the little springs that work the scissors and the pliers - but for that you need a bunch of specialist tools. There is nothing to suggest that you can actually dismantle the knife completely. The problem with solvents is that I can't get them into all of the tiny little spaces without dropping the entire knife into the solvents to soak - but that (I fear) will wreck the few plastic gadgets such as the magnifying glass. SteveBaker (talk) 22:33, 1 March 2009 (UTC)
- I have a friend who used his Victorinox "Swiss Champ" knife as a hammer, ruining it. He sent it back to the manufacturer and they gave him a new knife for free. This was a few years ago, I don't know if their warranty is still that generous. APL (talk) 22:51, 1 March 2009 (UTC)
- Wow! Well, mine's a bit out of warranty - it's about 30 years old...and I've probably used mine as a hammer once or twice! SteveBaker (talk) 23:07, 1 March 2009 (UTC)