Wikipedia:Reference desk/Archives/Science/2017 July 26
Welcome to the Wikipedia Science Reference Desk Archives. The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
July 26
First, there is a problem with question b). To find e.g. x(t4), we need v(t3 + 0.5Δt). To find v(t3 + 0.5Δt) we need a(t3). To find a(t3) we need v(t3). But we are asked to calculate the steps with time interval Δt. Even with interval 0.5Δt the errors appear (xls, png).
Second, there are two functions whose graphs are similar to the Excel graph of x(t): and . For different m and k I see that when k = 1 & m = 2 the max. x increases to 2 units, and when k = 2 & m = 1 the max. x decreases to 0.5 units. So the equation must look like x = (m/k) f(t).
By means of guessing, from eq. , it seems that satisfies.
Username160611000000 (talk) 10:17, 26 July 2017 (UTC)
- It seems that when I take a double interval and the arithmetic mean, I get twice the accuracy (over double the time): (xls, png). But I think that is not enough, as the acceleration changes as rapidly as the speed, and maybe we have to account for this somehow (perhaps with a formula analogous to 0.5at² for distance). Username160611000000 (talk) 14:50, 26 July 2017 (UTC)
- The numerical methods that Feynman is illustrating in sections 9-6 and 9-7 are the Euler method and the midpoint method. For an interval of size h, the local error in the Euler method is of order O(h²), whereas the local error in the midpoint method is O(h³). Gandalf61 (talk) 15:54, 26 July 2017 (UTC)
- So for Δt = 0.1 the midpoint method is 10 times more accurate than Euler's. But it's impossible to use the midpoint method in this exercise. Username160611000000 (talk) 18:20, 26 July 2017 (UTC)
- Combining the graph and the formula F = -kv, I think the solution of the equation of motion must be x(t) = (m·v0/k)·(1 − e^(−kt/m)). Is it correct? Username160611000000 (talk) 18:12, 26 July 2017 (UTC)
- Yes, that is the correct exact solution. If x(t) = (m·v0/k)·(1 − e^(−kt/m)) then v = dx/dt = v0·e^(−kt/m) and a = dv/dt = −(k/m)·v0·e^(−kt/m) = −(k/m)·v, so F = m·a = −k·v, as required.
- Gandalf61 (talk) 12:48, 27 July 2017 (UTC)
- It seems that the simplest Euler method gives much better accuracy (xls, png) than my attempt where I use v(t + 2Δt) = v(t) + 2Δt·a(t + Δt). I also noticed that if I calculate the speed as v(t + Δt) = v(t) + Δt·a(t) + 0.5·Δa·Δt, where Δa = (−k/m)·Δv = (−k/m)·Δt·a(t), then the accuracy rises (xls, png), but not as much as with the Euler method. Username160611000000 (talk) 18:36, 27 July 2017 (UTC)
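- For anyone who wants to reproduce the comparison above outside Excel, here is a minimal sketch in Python of the two schemes Gandalf61 names, checked against the exact solution discussed above. The values of m, k, v0 and Δt are arbitrary illustrative choices (not the ones from the exercise), and the midpoint step is written in its standard explicit form rather than Feynman's half-interval bookkeeping.

```python
import math

# Illustrative parameters only (not the values from the exercise).
m, k, v0 = 2.0, 1.0, 1.0
dt, t_end = 0.1, 5.0


def accel(v):
    """Acceleration for the drag law F = -k*v."""
    return -(k / m) * v


def euler(dt, t_end):
    """Forward Euler for the coupled (x, v) system; local error O(dt^2)."""
    t, x, v = 0.0, 0.0, v0
    while t < t_end - 1e-9:
        x += dt * v
        v += dt * accel(v)
        t += dt
    return x


def midpoint(dt, t_end):
    """Explicit midpoint method: advance with slopes estimated at t + dt/2; local error O(dt^3)."""
    t, x, v = 0.0, 0.0, v0
    while t < t_end - 1e-9:
        v_half = v + 0.5 * dt * accel(v)   # velocity estimate at the half step
        x += dt * v_half
        v += dt * accel(v_half)
        t += dt
    return x


def exact(t):
    """Exact solution x(t) = (m*v0/k)*(1 - exp(-k*t/m))."""
    return (m * v0 / k) * (1.0 - math.exp(-k * t / m))


x_true = exact(t_end)
print(f"exact    x({t_end}) = {x_true:.6f}")
print(f"Euler    |error|  = {abs(euler(dt, t_end) - x_true):.2e}")
print(f"midpoint |error|  = {abs(midpoint(dt, t_end) - x_true):.2e}")
```

With Δt = 0.1 the printed midpoint error comes out much smaller than the Euler error, consistent with the local error orders quoted above.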
- I can't solve question a). It seems that Feynman is expecting a formula . So we must have the following relating formulas: and .
But combining and , we can write . It does not agree with . Why?
And I am also not sure whether we should convert and to different units. Username160611000000 (talk) 10:40, 30 July 2017 (UTC)
- By plotting graphs of and with arbitrary values of m, k, v0 (png), I found that for t = m/k sec, 2m/k sec, etc. the function f1 takes the values (m/k)·f2(1), (m/k)·f2(2).
Therefore, it is possible to call m/k sec as 1 sec' and (m/k)·f2(1) meters as f2(1) meters', or (m/k) meters = 1 meter'. Then, by calculating the pairs (x'; t') and performing the inverse conversion, we get the correct pairs (x; t).
I still can't understand the following: e.g. in the exponent, to eliminate the factor we should use a formula ... Username160611000000 (talk) 16:03, 30 July 2017 (UTC)
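- One possible way to read question a), sketched in LaTeX: assuming the equation of motion m dv/dt = -kv and the solution form given earlier in the thread, measuring time in units of m/k and distance in units of m·v0/k collapses the solution onto a single parameter-free curve. This is only an interpretation of what the exercise seems to be asking, not Feynman's own derivation.

```latex
% Sketch of the nondimensionalisation (assumes m dv/dt = -kv and the solution above).
\documentclass{article}
\usepackage{amsmath}
\begin{document}
The equation of motion and its solution:
\[
  m\frac{dv}{dt} = -kv, \qquad x(t) = \frac{m v_0}{k}\left(1 - e^{-kt/m}\right).
\]
Introduce scaled (primed) variables:
\[
  t' = \frac{k}{m}\,t, \qquad x' = \frac{k}{m v_0}\,x .
\]
Substituting gives a parameter-free curve,
\[
  x' = 1 - e^{-t'},
\]
so measuring time in units of $m/k$ and distance in units of $m v_0/k$ removes
$m$, $k$ and $v_0$ from the problem; this is why the plotted curves agree at
$t = m/k,\ 2m/k,\ \dots$
\end{document}
```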
Where does this drop of water push down with a million-g force?
Fun article at phys.org about Buller drops (Reginald Buller) that launch fungal spores. They show an artificial system in which one drop of water joins with another and launches it into the air. The energy comes from surface tension. However -- there also has to be conservation of momentum, action and reaction. They say that the force on the drop is actually "a million g's". Yet it looks like only a tiny part of the drop touches the substrate. It's as if there is a truss inside the thing that would put a skyscraper to shame, and I can't even tell where it presses down. Wnt (talk) 11:08, 26 July 2017 (UTC)
- The force is small. g is an acceleration. When you divide a small thing (force) by a tiny thing (mass) the result (acceleration) can still be high.
- One of my favourite books is Pennycuick, C.J. (1992). Newton Rules Biology. Oxford University Press. ISBN 0198540213. It covers lots of this sort of thing about the fundamental and unavoidable scaling laws: why elephants can't jump (big things can't have legs strong enough), why fleas are so bad at jumping (when you're that light, jumping becomes so easy in comparison that fleas aren't actually that special) and why bees can't fly (at their scale the air is viscous enough to allow them to swim). Andy Dingley (talk) 11:27, 26 July 2017 (UTC)
- I should have been more careful in my wording; yes, the force is a million g's times the mass of the drop. But that still implies that the drop can become, in a sense, one million times heavier than it normally is, yet it still does not wet or even deform to fit against the substrate! I suppose I am guilty of a scaling error in not appreciating that the force required to move a given volume drops per the square of scale, and I also don't know how small these drops really are. But it seems surprising that water drops could be so solid that they do not visibly react to the force. Also, I still don't actually know even which drop presses down. Wnt (talk) 12:53, 26 July 2017 (UTC)
- Don't think about pressing down, it's done by pulling.
- The spores are flat sided. That's energetically a poor shape for a surface (considering surface tension). Place a small drop alongside and lower than it, and the drop merges to the liquid layer covering the spore. The small radius sphere, and the constrained flat surface, are replaced by an energetically favoured larger radius. The attached drop CoG is pulled upwards by this shrinking surface. The overall effect is like one of those permanent magnet railguns (Youtube will have videos) where a magnet is accelerated by having others hit it from behind. By Newton there has to be a reaction force, but that's the spore pushing downwards, not the drop. Andy Dingley (talk) 14:20, 26 July 2017 (UTC)
- The very nice video shows the water movement: you can see it first creep up the spore (it must push the spore downward meanwhile, but the spore just cannot go down because of the floor), then it continues upward on its momentum, pulling the spore up with it.
- 1 million g is impressive; you can get it by accelerating a thing from zero to 1 m/s (not a tremendous speed) in less than 0.1 microsecond. For a 1 microgram thing it does mean that it becomes a million times heavier than it is, but it still only endures a force equal to the weight of 1 gram, which is not a problem even for a soft substance (just imagine a 1-gram pin standing on its point: it won't break your skin, you have to push much harder for that). Gem fr (talk) 23:43, 26 July 2017 (UTC)
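- As a quick arithmetic check of the figures above (the 1 m/s, 0.1 microsecond and 1 microgram values are Gem fr's illustrative numbers, not measurements from the paper), a short Python sketch:

```python
# Rough sanity check of the illustrative numbers above (not data from the paper).
g = 9.81          # m/s^2, standard gravity
dv = 1.0          # m/s, final speed of the drop
dt = 1e-7         # s, i.e. 0.1 microsecond
mass = 1e-9       # kg, i.e. 1 microgram

a = dv / dt                            # acceleration required
force = mass * a                       # Newton's second law, F = m*a
equivalent_grams = force / g * 1000    # mass (in grams) whose weight equals that force

print(f"acceleration ≈ {a:.1e} m/s^2, i.e. about {a / g:.1e} g")
print(f"force on the drop ≈ {force:.1e} N")
print(f"that force equals the weight of about {equivalent_grams:.2f} gram")
```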
Does quickly removing a plaster/band-aid/waxing strip result in quantifiably less pain than doing it more cautiously?
Does quickly removing a plaster/band-aid/waxing strip result in quantifiably less pain than doing it more cautiously? --129.215.47.59 (talk) 11:26, 26 July 2017 (UTC)
- No-one seems to agree on a single, simple reason, but there are plenty of contenders.
- Anticipation (waiting to do it is worse than doing it), duration and mostly speed - there's a limit on how much pain you can notice per second. If you exceed that, you don't feel it as any worse than you would by doing it slowly, but it's certainly over more quickly. Andy Dingley (talk) 11:30, 26 July 2017 (UTC)
- What does your own experience tell you? ←Baseball Bugs What's up, Doc? carrots→ 11:30, 26 July 2017 (UTC)
- To answer this question, it is important to know that the only accepted way of quantifying pain is by a subjective scale, for example, "On a scale of 1 to 10, with 10 being the worst pain you can imagine, how would you rate this?" There is no objective tool available for measuring pain levels. So the question basically comes down to whether people say the pain is less; and the answer is yes, most people do. Looie496 (talk) 13:40, 26 July 2017 (UTC)
- There are a number of Pain scales for self-reporting. The Wikipedia article about Pain notes that fMRI brain scanning has been used to measure pain, giving good correlations with self-reported pain. Blooteuth (talk) 14:22, 26 July 2017 (UTC)
- On my own tedious/hate/pain scale, ripping ECG patches off is less annoying than tearing them off slowly. The most important thing is shaving before they are applied. Greglocock (talk) 14:37, 26 July 2017 (UTC)
- Quantifiably? No. It's not just that Looie is 100% correct in saying that we lack the ability to examine this question empirically; his comment refers to the fact that pain, like all sensation, is a type of qualia, which is to say that it is an experiential phenomenon and not a physical one, and thus there are no metrics to measure or even qualify an "amount" of pain in a scientific sense. That said, there have been many, many studies which attempt to measure pain via proxies which we intuitively associate with it. The methodologies vary from self-reports, to physiological responses, to direct examination of the neural networks known to be associated with pain. However, none to my knowledge has ever looked at the much narrower issue of comparing the fast vs. slow methods of band-aid removal, so you are out of luck for even this weaker form of analysis. There has been much research and speculation about the element of anticipation (and, more generally, the effects of mental focus upon pain), but it's far too generalized to be of much use in answering the question at hand. Snow let's rap 21:42, 26 July 2017 (UTC)
- This assumption, " pain, like all sensation, is a type of qualia, which is to say that it is an experiential phenomena and not a physical one, and thus there are no metrics to measure or even qualify an "amount" of pain in a scientific sense" is based on a hidden and false dualistic premise. The hidden assumption is that there can be no physical explanation for the mental, but that simply represents our current state of ignorance, and a long tradition of assuming the soul is separate from the flesh as if we were ghosts trapped in zombies.
- Vision expert Stephen E. Palmer has written on the phenomenon of reverse trichromacy (Color, consciousness, and the isomorphism constraint), where rare individuals who have both red-green and green-red color blindness have the ability to distinguish all the hues normal trichromats do, but they see a very broad range of yellows with no unique focal yellow, whereas their range of blues is extremely narrow. This is the opposite of normal trichromats, for whom pure yellow is unique while a large portion of our color gamut is blue. This is a physically verifiable explanation for a difference in qualia.
- It may be true that qualia can't be shared, but they are not inherently incomprehensible, just not yet understood. μηδείς (talk) 00:57, 27 July 2017 (UTC)
- I've favored here a dualistic version of qualia based on causality violation, but even such a position is not inconsistent with physical explanations for every intermediate stage in sensation. And the general idea with pain scales is that even if qualia is unquantifiable, the way that people express reactions to qualia (such as rating on a scale of 1 to 10) is quantifiable in aggregate. Wnt (talk) 09:33, 27 July 2017 (UTC)
- The issue is not that qualia are not quantifiable, it's that qualia are not transferable. That is, for the pain scale, a person can report their own pain as a "5" or a "2" on some pain scale, and for them that means something; but the physical thing causing the pain for one person to report a "5" is not consistent from person to person. Or for colors, for me red is always red, but I have no way of knowing that my red looks like your red. --Jayron32 10:46, 27 July 2017 (UTC)
- Actually, the "red" example is very good, and it just disproves your point: color and sound (Musical note) are qualia for lay people (those who don't use precise technical instruments able to measure frequency), but they are transferable nonetheless. It is true that you have no way to be sure that your red (your B flat) is the same as mine, but you do know that when I tell you something is red (my red), it will be red (your red) for you, too. You'd be surprised how well qualia scales are transferable, when properly done (if they didn't transfer, they wouldn't exist). Gem fr (talk) 12:22, 27 July 2017 (UTC)
- Pain is actually more complex than other qualities such as color, because in addition to its basic sensory dimension, it also has an affective dimension: pain is intrinsically unpleasant. The affective dimension is at least partly independent of the sensory dimension -- for example, opiate drugs act to reduce the unpleasantness of pain without greatly altering its sensory quality. Ronald Melzack argued that pain actually has three basic dimensions, which he called sensory, affective, and cognitive-evaluative. Looie496 (talk) 14:06, 27 July 2017 (UTC)
- There is some transferability in the sense that almost everybody agrees that getting kicked in the thigh isn't as bad as getting kicked in the nuts. So if you're marketing the Pocket Intimidator and you can point to a study where researchers hit a hundred people with it and most of them said "not quite as bad as getting kicked in the nuts", now you have something - and you can tell that it is sort of quantifiably less than the deluxe version that is "about the same as getting kicked in the nuts". Wnt (talk) 16:58, 27 July 2017 (UTC)
- LOL. Gem fr (talk) 17:07, 27 July 2017 (UTC)
- The problem is that my being kicked in the nuts may only be as painful to me as your getting kicked in the thigh is to you, and there's no way to quantify such a difference; that is, the comparison of pain between people is not possible in any way. I may describe a pain as "getting kicked in the nuts" and you may imagine a pain far worse, or far less painful, than I actually experience, and there's no way to know. It's the same with color; yes, when I describe red to you, we can consistently agree that red is red (in the same way that we all agree that genital injuries hurt more than muscular injuries), however there's no way to know that the way I experience red is the same as the way you experience red. That's the point of non-transferability, and it's a real problem with pain management; does a doctor prescribe powerful opioids because the pain is otherwise unmanageable for me, but open me up to a lifetime of brutal addiction, or do they prescribe less effective, but safer, painkillers because my pain would be manageable for me with them? --Jayron32 12:17, 28 July 2017 (UTC)
- Also, [1] --Jayron32 12:21, 28 July 2017 (UTC)
- Your alleged problem is true of just about everything (cars, food, chicks, etc. Everything), leading to solipsism. It is a philosophical problem, but not a practical one: we don't need to be sure that your red and mine, your pain and mine, are exactly the same; we just need to know that when I say red you won't say green, and pain (or any other personal feeling, for that matter) is just perfect for that, BECAUSE you have no way to tell otherwise (while you could, regarding a red). So, if I say my pain is a 4 on a scale from 1 to 10, it is a 4, period. And the doctor can and will assign me the painkiller associated with 4, not the one associated with 8 (higher, but not necessarily double: the pain scale is not supposed to be linear), or will just not ask (my dentist doesn't ask, she just prescribes the usual painkiller for toothache). Any difference in result from what is supposed to happen will be attributed to individual differences in sensitivity to painkillers, and adjusted accordingly, as is done for every other medication (you know: the doctor prescribes a pill a day, then tells you to go up to a maximum of ... if this happens, or down if that). In hospitals doctors have gone even simpler: set a fairly safe maximum dose with regard to addiction and other possible side effects, and let the patient self-administer as needed, as experience shows that most of them inject themselves with even smaller doses, for enough (not complete) pain relief. Gem fr (talk) 15:04, 28 July 2017 (UTC)
- If it were so simple, we wouldn't have the problem with the Opioid epidemic in North America we have now; self-medication only works a) with highly adjustable dosing and b) in a managed care environment where doctors could intervene if needed. Most people hooked on opioids started taking them under a doctor's advice, were not well regulated (i.e. self-dosed as needed using pills rather than injectables), and switched to non-prescription opioids when their addiction worsened. While it would be simplistic to lay the complex causes of such a problem on a single antecedent, certainly part of the problem is that the treatment of pain is not as simple as "He called it a '4' so I can just give him X drug and it'll take care of itself the same way every time". --Jayron32 15:21, 28 July 2017 (UTC)
- You cite a real problem - we are looking for a "standard candle" with pain, but how do we tell if the standard candle itself varies? But one can still make comparisons based on other outcomes. The most relevant article I can think of with this is Red hair, which summarizes some putative differences from the literature. It is, to be sure, not easy. But you can hand folks some whippets of nitrous and see how much they have to inhale before they stop complaining when you kick them in the nuts. (I really should look up how the scientific studies were done...) Wnt (talk) 20:15, 28 July 2017 (UTC)
- Medeis, how on earth do you get "dualism" out of anything I said? The phenomenon I described is one of the most pervasive and well-recognized issues in cognitive science (and ontology, theory of mind, and science broadly), known as the hard problem of consciousness. It is widely accepted in the studies of neuroscience and cognitive psychology that no one, from ancient sophists through modern researchers with the most advanced contemporary technology and methodologies, has ever been able to explain how the experience of consciousness arises out of physical matter. Not only is that not dualism, it's the very definition of the diametric opposite of dualism. A dualist can always appeal to some mystic explanation, even if it is one that an empiricist can't accept. The hard problem is a problem for researchers specifically because of the assumption that consciousness should not be unique amongst observed phenomena in the universe in having no physical cause or explanation. You've got it completely backwards. Saying that we don't have the capacity (in terms of understanding, methodology, or even cognitive limitation) to quantify something is not the same thing as appealing to the notion that it doesn't arise from physical matter. Utter nonsense.
- There are many things that happen to be beyond the scope of our understanding--this happens to be chief amongst them, in terms of things humans have pondered for a long time and come up with very little to explain, but it's not in principle different from any other phenomena we lack the tools to explain or even properly define. The only caveat to that statement being that some cognitive scientists believe that the hard problem may be fundamentally different from even the most complex scientific problems involving the physical sciences, because our brain evolved in a very specific context where its perceptual and problem-solving heuristics favour certain types of physical problems; put otherwise, we evolved in a context where spatial mechanics and principles of physical causality exerted adaptive pressure that gave us a cognitive array that could be leveraged to eventually understand certain kinds of complex questions regarding the nature of reality, while leaving us with blind spots with regard to certain aspects of our own basic reality vis-a-vis perception and cognition.
- In other words, we may be smart enough to discover/infer the existence of subatomic particles ad nauseam and still be destined never to understand why we perceive anything. Or more illustratively, we could be capable of explaining every aspect of how the physiology of the sensory organs and our brains reacts to physical stimuli, and still not entirely understand how it gives rise to the experience of sensation. It's even entirely possible that we could stay in that state of ignorance forever, as a species. And there's absolutely no appeal to dualism, mysticism, or spirituality necessary to accept that basic status of this field of inquiry.
- There's even a (completely feasible) theory that no "thinking thing" can ever completely understand itself, because systems can only be described fully by another system of higher complexity. Take a neural net, for example; it can't define the parameters of all of its nodes at once, because some of them would be used in the very process, leading to confused results. Another, slightly larger net could define the relationships and firing status of every node in that smaller neural net at any given instant, but then that larger neural net would need yet another, even larger one to define it. It's a "turtles all the way up" kind of argument, with increasingly complex forms of consciousness, each of which is nevertheless, by definition, incapable of completely understanding itself. Snow let's rap 01:08, 28 July 2017 (UTC)
- Also, Medeis, your trichromacy example not only does not relate to your basic thesis, but actually reflects a fundamental confusion about the interplay of the phenomena here and how they are described in a scientific inquiry. Saying that a change in a physical mechanism involved in vision leads to an effect upon what is perceived is a trivial and obvious observation. Of course it does (assuming the difference is significant enough not to be filtered out by higher-level processing in the visual cognition system); why on earth would anyone expect otherwise? Saying that changes to the physical system lead to a difference in perception gets you not a bit closer to being able to explain the issue underlying the OP's question: namely, how the experience of that colour arises from the physical matter that makes up the perceiving system. That's the entire reason we have the term "the hard problem"--to distinguish it from all other types of problems that arise from simply not knowing enough about the basic neurophysiological mechanisms. Some of these problems are not "easy" at all, and may take centuries more of concerted study with increasingly complex and precise techniques before we understand all of the biophysical and organizational properties of the brain that give rise to them. But they are still considered "easy" problems by comparison to the hard problem, which is considered so fundamentally difficult because it defies our capabilities to establish even the basics of how it (qualia) happens. Snow let's rap 01:55, 28 July 2017 (UTC)
- There's no answer because "more" and "less" are subjective. Firstly, there are two distinct components of pain - intensity and duration. People make their own individual judgements over which they prefer. Many make choices much more consequential than band-aids based on these tradeoffs. Some people will live with chronic pain rather than have surgery/rehab, and vice versa. It's subjective when a choice is involved. --DHeyward (talk) 18:16, 27 July 2017 (UTC)
- The point of the reversed red/green trichromacy issue is not one of transferability, but that a physically verifiable cause (two genetic mutations) can be shown to disprove the notion that one's colors could be "reversed" without it being verifiable. It has long been said that it would not be possible to tell if one's colours (red/green; blue/yellow; white/black) were reversed without notice. But examples of such people have been discovered. Imagine pure lemon yellow, and then try to imagine "pure" blue. Yellow is the supposed opposite of blue, but there are really only two off-yellow shades of yellow (greenish or whitish), while there are many more shades of pure and off-blue. For reversed trichromats this is reversed. This can be tested objectively both psychologically and genetically, which shows that the dualistic dichotomy between qualia and the physical is not ontologically primary. μηδείς (talk) 00:41, 28 July 2017 (UTC)
- People may have skin of greater or lesser strength, but I have found that if I pull off Bandaid type bandages, adhesive tape, or other medical adhesive pads quickly rather than pulling them off slowly, they take the skin with them, leaving a raw patch of the underlying tissue. This is painful for a long time, takes days to heal, has a risk of infection, and leaves a scar. I have typical skin so far as I know. Edison (talk) 17:14, 28 July 2017 (UTC)
- I always do this cautiously, such that no hair is put under tension; I can free the hair from the plaster and avoid feeling any pain :) . Count Iblis (talk) 11:49, 29 July 2017 (UTC)
- Note that the qualia we experience should be understood as the computational state of the software implemented by the brain. Any changes in the hardware that would render the same computation would leave the person and his/her experiences invariant. E.g. simulating Snow Rise's brain in a virtual environment where he is discussing things with Medeis would render exactly the same experience that Snow Rise would have if that were happening in the real world, even though there is now no brain involved, just the running of an algorithm. Count Iblis (talk) 12:16, 29 July 2017 (UTC)
- Just so. Of course, there is the question of the necessary complexity which the alternative hardware would have to achieve in order to perfectly emulate every last biophysical variable which is involved in defining any one state of a brain at any one moment in time. The virtualization would be many orders of magnitude more complex than most people would probably assume, based on the apparently speedy developments in brain science over the last half century. It may in fact be impossible. However, even in your brain in a vat + scenario (+ because you are talking about not just simulating the stimuli, but also the system that processes it), note that it would still leave us facing the hard problem. We've just substituted one physical medium (some kind of designed hardware) for another (the brain). It still leaves us with this fundamental question we have never been able to even so much as scratch the surface of: why do we have the subjective experience of consciousness arising out of systems that we can only scientifically/empirically describe in terms of stimuli-response and other physical properties? We assume that virtual Snow would experience those thoughts and feelings exactly as I do, but we have no empirical methodology to ever prove it (or for that matter, to know if you and I actually see remotely the same thing even if we both are looking at it). We tend to treat the idea as silly or the product of narcissism, or delusion, or overly-wrought sophistry whenever someone poses the question "What if I'm the only 'real' person who actually thinks, and everyone else is a philosophical zombie?". But (counter-intuitive as it seems) the truth is that, insofar as what thousands of years of concerted examination (and now modern scientific research) can supply, there's not a person who ever lived who could be proven wrong if they made that assumption. We're just used to living with the suggestion that the physical proxies we associate with consciousness (similar brain designs, and so forth) suggest that if one of us is a conscious thing, then most or all of us are too. But insofar as we have not established the causal relationship between qualia and physical processes, that's actually a scientifically invalid assumption, notwithstanding the fact that, impressionistically, it feels right. Snow let's rap 00:54, 30 July 2017 (UTC)
- That's utter nonsense. The brain is not digital, does not run programs, is not itself mathematical, runs on no algorithms; there is no such evidence, and the notions contradict all we know about neurophysiology. Nerves and action potentials are not wires and currents. A simulation of consciousness is just as conscious as a simulation of a hurricane is wet. The brain is analog and dynamic in structure, and harmonic in function--it runs no programs. Any machine that could fully simulate a brain would be orders of magnitude more complex than a brain, just as any supposed machine that could run a simulation of the universe would be orders of magnitude more complex than the universe. Such fantasies explain nothing, they are as fruitful and naive as saying some guy with a beard on a throne in the sky created the universe. They simply beg the question in the literal sense of that phrase. μηδείς (talk) 01:13, 30 July 2017 (UTC)
- Medeis, I suggest you read the link which CI included with his post, because I think you may have misread his point. Of course a brain is not digital; I don't see anything that the good Count said that would suggest it was. But with sufficient hardware, a digital machine absolutely can perfectly simulate a neural network (they've been doing that almost as long as we've had computers, as the term applies in general parlance). As you note (and as I went on about at length immediately above), the machine would have to be immensely complicated (also, in all likelihood, immense in physical size). Even at some speculative point of time far in the future where the human race is creating megastructures of astronomical proportions and other wonders, it could well prove to be a practical impossibility to create a machine capable of simulating a brain down to the atomic scale (which is what you would need to do in order to truly reflect all of the biophysical properties necessary to create an accurate model).
- But that doesn't mean the speculation isn't useful. In fact, exactly the scenario CI proposes has been pondered by serious cognitive scientists for a long while. When you're dealing with phenomena that advance so far beyond the horizons of our technical ability to measure or even model, thought experiments are often one of the few tools which open the issues up to examination, however imperfectly. Scientists do this all the time with physics and cosmology, and I can assure you, it's not uncommon in cognitive psychology/theory of mind discussions between leading experts. Those experts find the discussion quite "fruitful", whatever your impressions as an enthusiastic amateur. And I can tell you this much for certain, your statement that "A simulation of consciousness is just as conscious as a simulation of a hurricane is wet." is about as unscientific as they come. In order for an empirical claim to have any kind of validity, it must be falsifiable, which your statement decidedly is not.
- And the brain absolutely is mathematical, as is any physical system. It may not run all of its operations through straightforward arithmetic, but given sufficient time and processing, you could explain all of the activity of its constituent parts in mathematical terms (it is, after all, a network of nodes, however complex it may be). There are in fact entire subfields of neuroscience devoted to analyzing the brain on exactly this level. Further, one of the leading modern theories of cognition is the computational model of the mind. Snow let's rap 02:13, 30 July 2017 (UTC)
- And since we're on the topic, you might be interested in Functionalism (philosophy of mind) and multiple realizability (particularly the details on functional isomorphism), because plenty of researchers believe that it's very much possible that you wouldn't even have to replicate an identical brain state to arrive at the same mental state. I am agnostic as to this view, personally, neither convinced that it is likely nor that it can be disproven. But it remains a live debate. Snow let's rap 02:38, 30 July 2017 (UTC)
- I don't see much point in further argument. When you say the brain is mathematical because aspects of it can be measured mathematically, that does not mean it works by sending numerical symbols or using mathematical equations. Computational models of the mind are very popular with non-biologists, but they lead to fallacies such as the Turing Test (which says nothing about the machine, just the gullibility of the interviewer). I am with Searle; the Chinese Room is not conscious. See also the homunculus argument. μηδείς (talk) 03:01, 30 July 2017 (UTC)
- No, your wild supposition (presented as knowledgeable insight) is again wrong; the computational model is in fact favoured by researchers working in biopsychology, and in evolutionary psychology in particular. In fact, it was most famously popularized by some of the leading figures of that field. Not the other way around, as you suggest. And you seem to be deeply confused by the superficial use of the word "computational" here. It does not in any way reflect the notion of a digital machine processing arithmetic logic; the theory merely postulates that the mind is an interstitial phenomenon produced by the brain processing input, and input does not need to be bussed into a system via symbolic logic or arithmetic calculation--a neural net is fully consistent with that theory (again, a very popular theory with researchers working on the biophysical side of things and less popular (traditionally) with those working from the analytic end of the spectrum, not the other way around). And...it has absolutely nothing at all to do with Turing tests...
- Also, you were the one who raised the notion that the brain is "non-mathematical" which is a more or less nonsensical statement. I merely pointed out that it is as "mathematical" as any other physical system. Nobody here has suggested that it arises from arithmetic or symbolic logic, or that it is digital. Those are straw man arguments you have brought into the argument yourself, but in each case, the assertion itself does not support your larger conclusions. Snow let's rap 06:14, 30 July 2017 (UTC)
- We are well afield here, and neither of you can claim scientific justification. I think it is worth noting, however, that if a finite state machine can be conscious, and if all consciousnesses are finite state machines, then every possible sensation of every possible consciousness could be expressed as a program + data (not that object oriented programming even distinguishes these any more). Since any given computer memory state is essentially a large number, written in a special notation onto semiconductor circuits but conceivably encodable in many media with the same effect, it would seem that whatever conscious sensation a person has at this moment is in fact a number. Since all numbers exist, all possible patterns of sensation exist. Any "reality" these sensations would have would seem to be a matter of some function connecting one number to the next, which defines physical law(s) or memory or history or something. But what determines whether the function "really" exists? Do we now refer back and say that only matter really exists, and the function is only real because it is somehow encoded in matter? And the physical laws are defined not by your choice of function but by what function is chosen by a universe that can only be perceived as "real" because ... the numbers exist? There is something very circular about that; it seems like such a model practically demands that solipsism is the only reality.
- My preference, therefore, is to suppose that consciousness is not a finite state machine; my hypothesis for why is that it does not follow laws of causality; and the experimental evidence, however difficult it is to research, share, or prove, is that precognition is a genuine phenomenon, one which underlies all paranormal phenomena, including qualia and free will. I think that the advance knowledge of the immutable future causes paradoxes, and these paradoxes can either project into the future (free will) or into the past (qualia), acting as boundary conditions from which the specific content of the cosmos derives. Wnt (talk) 19:55, 30 July 2017 (UTC)
Unidentified seashore life (San Diego)
I saw this odd pink seaweed and these tiny shells on a kelp float at the beach yesterday. I haven't gotten a response on iNaturalist; does anyone here know what they are? 2602:306:321B:5970:D9:F312:FC8A:C37C (talk) 13:50, 26 July 2017 (UTC)
- I'm not 100% certain, but I think the tiny shells are a type of Spirorbis worm. Even less certain about the "pink seaweed", but I suspect it's actually a piece of a coral colony, perhaps Leptogorgia chilensis? ---Sluzzelin talk
- (OP) Spirorbis looks correct for the first one, and I looked up relatives of the coral you suggested and it seems plausible. Thanks! 2602:306:321B:5970:F947:D46C:8DE9:5D7F (talk) 17:13, 26 July 2017 (UTC)
Novice magnifier question
I've noticed that the more powerful a magnifying glass is, the smaller the lens and the closer to the subject the lens has to be. Is there any way to overcome this? I need a free-standing 10x magnifier similar to this but with a wider field of view and an increased distance from the subject, from 5 cm tall to 15 cm tall. Basically, imagine the picture I linked but scaled three times bigger. I can't seem to find anything available with these specs. Is it impossible?
- Hopefully a better answer is on the way, but I think it might be possible with more than one lens. Loupes for medical professionals allow higher magnification at an increased focal distance compared to any single lens I could find, and I think that's enabled by the use of compound lenses. I'll be interested in the answers to your question also. --129.215.47.59 (talk) 16:36, 26 July 2017 (UTC)
- There is no way. Magnifying glasses are typically placed at about their focal distance from the object. For 10x magnification the focal distance is 2.5 cm. Ruslik_Zero 16:44, 26 July 2017 (UTC)
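- To spell out the arithmetic behind that: a simple magnifier's power is conventionally quoted as the 25 cm near-point distance divided by the focal length, and the subject has to sit at roughly one focal length from the lens. A minimal sketch under that convention:

```python
# Simple-magnifier rule of thumb: power M ≈ near point (25 cm) / focal length,
# and the subject sits at roughly one focal length from the lens.
NEAR_POINT_CM = 25.0

def focal_length_cm(magnification):
    """Approximate focal length (cm) of a simple magnifier of the given power."""
    return NEAR_POINT_CM / magnification

for power in (2, 5, 10):
    f = focal_length_cm(power)
    print(f"{power}x magnifier -> focal length ≈ {f:.1f} cm (working distance about the same)")
```

So 10x implies a focal length, and hence a working distance, of about 2.5 cm, which is in line with Ruslik's point that a single simple lens can't give 10x at a 15 cm standoff.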
- What you really need is something like an operating microscope or jewelry microscope -- but they aren't cheap. Looie496 (talk) 18:17, 26 July 2017 (UTC)
- I like QuickTest in the UK (http://quicktest.co.uk) as an honest vendor of magnifiers. They sell a range, at a range of prices, and they're refreshingly open about the limitations of what you're buying at the cheap end - you have to love a vendor with a section called Do Not Buy. Some of their background articles are worth reading.
- Eye relief is important for some tasks. It's easy to design a high magnification magnifier at the cost of shortening this eye relief, much harder to do so but still leave long relief. Andy Dingley (talk) 20:36, 26 July 2017 (UTC)
- Forget about lenses - try a high resolution camera and a screen. Wymspen (talk) 20:48, 26 July 2017 (UTC)
- If you need really high magnification, that can be a very good approach. Andy Dingley (talk) 21:05, 26 July 2017 (UTC)
- You might consider a Fresnel lens. I'm not sure if it can do what you want, but they are quite different from simple lenses, so it may be possible. Note, however, that this type of lens does cause some distortion due to the sharp discontinuities on the lens. As you can see, the large size of the lens and distance from the subject may be in the range you want, but 10x might look rather distorted with a Fresnel (the example shown seems to be about 2x). They typically magnify less than that, except for when a blurry image is acceptable, as in magnifying a light. StuRat (talk) 23:54, 26 July 2017 (UTC)
- I'll go with Wymspen's advice, or, if that doesn't suit you, try to find some dusty Overhead projector (they use a Fresnel lens, as suggested by StuRat). Gem fr (talk) 00:11, 27 July 2017 (UTC)
- Overhead projector lenses are there as condensers, not imaging lenses. They can be used as magnifiers, but they tend to have very coarse lines and so give a lot of distortion. Andy Dingley (talk) 09:26, 27 July 2017 (UTC)
Another mysterious San Diego sea creature
What is this transparent, tubular, gelatinous thing? I think it's a pyrosome but I'm not certain. 2602:306:321B:5970:E590:A039:83E2:D9A9 (talk) 20:46, 26 July 2017 (UTC)
Figured this out myself: it's a Corolla spectabilis pseudoconch.
- If you're sure that's correct, would you care to upload/release it to Commons? We don't seem to have a reasonable picture ourselves. Matt Deres (talk) 14:51, 29 July 2017 (UTC)