
Wikipedia:Reference desk/Archives/Science/2017 October 30

Science desk


October 30

Locomotive efficiency

What is the highest thermal efficiency ever achieved by a steam locomotive? I know their typical efficiency is only 5-10%, yet I've read somewhere that some of them (notably some of the later French de Glehn compounds) achieved an efficiency of up to 25% (at the cost of greatly complicating both operation and maintenance) -- is that true? 2601:646:8E01:7E0B:65AB:303D:F2EB:232 (talk) 06:16, 30 October 2017 (UTC)[reply]

Locomotive engines in particular face severe practical limitations that reduce their efficiency. Even battleships had to make some compromises in their reciprocating steam engines; for example, few if any used quadruple-expansion engines. Even so, the efficiency of their massive engines maxed out at 13% (derived from "Ship Form, Resistance and Screw Propulsion" by GS Baker, published in 1920). So how could the necessarily compromised railway locomotives get double that? Possibly by going to a steam turbine, but that incurs losses in the transmission. I suggest you challenge the 25% figure; it doesn't pass the sniff test. Greglocock (talk) 06:44, 30 October 2017 (UTC)[reply]
Coincidentally, the highest efficiency I can find for an André Chapelon design is 12%. Greglocock (talk) 06:55, 30 October 2017 (UTC)[reply]
And 12–13% for Argentina: https://static1.squarespace.com/static/55e5ef3fe4b0d3b9ddaa5954/t/55e637bee4b0bef289260255/1441150910433/%23+DOMS-2_PORTA_Argentina.pdf Greglocock (talk) 07:09, 30 October 2017 (UTC)[reply]
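(As a sanity check on these percentages: thermal efficiency is just useful work out divided by the fuel's chemical energy in. A minimal Python sketch follows; the power, firing rate and coal energy figures are round illustrative assumptions, not measurements of any particular engine, and they land squarely in the usual 5-10% band.)

```python
# Thermal efficiency = useful work out / chemical energy of the fuel in.
# All figures below are illustrative assumptions, not measured data.

def thermal_efficiency(drawbar_kw: float, coal_kg_per_h: float,
                       coal_mj_per_kg: float = 30.0) -> float:
    """Fraction of the coal's energy delivered as drawbar work."""
    fuel_kw = coal_kg_per_h / 3600.0 * coal_mj_per_kg * 1e3  # kg/s * MJ/kg -> kW
    return drawbar_kw / fuel_kw

# A large express engine: ~1500 kW at the drawbar, burning ~2 t of coal/hour.
print(f"{thermal_efficiency(1500, 2000):.1%}")  # -> 9.0%, the usual 5-10% band
```

On these assumptions, a 25% figure would need roughly 4,200 kW at the drawbar from the same firing rate, or a quarter of the coal for the same work, which hints at why the claim deserves scrutiny.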
  • The most efficient weren't the French de Glehns, but rather Chapelon's larger 8-coupled locos, the 242 A 1 and 240P. Another couple of engineers worth looking at are L.D. Porta in Argentina and David Wardale, of Red Devil fame, in South Africa. Much of this later work wasn't about thermal efficiency so much as improved mechanics (the developing technology of the time was offering useful developments here, such as roller bearings) and more efficient combustion with worse fuels, Porta's Gas Producer Combustion System in particular. Koopmans' The Fire Burns Much Better (ISBN 1909358053) is an important text in this field.
A steam locomotive is first of all a locomotive: it has to move itself, and it has to fit through the railway loading gauge. This has always been a limitation on their performance and on the sophistication possible; as a result they always lagged behind marine and stationary engine practice. High boiler pressures, steam turbines, condensing and even superheating either didn't appear on locomotives at all, or appeared only later and with less success. Turbines in particular were a notable failure, owing to the lack of a successful high-pressure water-tube boiler; the only one that was adequately reliable was the Turbomotive, the least technically adventurous of them. Andy Dingley (talk) 11:01, 30 October 2017 (UTC)[reply]
I found a project to rebuild a 1956 locomotive, ATSF 3463, to run on torrefied biomass. The modernised boiler arrangement is claimed to double the original thermal efficiency, although there is much scepticism. [1] Alansplodge (talk) 17:16, 30 October 2017 (UTC)[reply]
Whatever the peak efficiency was, it was likely "topped" by the DR 18 201. Unfortunately for you, it was not common in steam locomotive practice to measure efficiency in "%". Nevertheless, I doubt classical steam engines can manage more than 10%, because they use only the pressure part of the total thermal energy, and even that rather inefficiently. Steam turbine systems in power stations can reach up to 45%, but these are huge, stationary cyclic systems with build investments of up to a billion dollars, so it is safe to say they are all state of the art in peak efficiency at the time they are built. --Kharon (talk) 01:30, 31 October 2017 (UTC)[reply]
  • A rather out-of-sequence one-off loco, not particularly fast (182.4 km/h or 113.3 mph is not exceptional for steam locos), and designed in its day just to be fast enough to test new coaching stock (see the MÁV Class 242 too). It's notable today for having been preserved, not for its speed. There's no indication that it was ever especially efficient. Andy Dingley (talk) 11:28, 31 October 2017 (UTC)[reply]
"they only use the pressure part of the total thermal energy" Also you're wrong there too. There wasn't a huge amount of attention paid to this, but some designers (notably Chapelon and Stumpf with his uniflow designs) did do so. Andy Dingley (talk) 00:17, 2 November 2017 (UTC)[reply]
[un-indent] Thanks, all! No wonder diesels replaced steam trains so quickly... 2601:646:8E01:7E0B:B9F4:7CD7:EC0A:69F7 (talk) 04:18, 3 November 2017 (UTC)[reply]
Mostly that wasn't the reason. Energy was cheap, so efficiency wasn't a major driving factor. However, after WWI staff costs became very expensive, so they were the main driver. It takes two people on the footplate of a steam engine (affordable), but also a lot more of them in the engine shed, and above all the cost of the two footplate crew has to be paid over a long, long working day even when the loco only does useful billable work for a small part of it. Locos have to be lit up and warmed through long before they're ready, then cleaned and oiled (usually by junior staff). At the end of the day there's another hour or two's work to put the engine away. If the service is to take a train down Thomas' branch line and wait for the afternoon return train at the far end, that's still time when the fireman has to look after a hot engine in steam (burning a small quantity of fuel all the while). Then the loco takes a day off once a fortnight or month for a boiler washout.
Not much could be done about this in the 1920s, but after 1930, with the availability of the practical diesel engine in small locomotive sizes, it becomes possible to start replacing many small, intermittently used engines with diesels. Small shunters are workable, as are light railcars for low-traffic lines. Only the USA really tries to build many big diesel locos at this time, though.
After WWII, crew costs are very expensive, many railways are in ruins, and it is both economically attractive to avoid steam and, with the rebuilding, an opportunity to do so. So some countries strong in diesel knowledge (the US, Germany) go for diesels; others (France) choose electricity. The UK unusually sticks with coal for another decade or so, because coal is native and cheap while oil is imported and expensive. But even then, it's the large locos that stay with steam and the railcars that go to diesel. Similar economics operate in Argentina and South Africa. Andy Dingley (talk) 11:08, 3 November 2017 (UTC)[reply]

Zero Living Diet Pt2

I was intrigued by the question above. Unless I missed some nuance in the question, my immediate thought was "milk and honey". Neither of these has ever "lived". Sure, they were produced by living creatures, but they in themselves are not considered alive. Would this fit the OP's question? 185.217.68.208 (talk) 07:12, 30 October 2017 (UTC)[reply]

You'd have to ask the OP (193.64.221.25 (talk · contribs)) that question. ←Baseball Bugs What's up, Doc? carrots 10:31, 30 October 2017 (UTC)[reply]
Both of those things are made from living things.
Honey is made from nectar (with some pollen mixed in). Nectar and pollen certainly came from living plants.
Milk is less obvious, but it must ultimately come from whatever the cow ate. (Probably grass? Or corn?)
Of course, as Bugs points out, you'd have to ask the guy who wrote the original question whether that "counts" for his purposes. ApLundell (talk) 20:50, 30 October 2017 (UTC)[reply]
Yes. It would be nice if he would come back here and Finnish. ←Baseball Bugs What's up, Doc? carrots 08:05, 31 October 2017 (UTC)[reply]

Moist sodium chloride density data

What data sources are available for the density of moist ordinary salt (sodium chloride) samples as a function of water content and perhaps porosity or void fraction? (Thanks)--82.137.11.59 (talk) 10:43, 30 October 2017 (UTC)[reply]

At Manley's Technology of Biscuits, Crackers and Cookies you can see a bulk density for granular dry salt of 1.22 to 1.32 g/mL. Using this and the density of sodium chloride crystals, 2.165 g/mL, you could work out the void space that could contain water, then work out how much weight a given amount of water would add, and so get a new density for damp salt. See Bulk density to read about issues to do with density. Graeme Bartlett (talk) 12:02, 30 October 2017 (UTC)[reply]
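(A minimal Python sketch of this back-of-envelope method, using the 2.165 g/mL crystal density and a bulk density in the quoted range; the fill fraction is an assumption, and the water is naively taken to fill the voids without dissolving any salt, the caveat raised in the next reply.)

```python
RHO_CRYSTAL = 2.165  # g/mL, density of solid NaCl
RHO_WATER = 1.0      # g/mL

def damp_salt_density(rho_bulk: float, fill_fraction: float) -> float:
    """Bulk density after `fill_fraction` of the void space between grains
    is filled with water (naively assumed not to dissolve any salt)."""
    void_fraction = 1.0 - rho_bulk / RHO_CRYSTAL
    return rho_bulk + fill_fraction * void_fraction * RHO_WATER

# Granular dry salt at 1.27 g/mL bulk density, voids half-filled with water:
print(f"{damp_salt_density(1.27, 0.5):.3f} g/mL")  # -> ~1.477
```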
It's not quite that simple, because some of the water will dissolve some of the salt and form a saturated brine. Because of this, the relationship is likely to be highly non-linear, and a raw calculation of "filling the void space with water" is unlikely to work; it would work for an insoluble solid like sand, but for salt it gets quite messy to work out by calculation. You could get a number by assuming the void space is simply filled in, but that number would bear little connection to the actual density. --Jayron32 12:12, 30 October 2017 (UTC)[reply]
Perhaps predetermining the porosity of dry solid salt with liquids like mercury would be a workable variant? Or perhaps checking the plausibility of the assumption that solid dry salt has near-zero porosity? Another aspect that should be considered, and that I had in mind when formulating the above question, is water activity in humid solid salt. I have put the question mainly to address the issue of water activity in this solid substance and to check the degree of non-ideality of the water-salt mixture as a non-ideal solution.
Considering these aspects, another question arises: how can the brine content in the possible void spaces in solid salt be determined? (Thanks)--82.137.14.216 (talk) 13:35, 30 October 2017 (UTC)[reply]
Sodium chloride, thankfully, has a relatively flat solubility curve, so the density of saturated brine is fairly constant at all temperatures from the freezing to the boiling point, at 1.202 g/mL. That may be useful. --Jayron32 15:37, 30 October 2017 (UTC)[reply]
Salt is expected to be sold dry; normally it does not absorb water from the atmosphere unless the air is very humid.[2] The bulk density depends on how the salt is handled.
Isn't the rate of change in density with respect to the proportion of water a linear relationship at constant temperature and pressure, in the special case where the solution is saturated? Should the density lie on a straight line between the density of saturated brine (at a concentration of 359 g/L) and the density of pure salt at 2.165 g/mL? It could also depend on an energy minimum: whether or not the system has an energetic preference for a certain amount of water to be incorporated. Plasmic Physics (talk) 19:23, 30 October 2017 (UTC)[reply]
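(A sketch of that linear relationship, using the 1.202 g/mL brine figure given above: for saturated brine plus undissolved salt, the density is linear in the volume fraction of solid, on the idealising assumption that the phase volumes simply add.)

```python
RHO_BRINE = 1.202  # g/mL, saturated NaCl brine (about 359 g/L of salt)
RHO_SALT = 2.165   # g/mL, solid NaCl

def mixture_density(phi_salt: float) -> float:
    """Density of saturated brine plus undissolved solid salt, assuming
    the total volume is just the sum of the two phases' volumes."""
    return phi_salt * RHO_SALT + (1.0 - phi_salt) * RHO_BRINE

for phi in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"solid volume fraction {phi:.2f}: {mixture_density(phi):.3f} g/mL")
```

Note that the linearity is in volume fraction; plotted against the mass fraction of salt, the same relationship is not a straight line.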

What's the average or median tidal range of the coast of the World Ocean?

For some reasonable definition of coastline and ocean. I always liked "where mean water level = mean sea level". Do small islands change the answer much through sheer numbers and often being far offshore where tides are smaller? Sagittarian Milky Way (talk) 14:03, 30 October 2017 (UTC)[reply]

There may be a real number for this, but I can't find anyone who has actually calculated it. I've checked several likely Google searches, and I can't find any sign that anyone has ever calculated a worldwide mean tide. The variation is highly dependent on where and when the tide is measured. Hypothetically it may be calculable; realistically, I can't find any reference to help you figure it out. --Jayron32 15:35, 30 October 2017 (UTC)[reply]
Sorry, maybe I'm just being dense. Are you asking about the average (mean) difference in height between high and low tide, based on the sample of all the coastlines in the world? The coastline paradox still comes back to bite you, right? Whatever unit you use to determine how many points to measure (every x miles or millimetres of coastline) will affect your answer, with the added bonus that the tides also affect some portion of a river's edge deeper inland (as with a tidal bore). Matt Deres (talk) 16:31, 30 October 2017 (UTC)[reply]
Right. High seas are generally 200 nautical miles from a baseline that can shortcut capes ≤ 24 nautical miles apart, provided the semicircle drawn between the capes has an area no greater than that of the actual bay; that seems a reasonable definition of coastline, but as Jayron says, any definition at all seems hard to find. Sagittarian Milky Way (talk) 14:35, 31 October 2017 (UTC)[reply]
  • far offshore where tides are smaller - I would have thought this was "obviously" wrong, because outside of areas where water flow is restricted (e.g. Gibraltar is a small passage to the Mediterranean Sea), sea level would simply follow the gravitational equipotential (IIRC, if you assume two point-like masses at the centers of the Earth and Moon, it is an ellipse). It turns out that is not the case (example: the Azores have much less tide than Lisbon at the same latitude).
This and that indicate that the tide level is a matter of forced oscillation of the water masses in the ocean basins. It is therefore unlikely that there is an easy way to compute tide height at any given location.
You could pull a database of historical tide heights at a lot of locations where they are measured and average them, hoping that this gives a good proxy for the average tidal height (it probably doesn't; for instance, ports are at places where the tide is low, and measurements are done where people are interested in having the data, i.e. at ports). I was initially hopeful of finding this in a reasonable format for free on the web, but my enthusiasm dissipated after reading this. The closest I found is [3], but that is in PDF format, probably impossible to feed to a program. TigraanClick here to contact me 18:31, 30 October 2017 (UTC)[reply]
An OCR program could pull (digitize) the tide data from the PDF tables, but the same data is more readily available, ready-digitized, at [4]. Coastlines may be drawn at Mean High Water (on maps and charts), at Mean Sea Level (on maps showing sea depth) or at Lowest Astronomical Tide (on nautical charts); see Tide#Definitions. Harmonic analysis of tides was introduced in the 1860s by William Thomson (titled "Lord Kelvin" after the river near his laboratory), who built impressive mechanical tide-predicting machines that employed ball-and-disk integrators. Harmonic analysis offers the means to subtract all the oscillatory terms of a long-term (19-year, see Metonic cycle) Fourier series analysis, leaving only the zeroth term, which corresponds to the mathematical average. Tidal prediction data thus obtained were kept secret during WW1 and WW2, which is understandable, and were then made public; but then in the USA they were removed from the public domain after the fact by SCOTUS in Golan v. Holder in 2012, a ruling on which the WMF, in collaboration with the EFF, had words to say. Blooteuth (talk) 20:15, 30 October 2017 (UTC)[reply]
As I understand it, and this seems to be supported by our article, Golan v. Holder did not itself remove anything from the public domain; it simply affirmed the removal from the public domain by the Act in question. Quite a few parties felt the removal was unconstitutional in some way, but few disputed that the Act in question claimed to do so. Note also that while a district court had initially found in favour of the constitutional claim, this had already been reversed by a circuit court before it came to the Supreme Court. Incidentally, in case there's some confusion, remember that there is a difference between something being in the public domain, which generally refers to copyright and definitely does in the court case in question, and whether something is classified/secret or public information. (Both can restrict access to information in various ways, but the manner of these restrictions is often quite different, hence they are not normally treated the same.) A point of note: while this isn't legal advice, if a work received an authorised publication in the US prior to 1923, so anything which was published during WW1, it is fairly unlikely that it is not in the public domain [5] Copyright law of the United States#Works created before 1978. Nil Einne (talk) 12:11, 31 October 2017 (UTC)[reply]
I realised I made a mistake in my above comment. The publication doesn't have to have been in the US. Also I should mention that there are some limited exceptions for non English works. Sorry for any confusion that resulted. Nil Einne (talk) 11:06, 3 November 2017 (UTC)[reply]
  • Thomson's tide-predicting machine didn't need to use the ball-and-disc integrators. They were used in the first of the two machines, the harmonic analyser. Once the constants had been determined by this, the prediction machines which then produced the various tide tables could be a lot simpler: they were largely based on slotted-yoke sine generators, a single string to add their values, and varying-diameter pulleys to adjust the magnitude of the components. Andy Dingley (talk) 15:51, 31 October 2017 (UTC)[reply]
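(A toy illustration of the harmonic-analysis idea Blooteuth describes above: fit a constant plus cosine/sine pairs at the known constituent speeds, and the constant term of the fit is the mean level. The M2 and S2 speeds below are standard values; the amplitudes, phases and mean level are invented for the example.)

```python
import numpy as np

hours = np.arange(19 * 365.25 * 24)   # ~19 years of hourly readings
w_m2 = np.deg2rad(28.9841042)         # M2 constituent speed, radians/hour
w_s2 = np.deg2rad(30.0)               # S2 constituent speed, radians/hour
Z0 = 2.5                              # true mean level in metres (invented)

# Synthetic tide record: mean level plus two constituents (amplitudes invented).
h = Z0 + 1.2 * np.cos(w_m2 * hours + 0.4) + 0.4 * np.cos(w_s2 * hours + 1.1)

# Least-squares fit of [constant, cos, sin, cos, sin]; the constant is Z0.
A = np.column_stack([np.ones_like(hours),
                     np.cos(w_m2 * hours), np.sin(w_m2 * hours),
                     np.cos(w_s2 * hours), np.sin(w_s2 * hours)])
coeffs, *_ = np.linalg.lstsq(A, h, rcond=None)
print(f"recovered mean level: {coeffs[0]:.3f} m")  # -> 2.500
```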
Note that while mean and median aren't the same mathematically, they may work out to be similar in the case of a sinusoidal wave, like the tides. See measures of central tendency. StuRat (talk) 04:16, 31 October 2017 (UTC)[reply]
Even if the tides were perfectly sinusoidal (they only approximate that), there's no reason why mean and median tidal ranges must be the same. The rare points about 40 feet above average have a disproportionate effect on the mean but little effect on the median, while points with little tide, like the Mediterranean and Gulf of Mexico, are common but can only be a handful of feet below average, because tidal range can't go below 0. It seems like one of these factors would win and make the mean and median differ. Sagittarian Milky Way (talk) 14:35, 31 October 2017 (UTC)[reply]
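(That point is easy to demonstrate numerically. In the sketch below, the distribution of tidal ranges is an invented right-skewed one, bounded below by zero with a rare large-range tail; the distribution and its parameters are assumptions purely for illustration, but the mean comes out well above the median.)

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical tidal ranges along a coastline, in metres: mostly 1-2 m,
# never below 0, with a long tail of rare extreme-range locations.
ranges = rng.lognormal(mean=0.5, sigma=0.8, size=100_000)
print(f"mean:   {ranges.mean():.2f} m")      # pulled up by the rare extremes
print(f"median: {np.median(ranges):.2f} m")  # noticeably lower than the mean
```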

Finding Voyager 1 after it goes dark

When Voyager 1 runs out of power and we develop manned space travel past our planet's orbit, how easy will it be to find Voyager 1 to study it? I realize this is sort of crystal-balling, but maybe someone has thought of it before and done some research. †dismas†|(talk) 18:34, 30 October 2017 (UTC)[reply]

Well, we know the trajectory quite precisely, but the problem is that the trans-Neptunian objects aren't comprehensively mapped out, and there may be large objects Voyager will encounter. Now, the chances of an impact are extremely small, but even the most modest gravitational deflection could have a major effect on the location over centuries. So the time period elapsed would be important in knowing how great the error will be, and we also don't know how sophisticated our scanning devices will be by then. There's also the political climate of the future to consider; that is, would they really think retrieving Voyager was a good use of taxpayer money? So yes, unfortunately, this does get into crystal-ball territory. StuRat (talk) 19:08, 30 October 2017 (UTC)[reply]
The two Voyager missions will start the process of shutting down according to the schedule here. According to that webpage (from NASA), in 2020-2021 NASA will begin powering down various science experiments on the probes to conserve electrical power, and all science experiments will cease by 2025; however, NASA will still receive telemetry data from the probes until about 2036, when all power to the probes will fail completely. --Jayron32 19:19, 30 October 2017 (UTC)[reply]
Also, you mentioned manned space travel, but such a task would be far better suited to an unmanned spacecraft. That is, unless we develop some way to get there much faster, such a mission would take years, and a human would need food, water, air, heat, etc., for all that time. (Considering that they've been flying away from Earth for over 40 years now, even if we had some way to travel 10 times as fast, it would take over 4 years to catch up, because the probe keeps moving in the meantime; add the time to locate and retrieve the ship and the return trip, and we're talking about some 9 years.) StuRat (talk) 19:31, 30 October 2017 (UTC)[reply]
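(That timeline can be written as a simple pursuit calculation; the tenfold speed advantage and the constant speeds are hypothetical assumptions from the comment above. A Python sketch:)

```python
# Chase craft pursuing Voyager: the probe has a 40-year head start at
# speed v, the chaser flies at 10*v. All speeds assumed constant.
head_start_years = 40
speed_ratio = 10

t_catch = head_start_years / (speed_ratio - 1)  # gap closes at 9*v
d_catch = speed_ratio * t_catch                 # rendezvous distance, in v-years
t_return = d_catch / speed_ratio                # fly straight home at 10*v
print(f"outbound ~{t_catch:.1f} yr + return ~{t_return:.1f} yr "
      f"= ~{t_catch + t_return:.0f} yr total")  # ~8.9 years, i.e. "some 9"
```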
There was an XKCD "What If" column about retrieving Voyager. [6] Unfortunately, it's one of the early ones that's not really referenced. But you can usually trust Randall Munroe's math. ApLundell (talk) 20:29, 30 October 2017 (UTC)[reply]
Excellent info. But what if a chemical rocket with planetary gravity assists was used to get part of the way there with fast acceleration, then jettisoned, with an ion engine used for the rest? Hopefully that combo would cut the time down somewhat. StuRat (talk) 21:06, 30 October 2017 (UTC)[reply]
If we don't have to wait until it goes dark then I think two missions would be most realistic. Mission one catches up to Voyager 1 when it's still transmitting, grabs hold of it, and brings enough reliable resources to keep transmitting for far longer. Mission two launches when rocket technology is good enough for a return mission. PrimeHunter (talk) 21:14, 30 October 2017 (UTC)[reply]
This is no "crystal balling" but plain madness! Are you aware of the pricetags of such projects? In case you are a multi-billionair dump you money where you like, else get sober and think again about asking this. --Kharon (talk) 02:07, 31 October 2017 (UTC)[reply]
Actually cost may be less of a factor than you would think, if the cost of space travel starts to steadily decrease. Give it a century, and such a thing might be financially feasible. In particular, at some point I would expect space probes to start being mass produced, and no longer cost billions each. StuRat (talk) 04:20, 31 October 2017 (UTC)[reply]
Even so, such an exercise would be pretty pointless. I'm struggling to think of anything that could be usefully gained from physically studying Voyager 1.--Shantavira|feed me 07:55, 31 October 2017 (UTC)[reply]
The effects on technology of long-term exposure to interstellar space? If the return cost becomes low enough, then you might want that before launching more expensive interstellar missions, but it would probably be easier to extrapolate from short-term exposure. Future space historians may also be interested. It would make a nice museum exhibit. PrimeHunter (talk) 10:59, 31 October 2017 (UTC)[reply]
I think some exogenous point is needed. A typical sci-fi explanation might be that when the probe gets X distance from the system, aliens are allowed by their procedures to reveal their existence ... and tell the puny humans that if anything man-made gets Y distance away they will be annihilated. Apart from that, you're basically waiting for ultra-cheap space travel and a museum with too much money. Wnt (talk) 11:52, 31 October 2017 (UTC)[reply]
See Mary Rose and Star Trek: The Motion Picture for possibilities. Dbfirs 12:11, 31 October 2017 (UTC)[reply]

Okay, so nothing about finding it. Got it. Thanks, everyone. †dismas†|(talk) 23:41, 31 October 2017 (UTC)[reply]

Well, tracking a path and calculating a distance from a given speed and time is what is done in astronomy every day. If they can see and track an asteroid 10 million miles away and predict how far from Earth it will pass, and where and when, then they are probably capable of predicting where a probe they have piloted and tracked precisely for years will be in the future, perhaps with an error of less than 5 km. Does this answer how to find it for you? --Kharon (talk) 04:36, 1 November 2017 (UTC)[reply]

See Asteroid#Computerized methods. Many asteroids are not known about until they approach Earth and become visible. The only way then to avoid losing track of them is to keep them under constant observation. 80.5.88.70 (talk) 09:29, 1 November 2017 (UTC)[reply]

... and see n-body problem for some of the complications. Dbfirs 17:42, 1 November 2017 (UTC)[reply]
@Dismas: I am assuming that by "find it" you actually mean the challenge of just locating the craft, not the difficulties associated with a mission (manned or unmanned) to rendezvous and return with Voyager 1. If I were to reach for a Fermi estimate, I might look at the Pioneer anomaly to get an order of magnitude for the uncertainty in location. (Brief background: like Voyager 1 and 2, the Pioneer 10 and 11 probes followed long trajectories out of the Solar System, and like the Voyager craft, the Pioneer probes carried radioisotope thermoelectric generators to produce electricity. For at least three or four decades, the Pioneer probes exhibited a minuscule but unexpected acceleration, dubbed the Pioneer anomaly. Various hypotheses were offered to explain the acceleration; in the last few years it's generally been agreed that it is the result of radiation pressure: heat radiated from the thermal generator provides a weak unbalanced thrust to the craft.) The acceleration there is on the order of 10⁻⁹ m/s², roughly 0.1 km/h of velocity change per year, or about 1 000 km/year per year. If Voyager 1 experienced the same sort of acceleration, after 1 year it would be about 400 kilometers away from where we would expect it to be. After 100 years, it would be about 4 million kilometers out of place.
So then the question becomes: at what distances, and using what technology, can we detect a Voyager-sized object? I'll throw the field open here. Do you use radar? Infrared? Something else? (Thinking a bit more, the Voyager RTGs used Pu-238 fuel with a half-life of 88 years, so that source of anomalous acceleration would have faded quite a bit after the first few centuries. But that also means that detection by heat signature would be less effective as the core cools. One hand giveth, the other taketh away.) What sort of objects are going to be 'out there' in interstellar space, and how hard will it be to distinguish Voyager from them? TenOfAllTrades(talk) 01:36, 3 November 2017 (UTC) (math corrected TenOfAllTrades(talk) 11:57, 3 November 2017 (UTC))[reply]
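(The Fermi estimate above, as a sketch: take the published Pioneer anomaly acceleration and compute how far a constant unmodelled push of that size displaces a craft over time. Applying the Pioneer figure to Voyager is, of course, the speculative step.)

```python
A_ANOM = 8.74e-10      # m/s^2, published Pioneer anomaly acceleration
YEAR = 3.156e7         # seconds per year

for years in (1, 10, 100):
    t = years * YEAR
    drift_km = 0.5 * A_ANOM * t**2 / 1e3   # d = a*t^2/2, converted to km
    print(f"{years:>3} yr: ~{drift_km:,.0f} km off the predicted position")
# -> 1 yr: ~435 km; 10 yr: ~43,527 km; 100 yr: ~4,352,673 km
```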

Thank you! The last couple answers helped sate my curiosity over this question. †dismas†|(talk) 20:54, 3 November 2017 (UTC)[reply]