Wikipedia:Reference desk/Archives/Science/2015 June 15
Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
June 15
What are Strigimorphae?
Our Owl article says that owls are order Strigiformes, which in turn are members of the superorder Strigimorphae. I wanted to find out more about the Strigimorphae (e.g. what other birds are in that group), but the link just redirects to "Owl". Do we have anything about that group, and if not, does anyone know a good source of info? (I tried an internet search, but it mainly turned up Wikipedia articles for various owl species, on account of the taxobox). Iapetus (talk) 15:08, 15 June 2015 (UTC)
- Based on a quick browse of these results, it seems that all Strigiformes are owls. I could be wrong, but Google Scholar seems a good starting point. Agent of the nine (talk) 15:13, 15 June 2015 (UTC)
- (I fixed your WP:INDENT level, assuming you won't mind.) SemanticMantis (talk) 15:51, 15 June 2015 (UTC)
- (Thank you, I will remember to do that in the future.) Agent of the nine (talk) 15:56, 15 June 2015 (UTC)
- Right, so Strigidae is the family that consists of the "true owls" - Strigiformes is the name of the order that includes all owls, even the not-quite-true-owl family Tytonidae. This follows a common naming convention in taxonomy - name the order after a typical family that is the most famous/well-known. E.g. Rosa is a rose genus, Rosaceae is the rose family, and Rosales is the order that includes roses. Rosid is an even higher-level unranked clade of a ton of plants that are vaguely rose-like.
- Anyway, our article Owl is our article on Strigiformes - it describes features that are fairly common and typical for the order, though some individual species will vary in their traits and behaviors. If you look at Owl#Evolution_and_systematics - you'll see that the two families above are the only extant families in the order, and each family article above has a partial list of genera and species included in the family. That section also explains that, of the current bird orders, Strigiformes are most closely related to the Caprimulgiformes, and less so to the Falconiformes. Also note that systematics is hard, and many bird orders are currently being shuffled/redefined as findings based on molecular/genetic techniques displace earlier classifications based on morphological taxonomy - "the relationships of the Caprimulgiformes, the owls, the falcons and the accipitrid raptors are not resolved to satisfaction; currently there is an increasing trend to consider each group (with the possible exception of the accipitrids) a distinct order." If you look at Bird#Classification_of_bird_orders, you'll see that it's sourced to some research published in 2013/2014, and several previously claimed relationships have been changed to match our current understanding.
- If there's anything more specific you want to know about Strigiformes/owls, we might be able to give you more specific reference materials. SemanticMantis (talk) 15:51, 15 June 2015 (UTC)
- Oh, sorry, I misread a bit of your question. Strigimorphae includes the Strigiformes, as well as the Musophagiformes, according to this 1988 pub by Sibley [1]. However - keep in mind this group may very well be superseded/deprecated by today's ornithologists - the fact that I only get 18 results on Google Scholar for Strigimorphae [2] indicates to me that the term/grouping never got very widespread use. Again, Bird#Classification_of_bird_orders does not use the term, and the cladogram would indicate that Coraciimorphae are closer to the owls, and the Turaco/Musophagiformes are quite a bit farther away. So unless we hear from a real systematist, I think it's safe to assume that "Strigimorphae" is best avoided. Since the Sibley et al. (1988) paper is paywalled, their complete categorization of Strigimorphae is:
Categorization of Strigimorphae (from Sibley et al. 1988)
- Subfamily Criniferinae, Plantain-eaters
- Parvorder Strigida
  - Family Strigidae, Typical Owls
- Suborder Aegotheli
  - Family Aegothelidae, Owlet-nightjars
- Suborder Caprimulgi
  - Infraorder Podargides
    - Family Podargidae, Australian Frogmouths
    - Family Batrachostomidae,*** Asian Frogmouths
  - Infraorder Caprimulgides
    - Parvorder Steatornithida
      - Superfamily Steatornithoidea
        - Family Steatornithidae, Oilbird
      - Superfamily Nyctibioidea
        - Family Nyctibiidae, Potoos
    - Parvorder Caprimulgida
      - Superfamily Eurostopodoidea
        - Family Eurostopodidae,*** Eared Nightjars
      - Superfamily Caprimulgoidea
        - Family Caprimulgidae
          - Subfamily Chordeilinae, Nighthawks
          - Subfamily Caprimulginae, Nightjars
- Hope that helps, SemanticMantis (talk) 16:19, 15 June 2015 (UTC)
- Recent research, Iapetus, does not agree with some of the claims alluded to above; the Strigimorphae hypothesis is not currently accepted. The owls were traditionally believed to be closely related to the Caprimulgiformes due to anatomical similarity. More recently they have been separated from the nightjars, owlet-nightjars and frogmouths (which they resemble due to convergent evolution caused by their predacious nocturnal habits); the owls were moved closer to (or within) the Accipitriformes, while the Caprimulgiformes have been allied with the swifts and hummingbirds of the Apodiformes.
- That analysis was recently confirmed with this whole-genome analysis of the birds, with the full, peer-reviewed article downloadable in PDF form, including cladograms. It turns out the Falconiformes are a separate clade from the Accipitriformes, and are actually the sister group to the parrots and songbirds, while the owls and other birds of prey are not direct sister groups but are indeed members together of a larger clade, the Afroaves, which also includes bee-eaters and woodpeckers and other groups. Here you can go directly to the cladogram in that paper. μηδείς (talk) 17:10, 16 June 2015 (UTC)
In light of all the above, should we remove the line "Superorder: Strigimorphae" from the Owl taxobox? (Having checked a few owl species and genera, I don't see them including it.) Iapetus (talk) 08:51, 17 June 2015 (UTC)
- @Wardog: I would support that. @Medeis: Thanks for your response too. I think the only part where our responses differ is where I said "Strigiformes are most closely related to the Caprimulgiformes" - I was just paraphrasing from the article, but you are right - that closeness has now been rejected, so that part of the article should be changed too. I won't be able to work on it for a few days, but I'll be happy to consult/review if either of you want to start the updates. SemanticMantis (talk) 13:58, 17 June 2015 (UTC)
- I support the suggested changes, and would be bold about removing Strigimorphae from the infobox, but discuss the rest on the talk page first. (As a courtesy; it really shouldn't be controversial, since the view can be retained as an historical hypothesis.)
- Strigimorphae itself was made into a redirect on Jan 1 this year with the comment "no longer valid". (See edit history.) I'd have removed it myself already, but you have to click on the red pencil to edit the template, and I haven't done that before, so I don't want to fool with it myself.
- If there's a discussion which needs my comment there let me know.
- μηδείς (talk) 18:55, 17 June 2015 (UTC)
Maximizing productivity of computer programmers
Hi there. Are there any scientific findings about how to maximize the output of computer programmers? I'm not even sure how "output" might be measured. Lines of code? How many hours per week should they work, how many breaks should they get and when, should their internet use be monitored or should they be allowed to browse freely, etc.? We've all seen that famous study of maximizing the output of munitions factory workers in WW1. Does similar data exist for the programming industry?--88.81.124.1 (talk) 16:28, 15 June 2015 (UTC)
- Maybe this question would be better discussed at the Computing Reference Desk. By the way, I am not familiar with the study about munitions factory workers in World War One. Robert McClenon (talk) 16:59, 15 June 2015 (UTC)
- That type of micromanagement you are talking about is likely to make actual productivity worse, while maximizing whatever metric you are going for. For example, if you pay based on lines of code, you will find code which increments a counter by 10 by repeating a line that increments it by one, ten times. If you pay by the job, instead of by the hour, then it's in the programmer's interest to get the job done as quickly as possible, so he can get paid and go on to the next paying job. Of course, then there's always the issue of how much each job should pay. You can even add bonuses for early completion and pay less if it contains bugs. StuRat (talk) 17:07, 15 June 2015 (UTC)
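To make the metric-gaming point above concrete, here is a minimal, purely hypothetical Python sketch (not part of the discussion, just an illustration): both functions do identical work, but a pay-per-line-of-code scheme rewards the padded one ten times as much.

```python
# Hypothetical illustration of gaming a lines-of-code metric:
# both functions add 10 to a counter, but the padded version
# "produces" ten times as many lines for identical behaviour.

def increment_padded(counter):
    counter += 1
    counter += 1
    counter += 1
    counter += 1
    counter += 1
    counter += 1
    counter += 1
    counter += 1
    counter += 1
    counter += 1
    return counter

def increment_tight(counter):
    return counter + 10

assert increment_padded(0) == increment_tight(0) == 10
```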
- It is probably worth reading a few issues of a publication like Management Science to help orient the question. Operations research and management science attempt to apply scientific methodology to answer such questions. As is already evident, defining "productivity" is one of the hardest problems that must be addressed. Once that has been defined, techniques and theories can be tested to improve it.
- Consider:
- Nimur (talk) 17:10, 15 June 2015 (UTC)
- One of these papers reports a result that is apparently astonishing (to non-programmers); the very same fact is self-evident to most programmers. The use of a code generator to automatically produce code will increase software complexity, entailing large long-term costs in manpower and resources. Therefore, the use of technology to increase the number of lines of code actually reduces productivity-per-capita and increases total cost. It would seem that the best way to optimize productivity, as measured by total-cost analyses of empirical case studies, is to reduce complexity. This is the "keep it simple" principle. Nimur (talk) 19:12, 15 June 2015 (UTC)
- Agreed, and getting the programmer(s) in on the specs phase is critical here, to prevent specs being written that require unnecessarily complex code. StuRat (talk) 19:21, 15 June 2015 (UTC)
- I say this with tongue in cheek, but it ever more appears to be true: to make rich people (bankers etc.) more productive, one has to pay them more; to make poor people (programmers etc.) more productive, one has to find ways to pay them less. Does this match up with anybody's personal experience?--Aspro (talk) 19:50, 15 June 2015 (UTC)
- The difficulty is that measuring productivity is nearly impossible. Any kind of metric you can come up with will fail to capture the actual amount of work that gets done. Keystrokes, lines of code, hours worked, "modules" completed, quality-assurance bug tracking data...not one of those things captures what's being done. If you can't measure productivity, how can you know whether one practice or another is successfully maximizing it?
- Speed of generating lines of code may just mean that the person is writing grossly redundant code rather than tight, efficient code - so you DEFINITELY don't want to use lines-of-code-written as a productivity metric! There is also a problem with fast-but-sloppy programmers who generate a mountain of code - but also a mountain of bugs. These people can be disastrous to productivity because a bug can easily take a hundred times longer to track down than the original code took to write. But counting the number of bugs generated and/or fixed doesn't work either, because some bugs take seconds to fix, and others may need weeks of careful detective work to nail. Counting "modules" was popular at one place I worked - but the modules varied from 10 lines of code to 1000 lines - so that's a complete bust. Worse still, someone can easily work for a month on an existing chunk of code and find simplifications that actually REDUCE the total lines of code, reduce the number of modules and cause a stack of bugs to disappear without anyone ever consciously fixing them.
- You can measure whether (say) a bricklayer is working well by counting the number of bricks they lay in an 8 hour shift - and applying simple quality metrics like how straight the wall is to degrade that number. Armed with that number you can say "this bricklayer is 20% more productive than that bricklayer" with a high degree of confidence. Knowing that, you can look at what's different between your ten best bricklayers and your ten worst bricklayers and draw some reasonable conclusions.
- But with programmers, you can't measure any definite quantities - and you can't tell within a factor of ten or more whether one programmer is faster than another.
- In the end, we're left with 'soft' metrics - peer reviews, annual appraisals, that kind of thing.
- That said, one approach that I've used that is moderately effective is "Planning poker" - used as a part of the "Scrum" approach. The idea is that programmers are organized into small teams of at most a dozen people. Every couple of weeks, they get together to list out the tasks that need to be done next - these are called "stories". Then they go through each task in turn...the person who's most likely to be working on it presents a short description of the task, and each team member secretly picks a highly abstract "story points" score (an estimate of difficulty) from a deck of cards. We count "One, Two, Three!" and then we all reveal how we scored the task. Generally there is a Gaussian-like spread of opinions, and we ask the outliers how they came to a decision so far from the mean. This allows you to capture ideas for drastically shortening the task - or to dig out "gotchas" that nobody else thought of. It may take multiple rounds of this to arrive at a more-or-less-agreed "story point" score.
- Over time, you can record how long tasks of varying difficulty scores took to complete and come up with an average rate at which tasks are being completed. The "story-points-per-man-week" scores go up and down over time - but it's possible to spot trends where the team is getting demoralized, or where one team member is having difficulties (a rough code sketch of this bookkeeping follows after this reply).
- This does give management some kind of a handle on how productive people are - and I suppose that in principle, they could figure out what helps productivity and what doesn't. The downside is that it only really works when the results are averaged over large numbers of teams and years of time - and most work environments don't have that many programmers working on the same kinds of task.
- In my long experience, happy programmers are productive and unhappy programmers aren't. So, keep your team happy. Read Happiness_at_work. Remove as many annoyances and time-sinks as you can...apply general happiness-at-work approaches (giving people the 'big picture', avoiding micro-management, giving individuals as much responsibility as you can, free soda, free snacks, comfortable offices, casual dress code, meeting-free days, email-free days, that kind of thing).
- SteveBaker (talk) 22:57, 15 June 2015 (UTC)
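The planning-poker and velocity bookkeeping SteveBaker describes can be mocked up in a few lines. The following Python sketch is only an illustration under invented assumptions - the card deck, the outlier threshold, the team names and the sprint figures are all made up, not anything from the discussion. One helper flags estimates far from the team median so those people can be asked to explain their reasoning; another converts completed story points into story-points-per-week so trends can be watched across sprints.

```python
from statistics import median

# Sketch of two Scrum bookkeeping steps described in the reply above.
# The card deck, threshold, names and sprint numbers are invented.

CARDS = [1, 2, 3, 5, 8, 13, 20, 40]  # a common planning-poker deck

def flag_outliers(estimates, threshold=3):
    """Return the members whose story-point estimate is far from the
    team median; they are asked to explain before the next voting round."""
    mid = median(estimates.values())
    return {name: pts for name, pts in estimates.items()
            if abs(pts - mid) > threshold}

def velocity(sprints):
    """Given (story_points_completed, weeks) per sprint, return
    story-points-per-week so trends over time are visible."""
    return [points / weeks for points, weeks in sprints]

# One voting round: the optimist and the pessimist get asked "why?".
round1 = {"alice": 3, "bob": 5, "carol": 13, "dave": 5, "erin": 1}
print(flag_outliers(round1))   # {'carol': 13, 'erin': 1}

# Velocity over four two-week sprints; the dip in sprint 3 stands out.
print(velocity([(42, 2), (38, 2), (25, 2), (40, 2)]))  # [21.0, 19.0, 12.5, 20.0]
```

As noted in the reply above, numbers like these only become meaningful when averaged over many teams and long stretches of time.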
- SteveBaker, can I come work for you?? ;) I'm in a senior IT support team in an enterprise environment and I have this argument with management every 6 months when they try to tell me how many "tickets" I've closed or incidents I've helped avoid, sigh. The recent trend where I work, at least, is away from the soft, make-staff-happy approach and towards the "score board" stat-counting approach. Vespine (talk) 23:17, 16 June 2015 (UTC)
- I'm doing contract coding/design work these days - so I'm a solo operation. But things are particularly bad for IT people. Years ago I worked for a software group in the UK who had one "IT guy" maintaining our servers and desktop machines. He had a ton of tickets to service - he was a very busy guy. Eventually, he moved on and we got a new guy to do the job. He went in and did radical cleanup and reorganization, he automated a bunch of routine tasks, and the number of tickets he had to service fell to almost zero. He was idle for much of the day and started to learn C++ programming so he could help out with programming in his spare time. He got fired because management didn't think he was needed anymore - and within a month, we were in chaos and management went back to recruiting a new IT guy. It's a rough life. SteveBaker (talk) 23:26, 16 June 2015 (UTC)
- Agree. An experienced programmer can write quality code (rather than what we used to call spaghetti). However, I know of no metric that can measure quality – other than the months and months spent on debugging poor code. But try telling the Pointy-haired Boss that. Mind you, it was probably his incompetence at being productive that led him into the position of being one's boss. Oh, c'est la vie.--Aspro (talk) 23:55, 15 June 2015 (UTC)
- Like the time the boss told his team that the company had decided to pay coders for finding mistakes. The team cheered, and Wally said, "I'm going to write myself a mini-van today!" ←Baseball Bugs What's up, Doc? carrots→ 01:26, 16 June 2015 (UTC)
- The best correlates I've seen with programmer productivity are a quiet environment free of interruptions and lots of space with good lighting. Organizational factors also play a part. It is important to be able to discuss freely what is actually important, what is really needed and what the timetable is. The particular software tools used don't seem to be all that important and rigid adherence to some guru's system can lead to projects failing. Dmcq (talk) 00:01, 16 June 2015 (UTC)
- Tangentially, Steve, thanks for the link to "Scrum", which illuminates the use of the term in a Spy/Horror novel I'm currently reading! (Charlie Stross's The Rhesus Chart, if anyone's interested: it involves a team (or "wunch" as others refer to them) of bankers using this technique.) {The poster formerly known as 87.81.230.195} 212.95.237.92 (talk) 12:30, 16 June 2015 (UTC)
- Yes, being free to go home and work there. Unplug the phone, and the next thing you know the clock is telling you it's 2 o'clock in the morning and you've got done the equivalent of a week's work. --Aspro (talk) 00:12, 16 June 2015 (UTC)
- With due respect, are you not confusing "things which would make you happy" with "things that scientific study has shown to improve overall productivity"? I will readily grant that happiness is an important workplace objective and it probably has some correspondence with productivity, but the question was asking for evidence, which we should interpret to mean "reputable publication in the form of peer-reviewed scientific research."
- You might feel productive if you pull a self-directed all-nighter: but does your actual productivity stand up to scientific scrutiny, or is it merely observer bias?
- Nimur (talk) 09:45, 16 June 2015 (UTC)
- The problem (as I expounded at length above) is that you simply cannot measure productivity - so scientific scrutiny is almost impossible. I agree that it's easy to fool yourself into thinking you're being productive, then a month from now discover that you'd buried an especially insidious bug into the code while you thought you were on a roll. That said, it's clear that we all get creative streaks where everything seems to go together like clockwork - and when you look up from the screen, you've accidentally pulled a 15 hour shift. I think we mostly agree that these are productive sessions - but it is very hard to know for sure. SteveBaker (talk) 23:31, 16 June 2015 (UTC)
- Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials BMJ 2003; 327 doi: http://dx.doi.org/10.1136/bmj.327.7429.1459 (Published 18 December 2003). Some things like this are so obvious that one doesn’t need a double blind study to discount any observed bias. Period! --Aspro (talk) 14:40, 16 June 2015 (UTC)
- In other words, you believe your performance is obvious and you do not need any outside review?
- At least Steve Baker provided some kind of method. I won't pass judgement on whether it is a good method or a bad method: it is a method, and so he and his peers are on the fast track to objectively evaluating performance. They can test new policies (say, "no all-nighters allowed" or "mandatory all-nighter sessions once a week"), and then review their results via the system he described. Other teams could independently re-test to see if the results are generalizable. It is this method that distinguishes science from witch-doctory.
- But it seems User:Aspro has unilaterally decided, and therefore no evidence is requisite! To foment the discussion, he presents a joke article advocating the use of less method and more subjectivity in a completely different problem-space. Whether we believe evidence-based medicine is a good thing - or whether we believe parachutes offer great safety tradeoffs (a debate which is hardly obvious in aviation circles - consider, if you will, that commercial airliners opt not to emplace parachutes on board, and have safer records by most estimates; if you actually read the citations in the very well-referenced journal article you linked, you might see that this is a fascinating research problem with very high stakes if you reach a wrong conclusion by way of gut instinct! Another day, I will regale you with a tale of my first flight in an SR22, the aircraft most famous as the first commercial success of the whole-airplane parachute system. It is difficult to feel safe when Item 1 on the preflight checklist is to arm the on-board explosive, an item enumerated literally as "1. UNSAFE PARACHUTE"...) - all of this has no bearing on our standards for empirical research on worker productivity.
- Nimur (talk) 16:16, 16 June 2015 (UTC)
- I think other editors will have grasped my point, in the context of my other posts, in answer to yours.--Aspro (talk) 14:39, 17 June 2015 (UTC)
- There have been a number of studies and books on the subject, one of the earliest with solid figures is Peopleware: Productive Projects and Teams and it is still a good read. Dmcq (talk) 22:10, 16 June 2015 (UTC)