Wikipedia:Reference desk/Archives/Computing/2016 March 24

Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


March 24

Browser extensions/add-ons/plug-ins

Are browser plug-ins/add-ons/extensions basically just like web pages? That is, JavaScript plus XML (or another XML-like markup language), plus graphical elements, right?--Scicurious (talk) 01:13, 24 March 2016 (UTC)[reply]

The answer is "it depends". Really you're going to have to look at each individual browser engine you're targeting for details. A source of confusion is that the terminology is not standardized. "Plug-in", "add-on", and "extension" are often used interchangeably in everyday speech, but they can mean different things "under the hood". To illustrate this, let's look at the Mozilla ecosystem. In Mozilla Land, "add-ons" is the generic term for anything that modifies the application. "Extensions" are a subset of "add-ons": they are built from JavaScript and a markup language, and they are the things you can download from addons.mozilla.org. "Plugins" are shared libraries that get loaded into the application; these are things like the Adobe Flash plugin. But confusingly, there are also "search engine plugins" that are just XML documents. --71.110.8.102 (talk) 03:52, 24 March 2016 (UTC)[reply]
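To make the "JavaScript plus markup" point concrete, here is a minimal sketch of a content script in the WebExtension style used by Mozilla and Chrome. The file name, the manifest keys mentioned in the comments, and the banner text are illustrative assumptions, not something described in the replies above.

  // content.ts - compiled to content.js and referenced from the extension's
  // manifest.json, a declarative JSON file that lists this script under
  // "content_scripts" together with a "matches" URL pattern.
  // The script below simply injects a banner into every page it runs on.
  const banner: HTMLDivElement = document.createElement("div");
  banner.textContent = "Hello from an example extension"; // invented text, for illustration only
  banner.style.cssText = "position:fixed;top:0;left:0;background:#ffc;padding:4px;z-index:99999;";
  document.body.appendChild(banner);

So an extension of this kind really is built from the same ingredients as a web page: script, markup-like declarative files, and ordinary DOM/graphical elements.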
You might find our article on Browser extensions of some help... Vespine (talk) 04:19, 24 March 2016 (UTC)[reply]

Is it possible to measure which OS is more stable?

Or is it all an impressionistic measure? For example, some people have the impression that their Windows XP (of 10 years ago) was more stable than Ubuntu is today. But given that we are doing different things, using different programs, with different amounts of data, different drivers, and on different machines, can we measure stability at all? --Llaanngg (talk) 22:52, 24 March 2016 (UTC)[reply]

In its broadest sense, to me, the stability of any system is its ability to resist or recover from any input disturbance. So, to compare the stability of one system against others, one must hit each system with the same disturbances and see which system is least affected. The real problem, though, is probably choosing which disturbances to use in your testing. A system that can survive common disturbances may still crash on one uncommon disturbance that it was not designed to resist.--178.111.96.35 (talk) 01:28, 25 March 2016 (UTC)[reply]
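To make that comparison idea concrete, here is a small TypeScript sketch of such a test harness. Everything in it (the type names, the pass/fail notion, the scoring) is an invented illustration of the approach described above, not an established benchmark.

  // Hypothetical harness: apply the same list of disturbances to each system
  // under test and score how many it survives.
  type Disturbance = () => void;              // an action that stresses the system
  type SystemUnderTest = {
    name: string;
    apply: (d: Disturbance) => boolean;       // true if the system stays healthy afterwards
  };

  function survivalRate(sys: SystemUnderTest, disturbances: Disturbance[]): number {
    let survived = 0;
    for (const d of disturbances) {
      if (sys.apply(d)) survived++;
    }
    return survived / disturbances.length;
  }

  // The "most stable" system is the one least affected by the shared disturbance set.
  function mostStable(systems: SystemUnderTest[], disturbances: Disturbance[]): SystemUnderTest {
    return systems.reduce((best, s) =>
      survivalRate(s, disturbances) > survivalRate(best, disturbances) ? s : best);
  }

The hard part, as noted above, is choosing a disturbance set that is actually representative of real use.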
How about 16 years of uptime?[1] Or look at Voyager 1 and 2, which were launched in 1977 and are expected to remain operational until approximately 2025. And, of course, embedded systems that never crash are fairly common; when was the last time you saw a Casio solar watch crash? Or a 5ESS switch? Other than things like botched updates or the building burning to the ground, I don't think anyone has ever seen a 5ESS crash. I have designed embedded systems that do a hard reset 60 times a second, at which point they do the job I designed them to do and then go to sleep until the next reset. The entire concept of "crashing" doesn't apply to a system like that. --Guy Macon (talk) 05:49, 25 March 2016 (UTC)[reply]
It is possible to measure just about anything if you can define how to measure it, so you must first define what "stable" means. In computer science, "stable" tends to refer to the stability of algorithms, and even then it can mean different things. If I say that a statistics algorithm is stable, I mean that it isn't overly influenced by outliers. If I say that a sorting algorithm is stable, I mean that it preserves the relative order of items that compare equal, so sorting doesn't scramble records that share the same key.
You appear to be using "stable" to mean "keeps functioning", which is an engineering definition of stability. However, I've personally never seen "stability" used that way in computer engineering; I assume that is because it would be confusing in a field so closely related to computer science. The common measure I have seen is mean time between failures (MTBF): in actual operation, how long does the system run between failures? That, however, tends to refer to catastrophic failures, and I think you are asking about minor failures that bring down software rather than hardware, so MTBF would be the wrong measure.
Overall, I think you can stick with the common "uptime" measure. That is the percentage of time that an active server can be considered to be "up". It should never be 100%, because there has to be some period in which the operating system itself is upgraded, although even that window is shrinking fast as operating systems gain the ability to update in place while running. The problem you will run into with uptime is that it also reflects the environment: what if the power in the server room fails, or the network fails and the server is unreachable? It also doesn't take the software into account: if I have a database server with 99% uptime, but the database program tends to crash a few times every day, then from my point of view the server has an uptime of about 8 hours before I have to restart something.
Sorry for the long-winded non-answer, but your question doesn't make it possible to give an informed, correct answer and I don't want to spout out opinions or anecdotes. 209.149.114.215 (talk) 15:22, 25 March 2016 (UTC)[reply]
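For what it is worth, both measures mentioned above are straightforward to compute once failures are logged. The sketch below is only an illustration; the timestamps and window length are invented sample data.

  // Mean time between failures: average operating time between consecutive failures.
  function mtbfHours(failureTimes: Date[]): number {
    const intervalsMs = failureTimes
      .slice(1)
      .map((t, i) => t.getTime() - failureTimes[i].getTime());
    return intervalsMs.reduce((a, b) => a + b, 0) / intervalsMs.length / (1000 * 60 * 60);
  }

  // Uptime: percentage of a measurement window during which the service was up.
  function uptimePercent(windowHours: number, downtimeHours: number): number {
    return 100 * (windowHours - downtimeHours) / windowHours;
  }

  // Made-up sample data for illustration.
  const failures = [new Date("2016-01-01"), new Date("2016-02-15"), new Date("2016-03-20")];
  console.log(mtbfHours(failures).toFixed(1), "hours between failures");
  console.log(uptimePercent(24 * 90, 2).toFixed(3), "% uptime over a 90-day window");

As the answer above points out, neither number by itself distinguishes an operating-system fault from a power cut or an application crash; you still have to decide what counts as a failure.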
I find the term "stable" to be inapt and subject to a lot of abuse. Many people use the word "stable" to mean software that is free of bugs. That seems incorrect: if that is what you mean, a better phrase is simply "free of bugs."
In the dictionaries that I use, "stable" implies that something does not change. That is an entirely orthogonal property of software from being free of bugs.
I work on a large number of software projects, including several free and non-free operating systems, that turn out nightly builds. I'll frequently get an early-morning phone call: is today's build "stable"?
Well, how can it be? It changed since last night!
Even worse: "is today's build buggy?" System software in this decade (2016) contains hundreds of billions of lines of program code. (For example, consider just the Linux kernel, which is itself only a tiny portion of one free operating system.) In any system with hundreds of billions of lines of program code, there are probably hundreds of millions of bugs. Many of those bugs are entirely irrelevant to what you need to do, and they will have no impact on you. Most human brains do not seem designed to comprehend the abject vastness of this complexity space.
I find that engineers and programmers communicate much more productively when they evict the word "stable" from their vocabulary. Software should be described as "free of relevant bugs"; if a bug exists, it ought to be ticketed immediately so that it can be described precisely. If you aren't sure how to describe a problem, that's fine: you can still file a bug report that says, for example, "Software A intermittently experienced Problem B while I was doing Task C." The developers may not be able to fix Problem B yet, but your report does not exist in isolation: it can help identify the statistics of systemic, difficult-to-reproduce problems. When an issue is blocking you, say "Bug #X prevents me from doing Task Y on Software Version Z."
Such precise descriptions are much more useful than vague statements about which software is "stable" or "buggy."
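One hypothetical way to capture "Bug #X prevents me from doing Task Y on Software Version Z" as a structured record, with field names invented purely for illustration:

  // Hypothetical shape of a precise bug report, following the advice above.
  interface BugReport {
    id: number;            // Bug #X
    software: string;      // which program exhibited the problem
    version: string;       // Software Version Z
    task: string;          // Task Y the reporter was trying to perform
    observed: string;      // what actually happened (Problem B)
    reproducible: boolean; // intermittent reports are still valuable
  }

  const example: BugReport = {
    id: 1234,              // invented number, for illustration only
    software: "Software A",
    version: "Version Z",
    task: "Task Y",
    observed: "intermittently experienced Problem B while doing Task C",
    reproducible: false,
  };

Even a loosely filled-in record like this is more useful than the word "buggy", because it can be aggregated with other reports.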
Here's a fantastic essay by Simon Tatham, author of PuTTY: How to Report Bugs Effectively.
Nimur (talk) 16:35, 25 March 2016 (UTC)[reply]
Regarding "hundreds of billions of lines of program code": Tiny Core Linux has a total size, for the entire OS, of 11 MB (16 MB if you want a GUI), and appears to be very crash-resistant. --Guy Macon (talk) 16:51, 25 March 2016 (UTC)[reply]
You're just distinguishing between stability of the code base and stability of operation (or something like that). That's not abuse of terminology, just good old polysemy. Of course "stable" can mean lots of different things; just look at our disambiguation page for stability.
I think IP 178 has a very good answer: it gives a rather general notion of stability, and correctly points out that we'd have to standardize the perturbation/disturbance, which is itself a tricky problem. Some ideas on this are discussed at stress testing, and the article on stress testing (software) covers some of what the OP is looking for. There are at least five different notions of stability just for solutions to ordinary differential equations, and many others for different sorts of physical and informational concepts. That doesn't mean I'm wrong to say that, e.g., NetHack or TeX has a very stable code base.
I don't think I've heard anyone use "stable" to mean "free of bugs", but I do hear "stable" used to mean "the program rarely crashes". I think this is the sense the OP means, and it is the sense commonly used for OSs. It is related to "free of changes" or "resists/recovers from disturbance", but rather than asking for a single state that doesn't change or is always returned to, we need a region of state or phase space that defines "normal operation". It is true that this is very hard to codify and quantify carefully, but that doesn't mean it doesn't exist.
The OP may also consider that any OS is inherently very unstable, in the sense that an OS stuck in one state or configuration would be useless. The fact that my computer does all sorts of different things when I do different things is a feature. This is all just to say that instability is also crucial for control systems: without some instability (in the strict, technical sense of equilibria), we can't get a system to do anything. SemanticMantis (talk) 18:15, 25 March 2016 (UTC)[reply]
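One way to picture the "region of state or phase space that defines normal operation" idea is as a predicate over observed system metrics. The metrics and thresholds below are invented for illustration only; any real definition would have to be negotiated for the system at hand.

  // Illustrative only: "normal operation" as a region of state space,
  // with made-up metrics and thresholds.
  type SystemState = { load: number; freeMemoryMB: number; errorRate: number };

  function inNormalRegion(s: SystemState): boolean {
    return s.load < 0.95 && s.freeMemoryMB > 100 && s.errorRate < 0.01;
  }

  // A run is judged "stable" if every observed sample stays inside the region;
  // a more lenient test would only require returning to the region after a disturbance.
  function stableOverRun(samples: SystemState[]): boolean {
    return samples.every(inNormalRegion);
  }

Quantifying stability then becomes a matter of logging states during a standardized stress test and checking how often, and for how long, the system leaves that region.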