Wikipedia:Reference desk/Archives/Computing/2016 May 5
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
May 5
Online bitcoin betting site other than bitbet.us
Dear Wikipedians:
Does anyone know of a bitcoin betting site other than bitbet.us?
I really want to try my luck betting on Donald Trump becoming the U.S. President.
Thanks,
69.158.76.23 (talk) 00:49, 5 May 2016 (UTC)
- I would bet all these online gambling sites are rigged to cheat you hugely. If you like to bet on Trump, go find something that will gain value if Trump becomes President, like his "Make Amerika great again" baseball caps. Buy 500 and resell them when his fame peaks. --Kharon (talk) 01:14, 5 May 2016 (UTC)
- Congratulations Kharon! You have won your bet. Please give me all your personal information and bank account details to receive your fabulous prize! The Quixotic Potato (talk) 01:52, 5 May 2016 (UTC)
- [1] and [2] have reviews and information on sites which accept Bitcoin. However, I'm not sure whether all these sites actually price their bets in Bitcoin, if that's what you want, or simply accept it for conversion to USD or EUR or GBP or whatever they price their bets in. Note that I also have zero knowledge of these review and info sites. For example, they could be run by some betting site conglomerate themselves. I also suspect neither site will have info on which of the sites is accepting bets on the US presidential election, although it's possible they will have info on which sites generally accept electoral or similar bets.
For specific examples, both sites I noticed listed on [3] as offering US presidential election bets, namely Bovada and BetOnline (there may be more, I didn't look very hard), seem to accept Bitcoin, although I'm pretty sure this is only for conversion. Also, the link for US presidential election betting on Bovada doesn't work for me. But I can also see from an internet search that it did at least exist at one time. Whether they stopped accepting bets, or (perhaps more likely) it doesn't work because of my location (when I first visited, it said they can't accept bets from my location), I'm not sure. I have zero experience with either site, so you'd want to check out reviews etc.
The first review/info site I linked to also mentions ways you can attempt to check whether a site is cheating. However, it seems to me this is mostly irrelevant to you. You can compare the odds to more reliable sites like BetFair [4], Paddy Power [5] (actually these 2 seem to be the same company now), Ladbrokes sports.ladbrokes.com/en-gb/betting/politics/american/presidential-election/2016-presidential-election-winner/216136503/ or whatever [6] to see if they're giving you really bad odds, which will be the main way they can cheat you.
Next, there is a slight risk of a very weird situation, e.g. as happened in 2000, or an even more controversial one such as the electoral college voting for someone other than who they were supposed to, or even someone gaining the most electoral college votes in early November followed by a military coup and no actual electoral college vote, let alone the person being inaugurated. How the less regulated sites would handle such a situation I'm not sure, although I presume any remotely decent one will say precisely what you're betting on and when they will pay out, whether or not they actually honour it when the time comes. Ultimately this risk IMO seems small and you could call it part of your odds anyway.
The bigger risk will be whether they'll pay out in general, or pay out with different odds than promised. Many of these sites have existed for a reasonable length of time. For such simple bets (and these tend to be treated like sports bets), it seems likely people would have realised by now what's going on, and you should be able to find lots of complaints. It gets more complicated if they pay out properly on some bets but not others, but even then it seems likely that complaints you can find would have come in. So presuming you check properly, it becomes a case of their future performance differing from the current, plus the unknown but likely small risk of that happening. (I'm not sure how likely it is that they will accept so many long-odds bets on Trump as to cause them problems, since many of them seem fairly large and I somehow doubt it's going to compare to general sports betting, but I could easily be wrong.)
You could come up with other risks like "match" fixing but there's so much money in the US election that it seems very unlikely the sites could do that.
In other words, while I personally think betting on the presidential election is a dumb idea, and using sites which accept Bitcoin (many of which seem to be incorporated and regulated in environments without much oversight) is riskier still, the comment above seems IMO to overestimate the risk. The exception may be if you do want a site which prices its bets in Bitcoin (as opposed to simply accepting Bitcoin), as I'm not sure how many of these there are or how long they've existed. (Although if they were playing a long con, again I'm not sure the US presidential election or anything in between is what they'll target.)
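For what it's worth, a quick way to do that odds comparison is to convert each site's decimal odds to implied probabilities and look at the total margin ("overround"); a rough Python sketch, with made-up odds rather than real quotes from any site:

    # Rough sketch: convert decimal odds to implied probabilities and
    # measure the bookmaker's margin. The odds below are made-up examples,
    # not real quotes from any betting site.
    def implied_probability(decimal_odds):
        # Decimal odds of 4.0 pay 4 units per unit staked, i.e. ~25% implied chance.
        return 1.0 / decimal_odds

    offered_odds = {"Candidate A": 1.50, "Candidate B": 2.60}   # hypothetical market

    probs = {name: implied_probability(o) for name, o in offered_odds.items()}
    overround = sum(probs.values()) - 1.0

    for name, p in probs.items():
        print("%s: implied probability %.1f%%" % (name, 100 * p))
    print("Bookmaker margin (overround): %.1f%%" % (100 * overround))
    # A margin much larger than what the big regulated bookmakers quote for
    # the same market is one sign you are being offered bad odds.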
- I am the OP. Thanks everyone for your enthusiastic replies, especially Nil. I have decided to go with BetMoose for now. 69.158.76.23 (talk) 19:41, 5 May 2016 (UTC)
Chromebook Recovery Utility doesn't recognize model
I have an Acer Chromebook 11 (model number CB3-111 and hardware class GNAWTY C2A-E7J-Q8Q). I am trying to create a recovery SD, something that I have done numerous times before, but now the app reports that it cannot find the model. Any ideas on why this might be happening? — Melab±1 ☎ 01:26, 5 May 2016 (UTC)
- Try using the standalone executable, https://dl.google.com/dl/chromeos/recovery/chromeosimagecreatorV2.exe, or the newer version. Try using a different USB flash drive or SD card. People report that using SanDisk products causes problems; others claim that their antivirus software messed it up. The Quixotic Potato (talk) 01:46, 5 May 2016 (UTC)
List of instruction usage by various computing tasks
Where is there a list that spells out how many MIPS various tasks need, as opposed to the myriad of lists that show how many MIPS a specific CPU can muster? For example, how many MIPS does MPEG-1 Layer 3 44.1 kHz stereo decoding take, compressing a 640x480 15 bpp image to JPEG, compressing source code text using bzip2, compiling 1000 lines of C source, drawing a circle, calculating which holidays occur next year, an FFT, etc.? Bytesock (talk) 14:53, 5 May 2016 (UTC)
- The answer will depend on the specific CPU architecture (particularly if there are optimisations for the task) and the software implementation of the task. See Instructions per second, Benchmark (computing), Instructions per cycle etc. for more explanation. Nil Einne (talk) 15:18, 5 May 2016 (UTC)
- With all due respect, I think your question is malformed. An FFT can in principle be accomplished at one instruction per second, and it can be accomplished at 10^10 MIPS. Think of two students doing a Fourier transform by hand. One student might be faster than the other, but the answer is the same. The main point is that the algorithm doesn't generally put any requirements on the time each step or instruction takes. The MP3 decoding question almost makes sense, because we can predicate the question on no lag or stutter in the playback, and that puts some restriction on time-per-step, i.e. we can decode MP3 by hand, but not in a way that produces smooth playback in realtime. But even then, Nil Einne's point comes into play: processor A might accomplish the task using X MIPS, but processor B could accomplish the same task with Y MIPS, and Y<X. SemanticMantis (talk) 15:19, 5 May 2016 (UTC)
- Compound this problem with the CISC philosophy of architecting computers with very complicated instructions. I used to work for a company that built and sold "cameras on a computer chip." I informally used to call it "Photoshop in a can." We had a machine instruction to "make the photo good." That machine instruction could take millions or billions of clock cycles to run. It triggered massive quantities of mathematical sub-operations. This was years ago. Today's technology has similarly complex, application-specific machine instructions. One machine instruction might exist to "turn on the sensor, start a timer for the flash photograph, turn on the flash, take the photo, then postprocess it to look like a grainy film noir image, and compress the picture for upload to Popular Social Media website using their favored compression settings." Computers today are made from billions of cheap transistors: the days of using TTL logic to implement add-and-multiplies are long gone.
- Some purists might object to the terminology, or fall back on the microcode argument to suggest that what I call a machine instruction was actually millions of separate machine instructions... but, irrespective of those quibbles, we actually built those special computers in silicon, and we made them work, and hundreds of millions of them escaped out into the world to power just a massive percentage of all consumer cameras. As terrifying as this is, statistics indicate that you've probably got one pointed at you right now. So, these are not "hypothetical" or "academic" thought experiments - they're the simple realities of compute architectures in this decade.
- So, for any algorithm you can contrive, a sufficiently-well-paid engineer can turn it into fabricatable logic on silicon; and therefore, we can make any computer program into a single-machine-instruction operation. This realization is at the very heart of the infamous "megahertz myth": it's silly to count "number of clock cycles" or "number of instructions" unless you completely describe how those machine instructions operate. This also has absolutely immense implications for computer software security and integrity and detectability... but I don't want to sound too paranoid this week!
- Algorithm experts, compiler-writers, and hardware logic designers sometimes do actually go through the work to count clocks or instructions; but this is invariably a deep and complex research project. It will generally be fruitless to compile a list of representative algorithms and compare them on modern computers. But if you really want, you can always go to the processor documentation. For example, here are Texas Instruments' ARM processor benchmarks for common multimedia tasks. You can find the same kind of documentation for chips produced by many other reputable vendors. Tom's Hardware, a computer technology publication, used to publish comparative benchmarks for representative multimedia applications on various processors. The reality is, today's multimedia algorithms are so complex, it's difficult to find representative benchmarks, let alone to interpret their results, unless you are an expert in state-of-the-art multimedia algorithms.
- Nimur (talk) 15:56, 5 May 2016 (UTC)
- In light of all these conceptual and practical difficulties, the OP may do well to simply take an empirical approach and time a few different operations. That at least gives information that may be useful in determining which of two tasks, say a given FFT or a code compilation, takes longer on one specific computer. SemanticMantis (talk) 16:18, 5 May 2016 (UTC)
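A minimal sketch of that empirical approach (Python, using the standard library plus NumPy for the FFT; the input sizes here are arbitrary choices, not anything from the thread):

    # Minimal sketch of the empirical approach: time a few representative
    # tasks on the machine you actually care about. Sizes are arbitrary.
    import bz2
    import timeit
    import numpy as np

    data = np.random.default_rng(0).random(2**20)            # ~1M samples
    text = b"some fairly repetitive source-code-like text\n" * 20000

    fft_time = timeit.timeit(lambda: np.fft.fft(data), number=10) / 10
    bz2_time = timeit.timeit(lambda: bz2.compress(text), number=10) / 10

    print("FFT of %d points:   %.1f ms per run" % (data.size, fft_time * 1000))
    print("bzip2 of %d bytes: %.1f ms per run" % (len(text), bz2_time * 1000))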
- It should still be possible to make some back-of-the-envelope predictions. The task of calculating the date of Easter in the year 2588 ought to be less computationally expensive than decoding an MPEG-4 video. The computer architecture matters, but it can to some extent be compared using the number of instructions executed per second. The only real difference is between the number of normal instructions and floating-point ones. And of course whether the task needs real-time performance (MP3) or just needs to get done (compiling). Bytesock (talk) 16:56, 5 May 2016 (UTC)
- What is a "normal" computer instruction?
- I'd bet money that you're using a computer that implements either the Intel architecture or the ARM architecture right now. Have you looked at those instruction sets lately? The hardware instruction sets for such computers are documented in multi-volume encyclopedias. These are not toy computers. Their "normal" instruction set is quite complicated.
- I think you mean to distinguish between "integer" and "floating point" instructions. That's a false dichotomy and a comically oversimplified view of computer architectures in this century. That may have been an apt way to categorize computer instructions in 1961, but... today? Not so much. You need to find yourself a more up-to-date textbook! I really got a lot out of Computer Architecture: A Quantitative Approach, by John L. Hennessy (who founded MIPS, no less, and named the company using a bit of programmer humor!)
- I have not comparatively benchmarked this specific task, but it would not surprise me at all if decoding one frame of video uses fewer clock cycles (or executes faster, or uses less energy) than a common software implementation of the equation of time or of simple calendar operations.
- Nimur (talk) 18:10, 5 May 2016 (UTC)
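To put a rough number on the calendar side of that comparison: the standard, widely published "anonymous Gregorian" computus for Easter is only a couple of dozen integer operations (a Python sketch of the textbook algorithm, not something specific to this thread):

    # The anonymous Gregorian computus (Meeus/Jones/Butcher algorithm):
    # a handful of integer divisions and remainders per year.
    def easter(year):
        a = year % 19
        b, c = divmod(year, 100)
        d, e = divmod(b, 4)
        f = (b + 8) // 25
        g = (b - f + 1) // 3
        h = (19 * a + b - d - g + 15) % 30
        i, k = divmod(c, 4)
        l = (32 + 2 * e + 2 * i - h - k) % 7
        m = (a + 11 * h + 22 * l) // 451
        month, day = divmod(h + l - 7 * m + 114, 31)
        return month, day + 1

    print(easter(2016))   # (3, 27): 27 March 2016
    print(easter(2588))   # the year asked about above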
- [edit conflict] The Alpha 21064 supposedly beat the Intel Pentium on floating-point calculations by a factor of 6x, so presumably executing those imposed a larger computational penalty than any integer instruction. Some software (like WavPack) even uses integer math to circumvent this to some extent. As for complexity, the 80386 instruction set, which almost all Intel x86-32 processors are compatible with, only takes an A5-format book of ~500 pages to describe. The example question I'm thinking of is "How fast an ARM CPU is needed to real-time decode mpeg1-l3 at 44.1 kHz stereo, and how much memory will it need?", or compression of 320x240 video into MPEG-2, or say realtime PRML on an x Msps signal, etc., without having to acquire and test the hardware. The table provided by Texas Instruments was interesting, but it blurs which tasks belong to which result, and gives memory as a percentage without specifying the absolute amount elsewhere. As for which instruction sets are being used, I refer to generic instruction sets, like those on x86, ARM, MIPS, PPC, etc. If instructions trigger a specialized unit not available on most generic platforms, then things become very unpredictable (I bet that is used by some less transparent letter organizations). An interesting twist is the guy that booted an MMU-and-32-bit-demanding Linux on an 8-bit AVR. Bytesock (talk) 19:37, 5 May 2016 (UTC)
- Maybe you are actually interested in the theory of computation? Things like the time complexity or space complexity of an algorithm or program? Of course calculating the date of a future Easter is less computationally expensive than video decoding, but that is true even if there is no hardware - it is true if we do it with pencil and paper too. Whether or not a certain task is "computationally expensive" doesn't really depend on hardware; it depends on the mathematical structure of the problem (e.g. NP-hard problems are expensive once they get to a certain size, no matter what hardware you use (ignoring potential computers that may exist in the future)).
- Also, you seem to have missed my point above - I'm making a different point than Nimur and Nil Einne are. My point is that there is no "per second" inherent in any of these problems. When you ask how many instructions per second it takes to calculate a given future Easter, that's like asking how many miles per hour it takes to get from Albuquerque to Los Angeles - it's the wrong unit, and it becomes more correct if you take time out of the denominator. You can compare the number of steps necessary to complete different computational tasks, but that puts you into the theory of algorithms, not hardware performance. In the context of algorithm analysis, we still have to say what we mean by "step", but that's much more easily done in the abstract than on a given processor, and this has been basically codified and agreed upon in the past. Does that help at all? There are a number of textbooks and resources we can direct you to if you want to learn more about analysis of algorithms. SemanticMantis (talk) 19:07, 5 May 2016 (UTC)
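As a toy illustration of counting steps rather than seconds, one can instrument two search strategies and watch how the step counts grow with the input (a sketch only; the "step" here is simply a comparison against the data):

    # Counting steps instead of seconds: linear search vs binary search
    # on a sorted list, where a "step" is one comparison against the data.
    def linear_search_steps(sorted_list, target):
        steps = 0
        for value in sorted_list:
            steps += 1
            if value == target:
                break
        return steps                      # grows like O(n)

    def binary_search_steps(sorted_list, target):
        steps, lo, hi = 0, 0, len(sorted_list) - 1
        while lo <= hi:
            steps += 1
            mid = (lo + hi) // 2
            if sorted_list[mid] == target:
                break
            elif sorted_list[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return steps                      # grows like O(log n)

    for n in (1000, 1000000):
        data = list(range(n))
        print(n, linear_search_steps(data, n - 1), binary_search_steps(data, n - 1))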
- Yes, thank you for clarifying. Some tasks may be more computationally complex in the abstract sense; but if big computer companies have already spent ten million engineer-hours over the last twenty years painstakingly optimizing that task, we might have a machine that can do it incredibly efficiently. That doesn't mean it's "less complex" in the sense of algorithmic complexity - but it might be "less complex" in the sense of "machine instruction input/output." Nimur (talk) 19:14, 5 May 2016 (UTC)
- The article on time complexity has an interesting table, but it won't give a sufficient clue on, say, actual CPU selection. And you're right that there's no per-second aspect to calculating future Easter dates; I just wanted to make the question less complex. There are essentially two cases: how many generic instructions a specific task takes, and how many instructions per second are needed to achieve realtime performance for a task (compare energy versus power over time). But one might still want the Easter calculation or the compilation to take less than a day, and the video decode to be faster than the viewing process. Bytesock (talk) 19:52, 5 May 2016 (UTC)
- Reading the OP's clarification above, it definitely sounds to me like what they're really looking for are benchmarks. There are public benchmarks for things somewhat similar to some of the examples the OP asked for, but as SemanticMantis has said, these benchmarks are normally in time taken to run or something similar. And these benchmarks are only true for the precise software (including version and to some extent other software) being run, and to some extent, the precise hardware.
Benchmarks are used to compare hardware, but it's important to understand exactly what's being tested and how. (And some benchmarks are poorly done, e.g. lacking repetition.) Note that some of the specific examples used are still a bit weird. You'll likely find it difficult to find info on which ARM computers can "real-time decode mpeg1-l3 at 44.1 kHz stereo" and how much memory they need, because any ARM computer you're likely to encounter can do it.
Instead, you may find benchmarks testing what sort of ARM CPUs can realtime decode h264 of a certain resolution, frame rate, bit depth and profile. However, such results should be treated with care, since they will depend on the precise software, the ARM variant and the stream. (Besides the already mentioned resolution, frame rate, bit depth and profile, the complexity of the stream and the bitrate also matter, but these aren't necessarily mentioned by whoever did the benchmark.) Also, this would generally only be useful for non-hardware-assisted decoding. ARM CPUs intended for media devices, phones and tablets normally have specialised hardware decoding functions for h264, so provided your stream is supported by the hardware decoder, you can usually decode it.
Definitely of current interest is the hardware required for decoding h265 in realtime at various resolutions and frame rates, so you'll find various benchmarks, for both ARM and x86. (Although hardware decoding of h265 is starting to be added. And it's perhaps time to mention that sometimes vendors add hardware assist, where there are functions which are supposed to speed up decoding but other parts of the decoding still rely on the more generalised hardware functions. Sometimes these speedups are actually slower than a good software implementation using only the generalised functions.)
Likewise "compression of 320x240 video into mpeg-2" is something even 15 years ago wasn't looked at much much. The resolution is too low to be of interest. If you go back 15 years or perhaps slightly more, you'll probably find benchmarks of MPEG-2 encoding higher resolutions. (I think I remember some.) Bear in mind also that the software and hardware has moved on since then. So your performance now is going to be higher than if your were using the same software with modern hardware. (Well presuming the hardware still properly supports the software.)
Compression benchmarks, compilation benchmarks etc. are also common, although LZMA2 (7-Zip) or RAR is what's often used rather than bzip2, and the source code is normally far more than 1000 lines. BTW, to give another specific example of why you need to consider carefully what you're looking at, refer to the AES benchmarks [7]. There's unsurprisingly a very big increase (four times or more) between older x86 CPUs without AES-NI and newer ones with it. Of course, such a big speed boost will only be seen in software that takes advantage of AES-NI. Nil Einne (talk) 20:18, 5 May 2016 (UTC)
- Sticking with the algorithm tack, I also found some refs for time complexity of video decoding (below). I'll add one final thought that may help OP - benchmarks are generally about comparing different hardware on the same task. Analysis of algorithms is about comparing different tasks, on abstracted or idealized hardware. If the goal is to compare different algorithms on different hardware, then I really don't think that there is any good theory-based approach, because of the things Nimur describes where you can in principle design a chip solely to do that algorithm in one instruction. SemanticMantis (talk) 20:41, 5 May 2016 (UTC)
- Regarding crypto in hardware: one might never know if the accelerator saves the key "for later use"... :-) Bytesock (talk) 22:37, 5 May 2016 (UTC)
- Right, in the real world performance matters sometimes. But it's important to separate three things, I think: 1) the time it takes for a step, 2) what you count as a step, and 3) how many steps you need to do a thing. When analyzing algorithms, we don't even (usually) talk about exactly how many steps they take. We talk about how fast the number of steps grows, considered as a function of the size of the input. Even then, we don't (usually) specify the exact function, but just describe its general characteristics: is it constant, linear, exponential, factorial or maybe something else? Landau notation is useful for describing how fast things grow. It's a common exercise in some courses to ask students to derive, say, the time complexity of matrix multiplication and other elementary algorithms. And there's a whole field of research that amounts to getting a better expression inside the "big O". For example, here [8] is a research article presenting a clever way of doing something in time O(N logN), which can be a lot better than the O(N^2) you'd get with a naive method. Some people spend their whole careers on this kind of stuff.
- Anyway, characterizing video decoding then has something to do with both CPU performance and analyzing the math behind the algorithms. Here [9] [10] are two research papers I've found that address some of these issues. They are both freely accessible, and might give you an idea of the scope and feel of this kind of work. The first one has a bit more about algorithm analysis, and the second has a bit more about architecture. So yes, people can analyze this stuff, but it's hard work. I am not aware of any "back of the envelope" way to do this stuff, but maybe someone else has some ideas. I do have a sense of what things are easy and hard, but that is mostly derived from my empirical experience on one class of computers, and that is why I recommended an empirical approach above. If your goal is research and understanding, for fun and education, knock yourself out on theory - it's cool stuff. If your goal is to know which process will hog more CPU cycles on your computer, my advice is to try a few tests and see. SemanticMantis (talk) 20:41, 5 May 2016 (UTC)
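One cheap middle ground between the theory and full benchmarking is to time the same operation at a couple of input sizes and see how the time grows when the size doubles; a rough sketch using the matrix-multiplication example mentioned above (sizes arbitrary, NumPy assumed):

    # Rough sketch: estimate how running time grows with input size by
    # doubling the size and comparing wall-clock times.
    import time
    import numpy as np

    def time_matmul(n):
        a = np.random.default_rng(0).random((n, n))
        b = np.random.default_rng(1).random((n, n))
        t0 = time.perf_counter()
        _ = a @ b
        return time.perf_counter() - t0

    t_small, t_big = time_matmul(256), time_matmul(512)
    print("256: %.4fs  512: %.4fs  ratio: %.1fx" % (t_small, t_big, t_big / t_small))
    # A ratio near 8x would suggest roughly cubic growth for this implementation;
    # the exponent you measure on one machine need not transfer to another.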
- My question is really, say, will a 200 MHz ARM with 32 kB RAM be able to encode 320x240 video, and so on. So it's more about whether the CPU (SoC) is good enough. Processes on a PC matter less; there are plenty of cycles to spare. The slower the CPU one can use, the fewer worries there are about impedance matching, ground planes etc., and slower CPUs tend to use chip packages that are easier to handle. Extra computational units like SIMD, NEON etc. tend to require very specific software support, so they might as well not be there to begin with. Btw, your links had some really interesting information. Regarding mpeg1-l3, I have tested with an 80486DX at 33 MHz and it took about 6x realtime to decode; thus a 200 MHz version should manage it, but then a Pentium running at 75 MHz can handle it, so it's obviously more efficient. So the question is then: how slow an ARM can outperform a P-75? Bytesock (talk) 21:57, 5 May 2016 (UTC)
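Turning that 486 measurement into a back-of-the-envelope cycle budget (the 6x figure is the one quoted above; everything else is plain arithmetic, and the cycles are 486-class cycles, not ARM ones):

    # Back-of-envelope sizing from the quoted measurement: mpeg1-l3 decoding
    # took ~6x realtime on a 33 MHz 486, so the decoder needs roughly
    # 6 * 33 MHz worth of 486-class cycles per second of audio.
    clock_hz = 33e6
    slowdown = 6.0                                  # measured: 6x slower than realtime

    cycles_per_audio_second = clock_hz * slowdown   # ~200 million cycles
    samples_per_second = 44100 * 2                  # 44.1 kHz stereo
    cycles_per_sample = cycles_per_audio_second / samples_per_second

    print("%.0f M cycles per second of audio" % (cycles_per_audio_second / 1e6))
    print("~%.0f cycles per output sample on the 486" % cycles_per_sample)
    # A Pentium or an ARM does a different amount of useful work per cycle,
    # which is why a 75 MHz Pentium manages what would take a ~200 MHz 486.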
- User:Bytesock well I finally understand both your question and your goal, so that's progress. I do think some algorithm-level considerations will help you roughly rank tasks. Then you can cross-index with the rankings for the chips, and have some motivation for your guesses. I'm glad you like the refs, but unfortunately I don't have anything on how to make the kinds of estimates you want in a rigorous manner. I understand now that empirical methods are not great if you want to get a quick feel for how many of 10 (or 100) different chips can perform a decoding task without stuttering and you don't want to buy them all just to test. This has the feeling of work that people get paid to do, but then again if I were paying an engineer for this I think I might find the cost of the chips less expensive than the time of the engineer :) SemanticMantis (talk) 14:14, 6 May 2016 (UTC)
- I think the problem is you still seem to be looking at this as very simple when it's not. Even assuming no specialised hardware encoder, a 200 MHz ARM of one architecture may be able to encode the video, whereas it may take a 600 MHz ARM of another architecture. You seemed to mention this with your x86 example, but then not consider it for your ARM case. And besides that, you've only defined the resolution. As already mentioned, frame rate, bitrate, complexity of the codec (note that most modern codecs, e.g. h264 or even MPEG-2, aren't one complexity; they often have different profiles and different options that can be turned on and off, which can make a big difference in complexity) etc. will make a difference. And even if all these are the same, video 1 may encode successfully while video 2 may not. And software implementation 1 may be able to achieve realtime on a 200 MHz CPU, whereas software implementation 2 may require a 600 MHz CPU of the same architecture. And many software implementations have adjustable features which don't relate much to the codec's defined complexity, but which have a significant effect on speed (with some effect on quality), so again you may be able to achieve realtime with 200 MHz on one level but need 800 MHz for another. Then you have to consider that all these factors intersect, and not in clear-cut ways. For example, software implementation 1 could easily achieve generally better subjective quality than software implementation 2 despite si2 being far more demanding. Barring perhaps really old results, I don't think you're going to find many benchmarks comparing a 75 MHz Pentium to a slow ARM. But you definitely can find benchmarks comparing modern x86-64 CPUs to modern ARM CPUs at various tasks. And as expected, even just comparing two different defined systems, one with ARM and one with x86-64, the relative performance easily varies by 2x or more in cases where there isn't really any specific acceleration (like AES-NI), depending on precisely what you're benchmarking. BTW, as for the risk that AES-NI stores stuff, most experts I've read (which admittedly isn't many) generally agree that there are far bigger things most people should be worried about (e.g. RdRand and Dual_EC_DRBG, although many things only use these as one source of randomness anyway, or simply some sort of detection of stuff of interest, or stuff outside the CPU). Also, AES-NI is generally so fast that if you aren't required to use only AES, you could combine AES with something else. Nil Einne (talk) 15:05, 6 May 2016 (UTC)
Old versus new software
I own a Core i7 PC, albeit with only 4 GB of RAM and an old-school HDD. I find that Word 2013 takes time to load, and isn't always that responsive when working with tables and images.
More than 15 years ago, I worked with a P100 with 24 MB of RAM, and MS Works functioned pretty well; the table environment seemed about as fast as Word on my present machine.
I appreciate that Word 2013 is much richer in features than MS Works, but even so. Why is the jump in resource demands so extreme?--Leon (talk) 19:18, 5 May 2016 (UTC)
- Because Microsoft has added a lot of functionality that you may or may not need. And once the capacity of a machine is available, many programmers tend to fill it up without any good reason. One might suspect collusion between chip makers and software design houses too, but that's just speculation. If you want a speed boost, use as old software as you can get away with on a modern machine, or go with free open-source systems and software. They are usually more efficiently designed but may lack driver availability. Bytesock (talk) 19:58, 5 May 2016 (UTC)
- Writing tight code is expensive. It can also lead to inconsistencies (sure, I can write a faster drop-down menu than the standard Windows drop down menu, but then it will behave slightly differently, not support styling, or use a fixed font, or...). Real software written under economic and market constraints thus relies on increasingly more powerful, but also deeper, larger, and hence slower libraries and frameworks. Also, even Word now has better text layout than it had 15 years ago - another thing that does not come for free. And each character on the screen probably has 25 times more pixels, and may even use sub-pixel antialiasing, something that simply was not around 15 years ago. I handle the problem as Bytesock suggested - I use Emacs with AUCTeX and LaTeX for most text, and org-mode if I need anything with tables. Perceived speed and responsiveness is one of the main reasons why I stick with these tools. --Stephan Schulz (talk) 20:42, 5 May 2016 (UTC)
- I would not agree that writing for tight computer resources is particularly hard, unless we are talking about a 14-bit CPU with bank switching trying to handle floating point in realtime with only a few kB of flash available. It's mainly about asking which features actually benefit the task at hand, and which are just "nice to have" but really just in the way. It may be simple things like allocating a 32-bit array to handle graphics when only 4 bits are needed, which may then cause the machine to start swapping. And so on. Bytesock (talk) 21:02, 5 May 2016 (UTC)
- I very much disagree. A doubly linked list is easier to write than a splay tree. Linear search is easier than good indexing. Arguably, when you are not born a recursive thinker, insertion sort is easier than mergesort. I've reduced the CPU usage of a program from 100% (and dying, i.e. falling farther and farther behind in its processing of real-time events) to 2% by moving the event queue from a singly linked list to a skip list. But it is harder to come up with good data structures and algorithms than to use the obvious solution. --Stephan Schulz (talk) 06:25, 6 May 2016 (UTC)
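To make the data-structure point concrete: the win Stephan describes comes from O(log n) rather than O(n) insertion into an ordered event queue. A full skip list is too long to sketch here, but a binary heap gives the same asymptotic behaviour for this job (illustrative Python only, not the code from his program):

    # Illustrative only: an O(log n) event queue built on a binary heap,
    # versus O(n) insertion into a sorted linked/plain list.
    import heapq

    class EventQueue:
        def __init__(self):
            self._heap = []

        def schedule(self, fire_time, event):
            heapq.heappush(self._heap, (fire_time, event))   # O(log n)

        def pop_next(self):
            return heapq.heappop(self._heap)                 # O(log n), earliest first

    q = EventQueue()
    for t, name in [(5.0, "retry"), (1.0, "timeout"), (3.0, "poll")]:
        q.schedule(t, name)
    print(q.pop_next())   # (1.0, 'timeout')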
- Using whatever is needed to process a list is one thing; haplessly allocating lots of memory for no good reason is bad practice. There's a difference between making a good engineering choice and being sloppy. Your example seems more like exercising a sound choice. At other times it may pay to replicate parts of a large library to avoid the baggage it imposes. Bytesock (talk) 13:42, 6 May 2016 (UTC)
- Beyond what you've said, the OP also mentioned images. I don't know what images they're referring to, but if these are e.g. 8 MP 24-bit photos, I'm guessing this isn't something they regularly worked with on their 100 MHz Pentium. Nil Einne (talk) 14:37, 6 May 2016 (UTC)
- The point is programming practices, not the particular computer. Bytesock (talk) 18:50, 6 May 2016 (UTC)
- You seem to have misunderstood my point, as it has nothing to do with programming practices or any specific computer. My point was that although programs have changed significantly since then, in some cases so has the workload that the person using the program imposes on it. And I'm not simply referring to things like automatic spell check etc. which others have mentioned, but stuff which is directly imposed by the user, like image resolutions (as the OP specifically mentioned working with images).
If you do want to talk about those things, well, programming practices can't magically allow a computer to easily work with an image which takes nearly all the RAM. And that would be the case for the examples the OP and I mentioned, i.e. an 8 MP 24-bit image on a 100 MHz Pentium with 24 MB of RAM. Of course you can make different decisions on how to handle things. For example, if a word processor from the time of the OP's 100 MHz Pentium could even handle such a large image, it would surely have reduced the resolution and only ever worked with the reduced-resolution variant. This may not necessarily be the case nowadays; that does relate to programming, but there are various reasons for differences in how high-resolution images are handled. Sometimes mistakes or pointless work may be done, and sometimes there are actually decent reasons for the decision. So to just call it poor programming practice is partially missing the point.
Note that per my WP:indenting, I was replying to Stephan Schulz's first replies (and the previous replies before Stephan Schulz), not to you. This was because my comment was related to the stuff Stephan Schulz and people before had said, and not to stuff you or the people between you and the post I was replying to had said. (Sometimes there may be some comments which I took into minor consideration, but this time there was nothing really, as it was irrelevant to my point.)
- See software bloat. It doesn't have to be that way, but it usually is. Something to do with capitalism, the treadmill of production, keeping up with the Joneses and a little planned obsolescence. For alternatives, you might check out LibreOffice if you don't have the stomach for Emacs or Vim. While LibreOffice is by no means lightweight, it may run better than Word on your system. SemanticMantis (talk) 20:50, 5 May 2016 (UTC)
- As a computer programmer, I saw some of the reasons for software bloat personally:
- 1) There often is no spec for performance when software is written. That is, they say it must do X, but they don't say it must do X in Y time.
- 2) Programmers are then evaluated on writing code that meets the spec (without performance benchmarking) as quickly as possible. For example, writing a bubble sort is nice and quick, so they might do that rather than find a more efficient method.
- Some absurdly slow programming methods I've seen are:
- TSO had a search function in the text editor that apparently did a backwards search by loading the entire file into memory, discarding all but the last line, searching that, then loading the entire file into memory again and searching the next-to-last line, etc.
- A 2D graphics program (I forget which, maybe CorelDraw?) redisplayed every vector on the screen whenever anything was changed, including when panning the image.
- I saw a database application that, rather than doing a join of two tables as an embedded SQL SELECT command, instead did a SELECT of the entire contents of both (huge) tables and then manually performed the join (a sketch of the two approaches follows after these examples). The result was a program that took some 20 minutes to do a simple DB query. After I fixed it, it took under a second, but the tester was rather disappointed to lose his "coffee break function".
- CATIA V4, mainframe version, apparently forced a fatal error when you hit the EXIT button, rather than closing down properly, presumably causing all sorts of memory leaks, etc.
- I saw an application that allows you to select 3 different options, but instead it creates the result page for all 3 options and then, once you select an option, deletes the other 2.
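For the database example mentioned above, the contrast looks roughly like this (a sketch with made-up tables using Python's built-in sqlite3, not the actual application described):

    # Sketch with made-up tables: letting the database do the join versus
    # pulling both tables whole and joining them in application code.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customers(customer_id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders(order_id INTEGER PRIMARY KEY, customer_id INTEGER);
        INSERT INTO customers VALUES (1, 'Ada'), (2, 'Bob');
        INSERT INTO orders VALUES (10, 1), (11, 2), (12, 1);
    """)

    # Sensible: one query, the engine does the join (and can use indexes).
    fast = conn.execute("""
        SELECT o.order_id, c.name
        FROM orders o JOIN customers c ON c.customer_id = o.customer_id
    """).fetchall()

    # The anti-pattern: fetch both tables whole, join them by hand.
    orders = conn.execute("SELECT order_id, customer_id FROM orders").fetchall()
    names = dict(conn.execute("SELECT customer_id, name FROM customers"))
    slow = [(order_id, names[customer_id]) for order_id, customer_id in orders]

    print(sorted(fast) == sorted(slow))   # same answer, wildly different cost on huge tables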
- Then the problem is the management, or in some cases even the customers, who don't get software efficiency. From an outside perspective, either a company does a good job or it doesn't. Bytesock (talk) 03:20, 6 May 2016 (UTC)
Bump it up to at least 8GB of RAM. My daughter's computer was creeping along with only 4GB - even making it 6GB helped noticeably, and 8GB is better. A GB costs about US$5 these days. Bubba73 You talkin' to me? 07:01, 6 May 2016 (UTC)
- Definitely agree with Bubba73. I went from 4GB to 8GB (note: you must have a 64-bit OS to use more than 4GB) and noticed a big speedup. Then I went to 16GB and the speedup wasn't nearly as dramatic, but I did notice it. The other thing that really helps is loading Windows on an SSD and using the old rotating drive to hold data.
- Want to try something to see how fast your current hardware runs without software bloat? Try Tiny Core Linux. You don't have to disturb your Windows install - Tiny Core runs fine off a bootable USB thumb drive (256MB works, but 2GB drives are less than $10). As an experiment I loaded Tiny Core on an old 486DX with 64MB of RAM and it was acceptably fast for email, word processing, etc. Warning: Tiny Core is different from Windows and from other Linux distributions, so you will definitely need to read the docs. It took me less than an evening to become productive. --Guy Macon (talk) 07:56, 6 May 2016 (UTC)
- For an even bigger speed boost, try MenuetOS and similar OSes. Bytesock (talk) 13:44, 6 May 2016 (UTC)
- That is impressive! At 1.44 MB and a couple of seconds to boot, that's only 300 times bigger and 100 times slower booting than the software inside a Commodore C64 (8KB, 16 millisecond boot). Of course it doesn't have the C64's blazing fast 10 minutes to read a 170KB floppy and 30 minutes to read a 100KB tape... :( --Guy Macon (talk) 15:48, 6 May 2016 (UTC)
- OT: If the printed circuit board production engineers had not screwed up the connections because of some crappy screw hole, the C64, and thus also the 1541, would have been something like 13 times faster. Bytesock (talk) 18:49, 6 May 2016 (UTC)
- Another approach is to turn off some of the unneeded features, for example dynamic spellchecking and grammar checking. In the "good old days" you finished your document and then hit the spellcheck button, which prevented it from slowing things down as you typed. StuRat (talk) 15:31, 6 May 2016 (UTC)
How much computing power is needed depends mostly on the task at hand. Live encoding of video does have some absolute minimum demands, while word processing inherently has less. People have really lost perspective on resources and their value. Some examples:
- Word processing and basic painting: 8-bit 6502 with 64 kB RAM (Commodore 64) + GEOS
- Graphical terminal Unix+X11: Intel 80486 + 32 MB RAM (FreeBSD + XFree86)
- MP3 playback: Intel Pentium 100 MHz + 8 MB RAM
- Video playback: Intel Pentium-III 500 MHz + 64 MB RAM
- Realtime MP3 encoding: Intel Pentium-III 1000 MHz + 8 MB RAM
- Video encoding: Intel Core2Duo 2 GHz (circa) + 512 MB RAM
Now if software design teams make unwise choices, a lot of processing power will be lost. Bad tools that generate a lot of junk code also interfere with efficient coding. Even if an A-bomb went off in the same house as you, a modern 4 GHz CPU would still have time to process circa 200 instructions. So all this about slow computers is mainly a question of bad selection of software. Bytesock (talk) 17:46, 8 May 2016 (UTC)