Wikipedia:Reference desk/Archives/Computing/2016 February 24
February 24
What does LHA stand for?
LHA is a data compression format, similar to ZIP etc. But what does LHA stand for? I assume the A is for "archive". Equinox (talk) 03:14, 24 February 2016 (UTC)
- "LH" might stand for Lempel-Huffman and "A" for Archive - see LHA (file format). Bubba73 You talkin' to me? 03:35, 24 February 2016 (UTC)
- Based on the history, going backwards: LHA was preceded by LHarc. LHarc was preceded by LZHUF. LZHUF was LZARI with the arithmetic coding replaced with Huffman encoding. So, the HUF stood for Huffman. The LZ is a continuation of all the LZ... names based on the Ziv-Lempel method. So, the guess that L=Lempel and H=Huffman is pretty good. I assume that the author didn't mean anything by them. The author took LZHUF and turned it into LZHUFarc, shortening the project to LHarc and then shortening it further to LHA. 209.149.114.211 (talk) 19:09, 24 February 2016 (UTC)
- I know this isn't the place to complain but why on earth does every single "extension" have its own heading in that article? Looks terrible. Vespine (talk) 22:04, 25 February 2016 (UTC)
- It's probably so you can jump to it quickly from the table of contents. I agree that it's ugly though. SteveBaker (talk) 15:08, 26 February 2016 (UTC)
- Bubba73 fixed it, good job, much better. Vespine (talk) 22:24, 29 February 2016 (UTC)
Economic viability of a university using its desktop PCs for distributed computing?
I was wondering, since my university has many, many desktop PCs throughout its various buildings, why doesn't it use them for distributed computing purposes? I know for a fact that they have machines dedicated to intensive computing although I don't really know what they're used for. They have a new "cluster" that they're calling Eddie 3. Is it because the energy consumption is inefficient or something else? ----Seans Potato Business 14:29, 24 February 2016 (UTC)
- Maintenance and configuration costs, most likely. Desktop PCs get switched on and off, plugged in and out of networks, and run many different OSes (or, worse, versions of Windows, which are useless for most scientific computing). There are only a few projects that would work well with such a heterogeneous and unstable network. And hardware costs are very low compared to salaries. --Stephan Schulz (talk) 14:36, 24 February 2016 (UTC)
- Right, the cost of a nice super-computing cluster for scientific computation is more about setup and maintenance in many university contexts. For things that do work well in OP's context (and have been known to run on computers in labs and administrative offices at universities), there are things like Folding@home, SETI@home, PrimeGrid, and World Community Grid. SemanticMantis (talk) 16:36, 24 February 2016 (UTC)
- Yeah but my university would have no motivation to pay the electrical costs for computations that are used to publish papers by another university. 129.215.47.59 (talk) 17:25, 24 February 2016 (UTC)
- If your university is any good, it would have an interest in the general progress of science. And it understands the tit-for-tat of much of the scientific process (e.g. I regularly review papers for free, because I want my papers to be reviewed - although there is no formal requirement or dependency). --Stephan Schulz (talk) 17:28, 24 February 2016 (UTC)
- Why would "versions of Windows, (be) useless for most scientific computing" Stephan Schulz? At the first glance, that looks like Windows bashing, just because it's Windows, and it's OK to bash it. Scicurious (talk) 15:25, 25 February 2016 (UTC)
- That's certainly reason enough for me. But more seriously, most scientific applications, especially distributed ones, are written for UNIXoid operating systems. They seem to work with and leverage open source tools, they rely on remote login, portability and scriptability, and the more successful ones have lifetimes that are an order of magnitude longer than the typical new Windows framework du jour. --Stephan Schulz (talk) 16:57, 25 February 2016 (UTC)
- This is why BOINC comes with a VirtualBox installation. It's apparently literally easier to spin up a Linux VM which does the calculations than to rewrite the code to run on Windows or OS X. Blythwood (talk) 19:55, 26 February 2016 (UTC)
- Although only used for a small number of BOINC apps at the moment [1]. It also sounds like technically you could use Windows or OS X if you were able to solve the licensing issues, and another advantage is that even with Linux it means you have a consistent system [2]. Nil Einne (talk) 07:28, 27 February 2016 (UTC)
- OP, you might also be interested in Berkeley Open Infrastructure for Network Computing which allows a computer to run any of several distributed computing efforts. Dismas|(talk) 16:58, 24 February 2016 (UTC)
- Some places do do it: The University of Central Missouri is apparently GIMPS' biggest contributor. Mingmingla (talk) 19:52, 24 February 2016 (UTC)
- Fundamentally, it comes down to power, I believe (although I'm no expert). Desktop computers aren't built to be particularly energy-efficient and running BOINC-type stuff on them may well be more expensive than just building a proper cluster, unless they are idling somewhere like daytime Arizona and you have a lot of spare solar panels. A close analogue is bitcoin mining, and there people have found that doing it on a normal desktop computer is just not worth it. At least, not unless it's someone else's you've hacked into and taken over. Blythwood (talk) 19:54, 26 February 2016 (UTC)
- I don't think this is particularly accurate. The power efficiency of most modern Intel CPUs isn't actually that different from those commonly used in clusters, although it does depend on the precise CPU in question. Lower-end Intel CPUs (which may be most common for desktops in universities) can sometimes be effectively less energy-efficient because they have features disabled. Also, they only have to meet the TDP level (which is often the same as higher-end CPUs), so to some extent binning can mean those with higher power usage end up in the lower-end lines. However, this effect nowadays isn't anywhere near what it used to be. Further, you can often get energy-efficient versions of the lower-end CPUs with lower TDPs. Also, desktop CPUs tend to get the newer versions faster than those targeted at clusters, and desktop systems are probably upgraded more often than cluster ones. (So e.g. your cluster may still be Ivy Bridge whereas perhaps your desktop is Haswell or even Skylake.)
A complicating factor is whether there is any advantage to using, say, 8 threads on a single CPU in your cluster compared to spreading the work over 2 CPUs on different computers. Also, if your program needs lots of RAM, the average desktop computer probably doesn't have anything close to what is available on the cluster. Further, even when using multiple CPUs on a cluster, there is often some advantage compared to different CPUs on different desktops.
If you're using a GPU, things will be worse, particularly since most consumer GPUs are intentionally limited in some areas such as double precision. So it's more likely you'll get a significant advantage from using a specialised system. In any case, most desktop systems for universities probably don't have a dedicated GPU, and if they do, it's likely to be a very weak one.
The situation with mining isn't really comparable. In fact, for a while, mining with certain GPUs may have been effective. Nowadays it generally isn't, but that is because the comparison is with highly specialised ASICs. The situation with clusters is different, since the actual hardware, in particular the CPU, isn't that different between clusters and desktops. The main differences tend to be higher thread counts and more cache, as well as the way they work together. It may still be that the small advantages are enough to make up for the added cost, but I strongly suspect the other answers above are more correct. It's mostly about support and maintenance. Having spoken to someone who has used such a university cluster before, I know that even with a dedicated cluster you can get weird behaviour which requires assistance to work out what's going on. Trying to work out what the problem is when you're having issues with random desktop computers could be a nightmare.
- Perhaps something like the classic Stone Soupercomputer project? SteveBaker (talk) 20:03, 26 February 2016 (UTC)
Colourful text on console
Today, as I was developing a program at work that runs as a Windows service in production use, but can also be run as a console application for debugging, I got to thinking. The program uses a logging library that uses colour to make different log levels stand out. The colours are the same basic 16 colours that I saw on a CGA/EGA PC in my elementary school in the mid-1980s. Now I understand there's some kind of ANSI standard for this kind of coloured console output, but then I got to thinking. It's 2016. Computer displays have been capable of full 24-bit colour output for years. Why are consoles, both on Windows and on Linux, still stuck in the 1980s? Why don't they allow setting arbitrary RGB hues on text output?
I remember that the Amiga uses the same kind of ANSI text colouring, but in contrast to the PC, the Amiga uses an indexed-colour screen where every single indexed colour can have its actual RGB hue defined freely. So, the Amiga also uses the same 16-colour ANSI formatting, but instead of having the same black, blue, green, cyan, red, purple, yellow and white in two intensities, the 16 colours can be chosen freely. Is even this possible on Windows or Linux? JIP | Talk 20:36, 24 February 2016 (UTC)
- This thread is gigantic, so I'm going to rather impolitely interject some things at the beginning where they might be noticed. • The Windows console doesn't use escape codes to set colors; it uses API functions like SetConsoleTextAttribute. ANSI.SYS existed in the MS-DOS days and worked in some Windows versions but I don't think any equivalent was ever supported for native Windows programs. • You can redefine the RGB values of the 16 console colors, as on the Amiga, using SetConsoleScreenBufferInfoEx. • Unix-style terminals work in Windows; mintty is one. But software written for the Windows console won't work in mintty, unless it doesn't use the console API functions. • There's no reason you couldn't even draw arbitrary graphics in a console window. The NAPLPS standard has existed for decades, but standard *ix terminals don't support it or any other way to escape from the grid of monospaced characters, despite the many command-line programs that would benefit from it. So yes, everyone is stuck in the 1980s, not just Windows, and I don't know why. -- BenRG (talk) 20:17, 25 February 2016 (UTC)
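For reference, a minimal sketch of the two console API calls BenRG names (error checking omitted; the palette-remapping call is available from Windows Vista on):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE con = GetStdHandle(STD_OUTPUT_HANDLE);

    /* Pick one of the 16 attribute colours for subsequent writes. */
    SetConsoleTextAttribute(con, FOREGROUND_RED | FOREGROUND_INTENSITY);
    printf("Bright red text\n");

    /* Redefine the RGB value behind one of the 16 palette slots,
       much as the Amiga lets you remap its indexed colours. */
    CONSOLE_SCREEN_BUFFER_INFOEX info;
    info.cbSize = sizeof(info);
    GetConsoleScreenBufferInfoEx(con, &info);
    info.ColorTable[FOREGROUND_RED | FOREGROUND_INTENSITY] = RGB(255, 128, 0);  /* orange */
    SetConsoleScreenBufferInfoEx(con, &info);
    printf("The same attribute now renders as orange\n");

    return 0;
}

Because ColorTable is indexed by the low four attribute bits, remapping slot 12 changes how everything written with the bright-red attribute is displayed.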
- ANSI escape code describes the system, and that article's #colors section notes that many Linux (et al) terminals support "ISO-8613-3 24-bit foreground and background color setting". The Windows console is a rather sad neglected place. -- Finlay McWalter··–·Talk 20:59, 24 February 2016 (UTC)
- Yes, but I was also thinking, why do we still only use ANSI? Why isn't there a way to colour text freely, instead of sticking to 16 colours, even if their hues can be selected freely, such as on many Linux terminals but not on Windows? JIP | Talk 21:03, 24 February 2016 (UTC)
- I don't understand your question. If you're simply asking why the Windows console is bad, it's because it's neglected, presumably because Microsoft doesn't consider the console to be important and wants people to use its graphical management tools. -- Finlay McWalter··–·Talk 21:06, 24 February 2016 (UTC)
- I think the reason I'm asking is text output. Consoles provide a convenient way for programs to provide output by simply calling printf() or its equivalent, instead of having to learn intricate, OS-specific graphics output libraries. If I wanted to, I could read through the MS Windows and X Windows APIs to find a way to output free graphics, but my job concerns background processing, not graphics output. There doesn't seem to be any better way, either on Windows or Linux, for a program to simply call printf() and get its output displayed other than consoles. So why aren't they taking advantage of the technology that's been there for years? JIP | Talk 21:13, 24 February 2016 (UTC)
- Now I'm confused too. Finlay has mentioned rich 24-bit colored text on Linux terminals, and I'm using similar on my OSX Terminal.app right now. I don't know how it works in Windows, but the point is that good consoles do support lots of colors for text these days. While each OS has pros and cons, I think you will find very few people who will defend the position that the Windows console/terminal is good compared to the other major OSs. This sort of concern does come up in the real world. The game Brogue (video game) sort of wants to live in an ASCII terminal, but it can't, because it couldn't then display in these nice colors [3] on Windows machines. It still uses ASCII characters, and I think it could in principle do fine in a *nix terminal, but it renders through SDL instead due to the huge marketshare of Windows. SemanticMantis (talk) 22:47, 24 February 2016 (UTC)
- Conway's Game of Life, Minesweeper and even chess are more examples of games which can just have an ASCII interface, but would look a lot nicer with color. If you're more interested in programming the game engine than graphics, it would be good to have an ASCII option, at least in the early stages. StuRat (talk) 01:43, 25 February 2016 (UTC)
- Back in my day, when you could write directly to the video memory in DOS, you could easily set the color of each character. I assume that you can't do that now. Bubba73 You talkin' to me? 02:37, 25 February 2016 (UTC)
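(For the record, under real-mode DOS that looked something like the sketch below; MK_FP and the far keyword are specific to 16-bit compilers such as Turbo C, so treat it as illustrative only.)

#include <dos.h>

int main(void)
{
    /* In colour text mode the screen lives at segment 0xB800:
       each cell is a character byte followed by an attribute byte. */
    unsigned char far *video = (unsigned char far *) MK_FP(0xB800, 0);

    video[0] = 'A';     /* character in the top-left cell */
    video[1] = 0x1E;    /* attribute 0x1E: yellow on blue */

    return 0;
}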
I think I will have to show by example. Suppose I'm writing a console application in C. Why can't I do something like this (hypothetical)?
for (int i=0; i<256; i++) { settextcolour(0, 0, i); printf("This should be in shade %d of blue\n", i); }
JIP | Talk 06:04, 25 February 2016 (UTC)
- You might read through this: [4]. It looks like they have a way to do what you want. StuRat (talk) 06:16, 25 February 2016 (UTC)
- Which is ANSI escape codes. I don't know why JIP persists in asking "why can't I do X" when I've cited an article that shows how you can. -- Finlay McWalter··–·Talk 11:37, 25 February 2016 (UTC)
- AFAIK all the cited article says is that I can do the basic 16 colours with ANSI escape codes. This is not what I'm asking. I would want to be able to colour the text with any possible 24-bit hue. If I had a 256-line terminal and let the program run its course, I would end up with 256 lines, every single one of a different colour. If I were to write:
for (int r=0; r<256; r++) { for (int g=0; g<256; g++) { for (int b=0; b<256; b++) { settextcolour(r, g, b); printf("This should be in hue %d, %d, %d\n", r, g, b); } } }
- I would end up with 16,777,216 lines, every single one of a different colour, were it not for the fact that current display technology is not capable of showing so many pixels, let alone lines of text, at the same time. JIP | Talk 15:46, 25 February 2016 (UTC)
- No, as I quoted above, Linux terminals support "ISO-8613-3 24-bit foreground and background color setting". 2^24 is not "16 basic colours". So this python program (which just prints characters to stdout)
#!/usr/bin/python3
ANSI_CSI = '\033['
ANSI_RESET = ANSI_CSI + "m"
def bg_colour(r, g, b):
    return "{}48;2;{};{};{}m".format(ANSI_CSI, r, g, b)
for r in range(0, 256, 8):
    for g in range(0, 256, 2):
        print(bg_colour(r, g, 0) + " ", end="")
    print(ANSI_RESET)
- produces this output on gnome-terminal. -- Finlay McWalter··–·Talk 16:48, 25 February 2016 (UTC)
- I got a syntax error at the line print (bg_colour(r,g,0)+" ", end=""). I removed the , end="" part and the program ran, but I didn't see anything except empty space all of the same colour. I even tried disabling any preset theme from the Gnome-Terminal preferences, but still got the same result. I am running Gnome-Terminal version 3.10.2. JIP | Talk 17:32, 25 February 2016 (UTC)
- It works for me - even using Cygwin's terminal windows under Win7! What does 'echo $TERM' say for you? For me, it says 'xterm' and works great - but other terminal emulators may not be able to do that if they're emulating (say) a vanilla vt100 or something. SteveBaker (talk) 15:05, 26 February 2016 (UTC)
- I translated the program to C to the best of my ability, and ran it. It behaved the exact same way on Gnome-Terminal. Then I ran it under XTerm. It showed me pretty colours in shades of red, green, orange and yellow, but not as many shades as the picture you linked. It seemed to somehow cut some of the detail of the hues; the actual hue only changed about once every 8 characters (I verified by taking a screenshot and looking at it under GIMP). But it definitely showed more than 16 colours. So I think Linux is capable of more than 16 ANSI colours, if you use a decent enough terminal. Still, the StackOverflow and MSDN articles User:StuRat linked to at [5] don't suggest it's possible to have more than 16 ANSI colours on Windows. JIP | Talk 18:21, 25 February 2016 (UTC)
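(For comparison, a C version of Finlay McWalter's Python example might look roughly like the following; this is only a sketch, not JIP's actual translation, and it assumes a terminal that accepts the ISO-8613-3 24-bit escapes.)

#include <stdio.h>

int main(void)
{
    for (int r = 0; r < 256; r += 8) {
        for (int g = 0; g < 256; g += 2) {
            printf("\033[48;2;%d;%d;0m ", r, g);   /* 48;2;R;G;B sets the background */
        }
        printf("\033[m\n");                        /* reset, then end the row */
    }
    return 0;
}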
- Well, I suppose you might have your graphics card set up to render 16 bit color (5+6+5 RGB or maybe even 5+5+5 RGB) - in which case nothing you do will make more colors! SteveBaker (talk) 15:05, 26 February 2016 (UTC)
- I would actually prefer if it could be done using a system command, versus having a C-specific command for setting console color. So, something more like:
for (int i=0; i<256; i++) { char cmd[40]; snprintf(cmd, sizeof cmd, "FG = (0, 0, %d);", i); system(cmd); printf("This should be in shade %d of blue\n", i); }
- Here, "FG" would be the console command to change the foreground text color. (Some type of conversion of the i value into a text string might also be needed.) Unfortunately, where there are such commands, changing the foreground or background color of the window may change the entire window, not just the text written from that point on. StuRat (talk) 06:27, 25 February 2016 (UTC)
- Calling system(3) spins up another process, runs a shell in it, which parses the string arguments and then spins up another process, which in this case does a printf. Printing a handful of characters to stdout yourself, or calling a function in a dll to do so, is maybe a million times faster than outsourcing it to an external process. -- Finlay McWalter··–·Talk 11:42, 25 February 2016 (UTC)
- I can just imagine it, the new compiler version comes out with nicely colored error and warning messages but for some odd reason much slower than the old compiler :) Dmcq (talk) 13:23, 25 February 2016 (UTC)
- Yes, system() is a terribly inefficient way to do this. You just need to print the SGR sequence as described in the ANSI escape code article cited by Finlay McWalter way back at the beginning of this thread. This changes the foreground color:
printf("\33[38;2;%d;%d;%dm", red, green, blue);
where red, green and blue are values between 0 and 255. Replace the "38" with "48" to change the background color. Mnudelman (talk) 17:25, 25 February 2016 (UTC)
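Putting that escape sequence into JIP's earlier loop gives something like the following sketch (assuming a terminal that honours the 24-bit SGR extension):

#include <stdio.h>

int main(void)
{
    for (int i = 0; i < 256; i++) {
        printf("\033[38;2;0;0;%dm", i);              /* 38;2;R;G;B sets the foreground */
        printf("This should be in shade %d of blue\n", i);
    }
    printf("\033[0m");                               /* reset attributes */
    return 0;
}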
So far I've found that what I want is somehow possible on Linux. I have got it sort-of working on XTerm, but not with Gnome-Terminal. Is there some sort of configuration I need to do in Gnome-Terminal, or do I have a too old version (3.10.2)? I still haven't seen how it is possible in Windows. JIP | Talk 20:26, 25 February 2016 (UTC)
- As I said before, check what 'echo $TERM' says. You can get it to work under Windows by installing Cygwin and running their "Terminal" tool - which is really an xterm...but the DOS shell windows only emulate a DOS terminal - so you're unlikely to see more than 16 colors. SteveBaker (talk) 15:05, 26 February 2016 (UTC)
- ^^^This is important I think. I got confused before with the terminology (are we talking about consoles, terminals, CLIs, shells etc.) But the terminal emulator is what ultimately serves up the characters, right? And I have had all sorts of problems matching up the various .config, .*rc files, etc. to get my colors and prompts the way I wanted them. Comparison_of_terminal_emulators may be helpful if OP decides to fiddle with other options. It does say there clearly that the default win32 console does not support even 256 colors. WP:OR Cygwin or similar is mandatory for turning Win* into a tolerable computing environment for those who are experienced with *nix :) SemanticMantis (talk) 15:24, 26 February 2016 (UTC)
- echo $TERM says xterm-256color. I suppose I am running too old a version of Linux. Once I finally upgrade to Fedora 22 or 23, I suppose my dream will finally come true. =) Still, it doesn't look like Windows will support even 256 colours, let alone true 24-bit colour, any time soon, if ever. And this is the OS used on over 99% of consumer computer systems throughout the world, and the only OS over 99% of the people in the world are even able to conceive of, let alone have heard of. Note that when I said that the Amiga only supported 16 ANSI colours, but let the actual RGB hues be chosen freely, this concerned the entire screen. The Amiga was never capable of true 24-bit colour throughout the screen. It was only ever capable of indexed colour, up to 256 indices. Each of these could be chosen freely from a palette of 16,777,216 hues, but this choice always affected the entire screen. JIP | Talk 21:33, 26 February 2016 (UTC)
- Actually, you could get more colors out of the Amiga. You could cause the video chip to generate a software interrupt at the start of each scanline - and you could write code to reload the color lookup table at the start of each line. That gave you a limit of 256 colors on each scanline. I think there were timing issues that meant that you didn't have time to replace the entire table before pixels started to be displayed. But with sufficient cleverness, you could have far more colors than you'd think! SteveBaker (talk) 06:40, 27 February 2016 (UTC)