Wikipedia:Reference desk/Archives/Computing/2008 March 14
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
March 14
Windows Live Skydrive + Windows Explorer
Dear Wikipedians:
Is there any software that allows me to map my Windows Live SkyDrive as a virtual drive in Windows Explorer, like what's shown in the following illustration?
Thanks.
L33th4x0r (talk) 00:27, 14 March 2008 (UTC)
Can I ask where you got this image? Thanks. Kushal 02:31, 14 March 2008 (UTC)
- I made it myself this time. Thanks. L33th4x0r (talk) 03:20, 14 March 2008 (UTC)
- Apparently it can't be done yet.--155.144.251.120 (talk) 03:44, 14 March 2008 (UTC)
- L33th4x0r, is it photoshopped then? Kushal 20:34, 14 March 2008 (UTC)
- Fair use image, improperly tagged, and shouldn't be outside Article space. 24.76.169.85 (talk) 06:30, 15 March 2008 (UTC)
- Like anyone cares :/ :D\=< (talk) 04:44, 16 March 2008 (UTC)
- Which article uses it anyway? If any article did use it, then we would not be having this discussion. Kushal 15:58, 16 March 2008 (UTC)
rip
can one rip clips from YouTube and save them to his PC? —Preceding unsigned comment added by 41.220.113.117 (talk) 02:13, 14 March 2008 (UTC)
Yes! However, YouTube compresses files in the Flash Video (FLV) format to save on storage space and download time. One can use Mozilla Firefox (aren't you excited about Firefox 3?) and add an add-on like DownloadHelper to download the video. Then, one can use a media player like VLC media player to play the video.
Cheers,
Kushal 02:30, 14 March 2008 (UTC)
- Yep, and if you don't want to download an add-on you can use a web-based tool to determine the actual URL of the FLV file. You still need to install VLC though :D\=< (talk) 03:01, 14 March 2008 (UTC)
- There are some tools out there that will let you transcode the FLV into other formats. Vixy.net can do this though I've sometimes had problems getting it to work. If you convert it to AVI or MOV then you don't need VLC (you can use Windows Media Player to play AVI files if you use Windows, or Quicktime Player to view MOV files). (Can you tell I don't like VLC much? I find it just a wee bit too buggy for anything other than emergency use. It has a habit of crashing on me or just not doing what I tell it to do.) --98.217.18.109 (talk) 03:14, 14 March 2008 (UTC)
- The video is compressed with H.263 or H.264 - transcoding would just result in loss of information. :D\=< (talk) 03:55, 14 March 2008 (UTC)
- Yes and no. Yes, you lose information, no, that's not "just" what would result. Part of the result is having it in a format you can do other things with. Which can be important too. Especially if VLC crashes when seeking FLVs (which it usually does for me), which makes it pretty useless in this scenario. --98.217.18.109 (talk) 12:31, 14 March 2008 (UTC)
- VLC fails at seeking anything, not just FLV. And I'd rather have all the image information than be able to seek, for archiving purposes.. which is really the only reason I'd download a youtube video. I can watch it online, the only reason I keep it is to archive, and for that purpose retaining video information is the most important thing :D\=< (talk) 16:37, 14 March 2008 (UTC)
- You can use Media Player Classic or GOM player, both of which can scrub through FLVs. —Wayward Talk 07:31, 15 March 2008 (UTC)
- Only if you have the proper directshow filters and a FLV splitter :D\=< (talk) 04:50, 16 March 2008 (UTC)
- I personally use SaveVideos to get the .flv files and Riva FLV, which comes with an encoder and player. crassic![talk] 03:34, 14 March 2008 (UTC)
Segfault hunting
How does one trap a segmentation fault? The OS is managing memory; how does the processor know to switch back to the kernel when it encounters a segfault? I can't imagine that the OS can be like checking each instruction before it's executed; if you tried to check what's in the register you're jmping to, you'd have to context switch every instruction to actually have registers to work with in the checking code, which would just be insane so it has to be built into the processor somehow -- but how does it even know? :D\=< (talk) 02:59, 14 March 2008 (UTC)
- See protected mode. The operating system makes programs run in protected mode. If they try to reference memory outside of their assigned space, the CPU itself detects that the address is outside of an allowed segment range and an exception occurs, which the OS handles. It is a hardware thing. You cannot have a segmentation fault on CPUs that don't have protected-type modes. Many microprocessors in embedded electronics, for example, have no such thing as real/protected mode and it's impossible to detect and stop a segmentation fault. —Preceding unsigned comment added by 155.144.251.120 (talk) 03:41, 14 March 2008 (UTC)
- It's not really protected mode per se, but the presence of hardware memory mapping - indeed one can get memory violations in kernel mode (see note at the end). The availability of a protected-memory userspace model is a consequence of there being memory management hardware. Froth: any memory access (read, write, execute) is actioned by the CPU via the hardware Memory management unit, which generally maintains a cache (the TLB) of segments (I hesitate to call them "pages", because depending on the architecture they're single pages or contiguous runs of pages) of memory that are mapped. On full-featured CPUs (not weird stuff or embedded) each entry in the TLB has permission bits (can read, can write, can exec). When actioning a memory access, the MMU looks the requested address and action up in the TLB - if there isn't a mapping, or if the requested action isn't permitted, then the MMU throws an exception. Most of the time you don't actually see this as a SIGSEGV in your program, because this same mechanism is exactly how the paged-memory system works - most of the time the OS intercepts the exception, uses that information to load/unload virtual memory pages between disk and RAM, twiddles the TLB accordingly, and then clears the exception - the new TLB entry allows your request to proceed and all is well. You only see a SIGSEGV when the OS can't figure out a way to meaningfully satisfy your request - when you're addressing memory for which the OS's own tables don't have a valid virtual page to swap in - so the OS chucks a SIGSEGV up to the usermode program (a minimal sketch of catching that signal in user space follows at the end of this thread).
- The reason kernel-space programs can still hit page faults is that the physical address space isn't entirely filled with mapped RAM, ROM, Flash etc. chips - only certain areas are, for which the MMU has a (generally fixed, sometimes hardwired) mapping to actual chip selects - so (depending on the architecture) if kernel-space code tries to access physical memory that has no valid CS mapping the MMU will barf at it. -- Finlay McWalter | Talk 13:23, 14 March 2008 (UTC)
- If you want the ultimate low-level knowledge, see "AMD64 Architecture Programmer's Manual Volume 2: System Programming" here. It also includes the 32-bit x86 things. --ÖhmMan (talk) 14:07, 14 March 2008 (UTC)
- Actually "the MMU maintains the TLB" is misleading - the TLB presents a bunch of memory-mapped registers inside the MMU (or somewhere in physical memory that's read by the MMU, depending on architecture) but their contents are largely maintained by the memory-management part of the OS. -- Finlay McWalter | Talk 14:35, 14 March 2008 (UTC)
- Some good things to read to understand this stuff without wading through CPU manuals are the memory-management bits of Understanding the Linux Kernel, and especially the excellent Unix Systems for Modern Architectures. --Sean 15:33, 14 March 2008 (UTC)
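A minimal C sketch of what "trapping" the fault looks like from user space on a POSIX system, once the kernel has turned the MMU exception into a signal (the handler name, the deliberately bad pointer value and the use of fprintf are illustrative choices, not anything from the thread above):

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Invoked by the kernel after the MMU raises a fault the OS cannot
       satisfy by paging, and SIGSEGV is delivered to the process. */
    static void segv_handler(int sig, siginfo_t *info, void *context)
    {
        (void)sig; (void)context;
        /* Strictly, only async-signal-safe functions belong in a handler;
           fprintf is tolerated for a demo.  si_addr is the faulting address. */
        fprintf(stderr, "Caught SIGSEGV at address %p\n", info->si_addr);
        _exit(EXIT_FAILURE);   /* returning would re-run the faulting instruction */
    }

    int main(void)
    {
        struct sigaction sa = {0};
        sa.sa_sigaction = segv_handler;   /* three-argument handler */
        sa.sa_flags = SA_SIGINFO;         /* ask for the siginfo_t details */
        sigemptyset(&sa.sa_mask);
        sigaction(SIGSEGV, &sa, NULL);

        int *bad = (int *)0xdeadbeef;     /* an address with no valid mapping */
        *bad = 42;                        /* MMU fault -> kernel -> SIGSEGV -> handler */
        return 0;
    }

Compiled with something like gcc segv.c -o segv, on a typical Linux/x86 box the bad write is reported by the handler rather than by the default "Segmentation fault" termination - that delivery is the OS "chucking a SIGSEGV up to the usermode program" described above.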
Distributed computing for Leopard
What's a distributed computing application that runs well on OS X Leopard? I've had very bad experience with Folding@Home. --68.23.161.173 (talk) 04:21, 14 March 2008 (UTC)
- Seti@Home? --hello, i'm a member | talk to me! 04:57, 14 March 2008 (UTC)
- Also Rosetta and WCG. --hello, i'm a member | talk to me! 04:58, 14 March 2008 (UTC)
- BOINC as well. (seti is lame) Mac Davis (talk) 04:44, 17 March 2008 (UTC)
Overclocking CPU multiplier
Hi. Is it in general possible to increase a CPU's clock multiplier (not FSB) above its factory setting? With my current ASUS P5B-V board and Intel E4500 CPU, I seem to only be able to decrease it. Will it be possible, say, with an ASUS Maximus Formula and Intel E8400? Thanks. -- Meni Rosenfeld (talk) 10:32, 14 March 2008 (UTC)
- I know some CPUs' multipliers are unlocked in the BIOS, but some are locked. Mine (AMD FX-60) is unlocked, but I haven't ever messed with it, so I don't really know the best procedures for overclocking. Useight (talk) 15:26, 14 March 2008 (UTC)
- Erm, I'm not very expert on these things and have never tried myself - but - I understand that the clock multiplier is a hardware thing - so if you've got, say, a 10x or 11x multiplier it's impossible to go higher since the circuitry isn't there? That may all be wrong..
have you tried downloading a different BIOS - e.g. have you got this one: ASUS P5B-V BIOS 0804? 87.102.83.204 (talk) 16:20, 14 March 2008 (UTC) That probably won't help. 87.102.83.204 (talk) 16:28, 14 March 2008 (UTC)
- Nowadays the multipliers are all locked except for the very few (e.g. AMD Black Edition CPUs, and some of the C2Es) specifically targeted at enthusiasts, with a high premium added on top. Back in the olden (read: Athlon) days the multiplier was controlled by cutting copper tracks on the surface of the package, and you could change it by using a craft knife and a 6B pencil. Nowadays, though, hacks like these don't really exist any more. --antilivedT | C | G 21:42, 14 March 2008 (UTC)
- In general it would depend on the type of frequency multiplier used - but I'm only familiar with one type. That is (multiple) frequency doublers - so an input clock is converted to a 2^n times higher frequency - and then the high frequency is 'chopped' into chunks to give the new clock pulses. This type inherently has timing jitter when the multiplier is not set to a power of two, and the jitter increases at higher multiples in general, up to a point where the jitter is simply too much, I would guess. That's a very primitive type.
- An alternative method is to generate a higher clock with a separate device and use feedback to prevent 'drift' of the signal over time.
- I'd guess you would need to know the type of multiplier being used to know if it could be possible to set a higher value. I doubt this info is freely available. There's probably little reason for the final product to have the option to change the multiplier, since it will already have been optimised for the 'best' combination of error margin/speed etc.
- Maybe you could contact Intel and ask them. I've no idea what they're like in terms of providing unnecessary technical details - but you might get lucky.. 87.102.21.171 (talk) 10:49, 15 March 2008 (UTC)
- "it will already have been optimised" - well, the same could be said about any of the other n parameters one can tweak when overclocking. Changing the multiplier can be useful when trying to reach a good arrangement of FSB, CPU frequency and RAM frequency.
- My own main reason for being interested is that I want the numbers to be round (stupid, I know) and would therefore like to increase the E8400's multiplier to 10 (stop looking at me like that. We still communicate numbers in decimal, don't we?). The rumored E8600, with its native 10 multiplier, seems to be worth neither the price nor the wait. But I guess it isn't that important, I'll make do with the available possibilities. Thanks for the help. -- Meni Rosenfeld (talk) 19:31, 15 March 2008 (UTC)
- I did wonder why you'd want to increase the multiplier; then again I've often wondered why x86/Intel stuff has such apparently high values of multiplier.. can't seem to get past the wall of marketing material (i.e. pure ignorance on my part). I'm not going to start waffling. must not mention risc. must not mention L3. must not suggest buying sram. must not suggest buying a second-hand cray. ah. yes. Anyway there are whole websites and forums devoted to this subject - where the experts in the field hang out, no doubt. Go there quickly. I'm willing to bet (Microsoft points) that it is possible and somebody there will be able to help you. 87.102.2.103 (talk) 20:48, 15 March 2008 (UTC)
- edit conflict
- OK, 99%+ of sources say the multiplier is locked, or can only drop in number (if overheating occurs) as antilived says - looks like a trip to those forums would not bring any results. 87.102.2.103 (talk) 22:07, 15 March 2008 (UTC)
- Well, I'm not planning on designing my own microarchitecture any time soon, so I'm stuck with what's commonly available and trying to make the most out of it.
- There's actually another reason I am interested in the possibility, but this one is based purely on speculation - I think my bottleneck might be the motherboard's FSB rather than the CPU's frequency, in which case a higher multiplier will give me more headroom. Still, not important enough to do what is probably only possible, if at all, by physically messing with the unit. -- Meni Rosenfeld (talk) 21:44, 15 March 2008 (UTC)
- I guess that a good reason to just increase the multiplier while leaving the RAM mostly unchanged would be if the programs you were running spent most of their time in the L2 cache - most likely small computationally intensive programs not using large data sets, possibly stuff like Folding@home. But if your dataset/program is big (bigger than L1/L2) then the DRAM speed would start to limit. In the case of more than one core with the smaller type of compute problems it might be worth trying to dedicate a whole core to it - with nothing else allowed to run on it - in an ideal case you could get the entire problem running on the processor/associated data caches once the cache was filled (possible to clear and then fill the L2 with the program+data) (that should give real speed increases if you can do it). I've no idea how to control which processor(s) are allowed to run specific processes only. Clearly that's not going to work with something that uses big files like Photoshop.
- If you have got small data sets that need a lot of processing there are alternatives out there, some are even affordable. If it's more general stuff to speed up then the answer with Intel seems to be buy cooling equipment and turn the voltage up. 87.102.2.103 (talk) 22:07, 15 March 2008 (UTC)
- Well, I obviously use the CPU for many different things, so I have no doubt I will see some returns, even if diminishing, by increasing CPU clock.
- It has now occurred to me that my most CPU-intensive application is working in Mathematica on large (over 1GB) matrices, and I never really measured the influence RAM speed has on it. Thanks for reminding me I should do it. -- Meni Rosenfeld (talk) 22:38, 15 March 2008 (UTC)
- I have to ask- what matrix takes a GB of memory? :D\=< (talk) 04:29, 16 March 2008 (UTC)
- A dense 10000 * 10000 matrix of double-precision floating-point numbers. Though I'll admit that is on the extreme end; I rarely work with more than 2000 * 2000.
- In case anyone is interested - I tried multiplication of two dense 5000*5000 matrices at 2.75 GHz. I could find no difference in performance between 833 MHz RAM and 667. My personal OR conclusion is that RAM speed is overrated. -- Meni Rosenfeld (talk) 08:04, 16 March 2008 (UTC)
- That would be strange if the CPU clock speed were constant in both - was it? But if I've understood you correctly, not only the FSB but the CPU clock should be faster in the 833MHz case. Very strange. 87.102.75.250 (talk) 10:23, 16 March 2008 (UTC)
- The CPU speed was constant. I changed the RAM:FSB ratio, not the FSB. -- Meni Rosenfeld (talk) 10:33, 16 March 2008 (UTC)
- [edit conflict] More specifically - CPU-Z readings in both cases were:
- Name: Intel Core 2 Duo E4500
- Core speed: 2750 MHz
- Multiplier: x11
- Bus Speed: 250 MHz
- Rated FSB: 1000 MHz
- For DRAM frequency, it was 416.7 MHz or 333.4 MHz (i.e. the 833 and 667 settings, since DDR2 is double-pumped). The timings were the same.
- The Mathematica command executed:
- m = 10; n = 5000; {Mean[a = Table[Timing[A = Table[Random[], {n}, {n}]; B = Table[Random[], {n}, {n}]; A.B;][[1]], {m}]], Variance[a]}
- This estimates the time taken for the operation (I included the generation of the matrices, but it's negligible) as well as its variance, which helps me determine statistical significance of differences. The accuracy is about 1% and no difference within it was found.
- I guess this means, again, that standard RAM speeds are already too fast to be a bottleneck, at least for this kind of calculation. -- Meni Rosenfeld (talk) 10:47, 16 March 2008 (UTC)
Isn't that the multiplier - I thought you had one that was fixed at a certain value? 87.102.75.250 (talk) 10:38, 16 March 2008 (UTC) Sorry, not quite right. - If the FSB was constant and you bumped the RAM speed somehow (how does this work?) then the FSB would be the limiting factor and so no improvement would be expected? 87.102.75.250 (talk) 10:42, 16 March 2008 (UTC)
- I don't know the technical details, but the mainboard sets the RAM clock at a certain ratio of the FSB clock, and you can choose which. Better boards have more choices for this. In my case, the default is FSB 200MHz and RAM 800MHz. I have raised FSB to 250, and since my memory is not stable at 1000MHz, I have reduced it to 833MHz for general use, and again to 667MHz for the sake of experiment. -- Meni Rosenfeld (talk) 11:18, 16 March 2008 (UTC)
- I'm pretty sure it is the CPU that is the limiting factor and not the FSB (especially considering that I have reduced the RAM ratio below the default). I can make more tests with changing CPU multiplier if you want... -- Meni Rosenfeld (talk) 10:50, 16 March 2008 (UTC)
- ((Moved left - even I can't read my own writing))
- First, to correct myself - in your example, having thought about it, a small effect of RAM speed is not that surprising - though I am still surprised that it's less than 1%.
- I'm assuming that the reduced RAM speed means that 1 out of 4 cycles is ignored (leaving the FSB waiting for the next cycle), giving an average cycle length of (1+1+1+2)/4 = 5/4 (again assuming the memory is not being 'hammered') - this would actually correspond to a maximum reduction of 20% (rough estimate). However I don't think that's the case here..
- You're doing matrix multiplies - so I assume the processor will get an entire row and entire column, multiply each element then add (you said doubles, so an entire row would be about 40 KB) - this easily fits into cache, so one transfer (of a row from memory) should be good for multiple row-by-column multiplies - greatly reducing the bandwidth. Plus those 64-bit multiplies take time and so further reduce the percentage of time the main memory is being accessed. So it does look like the actual increase in processing time from a lower speed memory would be small. To work it out theoretically I'd need, amongst other things, the number of cycles it takes to calculate a single element of the resultant array compared to the time it takes to transfer a row from memory (i.e. a row not already present in cache), plus the size of the multiply routine (though I'd guess this would be simple enough to sit in L1 cache).
- Effectively your matrix multiply tests CPU speed and L2 cache size as the major contributing factors to speed of operation.
- I suppose if you wanted an example of a prog. that tests FSB/RAM speed it would be something like a simple block move of 1GB, e.g. move 1GB upwards 1 byte in memory, or swap 1GB putting 0 into 1000000, 100 into 10009000 etc. (a rough C sketch of such a block-move test follows at the end of this thread). If that didn't slow down with RAM speed, erm, I'd be even more surprised. 87.102.75.250 (talk) 11:16, 16 March 2008 (UTC)
- According to this, in Mathematica "Machine-precision matrices are typically converted to a special internal representation for processing". Also, I don't think it multiplies matrices naively - while the reference doesn't address this, I have good reasons to believe it uses the Strassen algorithm. I have no idea how this influences the memory usage pattern. What I do know is that Mathematica reports using 800MB in this calculation, and if this doesn't stress RAM, I wonder what real-world application (as opposed to your suggested memory shift) will. If I fail to see any difference with some other calculations I need (pseudo-inverse, linear solve and shortest distance of large sparse matrices), I'll write RAM speed off as unimportant. -- Meni Rosenfeld (talk) 11:49, 16 March 2008 (UTC)
- Real-world memory-limited programs - things that take data seemingly randomly from a large database - maybe something like attempts at human-like AI, traversing tree-like databases, in general anything that doesn't utilise a data item many times. Though again, data access is likely to be only a small part of the computation - and in the first two examples branching in code will most likely take up more time/latency than memory access. I can't think of a realistic example. 87.102.75.250 (talk) 13:00, 16 March 2008 (UTC)
- By the way, thanks for bringing this up - you've made me think about how I'd handle data sets (doing n x m multiplication or similar) that don't easily fit into 'workspace' and still do it quickly, i.e. optimise the program to take account of the actual amount of 'cache'. 87.102.75.250 (talk) 11:39, 16 March 2008 (UTC)
- Depending on the size of the matrices, you may gain more by using a better algorithm than by optimizing its implementation. -- Meni Rosenfeld (talk) 11:52, 16 March 2008 (UTC)
- A good algorithm tends to optimise the problem's implementation as well - at first sight (I'm not familiar with the method) it looks like the method you mentioned does both, not only reducing the amount of 'maths' but reducing the size of the matrices as well (good for the cache) - win-win. 87.102.75.250 (talk) 13:00, 16 March 2008 (UTC)
- Depends on what you mean by algorithm. To my knowledge it usually refers to those parts of the computation that are independent of any specific hardware or software implementation. The idea that memory access takes time, and that a cache is used to facilitate it, is irrelevant to algorithm specification. The optimization comes after algorithm selection, based on the specific computational platform.
- This algorithm does reduce the calculation to the multiplication of smaller submatrices. Whether this helps as far as memory access is concerned is beyond my knowledge. -- Meni Rosenfeld (talk) 13:17, 16 March 2008 (UTC)
- er, yes. I wandered out to the shops.. and was too far gone when I realised what I'd written to do anything about it. Spent the journey home trying to think of an example that justified what I'd said, but couldn't. In computer science the boundaries can get a bit blurred, and some make the distinction; the unambiguous term is of course 'implementation'. 87.102.75.250 (talk) 16:03, 16 March 2008 (UTC)
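To make 87.102's suggested block-move test concrete, here is a rough C sketch (the buffer size, pass count and use of memcpy and clock are arbitrary illustrative choices - the only requirement is that the working set dwarfs the L2 cache so the copy streams from RAM instead of hitting cache like the matrix multiply above):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    /* Buffer size just has to dwarf the L2 cache so the copy streams from RAM;
       256 MiB per buffer is an arbitrary illustrative choice. */
    #define BUF_BYTES (256UL << 20)
    #define PASSES    10

    int main(void)
    {
        char *src = malloc(BUF_BYTES);
        char *dst = malloc(BUF_BYTES);
        if (!src || !dst) {
            fprintf(stderr, "not enough memory\n");
            return 1;
        }
        memset(src, 1, BUF_BYTES);   /* touch every page so it is really mapped */
        memset(dst, 0, BUF_BYTES);

        clock_t start = clock();
        for (int i = 0; i < PASSES; i++)
            memcpy(dst, src, BUF_BYTES);        /* the "block move" */
        double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

        double mib_moved = (double)PASSES * BUF_BYTES / (1 << 20);
        printf("moved %.0f MiB in %.2f s (~%.0f MiB/s)\n",
               mib_moved, secs, mib_moved / secs);

        free(src);
        free(dst);
        return 0;
    }

Unlike the cache-friendly matrix multiply, the MiB/s figure this prints should move visibly when the RAM:FSB ratio is changed.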
bash here-document
In bash, how do I make cat watch for a here document which is actually me pressing enter 3 times?
I want to do something like
cat << $(echo -e \\n\\n\\n)
But the above doesn't work. Any ideas?
Thanks --194.223.156.1 (talk) 12:05, 14 March 2008 (UTC)
cat is used to concatenate text (usually) files together. It doesn't "watch" for things. Also, what is a "here" document? The keyboard is considered a stream, not a file, and is normally called "standard in" or "stdin" for short. It appears you are trying to hit enter three times and have it appear on the screen. What do you want to happen after that? Do you want it to stop and go back to the command prompt? -- kainaw™ 12:29, 14 March 2008 (UTC)
- A "here document" is a bash trick whereby one can create a tmp file inline, without creating one explicitly - that's what the
<<
thing does. So one can saysort << FOO
and then (at the>
prompts that appear) typeobama
(return) thenmccain
(return) thenclinton
(return) thenFOO
(return) and it'll print those three names sorted. The OP wants this sequence to stop on an explicit three empty lines (rather than an explicit delimiter likeFOO
). I don't know how to do that - you can get it to stop on a single empty line withsort << ""
-- Finlay McWalter | Talk 12:38, 14 March 2008 (UTC)
- A "here document" is a bash trick whereby one can create a tmp file inline, without creating one explicitly - that's what the
- Cool. I've never used cat for that. When I want code before/after a delimiter (even when sending it from the console), I tend to use grep. -- kainaw™ 13:05, 14 March 2008 (UTC)
- It's not particularly for cat, and I think the OP's use of cat may just have been for illustrative purposes. A here-document is particularly useful for single-file installers, config scripts, and self-extracting archives, where you can have multiple final files (or things that, after processing, will become individual files) all nicely stored and pleasantly formatted inside a single master bash script. -- Finlay McWalter | Talk 13:40, 14 March 2008 (UTC)
- This isn't bad: here document. --Sean 15:36, 14 March 2008 (UTC)
- Surprises surprises. You learn something heartachingly awesome about arcane shell syntax every day. :D\=< (talk) 18:04, 19 March 2008 (UTC)
- On looking at the relevant section of the bash manual it doesn't look like what you ask for is possible. It looks like the code for here documents works something like:
    forever:
        x = read a line           (probably using fgets or the like)
        if x == theDelimiter: quit
        do stuff on x
- so it's line by line - three carriage returns in the here document cause three separate trips through the loop body (because fgets uses \n to split lines), so you'll never get a match (see the C sketch of this line-by-line loop at the end of this section). I can't think of a clever way to overcome this. <snarky comment>if you'd done this in perl or python you'd be sipping piña coladas with Natalie Portman by now :)</snarky comment> -- Finlay McWalter | Talk 12:54, 14 March 2008 (UTC)
Yep, thinking about it that seems likely, unfortunately. To answer Kainaw: I was mostly using cat for illustrative purposes but in fact my script does use it (although of course it's then piped off somewhere else). I don't want users (well, me) to have to do Ctrl-D in this particular context. Thanks to all anyway. —Preceding unsigned comment added by 194.223.156.1 (talk) 13:51, 14 March 2008 (UTC)
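For anyone curious, a minimal C sketch of the kind of line-by-line loop Finlay describes (the delimiter string, buffer size and printf are illustrative choices, not bash's actual source). It shows why a terminator of three newlines can never match: fgets hands each press of enter back as its own empty line.

    #include <stdio.h>
    #include <string.h>

    /* Read stdin line by line until a line equals the delimiter - roughly
       the shape of the here-document loop sketched above.  Because fgets
       splits on every '\n', a "delimiter" of three newlines can never be
       matched: each enter arrives as a separate empty line. */
    int main(void)
    {
        const char *delimiter = "EOF";          /* a single-line terminator */
        char line[4096];

        while (fgets(line, sizeof line, stdin) != NULL) {
            line[strcspn(line, "\n")] = '\0';   /* strip the trailing newline */
            if (strcmp(line, delimiter) == 0)
                break;                          /* terminator found - stop reading */
            printf("got line: \"%s\"\n", line); /* "do stuff on x" */
        }
        return 0;
    }

Feeding it three returns just prints three empty "got line" messages; only a line consisting exactly of the delimiter (or end-of-file via Ctrl-D) stops it, which matches the behaviour described above.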
MSN reply refusal
I have just had two replies to MSN clients refused by MSN as "suspected namespace mining". What is "namespace mining" and how can it have anything to do with clicking the "reply" button on a message? TIA 86.217.135.241 (talk) 13:05, 14 March 2008 (UTC)
- Not sure exactly what it's supposed to mean, but "namespace mining" sounds like some kind of effort to collect valid identifiers in a namespace, such as valid (i.e. actually used) usernames among all identifiers allowable as usernames. --71.162.242.230 (talk) 14:11, 14 March 2008 (UTC)
Serenity add-on
In Orbiter, with the Firefly "Jumbo" Transport 2008 Edition add-on, how do I get Serenity's main propulsion ("firefly drive") to work? I can only get the jets going, far too slow for interplanetary travel, unless you increase the time skip (which creates all sorts of problems of its own). Think outside the box 13:14, 8 March 2008 (UTC)
- You need a new dilithium crystal. Oh, and change the polarity, while you're at it. --Oskar 17:02, 16 March 2008 (UTC)
- Yeah, and I'll re-route some auxiliary power, too. Think outside the box 09:45, 17 March 2008 (UTC)
- Maybe you need to let some pirates aboard and hope that they'll negotiate for the missing part and not hijack your ship. :D\=< (talk) 18:05, 19 March 2008 (UTC)
- Ha ha ha! I loved the music on that one. Think outside the box 14:21, 27 March 2008 (UTC)
Mozy
Does Mozy still offer free accounts? --grawity talk / PGP 19:26, 14 March 2008 (UTC)
- Did you read the article you linked?.. :D\=< (talk) 19:28, 14 March 2008 (UTC)
- uh...yeah, nevermind. thanks. *click* --grawity talk / PGP 19:32, 14 March 2008 (UTC)
- Ha ha, that was awesome. Useight (talk) 23:33, 14 March 2008 (UTC)
- Was that sarcasm? --grawity talk / PGP 14:19, 15 March 2008 (UTC)
- >:3 get. in. the. car. it's sarcasm :D\=< (talk) 03:53, 16 March 2008 (UTC)