Wikipedia:Reference desk/Archives/Computing/2010 August 24
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
August 24
Trackmania
I want to put some of the buildings from the Bay environment into the Island environment for more variety, because the Island environment has only one type of building and I find that boring. How can you put entire pieces from one environment into another, and would that cause any errors in the game programming? 64.75.158.194 (talk) 11:43, 24 August 2010 (UTC)
Spindle servo failure
Short and sweet, hello. I got the message 'Spindle servo failure' while trying to burn a DL DVD+R on an LG-H55N DVD recorder, which is supposed to support this mode. What could be the cause? It burns single-layer DVD-Rs without trouble. System is an expanded Fujitsu/Siemens Scenic E, OS is Fedora 13. I've searched online forums but didn't seem to find a clear answer as to whether I should get a better PSU, clean the laser or throw the drive out the window. Thank you all. --Ouro (blah blah) 14:06, 24 August 2010 (UTC)
- The "spindle servo failure" tends to happen a lot with cheap disks. Have you tried more expensive disks? -- kainaw™ 14:13, 24 August 2010 (UTC)
- I use exclusively TDKs. Are these expensive enough? --Ouro (blah blah) 14:17, 24 August 2010 (UTC)
Flock "Search" bar option
I have been using the Flock browser (version 2.6.1) for two weeks. So far I am quite satisfied with this browser. Previously, I used Mozilla Firefox 3.6.8. In Firefox, if I search for something in the "search" bar, it gives an option to open my desired info in a new tab: there's a "magnifying glass icon" on the search bar which I can click, and it takes me to a new tab. But Flock lacks this option. Is there any possible way to make Flock's "search" bar behave like Mozilla's? --180.234.24.148 (talk) 15:15, 24 August 2010 (UTC)
Matlab on a Mac
I am currently trying to print something from the computer program Matlab on my Mac but cannot for the life of me work out how to set up my printer so that Matlab recognises it (my printer is set up perfectly on my computer and I've never had a problem with it until now). I've tried using the 'Help' option but it all seems aimed at someone who's more computer literate than I am. Can anyone give me a set of instructions for dummies? Thanks 92.0.157.58 (talk) 16:43, 24 August 2010 (UTC)
Page view limit
Is there a limit to how many pages someone can view on a site? Like, if I downloaded every page, would Wikipedia admins care? —Preceding unsigned comment added by 125.172.222.4 (talk) 18:17, 24 August 2010 (UTC)
- Unless you download a lot of pages in a very short span of time, it's unlikely you'd even be noticed, especially if you spread the load over the many different Wikipedia webservers. Search engines do the same thing, and no one cares (although search index crawlers are not likely to download every Wikipedia page every time they visit). Unilynx (talk) 18:42, 24 August 2010 (UTC)
- The Google index seems to register Wikipedia edits in a matter of minutes, which makes me believe that it is watching Special:RecentChanges instead of crawling Wikipedia in the usual way.—Emil J. 19:00, 24 August 2010 (UTC)
- Yes, Wikipedia admins would care, and quite possibly block you. (Fixed title for sanity's sake.) Marnanel (talk) 19:14, 24 August 2010 (UTC)
- Wikipedia admins don't have access to see how much bandwidth is being consumed by an IP...you probably meant META:System_administrators at the WMF. There is a limit for page views...see API. Other than that, aside from a DDoS#Distributed_attack, you're probably not going to be blocked. Smallman12q (talk) 13:38, 25 August 2010 (UTC)
- It'd be best to respect the API limits, which are generous. But if you're asking the question here, I doubt you will intentionally overdo it. Shadowjams (talk) 08:27, 26 August 2010 (UTC)
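A rough, unofficial sketch of what polite bulk access through the API mentioned above might look like. The endpoint and the action=query parameters are standard MediaWiki ones; the page titles, the User-Agent string, and the one-second delay are only illustrative assumptions, not prescribed values.

```python
# Minimal sketch: fetch page wikitext through the MediaWiki API instead of
# scraping article pages, and space the requests out.
import time
import requests  # third-party library: pip install requests

API = "https://en.wikipedia.org/w/api.php"
# Identifying yourself in the User-Agent is the polite convention; this value is made up.
HEADERS = {"User-Agent": "example-archive-reader/0.1 (contact: someone@example.org)"}

def fetch_wikitext(title: str) -> str:
    """Fetch the current wikitext of one page via action=query / prop=revisions."""
    params = {
        "action": "query",
        "prop": "revisions",
        "rvprop": "content",
        "titles": title,
        "format": "json",
    }
    resp = requests.get(API, params=params, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    page = next(iter(resp.json()["query"]["pages"].values()))
    return page["revisions"][0]["*"]  # legacy JSON format keeps the text under "*"

for title in ["Computer", "Internet"]:  # example titles only
    print(title, len(fetch_wikitext(title)), "characters of wikitext")
    time.sleep(1)  # be gentle: pause between requests rather than hammering the servers
```

If the goal really is "every page", the database dumps are generally the recommended route rather than crawling the site page by page.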
mp3
Someone recommended that I use something called "HD mp3" which is "lossless". Is there really such a thing as "HD mp3"? They also said that for each year an MP3 sits on your hard drive, it will lose roughly 12kbps. I'm assuming that part isn't true, or can mp3s really degrade over time? —Preceding unsigned comment added by Prize Winning Tomato (talk • contribs) 18:37, 24 August 2010 (UTC)
- Digital data does not decay. It's either there, or it's completely lost, under usual circumstances (a hard disk with standard error correction). If you're taking averages, you might get to the mentioned 12kbps/year degradation. Suppose all your MP3s are at 256kbps, and a hard disk has a 5% failure rate per year. Then, on average, you would lose about 12kbps of data per year. But it's far from a degradation over time - you would just be losing 5% of all your mp3s each year, while all the remaining mp3s are still at 100% of their original quality. Proper RAID setups would almost completely eliminate the chance loss of data, and would almost certainly allow you to retain all your mp3s, unchanged (and thus without decay) over decades. Unilynx (talk) 18:52, 24 August 2010 (UTC)
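To make the averaging in the reply above concrete, here is a minimal sketch using the same assumed numbers (256kbps files, a 5% chance per year of losing a file outright); the figures are the discussion's assumptions, not measurements.

```python
# Each MP3 either survives intact or is lost entirely; the "degradation" only
# shows up as an average over the whole collection.
bitrate_kbps = 256          # assumed encoding bitrate of every file
annual_failure_rate = 0.05  # assumed chance a given file is lost in a year

expected_loss_per_file = bitrate_kbps * annual_failure_rate
print(f"Expected average 'loss' per file per year: {expected_loss_per_file:.1f} kbps")
# -> 12.8 kbps, close to the quoted ~12 kbps/year, even though every surviving
#    file is still a perfect 256 kbps copy.
```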
- Or they might have meant that the older an mp3 is, the more likely it is to have been encoded at a lower bit-rate originally. Back in dial-up days particularly, mp3s were often encoded at what now seems like horribly low bit-rates. APL (talk) 19:14, 24 August 2010 (UTC)
- Proper RAID setups do not almost completely eliminate the chance of data loss. RAID is not a substitute for backups, and there are many scenarios where you could lose data with a RAID setup. This is semi off-topic here so I won't discuss it further, but it has been discussed many times before on the RD and in other places. Also, your example is potentially confusing. If you store your MP3s on a single disk with no backups, the most likely scenario is probably that you will lose all your MP3s or you will lose nothing. While hard disks do sometimes develop bad sectors, and there are plenty of other ways you could lose only some of your data, hard disks often just die completely (well, a professional recovery studio may be able to recover data for a very high price, and there are various tricks you can use to try and get the data off). So the averaging really only works out if you're talking about a lot of people, or you have so many MP3s that you're storing them on a lot of hard disks. Nil Einne (talk) 09:15, 25 August 2010 (UTC)
- As for a lossless MP3, it seems to be referring to mp3HD, which is a format promoted by the company Thomson. It claims to be backwards compatible with regular mp3. I don't know. In theory it's not hard to have lossless audio, if you don't mind massive files. (Real audiophiles seem to prefer FLAC at the moment.) I find the file sizes pretty prohibitive, though. With FLAC, an album ranges from 200MB to 500MB in size. That's a bit much by my standards; even with a big honking mp3 player (or hard drive), you're talking about it filling up pretty quickly. Personally I can't really hear any significant difference between 256kbps and lossless. (I'm not entirely convinced audiophiles actually can either.) --Mr.98 (talk) 00:25, 25 August 2010 (UTC)
- While I agree that very often people can't tell the difference between lossless and lossy audio (and there are a number of ABX tests which show this), I don't know if I'd agree you'd fill up a HD pretty quickly with lossless 48k or 44.1k 16-bit 2-channel audio nowadays. Taking your 500MB figure, you can easily see you can fit 1000 albums in 500GB. That's a lot of music in my book, and not likely to be cheap either. Yet 2TB hard drives are fairly cheap nowadays (let's not worry about whether we're talking about binary or decimal based units here). In fact, if we say it's US$2 per album, which seems a fairly low price to me, you're talking US$2000 for all that music, which is way, way more than the price of even a 2TB HD. Nil Einne (talk) 10:01, 25 August 2010 (UTC)
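A quick back-of-the-envelope check of those figures; every input below is an assumption taken from the discussion above, not a measurement.

```python
# Storage and cost estimate for a lossless (FLAC) collection.
album_size_gb = 0.5     # ~500 MB per album, the upper end of the quoted range
disk_size_gb = 500      # a 500 GB drive
price_per_album = 2.0   # assumed low-end cost per album, in USD

albums_per_disk = disk_size_gb / album_size_gb
music_cost = albums_per_disk * price_per_album
print(f"{albums_per_disk:.0f} albums fit on the disk, costing about ${music_cost:.0f}")
# -> 1000 albums, roughly $2000 of music: far more than the price of a 2 TB drive.
```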
Are these unneeded adware?
I used RegCleaner, and it told me that the following have been recently installed into the registry:
Author, Program:
(Unknown) SMPlayer
Antanda Toolbar
ASProtect SpecData
Ej-technologies Install4j
Ej-technologies Exe4j
MozillaPlugins @videolan.org/vlc,version=1.1.3
Piriform Recuva
Softonic UniversalDownloader
I know what SMPlayer and Recuva are, but what about the rest? I've recently updated SMPlayer and VLC, full versions of each. Are the other things something to do with them? The word "Toolbar" makes me suspicious, and ASProtect at least seems suspect as well. Thanks 92.15.3.135 (talk) 19:34, 24 August 2010 (UTC)
- As is software stating explicitly in its name what it supposedly does. Phrases like Universal downloader, Mega protector or Super duper speeder-upper usually point to crap, for me at least. You might want to do away with the ones you don't know; it probably won't do any harm. --Ouro (blah blah) 05:37, 25 August 2010 (UTC)
Question about URL formatting
What is the difference between a URL such as "x.y.com" vs. "y.com/x"? Am I correct in assuming that x.y.com is its own server, whereas y.com/x is just a page on y.com's server? Everard Proudfoot (talk) 23:03, 24 August 2010 (UTC)
- Roughly. The first is a domain address, the second a specific page on a domain. However a single server may host many domains. The domain webserver will serve a default page in response to the first URL, and the named page to the second URL. Domain name might be a place to visit. --Tagishsimon (talk) 23:27, 24 August 2010 (UTC)
- Expanding on that... You were correct in the past. In the present, y.com is a domain name. Everything else is adaptable to an administrator's needs. For example, I own a server that has multiple domains on it: everybusywoman.com, marykayhasaposse.com, theresearchdynamo.com, etc... All of those domains point to the same server. If you go to everybusywoman.com/vhosts/marykayhasaposse, you get the same site as marykayhasaposse.com. Further, charleston.everybusywoman.com goes to everybusywoman.com/charleston. They are all just shortcuts on the same server to get to the webpage you want. -- kainaw™ 23:51, 24 August 2010 (UTC)
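As an illustration of that "many names, one machine" idea, a name-based virtual-host setup essentially boils down to a lookup from the requested hostname to a directory on the same server. The sketch below reuses the hostnames mentioned above, but the directory paths and the fallback choice are invented for the example.

```python
# Toy name-based virtual-host dispatch: map the HTTP Host header to a
# document root on one machine (paths below are made up for illustration).
VHOSTS = {
    "everybusywoman.com": "/var/www/everybusywoman",
    "marykayhasaposse.com": "/var/www/everybusywoman/vhosts/marykayhasaposse",
    "charleston.everybusywoman.com": "/var/www/everybusywoman/charleston",
}

def document_root(host_header: str) -> str:
    """Pick the directory to serve based on the requested hostname."""
    return VHOSTS.get(host_header, VHOSTS["everybusywoman.com"])  # assumed default

print(document_root("charleston.everybusywoman.com"))
# -> /var/www/everybusywoman/charleston : different names, same physical server.
```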
- Just to beat a dead horse: the thing to know is, the domain name is resolved hierarchically, and the URL suffix is resolved by the final server. When a transaction is initiated, the first step is address resolution - translating a DNS name into an IP address. The URL gets parsed and passed on until a DNS server can be found who "knows" what IP maps to that specific name. Usually, you start by checking a top-level name-server (or your ISP's cached data from one). At every "." in the URL, if the current Name Server does not know the final IP address for the exact, complete DNS-name, it has the option to "pass the buck" to a new domain name controller who might be "closer" to the ultimate host (using the suffix of the DNS name to determine "closeness"). Note that this does not guarantee a one-to-one correspondence with true network distance in terms of routing hops! In the case of a very deep DNS name ("u.v.w.x.y.z.com"), it is probable that one or more DNS servers are actually owned and operated by the web host - who can control DNS resolution to do whatever he/she wants, including mapping multiple DNS names to the same physical machine (as Kainaw described above). (The same physical machine might have multiple IP addresses, or it might just host multiple software servers that can be uniquely identified by DNS lookup - this is a feature supported by Apache HTTP server, for example.) After all the "x.y.z" gets resolved, the server has been uniquely identified, and a transport stream using the HTTP protocol is established. The web server now must interpret URL suffixes (everything following the very first "/"). These usually directly map onto file-systems on the host, but they can be interpreted any way the HTTP server wants. For example, a virtual file system can map something that looks like a subdirectory to actually be a command to run a particular program with the directory-name as an argument, and dump the output as the web-page to deliver. For more details, you can read about URLs and in particular the anatomy of a complete URI. These specifications are standardized in RFC3986. Nimur (talk) 00:12, 25 August 2010 (UTC)
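A small sketch of the two halves described above, using only the Python standard library: the hostname part of a URL is what DNS resolution works on, and everything after the first "/" is left to the final web server to interpret. The example URL is made up and is not expected to resolve.

```python
# Split a URL into the DNS-resolved part and the server-interpreted part.
import socket
from urllib.parse import urlsplit

url = "http://x.y.example.com/some/page?arg=1"  # made-up example URL
parts = urlsplit(url)
print("host (resolved hierarchically via DNS):", parts.hostname)
print("path and query (interpreted by the final server):", parts.path, parts.query)

# DNS resolution turns the hostname into one or more IP addresses; several names
# may map to the same address (virtual hosting), and one name may map to several.
try:
    addresses = {info[4][0] for info in socket.getaddrinfo(parts.hostname, 80)}
    print("resolved addresses:", addresses)
except socket.gaierror:
    print("name does not resolve (expected for this made-up example)")
```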
- "x.y.com" is a subdomain, whereas "y.com/x" is a directory on y.com for x. Usually, these will have the same end result.Smallman12q (talk) 13:30, 25 August 2010 (UTC)
- Thanks, everybody. Everard Proudfoot (talk) 06:56, 26 August 2010 (UTC)