Wikipedia:Reference desk/Archives/Computing/2015 December 16
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages. |
December 16
Representation of numbers in binary
I would like to know why we can't represent binary numbers in powers of two?
For example 5381 can be represented as 5 × 10^3 + 3 × 10^2 + 8 × 10^1 + 1 × 10^0 = 5381, while 1101 can't be represented as powers of two. We can't write 1101 as 1 × 2^3 + 1 × 2^2 + 0 × 2^1 + 1 × 2^0 even though this gives the decimal equivalent of 1101, i.e. 13. Is there any way to represent 1101 in powers of two so that, after adding the terms in the represented form, we again get 1101? JUSTIN JOHNS (talk) 09:08, 16 December 2015 (UTC)
- That is because you are adding in decimal instead of binary. Bubba73 You talkin' to me? 06:12, 17 December 2015 (UTC)
- Um, 1101 in binary is equivalent to 1 × 2^3 + 1 × 2^2 + 0 × 2^1 + 1 × 2^0. Why did you think it wasn't representable that way? --76.69.45.64 (talk) 09:57, 16 December 2015 (UTC)
- 1101 in base 10 (i.e. 10^3 + 10^2 + 10^0) is 10001001101 (=2^10 + 2^6 + 2^3 + 2^2 + 2^0) in binary - does that help? AndrewWTaylor (talk) 12:09, 16 December 2015 (UTC)
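A minimal Python sketch of the two directions of conversion discussed above, assuming the built-ins bin() and int() are an acceptable stand-in for the by-hand arithmetic:

```python
# Decimal 1101 expanded as a sum of powers of two, matching
# 2^10 + 2^6 + 2^3 + 2^2 + 2^0 from the answer above.
n = 1101
powers = [10, 6, 3, 2, 0]
assert sum(2 ** p for p in powers) == n   # 1024 + 64 + 8 + 4 + 1 == 1101

print(bin(n))           # '0b10001001101' -- decimal 1101 written in binary
print(int("1101", 2))   # 13              -- the digit string 1101 read as binary
```

Both calls operate on the same underlying integers; only the digit strings differ.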
- Is your concern that writing
- 1101₂ = 1 × 2^3 + 1 × 2^2 + 0 × 2^1 + 1 × 2^0
- uses the digits 2 and 3, which are not binary? That is because the right hand side is in base ten notation, not binary. If you want it all in binary, you can write
- 1101₂ = 1₂ × 10₂^(11₂) + 1₂ × 10₂^(10₂) + 0₂ × 10₂^(1₂) + 1₂ × 10₂^(0₂)
- -- ToE 12:45, 16 December 2015 (UTC)
- On second reading I'm still not sure of the questioner's concern, but I now suspect that AndrewWTaylor's answer that 1101₁₀ = 10001001101₂ is more likely to address it. -- ToE 13:48, 16 December 2015 (UTC)
- I'm not quite sure what Justin's concerned about, either, but another thing that's important to remember is that when we talk about "base 10" and "base 2" and the like, we're only talking about numeric representations of numbers; we're not changing the underlying number. If I ask you to count the number of x's here:
- xxxxxxxxxx
- you will say "ten", and if I ask you whether your answer was in base 10 or base 2, you'll look at me funny and say "Huh?", because changing the base does not change the number of x's on the line, it's still ten.
- But it's easy to get confused, because when we see "10" we automatically pronounce it "ten", not "one zero". Strictly speaking, "ten" is "1010", and we ought to pronounce "10₂" as "one zero base two", not "ten base two". (Or maybe I'm splitting hairs. But remember, "There are 10 kinds of people in the world, people who understand binary numbers and people who don't.") —Steve Summit (talk) 15:04, 16 December 2015 (UTC) [edited 15:46, 16 December 2015 (UTC)]
- I am tired of this joke about 10 kinds of people. Don't remember how many times I heard it. Although the first time I heard it in CS 101, my fifth course, I found it funny. --Denidi (talk) 18:33, 16 December 2015 (UTC)
- Speaking of old jokes: you learned about binary in CS 101? --76.69.45.64 (talk) 03:43, 17 December 2015 (UTC)
- Sorry Denidi. Anyway, the joke is incorrect. There are really 10 kinds of people: those who understand ternary, those who heard the joke about binary and are now hopelessly confused, and those who don't understand what the heck we're talking about. SteveBaker (talk) 04:04, 17 December 2015 (UTC)
- Enough with the stupid old nerd jokes. Happy upcoming OCT 31, everybody. —Steve Summit (talk) 04:31, 17 December 2015 (UTC)
- I once took a Unix sysadmin class whose teacher was missing two fingers. When we got to chmod, I thought of Tom Lehrer. —Tamfang (talk) 05:10, 17 December 2015 (UTC)
Even though 1101₂ = 1₂ × 10₂^(11₂) + 1₂ × 10₂^(10₂) + 0₂ × 10₂^(1₂) + 1₂ × 10₂^(0₂) is the correct way to represent the number in binary, it doesn't give us the value 1101; rather it gives the value 13, which is the decimal equivalent of 1101. I think it may not be possible to produce such a representation even though in decimal it's possible. This might be a weakness of binary, but I'm not sure about it. JUSTIN JOHNS (talk) 06:46, 17 December 2015 (UTC)
- The thing is, the value 1101 in binary is exactly the same as the value 13 in decimal. It's sort of like if I asked you how far it is to the end of the road - 300 feet, 100 yards, 91.44 metres, and 9.66E-15 light years are all the same answer, just in different units. 1101₂ and 13₁₀ are the same number, just in a different base. MChesterMC (talk) 09:40, 17 December 2015 (UTC)
- Somewhere I have a marvelous old HP "programmer's calculator" that does decimal, binary, octal, and hex. If I put it in binary mode and ran the calculation 1₂ × 10₂^(11₂) + 1₂ × 10₂^(10₂) + 0₂ × 10₂^(1₂) + 1₂ × 10₂^(0₂) (rearranged into RPN, natch), the display would show 1101. —Steve Summit (talk) 11:15, 17 December 2015 (UTC)
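A rough sketch, in Python rather than RPN, of what such a binary-mode display amounts to (format(value, "b") standing in for the calculator's binary readout):

```python
# Evaluate 1*2^3 + 1*2^2 + 0*2^1 + 1*2^0 and keep the display in base two,
# the way a calculator left in binary mode would.
digits = [1, 1, 0, 1]        # the binary digits of 1101, most significant first
value = 0
for d in digits:
    value = value * 2 + d    # the same sum of powers of two, accumulated left to right

print(format(value, "b"))    # '1101' -- the binary display
print(value)                 # 13     -- the very same number printed in decimal
```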
- I may never forget the time a coworker punched a hex number into such a device, converted it to decimal and then asked me whether or not the result was ≥ 2^15. —Tamfang (talk) 01:55, 20 December 2015 (UTC)
- I think User:scs's comment is correct about this. Numbers aren't in any base, they just are numbers. But in order to write them, we need representations of numbers, and those representations are in bases. The number of fingers on a person's hands can be written as 10₁₀ or 1010₂, but the actual number is still the same. JIP | Talk 18:47, 17 December 2015 (UTC)
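A small illustration of that point, assuming Python's format() is a fair way to show several written forms of one number:

```python
# One number (ten), several written representations of it.
fingers = len("xxxxxxxxxx")      # count the x's; no base has entered the picture yet

print(fingers)                   # 10    -- base-ten representation
print(format(fingers, "b"))      # 1010  -- base-two representation
print(format(fingers, "o"))      # 12    -- base-eight representation
print(format(fingers, "x"))      # a     -- base-sixteen representation

# All four strings name the same underlying number.
assert int("1010", 2) == int("12", 8) == int("a", 16) == 10
```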
Google Chrome on iOS
Hi, my Chrome app is acting weird. Specifically, when I look at these ref desk pages, they all come up with the last posts on December 12. They used to work fine, and all other pages seem to work fine. I've tried clearing my cache (and cookies, history, etc) in Chrome and also forcing WP to purge its cache, but neither helps. Any ideas? Thanks, SemanticMantis (talk) 15:11, 16 December 2015 (UTC)
Could anti-virus and similar anti-malware tools lead to less security?
In the same way that a seatbelt, air-bag or insurance could possibly make people drive less carefully, could an anti-virus make people less careful with what they are doing? Risk compensation examples are rarely about computer security, but why wouldn't the phenomenon arise in this field? --Denidi (talk) 16:20, 16 December 2015 (UTC)
- Sure, conceivably - as you ask, why not? Risk compensation seems to be pretty basic to how humans work. I can't find anything as explicitly empirical and data-driven as e.g. the Munich taxi data, but here's a journal article in Computers and Security that talks a bit about risk compensation in context of computer security [1]. SemanticMantis (talk) 16:35, 16 December 2015 (UTC)
- When I am using someone's Windows PC I am super careful not to visit dodgy sites or to download and run anything. When I am at home running Slackware on a system that I can easily restore to last night's backup, I am less careful. When I am running Tails, or Tiny Core, both of which lose all changes when I reboot (and I have not entered and will not enter any personal info or passwords during that session), I freely download and run anything that seems interesting. You could say that this means that my using Tails "leads to less security", but the simple fact is that when running Tails no malware can harm me, and I behave appropriately. --Guy Macon (talk) 21:07, 16 December 2015 (UTC)
- I was going to drown you in a load of scholarly articles, but you can easily go to scholar.google.com and search for "mac users are less secure" and find more than you want to know. To summarize, a quote from my brother: "I don't have to worry about viruses or hackers because I only use a Mac and an iPhone." 47.49.128.58 (talk) 15:25, 17 December 2015 (UTC)
- As we discussed in February and March of this year - exactly what type of threat are you trying to secure your system against?
- If your threat model is unrealistic, or incomplete, your security response will be equally unrealistic or incomplete.
- I really find the thought-experiment about "hacking a vending machine" to be very instructive. If you fixate on cyber-security to the exclusion of physical security, you're probably overlooking the most obvious and important set of threats.
- Nimur (talk) 17:14, 17 December 2015 (UTC)
- Most computer users are taught that it is somehow their job to decide whether attachments and other links are safe to click on or not. If, instead, computers were programmed to simply categorize media into "safe" (words or images to display) versus "unsafe" (programs or other active content to execute), and only implement one-click openability on the former, users could click on links with abandon, but they'd actually be more safe, not less. (In other words, while it's certainly true that the provision of a safety system can cause people to worry less about security, this is not necessarily a bad thing, in fact it's arguably the whole point of safety systems. It is therefore not necessarily a reason to not deploy safety systems!) (But yes, returning to my earlier point, and before 1,267 people jump down my throat to lecture me, I do realize that some media, such as Flash and even Microsoft Word documents, is not always so easy to discriminate across my "safe" versus "unsafe" dichotomy.) —Steve Summit (talk) 18:19, 17 December 2015 (UTC)
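A toy sketch of the kind of dichotomy described above. The MIME-type allowlist and the decide() helper are hypothetical names chosen for illustration, not an existing API, and (as noted) real formats such as Word or Flash blur the line:

```python
# Hypothetical one-click policy: passive, display-only media opens directly;
# anything that could carry active content takes a slower, guarded path.
DISPLAY_ONLY = {
    "text/plain",
    "image/png",
    "image/jpeg",
}

def decide(mime_type: str) -> str:
    """Return 'open' for passive media, 'quarantine' for anything else."""
    return "open" if mime_type in DISPLAY_ONLY else "quarantine"

print(decide("image/png"))                 # open
print(decide("application/x-msdownload"))  # quarantine -- a Windows executable
print(decide("application/msword"))        # quarantine -- Word files blur the line
```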
- Yes, I'm going to jump down your throat for this. The difference between good guys and bad guys is not mathematically formalizable, so computers can't check for it. The distinction between "executable" and "nonexecutable" data is irrelevant. In-browser Javascript is Turing-complete but sandboxed, so theoretically as safe as "nonexecutable" text and images. In practice both are unsafe because of bugs in the implementations. Javascript may be more unsafe because of greater complexity, but there's no clear line. And phishing in its most basic form requires no exploitable bugs or executable code.
- OSes could do much more than they do to protect people. Every program I run should not have read access to all of my personal files, much less write access. But some should, and I have to decide which ones. We know from smartphones and general UI experience that if you ask for permission to break a security barrier, many people will click "allow" without even reading the message. Those people need to be less stupid. -- BenRG (talk) 19:38, 17 December 2015 (UTC)
- Not going to get into a long argument here, but we know from general experience that people are this stupid. If you believe that additional restrictions can't/shouldn't be built into computer systems, if you believe that the only solution is to keep trying to educate the poor users not to do "stupid" things, then the computer security problem is going to continue to get worse and worse -- and it's already unimaginably bad. (I'm not saying that you, Ben, believe these things, but plenty of people seem to.) —Steve Summit (talk) 20:02, 17 December 2015 (UTC)
- For me, the most astonishing misunderstanding is the completely incorrect assertion that malware must have user-visible symptoms. Some malware will flood your UI with pop-ups... but such an infestation is easy to identify, and therefore an appropriate response can be taken. But there is a much more sinister threat. Great malware exists that never shows the user an annoying pop-up advertisement; never slows down the CPU or network in a meaningful way; never even appears in the technical data dumps, system logs, or power-user interfaces. These are the silent keyloggers and traffic sniffers and rootkits and illicit backdoors. A great piece of malware is one that you never even know is installed - it will just persist forever and the user won't even think about trying to clean it up. These invisible artful engineering marvels are only really appreciated by systems-programmers. This is the stuff that keeps me up at night - now that my AC power adapter has firmware - and a digital communication channel to the operating system - is that firmware exploitable? Can a thus-exploited AC-adapter-microcontroller get on to the main computer's system bus and sniff other traffic? How would I even know, unless I had the ability to deeply inspect the electronics schematics and the software implementations? Nimur (talk) 20:22, 17 December 2015 (UTC)
- Once a month Windows Update downloads and runs a new version of Microsoft's Malicious Software Removal Tool. It's tailored to malware that's actually present in the wild, so by construction that malware can't evade detection by it. The malware could block Windows Update, but that's noticeable. -- BenRG (talk) 21:06, 17 December 2015 (UTC)
- You can give people control of their own computers or not. If you do, some of them are going to break security barriers willy nilly. If you don't, because some software legitimately needs to break those barriers, a central authority has to decide which software gets that permission. This applies to every computer user, not just the irresponsible ones, unless the central authority also decides which users are responsible. The alternative is to give people control of their lives and try to educate them to not mess up their lives. Do you disagree with that? -- BenRG (talk) 20:56, 17 December 2015 (UTC)
- You didn't get into the human side of it. The human user will not care about "safe" and "unsafe". If the link says something like "Click here to see Obama getting a blowjob from Hillary!" then it wouldn't matter how unsafe the computer labeled the link. The human would click it. Even if clicking it caused it to say "Hey Idiot! This is completely unsafe! Don't click it again you complete moron!", the human would click again and again and again. In the end, humans are the primary threat, not unsafe links or attachments. 47.49.128.58 (talk) 19:51, 17 December 2015 (UTC)
- Not going to get into a long argument here, but when I said "only implement one-click openability on the former", I didn't say what I'd like to see done with the latter. (Hint: it is not "protect them with an 'are you sure?' prompt".) —Steve Summit (talk) 20:02, 17 December 2015 (UTC)
- To answer the question in the subject line (if not the following paragraph), antivirus software is very complicated (modern scanners have built-in emulators for x86 and various bytecode languages, for example), it has access to all of your files and all of your data on all web sites (so it can scan them) and to the OS kernel, and it's not written by the world's smartest people. That means it opens an enormous attack surface when it's installed. This paper describes some exploits for Sophos antivirus that could be triggered without any user action (because Sophos scans things before you even get the option to open them) and led to full system takeover. Those bugs have been fixed, but the situation is probably the same now for every antivirus product because they are still very complicated and under active development. -- BenRG (talk) 19:38, 17 December 2015 (UTC)
- "it's not written by the world's smartest people" Why would AV developers be below average? I don't associate them with low IQs.Denidi (talk) 22:29, 17 December 2015 (UTC)
- Most (or all?) complex software has flaws. Even those with high IQs are seldom able to ensure that every possible attack route has been blocked. It's a race between AV software developers and malware writers, and sometimes the bad guys win. Dbfirs 08:57, 18 December 2015 (UTC)
- But would competent hackers tend to the other side? There are certainly good, honest, paid jobs in the security industry, developing AVs and all. --Denidi (talk) 15:08, 18 December 2015 (UTC)
- See White hat (computer security). Dbfirs 16:57, 19 December 2015 (UTC)
- Indeed, why would defensive security experts be less competent than criminal security experts? Imagining how a system can be hacked is essential, whether to defend it or to attack it. The difference is ethics and personality, not competence. --Denidi (talk) 18:45, 19 December 2015 (UTC)
- See human engineering: trick the user into clicking something you want him to click in order to compromise his system and own it. As a user, always think: why should I click this? Malware protection software is only as intelligent as it has been made. Never forget the human factor on the user's side. --Hans Haase (有问题吗) 20:08, 19 December 2015 (UTC)