Wikipedia:Reference desk/Archives/Computing/2016 March 13
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
March 13
AlphaGo hardware
AlphaGo#Hardware shows how the strength of the program varied with hardware. There is a big jump from the first line to the second line - about 600 Elo points, which is huge. But the only change was from one GPU to two; the other parameters were the same. The other increments in strength are much smaller. How did adding one GPU make such a big difference? Bubba73 You talkin' to me? 01:13, 13 March 2016 (UTC)
- Maybe this link helps explain it. https://security.stackexchange.com/questions/32816/why-are-gpus-so-good-at-cracking-passwords The Quixotic Potato (talk) 13:11, 13 March 2016 (UTC)
- I think the question is also at least partially about why further jumps are so small. The next step is also a doubling of the GPUs, and so is the next; then they move to a distributed setup with quite a large increase in both CPUs and GPUs initially, albeit with fewer threads (although I'm not sure these mean the same thing), yet the increases there are all relatively modest compared to the very big jump from 1 GPU to 2. An understanding of how the Elo rating system, competition, game difficulty and learning tend to work will partially help with this. Consider, for example, how the difficulty of a person increasing their Elo rating in some game (be it chess, Go or whatever) by 600 points from 300 to 900 compares with going from 900 to 1500 (a worked example of what a 600-point gap means is sketched further down this thread). Of course, to some extent there's probably a factor of what's good enough for their neural network, which may also be related to typical things like Amdahl's law, diminishing returns etc. (as much as these apply to their program). Nil Einne (talk) 20:03, 13 March 2016 (UTC)
- True. The ROI of adding new hardware does not stay constant. And you'll probably have to rewrite the software to take advantage of the new hardware. The Quixotic Potato (talk) 20:43, 13 March 2016 (UTC)
- A good example of diminishing returns. The Quixotic Potato (talk) 23:59, 13 March 2016 (UTC)
- Actually I am wondering why there is such a huge jump from the first row to the second, compared to the change in the other lines. A 600-point jump in Elo rating is huge. Bubba73 You talkin' to me? 23:19, 13 March 2016 (UTC)
- This is guesswork, but it seems likely that at that point the bottleneck was the inability to handle loads of data in many streams, and adding a GPU fixed that problem. It is possible that the new bottleneck was something that adding another GPU wouldn't fix. They probably had to rewrite the software to take advantage of the new hardware, so it is possible that they improved the software too. I am not sure whether they used the exact same software on different hardware configurations, or whether the hardware and software both changed over time. The Quixotic Potato (talk) 00:05, 14 March 2016 (UTC)
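To put the size of that jump in perspective, here is a minimal sketch using the standard Elo expected-score formula; it is generic to the rating system, not anything specific to AlphaGo's implementation:

```python
# Expected score of player A against player B under the standard Elo model:
# E_A = 1 / (1 + 10 ** ((R_B - R_A) / 400))

def expected_score(r_a: float, r_b: float) -> float:
    """Expected score (roughly, win probability) for player A versus player B."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

for gap in (100, 200, 400, 600):
    print(f"{gap}-point gap: expected score ~ {expected_score(gap, 0):.3f}")

# A 600-point gap gives an expected score of roughly 0.97, i.e. the stronger
# side is expected to take about 97% of the points - which is why a 600-point
# gain from adding a single GPU looks so dramatic.
```

Under this model the same rating gap implies the same expected score wherever it sits on the scale, so each further 600-point gain corresponds to the same large improvement in results, which gets progressively harder to achieve.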
- CPUs and GPUs have significantly different architectures that make them better suited to different tasks. A GPU can handle large amounts of data in many streams, performing relatively simple operations on them, but is ill-suited to heavy or complex processing on a single stream or a few streams of data. A CPU is much faster on a per-core basis (in terms of instructions per second) and can perform complex operations on one or a few streams of data more easily, but cannot efficiently handle many streams simultaneously (a small batching sketch appears after the next reply). [1] The Quixotic Potato (talk) 20:27, 13 March 2016 (UTC)
- The first part of that page compares 48 CPUs and 1 GPU with 48 CPUs and 2 GPUs. Perhaps the program was designed in such a way that the relative lack of GPUs starved some of the CPUs of work to do? Alternative theory: when they say "GPU" they really mean "video card" or "video chip", and maybe not the same model. 1 GPU seems really low. An Nvidia GeForce GTX Titan has 5760 CUDA cores. I would really like to see what model of processor and graphics card they used. --Guy Macon (talk) 21:10, 13 March 2016 (UTC)
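As an illustration of the throughput difference being described, here is a generic NumPy sketch (not AlphaGo's actual code): evaluating many positions as one batched array operation is far cheaper than looping over them one at a time, and batched linear algebra of exactly this shape is what GPUs accelerate.

```python
import time

import numpy as np

# Hypothetical stand-in for a neural-network layer: one matrix multiply.
# 1000 "positions", each encoded as a 361-element feature vector (19x19 board).
positions = np.random.rand(1000, 361).astype(np.float32)
weights = np.random.rand(361, 361).astype(np.float32)

# One position at a time (serial, CPU-style processing).
start = time.perf_counter()
serial = [pos @ weights for pos in positions]
t_serial = time.perf_counter() - start

# All positions in a single batched operation (the pattern GPUs excel at).
start = time.perf_counter()
batched = positions @ weights
t_batched = time.perf_counter() - start

print(f"serial:  {t_serial:.4f} s")
print(f"batched: {t_batched:.4f} s")
```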
- I was looking at the Nature paper again (I'm not involved in software development, let alone development of neural networks, nor do I have much of a mathematics background, so I've only really skimmed through it). I noticed in particular this table in the extra data [2], which got me thinking. From what I can tell, it isn't explained either in the extra data [3] or in the article, e.g. [4] (bearing in mind I only skimmed through it), how the single GPU was used. I didn't look for discussions elsewhere, but perhaps the single GPU was only used for the policy network. The Elo rating suggests the first mix in Extended Data Table 7, where they use 2 GPUs for the policy network and 6 GPUs for the value network with a mixing constant of 0.5, is the one they used for the normal 8 GPU variant (a sketch of how such a mixing constant is applied is given below). However, the data suggests 8 GPUs dedicated to the policy network can perform better than the single GPU did, so it could be that they dedicated the single GPU to the policy network. Or maybe they were still using both networks, but their sole GPU was dedicated to one of them. Alternatively, if it was shared, perhaps their code wasn't well designed to run both on the same GPU, or it simply doesn't work well. Nil Einne (talk) 12:11, 15 March 2016 (UTC)
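For reference, a minimal sketch of how a mixing constant like the 0.5 mentioned above could be applied. As I read the Nature paper, the constant λ blends the value-network estimate of a leaf position with the outcome of a fast rollout; the function and variable names here are illustrative, not taken from any real codebase.

```python
LAMBDA = 0.5  # mixing constant reported in the paper's comparison table

def leaf_evaluation(value_net_estimate: float, rollout_outcome: float,
                    lam: float = LAMBDA) -> float:
    """Blend a value-network estimate with a fast-rollout result.

    Both inputs are assumed to lie in [-1, 1] from the current player's
    point of view. With lam = 0.5 the two signals are weighted equally.
    """
    return (1.0 - lam) * value_net_estimate + lam * rollout_outcome

# Hypothetical example: value net says +0.3, rollout ended in a win (+1.0).
print(leaf_evaluation(0.3, 1.0))  # -> 0.65
```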
Estimation of bandwidth requirement
What bandwidth will be required for a single 1:1 Internet leased line connection for an organization of 250 LAN users, for web surfing, sending email, downloading attachments and downloading other resources? What will be the maximum, minimum and average bandwidth requirements when all 250 users access concurrently and when usage varies? What bandwidth is needed for comfortable web surfing? Will other accessory equipment such as an accelerating UTM, a caching web proxy, a minimum spanning tree intranet, or a SAN/NAS buffer design help lower bandwidth and lower rental cost? What type of last-mile connectivity is the most cost-effective as well as the most efficient in terms of bandwidth and low latency? Will setting filters for all videos and streaming media help get a good web experience? How will I get the lowest latency and highest bandwidth for organizational internet with a 1:1 leased line? What type of cabling from the gateway will be best for 250 concurrent users of a 1:1 leased line? What are cheaper and/or more efficient and effective alternatives to a 1:1 leased line? 115.187.47.89 (talk) 12:19, 13 March 2016 (UTC)
- I recommend asking a couple of companies that specialize in this kind of stuff how they would solve this problem and what it would cost, and comparing the responses. The Quixotic Potato (talk) 12:49, 13 March 2016 (UTC)
- The IP appears to come from West Bengal, so is that where the system is required? The country and location make a big difference as to what service is available and the cost. Graeme Bartlett (talk) 22:53, 13 March 2016 (UTC)
- Is this homework, or an actual requirement? You have mixed up several technologies there. Web proxies certainly have helped in the past, but with the changeover to https: they are becoming less able to intercept and buffer the traffic. A SAN/NAS buffer is a disk storage technology, and could be used for a proxy cache. However it is not the best for this, and directly attached primary storage would be better. That Violin storage mentioned above could speed up a proxy server, but is much better put in an application server. A SAN would increase your costs for this application. If UTM means Unified threat management, then sure, you need to do something about malware, but the same issues with https arise, and "unified" may not be very possible. The firewall is still an essential component to allow only the connections you want, and you will want antivirus on your workstations too. A spanning tree intranet is really a different thing, and 250 users is getting a bit too many for one LAN. It is still possible, but there will be too much rubbishy broadcasting disturbing all the devices, and it will work better at half that number or less. These users may have more than one device on your LAN too. The bandwidth variation range is hard to pin down: are all these users sitting at desks surfing all day and watching videos, or are they factory workers who just have a quick look at the start of the day? Email does not need very high bandwidth. For web surfing, somewhere between 1 Mbps and 100 Mbps could do (a rough back-of-envelope estimate is sketched below). You could vary this by selecting different upload/download ratios, or different contention ratios, e.g. that 1:1. You could get lower latency by getting a high speed back-haul, e.g. 1 Gbps, with a 1:10 contention ratio (i.e. on average you only get to use 10%). Filters to stop video and streaming will certainly improve the situation for those who don't need to watch them, but is video/TV a requirement? You can expect video to take up the majority of your bandwidth. Also, nowadays you can expect software updates to hundreds of devices to be a drain on capacity, so a method to reduce that may be needed. An important consideration is reliability. There are different tiers of service, and the lowest is a domestic grade that could be down for days. How much outage can you tolerate? Lastly, a 250-person organisation would not have the capacity to support so many technologies, so you may wish to have some reliable other company provide the service. Graeme Bartlett (talk) 22:22, 13 March 2016 (UTC)
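For a rough sense of the numbers involved, here is a minimal back-of-envelope sketch; the per-user rates and concurrency figures are illustrative assumptions, not measurements from any particular site:

```python
# Back-of-envelope bandwidth estimate for 250 LAN users.
# All per-user rates and concurrency figures below are assumptions for
# illustration only; real values depend heavily on what the users actually do.

USERS = 250
PEAK_PER_USER_MBPS = 2.0      # assumed busy web page / attachment download
AVERAGE_PER_USER_MBPS = 0.2   # assumed typical mix of email and light browsing
CONCURRENCY = 0.3             # assumed fraction of users active at any instant

worst_case = USERS * PEAK_PER_USER_MBPS                  # everyone peaking at once
typical_peak = USERS * CONCURRENCY * PEAK_PER_USER_MBPS  # busy-hour estimate
average_load = USERS * AVERAGE_PER_USER_MBPS

print(f"worst case (all 250 at peak): {worst_case:.0f} Mbps")
print(f"typical busy-hour peak:       {typical_peak:.0f} Mbps")
print(f"average load:                 {average_load:.0f} Mbps")
```

With assumptions like these the answer lands in the tens to low hundreds of Mbps, consistent with the range mentioned above; caching, video filtering and update management mainly reduce the peak figures.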
Custom Software Development
As there are now a lot of open source development frameworks, can I develop domain-specific software solutions without in-depth knowledge, or with only a basic understanding, of programming languages like C, Java, Visual Basic, .NET and PL/SQL? Where can I find a domain-specific list of open source frameworks? Are COM and COM+ components the equivalent of JavaBeans? Can data structures, TCP/IP protocols and APIs be implemented as COM/COM+ components or JavaBeans, if possible with a configurable graphical interface? What are the open source equivalents of ActiveX controls and OLE objects? Can protocols be treated as APIs? 150.129.102.146 (talk) —Preceding undated comment added 14:19, 13 March 2016 (UTC)
- I'm sorry nobody has answered this yet, but the range of the questions is very broad. See Domain-specific language and Component model for our articles on the basic concepts. If you have a more specific question, please feel free to come back. Tevildo (talk) 22:34, 16 March 2016 (UTC)
PSU
Is there a specific term for PSUs that have an IEC 60320 C14 output connector for attaching to the monitor, so that both the computer and the monitor only use one mains socket? Like this. It was common on older computers but seems to have disappeared in modern PSUs. Thank you. 82.44.55.214 (talk) 20:25, 13 March 2016 (UTC)
- I believe the term "AT power supply" would describe the de facto standard PC power supplies from the late '80s and early '90s, before the ATX power supply standards were introduced. Back in those days, PCs were typically turned on and off using the physical power switch on the power supply itself. Monitors, which did not then have low-power standby logic that responded to the presence or absence of a video signal, could be conveniently turned on and off by the same switch if their power was connected to the line out of those power supplies. -- Tom N talk/contrib 22:17, 13 March 2016 (UTC)
- As a point of clarification, the switch was connected to the AT power supply, but was not necessarily part of the actual power supply box. See [5] as an example. However it was always a simple switch which disconnected mains power to the PSU. (I shocked myself once when the switch insulation had moved.) Nil Einne (talk) 09:53, 14 March 2016 (UTC)
- There has been no general specification for this since the ATX specification was released. Siemens-Nixdorf included C14 AC output connectors in their PCs up to the Pentium III generation. More recent power supplies – I guess manufactured by ASTEC – had the output directly connected to the input. Earlier series had a dual-line relay installed, turned on when PS-ON is activated. A simple 5 or 12 volt relay connected to the DC output, controlling the AC output, would do it. Insulation is required! --Hans Haase (有问题吗) 21:30, 15 March 2016 (UTC)