Wikipedia:Reference desk/Archives/Computing/2010 April 24
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
April 24
Forgot password and not getting e-mail
Hello!
I have forgotten my Wikipedia password (username: manoj_meena). I tried resetting it with the "Mail me my password" method, but I am not getting any mail in my account.
Can you please let me know which e-mail account is associated with my Wikipedia account, so that I can check for the mail there? The problem may be that I had my official college e-mail attached to most of my accounts, but now that I have graduated, that address is deactivated.
What should I do? Any help will be appreciated.
Thank you
Regards
- Manoj Meena
<contacts removed> —Preceding unsigned comment added by 203.78.217.151 (talk) 00:25, 24 April 2010 (UTC)
- Unfortunately, if you do not remember the e-mail address you associated with your account, or otherwise do not have access to the address, there is nothing anyone can do. We do not know and cannot find out what the e-mail address was, nor can anyone here reset your password for you. Unless you miraculously remember what the e-mail address was, you will have to create a new account. If you have any further questions that directly relate to Wikipedia, please post them at the Help desk; this page is for general knowledge questions. Xenon54 / talk / 00:36, 24 April 2010 (UTC)
- All we can see is that the account User:Manoj meena was created 27 August 2006 [1] and has an email address associated with it. We cannot see the address. The account has no edits so creating a new account is perfectly OK. PrimeHunter (talk) 01:10, 24 April 2010 (UTC)
- Another account User:Manoj Meena was created 24 September 2006.[2] They are different accounts and may or may not have different passwords and different email addresses. We cannot see it. As an administrator I can see that User:Manoj Meena made edits to the deleted page VSOID on 3 February 2010. PrimeHunter (talk) 01:17, 24 April 2010 (UTC)
How's my motherboard look?
Hello! I have an HP dv 9000 laptop, which has been noted by the company to have inadequate fan controls to cool all of the internal hardware, and I believe my video card has been damaged. When I try to boot to Windows, I get a BSOD caused by a fatal error at my video card's driver. When I delete the driver, Windows boots up fine (albeit with very ugly graphics) on an external monitor, with the disk drives perfectly accessible. I've uploaded a few pictures of the internal components to my userpage (didn't want to clog up the RefDesk with them). I know C is the video card, but not quite sure what A and B are. They were all connected to the heat sink. Do any of them show signs of physical damage from overheating? Is it possible for components to be damaged without showing signs? Thanks for any help or advice. (And, yes, I tried the newest video card driver, and reverting to an old one; both caused fatal errors.)--el Aprel (facta-facienda) 01:21, 24 April 2010 (UTC)
- No idea what A is, but A and B look pretty scorched to me. Are all HP laptops notorious for overheating, or just the particular model you have? 24.189.90.68 (talk) 02:14, 24 April 2010 (UTC)
- A is the CPU and B is the northbridge chip. They do look a bit scorched, but it's still possible for them to be damaged even if there are no visible signs. To me it does sound like a fried video chip is a likely possibility. Winston365 (talk) 03:18, 24 April 2010 (UTC)
- 24.189, have a look at this website from HP. It talks about the models in question. I'd also like to note that A may look worse than it actually is; that burnt metallic stuff at the top was melted off from a conductor on the copper rod to the heat sink. When I opened A up, the CPU looked fine on the inside. What seems strangest to me is that dark ooze around A and B. It's dry, but is it supposed to look like that? Also, should I be concerned that A and B (the CPU and northbridge, as identified by Winston365) are damaged too? The computer runs fine with poor graphics with the help of an external monitor and no videocard driver, and judging by how important the CPU is, I would guess you would not see this much functionality if it was damaged.--el Aprel (facta-facienda) 04:08, 24 April 2010 (UTC)
- The metallic stuff on the CPU is probably just thermal paste. It is not unusual for some to stick to the chip and some to the heat sink when they are separated, and doesn't indicate anything actually melted. The dark ooze you see appears to just be an epoxy used to affix the chip, also nothing to worry about. There does seem to be other discoloration around B that does look a bit like heat damage though. I agree that if it was a CPU problem it's very unlikely the computer would continue to function normally apart from the graphics. I'm not sure if the northbridge could cause that, I haven't seen it happen, but I suppose it's possible. Winston365 (talk) 04:50, 24 April 2010 (UTC)
- Agree with Winston that A doesn't look abnormal. The 'ooze' is likely simply the epoxy used to attach the chip. I think I've even seen a core like that before personally, and it's easy to find pictures of similar cores [3] [4] [5]. It tends to stand out more when the core is large relative to the ceramic layer, such as in mobile CPUs, for obvious reasons, and it obviously varies from batch to batch and probably from factory to factory. B does look a bit odd, though. Nil Einne (talk) 06:47, 24 April 2010 (UTC)
- For future reference, the three biggest chips inside a laptop are the CPU, the northbridge and the graphics chip. The CPU can be identified by the fact that it is attached to a socket and not soldered directly to the board. The graphics chip can be identified by the word Nvidia. 121.74.167.214 (talk) 08:08, 24 April 2010 (UTC)
- Thanks, everyone!--el Aprel (facta-facienda) 19:53, 26 April 2010 (UTC)
HDTV stuck blue pixel
I have a brand new 26-inch RCA LCD TV, about 20 minutes out of the box, and there's a stuck blue pixel on it. What do I do? --Lazar Taxon (talk) 06:21, 24 April 2010 (UTC)
- Ugh. We have an article, which stuck pixel does redirect to. Some manufacturers allow a return and exchange, and others have a stated minimum (for example, you need 5 stuck pixels in order to return the TV). Comet Tuttle (talk) 06:44, 24 April 2010 (UTC)
- If the pixel happens to be near the edge, perhaps you can adjust the display size to not include that pixel or physically cover that area with black tape. I use the tape method to hide a line of frenetically flashing digital dots at the top of my TV. StuRat (talk) 11:48, 24 April 2010 (UTC)
- Check with the store— most big-box places will swap it outright within a certain number of days. ---— Gadget850 (Ed) talk 16:46, 24 April 2010 (UTC)
Mr. Do's Wild Ride
I remember earlier posting to the reference desk about some computer game I played at my cousin's house in the early 1980s, but couldn't remember what it was. I had a look at the computer game articles on Wikipedia, and saw that Mr. Do's Wild Ride comes the closest to my (faint) recollection. But according to the article, the game was never released for the Commodore 64, which my cousin had. Is it still possible that a Commodore 64 version of the game somehow exists? JIP | Talk 06:43, 24 April 2010 (UTC)
- Did the screenshots resemble the ones at arcade-museum.com? Could it have been one of the other games in the Mr. Do series? Comet Tuttle (talk) 07:08, 24 April 2010 (UTC)
- Looks like the first game in the series Mr. Do! and the second Mr. Do's Castle were ported to Commodore 64, so you may be thinking of one of those. Winston365 (talk) 07:14, 24 April 2010 (UTC)
- No, it was specifically Mr. Do's Wild Ride that evoked the memory. The screenshots did seem to match my recollection, with the tracks and the bumper cars. It was specifically not either of the first two games. JIP | Talk 07:27, 24 April 2010 (UTC)
fast xml processing?
I want to process some large (6 TB) XML documents, no prize for guessing what they are. Expat is supposed to be pretty fast, but my test app only gets about 3 megabytes/sec through it, which means that parsing the document will take about a month all by itself. I see there are some other libraries like vtd-xml that read the document into memory, which is only practical for smaller documents. I guess I can bite the bullet and write some custom C code, but I don't understand the obstacle to writing a SAX-like parser that's simply extremely fast. I guess I will try libxml2, but from what I've heard, it's about the same speed as expat. Also, all these libraries that I know of handle the document serially; it would be nice if there could be some multi-threaded readahead to get more speedup (I have a quad core processor). I wonder if anyone has any suggestions. Thanks. 69.228.170.24 (talk) 08:24, 24 April 2010 (UTC)
- Reading ahead would only help if you had multiple disk drives (so each can read simultaneously) and the memory to store the data. With a 6 TB file I would assume that it must be broken down into at least 3 pieces on 3 hard disks, but you're obviously going to overflow the memory quickly. What type of processing do these documents require ? Like any large task, it will help tremendously if you can break this down to small tasks. In this case, if you could take maybe 1 GB chunks of the document at a time (assuming you have maybe 2 GB of memory available), finish with each, then go on to the next, you would have a much faster program.
- Be sure to code the program so that it can be restarted if the program stops, since this is all but inevitable over such long time frames. If you post the code, we might be able to point out any inefficiencies in the way it processes the data. StuRat (talk) 11:34, 24 April 2010 (UTC)
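A minimal sketch in Python of the chunk-plus-checkpoint idea described above (the file names, checkpoint file and chunk size are all invented for illustration; a real version would also have to cut chunks at record boundaries rather than at arbitrary byte offsets):

    import os

    CHECKPOINT = "progress.txt"  # hypothetical checkpoint file
    CHUNK = 1 << 30              # process roughly 1 GB at a time

    def load_offset():
        # Resume from the last recorded byte offset, if any.
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT) as f:
                return int(f.read().strip())
        return 0

    def save_offset(offset):
        # Write the checkpoint atomically, so a crash mid-write
        # cannot leave a corrupt progress file behind.
        tmp = CHECKPOINT + ".tmp"
        with open(tmp, "w") as f:
            f.write(str(offset))
        os.replace(tmp, CHECKPOINT)

    def process(data):
        pass  # whatever per-chunk work is needed

    with open("dump.xml", "rb") as f:  # placeholder file name
        f.seek(load_offset())
        offset = f.tell()
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            process(data)
            offset += len(data)
            save_offset(offset)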
- The file has a huge amount of redundancy and compresses down to about 30 GB on disk. The idea is to uncompress and parse and process it on the fly. In case it wasn't obvious, it is a Wikipedia meta-history dump. The first thing I'd want to do with it is split it into smaller chunks that I can process in parallel, and gather various sorts of stats. My experience with this kind of thing, though, is that it always ends up having to be done more than once, so month-long tasks aren't appetizing. It occurs to me that I can do some low-rent hack to mark out large sub-documents (article revisions) and then process the sub-documents with one of the fast in-situ XML packages that I've been able to find. After that I may be able to do further processing with a Hadoop cluster. My expat benchmark just reads a few million events and ignores them, but even doing nothing, it's much slower than I want to wait for. If I rewrite it in C it will be faster, but still too slow, and I was hoping to use more modern languages for this. 69.228.170.24 (talk) 11:51, 24 April 2010 (UTC)
- The obvious solution here is to find a way to do the processing directly on the compressed format, without decompressing and recompressing it. You still haven't said what type of processing you need to do. StuRat (talk) 12:02, 24 April 2010 (UTC)
- Processing the compressed format sounds completely impractical. My first goal is just to figure out how to even handle the volume of data, but as an example of a processing task, I'd like to look at statistics about edits from IP addresses to biographies of living people (User:Casliber had asked about this). 69.228.170.24 (talk) 12:12, 24 April 2010 (UTC)
- OK, let's take that as an example. First, you might want to look for the "Category:Living people" tag. Hopefully, that will always look the same, once compressed. If you find that, then look for a compressed IP address. Hopefully there's a similarity there which your program can recognize, even in compressed format. Now, if you found both of those, you might uncompress the line and do whatever else you want with it. This will make it so that only a small portion of the lines need to be uncompressed, which should dramatically increase the speed. Why not show us what a typical target line looks like when compressed and uncompressed ? StuRat (talk) 12:24, 24 April 2010 (UTC)
- Another general comment: Flat files (XML or other) are completely unsuitable to the task of processing such large amounts of data. A relational database would do a far better job, if properly indexed. However, in your case, I assume this isn't an option. StuRat (talk) 12:29, 24 April 2010 (UTC)
- The compression (7z) is so efficient because it's an intricate LZ code followed by entropy coding using (I believe) a dynamic Markov model driving a range coder (similar to an arithmetic coder, I think). A string like "Category:..." repeated at various places in the uncompressed text will not be identifiable in the compressed text without actually running the decompression algorithm: it will look different every time, won't be the same length every time, and won't even necessarily be on bit boundaries, because of the arithmetic codes. (To give you an idea, the .bz2-compressed version of the same file is 180+ GB.) Also, merely seeing the string "Category: living people" with angle brackets doesn't mean you've actually found those category tags, since the string might occur in the text. You have to actually parse the XML to see that the tag is in the right place in the document structure, to know that it is an actual category tag. The files are in that compressed XML format because the wiki developers release them that way, but yes, populating a database from them might be a good way to make certain kinds of queries easier. I'll give that some thought (I'm not very experienced with RDBMSs). Thanks. 69.228.170.24 (talk) 12:40, 24 April 2010 (UTC)
- (unindenting) Some thoughts:
- 1) If the string you search for is different every run, that can be handled, just decompress enough lines to find out how it's compressed this time, then use that as your key.
- 2) If the string is different at every occurrence during a given run, then that's a more serious problem. I can't think of an easy solution.
- 3) Occasional false positives are OK; presumably once you uncompress those lines you can filter them out.
- 4) As for the RDBMS idea, it probably will take longer for the first run, since you have to load all that data into the database, but each subsequent run should be far faster. StuRat (talk) 14:34, 24 April 2010 (UTC)
- Expat shouldn't be that slow. I downloaded simplewiki-20100401-pages-meta-history.xml.7z for testing. (7z -so | wc -c) ran at about 280 MiB/s and (7z -so | expat-test-app) ran at about 85 MiB/s, on a Core 2 Duo P8700 running Win7 32-bit. At that rate processing enwiki would take about a day, which isn't too bad.
- The test app is written in C and installs dummy start/end/data handlers. I strongly advise you to write your analysis code in C++, since that's usually the language of minimum hassle for programs that need to be extensively optimized.
- If you're processing page records in parallel on a multi-core processor, an obvious way to speed things up is to run n simultaneous threads, where thread i skips a_i bytes, looks for the next occurrence of "</page>\n <page>", and then processes until the first occurrence of that string after the a_{i+1} byte mark, for i in {0, 1, ..., n−1}. You could preprocess the input file to split it into appropriately-sized pieces, but you'd have to recompress the pieces, and that could take a very long time.
- I think it would be possible to combine 7-Zip decompression and XML parsing for a substantial speedup (basically, lex the output as you go, and then copy lexemes from the sliding dictionary instead of characters). But it would probably be far too large an engineering effort to justify the benefit (a factor-of-3 speedup at best). -- BenRG (talk) 20:01, 24 April 2010 (UTC)
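For anyone who wants to reproduce a rough version of that benchmark without writing C, Python's xml.parsers.expat module is a thin binding to the same expat library. A sketch (the dump file name is a placeholder, and the Python callback overhead should be expected to make it noticeably slower than the C figures above):

    import subprocess
    import xml.parsers.expat

    count = 0

    # Near-dummy handlers that only count events, as in the test above.
    def start(name, attrs):
        global count
        count += 1

    def end(name):
        global count
        count += 1

    def chars(data):
        global count
        count += 1

    parser = xml.parsers.expat.ParserCreate()
    parser.StartElementHandler = start
    parser.EndElementHandler = end
    parser.CharacterDataHandler = chars

    # Decompress on the fly and feed the stream to the parser.
    proc = subprocess.Popen(
        ["7z", "x", "-so", "enwiki-pages-meta-history.xml.7z"],
        stdout=subprocess.PIPE)
    while True:
        chunk = proc.stdout.read(1 << 20)  # 1 MiB at a time
        if not chunk:
            parser.Parse(b"", True)  # signal end of document
            break
        parser.Parse(chunk, False)

    print(count, "events")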
- Thanks, 85 MB/s is WAY faster than other benchmarks I'd seen for expat--maybe something is broken about the wrapper I'm using. OK, I will probably have to write this in C or C++. 69.228.170.24 (talk) 20:33, 24 April 2010 (UTC)
- BenRG, when you say dummy start/end/data handlers, you mean you installed callbacks that don't do anything? Or that you didn't pass callbacks at all (I'm not sure if expat allows that, but if it does, it might skip some processing steps in that case). How long does it take to catch the first 10 million expat events and then stop? 85 MiB/sec is around 25-30 cycles/byte on your machine, which is much faster than I've ever heard anyone claim for expat. I will think about your idea of processing page records in parallel without first splitting up the file. The problem is some page records (all the revisions) are very large, like 100's of GB for noticeboard pages that have huge numbers of revisions. Processing individual revisions in parallel on the other hand will require "juggling" since I want to look at differences between revisions. I also have never done any multithreaded programming in C++. I was thinking it would be simpler to just split the file into chunks and then handle the chunks with multiple processes, possibly distributed across several machines. 69.228.170.24 (talk) 03:42, 25 April 2010 (UTC)
- If you put the data in a database, it becomes trivial to split the work between multiple processes, even on separate machines. Since the data is constant, you can avoid making the database server or the network a bottleneck by duplicating the database on several machines. IMHO learning SQL and basic RDBMS usage is something every programmer should do at some point. The sooner you bite that bullet, the more productive you'll be in the future. 85.156.90.21 (talk) 06:51, 26 April 2010 (UTC)
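A minimal sqlite3 sketch of that load-once, query-many idea (the schema and column names here are invented; a real one would be designed around the dump's actual fields):

    import sqlite3

    conn = sqlite3.connect("revisions.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS revision (
            page_title  TEXT,
            contributor TEXT,
            is_anon     INTEGER,  -- 1 if the contributor is an IP address
            timestamp   TEXT
        )""")
    conn.execute("CREATE INDEX IF NOT EXISTS idx_anon ON revision (is_anon)")

    # ... rows would be inserted here while parsing the dump ...

    # Example query: the ten pages with the most anonymous (IP) edits.
    for title, n in conn.execute("""
            SELECT page_title, COUNT(*) AS n
            FROM revision
            WHERE is_anon = 1
            GROUP BY page_title
            ORDER BY n DESC
            LIMIT 10"""):
        print(title, n)

Once the data is loaded and indexed, questions like the IP-edits-to-biographies one above become single queries instead of month-long scans.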
- I installed callbacks that didn't do anything (except increment a counter). Stopping after 10,000,000 callbacks takes 2.4 seconds. I guess I might have botched my benchmark somehow, but 25 cycles per byte seems entirely plausible to me, since there's not very much that expat needs to do. Most of the time it's skipping bytes looking for the next < character. (Edit to add: I think you're right that splitting the file into chunks is the way to go.) -- BenRG (talk) 08:27, 26 April 2010 (UTC)
- Thanks. I'll do some more of my own timings. 69.228.170.24 (talk) 01:56, 27 April 2010 (UTC)
removing grub from Windows machine
I have a WinXP-Mandriva dual-boot system which starts with GRUB. I hardly use the Linux side and don't want GRUB to appear each time I switch on the machine. Can removing the string "c:\grldr="Start GRUB"" from boot.ini prevent GRUB from loading? Will XP start on its own once GRUB is deactivated? --117.204.94.241 (talk) 10:43, 24 April 2010 (UTC)
- Are you actually seeing a Grub startup screen first, or only the option to "Start GRUB"? If the latter, removing the line (and making sure the default choice isn't set to the now-missing GRUB line) should suffice.
- You can find an example here: Boot.ini#boot.ini
- If, however, the first thing you see is the Grub startup screen, you will have to replace the Master Boot Record.
- To do this, run the recovery console and enter the command fixmbr
- Before doing that, I would recommend getting a Linux live CD and a floppy disk or USB key fob, to store a copy of the current MBR using the dd command - just in case the machine goes belly-up after the fixmbr.
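- As an illustration, a typical boot.ini with a GRUB entry might look something like this (the exact paths vary from machine to machine; the fix is to delete the c:\grldr line and make sure default= points at the WINDOWS entry):

    [boot loader]
    timeout=30
    default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP" /fastdetect
    c:\grldr="Start GRUB"

- And the MBR backup from a live CD would be along these lines (assuming the first disk is /dev/sda and the USB key is mounted at /media/usb):

    dd if=/dev/sda of=/media/usb/mbr-backup.bin bs=512 count=1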
- -- 78.43.60.58 (talk) 16:12, 25 April 2010 (UTC)
- I was wrong to say it was GRUB. What I see at boot is this screen. There are three options: XP on top, then Mandriva and Mandriva safe mode. After that I again get an OS selector menu, that of XP: black background and no graphics. -117.204.86.125 (talk) 11:33, 26 April 2010 (UTC)
- That screenshot of yours does show a grub startup screen, just with a graphical boot image. So the procedure above regarding fixmbr should help you. -- 78.43.60.58 (talk) 10:52, 28 April 2010 (UTC)
Saving only the Google cookie when exiting Firefox
I am using Firefox 3.6.3 and WinXP. I have it set to delete all cookies when the browser window is closed. This means my preferences for Google are lost every time. Is there any way to delete all cookies when Firefox is closed, except the Google cookie? I have looked through the Options menu and not seen a way. Thanks. 78.148.48.230 (talk) 11:52, 24 April 2010 (UTC)
- You can do it manually, by copying the cookie to a safe directory, closing Firefox, then copying the cookie back. Of course, Firefox may actually delete the old cookies at startup, in which case you'd need to do the cookie replacement after that. This process could be automated with a batch script. StuRat (talk) 12:12, 24 April 2010 (UTC)
- It would be easier to simply re-enter my choices in Google than do that - but re-entering them every time I start Google is exactly what I am trying to avoid. 78.148.48.230 (talk) 12:36, 24 April 2010 (UTC)
- "Edit">"Preferences". "Privacy" tab. Set "Firefox will: Use custom settings for history". Turn off "Accept cookies from sites", but click the "Exceptions" button, and add an exception to allow "google.com". Paul (Stansifer) 13:31, 24 April 2010 (UTC)
- That will cause the browser to not retain non-google cookies even during the session. I think the desire is to retain all cookies during the session, then delete them all except for some special ones when the browser is closed. I've wanted that too and don't know a simple way. 69.228.170.24 (talk) 13:35, 24 April 2010 (UTC)
- Maybe the best solution is to set up Firefox Profiles for your private browsing (and set the privacy settings high, including clear-on-exit). Then, use a different firefox profile (with a different desktop shortcut, for example), to access your non-private profile, which saves your google and other persistent cookie information. Nimur (talk) 14:53, 24 April 2010 (UTC)
Isn't there some way of making a file undeletable? I forget what it is called. Would that work with the Google cookie? 89.243.201.152 (talk) 19:12, 26 April 2010 (UTC)
- All the browser cookies are in one file, I think. 69.228.170.24 (talk) 04:11, 27 April 2010 (UTC)
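Since Firefox 3.x keeps all cookies in a single SQLite database (cookies.sqlite in the profile directory), one way to automate the keep-only-Google idea is to invert the setup: let Firefox keep cookies, and prune everything except Google's with a small script run while the browser is closed. A sketch (the profile path is a placeholder, since the profile directory name is randomly generated; moz_cookies is the cookie table in Firefox 3.x's store):

    import sqlite3

    # Adjust the path to the real profile directory.
    db = sqlite3.connect(r"C:\Documents and Settings\user\Application Data"
                         r"\Mozilla\Firefox\Profiles\xxxxxxxx.default\cookies.sqlite")
    db.execute("DELETE FROM moz_cookies WHERE host NOT LIKE '%google.com'")
    db.commit()
    db.close()

Run it only while Firefox is closed, otherwise Firefox may overwrite the changes on exit.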
Layers in PDFs
I have a PDF which has some graphics over what I want to see. I can see it is a layer, as what is behind it is drawn to the screen but then covered by the graphics. Is there any way of removing the top layer, please? 78.148.48.230 (talk) 11:54, 24 April 2010 (UTC)
- Maybe. Open the file in Inkscape, select and ungroup the objects, and then try to delete the top object(s). This may uncover objects underneath (or may not, depending on whether the program that authored the PDF decided to leave, cull, or crop occluded objects). -- Finlay McWalter • Talk 13:01, 24 April 2010 (UTC)
Can any freeware PDF software remove layers or otherwise decompose it? Thanks 89.243.213.182 (talk) 17:23, 24 April 2010 (UTC)
- Inkscape is probably as good as any other for this kind of thing. --Mr.98 (talk) 17:13, 25 April 2010 (UTC)
Defining a vague business project
If someone has a vague idea of what they want some business software system to do, what are the techniques or methods used to sharpen that vagueness into something precise enough that conventional programming techniques could then be used? Thanks 78.148.48.230 (talk) 12:00, 24 April 2010 (UTC)
- 1) First, define inputs and outputs. Specifically, what variable types and lengths will be used.
- 2) Define any forms used to supply the data and the format of any output documents created.
- 3) Next, create a flowchart of the program.
- 4) Finally, write pseudo-code.
- These steps will lead to the questions which need to be answered to fully specify the program. StuRat (talk) 12:08, 24 April 2010 (UTC)
Sorry, that is way too far ahead. That is the stage I would want to reach. I should have written business system rather than business software. 78.148.48.230 (talk) 12:40, 24 April 2010 (UTC)
- This isn't really a computing reference question; but maybe you could check some "how-to" books on marketing, business, and design. Crossing the Chasm ($9 at Amazon) explores the task of taking a vague high-tech idea and turning it into a marketable product. Your question is worded so vaguely, though, that it's not clear what you are looking for. Business process management might link you to some useful ideas and concepts. And software engineering might help too. Software engineering is quite distinct from computer programming - in general, it is the profession of converting vague conceptual needs into well-designed, implementable features that can be programmed and designed into computer software. Nimur (talk) 15:00, 24 April 2010 (UTC)
An example would be someone who wants to create an online automated system for insuring bicycles. That is their vague, nebulous idea: what are the methods or techniques to get it from there to software implementation? I'm not concerned about marketing and so on. Thanks. Update: Thanks for Crossing The Chasm, but I was already aware of the adopter categories the article describes some years before the book was published, and as I said I'm not enquiring about marketing, just the software and computing parts. 89.243.213.182 (talk) 16:49, 24 April 2010 (UTC)
- Then StuRat was on the right track, though he was coming at it from the coder's point of view rather than the customer's (yours). Let's consider this online automated bicycle insurance system, and let's assume it's a Web site. The traditional approach is that you would consider what you want this system to look like and do, from the point of view of each possible user of the system; and once you're done with all the categories of users, and you know what you want it to do for each user, you figure out what "back-end" systems also need to be written. Consider an end user, perhaps a college student. What's the home page of the web site look like? Use a pencil and paper, or Microsoft Powerpoint or a paint utility, to mock up very primitive versions of all the screens. This will force you to figure out at what point you're going to make him sign up for an account, which will force you to figure out what your validation process is going to be, for example. Continue this process for each web page that the college student will interact with. Then consider the system from the point of view of your system administrators; of the underwriters that are the counterparties to the consumers; etc., until you have designed, on paper, all the "interfaces" that each type of user will have to the system. Now you would figure out how the system needs to tie the insurance application to underwriters and get assigned to a pool and do the risk assessments and issue quotes and charge the credit card and all that stuff. Write it all out, in steps. Then at that point you're probably ready to be able to identify what types of talent you need to work with, and have that talent start designing the system at a lower level and start writing code.
- Now, I said that was the traditional approach to developing software. Some companies, like some brave video game developers, have been using "agile" development methods like Scrum, in which you don't design it all up front; you write the briefest possible design documents up front, and do "sprints" of one week, designing as you go. This is more appropriate if, as in some (well, probably most) video game development, there is a lot of risk of slippage in development, and/or the objectives are not able to be determined 100% up front — in either case, agile development lets you change course during development with a minimum of wasted development effort. Whether this is appropriate for your system is up to you. Comet Tuttle (talk) 18:02, 24 April 2010 (UTC)
- I'm not sure how your comment really relates to the OP's question - but I don't think you understand "scrum" at all - I'm a game developer and I've used that system for several years now. It's certainly not true that you can't design the system up-front with that management technique. Sprints last for three or four weeks during the early stages of the project and shrink to one or two weeks as we close in on a release candidate and are mostly doing bugfix and 'polish'.
- It's true that games software companies tend not to do such detailed up-front design - but that's nothing to do with scrum - it's because we're trying to engineer "fun" and that's a tough thing to plan up-front. We need agile approaches that let you change direction in mid-stream because that's what has to happen when something you thought would be fun turns out not to be - or when some odd thing you just tossed into the design as a small feature turns out to be something that play testing discovers to be unexpectedly fun and has to become a much more major game mechanic. However, for the more mechanistic parts of the system (eg How does the disk sub-system stream graphics data from the hard drive? How does the AI sub-system do pathfinding?) - we certainly do design in great detail 'up front' and produce detailed design documents. Then we turn those into "stories" and use scrum to manage the implementation processes.
- Over the life of a game, we'll have design "stories", implementation stories, (perhaps documentation stories), play-testing stories, QA stories and bug-fix/polish stories. Sometimes those stories are run consecutively - sometimes in parallel - it depends on the kind of sub-system we're building. We use scrum for programmers - but also for artists, sound designers, story-boarders, etc. The scrum approach is mostly a way for management and system design to interface with the engineers in such a way that 'micromanagement' (which is evil) is kept out of the system and individual engineers are given responsibility to estimate what they can do - and do what they estimate - which gives management more confidence that the system will deliver on time, and give implementers the necessary team spirit and feelings of 'ownership' that are so important to the way they work. The 'agile' part means that if we have to stop implementation and go back to design, we can cleanly do that at the end of the current sprint. But it all happens in a very controlled fashion. SteveBaker (talk) 18:42, 24 April 2010 (UTC)
- I'm not sure why you thought my suggestions were "so far out ahead". Let's take the online bicycle insurance project, and break it up into components:
- A) Offer a quote.
- B) Accept online payments and grant an insurance policy.
- C) Provide an online insurance claim form.
- D) Provide a "contact us" method for cancellations and other issues.
- Now, let's just take on the first part, offering a quote, and apply my suggestions to it:
- 1) "First, define inputs and outputs. Specifically, what variable types and lengths will be used." You're going to need their name, address, age, gender, number of bicycles, make of bicycles, age of bicycles, condition of bicycles, amount of insurance desired, etc. Why can't this be defined now ?
- 2) "Define any forms used to supply the data and the format of any output documents created." You probably want one form for info on the person and another for info on the bike(s). As far as output document, you want a printable quote with all the relevant info on it. Why can't this be defined now ?
- 3) "Next, create a flowchart of the program." In this case, how you calculate the premiums required for a given level of insurance. Again, why wait ?
- 4) "Finally, write pseudo-code." Why not ? StuRat (talk) 19:28, 24 April 2010 (UTC)
A more traditional process would go something like: 1) meet with business and marketing people and discuss the high-level purpose of the software, including things like user scenarios (use cases). Write this up in a requirements specification (see requirements analysis). 2) Meet with programmers and convert the requirements specification to a functional specification which describes technical features that the program should have, including mockup screen shots and the like. There are a lot of different "process" cults of which Scrum is one, but the basic idea is that you probably want to do one or more iterations of rapid prototyping at around this step. In the case of something like an insurance application (involving private financial info), you probably want to have security review even at this early level, including about making sure your system can pass audits under programs like SAS 70. 3) Going from the functional spec to working code is mostly what programming is about, and there's a lot of flexibility and culture wars about how to do it, but again, follow the standards of the industry you're working in. If it's an insurance app, don't write it like a video game. 69.228.170.24 (talk) 20:18, 24 April 2010 (UTC)
What is Rapport (Internet Explorer)?
Hi, I've seen on Internet Explorer on two different Windows computers that there's this program running in the background called Rapport. Is it a legitimate program? I have not noticed it before. Has anyone else seen it running in IE? It's basically this white top half of an arrow in a green box at the far right of the address bar at the top. Chevymontecarlo. 16:36, 24 April 2010 (UTC)
- Perhaps the security software from Trusteer? Check the logo on their website. ---— Gadget850 (Ed) talk 16:44, 24 April 2010 (UTC)
- Ah, maybe someone else in my family got the software from their bank. Thanks. It doesn't seem to be malicious. Chevymontecarlo. 06:05, 25 April 2010 (UTC)
Dolphin emulator
What processors are able to run the Dolphin emulator? Are Pentium 4 and Athlon 64 the only processors that can run Dolphin? --88.148.207.106 (talk) 18:56, 24 April 2010 (UTC)
- From the wording of the article, I'm guessing any x86 processor with SSE2 can run the emulator although it's likely some may be too slow. This means most recent x86 processors (recent here meaning released since like 2003 or so) and therefore most recent desktop and laptop computers, more details covered in the SSE2 article. It sounds like it hasn't yet been ported to any other architecture such as the PowerPC [6] Nil Einne (talk) 20:00, 24 April 2010 (UTC)
- The requirements listed in the article are minimum requirements. A Core 2 is better than a Pentium 4. -- BenRG (talk) 20:04, 24 April 2010 (UTC)
Turn off/limit drag and drop of links
I regularly try to select a link, but instead of the click registering, the link either turns purple or gets a dotted box drawn around it. I suspect that the mouse moved slightly while I clicked, and the browser therefore thinks I'm trying to do a drag and drop. It then refuses to do anything, since that link can't be dragged anywhere:
1) Is my analysis of the problem correct ?
2) How can I prevent this ? Can we either disable the drag and drop feature for links or make it less sensitive, so moving the mouse a single pixel won't trigger that behavior ?
StuRat (talk) 20:51, 24 April 2010 (UTC)
- Yes; to sidestep the second question, you can press the escape key to cancel the drag-drop or alternatively drag the link to the address bar (or another tab). 131.111.248.99 (talk) 20:58, 24 April 2010 (UTC)
- I assume you are using Windows. In the mouse settings (in the control panel) there is a drag sensitivity setting. Increase it so you have to drag the mouse further to make it be recognized as a drag. I don't have Windows on any of my computers, so I cannot provide the exact location or name of the setting. It should be under Control Panel, Hardware, Mouse, Dragging. -- kainaw™ 21:57, 24 April 2010 (UTC)
- There's no "dragging" adjustment under Windows 98 or XP that I can find. StuRat (talk) 00:54, 25 April 2010 (UTC)
- I have not used Win 9x for years (last time in 2001), but I cannot recall any such option either. I know you can select the double-click interval, though... --Andreas Rejbrand (talk) 01:10, 25 April 2010 (UTC)
- I had a chance to chat with a Windows guy and he said it was part of power toys for Win XP. -- kainaw™ 01:24, 25 April 2010 (UTC)
- Thanks, I'll try that on my XP computer. StuRat (talk) 05:21, 25 April 2010 (UTC)
- UPDATE: That apparently requires Windows Genuine Advantage, and I'm not going to put that crap on my PC. Any other suggestions ? StuRat (talk) 18:55, 27 April 2010 (UTC)
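One further possibility: the drag threshold Windows uses is stored in the registry under HKEY_CURRENT_USER\Control Panel\Desktop, as the string values DragWidth and DragHeight (both default to 4 pixels), and raising them does not require PowerToys. For example, a .reg file like the following would require the pointer to move 20 pixels before a drag starts (log off and back on for it to take effect; the value 20 is just an illustrative choice):

    Windows Registry Editor Version 5.00

    [HKEY_CURRENT_USER\Control Panel\Desktop]
    "DragWidth"="20"
    "DragHeight"="20"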