Wikipedia:Reference desk/Archives/Computing/2014 September 13
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
September 13
Ranking in Excel
Let's say there is a competition with 10 people. The names are written once in A2 to A11 (column) and once in B1 to K1 (row). The person in A2 wins against C1, thus C2 displays "1" (indicating a win) and B3 displays "0" (indicating a loss). Since one can't play against himself, B2, C3, D4,... also display "0". The points will be added at the end of the row. So far there's no problem. However, the competition system is different. It can be generally said that everyone plays against everyone once. The problem with this is that people can have the same number of points because of this situation: person A wins against person B, loses against person C, but person B defeats person C. Now everyone has one point and there is no final winner. The system doesn't allow rematches and this is solved this way: person A wins against person B, loses against person C, so person B has already lost against C despite the two having never played against each other. Now don't question the fairness of the system. I just want "person B has already lost against C" to be somehow indicated after A's games so that no mistakes occur. The cell would automatically display "0" in B's imaginary game against C. It could be something else, like a color change or just some other indication. This should then be extended to 10 more people. Is this possible with some VBA? Unfortunately I have absolutely no knowledge of VBA, so I need an explanation of how to alter factors. --2.245.191.249 (talk) 00:44, 13 September 2014 (UTC)
- It is possible in VBA (although cumbersome). It is also possible in plain Excel with formulas. If you have time, you can add another worksheet to the Excel file with all the formulas you require, and it will fill in the wanted result for you. For example (in pseudocode; I don't remember the exact Excel syntax right now):
[in the cell holding the result]: =<reference to the cell in the second worksheet that holds the right result>
[in the referenced cell in the second worksheet]: =IF(C2=0; 0; "") - that is, if cell C2 holds a zero, return a zero; otherwise leave the cell blank
- Anyway, to check several results you will need to be creative with Excel formulas. It's a long job, but not impossible. --92.225.181.9 (talk) 23:41, 13 September 2014 (UTC)
"Native"ness of resolution
The article display resolution tells the reader:
- most recent screen technologies are fixed at a certain resolution; making the resolution lower on these kinds of screens will greatly decrease sharpness, as an interpolation process is used to "fix" the non-native resolution input into the display's native resolution output
I've experienced this mushiness at lower resolutions, but not on very recent screens (simply because I don't have easy access to very recent screens). Is what this says still true for what's for sale in 2014? (I'd be using Crunchbang, which is like Debian + Openbox, if this is an issue.) I'm pretty sure it would still be true; just hoping against hope. (I'm thinking of getting something marketed for a resolution rather higher than 1920×1080 but then using it for 1920×1080.) -- Hoary (talk) 13:01, 13 September 2014 (UTC)
- 1) If you never intend to view anything higher than 1920×1080, then it's counter-productive to get a higher-resolution screen.
- 2) If you do sometimes want to view higher-res video, then it might still not be worthwhile, depending on what fraction of your viewing time that would be. If only 1% of your viewing will be at higher res, then it's still not worth it.
- 3) If you do want to view higher res frequently, then look for a 4K-resolution screen. Since that is an integer multiple (2×) of 1920×1080, that gives you some nice options (assuming the screen supports these):
- 3a) Blocks of 4 pixels could be used, without interpolation, to display 1920×1080 images (see the sketch after this reply). That would look sharp, although if it's a large display you may see the pixel blocks and "jaggies".
- 3b) Do interpolation between the pixels. That will still look a bit fuzzy, but better than a non-integer scale factor interpolation.
- 3c) Only use 1/4 of the screen (the center, presumably). That should produce a nice sharp pic, without visible pixels or jaggies, but quite small, of course.
- Another option is to use a separate display device for your 1920×1080 viewing. If you have a 1920×1080 device that still works, and have room for both, this might be an option, and the best of both worlds. If you want a display in another room anyway, this might be a good way to get it there. And your new display device should last longer if you use it less often. 76.226.124.19 (talk) 14:21, 13 September 2014 (UTC)
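A minimal sketch of option 3a above: nearest-neighbour 2× upscaling, where each source pixel simply becomes a 2×2 block of identical destination pixels. The 32-bit RGBA pixel format and the row-major buffer layout are illustrative assumptions, not a description of any particular display's firmware.

```c
#include <stddef.h>
#include <stdint.h>

/* Option 3a: nearest-neighbour 2x upscale. Each source pixel is copied into
 * a 2x2 block of destination pixels, with no interpolation. Assumes tightly
 * packed 32-bit RGBA pixels in row-major order; dst must have room for
 * (2*w) * (2*h) pixels. */
static void upscale_2x(const uint32_t *src, uint32_t *dst, size_t w, size_t h)
{
    for (size_t y = 0; y < h; y++) {
        for (size_t x = 0; x < w; x++) {
            uint32_t p  = src[y * w + x];
            size_t   dx = 2 * x, dy = 2 * y, dw = 2 * w;
            dst[dy * dw + dx]           = p;
            dst[dy * dw + dx + 1]       = p;
            dst[(dy + 1) * dw + dx]     = p;
            dst[(dy + 1) * dw + dx + 1] = p;
        }
    }
}
```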
- 76.226.124.19's advice is more or less OK - but it's dangerous to assume that doubling the resolution *exactly* will cause the screen's firmware to duplicate pixels and thereby give you a sharp image. Some screens have multiple modes - some of which do interpolation to avoid pixellation of TV/video - and others of which may indeed replicate pixels to get you a perfectly sharp image. Others choose to interpolate all the time... yet others are smart about it. Either way, a "4k" display may not be exactly twice 1920 - so even then, you may get interpolation. But 4k displays are horribly expensive - and many graphics cards can't generate an image that big, and when they can, it uses up so much graphics memory that there may not be enough left to run some games full-screen, or even for the desktop compositor to do its work. If you do actually want to generate a 4k image sometimes, and you're sure your software and your graphics card can do it - then sure, that's the way to go. But if you just sometimes need 2400 horizontal pixels - then a 4k display is an expensive solution, and buying two lower-resolution displays will likely be far cheaper. SteveBaker (talk) 14:37, 13 September 2014 (UTC)
- As far as I know, every color display technology in history has had a "native resolution" and been blurry at other resolutions. Color CRTs had a fixed mosaic of pixels too, but it was impossible to aim the electron gun accurately enough to control them individually, so every achievable resolution was blurry.
- You didn't explain why you want to use your monitor at a non-native resolution. If it's to watch Full HD video, that's unlikely to look better on a native 1920×1080 screen than on a higher-res screen, because video normally doesn't have sharp pixel-aligned edges that would be visibly blurred by resampling. In fact a lot of 1080p video is really upsampled 720p video, so it's blurry at its "native" resolution and will look just as good at 1366×768. -- BenRG (talk) 17:12, 13 September 2014 (UTC)
- The funny part to me is that "blurring" and "antialiasing" are, commonly, applications of the exact same kernel: convolution with a Gaussian (or similar low-pass) filter - at least, this is the simplest realization of the technique (a minimal sketch of such a kernel follows this reply). In one case, a marketing team calls this an advantage; in another case, a marketing team calls this a disadvantage.
- If you are concerned about pixel-accuracy to the extent that every single bit value of every single pixel should be under your control, you're in for some bad news: in 2014, there are almost zero displays available on the consumer market that will satisfy your requirements. If you are willing to shell out big bucks and mucho engineering time, you can get such equipment: but you'll need a display technologies engineering team, a graphics processing engineering team, and many many hours to make the technology do what you expect, by calibrating its analog behaviors and operating its digital intricacies.
- In reality, most people do not actually care about every bit of every pixel, because few humans can see anything remotely close to that level of detail. Most of the time, coarse control is sufficient to satisfy user needs: for example, a lot of professional graphics designers want a white balance knob and a gamma correction curve on their displays. High-level software features can turn image-processing features like antialiasing "on" or "off," (usually with nothing in between). Users expect a software-abstraction of a rectangular frame buffer, with square-shaped pixels, with magically co-situated "Red", "Green," "Blue" sub-pixels, with a one-to-one mapping into hardware - even though modern display pixel hardware is not even remotely arranged in that way. Pixels show up on screen after they are processed at the application layer, at the graphics acceleration layer, and (it's 2014!) at the data link layer. That's right: as your pixel bits are banged into the wire connecting your "computer" to your "display," most modern systems are processing the pixels inside the firmware that runs on the wire. You probably don't know how or why there is firmware inside a "wire" - if you wanted such details, you'd spend your 40-hour-workweek visiting display technology symposia, and you'd spend your nights and weekends learning how to write that kind of firmware! For a start, here's a lengthy 200-page textbook on last year's Intel display technology: Intel Embedded Mobile Graphics User Guide, v1.16.
- However, if you tune those parameters, you're intentionally de-calibrating your pixel values - are you sure you're calibrating better than the factory, which presumably had access to optical equipment you can't even dream about?
- And when it comes to resolution, don't you really just want a clean software-abstraction? Trust that the hardware will correctly re-sample your software "1920x1080 RGB" array, which you will update n times per second, mapping this idealized representation into hardware, subject to the sampling theorem, mathematically optimized for minimal error in time-, position-, and color- spaces. A "good" display is one for which each stage of this engineering project has been correctly implemented: but this has essentially no correlation to the number of pixels of "native resolution."
- Nimur (talk) 17:34, 13 September 2014 (UTC)
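To make the blurring/antialiasing point above concrete, here is a minimal sketch of a 1D convolution with a small normalized low-pass kernel (a 3-tap binomial approximation to a Gaussian); whether the result is described as "blur" or "antialiasing" depends only on the context in which it is applied. The kernel weights and the edge-clamping are illustrative choices, and a 2D blur would simply apply the same pass along rows and then columns.

```c
#include <stddef.h>

/* Convolve a 1D signal with a small normalized low-pass kernel (a 3-tap
 * binomial approximation to a Gaussian: 1/4, 1/2, 1/4). The same operation
 * can be marketed as "smoothing", "blurring" or "antialiasing". */
static void smooth_1d(const float *in, float *out, size_t n)
{
    static const float k[3] = { 0.25f, 0.5f, 0.25f };

    for (size_t i = 0; i < n; i++) {
        float acc = 0.0f;
        for (int j = -1; j <= 1; j++) {
            size_t idx;
            if ((long)i + j < 0)        idx = 0;        /* clamp at left edge  */
            else if (i + j >= n)        idx = n - 1;    /* clamp at right edge */
            else                        idx = i + j;
            acc += k[j + 1] * in[idx];
        }
        out[i] = acc;
    }
}
```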
- Generally on modern displays you do have control over each subpixel. On most LCDs the pixels are square and divided into three rectangular subpixels in a pattern that's predictable enough that subpixel rendering works well. The only postprocessing is brightness/contrast/gamma/temperature adjustment which applies independently to each subpixel.
- "Blurring" and "antialiasing" are basically the same thing in this context, but the desirability of it is not defined by marketing but by what other options are available. Pixel art can be displayed pixel-for-pixel, nearest-neighbor resampled, or resampled with antialiasing, and pixel-for-pixel is often the best choice. Vector art can't be displayed at its native resolution because the native resolution is infinite. It can be point sampled or sampled with antialiasing, and often the latter is the better choice.
- I'm not sure where you got the idea that there's firmware in DVI, DisplayPort, or HDMI cables, if that's what you meant. They are just cables. Of course, you need signal processors at either end that understand the wire protocol. -- BenRG (talk) 19:03, 13 September 2014 (UTC)
Thank you all for your informativeness. From above: "You didn't explain why you want to use your monitor at a non-native resolution." No I didn't, sorry. I'm thinking of replacing one of my laptops. There are half a dozen or so reasons for wanting to do this. (None of them is compelling on its own, but cumulatively they are.) One is that the resolution is 1366×768; although this is sufficient most of the time, often it isn't, and 1920×1080 seems good. Now the (imagined?) problem. Today's screens for 1920×1080 seem hardly larger than mine for 1366×768; larger screens (even putting aside freakishly giant laptops) tend to be for still larger resolutions (e.g. 2880×1620), which aren't of interest to me. I realize that today's technology is likely to increase the pixel density (per square millimetre) for a given degree of legibility (it would have to do so for the fancier kinds of tablet to be usable), but I wonder. -- Hoary (talk) 00:48, 14 September 2014 (UTC)
Computer graphics subsystem
I have a good overview of the various parts of the graphics subsystem, but I'm not sure what happens behind the scenes. As an example, could you tell me what happens when you compile an OpenGL program that renders a cube? What does the compiler output? What do the GPU and/or its drivers do with this? --178.208.200.74 (talk) 18:53, 13 September 2014 (UTC)
- To the compiler, the OpenGL calls are the same as any other external function calls. They are resolved at (static or dynamic) link time to an OpenGL library that is typically provided by the operating system, and is independent of the graphics card. That library normally passes the commands and data to the kernel without much processing, and the kernel gives them to the video driver without much processing. The video driver is video-card-specific and uses some combination of CPU and GPU capabilities to do the rendering. The CPU-GPU interface is proprietary and often undocumented. At a bare minimum, the GPU computes perspective-correct coordinates for each pixel in each polygon, runs a shader program to determine its RGB color given its coordinates, and writes the result to a frame buffer, while independent circuitry reads the frame buffer in raster-scan order and sends it to the monitor. On modern GPUs, if you don't supply your own pixel shader, the driver will most likely use a default shader that implements the traditional lighting model. The shader is compiled by the video driver into the GPU's (proprietary) machine code. DirectX has a video-card-independent shader assembly language and a user-mode compiler from HLSL into that assembly language, which avoids the need for an HLSL compiler in every video driver, but I'm not aware of an analogous intermediate language in OpenGL. ARB assembly language exists but doesn't support all GLSL features. -- BenRG (talk) 19:50, 13 September 2014 (UTC)
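As a small illustration of the "driver compiles the shader" step, here is a sketch that hands a trivial GLSL fragment shader to the driver and asks it to compile. It assumes an OpenGL 2.0+ context has already been created (e.g. with GLUT or GLFW) and that the GL 2.0 entry points are available (on Linux/Mesa via GL_GLEXT_PROTOTYPES, elsewhere typically via an extension loader such as GLEW):

```c
/* Compile a trivial GLSL fragment shader. glCompileShader is the point at
 * which the video driver's own compiler turns the GLSL text into the GPU's
 * proprietary machine code, as described above. Assumes a GL 2.0+ context
 * already exists and that these entry points are resolved. */
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>
#include <stdio.h>

static const char *frag_src =
    "void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }\n";

GLuint compile_red_shader(void)
{
    GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(sh, 1, &frag_src, NULL);   /* hand the GLSL text to the driver */
    glCompileShader(sh);                      /* driver compiles it for the GPU   */

    GLint ok = GL_FALSE;
    glGetShaderiv(sh, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[512];
        glGetShaderInfoLog(sh, sizeof log, NULL, log);
        fprintf(stderr, "GLSL compile failed: %s\n", log);
    }
    return sh;
}
```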
- There is no real guarantee of how OpenGL works - every implementation is a little bit different. I know that (for example) on at least one device the OpenGL library builds up a buffer of commands in a special format that the GPU can recognize, and sends the address of that buffer off to the device driver for the graphics card. The driver does a DMA (direct memory access) operation to transfer that block of memory over to the GPU. The device driver has to handle complicated situations such as when one program is drawing in one window while another program is drawing to a different one. It also has to worry about memory management inside the GPU. It is a complicated matter.
- On older Intel graphics chips, the graphics chip wasn't really capable of doing many of the things you'd expect it to be able to do (such as running vertex shaders) - and the device driver would have to run those things on the CPU instead. SteveBaker (talk) 01:41, 17 September 2014 (UTC)
Programming libraries question
Take as an example the C programming language. I know that a line such as x = a + b; would be compiled to something that runs on the CPU as such: move a into register 0, move b into register 1, add registers 0 and 1, storing the result in register 3.... So I can see how it relates to the way the CPU works. But what exactly is happening when you use functions like printf()? --178.208.200.74 (talk) 19:38, 13 September 2014 (UTC)
- printf is usually an ordinary function written in C. You can look at the source code if you want (even Microsoft provides source code for its standard C library), though it's rather complicated. Ultimately it bottoms out at a system call that writes bytes to a file handle (write on Unix, NtWriteFile on Windows). The kernel is also usually implemented in C and is even more complicated. If stdout is a file, the write request will go to a filesystem driver, which will turn it into reads and writes of disk sectors, and the disk driver will turn that into commands to the disk controller, probably using the IN and OUT instructions on x86. If stdout is a pty, the kernel will give the written bytes to the terminal emulator the next time it does a kernel read, and the terminal emulator will use a font rendering library to display the text on the screen, which may be more complicated than everything else put together, at least if outline fonts are involved. Windows doesn't actually have ptys, and communication with console windows uses LPC instead, but the idea is the same. -- BenRG (talk) 20:18, 13 September 2014 (UTC)
- Yes, exactly. Ultimately, computers have either special registers, special memory locations or special instructions for talking to 'peripherals' like disk drives, your screen, the keyboard and so forth. (We call these "I/O" instructions/locations/registers). In a computer like a PC, which has an operating system, those special things are handled by the operating system software (Linux, Windows, MacOS, Android...whatever). In that case, there is software (written in C or C++ usually) which passes numbers back and forth to the peripheral. So, for a keyboard, there might be a special memory location that you can read that tells you whether a key is being held down - and another memory location that tells you what key that is. The operating system reads that information and stores it so that when some application program tries to read from the keyboard, that information is right there in memory.
- So on a computer like that - 'printf' is just C code that ultimately presents a string of ASCII characters to the operating system, which deals with talking to the graphics card to display that string onto the screen. It's crazily complicated to do that because the character has to be decoded, a font has to be selected, the font is probably composed of detailed descriptions of the curves and lines that make up the letters - those have to be broken down into individual pixels, the pixels have to be placed (in the right colors) into the right memory locations within the graphics card in order for them to get onto the screen. Add in the possibility of overlapping windows, magnified views, windows that straddle two or more displays - each (perhaps) with their own graphics card...and the process of turning "printf" into photons of light coming out of the screen is a phenomenally complex process. Almost beyond description. Millions of lines of C/C++ software are dedicated to doing all of that.
- But not all computers are PCs - many so-called 'embedded' computers (the ones that run your microwave oven, or your TV remote) are too simple to have operating systems - and in those cases, your program can directly interact with these special I/O locations. If you're a programmer and you are interested in this stuff, you should DEFINITELY splurge $30 to buy an "Arduino" board and play around with programming it. The Arduino is one of the simplest computers you can buy these days. It has NO operating system at all. If you want to flash the LED on the board, you get to write C++ code to directly send 0's and 1's to the special I/O register that controls the LED. If you connect a display to the Arduino, you can write code to directly turn the pixels on and off. It's surprisingly interesting - and there is no substitute for actually doing that to get some kind of understanding of what's going on "under the hood" on a PC. The Arduino is simple enough to run off a battery, so you can easily write software to make fun gadgets.
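As a concrete illustration of driving hardware with no operating system at all, here is a minimal bare-metal sketch for the ATmega328P used on an Arduino Uno, written in plain C against avr-libc rather than the Arduino libraries. The 16 MHz clock and the 500 ms delay are illustrative assumptions:

```c
/* Bare-metal LED blink on an ATmega328P (Arduino Uno): no OS, no drivers --
 * just writes to the DDRB and PORTB I/O registers. Assumes a 16 MHz clock
 * and the avr-gcc / avr-libc toolchain. */
#define F_CPU 16000000UL
#include <avr/io.h>
#include <util/delay.h>

int main(void)
{
    DDRB |= (1 << DDB5);            /* make port B pin 5 (the on-board LED) an output */
    for (;;) {
        PORTB ^= (1 << PORTB5);     /* toggle the LED */
        _delay_ms(500);
    }
}
```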
Okay, so taking a simple hello world program compiled for Windows, I assume that all that fancy stuff isn't actually defined by printf, but rather printf passes the string to Windows, which then takes care of everything. If so, then this also means that when people write compilers for Windows they need to know how to pass data from their application to the OS. How is this accomplished? The special instructions/memory locations you mentioned? --178.208.200.74 (talk) 00:30, 15 September 2014 (UTC)
- Well, for Windows, I'm not 100% sure - I'll describe what happens in Linux, but probably Windows is similar.
- For 'printf', it happens like this:
- The 'printf' function is written in C or C++ and built into the standard I/O library. Your program links to that library, which (depending on how you linked your code) might mean that this software is included into your ".exe" file - or it might be that your program links to a ".dll" file as it's loaded for execution.
- The code for 'printf' probably uses 'sprintf' to generate a string of characters, and sends them to the underlying output using 'fputs', just as you could do yourself. 'sprintf' and 'fputs' are also C/C++ code in the standard I/O library.
- The code for 'fputs' will call the 'fwrite' function to send the data to the standard-out file descriptor.
- The code for 'fwrite' is also in the standard I/O library. It deals with buffering of the output into convenient sized blocks - but for the standard I/O, it may not bother. Either way, it'll call the 'write' function which does unbuffered I/O.
- Now, the 'write' function is probably still written in C/C++ - but it does almost nothing except to make a special call to the operating system kernel. This special call is something you could do yourself via a low level function or an assembly language statement - but I don't think I've ever seen an actual program that did that! In Linux, this transfers control of the CPU over to the operating system.
- The operating system does a bunch of complicated things here - one is to park your application while it handles the 'write' call. Each device that can be written to has a 'device driver' - and that handles the low level operations like this one...so another call hands the data over to the device driver. Device drivers may be written in C or C++, but at least some of them are written in low level assembly code.
- What happens next depends critically on where your "standard output" is directed right now. If you had it directed to something very simple (like a USB port, for example) then the device driver might write the first byte of your data to the USB port hardware (which exists at a special memory location) - and then returns control to the Linux Kernel.
- Since the device driver (and your code) is 'blocked' waiting for the S-L-O-W hardware to write that byte out, the operating system will probably take the opportunity to run some other program while it's waiting.
- When the USB hardware has finished sending your first byte out, it does a special hardware operation called an 'interrupt'. As its name implies, this operation interrupts the CPU - so no matter what it's doing, it stops doing it and hands control back to the device driver.
- The device driver hands the second byte to the USB hardware and gives control back to whatever was running just before the interrupt.
- This cycle repeats until all of your message has been sent, then the device driver tells the kernel that your 'write' operation was completed.
- The kernel then schedules your program to be allowed to continue when no other programs need to be run...
- Then control returns to 'write', which returns to 'fwrite', which returns to 'fputs', which returns to 'printf', which returns to YOUR CODE! Hoorah!
- Now, that's all well and good if the device you're sending the message to is something simple. If you sent the message to the screen though...OMG! It gets *much* more complicated.
- If you're writing text to the screen then the device you're writing to is called a 'pseudo-tty' (tty == 'teletype'!). The pseudo-tty device driver sends the message off to the windowing system software. In Linux, that's the X window system. X is just a regular program that does graphics stuff. So it takes your string, notes which window it's being sent to, figures out which font is being used and at what font size - and where inside the window the text has to be written. Armed with all of that, on older systems, it would use the font description to convert each character of your message into a bunch of pixels (a little picture of that letter) and send it off to more software that crops the letter shape to the boundary of the window - then physically writes it into the memory locations in the GPU memory space where those pixels need to be. In more modern systems, X makes OpenGL calls to pass the letter off to a program called a 'shader' inside the GPU, which generates a quadrilateral with a texture map representing that letter and places it in the right place in GPU memory. The shader program is probably written in a C-like language called 'GLSL'. But OpenGL also has to send its commands to the GPU hardware via kernel calls and device drivers.
- It's actually even more complicated than that...but I've spent too long typing this message already!
- I've probably messed up quite a few details here - and undoubtedly, Windows does it a bit differently than Linux (sorry, I'm not sufficiently familiar with the inner workings of Windows) - but the broad-brush picture is the same.
- But, as I said before - it all depends on the system you're using. If you use 'printf' in an Arduino program, then 'printf' is C code that calls 'putchar' that directly controls the hardware registers to send the data out of the USB port. No operating system kernel, no device drivers, probably not even any interrupts.
- The whole beauty of this insanely complex edifice that is our modern programming environment is that we don't need to know all of this stuff. We call 'printf' and it does the same thing no matter which operating system you use and no matter which hardware it's running on. When you consider everything it takes to make that happen, it can get totally mind-bogglingly complex.
- SteveBaker (talk) 01:06, 15 September 2014 (UTC)
- A few corrections:
- printf definitely isn't implemented using sprintf because there's no safe way to do that. Technically I don't think it can even be implemented with snprintf because that might give incorrect output for a call like printf("%s%n", &x, &x)—not that anyone would ever write that, but they could. I think that typically the printf-family functions use a shared implementation that writes to a buffer with a callback if it overflows, but it's not available as a public C library function.
- stdout is almost certainly buffered, but if it's attached to a terminal (pty), it's probably line buffered, meaning that if your printf format string ends with \n, it will be immediately sent to the kernel.
- write probably won't block ("park your application") if you pass a small amount of data, because the kernel has its own write buffers. It will copy the data to a kernel buffer and return without a task switch. In this case the system call is not much different from an ordinary function call, except that the CPU switched to a higher privilege mode and back.
- The CPU doesn't wait for an interrupt after every byte because that would be far too slow—USB 2 data rates can exceed 30 MB/s, and it can take hundreds of CPU cycles to service an interrupt. For most devices, the CPU doesn't feed data to the device at all; it gives a memory address to the device and the device reads the data directly from RAM and interrupts the CPU when it's done (direct memory access). Devices that don't support DMA normally have internal buffers so that the CPU can at least send more than one byte at a time.
- -- BenRG (talk) 17:55, 15 September 2014 (UTC)
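To tie the steps in SteveBaker's walkthrough together (while keeping the corrections above in mind), here is a toy sketch of the layering: format into a fixed-size buffer, then hand the bytes to the write system call on file descriptor 1 (stdout). Real C libraries do not implement printf this way; this is only an illustration:

```c
#include <stdarg.h>
#include <stdio.h>
#include <unistd.h>

/* A toy version of the layering described above: format with vsnprintf into
 * a fixed buffer, then hand the bytes straight to the write(2) system call.
 * Real C libraries do NOT work this way (see the corrections above); this
 * is only an illustration of the user-mode -> kernel hand-off. */
int toy_printf(const char *fmt, ...)
{
    char buf[1024];
    va_list ap;

    va_start(ap, fmt);
    int n = vsnprintf(buf, sizeof buf, fmt, ap);
    va_end(ap);

    if (n < 0)
        return n;
    if ((size_t)n >= sizeof buf)      /* output didn't fit; it was truncated */
        n = (int)sizeof buf - 1;

    return (int)write(1, buf, (size_t)n);   /* the system call boundary */
}

int main(void)
{
    toy_printf("X is %d\n", 42);
    return 0;
}
```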
- A few more details on how the call into the kernel is actually made:
- To make a system call, the program typically loads the system call number (sys_write on Linux is #4) and the arguments into CPU registers and then uses the SYSENTER or SYSCALL or INT instruction to switch the CPU into high-privilege mode and jump to a kernel entry point. The user-mode address space is still mapped in kernel mode, so the memory address that you passed to write is still valid, and the kernel can access it using memcpy just like user-mode code.
- On (NT-based) Windows, the only documented way of making a syscall is to call a function in ntdll.dll. Under the hood it works the same way, but because the code is dynamically linked and ships with the kernel, the system call interface doesn't have to remain compatible across OS releases. That was useful when they switched from INT to the newer and faster SYSENTER/SYSCALL mechanism. Linux has to retain compatibility with binaries that have hardcoded INT 0x80 instructions, and I think it actually patches them in RAM to use SYSENTER/SYSCALL after the first call. -- BenRG (talk) 17:55, 15 September 2014 (UTC)
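For the curious, a raw system call of the kind described above can be made from C with a little inline assembly. This sketch targets 32-bit x86 Linux only (compile with gcc -m32), where sys_write is call number 4 as noted above; real programs should simply use the libc write wrapper instead:

```c
/* 32-bit x86 Linux only (compile with: gcc -m32 rawwrite.c). The system
 * call number (sys_write == 4, as noted above) goes in EAX, the arguments
 * in EBX, ECX and EDX, and INT 0x80 switches into the kernel. */
#include <stddef.h>

static long raw_write(int fd, const void *buf, size_t count)
{
    long ret;
    __asm__ volatile ("int $0x80"
                      : "=a" (ret)                               /* result comes back in EAX */
                      : "0" (4), "b" (fd), "c" (buf), "d" (count)
                      : "memory");
    return ret;
}

int main(void)
{
    const char msg[] = "hello via int 0x80\n";
    raw_write(1, msg, sizeof msg - 1);
    return 0;
}
```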
These are all good explanations. I have one quibble, though: several posters have made a point of expressing how "scary complicated" the whole chain is. Now, yes, there's a fair amount of complexity, but this is the Computing desk, and as computer professionals it's our job to appropriately manage complexity. And, in fact, most of the time we don't have to worry about most of that complexity. And, in fact, when we zoom in on one of the individual layers in the complex-seeming layer cake, in most cases we find that each layer is built up in little sub-layer steps that are all individually straightforward and easy enough to understand. (This is, of course, the holy grail of modularity.) So, please, don't be scared off by any of this!
In particular, it has been a hallmark of both Unix and C from the very beginning that they support device-independent I/O. When you call printf, you don't have to worry about whether the ultimate destination of the characters you're printing is a file, or a teletype, or a TCP stream, or a window on a modern display, or whatever. "Everything's a file", so you just call printf, and the lower layers take care of the rest.
In terms of how printf is written, traditionally, anyway, printf, fprintf, and sprintf are all written in terms of something like vfprintf. vfprintf, in turn, is written in terms of putc (as are putchar, fputs, and the rest). putc writes characters to a buffer, and when the buffer is full, it's flushed to the operating system using the write system call, as others have described.
If you want to see how printf itself might be written, there's a stripped-down version in the C FAQ list at question 15.4. (For a non-stripped-down version, see your favorite open-source C library, or this version which for some reason I decided to write many years ago.) —Steve Summit (talk) 09:07, 16 September 2014 (UTC)
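In that spirit, here is a further stripped-down sketch of how a printf-family function can be structured, much like the C FAQ version mentioned above: it walks the format string and pushes every character through putchar, handling only %d, %s and %%. All buffering, error handling and the other conversions are deliberately left out:

```c
#include <stdarg.h>
#include <stdio.h>

/* Emit a decimal integer one digit at a time through putchar. */
static void print_int(int v)
{
    char digits[24];
    int i = 0;
    unsigned long u;

    if (v < 0) {
        putchar('-');
        u = -(unsigned long)v;        /* modular negation: safe even for INT_MIN */
    } else {
        u = (unsigned long)v;
    }
    do {
        digits[i++] = (char)('0' + u % 10);
        u /= 10;
    } while (u != 0);
    while (i > 0)
        putchar(digits[--i]);
}

/* A stripped-down printf: walk the format string and push every character
 * through putchar, handling only %d, %s and %%. */
void mini_printf(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);
    for (const char *p = fmt; *p != '\0'; p++) {
        if (*p != '%') {
            putchar(*p);
            continue;
        }
        switch (*++p) {
        case 'd':
            print_int(va_arg(ap, int));
            break;
        case 's':
            for (const char *s = va_arg(ap, char *); *s != '\0'; s++)
                putchar(*s);
            break;
        case '%':
            putchar('%');
            break;
        case '\0':                    /* format string ended with a lone '%' */
            putchar('%');
            p--;
            break;
        default:                      /* unknown conversion: echo it literally */
            putchar('%');
            putchar(*p);
            break;
        }
    }
    va_end(ap);
}

int main(void)
{
    mini_printf("%s is %d%%\n", "coverage", 42);
    return 0;
}
```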
How can the "same" printf function work regardless of whether you are using it in code for an application or, say, code for an OS? Surely when writing an OS you'd have to handle all the things mentioned above: windows, fonts, etc. So in general, how can C functions just work in all these situations? --178.208.200.74 (talk) 19:44, 16 September 2014 (UTC)
- When I plug headphones into my iPod, just barely enough sound comes out for me to hear when I've got the headphones on my head.
- When I plug in the powered speakers at my desk at work, the music can be heard in just my cubicle.
- At home, when I plug it into my stereo, if I crank the volume up I can rattle the windows on the second floor.
- If I went to a big outdoor concert venue before a show, and struck up a friendship with the guy in the sound booth, we could plug that very same iPod into the monster sound system and pump thousands of watts of sound out of speaker towers a quarter of a mile away.
- How can one little iPod generate a sound signal that can do so many different things? —Steve Summit (talk) 21:30, 16 September 2014 (UTC)
- The point is that what you see as "the printf function" is really "the specification for the printf function". The function itself is likely to be different (possibly wildly different) on different systems. The C or C++ code for the Arduino, Linux and Windows versions of printf are likely to be quite different.
- The thing that you marvel about being "the same" is the specification for how printf works. That's a part of the C language documentation dating back to the language's earliest implementations in the early 1970s. What that specification does is to ensure that every subsequent programmer who is tasked with making printf work on some new system writes a function called 'printf' that provides the exact same interface as every other printf implementation.
- So, it's like asking "How come all DVDs work in all DVD players?" - it's because the specification of how a DVD player has to work is codified someplace - and everyone who builds one follows that specification. Everyone who makes DVD movies has to make sure that their disc works when played by a player built to that standard.
- In practice, all of those printf implementations aren't exactly the same. Some have bugs - others work on only a subset of the '%' conversions, and others add extra '%' conversions that aren't in any of the standards. But pretty much, so long as you don't try to do anything too fancy, you can say printf("X is %d\n",x); on more or less any C system anywhere and you'll get the value of x sent someplace useful as a decimal integer.
- printf in user mode just calls write or NtWriteFile or the equivalent. The kernel knows what type of stream the file handle is attached to and calls a stream-type-specific function to handle the data. It's a form of dynamic dispatch. printf is normally not available in kernel code. The kernel doesn't have a stdout that it could write to, for one thing. -- BenRG (talk) 03:34, 17 September 2014 (UTC)
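The per-stream dispatch BenRG mentions can be sketched with a table of function pointers, loosely in the spirit of a kernel's per-stream operation table. The names below are invented for illustration and are not real kernel APIs:

```c
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* A toy model of per-stream dispatch: every open "file" carries a table of
 * operations, and a single write_to() front end calls whichever handler the
 * stream type installed. Names are illustrative, not real kernel APIs. */
struct stream;

struct stream_ops {
    long (*write)(struct stream *s, const void *buf, size_t len);
};

struct stream {
    const char *name;
    const struct stream_ops *ops;     /* filled in when the stream is opened */
};

static long tty_write(struct stream *s, const void *buf, size_t len)
{
    printf("[%s: %zu bytes to the terminal driver] %.*s",
           s->name, len, (int)len, (const char *)buf);
    return (long)len;
}

static long null_write(struct stream *s, const void *buf, size_t len)
{
    (void)s; (void)buf;               /* discard everything, like /dev/null */
    return (long)len;
}

static const struct stream_ops tty_ops  = { tty_write };
static const struct stream_ops null_ops = { null_write };

static long write_to(struct stream *s, const void *buf, size_t len)
{
    return s->ops->write(s, buf, len);   /* the dynamic dispatch step */
}

int main(void)
{
    struct stream console = { "pty0", &tty_ops };
    struct stream devnull = { "null", &null_ops };
    const char msg[] = "hello\n";

    write_to(&console, msg, strlen(msg));
    write_to(&devnull, msg, strlen(msg));
    return 0;
}
```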