Talk:C (programming language)/Archive 13


"Development of the C Language" by Dennis Ritchie 1Z (talk) 18:50, 23 October 2011 (UTC)

That's already reference #3. Rwessel (talk) 19:21, 23 October 2011 (UTC)

What about this information from "The Unix-Haters Handbook"?

Creators Admit C, Unix Were Hoax

FOR IMMEDIATE RELEASE

In an announcement that has stunned the computer industry, Ken Thompson, Dennis Ritchie, and Brian Kernighan admitted that the Unix operating system and C programming language created by them is an elaborate April Fools prank kept alive for more than 20 years. Speaking at the recent UnixWorld Software Development Forum, Thompson revealed the following:

“In 1969, AT&T had just terminated their work with the GE/AT&T Multics project. Brian and I had just started working with an early release of Pascal from Professor Niklaus Wirth’s ETH labs in Switzerland, and we were impressed with its elegant simplicity and power. Dennis had just finished reading Bored of the Rings, a hilarious National Lampoon parody of the great Tolkien Lord of the Rings trilogy. As a lark, we decided to do parodies of the Multics environment and Pascal. Dennis and I were responsible for the operating environment. We looked at Multics and designed the new system to be as complex and cryptic as possible to maximize casual users’ frustration levels, calling it Unix as a parody of Multics, as well as other more risque allusions.

“Then Dennis and Brian worked on a truly warped version of Pascal, called “A.” When we found others were actually trying to create real programs with A, we quickly added additional cryptic features and evolved into B, BCPL, and finally C. We stopped when we got a clean compile on the following syntax:

for(;P("\n"),R--;P("|"))for(e=C;e--;P("_"+(*u++/8)%2))P("|"+(*u/4)%2);

“To think that modern programmers would try to use a language that allowed such a statement was beyond our comprehension! We actually thought of selling this to the Soviets to set their computer science progress back 20 or more years. Imagine our surprise when AT&T and other U.S. corporations actually began trying to use Unix and C! It has taken them 20 years to develop enough expertise to generate even marginally useful applications using this 1960s technological parody, but we are impressed with the tenacity (if not common sense) of the general Unix and C programmer.

“In any event, Brian, Dennis, and I have been working exclusively in Lisp on the Apple Macintosh for the past few years and feel really guilty about the chaos, confusion, and truly bad programming that has resulted from our silly prank so long ago.”

Major Unix and C vendors and customers, including AT&T, Microsoft, Hewlett-Packard, GTE, NCR, and DEC have refused comment at this time. Borland International, a leading vendor of Pascal and C tools, including the popular Turbo Pascal, Turbo C, and Turbo C++, stated they had suspected this for a number of years and would continue to enhance their Pascal products and halt further efforts to develop C. An IBM spokesman broke into uncontrolled laughter and had to postpone a hastily convened news conference concerning the fate of the RS/6000, merely stating “Workplace OS will be available Real Soon Now.” In a cryptic statement, Professor Wirth of the ETH Institute and father of the Pascal, Modula 2, and Oberon structured languages, merely stated that P. T. Barnum was correct.

Page 338 of "The UNIX-Haters Handbook": http://simson.net/ref/ugh.pdf — Preceding unsigned comment added by 86.125.191.144 (talk) 11:59, 18 December 2011 (UTC)

I haven't looked at the link given, but I read UGH some years ago and can recommend it for some affectionate and interesting (and pointy) commentary. However, the joke about C is not suitable for mention in this article. Johnuniq (talk) 02:05, 19 December 2011 (UTC)
It was a joke. The end. --Macrakis (talk) 16:08, 19 December 2011 (UTC)

Edit in characteristics section by Chcampb

The following discussion was started by Chcampb on my talk page. It properly belongs here. Rwessel (talk) 04:17, 27 October 2011 (UTC)

-- Regarding a bullet edit at C (Programming language) --

I realize the point you made in the revert comment, but the change that I had made addressed several issues:

1) The fact that multiple assignments do not work on the same variable was not made clear. I propose changing the text to "More than one variable may be assigned to in a statement. This is common in idiomatic C."

2) There are two sentences in that bullet; the first is kludgy, and the second does not reference assignment. The fact that function returns can be ignored is stating that they do not need to be assigned, but indirectly. The new bullet was to separate the two ideas (as they were written to be independent anyways).

3) Back to the original bullet, the 'idiomatic' link is not applicable except as an external link to a discussion on idiomatic statements in programming languages. The current article lacks the clarification one would expect from such a link, as it is more concerned with natural languages.

Do you agree? Disagree?... I don't want to change this back just to have you (or someone else) nuke it. — Preceding unsigned comment added by Chcampb (talk · contribs) 03:45, 27 October 2011 (UTC)

Agree partially. The function return comment deserves its own bullet. The idiomatic comment should get pulled. But I think just "multiple assignments allowed" is enough - this is not the spot to delve into what is or isn't undefined behavior according to the standard, and it's rather more complex than that anyway, involving side effects, aliased pointer dereferences, and whatnot. But I've made those changes. For the future, this sort of discussion should be on the talk page for the article, that way all the regular editors would see it (anyone who has C (programming language) on their watch list would also be notified of changes to the talk (discussion) page). For clarity, I'll copy this discussion to the article talk page; let's continue any discussion there. Rwessel (talk) 04:13, 27 October 2011 (UTC)
There is more to it than multiple assignments in a single statement. The fact that an assignment-expression has a value is part of a general design that allows a lot to be accomplished in a single expression. The &&, ||, ?:, and comma operators are part of that, and parentheses and macros also contribute. A general statement to this effect might be more useful than focusing too narrowly on assignment. — DAGwyn (talk) 01:05, 2 December 2011 (UTC)
I don't believe C supports multiple or parallel assignment, where e.g. you can exchange values A and B using A,B = B,A. Nor does it support destructuring assignment, where the elements of a sequence/record can be broken out using something like A,B = quotientremainder(x,y). It does allow chained assignment, but this is not a special syntactic or semantic feature of the language: "=" associates to the right, and returns the assigned value. That is, A=B=3 is equivalent to A=(B=3). --Macrakis (talk) 23:15, 21 December 2011 (UTC)
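A minimal sketch of the distinction drawn above, with illustrative variable names (not from the article):

    #include <stdio.h>

    int main(void)
    {
        int a, b;

        /* Chained assignment: "=" associates to the right, so this is
           a = (b = 3); the inner assignment yields the assigned value. */
        a = b = 3;
        printf("%d %d\n", a, b);   /* prints: 3 3 */

        /* Not a parallel swap: assignment binds tighter than the comma
           operator, so this parses as a, (b = b), a and leaves both
           variables unchanged (compilers typically warn that the outer
           operands are unused). */
        a = 1; b = 2;
        a, b = b, a;
        printf("%d %d\n", a, b);   /* still prints: 1 2 */

        return 0;
    }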

Not efficient on all hardware!

C is not an efficient language for IBM mainframes. Or rather, it was not, until machine instructions were added specifically for C programs.

Example: C extensively uses "null-terminated strings". Handling these strings requires tedious loops that move byte by byte, each time testing for a NULL byte, while mainframes typically used instructions that allow the movement of many bytes at a time - as long as the number of bytes to be moved is known in advance (varying from 256 to 16777216 bytes, 2**24).

Also, C used to be very inefficient on IBM AS/400 systems, because C programmers typically use subroutines extensively, and subroutine calls are very expensive on these machines. But perhaps that is also solved now, because nowadays all systems must properly support C. Rbakels (talk) 14:11, 1 December 2011 (UTC)
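To illustrate the byte-by-byte scan described two paragraphs up, finding the length of a null-terminated string involves roughly the following (a sketch, not any particular library's implementation; a block-move instruction or memcpy, by contrast, needs the count known in advance):

    #include <stddef.h>

    /* Walk the string one byte at a time until the terminating zero
       byte is found; the length is not known until the loop ends. */
    size_t string_length(const char *s)
    {
        const char *p = s;
        while (*p != '\0')
            p++;
        return (size_t)(p - s);
    }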

What is the point of the above? Do you propose some editorial change to the article? Actually one of the earliest C implementations was for the IBM 360 family, and it was reasonably efficient. Your complaint about null-terminated strings suggests that you think "efficient" means "giving easy access to specific hardware features;" C does allow access to block-move operations (e.g. memcpy), but chose a different representation for basic character strings for a variety of reasons (see a nearby discussion about that). Compilers can, and do, optimize generated code to take advantage of particular characteristics of the target platform. As to subroutine usage: if C encouraged a change in the hardware to better support modular programming, that is to its credit; computer scientists and software engineers generally understood the value of functional modularity at the time that C was invented, and nearly every significant programming language has provided a comparable feature. Perhaps the existing text is not clear enough — C aimed to replace assembly language in most circumstances, so it was designed to produce run-time code that was not significantly slower than assembly language in most cases, on most platforms, and it succeeded in that goal. — DAGwyn (talk) 00:55, 2 December 2011 (UTC)
In C, local variables and subroutine parameters are supposed to be stored on the stack, and there are supposed to be machine instructions to directly support this. If there aren't, then at least in assembly, all variables will be fixed in memory. If you want to write efficient code on such a system, use only global and static variables, like assembly would. Don't blame C as a bad language if it has a feature that the hardware doesn't support well; instead avoid using that feature. --BIL (talk) 05:56, 3 December 2011 (UTC)
If need be, a hardware-supported stack mechanism (which virtually all modern CPUs support) can be substituted by another method, e.g. linked segments. It shouldn't be necessary to avoid use of function invocation-local variables, which are a great boon to using recursive methods. — DAGwyn (talk) 06:24, 21 December 2011 (UTC)
There is nothing in the specification of C that says that local variables must be stored on the stack. I've seen implementations for Intel 8080 CPUs that used reserved fixed-address memory areas to simulate a stack (since accessing fixed addresses is considerably faster than calculating stack offsets on that CPU). The reason for local variables is that that is what they are - variables that have local scope to their containing function. As for efficiency on IBM mainframes, C should be about equal to PL/1 on that architecture. — Loadmaster (talk) 21:18, 21 December 2011 (UTC)
I have written compilers for the S/360 and other machines without explicit instruction set support for stacks or stack frames. Not a big deal. It may take a few instructions more than on a machine with such support, but it's not necessarily much slower (cf. RISC). The main challenge is getting efficient and clean error-checking for stack overflow.
Null-terminated as opposed to counted strings have a variety of issues on all instruction set architectures, including architectures with built-in null-string handling. Efficiency is only one of them. The article C string handling discusses some of the issues; the article Null-terminated string discusses more. Those articles could probably bear to be improved.... It might be appropriate in this article to include a tiny bit more on this, e.g. "Strings are not a separate data type, but are implemented as null-terminated arrays of characters with operations supplied by string functions in the standard library." --Macrakis (talk) 23:28, 21 December 2011 (UTC)
That sounds like a good way to state it, and some mention of strings as such should be made since object-oriented languages provide (more or less) intrinsic string objects. However, I don't immediately see a good place to mention it in the article; perhaps under the "Library" section? — Loadmaster (talk) 18:58, 11 January 2012 (UTC)
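A small example of the wording proposed above, using only standard library calls (the buffer size and string contents are illustrative):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char buf[16];                  /* a string is just an array of char */

        strcpy(buf, "hello");          /* copies six bytes: 'h'..'o' plus '\0' */
        strcat(buf, ", world");        /* concatenation is a library call, not an operator */
        printf("%zu\n", strlen(buf));  /* prints: 12 (terminator not counted) */

        return 0;
    }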

Title of an external link in the References section needs to be corrected in the article

In the section titled "Criticism", there is the following line:

         For example, the conditional expression in if (a=b) is true if a is not zero after the assignment,

which cites a reference numbered [17].

And if you click the link of reference [17], which is the following:

          http://www.cs.ucr.edu/~nxiao/cs10/errors.htm

You will notice that the title of the webpage is "10 Common Programming Mistakes in C++", which is not related to C.

So we need to make the necessary changes. Thanking you. — Preceding unsigned comment added by Gansai9 (talk · contribs) 09:34, 30 January 2012 (UTC)
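For context, a minimal illustration of the construct the quoted sentence describes (variable names are hypothetical):

    #include <stdio.h>

    int main(void)
    {
        int a = 0, b = 5;

        /* Assignment, not comparison: a becomes 5, and the condition
           tests that new value, so the branch is taken. Most compilers
           can warn about this construct. */
        if (a = b)
            printf("a is now %d\n", a);

        /* The comparison that was probably intended: */
        if (a == b)
            printf("a equals b\n");

        return 0;
    }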

Infelicitous

Hmmm... "Non-standard", "unusual", "unprecedented", "singular", "counter intuitive"? - Richfife (talk) 18:48, 2 February 2012 (UTC)

Confusing would be another option.--Taylornate (talk) 19:05, 2 February 2012 (UTC)
Confusing is better, although not perfect. I was looking for something along the lines of unexpected or surprising. Perhaps counter intuitive as suggested above, or non-intuitive. Rwessel (talk) 22:31, 2 February 2012 (UTC)
Unexpected or surprising don't indicate a problem. I think counter or non-intuitive have very similar meaning to confusing, but confusing is more direct as a single word. The previous paragraph suggests incorrect which I like the best if it is actually supported by the source which I don't have easy access to.--Taylornate (talk) 04:13, 3 February 2012 (UTC)
Well, the quoted section largely stands alone. The beginning of the quote, "C, like any other language, has its blemishes. Some of the operators have the wrong precedence; some parts of the syntax could be better" is basically identical in both K&R and K&R2, and what comes after is not really on point. So if K&R themselves have called it "wrong", "incorrect" has some justification. But I shy away from that strong a term. A poor choice? Sure. But in matters of language design, I don't think there's an absolute standard of right or wrong we can apply to operator precedence. And even that's problematic: operator precedence is only an informal concept for C anyway.
My problem with "surprising" is that it implies something of an ongoing mystery, which there's not, at least not once you learn the painful truth. "Non-intuitive", I think, captures the problem a bit better. Rwessel (talk) 07:15, 3 February 2012 (UTC)
Do you like non-intuitive better than confusing? If so, why?--Taylornate (talk) 14:14, 3 February 2012 (UTC)
I do, but it's not a big thing. I think confusing has more of a connotation of an ongoing problem, while non-intuitive implies a problem when you first encounter something. The operator precedence doesn't change, isn't inconsistent, etc., it's just not quite what you'd expect. Rwessel (talk) 20:41, 3 February 2012 (UTC)
You make an interesting point. I wouldn't be opposed to changing it to non-intuitive.--Taylornate (talk) 22:24, 3 February 2012 (UTC)
How do you feel about misleading? I think it has a very similar meaning to non-intuitive and flows better.--Taylornate (talk) 15:00, 10 February 2012 (UTC)
I think misleading has the same connotation of an ongoing problem that confusing has. Misleading also has an air of active deception. I understand what Schultz was saying, but I think he was groping for a word as much as we are (and I think K&R's use of wrong is problematic as well). They're both trying to express the idea that the precedence* is not what you'd expect, but I don't think that either wrong or misleading quite captures that. Infelicitous is actually better than either of the two directly quoted options in that it describes a wrong *choice* of precedence, while describing the failure of the language designer(s) and being rather pretentious. Non-intuitive describes the problem from the point of view of the poor programmer having to deal with the language, which I think is more relevant.
*and again, I'd emphasize that technically C doesn't even have a precedence (it has a grammar which defines the bindings of operators, which is similar to a precedence, so we often call it that as a shorthand), so any discussion of the quality of C's precedence is informal anyway
Are we really spending this much effort on this one word? Rwessel (talk) 05:20, 11 February 2012 (UTC)
Agreed, there's only a little progress: we've separated Ritchie from an editor's interpretation of his quote and propagated the opinion onto a stray comment from a little-known author (who might object to this construction - I can't tell at the moment). If it were my own quote which was treated like that, I'd do more TEDickey (talk) 11:18, 11 February 2012 (UTC)
Unfortunately (for me anyway), misleading is still bugging me. I'd be happy with unusual, non-intuitive or unexpected. Probably in that order of preference. Any comments? Rwessel (talk) 10:06, 26 February 2012 (UTC)
nonintuitive is what I'd use. The other terms carry other connotations, which would require specific sourcing TEDickey (talk) 11:35, 26 February 2012 (UTC)
Even nonintuitive implies that someone with no knowledge of order of precedence in any context would be able to intuit the correct order. Which isn't true. All orders of precedence were arbitrary at some point in history (and can be argued against). - Richfife (talk) 16:09, 26 February 2012 (UTC)
True - but my suggestion to check on the context of the original remark seems to have been lost (which I recall was that it was wrong because it differed from other languages). That is, if valid, to qualify the statement appropriately rather than using the editor's preferences TEDickey (talk) 18:16, 26 February 2012 (UTC)
I'd be happy with just about any word that the common reader would be familiar with, which would be most of the words that have been suggested here.--Taylornate (talk) 18:25, 26 February 2012 (UTC)
OK, sorry, I've lost track. Which original context were we going to (try to) track down? Rwessel (talk) 21:03, 26 February 2012 (UTC)
The original dispute arose here, where one editor tried to improve on the wording and was reverted because another editor disagreed that it was minor. I tagged it because the phrase "confusing operator precedence" (a) did not match the quote, and (b) was not close to what I recalled reading of K&R's qualms about precedence (one had expressed misgivings that the precedence did not match (some) other languages, which in context would be perhaps PL/I, Algol - remember this was before Pascal). So "did not match other languages" was all that I saw we could use for that source (verification needed of course). Rather than amend the criticism to keep Ritchie's quote, another editor decided to rephrase it slightly to keep the criticism and find some source to support it. We now have a relatively obscure source (essentially a short comment found in a book), and the phrasing still isn't aligned with the source (that "properly evaluated" is something that injects an opinion). Rather than go shopping for sources to match the criticism that one or another wants to construct, it would be nicer to find an actual authority on language design (preferably several) and summarize their criticisms. TEDickey (talk) 21:21, 26 February 2012 (UTC)
I'm at somewhat of a loss; I'm not aware of too many guru-level language designers who have spent much time enumerating some of C's basic issues. We have "wrong" from DMR quoted just above, and I don't really think unusual or non-intuitive would be that far from a paraphrase of wrong. OTOH, I may be headed back to infelicitous. It would quote DMR himself: http://cm.bell-labs.com/cm/cs/who/dmr/chist.html (search for infelicity). Rwessel (talk) 05:22, 28 February 2012 (UTC)
The statement in question is not a quote and it would be perfectly acceptable to substitute any synonym.--Taylornate (talk) 06:20, 28 February 2012 (UTC)

The problem is, "infelicitous" is exactly the right word. It means that in retrospect we're not happy with the choice that was made. If you really insist on dumbing it down, "non-intuitive" would be the closest match among the suggestions so far. — DAGwyn (talk) 06:48, 22 March 2012 (UTC)

We need to use language that is accessible to the general reader as much as possible, and I see no advantage to using this word that many people will have to look up. The definition I found [1] does not suggest any nuance in meaning that would make it a better choice than anything else suggested.--Taylornate (talk) 21:03, 22 March 2012 (UTC)

precedence - injection of opinion

The source referenced by the editor's remarks does not say anything that could be interpreted as "confusing". Perhaps some editor can provide an apt term without being challenged by those who prefer the injected opinion. TEDickey (talk) 11:41, 9 February 2012 (UTC)

Per the discussion under "Infelicitous" above, should we go with "non-intuitive"? Rwessel (talk) 13:40, 9 February 2012 (UTC)
Like I said before, I think non-intuitive is fine and I'd be open to discussing other options as well. TEDickey: Out of curiosity, do you think infelicitous was acceptable? Would you be ok with confusing if we had a source? Do you really think calling it confusing is dubious?--Taylornate (talk) 14:15, 9 February 2012 (UTC)
How about misleading? I think I like it better than anything else proposed so far and I actually found it in a source: [2]--Taylornate (talk) 14:21, 9 February 2012 (UTC)
As I recall the context of the "wrong", the point was made that the precedence did not match those of some other languages, hence "wrong". Rather than elaborate on the intent of the author, focusing on what he said would be the way to go (unless you'd rather ignore the original source, and find someone else who says "confusing", etc). TEDickey (talk) 15:00, 9 February 2012 (UTC)
Tedickey, would you please comment on some of the other points and suggestions raised above? I'd hope for a bit more input from someone adding a dubious tag and starting a new section.--Taylornate (talk) 15:46, 9 February 2012 (UTC)
"dubious" was applicable because it was doubtful that that exact meaning could be construed from the given source - it was some editor's inference. As I noted, if you really want to say confusing, you should find sources which say that, without making inferences. Your suggested alternate source comments that "the precedence rules are those of algebra, but the sequence for mixed math and logic operations can be misleading" (relative to what is not stated, nor is any justification given for the author's presumption that mixing math and logic should be simple, particularly since C was not designed to be a strongly typed language). By the way, criticism really should be sourced to knowledgeable authors, rather than shopping around for stray comments TEDickey (talk) 21:35, 9 February 2012 (UTC)
Please give a link to a Wikipedia policy that corroborates what you said about my source for misleading.
Your stated reason for applying the dubious tag is invalid.--Taylornate (talk) 01:36, 10 February 2012 (UTC)
WP:RS is where you should start. At the very start it advises the editor to use a balance of sources, fairly representing those rather than being selective to prove a point. The given criticism is slanted, because it (a) misrepresents the quote from Ritchie and (b) omits the usual guidance given to novice programmers to use parentheses when in doubt about precedence. You have so far omitted any discussion of either aspect. TEDickey (talk) 10:24, 10 February 2012 (UTC)
The reference I gave above for misleading is Schultz, not Ritchie, and it is a straightforward reference. I didn't omit anything, as I didn't write this article, and that sounds like WP:NOTHOWTO anyway. I simply replaced a word. I'm familiar with WP:RS and I'm not interested in where I should start. I'm not going to dig through policy to support your opinion. You need to put in some effort here. Link to a specific part of a policy. Instead of tagging something and only saying it's wrong, work toward improving it. What sources would you cite, and what word would you use? Anyway, not every word needs a mess of references after it. At this point I feel justified in removing the dubious tag (you didn't respond to my assertion that your reason was invalid) and changing the word to misleading with the addition of the above source. If you truly believe the precedence is not misleading then you can find some references that say that and then we can talk about it.--Taylornate (talk) 14:58, 10 February 2012 (UTC)

There is nothing intrinsically "misleading" about C's operator precedence. Whether one is confused about it largely depends on one's prior training and experience. The same is true to a lesser extent for other suggestions including "non-intuitive;" however, if you really feel that the term "infelicitous" is infelicitous, "non-intuitive" is close enough. — DAGwyn (talk) 07:00, 22 March 2012 (UTC)

Can you think of any prior training or experience that would make it make sense?--Taylornate (talk) 20:58, 22 March 2012 (UTC)
I agree that beginners do find the operator precedences confusing. In particular, the binary bitwise operators (<<, >>, &, |, ^) have a lower precedence than the other arithmetic operators. The most often recommended rule for writing expressions using more than one of these operators is to use parentheses, which would not be necessary if the precedences were universally intuitive. Kernighan and Ritchie themselves mention (in the original white book) that some of the operator precedences are "wrong". I can't see where having learned any other prior language (other than those that borrow C syntax) would provide the necessary experience to not think that C's operators have confusing precedences. — Loadmaster (talk) 23:59, 22 March 2012 (UTC)
Actually, "bitwise" logical operators within arithmetical expressions are practically nonexistent in ordinary mathematical usage; it is primarily a low-level computer programming notion, and before C it was often expressed using function-call syntax. (In mathematical contexts, usually only scalar Boolean quantities are involved, not vectors of bits.) These days, programming languages that include something similar often mimic C's syntax and precedence. The main reason for uncertainty may be that this feature is not used for most routine programming purposes, so one might not have become very familiar with it. I've seen expressions where parentheses are needed and also expressions where parentheses are not needed. Because of the unfamiliarity issue, it is often wise to employ parentheses judiciously to ensure that the meaning is understood by human code readers as well as by the compiler, even in cases where the compiler would do the right thing without them. (Some compilers or "lint"-like code checkers suggest adding parentheses for clarity in several contexts.) Note also that "beginners" find many things confusing, which doesn't prove much about those things in themselves. — DAGwyn (talk) 12:31, 31 March 2012 (UTC)

"Pass by reference" vs "Pass by value"

"Function parameters are always passed by value. Pass-by-reference is simulated in C by explicitly passing pointer values."

Uhm sorry, but... I don't really get the difference between "Pass-by-reference" and passing a pointer to a variable.

I mean, no matter what your point of view is - if you pass a pointer to a variable, you by all means DO pass a REFERENCE to that variable, and in effect, it's EXACTLY the same as "Pass-by-reference". It is also totally easy because all you need to do is prepend a '&' to the variable name.

So, for me, the correct wording would be "Function parameters can be passed both by value and by reference". — Preceding unsigned comment added by 82.139.196.68 (talk) 18:49, 13 March 2012 (UTC)

You've passed a pointer to a variable. You can use the pointer to reference the original, and 95% of the time that's what you do. However, that's how the language is commonly used, which is distinct from what the language actually does. Some uses for a passed pointer that do not involve dereferencing come to mind (I'm not necessarily endorsing any of these as good practice): doing pointer arithmetic to convert a pointer back to an array index, using a pointer as an ID for an object for logging purposes, bounds checking, adding a pointer to a linked list, and freeing or resizing a buffer pointer. - Richfife (talk) 19:20, 13 March 2012 (UTC)
Also, I think you're confusing a C++ reference with a C pointer. C doesn't have references. - Richfife (talk) 19:58, 13 March 2012 (UTC)
You can pass-by-reference, but you have to do it completely manually, because C itself only does pass-by-value; which is what the article already says and more precisely than your suggested wording. I can manually implement pass-by-anything in any language; what's relevant and interesting is which parameter passing method the language natively (i.e. automatically) supports. --Cybercobra (talk) 21:03, 13 March 2012 (UTC)
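As a concrete sketch of the distinction being discussed (the classic swap example; not from the article):

    #include <stdio.h>

    /* Pass by value: the function receives copies, so the caller's
       variables are unchanged. */
    void swap_by_value(int a, int b)
    {
        int t = a; a = b; b = t;
    }

    /* Simulated pass by reference: the pointer values are themselves
       copied, but the callee explicitly dereferences them to reach
       the caller's objects. */
    void swap_by_pointer(int *a, int *b)
    {
        int t = *a; *a = *b; *b = t;
    }

    int main(void)
    {
        int x = 1, y = 2;

        swap_by_value(x, y);        /* x and y are still 1 and 2 */
        swap_by_pointer(&x, &y);    /* x and y are now 2 and 1 */
        printf("%d %d\n", x, y);    /* prints: 2 1 */

        return 0;
    }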