Talk:Type system/Archive 2
ALGOL 68 supports Union types (modes).
mode node = union (real, int, compl, string);
Usage example for union case of node:
node n := "1234";
case n in
  (real r):   print(("real:", r)),
  (int i):    print(("int:", i)),
  (compl c):  print(("compl:", c)),
  (string s): print(("string:", s))
  out print(("?:", n))
esac
The mode (or "type" in "C" speak) of n would normally be determined at run time from a hidden type field (unless optimised out). So could I call ALGOL 68 "Strong Dynamic" or "Weak Static" typing? Or is there another typing name?
NevilleDNZ 11:03, 1 February 2007 (UTC)
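For comparison, here is a rough Haskell sketch (mine, not part of the original question) of the same tagged-union dispatch: the set of alternatives is fixed and checked statically, but which alternative a given value carries is still read from its tag at run time.

import Data.Complex

-- data type playing the role of the ALGOL 68 union mode "node"
data Node = R Double | I Int | C (Complex Double) | S String

describe :: Node -> String
describe (R r) = "real: "   ++ show r
describe (I i) = "int: "    ++ show i
describe (C c) = "compl: "  ++ show c
describe (S s) = "string: " ++ show s

main :: IO ()
main = putStrLn (describe (S "1234"))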
The human compiler
[edit]This has led some writers, such as Paul Graham, to speculate that many design patterns observed in statically-typed languages are simply evidence of "the human compiler" repeatedly writing out metaprograms.[3]
The quotation is related, but Graham talks about "OO languages" here:
This practice is not only common, but institutionalized. For example, in the OO world you hear a good deal about "patterns". I wonder if these patterns are not sometimes evidence of case (c), the human compiler, at work.
He also uses Python as an example, which is not statically typed. We should either remove the link or put it in a better fitting context. --88.73.13.40 15:46, 24 September 2006 (UTC)
- I'm pretty sure Paul Graham is using "dynamic" in a more expansive sense than merely "dynamic typing". He compares everything to the standard of Lisp, which is just about the most dynamic language that exists. --FOo 03:24, 19 January 2007 (UTC)
- Agreed, I added that sentence, however I misunderstood what he was saying and I've now edited the article to remove the given statement. - Connelly 12:26, 21 February 2007 (UTC)
Dynamic data typing
[edit]Dynamic data typing at present is very short and lacking context. I propose that it is merged into this article. If at some future time there is a lot more written about dynamic data typing, a demerge can be considered, but at present a redirect here is more appropriate as I see it.--NHSavage 09:18, 31 March 2007 (UTC)
- 69.140.164.142 23:26, 5 April 2007 (UTC)
- Compared to the complex, specialized language used in this article, Dynamic data typing is an easy read and sure helped me understand the whole concept of a type system. Let's not lose that aspect if we do a merge into this article's section on static and dynamic typing, which explains the concept but fails to elucidate it for non-compsci majors. --Martinship 07:02, 3 April 2007 (UTC)
- I'll start on the merge on my sandbox page. Comment and edits welcome.--NHSavage 18:11, 3 April 2007 (UTC)
- Merge complete. I hope I have made the article easier for a non advanced audience without losing any information.--NHSavage 19:11, 12 April 2007 (UTC)
What is static typing?
[edit]The article states "A programming language is statically typed if type checking may be performed without testing equivalence of run-time expressions." Now this might be a great definition for Robert Harper's book, or a great definition in the ML Universe, but what does it mean here? How does it relate to the conventional definition of "Statically typed: all type checking of variables is done at the compilation stage?" I don't find it particularly well defined (and thus not particularly meaningful) as written. Also is dynamic typing the complement of static typing? Are the two sets even disjoint? The article uses "A programming language is dynamically typed if the language supports run-time (dynamic) dispatch on tagged data." It seems like you could do all type checking of variables at the compilation stage and also (inefficiently) use dynamic dispatch at run-time, therefore it is possible to make a language that is both statically and dynamically typed. This makes no sense; clear definitions are needed. You can say my objections are pathological, yet Common Lisp is said to be "dynamically typed with optional static type declarations" so one can ask how this was arrived at. Can every statically typed language be made into a dynamically typed dual with the addition of an "Object" or "Variant" and dynamic dispatch? Is then Java not the result of taking a statically typed language and applying the operation which results in a dynamic language? Is then Java not dynamically typed? Is this all just social science B.S. or are there any clear or meaningful definitions to be had? - Connelly 18:33, 20 February 2007 (UTC)
- Good questions. The first is somewhat moot since the "without testing equivalence" definition has been removed from the article, but I'd like to try to explain it. The only place I've run into that definition (the following comments have much less weight if I'm wrong) is in Harper's chapter of Pierce's Advanced Topics in Types and Programming Languages (MIT Press, 2005, p. 305). There are a few things to say about this:
- That definition was given in that book chapter in order to make a certain subtle but important point about module systems, which is what the chapter is about. In particular, he is contrasting static typing not with dynamic typing but with "dependent" typing.
- Harper, unlike Wikipedia, has the luxury of defining his terms however he wants -- especially in contexts like that book chapter, where the only person he has to convince that his definition makes sense is Pierce!
- In my experience, Harper (who was on my thesis committee) likes to give unexpected definitions for terms to make his readers think. His goal is the "aha!" experience that occurs when the reader realizes his wacky definition and the usual one are related, gaining some new understanding in the process.
- IMO, the key to understanding this one is that in the next sentence he says that testing equivalence of run-time expressions is "sometimes called 'symbolic execution'". So try rephrasing the definition to "...without evaluating any expressions in the code being checked." If type-checking requires evaluating expressions, then (in general) it cannot be done before run time because the expressions might perform I/O or have other effects in the course of producing their values, or might not terminate. Thus, a type system with this property cannot be called "static", but (for Harper's purposes) there's not much harm in calling any other type system static.
- As for the second question: "Is dynamic typing the complement of static typing? Are the two sets even disjoint?" I'm glad you asked! No and no! See Type safety#Type safety and strong typing. Under definitions like Harper's, static typing just means that all the checking of types that must be done at compile time (i.e., because the language definition says programs that fail those checks aren't supposed to be compiled) can be done at compile time. So languages with no compile-time type checking are statically typed (yes, you read that right!) -- and any language where the necessary static checking (if any) leaves behind a need to check some things (or everything) at run time is dynamically typed. Thus Common Lisp and Java are both statically typed, and are both dynamically typed.
- So to your third question: "Is this all just social science B.S. or are there any clear or meaningful definitions to be had?" Yes, there are clear and meaningful definitions, but they aren't what most people think they are. Creating a satisfactory Wikipedia article out of this situation involves a lot of B.S. :-) Cjoev (talk) 21:03, 17 December 2007 (UTC)
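To make the point above concrete (that "statically typed" and "dynamically typed" need not be exclusive), here is a small sketch of my own, in Haskell rather than the languages named above: the module is fully statically checked, yet Data.Dynamic provides run-time dispatch on tagged data, so both properties coexist in one program.

import Data.Dynamic

-- run-time dispatch on a tagged value inside a statically checked program
describe :: Dynamic -> String
describe d = case (fromDynamic d :: Maybe Int) of
  Just n  -> "int: " ++ show n
  Nothing -> case (fromDynamic d :: Maybe String) of
    Just s  -> "string: " ++ s
    Nothing -> "something else"

main :: IO ()
main = mapM_ (putStrLn . describe) [toDyn (42 :: Int), toDyn "hello", toDyn True]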
loose typing
[edit]I've seen the term "loose typing" several times. Could we add a section on some of the things this term might be used to refer to, similar to the Strongly-typed programming language article? Herorev (talk) 18:04, 30 April 2008 (UTC)
Meaningless sentence; incorrect example of type inference
[edit]This sentence in the article doesn't make sense to me:
"Careful language design has also allowed languages to appear weakly-typed (through type inference and other techniques) for usability while preserving the type checking and protection offered by strongly-typed languages. Examples include VB.Net, C#, and Java."
In fact, type inference is not used in those languages (except maybe C# 3.0, but surely not current releases of Java - Java 7 will bring some news here). And the examples of type inference in Java, about templates, are not at all comparable to real type inference. So what is the article referring to?
Also, type inference removes just the need for _manifest typing_, i.e. type declarations (see programming language for the term), so it can make a language appear dynamically typed (because it will lack type declarations), like Python; but weak typing (according to the definition in programming language#Weak and strong typing) means having things such as casts from int to string. And neither C# nor Java supports this.
Programming language#Weak and strong typing explains that there is confusion between strong and static typing (and weak and dynamic typing), so I think the author was talking about static and dynamic typing. Anyway, the relation to type inference is unclear, the "other techniques" are unobvious, no citation is given and the sentence is more confusing than useful. So I'm removing it. The partially corrected version is:
"Careful language design has also allowed languages to appear dynamically-typed (through type inference and other techniques (?????)) for usability while preserving the type checking and protection offered by statically-typed languages. Examples include VB.Net, C#, and Java."
--Blaisorblade (talk) 15:31, 24 June 2008 (UTC)
- You are absolutely right. Not only do none of these languages exhibit type inference, but type inference is mischaracterized here. It only removes the need for explicit typing, but unlike weak or dynamic typing does not actually expand the class of programs you can write. I suggest the statement be stricken in its entirety. Dcoetzee 17:09, 24 June 2008 (UTC)
- The removed sentence makes sense except the examples are wrong. It would work if it said Haskell and ML instead of C# and Java. The idea is that you can write ML code as if you were writing Lisp or Python, freely using complex nested data structures with no type annotations, letting the compiler infer types for all the terms in your program. As for strong and weak typing, I think the point being made is that there are a lot of non-overlapping conceptions of what those terms mean. Chris Smith's article (see the external link) discusses this at some length. —Preceding unsigned comment added by 207.241.238.217 (talk) 01:52, 25 June 2008 (UTC)
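As a small illustration of that point (my own sketch, with made-up function names): the following Haskell module contains no type annotations at all, yet every definition is statically checked, with GHC inferring the types given in the comments.

-- GHC infers:
--   pairUp  :: [a] -> [(a, a)]
--   summary :: (Show a, Num b) => [(a, b)] -> [(String, b)]
pairUp (x:y:rest) = (x, y) : pairUp rest
pairUp _          = []

summary xs = [ (show k, v + 1) | (k, v) <- xs ]

main = print (summary (pairUp [1, 2, 3, 4]))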
- Oh I think we can work out what was meant, it's just that the text was unclear. --Paddy (talk) 04:46, 25 June 2008 (UTC)
Memory-safe language
[edit]In Type_system#Strong_and_weak_typing it is mentioned that array accesses should be statically checked. But this is simply impossible, as a value of a run-time variable may be used as an array index. Therefore, I'm more than interested in concrete research efforts that have gone into this particular field.
Many modern languages support tuples, which can be statically indexed. But that's a polymorphic structure, not a monomorphic one like an array. —Preceding unsigned comment added by 84.29.75.6 (talk) 00:29, 9 August 2008 (UTC)
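A minimal sketch of that contrast (example mine): a tuple component is selected by a position fixed in the program text, while an array element is selected by an ordinary run-time integer whose range can only be checked when the lookup actually happens.

import Data.Array

main :: IO ()
main = do
  let pair = (7 :: Int, "seven")
  print (fst pair)                          -- which slot is chosen is part of the program, checked statically
  let arr = listArray (0, 2) "abc" :: Array Int Char
  i <- fmap length getLine                  -- an index that only exists at run time
  print (arr ! i)                           -- bounds are checked dynamically and may fail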
Dynamic typing != no type checking
[edit]A large part of the section Type_system#Static and dynamic type checking in practice is void as the writer assumes, falsely, that dynamic languages do not do type checking, when many check types at run time. For example, Python is a strongly typed language where type compatibility is checked between objects themselves, rather than between any variables referring to such objects, and this is enforced at run time. --Paddy (talk) 04:43, 16 April 2008 (UTC)
- Is this issue still open? If not, could we move it to another section? I disagree that the section suggests that Dynamic typing doesn't do any type checking at all. I think the remark doesn't relate to the current text, or the author doesn't fully comprehend the English text. —Preceding unsigned comment added by Nielsadb (talk • contribs) 00:41, 9 August 2008 (UTC)
Duck-Typing dynamic??
[edit]I disagree that Duck-Typing is dynamic. If anything, it is "type ignorance": moving the whole concept of "what to do with data" outside of the scope of language-defined typing. Instead, interfaces to objects are (more or less) directly called. Directly saying "give me the integer value of x" doesn't make x any less static; x is simply asked to provide the only acceptable value. Sometimes that is itself, sometimes it is interpreted dynamically by x, but that doesn't make the return type of int(x) dynamic, nor does it make x itself dynamic. On a language level, x and int(x) are static (though not necessarily safe)
To me, "dynamic typing" is something like: "if I say x='A' it's a string, but if I say x='3' it's an integer". "duck typing" is more like: "If it walks like a duck, talks like it a duck, it's the other guy's fault if it's not delicious"
Final argument: the ability to call atoi(ptr); doesn't make ptr a dynamic type. You've just explicitly called an operation on it. --vstarre 17:22, 18 January 2007 (UTC)
- A call to atoi statically requires that ptr is implicitly convertible to a pointer to const char, in a proper language that is. Duck typing is just as void a type system as any dynamic system: no checking whatsoever at compile time, just look up the method or operator name at call time (by the way: if such a method/operator does not exist, what would be a sane alternative to call other than to issue a "type error"?). It doesn't get any more dynamic (and unsafe) than that. Suggest to move to "Resolved Issues". —Preceding unsigned comment added by Nielsadb (talk • contribs) 00:59, 9 August 2008 (UTC)
Dynamic typing: a naive view
[edit]Allow me to present my own view about dynamic typing, which might be fairly naive since I do not know the literature on the subject or type theory.
I always thought that dynamic typing was simply the ability to contain, within a single variable, an object of different possible types, hence making the real type of the variable "dynamic".
For example, with nominal subtyping, a variable of type Base can actually contain (or refer to, depending on the semantics of the language) an object of any type derived from Base.
Same with structural subtyping, except derivation is implicit.
That also happens with discriminated unions: if the variable either contains an object of type T1, of type T2, ..., or of type Tn, the real type of what it contains is then dynamic.
None of these, however, are considered dynamic typing with the provided definition, since they're completely type-safe constructs. Does that really make those static typing though? According to the Java description, it seems they are considered as such. When I discovered that I realized my definition was in contradiction with the one other people seem to use.
Of course, all of these can be used in the static domain instead of the dynamic domain, just replace variables by types and types by meta-types. Static subtyping is provided by C++0x Concepts or Haskell Classes, for example. Static discriminated unions would be the same as overloading.
Then there is duck-typing. PHP, Python, ECMAscript and all provide dynamic duck-typing. C++ templates provide static duck-typing.
Duck-typing is extremely unsafe, since the validity of an operation is only checked when it is performed, so it can fail anytime. Well, if it is static duck-typing, hence working in the meta world of compile-time, the only problem it generates is hard-to-understand compiler error messages.
According to the dynamic typing definition, it seems the only valid form of dynamic typing is duck-typing, which funnily enough is not even tied to dynamics.
That seems fairly restrictive to have a term "dynamic typing" if it only exists to speak of dynamic duck-typing.
By the way, dynamic duck-typing is usually equivalent to having all variables of the following 'variable' type (pseudo ML-like code):
type variable = Object of object
              | Function of (variable list -> variable)
              | Procedure of (variable list -> unit)
              | Int of int
              | Float of float
              | Array of (variable, variable) map
              | Resource of int
              (* ... other native types *)
and type object = (string, variable) map
One can also remark that's fairly similar to prototype-oriented languages.
Unless it is converted to some other kind of typing (which would require whole-program analysis), that explains the inefficiency of such dynamically typed languages. A solution seems to be to JIT-compile them, in order to have types evolving within the compiler at runtime.
--140.77.129.133 (talk) 04:38, 30 March 2008 (UTC)
- I was going to comment, then noticed the section title which is apt ;-) Try looking at the Duck Typing page for more info on that subject for example. --Paddy (talk) 07:08, 30 March 2008 (UTC)
I don't see how that brings any more useful info or anything new to the discussion.--140.77.129.158 (talk) 08:54, 3 April 2008 (UTC)
- I thought it would help point out the differences between C++ templates and duck typing. It doesn't help in general to talk of C++ templates being static duck typing, and the duck typing entry, as well as its talk page, would help you find out why. --Paddy (talk) 13:36, 3 April 2008 (UTC)
- I think the author of this "naive" view should be less unsure about his/her opinion. C++ templates are in fact static duck typing and he/she's completely right about the variants (cf. VB). Do not dismiss this issue. I'd be happy to explain the C++ template mechanism (which can be seen as a very advanced form of the C preprocessor, but still: templates aren't code, only expansions are) to those who question this remark.
template<typename T>
bool lessThan(T const& a, T const& b) {
    bool retval = a < b;           // how can this work without a-priori knowledge of whether operator< exists?
    int whatever = a.blabla(b);    // this will even compile as long as the method actually exists... at instantiation time
    return retval;                 // more or less valid C++ code, right?
}
Without any expansions this code will always compile. C++ templates are macros. If you're looking for real generics, look at Haskell, C# or OCaml (not Java). —Preceding unsigned comment added by Nielsadb (talk • contribs) 01:15, 9 August 2008 (UTC)
- Okay, actually they're not macros since you can pass higher-order templates as template arguments, but that's untyped, esoteric and pretty much unknown. C++ does not bring generics to the masses: it requires a very big brain to do a very small job. How many people can say they really understand Boost.MPL? Oh, and I've found that doing stuff the ISO standard STL or Boost way, using generic types (or templates), can require up to 3.2 GB of memory (using Visual Studio 2005), so just don't if you're anywhere near pragmatic in a collaborative project. This would be one of the cases where one of the Python fan-boys would step in (no types = faster compile), but I think most of my routine problems would actually be solved using Haskell. —Preceding unsigned comment added by Nielsadb (talk • contribs) 01:27, 9 August 2008 (UTC)
repeatedly incorrect paragraph about type inference
[edit]under "Explicit or implicit declaration and inference", there is a paragraph with some problems.
"Type inference is only possible if it is decidable in the type theory in question."
That's not true. GHC's "AllowUndecidableInstances" extension does some type inference. The only catch is that when you enable that flag, GHC might terminate successfully, but it might also not terminate at all if you've done the wrong thing.
"Haskell's type system, a version of Hindley-Milner, is a restriction of System Fω to so-called rank-1 polymorphic types"
Nay, that's Haskell98. Everyone (well, hugs and ghc anyway) supports Rank2Types.
", in which type inference is decidable."
Except for polymorphic recursion, which Haskell98 allows but requires explicit type-signatures for. Including those, I'm not sure if Haskell98 is theoretically described by "a restriction of System Fω to so-called rank-1 polymorphic types" (though that's mostly an issue of overly theoretical terminology, to me-on-Wikipedia)
"Most Haskell compilers allow arbitrary-rank polymorphism as an extension, but this makes type inference undecidable. (Type checking is decidable, however, and rank-1 programs still have type inference; higher rank polymorphic programs are rejected unless given explicit type annotations.)"
It makes *complete* type inference undecidable. (But it already was, due to polymorphic recursion; and many other Haskell type-system extensions require type signatures in particular places too.) The presence of one undecidable function can be fixed by a type annotation, and the rest of the module then doesn't require type signatures.
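A small sketch of both situations (my own example, assuming GHC with the RankNTypes extension; the names are made up): the polymorphically recursive function and the rank-2 function are each accepted only because their signatures are written out, while everything around them is still inferred.

{-# LANGUAGE RankNTypes #-}

-- Polymorphic recursion: inference alone cannot find this type, but the
-- definition is accepted once the signature is supplied.
data Nested a = Flat a | Deeper (Nested [a])

depth :: Nested a -> Int
depth (Flat _)   = 0
depth (Deeper n) = 1 + depth n        -- recursive call at type Nested [a]

-- Rank-2 polymorphism: rejected without the explicit higher-rank signature.
applyBoth :: (forall b. b -> b) -> (Int, Char)
applyBoth f = (f 3, f 'x')

main = print (depth (Deeper (Flat [True])), applyBoth id)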
In fact, some Lisp compilers are known to attempt some type inference for optimization purposes.
How should we fix this paragraph? We don't want to just delete it and give the reader the impression that type inference is a magical thing that will fix all their type errors while keeping all their working untyped programs valid. I looked at the main article Type inference for inspiration and wasn't very impressed with it...
—Isaac Dupree(talk) 14:10, 29 May 2008 (UTC)
- I wholeheartedly agree. Since you posted this at the end of May, let's just say we'll have our way with the respective text on the 15th of August. It's about time this article reflected a realistic state to those frustrated with Java and C++, rather than suggesting abandoning the concept because certain installations use a 60s implementation of a type system. At least every computer scientist should be convinced that Python is a step back into medieval times (rather use C# 3.0, Haskell, OCaml, whatever). —Preceding unsigned comment added by Nielsadb (talk • contribs) 01:37, 9 August 2008 (UTC)
Advantages of Dynamic Typing?
[edit]Consider the fragment:
There is also less code to write for a given functionality [...]
in Type system#Dynamic typing. I feel this is incorrectly giving credit to dynamic typing. Most dynamically typed languages (with the notable exception of Lisp) are modern languages with a ditto type system, whereas many statically typed languages (e.g. C++, Java and C# 2.0) use a more conservative system requiring type specifications rather than optional annotations. I feel this is unfair to modern statically typed languages such as Haskell and OCaml, which are statically typed but are smart enough to infer typing information automatically in many cases. This is mentioned in Type systems#Static and dynamic type checking in practice, but the section name may suggest this is just an eventuality. I'm not completely up-to-date regarding C# 3.0, but I've heard it includes type inference, which should finally bring this to the main stream.
Looking at e.g. the (often not idiomatic and highly optimized) code at Language Shootout, I find most Haskell and OCaml code to be just as concise as (say) Python code. This issue about amount-of-code or lines-of-code keeps getting mentioned by dynamic typing advocates. I'd like to see a realistic comparison between a modern statically typed language such as Haskell and a language like Python with all its pre- and post-conditions that could've been checked via typing.
Continuing with the section:
In practice, however, at least for some types of problems, these speed differences will not be significant or noticeable
This should be put in the next section, as the sentence's first two words suggest. Also, the Language Shootout suggests a different picture: Python and its ilk are significantly slower than high-level statically typed programming languages. This is not something to ignore; in many cases we're talking an order of magnitude. Of course, the author meant to say that for much code performance isn't a key factor. Also, he meant to say dynamically typed languages are often interpreted, and performance is horrible because of it.
I find it only fair to include a counter-argument just before closing this remark: a static type system can be confusing. Writing generic code comes more or less automatically with dynamically typed languages (resulting in run-time errors if a typing error is made, which may be after deployment); with statically typed languages it can sometimes be very confusing to get code to compile in the first place. For example, Haskell has type extensions for existential types, generalized algebraic data types, type families, and it goes on and on. In a presentation about Boomerang by Benjamin C. Pierce (!) a short overview is given.
To summarize: in an old language typing can be a nuisance but it'll find some errors. In a modern statically typed language typing will find significant errors automatically for you, and your code will run faster. Only when you're doing something incredibly clever will you need an incredibly clever type system to make the code compile, but what exactly are the odds of getting that code right without any checking whatsoever?
I would be happy to propose some new text for that particular section if the original author accepts some of the arguments I've made here. I'm guessing he/she is a dynamic language supporter. I'm a static typing supporter. Yet I'm most interested in presenting a well-balanced text about the pros and cons of static typing on Wikipedia, but I fear the current text doesn't reflect the current situation of programming language research (or regarding engineering: consider the stability of the language Python and its interpreter against Haskell and its compiler GHC). —Preceding unsigned comment added by 84.29.75.6 (talk) 00:01, 9 August 2008 (UTC)
- Just a note, that was me (logged in now). —Preceding unsigned comment added by Nielsadb (talk • contribs) 01:39, 9 August 2008 (UTC)
- I think it is fair to contrast the most widely used static languages (by a large margin), such as Java, C++, and C, against the most widely used dynamic languages, such as Perl, Python and Ruby. Yes, there are other languages, but I think it is a fair generalization (generalizations are never fair to all). If you look [here], for example, there is no mention of OCaml and Haskell; their usage is minor (but their future influence... who knows?).
- On speed: the shoot-out tries to measure the speed in absolute terms. The sentence you quote is linked to quality. If you have to do a task and the task must run in a particular time, then making it run faster can be very wasteful. If your task is to write a Tetris-like game for multiple platforms, and you write a first version in a dynamic language and it not only works but gives good game-play, then you would be stupid to re-implement it in a static language for greater speed. The reimplementation would automatically detract from quality: if you think of quality as defining what is needed and then meeting that need, then extra speed does not automatically give you extra quality and usually detracts from it as you divert resources on a superfluous quest.
- On finding errors: static typing finds type errors. I'm against any text that omits the fact that type errors are not all errors. I've seen and read of too many cases of people under pressure passing off something that merely compiles - even when they have gone to the trouble of putting in place test suites and automatic testing. (And had to debug the result).
- In short, I don't think it is incorrectly giving credit to Dynamic languages, and, if there is a wider audience of mainly static language practitioners out there, then I think it is fair to not hide Dynamic language advantages. --Paddy (talk) 07:16, 9 August 2008 (UTC)
JavaScript Dynamic Typed ?
[edit]Are you sure JavaScript is dynamically typed, as you can declare variables without first initializing them?
i.e.: Var x;
x="Wiki"; Paulchwd
- First of all, you are just declaring a name with no associated type. You are giving it a type only when the assignment takes place. Still, consider the following:
var x=true; alert( typeof(x) );
x='test'; alert( typeof(x) );
x=3; alert( typeof(x) );
- It's perfectly valid JS code and we do change the type of x three times. Declaration has nothing to do with whether a language is statically or dynamically typed, except the fact that in most statically typed languages you must declare a variable to give it a type (but it could be done in assignment also). You can still declare in some dynamically typed languages but it's of less use. Technically, you could even have a type declaration in a (hypothetical) dynamic language like
int i = 6;
and then change the type.. huh, since it's dynamically typed. So yeah, we ARE sure JS is dynamically typed.
And one more note: JS is case sensitive: Var x;
will not compile. —Preceding unsigned comment added by 89.25.28.95 (talk) 21:53, 24 October 2008 (UTC)
Redundancy
[edit]Parts of this article seem to say the same things over and over again. The Dynamic typing and Static and dynamic type checking in practice sections repeat many of the same discussions and arguments, which is then repeated in the Controversy section. --Allan McInnes (talk) 01:25, 16 January 2009 (UTC)
Improving the description of DT to remove bias
[edit]In order to prevent the continuation of an edit war (for which I acknowledge my share of the fault, since I should have just partially modified a proposed change which had some good points instead of making a complete revert and throwing the baby out with the bathwater), I have created this section to discuss a proposed change.
A previous version of the article contained:
Compared to static typing, dynamic typing can be more flexible, though at the expense of fewer a priori guarantees. This is because a dynamically typed language accepts and attempts to execute some programs which may be ruled as invalid by a static type checker.
That represents an NPOV because it shows an advantage of DT over ST (flexibility) as well as a disadvantage (fewer a priori guarantees).
Now, User:Paddy3118 has proposed to change it to:
Compared to static typing, dynamic typing is more flexible, for example by allowing programs to generate types based on run-time data.
While I am in favor of incorporating the example he provided to enrich the text ("for example by allowing programs to generate types based on run-time data."), I do not think it is proper to remove the disadvantage previously listed ("though at the expense of fewer a priori guarantees. This is because..."), because this makes the section biased towards DT, violating WP:NPOV.
Another piece of text in the previous version:
Testing is a key practice in professional software development, and is particularly important in dynamically-typed languages to exercise the dynamic type checks over a range of possible program executions, since there is no static type checker.
It is biased in favor of static type checking, and this is not good. Static type checking is not a replacement for unit testing, just a complement of it.
In the new version, User:Paddy3118 proposes to change it to:
Testing is a key practice in professional software development, and is particularly important in dynamically-typed languages. In practice, the testing done to ensure correct program operation is a more rigorous test of a program's correctness than mere static type checking.
I think this removes the bias towards static type checking. I would just change "than mere static type checking" to "than static type checking by itself".
I'll edit the version proposed by User:Paddy3118 to keep his additions that improve the text (the example, the information about unit testing, ...) while removing the bias. If User:Paddy3118 or someone else objects to the change, please discuss it on the talk page instead of simply reverting the edit without discussion. Thanks. --Antonielly (talk) 20:31, 14 January 2009 (UTC)
- Thank you for explaining your reason for change.
- On reading your changes in the first sentence; I should explain that there are valid programs that can be expressed in a dynamic programming language that cannot be statically type checked. Your changes in this sentence amount to saying that because such programs cannot be statically type checked, they should be treated as invalid, incapable of performing their function. That is why I made the change removing the statement to the effect that "The only valid programs are those that could be statically type checked", which may or may not have been the intention of its original author, but which can be read that way.
- On reading the changes in the sentence originally containing the word 'mere', I can see that this might be provocative, and on reading your replacement text, accept the changes you made as equitable.
- In summary, I think there is still an issue with keeping the change that includes "... though at the expense of fewer a priori guarantees. This is because a dynamically typed language accepts and attempts to execute some programs which may be ruled as invalid by a static type checker.". --Paddy (talk) 04:29, 15 January 2009 (UTC)
- Alright, I agree :) . Fortunately, another editor has rewritten the misleading sentence to remove the potential POV you have pointed to. If you still feel the result is POV, feel free to reword the sentence. In the end, what is important is that the tradeoff is clearly shown: the advantages and costs of the flexibility provided by DT. This way we can achieve NPOV.
- And again, I apologize for having completely reverted your previous edit. I hope the misunderstanding has now disappeared. --Antonielly (talk) 14:50, 16 January 2009 (UTC)
Re: What is static typing?
[edit]This is another item within the static typing topic: I have to disagree with the statement about C# in the article "However, a language can be statically typed without requiring type declarations (examples include Scala and C#3.0)". C# executes in the .NET runtime, which allows us to use concepts beyond static compilation. It is true that C# is strongly typed, so I propose striking the reference to C#3.0 in the sentence of the article. --Ancheta Wis (talk) 13:16, 25 December 2007 (UTC)
- This only refers to C#'s type inference and its var keyword for type-inferred local variables, so that is a valid statement. BTW, what do you mean with "beyond static compilation"? Sure, you have some infrastructure like CodeDOM which can be used to compile C# code at runtime... but in the general case, C# is statically compiled to CIL. 62.218.223.47 (talk) 21:02, 14 February 2009 (UTC)
Alternative formulation for Dynamic Typing
[edit]Dynamically typed programming languages associate types with values rather than with variables. Therefore the type of a variable's value may only be available at runtime. A variable may also have different values with different types during its lifetime.
dynamically typed:
var x = 5 // the variable x has the value 5 of type int
print x
x = "foo" // now the variable x has the value "foo" of type string
print x
statically typed:
int x = 5 // the variable x has the type int and value 5
print 5
x = "foo" // invalid: the variable x still has type int so the string "foo" can't be assigned to it
print x
Tdanecker (talk) 21:36, 14 February 2009 (UTC)
- The problem with this formulation is that types are not associated just with variables, but more generally with expressions (see for example Pierce's text, in which he describes type systems for languages that don't even have variables). A concrete example of this is the following Erlang program:
-module(testtyping).
-export([run/0]).
run() -> 5 + "foo".
- Erlang is a dynamically-typed language. The program above will compile just fine (ok, the Erlang compiler will give a warning that it may generate a runtime error, but that's only because it's doing some limited static checking), but running it causes a runtime type error. Note that the program contains no variables, just an ill-typed expression that would pass compilation in a statically-typed language.
- As an aside, I'll just note that the following Haskell program compiles and runs just fine:
main = do
  x <- return 5
  print x
  x <- return "foo"
  print x
- Haskell is strongly statically-typed.
- --Allan McInnes (talk) 04:08, 25 February 2009 (UTC)
Merge proposal
[edit]- The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
- No consensus to merge. WTF? (talk) 02:33, 24 January 2013 (UTC)
What is the difference between the subjects covered by the articles Type system and Data type to justify setting those articles apart rather than merging them? Thanks in advance. --Antonielly (talk) 02:17, 6 May 2009 (UTC)
- I think merge proposals should be accompanied by detailed reasoning for doing so, not the other way around. Since you want to merge them, you should elaborate and justify the reasons for this. Kbrose (talk) 04:53, 6 May 2009 (UTC)
- In 2006 someone created data type saying that "type system is moved out of the way".[1] I don't have a personal view but I'm going to remove the tag on the articles given the lack of a merge rationale and discussion. Thincat (talk) 10:07, 8 June 2009 (UTC)
Wrong statement
[edit]I'd be glad if somebody could confirm this (which is why I didn't edit/correct the article page), but in section four, subsections 4.3 and 4.4, the statements beginning with 'In a subclassing hierarchy' are IMO exchanged, i.e. the intersection of a type and its ancestor is NOT the most derived type but the ancestor type; the most derived type (present in the operation) is the UNION of the type and its ancestor. Bmesserer (talk) 15:35, 30 June 2009 (UTC) P.S.: Sorry, I couldn't get a link to point to the (sub)sections
Dynamic Typing, type checking is secondary
[edit]- A programming language is said to be dynamically typed, or just 'dynamic', when the majority of its type checking is performed at run-time as opposed to at compile-time. In dynamic typing, types are associated with values not variables. Dynamically typed languages include Lua, Groovy, JavaScript, Lisp, Objective-C, Perl (with respect to user-defined types but not built-in types), PHP, Prolog, Python, Ruby, Smalltalk and Tcl/Tk. Dynamic typing can be more flexible than static typing (for example by allowing programs to generate types and functionality based on run-time data), since static type checkers may conservatively reject programs that actually have acceptable run-time behavior.[1] The cost of this additional flexibility is fewer a priori guarantees, as static type checking ensures valid use of types throughout all possible executions of the restricted programs allowed.
Dynamic typing is not used mainly for type checking. Data is tagged with a type. When functions are applied to data, they either don't check their arguments at all, select an implementation for the function based on the data's type, or check the data type.
Consider: (+ 1 2 3) in Lisp. The main purpose of dynamic typing here is to select the right addition (here integer addition) or raise an error if no addition is possible.
Also (defun foo (bar) (baz bar)): the function FOO just passes the data to the function BAZ; it does not care about the type of bar at all.
- Dynamic typing may result in errors with variables having unexpected types (like string instead of integer or an extra object field because of a typo in the assignment). This buggy behavior could appear a long time after creating the assignment, making the bug difficult to locate.
I thought that with dynamic typing it is not the variables that have a type, but the objects that are type-tagged. So the first part of the sentence is already wrong.
Joswig (talk) 16:16, 18 July 2009 (UTC)joswig
I rewrote the latter paragraph ("Dynamic typing may result in errors..."), hopefully clarifying it. Ezrakilty (talk) 21:58, 18 July 2009 (UTC)
Static and Dynamic type checking
[edit]Some people seem to see this article as a battleground for a dynamic vs. static type checking war. There were a lot of POV corrections to make static type checking look stupid and to make dynamic type checking look like the best thing ever. Strange arguments like "Static type checking may refuse a program which would run OK" and "Static type checking cannot find all bugs in a program, so it is useless" are explained in detail. Conclusions like "Static type checking is never type-safe" are drawn without mentioning that static type checking can be type-safe when some other conditions (like no type conversions and no random changes in memory) hold. Besides: a language could be statically and dynamically type checked. It is said that static type checks are used as an excuse for bad testing without mentioning that tests, especially code coverage tests with 100% code coverage, are done rarely. Besides: not even a code coverage test with 100% code coverage can verify that no type errors will occur at runtime, since the combination of all places where values are created and all places where a certain value may be used must be taken into account. Additionally: all these tests must be repeated after every change in the program. Arguments on which most professionals and computer scientists agree, like "The earlier you find a bug the better", are not mentioned at all. IMHO static and dynamic type checking have their advantages and drawbacks and this article should give an NPOV view. When someone was annoyed by a compiler complaining about type errors in his programs, he should not take revenge by writing his prejudices into this article. Raise exception (talk) 08:32, 28 July 2009 (UTC).
- You yourself do not hold a neutral point of view. I came to an article which defined static typing in its section with little or no reference to its limitations, and a dynamic typing section littered with them. This is despite there being a separate "Static and dynamic type checking in practice" section. Your comments contain misquotes as to what has been added to the ST section, and no mention of the absurdities that have been removed. The article never said that ST can not be type safe, only pointed out that the popular ST languages were so. The ST section was making much out of its use to 'test' hard to reach code without mentioning, for balance, any limitation to the efficacy of ST as a general test methodology. That is not neutral.
- An outsider probably has a more neutral view about my neutrality. :-) The shape of the article, when you came to it, is not my fault. IMHO I did not misquote: the ST section contained a longish explanation that "Static type checking may refuse a program which would run OK". This is true, but it is not very important, and the way it was written gave the impression that static type checks are a hindrance. My attempt to explain that "Static type checking can find type errors in rarely executed code paths", and why this is not easily achieved with dynamic type checking, was removed by you. Additionally you tried to turn the argumentation around such that the original meaning got lost. It is clear that static type checks cannot find all errors and that intensive testing is necessary anyway. But the fact remains that static type checks can uncover errors that could stay hidden with dynamic type checking even after tests with 100% code coverage succeed. The reason is simple: the combination of places where values are created and where a certain value is used must be taken into account. A code coverage test only assures that all code paths in a program are tested, but not that all combinations of code paths are tested. It is IMHO NPOV to point out this difference. Raise exception (talk) 14:45, 28 July 2009 (UTC)
- Some may criticize the position that explicit type casts make a language not type safe. A possible view (not necessarily my view) could be: The programmer said: "Trust me, I change the type on purpose". Sure, errors can easily happen this way, but an explicit cast is comparable to a function call. And function calls allow that the result has a different type than the argument. This way explicit type casts between simple types (references and pointers are probably something different) are viewed as functions changing the type. Raise exception (talk) 14:45, 28 July 2009 (UTC)
- Your last sentence is pure conjecture (and wrong, by the way). The point remains: the ST section and DT section should either both have criticisms or neither should. There is another section where criticisms and comparisons abound, so I would prefer not, but ST script kiddies cannot resist the temptation to add a dig at DT in the DT section; and now it seems they will not admit a blemish in the ST section either. That's hardly balanced.
- Note that I did not mention your name in the paragraph. The last sentence was just general talk about possible reasons to hate static typing. Many people are forced to learn statically typed languages in school/university and many people do not like to be criticized. Some people hate the nitpicking compiler which complains instead of just reading the mind of the programmer and "doing what I think". The ST and DT sections should both have some criticisms. But the criticism should not give the impression that users of ST or DT are stupid. The other section should contain direct comparisons. BTW: Most scripts (like Perl, Python, Ruby, JavaScript, Lua, ...) are dynamically typed and therefore "script kiddies" are seldom fans of statically typed languages. Have you ever written an interpreter or compiler? There are reasons why most interpreted scripts are dynamically typed and why most compiled languages are statically typed (I know that the implementation and not the language is interpreted/compiled). Your view of other people adding digs and blemishes is hardly balanced. When somebody writes pro ST or contra DT he/she is not your enemy or stupid. There are other people with knowledge of CS besides you. Raise exception (talk) 14:45, 28 July 2009 (UTC)
- Read your last change: you have replaced a reference to a WP article on Type safety, which names all of the most commonly used ST languages (Java, C, C++) as not being type safe, with a comment without relevant links at all (I discount the links to loophole and programming language specification, as they are not nearly as appropriate).
- Do you refer to the change where I undid your change? Well, the older sentence was just better, easier to understand and your link on type safety was a red link. BTW: Why did you remove the original sentence instead of adding your information and your link? Raise exception (talk) 14:45, 28 July 2009 (UTC)
- I will refrain from any immediate edits around this point, but I suggest you revisit your recent changes and strive for that neutral point of view that currently evades you --Paddy (talk) 11:30, 28 July 2009 (UTC)
- Static and dynamic typing both have their assets and drawbacks. They are different and both have advantages in certain places. This facts should be pointed out without the drive to prove something. BTW: I am always interested in a neutral point of view (and NPOV is not evading me). Raise exception (talk) 14:45, 28 July 2009 (UTC)
- I just corrected the paragraph comparing static type checks and code coverage tests in the case of type errors in hard to reach code. To avoid conflicts with fans of DT, dynamic typing is not mentioned. The paragraph just mentions that code coverage tests cannot guarantee to find such errors. It is IMHO clear that testing is important and that many other errors are not found with static type checks. Therefore, and to keep the paragraph short, I do not mention it. Raise exception (talk) 15:12, 28 July 2009 (UTC)
Merge manifest typing here
[edit]That article just uses different terminology to describe the same concepts as here. But it has some code, which is the only thing that saved it from a straight redirect. Pcap ping 16:48, 23 August 2009 (UTC)
Type errors will not occur in any possible execution of a program
[edit]Maybe some people do not know it: under some conditions (e.g. static typing, type casts forbidden (or not used) and no random overwrites of memory) type errors will not occur in any possible execution of a program. Please tell me if I forgot some precondition. Having no runtime type errors is one of the promises most people expect from a static type system (for sure: when type casts are used or random memory is overwritten such a promise will be void). It is also clear that a type safe type system requires such conditions to be ENFORCED (some things are easier to enforce and others are harder to enforce, but all preconditions can be enforced). This is one way to reach a type safe type system (when all preconditions are enforced in the type system). Coding guidelines to "enforce" these preconditions are certainly not enough to reach type safety. The original sentences removed were:
- When "loopholes" are avoided and when overwriting of random memory areas (e.g. with pointers) is prohibited static typing will ensure that type errors will not occur in any possible execution of a program. When the preconditions mentioned above are enforced the type system can be made type safe.
Maybe my explanation does not make everything clear. Maybe someone else should have a look. There is always room to improve, but just removing the information is not OK. Georg Peter (talk) 13:49, 18 August 2009 (UTC)
Just to make it clear: a language which is not type safe cannot be changed to be type safe afterwards. I am talking about something else (preconditions and promises) and how these things can be used to construct a new type safe type system. Georg Peter (talk) 14:03, 18 August 2009 (UTC)
- If you make a mistake in static typing (ST), then your ST system will find it at compile time. If you mistakenly include a "loophole" then your ST system will not necessarily report an error. A type safe ST system would report an error. I object to you stating that what you propose is Type safe as you are relying on purely manual checks. —Preceding unsigned comment added by Paddy3118 (talk • contribs) 13:26, 24 August 2009 (UTC)
- It was not my intention to give the impression that something can be made type safe by relying on purely manual checks. I wanted to point out how a language with a static type system can be designed to be type safe. BTW I do not consider all type conversions automatically type unsafe, just specific ones (see below). IMHO a language can allow particular type conversions and still can be type safe. Georg Peter (talk) 15:57, 24 August 2009 (UTC)
I think you should not give the impression that excluding type-unsafe parts of a language makes it type safe unless these omissions are checked by the type system, i.e. by having at least one type safe type system and allowing a switch to a type-safe type system for the compilation of a program. (Are there any languages with multiple type systems?) This is certainly not the case for the most (overwhelmingly) popular statically typed languages. --Paddy (talk) 03:21, 19 August 2009 (UTC)
- The article is about type systems in general and about their theoretical and practical properties. The beginning of the paragraph already mentions that popular statically typed languages allow the static type system to be circumvented (therefore they are not type safe). AFAIK there are no languages with multiple type systems. But when a new language (with a new type system) is designed, some rules apply. These rules should also be pointed out, not just the things that popular languages do. Wikipedia is an encyclopedia. Therefore also theoretical things count. One thing that I want to point out is: a static type system (for a new language) can be designed in a way to make it type safe.
- About type conversions themselves: some people seem to believe that type conversions are automatically type unsafe. This is only half of the truth, since type conversions/casts come in several flavors. It is clear that casting between pointers to different structures is NOT type safe. But there are also other conversions/casts which are more similar to arithmetic operators or functions (which may throw an exception under some circumstances). Consider a conversion from boolean to integer which converts FALSE to 0 and TRUE to 1. Since boolean allows only the values FALSE and TRUE, this conversion can be considered safe (a language with static and dynamic type checks would change the dynamic type info also). A proposed conversion from integer to boolean can convert 0 to FALSE and 1 to TRUE, and any other value would throw an exception. This exception throwing must be distinguished from a type error since it is more comparable to a division by zero exception. It would be nice if you helped to describe such things instead of just removing paragraphs. Georg Peter (talk) 15:57, 24 August 2009 (UTC)
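A small sketch of the two conversions described above (my own function names, written in Haskell for concreteness): both are ordinary well-typed functions; the second can fail at run time, but the failure is a value error comparable to division by zero, not a type error.

boolToInt :: Bool -> Int
boolToInt False = 0
boolToInt True  = 1

intToBool :: Int -> Bool
intToBool 0 = False
intToBool 1 = True
intToBool n = error ("no boolean corresponds to " ++ show n)   -- value error, not a type error

main :: IO ()
main = do
  print (boolToInt True)   -- 1
  print (intToBool 0)      -- False
  print (intToBool 2)      -- raises a run-time exception, yet the program is well-typed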
- The misleading text is this:
- "When such "loopholes" are avoided and when overwriting of random memory areas (e.g. with pointers) is prohibited, static typing can effectively ensure that type errors will be avoided (outside of artificial corner cases), and the type system can become type safe for practical purposes."
- The sentence can be read in two ways. That the loopholes are in the static typing system and you are asking for a new static typing system without them, OR that manual avoidance of loopholes can change an un-safe ST system into a safe one - which is not the case.
- In the first case, you are merely stating what constitutes a safe type system and this might be better left to the type safe article. In the latter case the erroneous information should be removed. Either way, I cannot see much use for the sentence as it obfuscates more than it enlightens.
--Paddy (talk) 15:21, 26 August 2009 (UTC)
Dependent types
[edit]This was added a couple of weeks ago. I've removed it because it's not true as it was stated. In LF or CoC, for instance, all programs are terminating, so neither is Turing-complete. But you can't write a decidable dependent type system for a Turing-complete system if the "arbitrary specification" is "this expression terminates", cf. Rice's theorem. I chose to remove that sentence rather than clarify it because that discussion is too side-tracking in the first section of this article. Pcap ping 19:00, 1 September 2009 (UTC)
Intro changes by User:Ketil
[edit]I've reverted [2] as a non-improvement. Please consult MOS:INTRO for how to phrase the 1st sentence. Also "A type system is a loosely defined term for associating metadata (one or more types) with each program value; by examining the flow of these values," is clear as mud. Which values is "these values" referring to? Pcap ping 09:41, 1 September 2009 (UTC)
- Fair enough. (The "mud" part was there originally, I just shuffled things about a bit.) The main problem with this page as I see it, is that "type system" and "type" are terms that mean completely different things in different contexts. I think that the introduction starting with "may be defined as" and presenting one particular definition is very confusing, irrespective of how well it may conform to MOS:INTRO (and upon reading it, I actually think my shuffled version was more conformant). (talk) 11:32, 1 September 2009 (UTC)
- I realized the "mud part" was originally there after I posted this, so I attempted to clarify it immediately thereafter. This is all about the typing relation, but I did not want to introduce that much jargon in the lead. Pcap ping 18:34, 1 September 2009 (UTC)
- I'm also curious why you reverted "limited" back to "very limited". Was this a mistake, or do you have a citation for that? kzm (talk) 11:32, 1 September 2009 (UTC)
- That wasn't my intention, although now that you mention it, POV-like qualifiers such as "limited" or "very limited" aren't that great an idea in a technical article even if cited. I reworded it. Pcap ping 18:34, 1 September 2009 (UTC)
"tractable syntactic method"?
[edit]The lead sentence of this article defines a type system as "a tractable syntactic method for proving the absence of certain program behaviors by classifying phrases according to the kinds of values they compute." Which is all well and good - it's pretty much verbatim from Pierce's Types and Programming Languages. The problem is that dynamic typing doesn't fall within that definition, yet it is discussed quite extensively in the article. Dynamic type-checks are not syntactic in nature. In fact, the introduction to Pierce's book makes it quite clear that his definition of "type system" is intended only to encompass static analysis of types. It seems to me that we either need
- (a) A new (citable) definition of "type system" that encompasses dynamic typing, along with some discussion of the fact that the meaning of "type system" varies and sometimes only includes static type analysis;
or
- (b) To rework the article so that it conforms to Pierce's definition (and I should note here that Pierce is one of the standard references on the topic of type systems), and makes clear that "type systems" are used for static analysis but that other options (dynamic type checks) are also available.
--Allan McInnes (talk) 01:43, 16 January 2009 (UTC)
- I tried to do (a); see if you think it helps? Not sure I'm happy with the second half of the introduction; it seems too specific. kzm (talk) 09:19, 1 September 2009 (UTC)
- If you want to do this, please cite some authoritative source rather than write it off the top of your head. This article has enough problems already. I reverted you for this and other reasons; see the new thread at the end. Pcap ping 10:03, 1 September 2009 (UTC)
- Okay, can you suggest any citeable reference that encompasses both dynamic and static typing? I can't find any, so perhaps the best solution is to go with (b) and make "type systems" an empty disambiguation page pointing to "static type system" and "dynamic type system". As it stands, the introduction is misleading, confusing, and makes only the most feeble attempts at defining the topic. kzm (talk) 13:29, 1 September 2009 (UTC)
- No, but there are plenty of references discussing dynamic type systems [3]. I don't have time to work on this article currently, so feel free to hack at it. Pcap ping 00:32, 3 September 2009 (UTC)
Corner case where static type checkers reject programs that may be well-behaved at run-time
[edit]There are two statements which sound similar but express different meanings:
Static type checkers will reject some programs that may be well-behaved at run-time.
In other words: Sometimes a program is rejected although it may be well-behaved at run-time. This is true, but a correct program rejected by a static type checker is just a corner case. Most programs rejected by a static type checker will probably also have runtime type errors under certain circumstances. BTW: This corner case is certainly not the reason why people use dynamic typing. But it is OK to mention the corner case once in the paragraph about "static type checking". The second statement is:
A dynamically typed language accepts some programs which may be ruled as invalid by a static type checker.
In other words: Rejected programs may or may not be well-behaved at run-time. This statement does not focus on an improbable and obscure corner case. The emphasis on a corner case in two sections is just too much. Therefore I used the second phrase in the paragraph about "dynamic type checking". Raise exception (talk) 10:58, 31 July 2009 (UTC)
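As a concrete illustration of the corner case, here is a minimal sketch in Java (hypothetical names; any statically checked language would serve). Without the explicit cast, the program is rejected by the static type checker even though it would be well-behaved at run time, because the branch that is actually taken always produces a String:
// Hypothetical sketch: pick(true) always returns a String at run time,
// but its static return type is Object, so using it directly as a String is rejected.
public class CornerCase {
    static Object pick(boolean flag) {
        return flag ? "forty-two" : Integer.valueOf(42);
    }

    public static void main(String[] args) {
        Object value = pick(true);          // always a String at run time
        // int n = value.length();          // rejected by the static type checker
        int n = ((String) value).length();  // accepted only with an explicit cast
        System.out.println(n);
    }
}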
Any program that uses run-time data to create functionality cannot be statically typed. Dynamic typing has this ability, and it is used naturally. By your use of the phrase "corner case", are you trying to diminish the importance of what it describes? I have to ask, as exploring "corner cases" is what Verification Engineers of chip designs strive to accomplish. "Corner cases" are very important; they can be exploited for good or ill, and they cause unexpected behaviour.
I have tried to remove the Dynamic bashing from the static typing section, and the Static bashing from the dynamic typing section. There are other sections where they can be, and are, compared and contrasted. --Paddy (talk) 02:14, 1 August 2009 (UTC)
- You removed several paragraphs without adding the information at a different place. E.g.:
- Dynamic typing may result in runtime type errors—that is, at runtime, a value may have an unexpected type, and an operation nonsensical for that type is applied. This operation may occur long after the place where the programming mistake was made--that is, the place where the wrong type of data passed into a place it should not have. This makes the bug difficult to locate.
- This paragraph describes exactly how debugging is done with dynamic typing (a sketch of such a deferred error follows after this comment). Do you think this is not true? I think that this information should not be left out. You also removed:
- Static type checkers evaluate only the type information that can be determined at compile time, but are able to verify that the checked conditions hold for all possible executions of the program, which eliminates the need to repeat type checks every time the program is executed.
- This describes the reasons why many static languages omit runtime type checks. The effect of "loopholes" to circumvent the type system is described elsewhere. You also partly removed:
- Compared to static typing, dynamic typing can be more flexible (e.g. by allowing programs to generate types and functionality based on run-time data), though at the expense of fewer a priori guarantees. This is because a dynamically typed language accepts and attempts to execute some programs which may be ruled as invalid by a static type checker.
- What is so bad about saying "at the expense of fewer a priori guarantees"? And it is also a fact that dynamically typed languages accept programs ruled as invalid by a static type checker. The removed paragraphs contain valuable information and they are certainly NPOV. Your changes remove paragraphs describing advantages of static typing and drawbacks of dynamic typing. You may be a great fan of Python and dynamic languages, but this is not the place to express your liking. If you want comparisons done in a different section, I suggest you add the comparisons first and start to remove data AFTER it has been saved to a new place. Removing valuable data while pretending that the comparisons should be at a different place is not OK. Wikipedia is about gathering information and not about removing it. BTW: IMHO the current placement of comparisons should be left as is: some comparisons in the "static" and "dynamic" sections and some outside. I tried to keep several of your changes and moved some paragraphs to a different place. Georg Peter (talk) 08:42, 1 August 2009 (UTC)
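For what it is worth, the "long after the place where the programming mistake was made" problem quoted above can be sketched as follows (hypothetical method and key names; Object-typed map values stand in here for dynamically typed data). The wrong value is stored in loadConfig, but nothing fails until connect tries to use it:
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: the mistake is in loadConfig (a String is stored where an
// Integer is expected), but the ClassCastException is only raised in connect,
// far from the place where the wrong value was introduced.
public class DeferredError {
    static final Map<String, Object> config = new HashMap<>();

    static void loadConfig() {
        config.put("host", "example.org");
        config.put("port", "8080");          // mistake: a String, not an Integer
    }

    static void connect() {
        int port = (Integer) config.get("port"); // fails here at run time
        System.out.println("connecting on port " + port);
    }

    public static void main(String[] args) {
        loadConfig();
        connect();
    }
}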
I doubt the claim that a program that uses run-time data to create functionality cannot be statically typed. Proof: when a statically typed language provides a special type (e.g. dynType) with the same properties as the 'typeless' expressions used in dynamic languages, all variables and functions which use dynType have the same possibilities as expressions in a dynamic language (see the sketch after this comment). Another proof is that the interpreters used to run dynamic languages are often written in a statically typed language (e.g. C). This does not imply that static typing is inherently better; it just shows that a statically typed program can emulate dynamic features (I admit that in an interpreter this is not done in a simple way).
About my use of the phrase "corner case": The corner case I was referring to was:
A program rejected by a static type checker which actually never triggers a runtime type error.
as opposed to the probably more common case where:
A program rejected by a static type checker which triggers a runtime type error under some circumstances.
I know that corner cases are very important, but constructing a program that is rejected by a static type checker and actually never triggers a runtime type error is not a job that a software engineer must keep an eye on all the time. Verification Engineers of chip designs strive to accomplish different things and certainly do not explore the corner case mentioned above. I also want to point out that the main reason to use dynamic typing is not "executing programs that are rejected by a static type checker and which never trigger a runtime type error". As you probably know, there are other reasons to use dynamically typed languages.
About your attempt to remove bashing: in a dynamically type-checked environment, the cause of an error and the place where it is detected can be far apart. This makes it hard to find the real reason for an error. This is a fact and certainly not bashing. A critical view of both static and dynamic typing is necessary. Luckily someone else already made some corrections, so I will leave everything as is. Raise exception (talk) 10:02, 1 August 2009 (UTC)
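A minimal sketch of the "dynType" emulation mentioned at the start of the comment above (hypothetical names; Java's Object type plus instanceof checks stand in for such a special type). Every value has the same static type, and the actual kind of value is recovered by a run-time check, much as a dynamically typed implementation dispatches on a hidden type tag:
// Hypothetical sketch: a statically typed program emulating dynamic dispatch
// on tagged data; the "tag" here is the run-time class of the Object.
public class DynEmulation {
    static String describe(Object v) {
        if (v instanceof Integer) return "int: " + v;
        if (v instanceof Double)  return "real: " + v;
        if (v instanceof String)  return "string: " + v;
        return "?: " + v;
    }

    public static void main(String[] args) {
        System.out.println(describe(1234));    // int: 1234
        System.out.println(describe(3.14));    // real: 3.14
        System.out.println(describe("1234"));  // string: 1234
    }
}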
I am happy that the description of the "corner case" is still in the article (BTW: I see it as an important case since it explains differences between static and dynamic typing). Georg Peter (talk) 17:05, 12 November 2009 (UTC)