
Talk:Vector space/Archive 3


Riemann integral and completeness

In the section "Banach spaces" it is written "If one uses the Riemann integral instead, the space is not complete, which may be seen as a justification for Lebesgue's integration theory. Indeed, the Dirichlet function of the rationals, is the limit of Riemann-integrable functions, but not itself Riemann-integrable.[57]"

The first phrase is OK with me, but the second is not. The Dirichlet function is not an example of an element of the complete Banach space L^p not belonging to the (linear, non-closed) subspace of Riemann integrable functions. Indeed, the Dirichlet function is equivalent to a constant. Any sequence of Riemann integrable functions that converges to the Dirichlet function IN NORM, converges to a constant in norm. Pointwise convergence is not relevant here. Boris Tsirelson (talk) 07:45, 19 November 2008 (UTC)

In addition: "One example of such a function is the indicator function of the rational numbers, also known as the Dirichlet function." (quote from Dirichlet function) Either say "Dirichlet function", or "indicator function of the rationals", but not "Dirichlet function of the rationals". Boris Tsirelson (talk) 07:50, 19 November 2008 (UTC)

Thanks, Boris. I'm just a stupid guy :( -- I forgot that identification business at that point. Actually the problem is, I did not find a reference for the fact that the Riemann integral yields an incomplete space. It sounds like you might have one? Could you please help out by putting a precise ref at that place? For the moment I simply removed the wrong statement, which also solves your second point. Jakob.scholbach (talk) 08:58, 19 November 2008 (UTC)
A function is Riemann integrable (on a finite interval, I mean) if and only if it is bounded, and continuous almost everywhere. The space L^p evidently contains unbounded functions (unless p=infinity), which makes the statement trivial. However, this is a very cheap argument; usually for an unbounded function one uses the improper Riemann integral. It is much more interesting to see a function that fails to be continuous almost everywhere, and cannot be "repaired" by a change on a null set. The indicator of a dense open set of small measure fits. I'll try to find an appropriate reference. Boris Tsirelson (talk) 19:23, 19 November 2008 (UTC)
See Smith-Volterra-Cantor set; its indicator function fits (and its complement is a dense open set not of full measure).
Also, look at this quote: "Many functions in L^2 of Lebesgue measure, being unbounded, cannot be integrated with the classical Riemann integral. So spaces of Riemann integrable functions would not be complete in the L^2 norm, and the orthogonal decomposition would not apply to them. This shows one of the advantages of Lebesgue integration." Richard M. Dudley, "Real analysis and probability", 1989 (see Sect. 5.3, page 125).
For now I do not have anything better; maybe tomorrow... Boris Tsirelson (talk) 20:02, 19 November 2008 (UTC)
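
For concreteness, a minimal sketch of the example indicated above (assumptions: the L^2 norm on [0, 1], K a Smith-Volterra-Cantor set of positive measure, U = [0, 1] \ K its dense open complement, written as a countable union of open intervals I_1, I_2, ...):

\[
f_N = \mathbf{1}_{I_1 \cup \cdots \cup I_N} \ \text{ is Riemann integrable}, \qquad \| f_N - \mathbf{1}_U \|_{L^2}^2 = \mu\Big( \bigcup_{n > N} I_n \Big) \to 0,
\]

yet 1_U (and any function agreeing with it almost everywhere) is discontinuous at almost every point of K, a set of positive measure, hence not Riemann integrable.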

Distributions

"Distributions" section starts with "A distribution (or generalized function) is a map assigning a number to functions in a given vector space, in a continuous way". First of all the map is linear (and second, continuous). Also, to be continuous (or not), it needs to be defined on a space with a topology, not just a vector space. Also, a continuous linear functional on SOME linear topological (or Hilbert, etc) space is not at all a distribution (or generalized function). Boris Tsirelson (talk) 08:05, 19 November 2008 (UTC)

That should be OK now? As you see, I'm not at all into analysis, so I'd be grateful if you could give the article a thorough scan (in these respects or any other, too). Jakob.scholbach (talk) 09:51, 19 November 2008 (UTC)
Yes, it is OK with me now. Yes, I did scan (maybe, not quite thoroughly). Really, I like the article. Boris Tsirelson (talk) 19:25, 19 November 2008 (UTC)

Complex and real vector spaces

Maybe it would be nice to add the following (easy) example illustrating how the dimension of a space depends also on the field over which the vector space is defined:

The complex numbers over the complex field and R2 over the field of real numbers have dimensions 1 and 2 respectively.

I can add this example tomorrow (but not now; it's past my bedtime).

Topology Expert (talk) 13:18, 21 November 2008 (UTC)

I have now written a word about C over R. Jakob.scholbach (talk) 21:44, 26 November 2008 (UTC)
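
For the record, the dimension count behind this example (a sketch):

\[
\dim_{\mathbf{C}} \mathbf{C} = 1 \ (\text{basis } \{1\}), \qquad \dim_{\mathbf{R}} \mathbf{C} = \dim_{\mathbf{R}} \mathbf{R}^2 = 2 \ (\text{bases } \{1, i\} \text{ and } \{(1,0), (0,1)\}).
\]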

Topological aspects of the article

The article is great but there are some problems with the topological part of the article. For instance, 'more generally, the Grassmannian manifold consists of linear subspaces of higher (fixed) dimension n' is mathematically incorrect. In general, the collection of all such subspaces need not be a manifold (Banach manifold perhaps if restrictions on the vector space are imposed but not a Euclidean manifold). I have added a bit of information on tangent bundles but a little more could be added.

Also, if the article discusses applications of vector spaces to topology, why not include something on Banach manifolds? They are very important (in my opinion) and since they are related to 'Grassmannians for arbitrary Banach spaces', it may be useful to include something about them.

Topology Expert (talk) 18:40, 3 December 2008 (UTC)

I have added "finite-dimensional" to the projective space discussion (which also sets the stage for the Grassmannian). As for your other additions: I think the discussion of parallelizable leeds us astray, so I have trimmed it down a bit. (The material would be an addition to tangent bundle or tangent space, but we have to stay very focussed here). Banach manifolds? Hm, currently we don't really talk about "usual" manifolds (and I don't yet see why we should). What particular application do you have in mind? Jakob.scholbach (talk) 19:00, 3 December 2008 (UTC)

Category of vector spaces

Perhaps a brief summary of this?

Topology Expert (talk) 19:23, 3 December 2008 (UTC)

Well, what particularly do you mean? The category of vector spaces is mentioned. Of the particular properties, we could mention semisimplicity perhaps. Jakob.scholbach (talk) 21:04, 3 December 2008 (UTC)
Maybe not; I did not know that there was an article on that. I will add that to the 'see also' section.

Topology Expert (talk) 07:31, 4 December 2008 (UTC)

Minor details

The article is coming along but there are still a few minor facts that should be added here and there. For instance, there was no mention of what an eigenspace is or what it means for a map between vector spaces to be 'orientation preserving'. I will try to add as much as I can (minor facts) but since I can't spot all minor facts it would be helpful if other editors helped. As I mentioned, the 'topology sections' could be improved but I can do that.

Topology Expert (talk) 09:28, 4 December 2008 (UTC)

Should be a good article

In my opinion, the article should be a good article (I don't understand why it is not a featured article, but I can take User:Jakob.scholbach's word on that). It has over 100 references (even for the trivial statements) and basically anything I can think of related to vector spaces is included in the article (in all branches of mathematics). Maybe there are a few minor details that the article is missing out on, but those would probably be required at the featured article nomination.

Topology Expert (talk) 09:45, 4 December 2008 (UTC)

Manifolds and tangent spaces

The section on manifolds contains the following sentence:

"It (the tangent space) can be calculated by the partial derivatives of equations defining M inside the ambiant space."

There are many things wrong with this sentence (besides the misspelling of "ambient"). First of all, it suggests that all manifolds have some ambient space in which they are embedded. This is a popular intuitive misconception that we definitely don't want to advertise on Wikipedia. (This misconception is a great obstruction to people understanding the curving of spacetime in general relativity.) Of course, defining the tangent space in an intrinsic way is notoriously abstract, and I see that we don't want to talk about germ_(mathematics) in this article. But even if you accept an embedding space for the manifold, this sentence makes very little sense. You can either take partial derivatives of the embedding function to find the tangent space (although that seems awkward in this context, because for an embedding you'd first need to define an abstract manifold), or you can linearize the equations defining the manifold (i.e. x^2 + y^2 = 1 for a circle) around a point on the manifold to find the tangent space at that point. The latter clearly involves partial derivatives, but I certainly wouldn't describe it as calculating by the partial derivatives of the equations. I'm not sure how to fix this in an easy way, since most of the solutions involve going into detail on a subject that is increasingly off-topic for this article. Does anybody see an elegant, concise way to rephrase this, so that it remains understandable for most people? (TimothyRias (talk) 10:33, 4 December 2008 (UTC))
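
To make the "linearize the defining equations" route concrete (a sketch, not proposed article text): for the circle F(x, y) = x^2 + y^2 - 1 = 0, the tangent space at a point (a, b) on the circle is the kernel of the differential of F there,

\[
T_{(a,b)} = \ker dF_{(a,b)} = \{ (u, v) \in \mathbf{R}^2 : 2a\, u + 2b\, v = 0 \},
\]

e.g. at the point (1, 0) this is the vertical line {(0, v)}, spanned by (0, 1).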

Yeah. We definitely don't want to talk about non-embedded manifolds! This is just to give some idea what this is good for. But we could just remove that sentence. Jakob.scholbach (talk) 13:05, 4 December 2008 (UTC)
I agree that this is an error (I hate restrictive mathematics which only considers subspaces of Euclidean space, but I am sure that in the next century maths will not be like that anymore). The topology part of the article does need some work before it can go to GA (everything else is fine except for the occasional mistake, such as allowing the zero vector to be an eigenvector, which someone corrected recently).

Topology Expert (talk) 08:16, 5 December 2008 (UTC)

By the way, it should be mentioned that differential topology is not about "calculating" partial derivatives; it is more about checking relations between differential manifolds (as Cantor once said: mathematics is not about studying objects but rather the relations between them).

Topology Expert (talk) 08:18, 5 December 2008 (UTC)

Recent edits

As the collaborative aspect of WP gains speed, which is cool, I take the opportunity to point out some ideas I have about writing a good article, exemplified by some recent edits. My ideas have been shaped by FAC discussions, the manual of style and so on. I don't want to be imposing, but am just trying to save time for all of us.

  • Typesetting is something which requires care, e.g. '''R<sup>2</sup>''' should be '''R'''<sup>2</sup> (bold '''R''', non-bold superscript 2).
  • Italics are used only for top-level notions, or to emphasize things: "The determinant of a square matrix can also tell us whether the corresponding linear transformation is orientation preserving or not." I feel orientation preserving is neither of the two.
  • Talking about "us" and "we" should be avoided.
  • Please try to keep the structure of sections etc., if you agree with it. E.g. adding another example (cross product to the Lie algebra thread) should be close to the examples already given. If you want to reorganize things (in this case put examples up first), look what other changes this necessitates (in this case moving the other example up)
  • Wherever {{Main}} templates are used, the corresponding subarticle should not be wikilinked again, to avoid overlinking. Also, main templates should link to only very important related articles (which cross product is not, IMO).
  • The "see also" section should not repeat items already covered in the text. Jakob.scholbach (talk) 13:09, 4 December 2008 (UTC)
Sorry if I was not following some of those conventions; I will try to in future. But if I ever miss a convention, feel free to either correct it or revert (I will try my best not to miss one; I know it is hard to follow my edits like that, but hopefully 90% of them, at least, should be alright).

Topology Expert (talk) 08:12, 5 December 2008 (UTC)

Thanks for cleaning it up.

Topology Expert (talk) 13:10, 5 December 2008 (UTC)

The "see also" section should not repeat items already covered in the text.
Personally, I do not agree with this convention for several reasons:
  1. A reader may not read a particular section where a topic is Wikilinked. Often only the first occurrence of a topic is Wikilinked, so a reader of a later section will be unaware that there exists a Wikilink.
  2. I personally find it very handy to be able to scroll down to See also just to see what is out there. If significant subjects are not there, it's a problem.
  3. As an editor, when changing items in an article I often wish to refer to other related articles to be sure of compatibility and not missing items of importance. It is nice to use See also for this purpose, rather than scanning through a long article to find all the embedded links.
For all these reasons, I believe all significant articles should appear in the See also section or be referred to using a {{seealso}} template. Brews ohare (talk) 15:54, 6 December 2008 (UTC)

I disagree with you. We have to distinguish between an article which is under development, i.e. a stub or start class article, and an article that is reasonably complete, such as this one here. When writing a stub, it is very good to put related notions into the s.a. section, just as a replacement of a more thorough treatment.

If interpreted literally, your arguments (all of which are pretty much parallel) would lead to including pretty much every linked notion in the s.a. section, which is useful neither for editors nor for readers. It is, sad or not, a fact that a reader will have to read an article if she/he wants to know about it. If you have little time, a glance at the TOC should give you the big points of the topic in question. See also ;) the relevant MOS section. The s.a. section is just for peripheral notions whose (minor) importance (to the article in question) does not give them the "right" to a subsection or even a phrase. Jakob.scholbach (talk) 16:07, 6 December 2008 (UTC)

I think the truth is somewhere in between. If there is a prominent "main article" link, it makes no sense to repeat the link in the "see also" section. If there is only an obscure link to a section of another article hidden somewhere in a footnote, this is obviously no reason not to put a link to the article into the "see also" section (if it should be there otherwise). I think we really need to use judgement, weighing the relevance of a link against the prominence with which it already occurs in the article. But I agree that in finished articles the "see also" section is often not needed. --Hans Adler (talk) 16:26, 6 December 2008 (UTC)
Jakob exaggerated my suggestion, which actually states:
I believe all significant articles should appear in See also section or referred using a {{seealso}} template.
That means I'd object to putting "an obscure link to a section" in See also, but favor including "significant articles", unless already in a {{seealso}} or {{main}} template. Of course, who can argue against using judgment? Brews ohare (talk) 17:55, 6 December 2008 (UTC)
I'm sorry, Brews. Somehow I did indeed not see your last line above. Do we agree that category of vector spaces (just as an example) should not reappear in the see also section, or do you think it is significant enough to make it show up again? I guess it's also not that important an issue. Much more annoying (to me) is that despite my repeated postings at WT:WPM nobody seems inclined to review the article. What can we do about that? Jakob.scholbach (talk) 18:14, 6 December 2008 (UTC)

Hi Jacob: I have no experience with such things. Try asking User_talk:Dicklyon, who I have found to be very helpful. Brews ohare (talk) 20:23, 6 December 2008 (UTC)

Format of See also

It can be a useful discipline in the See also section to use headings to classify various links by subject. Doing this helps the reader and also leads to some useful scrutiny of what is linked, to avoid it becoming a "gee, this is interesting" section. An example can be found in the See also section of k·p perturbation theory.

Multiple column format is implemented using {{Col-begin}} rather than a myriad of alternatives because many of the alternatives work fine in Firefox, but not in Internet Explorer. Brews ohare (talk) 15:59, 6 December 2008 (UTC)

notes from GeometryGirl (talk)

  • I don't really know how to deal with this sentence: "Another conceptually important point is that elements of vector spaces are not usually expressed as linear combinations of a particular set of vectors". "point" sounds informal and something is wrong with "usually expressed"
  • In the section "linear equations" I would add a small note about annihilators being 'dual' to linear equations
    • What do you mean by that? Jakob.scholbach (talk) 12:55, 7 December 2008 (UTC)
      • Well, taking the example given, if e1, e2, e3 is the standard basis for (R^3)* (the dual of R^3) then the space of solutions is simply the annihilator of W = <e1 + 3e2 + e3, 4e1 + 2e2 + 2e3>. The dual perspective makes it clear why the set of solutions to a set of linear equations is naturally a vector space. GeometryGirl (talk) 14:50, 7 December 2008 (UTC)
        • Hm. I'm not so convinced. That the solutions form a vector space is clear(er) by seeing it as the kernel, right? I personally believe talking about annihilators would lead us a bit astray. We would have to talk about dual space, dual basis, pairing at that point, which is a bit too much. What do others think? Jakob.scholbach (talk) 15:54, 7 December 2008 (UTC)
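
For reference, the example just discussed, written both ways (a sketch; the equations are taken to be homogeneous, and e1, e2, e3 denote the standard dual basis as above):

\[
S = \{ x \in \mathbf{R}^3 : x_1 + 3x_2 + x_3 = 0,\ 4x_1 + 2x_2 + 2x_3 = 0 \}
  = \ker \begin{pmatrix} 1 & 3 & 1 \\ 4 & 2 & 2 \end{pmatrix}
  = \{ x \in \mathbf{R}^3 : f(x) = 0 \text{ for all } f \in W \},
\]

the last description being the annihilator of W = ⟨e1 + 3e2 + e3, 4e1 + 2e2 + 2e3⟩ once R^3 is identified with its double dual; either way S is visibly a subspace.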

GA Review

This review is transcluded from Talk:Vector space/GA1. The edit link for this section can be used to add comments to the review.
GA review (see here for criteria)
  1. It is reasonably well written.
    a (prose): b (MoS):
  2. It is factually accurate and verifiable.
    a (references): b (citations to reliable sources): c (OR):
  3. It is broad in its coverage.
    a (major aspects): b (focused):
  4. It follows the neutral point of view policy.
    a (fair representation): b (all significant views):
  5. It is stable.
  6. It contains images, where possible, to illustrate the topic.
    a (tagged and captioned): b lack of images (does not in itself exclude GA): c (non-free images have fair use rationales):
  7. Overall:
    a Pass/Fail:

Here are some specific issues that I'd like fixed before this reaches GA:

  • The lead says "much of their [vector spaces'] theory is of a linear nature"; but I don't think the meaning of "linear nature" will be apparent to people unfamiliar with vector spaces. E.g., someone might not know what a linear combination or linear transformation is.
    • OK. Better now?
      • Yes.
  • In the "Motivation and definition" and definition section, "linear combination" has not yet been defined. It might be better to say that there is no preferred set of numbers for a vector, and to say no more until bases have been introduced.
    • OK.
      • Also good.
  • In the subheading "Field extensions", the description of Q(z) is odd: It sounds like you mean for z to be a transcendental, but you say that z is complex. If z=1 then the field extension is trivial; even if the field extension is non-trivial, it's not unique (square root of 2 vs. cube root of 2). I see below that you do really mean for z to be complex, but perhaps there's a better way to say what you mean.
    • I'm not sure I understand your points. I do mean z to be complex, just for concreteness. ("Another example is Q(z), the smallest field containing the rationals and some complex number z.") What is the problem with z=1 and a trivial extension? What do you mean by "it's not unique"? (I think, for simplicity, the subfield-of-C-definition I'm giving is appropriate at this stage, and yields something unique).
      • I think what bothers me is that you say you are about to give another example (singular) and then proceed to give a family of examples (plural). I've changed the text to try to make this better; is this OK for you? (BTW, I used an α instead of a z because z looks like a transcendental to me. This might have been part of my confusion, too. But change it back if you think having a z is better.)
  • The article should say very early that abstract vector spaces don't have a notion of an angle or of distance or of nearness. This is confusing for most people.
    • OK. (In the definition section).
  • The bolded expression 〈x | y〉 does not display properly on Safari 3.0.4; the left and right hand angle brackets show up as squares, Safari's usual notation for "I don't have this character". (It works when unbolded, as I found out when I previewed this page.)
    • Yeah, it was weird, there were two types of angle brackets. Can you read them now (there are three occurrences in that section)?
      • Yes.
  • The natural map V → V** is only discussed in the topological setting. It should be discussed in general. (Note that the map is always injective if one considers the algebraic dual (for each v, use the linear functional "project onto v".))
    • Done. (will provide a ref. later) Jakob.scholbach (talk) 12:50, 7 December 2008 (UTC)
      • I now had a look at most of the algebra books listed in the article and none of them, actually, talks about algebraic biduals. So I wonder if this is so important. (I wondered already before). Jakob.scholbach (talk) 21:40, 9 December 2008 (UTC)
        • Hmm! I know that they appear in Halmos's Finite dimensional vector spaces (p. 28, exercise 9). But it seems to me that the best reason for discussing them is the finite-dimensional case: Right now, the article doesn't discuss reflexivity of finite-dimensional vector spaces, a real gap!
  • JPEG uses a discrete cosine transform, not a discrete Fourier transform.

Here are some other issues which aren't as pressing but which I think you should handle before FA:

  • I'm not sure that likening a basis for a vector space to generators for a group or a basis for a topology will help most readers. Most people who use and need linear algebra have never heard of these.
    • I'm not sure either! I removed it.
  • Since you mention the determinant, it's worth mentioning that it's a construction from multilinear algebra. A sentence or two should suffice.
    • Except for det(f: V → V) being related to Λ^n f: Λ^n V → Λ^n V, with n = dim V (which I think should not be touched here), I don't see why the determinant belongs to multilinear algebra. What specifically are you thinking of?
      • That's exactly what I was thinking of. I don't want to make a big deal about that construction, but I do think it's good to mention—it's the right way to think about the determinant, and the only way I can think of which admits generalizations (e.g. to vector bundles). I put a sentence in the article about this.
  • It seems that for most of the article, whenever you need an example of a non-abstract vector space, you use solutions to differential equations. I agree wholeheartedly that these are important, but there are probably other good examples out there which shouldn't be slighted.
  • It also seems that you rely on convergence to justify the introduction of other structures such as inner products; but inner products can be (and should be, I think) justified on geometric grounds, because they're necessary to define the notion of an angle.
      • OK, you did this.
  • It's also worth mentioning the use of vector spaces in representation theory.
  • When writing an integral such as <math>\int f dx</math>, the output looks better if you put a thinspace (a \,) between f and dx: <math>\int f\,dx</math>.
    • OK.
  • Image:Moebiusstrip.png should be an SVG.
I'll try to make an SVG picture of a moebius strip later today. (TimothyRias (talk) 10:58, 8 December 2008 (UTC))

Ozob (talk) 02:40, 7 December 2008 (UTC)

Thanks very much, Ozob, for your review! Jakob.scholbach (talk) 12:04, 7 December 2008 (UTC)

I concur with the comments above; I have the following comment to make on tensor products, which I would like to see addressed before GA status:

The description of the tensor product as it stands is too vague (such as "mimicking bilinearity"). It would be better to first give the universal property of the tensor product of V and W: a vector space E together with a bilinear map V × W → E, with the universal property that every bilinear map from V × W to a vector space F is expressed by a linear map from E to F. Then one could state that a space with these properties does exist, and outline the construction. Similarly the adjoint property of the tensor product with respect to Hom is too vague. To control article size, one could consider leaving that out, as the tensor product article is wikilinked; otherwise one should definitely point out that the tensor product is a (bi-)functor. As for extension (and restriction) of scalars (tensoring with an extension field of the base field), that could be treated, but then again functoriality of the tensor product would be natural to include. Perhaps effective use of summary style could help keep the amount of material here still manageable. Stca74 (talk) 10:45, 7 December 2008 (UTC)
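
For reference, a compact statement of the universal property in question (a sketch in standard notation, with k the base field and U an arbitrary k-vector space; not proposed article text): a bilinear map ⊗ : V × W → V ⊗ W such that

\[
\operatorname{Bil}(V \times W,\, U) \;\xrightarrow{\ \sim\ }\; \operatorname{Hom}_k(V \otimes W,\, U), \qquad b \longmapsto \tilde b \ \text{ with } \ \tilde b(v \otimes w) = b(v, w),
\]

i.e. every bilinear map b : V × W → U factors through ⊗ via a unique linear map, and this property determines V ⊗ W up to unique isomorphism.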

Thank you too, Stca, for your review: I have trimmed down the tensor product discussion a bit, but also made it more concrete. I think doing the universal property thing properly (i.e. with explanation) is too long and also a bit too complicated (even uninteresting?) for general folks, so should be deferred to the subpage. As for the isomorphism: I don't know why I called this adjunction isomorphism, since it is effectively both adjunction and reflexivity of f.d. spaces. Anyhow, this comment was just to put tensors in line with scalars, vectors and matrices, but I would not go into functoriality etc. Jakob.scholbach (talk) 15:12, 7 December 2008 (UTC)
Looks more precise now. However, I would still consider adding the universal property (perhaps somewhat informally) - at least for me it is the only way to make sense of the construction, which otherwise risks being just a tangle of formulas. As for the last few lines after the representation of Hom as the tensor product of the dual of the domain with the target, I'm not sure if I can follow (or expect others to follow). Actually, the canonical map goes in general from the tensor product into the Hom space and is injective. It is bijective if one of the spaces is finite-dimensional. Thus, if you insist, you get an interpretation of a tensor (an element of the tensor product) as a matrix, but not really tensor as a generalisation of matrix (following the scalar, vector, matrix list). Stca74 (talk) 20:06, 7 December 2008 (UTC)
OK, I scrapped the sketched ladder of "tensority". Also the universal property should be fine now. Jakob.scholbach (talk) 21:40, 9 December 2008 (UTC)
Unfortunately, the tensor product section now has a problem: It doesn't define "bilinear", so it doesn't make a lot of sense. The previous version was better in this respect because it was only hand-waving, so the reader didn't expect to understand; but now that the article is more precise, the lack of definition of "bilinear" is a problem. I'm not really sure what to do here; if one defines "bilinear" then one should give an example, but the simplest example is the dot product, which is later in the article. And being vague, as Stca74 noted, is no solution either. It might be good to introduce the dot product here and then reintroduce it later in the inner product section; the second time you'd point out that it's positive definite. (Also, the inner product section currently calls the Minkowski form an "inner product" even though it's not positive definite. I know that in physics, "inner product" doesn't mean positive definite, but it certainly does in math. This deserves a remark somewhere, I think.) Ozob (talk) 00:55, 10 December 2008 (UTC)
(<-) Well, the bilinearity is certainly no problem. I mentioned this now. The problem is more: how to create a little subsection that is inviting enough to guide the reader to the subarticle. When I learnt this, I kept wondering "what is this u.pr. all about?" I only got it after learning about fiber products of (affine) schemes, but we certainly cannot put that up! Jakob.scholbach (talk) 08:07, 10 December 2008 (UTC)
Oof, that's a tough way of figuring it out! (Not that I did better!) I agree, this is a tough thing to work out. It'll have to be done before FA, though (if that's where you want to take the article). The only really elementary context I can think of where they turn up is bilinear forms. It might be best to have a section on bilinear forms first (which would mention inner products and the Minkowski metric and link to the article on signature) and then use those to justify tensor products: "Tensor products let us talk about bilinear maps, which you now know to be wonderful, in terms of linear maps, which you also know to be wonderful." That would require reorganizing the article a little, but I don't see a good other solution. Ozob (talk) 03:45, 12 December 2008 (UTC)

I will try to review each section one by one and add comments. But just something User:Ozob said:

  • The article should say very early that abstract vector spaces don't have a notion of an angle or of distance or of nearness. This is confusing for most people.

Maybe you should not emphasize this (nor should you write that they do have these structures), because you can equip vector spaces with a norm (distance and nearness) or an inner product (for angles), and I am quite sure that most of the mathematics done on vector spaces studies these structures on them (such as Banach space theory or Riemannian geometry). So perhaps, if the sentence is kept, it should come with an explanation that these structures can still be put on vector spaces, since they are indeed very important in mathematics.

Topology Expert (talk) 17:17, 7 December 2008 (UTC)

Careful with generalisations: any absolute value on a finite field is improper (|x|=1 for all non-zero x) and thus there are no interesting norms to put on vector spaces over finite fields. And while norms on finite-dimensional real vector spaces are equivalent, there are still no canonical norms nor inner products. I do agree with Ozob's view that it makes sense to warn readers about this potentially counterintuitive fact. Stca74 (talk) 20:06, 7 December 2008 (UTC)
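
A one-line check of the claim about finite fields (a sketch, using only multiplicativity of the absolute value and |1| = 1):

\[
x \in \mathbf{F}_q \setminus \{0\} \ \Longrightarrow\ x^{q-1} = 1 \ \Longrightarrow\ |x|^{q-1} = |x^{q-1}| = |1| = 1 \ \Longrightarrow\ |x| = 1,
\]

since |x| is a positive real number; so the only absolute value on a finite field is the trivial (improper) one.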

Topological vector spaces and biduality

I'm afraid the discussion on biduals (discussed already above during GA nomination) in the topological context is a bit too inaccurate as it stands, and contains claims which only hold with additional hypotheses.

First, the definition of the bidual is incomplete unless the topology of the dual is specified - there is in general no preferred topology on the (topological) dual. The bidual E′′ is the (topological) dual of the strong dual of E. The theory is normally developed only in the context of locally convex spaces, for which it indeed follows from Hahn-Banach that the canonical mapping of E into its bidual is injective precisely when E is Hausdorff. Next, it is possible to define semireflexive spaces as the ones for which this canonical map is bijective without specifying a topology on the bidual. Reflexivity refers to the canonical map being an isomorphism when the bidual is given the strong topology (with respect to the strong topology of the dual of E). For normed spaces semireflexive and reflexive are equivalent.

As the above suggests, the concept of duality for general (locally convex) spaces is not entirely straightforward, and it would be better to avoid introducing too much of the theory in this article, which is primarily about the algebraic theory of vector spaces. While it could in principle be feasible to discuss biduals more easily for normed vector spaces, I would rather agree with Ozob's comments in the GA discussion and treat biduals and reflexivity in the purely algebraic setting here. It would then be possible to point the reader to the relevant articles on topological vector spaces for the related concept in that context. For a very clear discussion of biduals in the algebraic context (also for general modules, not only vector spaces) see e.g. Bourbaki, Algebra, Ch II. Stca74 (talk) 20:39, 12 December 2008 (UTC)
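
For orientation, the canonical map under discussion (a sketch; in the topological setting E′ and E′′ denote continuous duals, with E′′ formed from the strong dual as described above):

\[
\iota_E : E \to E'', \qquad \iota_E(x)(f) = f(x) \quad \text{for } f \in E'.
\]

In the purely algebraic setting (all linear functionals, no topology), ι_E is always injective and is bijective exactly when E is finite-dimensional.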

OK. First, what is written certainly has to be correct and precise, so this has to be amended. From what I know, though, I cannot see why algebraic biduality is so important, or even more important than topological biduality. The algebraic statement is a triviality, whereas the topological one is not at all. Also, I think, the article should not give more weight to algebraic aspects than to functional analysis etc. So, some concrete questions to everybody:
  • What makes algebraic biduality so important? (I don't have the Bourbaki at hand right now, but I'm suspecting it does not tell too much about its importance).
  • Is it right to think of the strong topology on the dual as the "most natural one"? Jakob.scholbach (talk) 09:44, 13 December 2008 (UTC)
I don't know enough functional analysis to answer your questions, in particular I wouldn't know what the strong topology on the dual is in the general setting. I do think though that (everything else being equal) algebraic concepts should be stressed in this article, because "vector space" is an algebraic concept. But why do we talk about biduals at all? Isn't it a bit far removed?
The paragraph does make a nice point that for topological vector spaces, you want to talk about the continuous dual instead of the algebraic dual. That's worth keeping. But I'm not so sure about biduals. In what context do you want to introduce them? If you talk about locally convex vector spaces, you'll have to add yet more definitions. As mentioned before, it's easier to talk about it in the context of Banach spaces. But still I think it's a bit too far removed to be in this article. There is no Banach space theory in here, nor should there be.
What do you think about adding instead an example of a topological vector space that is not a Banach space? The easiest example I know is C. -- Jitse Niesen (talk) 16:33, 13 December 2008 (UTC)
Strong operator topology says "It (the s.o.t.) is more natural too, since it is simply the topology of pointwise convergence for an operator." I'm not experienced enough either to say whether biduals are crucial, but reflexivity seems to be somewhat important. What say the functional analysts in the house? Jakob.scholbach (talk) 16:49, 13 December 2008 (UTC)
As for the non-Banach example, we mention the noncompleteness of Riemann integrable functions. I prefer this over C since it shows the superiority of Lebesgue, whose influence is all over the place in these matters, right? Jakob.scholbach (talk) 16:51, 13 December 2008 (UTC)

(←) On the importance of the algebraic bidual: while the proofs of the statements about biduality are indeed quite trivial for vector spaces, one is dealing with a special case of a much deeper algebraic concept (and one which is important even where the proofs are easy). The precisely same concept is already non-trivial for modules over rings more general than fields. In that context the canonical map to the bidual is injective for projective modules and bijective for finitely generated projective modules. Via the well-known correspondence, (finitely generated) projective modules have an interpretation as (finite-rank) vector bundles (rightly introduced as generalisations of vector spaces in the article), both in differential geometry over the rings of smooth functions, and in algebraic geometry for general commutative rings. In the somewhat more general set-up of coherent sheaves, biduality and reflexivity are common issues to consider in the practice of algebraic geometry. Not surprisingly, the same occurs in homological algebra, where double duals of cohomology spaces, modules, sheaves are a very common occurrence. Eventually this leads to biduality in the context of derived categories as a crucial component of Grothendieck's "six operations" formalism for the very important generalisations of Poincaré / Serre -type dualities. The theory of D-modules deserves a related mention here too. Finally, one can mention the role the canonical mapping into the bidual played in the formulation of natural transformations of category theory. Hence, all considered, I do think the (algebraic) bidual deserves to be introduced briefly here, as a simple incarnation of a truly important and fruitful concept.

On "natural" topologies to define on the topological dual E ' of a topological vector space E, I suppose one could argue that for normed spaces the strong topology is the most "natural": it is just the familiar norm topology where the norm of a functional f is the supremum of the absolute values of f(x) where x ranges over the unit ball (or sphere) in E. The canonical mapping EE ' ' is then always injection. However, from another viewpoint a "natural" topology T on the dual E ' would have the property that the natural map φ: EE '* to the algebraic dual of the topological dual (defined by the duality pairing E × E ' → R) were a bijection to the topological dual of E ' equipped with the topology T (i.e., T compatible with the duality). Now (under the necessary assumption that φ is injective) all topologies between the weak topology (weakest) and the Mackey topology (strongest) satisfy that condition. However, in general the strong topology of the dual is stronger than the Mackey topology; for the strong topology to be compatible with the duality (and hence equal to the Mackey topology) is precisely the condition of E being semi-reflexive. For normed spaces reflexive is equivalent to semireflexive, which shows that there is a clear viewpoint from which (for example) the weak topology of the dual of a normed space is "more natural" than the strong topology, at least when E is not reflexive. This would be the case for example for L1-spaces. But again, all of the above digression I think mainly helps to show why the topological reflexivity is best left to articles on topological vector spaces and functional analysis. Stca74 (talk) 18:53, 13 December 2008 (UTC)

Huh! Since I cannot cite this talk page ;-( I decided to trim down the presentation somewhat and moved the algebraic biduality statement up to the algebraic dual. I left the Hahn-Banach theorem but without referring to the bidual. Jakob.scholbach (talk) 19:50, 13 December 2008 (UTC)

Tangent space edits

this edit removed some content as per "removing inaccuracies". What exactly did you mean by that, Silly rabit? I'm inclined to revert that change (it removed references, a rough description of what the tangent space is, Lie algebras vs. Lie groups). Jakob.scholbach (talk) 11:27, 3 January 2009 (UTC)

The parts of the paragraph I removed were a bit confused and overstated the importance of the tangent space itself. In particular, the vector flow does not go from the tangent space to the manifold: perhaps what was meant was the exponential map? I don't know. Also, the tangent space of a Lie group is quite ordinary, and doesn't "reveal" anything special about the Lie group. It is given the structure of a Lie algebra in a natural manner, but that is something extra. Another example in the same vein: does the tangent space of a Riemannian manifold reveal something special about the manifold? No, it is the metric which does that. siℓℓy rabbit (talk) 14:42, 3 January 2009 (UTC)
I sort of agree with you, but I like to play devil's advocate. Doesn't the Lie algebra determine the Lie group (up to the connected component of the identity)? It has been a long time since I studied Lie groups and their algebras, but I do seem to remember something along these lines. Thenub314 (talk) 14:51, 3 January 2009 (UTC)
More or less yes. Anyway, I have added the statement about Lie groups and their Lie algebras to a more conceptually appropriate place in the article that will hopefully satisfy Jacob's objection. siℓℓy rabbit (talk) 15:06, 3 January 2009 (UTC)
OK. Probably I was indeed too sloppy. Just one point: the statement "The tangent space is the best approximation to a surface" is unclear, to me and probably more so for a reader who does not yet know about the t.sp. What exactly does "best" mean? Jakob.scholbach (talk) 22:44, 3 January 2009 (UTC)
A good point. I have added an additional link to the sentence in question to linearization, and an additional content note defining precisely what is meant by "best" in this context, together with a reference. siℓℓy rabbit (talk) 02:52, 4 January 2009 (UTC)
Good, thanks. Jakob.scholbach (talk) 19:19, 4 January 2009 (UTC)

Minor changes

I would like to make the following minor changes to the lead sentence and paragraph.

I would like to change the lead sentence to

A vector space is a mathematical structure formed by a collection of objects, called vectors, that may be added, subtracted, and scaled.
This may not be necessary. I hadn't noticed it was put in the first section until now. I kind of like it better in the lead, but it would not make me unhappy if it stays the way it is. Sorry, I should read more carefully before I write. Thenub314 (talk) 08:53, 8 January 2009 (UTC)

Also, in the second sentence I think "Euclidean vectors" was better as just "vectors", because we follow it shortly after with the phrase "Euclidean space," and it seems like one too many Euclideans for this sentence. I suggest we replace Euclidean vectors with vectors (still linked to the same article), and link plane with an appropriate article to make clear we mean the Euclidean plane. (Perhaps Plane (geometry)?)

What do people think about this? Thenub314 (talk) 08:49, 8 January 2009 (UTC)

Lead section image

Does anybody have a good idea what image could illustrate the vector space idea? The current image is pretty crappy, I think, for it conveys basically nothing. Jakob.scholbach (talk) 18:12, 11 January 2009 (UTC)

Added a better image with better description. Please have a look. PST
Well, I think that one is only a little better than the previous one. BTW, please sign your posts at talk pages! Jakob.scholbach (talk) 19:28, 12 January 2009 (UTC)
Also, the new caption is not great, since in many vector spaces there is no, or at least no preferred, inner product, so "closer to" some vector is meaningless. Jakob.scholbach (talk) 19:37, 12 January 2009 (UTC)
Yes, I just intended that to be a rough idea (I kind of felt uncomfortable when writing that caption). Hopefully someone will get a better image soon (the current images at commons are not very good so someone will probably have to upload one). I am quite happy to improve this article in the near future (but perhaps I should discuss here before I edit because I want to make sure that my edits are appropriate). --Point-set topologist (talk) 20:37, 12 January 2009 (UTC)
How about a drawing of the parallelogram law for adding and subtracting vectors? That's the cover illustration for Sheldon Axler's Linear Algebra Done Right. --Uncia (talk) 16:01, 15 January 2009 (UTC)
That's an idea. I'll try merging this illustration with a flag (0-, 1-, and 2-diml subspace of R^3) tonight, unless somebody else is up to it... Jakob.scholbach (talk) 16:34, 15 January 2009 (UTC)
How about this one? Jakob.scholbach (talk) 21:47, 15 January 2009 (UTC)
I like the picture. There are a couple of points about the caption that I thought were not clear: (1) the gray square is not actually the vector space, because the vector space extends to infinity in all directions; (2) The label 0 is used but not explained; maybe we could add "the zero vector 0 belongs to all subspaces". --Uncia (talk) 22:45, 15 January 2009 (UTC)

Although the image is much better than before, I am not perfectly satisfied. It has one error (mentioned above) not to mention that it looks a bit messy (and hard to follow). But I think that the image is temporarily good enough. PST 09:16, 16 January 2009 (UTC)

This is certainly the second-best article I have seen in Wikipedia

If this goes for FA, I would be quite pleased to support. However, I am a little worried regarding the issue of 'range of content'. Vector spaces have so many applications everywhere (heaps in mathematics) and I don't think that the current article describes all of these applications. This may be because it is not supposed to, but if it is, this is just a suggestion. More importantly, the sections regarding topology have to be cleaned up. I see that we should not go off topic, so that has to be done carefully (but note that the section on tangent bundles here is, in my view, better than the article :)). PST (--Point-set topologist (talk) 09:57, 15 January 2009 (UTC))

Dimension

Jacob, are you seriously saying that all infinite dimensional vector spaces are isomorphic to each other? How about the Hilbert spaces? Is H^0 = H^1? −Woodstone (talk) 22:57, 18 January 2009 (UTC)

I'm seriously saying that two vector spaces of the same dimension are isomorphic as vector spaces. There may be v.sp. that are both infinite-dimensional, but the cardinality of the two bases is different. Also, L^p is isomorphic to L^q as vector spaces, but not as topological vector spaces. Likewise with any other counterexample you may think of. Just see the relevant article section and the refs cited therein. Jakob.scholbach (talk) 23:00, 18 January 2009 (UTC)
That's only a half answer. Are you stating that H^0 and H^1 are isomorphic as vector spaces? I think not. −Woodstone (talk) 23:09, 18 January 2009 (UTC)
I don't know that notation. What does it mean? But anyway, you can answer it yourself: if the dimensions agree (as cardinal numbers) they are isomorphic as v.sp., otherwise they are not. Jakob.scholbach (talk) 23:32, 18 January 2009 (UTC)
Two vector spaces of the same dimension are isomorphic, even if that dimension is an infinite cardinal. Indeed, any vector space over a field F with a basis set X is isomorphic to F^(X), the space of all finitely supported functions X → F. By precomposition with a bijection to the cardinal |X| of X, this can be put into a one-to-one linear correspondence with the vector space of finitely supported functions |X| → F. For the other question, the example of H^0 and H^1 seems strange to me, because these typically denote Sobolev spaces, in which case H^0 and H^1 are both separable Hilbert spaces, and so are in fact isomorphic as Hilbert spaces as well. siℓℓy rabbit (talk) 23:43, 18 January 2009 (UTC)
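
A sketch of the isomorphism being described (standard notation; choosing a basis X of V uses the axiom of choice in general):

\[
F^{(X)} := \{\, g : X \to F \mid g(x) = 0 \text{ for all but finitely many } x \,\} \;\xrightarrow{\ \sim\ }\; V, \qquad g \longmapsto \sum_{x \in X} g(x)\, x \ \ (\text{a finite sum}),
\]

so any two vector spaces over F whose bases have the same cardinality are isomorphic, via any bijection between the bases.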

I have added a content note to clarify this. I am generally opposed to footnotes in the lead. However, sometimes they are necessary. This seems to be such a case. Please change the wording around and provide references as appropriate. Originally, I thought that Halmos's Finite dimensional vector spaces provided some discussion of this, but I was unable to find a suitable section there. Anyway, I think the clarification would be much more effective with a suitable reference. siℓℓy rabbit (talk) 01:02, 19 January 2009 (UTC)

I think that most people are familiar with the idea of a dimension. But can any layman who reads this discussion confirm what exactly they think it is? Tracing back to my earlier days, I used to think that higher-dimensional spaces are "more complex". When writing the article, perhaps you may like to bear that in mind. --PST 08:47, 19 January 2009 (UTC)

Reference

I added this reference some time ago: Cohen 1998, p. 31, The Topology of Fibre Bundles, but it seems to have disappeared. Is this a problem with the formatting of the reference? I think that this PDF file has a good lot of information on vector bundles, so it should be there. --PST 08:59, 19 January 2009 (UTC)

Yes, that was me. I didn't remove it primarily because of the reference, but because of the statement that vector bundles form a monoid (which I thought leads a bit too far away). Remember this is an article about v.sp., not vector bundles, so we should not include any reference which is just somewhat nice on a side aspect of the article topic. Also, unless the thing is printed by a regular publisher (which it is not?), the file does not count as a reference, but as an external link, which makes it even less interesting to include here. Jakob.scholbach (talk) 16:21, 19 January 2009 (UTC)

Inner product notation

I've seen several notations commonly used for inner products, such as (x, y), ⟨x, y⟩ and ⟨x|y⟩ (the last only as part of Dirac notation). This article uses yet another, viz. ⟨x|y⟩ with boldface vectors, which I have never seen (but I'm a physicist). I assume this is used by some mathematicians, but how common is it? At any rate shouldn't the other notations be mentioned? PaddyLeahy (talk) 11:45, 19 January 2009 (UTC)

How exactly are the last two different? Except that the latter uses both braket notation and bold to (doubly) indicate that the objects are vectors. Anyway, if different notations are to be mentioned, shouldn't at least the ordinary dot notation be mentioned? (TimothyRias (talk) 12:34, 19 January 2009 (UTC))
I don't have a preference for either of the variants. The boldface for vectors is just a general notation in this whole article (and elsewhere), that's unrelated to the inner product. The dot notation is mentioned (for the standard dot product, for which, I feel, it is preferably used). I think additional notations should not be discussed here, since this is an article about vector spaces. Jakob.scholbach (talk) 16:18, 19 January 2009 (UTC)
Still, using both boldface and braket notation is weird, since it doubly denotes the objects as vectors. Notation-wise, ⟨x, y⟩ is probably nicer. (TimothyRias (talk) 22:49, 19 January 2009 (UTC))
Well, we use boldface all the time, so I don't see a reason to change it at that place. But whether we put a "|" or a comma in the middle, I don't care. Change it if you like... Jakob.scholbach (talk) 23:15, 19 January 2009 (UTC)

When to mention fields

It seems a bit early in the lead to bring up fields, since we cover them in the definition. I have tried leaving it in, but I am concerned it might get beyond interested high school students if we jump into it too quickly. Thenub314 (talk) 15:59, 20 January 2009 (UTC)

Well, I think they are too important to be omitted, but I like the way you trimmed it (except for the use of the second person, which I removed by making that sentence passive). But I think we should at least give a non-rigorous explanation of what a field is (indeed because high school students won't know what they are[1]), such as

... provided the scalars form a field (such as rational numbers or complex numbers), that is, that they can be added and multiplied together satisfying similar properties.

, or something like that.
[1] Incredibly, I've seen junior high school books with definitions of groups, rings, and fields, but I guess more than 99.99% of all teachers simply skip those parts. -- Army1987 – Deeds, not words. 16:22, 20 January 2009 (UTC)
I did another take on the first section of the lead. I also think we should not put too much emphasis on the base field. People who do not know about this will never think: "Oh, and what if I consider a complex vector space over the reals?" Also, I somewhat disagree that we should explain what a field is. Again, people who don't know what a vector space is will hardly digest the brief definition "you have +, -, *, /". Indeed most of the article, and most of the applications both in mathematics and beyond, concern real and complex spaces, so conveying this particular case in the lead is fairly sufficient. Jakob.scholbach (talk) 18:56, 20 January 2009 (UTC)

Recent change to the lead sentence

I don't really like this edit. Apart from some errors (such as the inappropriate mention of mathematical physics) that could be cleared up with copyediting, I think it is actually more formal rather than more understandable for a non-mathematician. Do we have any "non-mathematicians" available for comment? Army, I believe, is an engineer or physicist. siℓℓy rabbit (talk) 02:46, 21 January 2009 (UTC)

I don't see why the mention of mathematical physics is inappropriate: that's where the motivation came from, and where virtually all the applications are. Re formality, there are basically two directions the article could go: it could start with a "physics" intuition of a vector as something with magnitude and direction, or it could start with an abstract description. If it starts with an abstract description, the previous version wasn't good enough. Saying "a vector is something that can be scaled, or multiplied by a number" is only understandable to somebody who already understands it. Looie496 (talk) 05:19, 21 January 2009 (UTC)
"Mention" of mathematical physics is appropriate, but in proportion to its prominence in the article. So far, not much of the article is dedicated to physics, and therefore the second sentence does not appear to be correct weight for this. Also, contrary to conventional dogma, the abstract notion of a vector space was not motivated directly by physics, mathematical or otherwise. (It is true that the notion of a physical vector emerged from such considerations.) siℓℓy rabbit (talk) 07:10, 21 January 2009 (UTC)
I tend to agree with siℓℓy rabbit. Though I am a mathematician, I am a pretty bad one, so hopefully my input carries some weight. I think the notion of scaling is pretty clear and intuitive for people who haven't seen vector spaces (we have all seen scale models, drawings, etc.). The previous lead perhaps could be criticized because it implied you could "add" some objects called "vectors". But I think the current picture next to the lead made that rather clear as well. Overall my opinion is we go back to the lead we had a day (or two) ago. Thenub314 (talk) 07:19, 21 January 2009 (UTC)
I think reverting Looie's edit there was appropriate. Do you still think the physics aspect has too much weight now? (Currently just one motivating and hopefully understandable example from physics in the lead. I plan to brush over the motivation section in this direction too, but there also highlighting the mathematical background, i.e. triples etc. of numbers). I think one motivational example in the lead is good, since then we can say that the axioms are modelled on that. Jakob.scholbach (talk) 07:30, 21 January 2009 (UTC)
I don't like the new lead either. And Looie496: "that's where the motivation came from, and where virtually all the applications are" is false. Some motivation does come from physics but there are heaps of applications of vector spaces in mathematics; perhaps as much as physics. Something about physics should be mentioned, but I strongly disagree that physics is the only reason why vector spaces were invented. --PST 07:54, 21 January 2009 (UTC)
By the way, Army is a high school math teacher as he/she noted already on the "comments" page. --PST 08:01, 21 January 2009 (UTC)
I'm not. I'm an undergraduate physics student, as noted on my user page. Probably you've been confused by the comment by Vb immediately above mine, which I suppose was signed with a ~ too many, displaying only the time in the signature. BTW I fixed that. -- Army1987 – Deeds, not words. 15:00, 21 January 2009 (UTC)
As for the lead, I don't think that factual accuracy and clarity are incompatible goals. While it's true that very few people know what the word field means, commutativity and associativity of addition and multiplication etc. are taught in grade schools. It shouldn't be impossible to write a lead which doesn't contain factual inaccuracies and yet can be understood by anyone in the last year of high school and also by sufficiently bright younger people. -- Army1987 – Deeds, not words. 15:16, 21 January 2009 (UTC)
Well, accuracy and clarity are not incompatible goals overall; over the span of 3-4 sentences they can be. Most high school students I have taught are much more comfortable with the concept of a vector as a pair of numbers than with the terms commutativity and associativity. While these are often taught in grade school, and again in middle school, and again in high school algebra, it doesn't exactly prepare students for the concept of "numbers" that are more general than the complex numbers. I think the goal for the lead should be to get the idea across, and later in the article (say, the definition section) we can discuss its more general formulations. Thenub314 (talk) 15:45, 21 January 2009 (UTC)
I agree with Thenub here. For what it's worth, as an undergrad math student I made a living tutoring people, and as a grad student I was a TA and taught courses in calculus, algebra, and discrete math, among other things, so I too have had some opportunities to learn what sorts of explanations actually work for people. Looie496 (talk) 17:55, 21 January 2009 (UTC)

Concrete proposals

OK, so now everybody has given his/her opinion. In order to get the discussion back on a more concrete track, may I propose the following procedure: everybody interested writes a lead section (1st paragraph only) and puts it here. Then we can see and discuss the advantages of the drafts.

Here is my take (which is the current version) Jakob.scholbach (talk) 16:06, 21 January 2009 (UTC)


A vector space is a mathematical structure formed by a collection of objects, called vectors, that may be added together and scaled, that is, multiplied by numbers. For example, physical forces acting on some body are vectors: any two forces can be added to yield a third, and the multiplication of a force vector by a real factor—also called a scalar—is another force vector. General vector spaces are modelled on this and other examples, such as geometrical vectors, in that the operations of addition and scaling (scalar multiplication) have to satisfy certain requirements that embody essential features of these examples. In addition to scaling by real numbers, vector spaces with scalar multiplication by complex or rational numbers, or by elements of even more general mathematical fields, are also used.

Jakob.scholbach (talk) 16:06, 21 January 2009 (UTC)
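As a purely illustrative aside on the draft above (the numbers are invented for illustration, not part of the proposed text): writing a planar force in components, the two operations the draft mentions look like this in LaTeX notation,

\mathbf{F}_1 = (3,\,0), \quad \mathbf{F}_2 = (0,\,4), \qquad
\mathbf{F}_1 + \mathbf{F}_2 = (3+0,\,0+4) = (3,\,4), \qquad
2 \cdot \mathbf{F}_1 = (2 \cdot 3,\,2 \cdot 0) = (6,\,0).

Adding two forces adds their components, and doubling a force doubles each component.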


A vector space is a mathematical structure formed by a collection of objects called vectors, along with two operations called vector addition and scalar multiplication. Vector spaces are a primary topic of the branch of mathematics called linear algebra, and they have many applications in mathematics, especially in mathematical physics. The most basic example of a vector space is the set of "displacements" in N-dimensional Euclidean space. Intuitively, a Euclidean displacement vector is often thought of as an arrow with a given direction and length. Addition of displacement vectors is done by placing them end-to-end, with the vector sum being a vector that points from the beginning of the first vector to the end of the second vector. Scalar multiplication is done by altering the length of a vector while keeping its direction the same. Many of the properties of N-dimensional Euclidean vector spaces generalize to vector spaces based on other number systems, or to infinite dimensional vector spaces whose elements are functions.

Looie496 (talk) 17:48, 21 January 2009 (UTC)
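A similar illustrative aside (coordinates invented for the example, not part of the draft): in coordinates, the end-to-end rule and the rescaling rule described above come down to componentwise operations,

(a_1, \dots, a_n) + (b_1, \dots, b_n) = (a_1 + b_1, \dots, a_n + b_n), \qquad
c \cdot (a_1, \dots, a_n) = (c\,a_1, \dots, c\,a_n),

so the displacement (1, 2) followed by (3, -1) is the single displacement (4, 1), and stretching (1, 2) by a factor of 3 gives (3, 6).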


Lead suggestions

Places where confusion arises in the lead:

  • The sentence beginning "for example" - readers unfamiliar with mathematics will be confused by this accumulation of terms
  • Perhaps a simple definition could be added before the precise mathematical definition, one that would be more accessible to non-mathematicians.
  • The "history" paragraph seems to interrupt the discussion of vectors
  • The relationship between "collection of objects" and "physical forces", both described as vectors, is unclear.

These suggestions come from my writing class, which consists of college sophomores, juniors, and seniors from across the disciplines. Hope they help! Awadewit (talk) 17:44, 21 January 2009 (UTC)

Thanks! As you see, we are in the middle of the discussion. We'll use your hints. Jakob.scholbach (talk) 22:09, 21 January 2009 (UTC)
1) OK, 2) Hm. Currently (as previously) a pretty vague "definition" is given. Do you think "objects that may be added together and multiplied ("scaled") by numbers" is still too difficult to grasp? 3) OK. 4) OK, that was a mis-wording (the objects in the collection are vectors, the collection of vectors is the vector space). Is this clearer now? Jakob.scholbach (talk) 23:22, 22 January 2009 (UTC)
My class thought the new version was a dramatic improvement, in particular the phrase you have highlighted above - "objects that may be added....". Awadewit (talk) 20:31, 26 January 2009 (UTC)

"A vector space is a set"

Out of curiosity, haven't vector spaces as proper classes been considered in the literature? GeometryGirl (talk) 13:44, 22 January 2009 (UTC)

I haven't come across it but I would be surprised if it didn't exist. If you ever run across a good reference it might be nice to include it in the generalizations section. Thenub314 (talk) 14:49, 22 January 2009 (UTC)

Further comments on the lead

To support the ongoing FAC process, a few comments on the lead:

  • Could shorten first paragraph by taking out the in-line explanation of what a field is: it is already wikilinked and a reader not familiar with the concept is not likely to learn yet another new definition while reading the lead;
  • Euclidean vectors vs geometrical vectors: what is the intended difference? Current text appears to equate Euclidean vector with uses in physics, which is not really right. Should probably combine the terms geometric and Euclidean together (using only one of them) and then say that one very important use of these is representing forces in physics.
  • To be honest, I don't know what to do about Euclidean vectors. Physicists seem to insist on them; mathematically they have little or no importance (at least they are hardly ever called that). I tried to reword it to make clear that, in essence, the same concept is used both in physics and geometry. Jakob.scholbach (talk) 23:17, 22 January 2009 (UTC)
We had a long argument about them on Talk:Euclidean vector. If one accepts that a "Euclidean vector" is the same thing as a "contravariant vector", then we had a long discussion starting about here, and my conclusion was that contravariant vectors are tangent vectors. There was a moment when I was convinced they were something else, but I changed my mind later (see my last comment under here). Ozob (talk) 02:00, 23 January 2009 (UTC)
We physicists don't usually call them Euclidean vectors either. I don't think that n-dimensional Euclidean vector space means anything more specific than any n-dimensional vector space over R with a positive-definite inner product. If one wants to specify vectors acting on the particular Euclidean affine space used in non-relativistic physics to model physical space, one would just say "space vectors", "spatial vectors", "three-vectors" or stuff like that. But people on Talk:Euclidean vector seem to think otherwise, and I got tired of arguing. -- Army1987 – Deeds, not words. 18:10, 23 January 2009 (UTC)
  • The end of the first paragraph: "in that the axiom of vector addition and scalar multiplication embody essential features of these examples" is not very helpful and repeats the point. Could be cut to make the text mass lighter.
  • Linear flavor: what does this mean? Some could call this a circular reference...
  • Second paragraph should be split: it is not coherent as it begins with history (which should find its way in the lead) but ends with discussion of dimension.
  • Could it be more comprehensible to try to define dimension (finite, at least) vaguely in terms of "independent directions" existing in the space (technically this would be the maximal number of linearly independent vectors) rather than with "number of scalars needed to specify a vector" (technically size of minimal set of generators)? The reader at this stage does not know how a vector can be specified using a list of scalars (once a base is given) but the intuition about independent directions could be provided with the list of one direction in a line, two in plane and three in space.
  • "Convergence questions" is not very clear unless you know what's meant already. Could something like Analytical problems call for the ability to decide if a sequence of vectors converges to a given vector provide more flavour without adding much text?
  • I now get the impression that among topological vector spaces, Banach and Hilbert spaces are particularly complicated, a viewpoint I would not accept. The intention is presumably to claim that these are particularly important types of TVS, which is surely right.
  • Applications section is strangely skewed. It is true that given the almost ubiquitous applications of vector spaces both within mathematics and in other disciplines, it is hard to write a balanced paragraph. But singling out Fourier analysis looks unwarranted. Differential equations make sense, in that they were instrumental for the development of topological vector spaces. Local linearisation of manifolds may be a tad too technical as the other example. Systems of linear equations? High school background should make these something to relate to.
  • Do you mean that the article is skewed or that the lead is skewed? If the article is OK, then the lead has but to sum up the article, so a word about Fourier and friends seems logical? Systems of linear equations are now mentioned.

Stca74 (talk) 15:03, 22 January 2009 (UTC)

Using bullets for scalar multiplication

Don't you think that writing The product of a vector v and scalar a is denoted av, but then denoting it a · v in the rest of the article, can be confusing? Is the reader going to understand they refer to the same thing? Also, in equivalently 2 · v is the sum w + w, why should the same vector be referred to as v on the LHS and as w on the RHS, and why shouldn't there be a {{nowrap begin}}/{{nowrap end}} around the w + w, as there is one around similar expressions in the same paragraph? And why should the word ordered be hidden by a link such as pair, in flagrant violation of WP:EGG? -- Army1987 – Deeds, not words. 14:26, 24 January 2009 (UTC)

I guess you refer to my reverting your edit. Sorry, I had not realized these changes, only the removing of the dots. (I did watch the diff, but somehow missed them). I have reinstated your points (thanks for catching the 2*v = w+w, in particular) and put a notice that the product may also be denoted with a dot. I think points like rv = (rx, ry) could be confusing to some readers. Jakob.scholbach (talk) 15:43, 24 January 2009 (UTC)

The lead

In view of some of the problems people are having with the lead over at the FAC page, I thought I'd put something down here to see if this is more along the lines of what they want. I'm thinking that what is desired is that at least the first paragraph be some layman's terms way of describing what vector spaces over the reals are. Delving into other fields and such could then be left to the later paragraphs of the lead. So my question is, is the following the type of content that the opposition at the FAC page is looking for (clearly, the prose itself is quite lacking)?

A vector space is a mathematical structure that, in its simplest form, can be used to track one or more quantities. In this way, vector spaces generalize the real numbers, which can be used to track one quantity, as in "I am 5.2 km down the road from your house" or "I am missing 1.3 cups of sugar for this cake recipe" (-1.3 cups of sugar). An element of a vector space (called a vector) could represent a position using three distances, such as "I am 2.3 km east, 1.1 km north from you and 100 m below ground" (which can be represented as a triple of real numbers (2.3, 1.1, -0.1), measured in km), or it could represent how much sugar and flour one requires, as in "These cupcakes need 1 cup of sugar and 3.25 cups of flour" (which can be represented as a pair of real numbers (1, 3.25) ). Like real numbers, vectors in a vector space can be scaled and added together. In other words, one can speak of multiplying a vector by a real number, as in "I want to make 2.5 times as many cupcakes, so I will need 2.5 cups of sugar and 8.125 cups of flour" (written as 2.5 · (1, 3.25) = (2.5, 8.125) ), and one can speak of adding two vectors together, as in "This cake asks for 1.5 cups of sugar and 2.75 cups of flour, so in total I will need 2.5 cups of sugar and 6 cups of flour" (written as (1, 3.25) + (1.5, 2.75) = (2.5, 6) ). From a mathematical point of view, the specific quantities a vector represents are immaterial, so that only the number of quantities matters. For this reason, the mathematical structure of a vector space is determined by the number of quantities it tracks (called the dimension of the vector space).

One could then go on to say that "Mathematicians have generalized certain properties of the real numbers to invent the concept of a "field" ..." etc.

Now, I realize this is a rather poorly written paragraph, but in particular it seems hard to clearly describe what is going on without all the examples. Though perhaps they could be relegated to the "motivation" section. Also, for a mathematician, this is probably a non-ideal beginning since a mathematician prefers to say what something is before describing what it can do. However, it seems like a compromise is necessary. I hope what I've written can lead to some progress on the issue. Cheers. RobHar (talk) 16:13, 24 January 2009 (UTC)
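For what it's worth, the componentwise arithmetic in the draft checks out mechanically; here is a minimal Python sketch of the two operations it uses (illustrative only, obviously not proposed article content):

def add(u, v):
    # componentwise vector addition
    return tuple(x + y for x, y in zip(u, v))

def scale(a, v):
    # multiply each component by the scalar a
    return tuple(a * x for x in v)

print(scale(2.5, (1, 3.25)))        # (2.5, 8.125) -- 2.5 batches of cupcakes
print(add((1, 3.25), (1.5, 2.75)))  # (2.5, 6.0)   -- cupcakes plus cake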

Thanks for your suggestion. Frankly, I would be somewhat unhappy to have such a paragraph in the lead section. It contradicts the credo (or guideline, if you want) that working out detailed examples should be avoided. Also, we are not writing a textbook (or cookbook :), sorry I couldn't resist). I like "[v.sp.] can be used to track one or more quantities." and we could perhaps integrate that to the lead. I'm repeating myself, but we can not explain the whole concept from scratch in the lead section of the article. This would be totally unbalanced (also contradicting some guideline). If anywhere, we can explain it with this level of detail in the body of the text. But, I think even there it is inappropriate to do it this way.
I'd like to put here, for comparison, the relevant lead section paragraph of group (mathematics), which is a featured article whose accessibility has been validated by lay readers. It reads
In mathematics, a group is an algebraic structure consisting of a set together with an operation that combines any two of its elements to form a third element. To qualify as a group, the set and operation must satisfy a few conditions called group axioms, namely associativity, identity and invertibility. While these are familiar from many mathematical structures, such as number systems—for example, the integers endowed with the addition operation form a group—the formulation of the axioms is detached from the concrete nature of the group and its operation.
I should say I'm probably biased, because I contributed to that, but I think it has the right spirit of succinctly picking a simple key example and alluding to the concrete definition with its motivation/background/... which comes in the body. There will be many readers who will only fully understand the "integers and addition" thing. So what? We can not, for example (Notice that there are differences: e.g., group axioms are fewer. We should not mention the axioms of vsp in the lead).
Another example that comes to my mind: if you wanted to write the lead section for plane, say, you would not be able to explain the basics of aerodynamics in detail, but would perhaps just write that "Aircraft often rely on curved wings, creating a difference in air pressure above and below the wings". You could not and would not explain what pressure means, why moving air creates pressure differences, etc. I think we have to face the reality that certain concepts can not be explained from scratch in one paragraph. Doing our best to educate the reader with the text body is our duty, and we should excel in it. However, putting everything into the lead is simply not going to work. Jakob.scholbach (talk) 16:59, 24 January 2009 (UTC)
RobHar's suggestion is well-meaning but totally contrary to WP:LEAD ("The lead serves both as an introduction to the article below and as a short, independent summary of the important aspects of the article's topic.") and the principle that Wikipedia is not a textbook. Geometry guy 20:39, 24 January 2009 (UTC)
Using the Group (mathematics) article's lead as a basis (pun not intended):
In mathematics, a vector space is an algebraic structure consisting of a set of vectors together with two operations, addition and scaling. These vectors track one or more quantities, such as displacement (the distance travelled and in which direction), or force. To qualify as a vector space, the set and operations must satisfy a few conditions called axioms. These axioms generalize the properties of Euclidean space (e.g. the plane, an idealized flat surface), but their formulation is detached from the concrete nature of any given vector space. Concepts like length and angle may be undefined or counter-intuitive in certain vector spaces.
I think this is a bit too abstract. Also, Euclidean space is too technical. Is there anything else that vector spaces generalize that is less technical? Alksentrs (talk) 17:55, 24 January 2009 (UTC)
Let me point out that the Euclidean vector article is quite nice, with minimal assumptions of background on the part of the reader. It might be helpful to direct readers with a weak background there—a reader who has read that article should be in a much better position to understand this one. Looie496 (talk) 18:15, 24 January 2009 (UTC)

Three elementary consequences of the definition need to be given explicitly

I think that three important consequences of the definition need to be stated. Namely that for all scalars a, and vectors u, the following hold:

  1. a0 = 0
  2. 0u = 0
  3. (−1)u = −u

Note that 2 is expressed in words, but I think it would be good to express it as a formula.

Paul August 17:41, 24 January 2009 (UTC)
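For reference in this discussion, all three follow in a line or two from the axioms (a standard derivation, sketched here, not a proposed article addition):

a \cdot \mathbf{0} = a \cdot (\mathbf{0} + \mathbf{0}) = a \cdot \mathbf{0} + a \cdot \mathbf{0} \;\Longrightarrow\; a \cdot \mathbf{0} = \mathbf{0},

0\,\mathbf{u} = (0 + 0)\,\mathbf{u} = 0\,\mathbf{u} + 0\,\mathbf{u} \;\Longrightarrow\; 0\,\mathbf{u} = \mathbf{0},

\mathbf{u} + (-1)\,\mathbf{u} = \bigl(1 + (-1)\bigr)\,\mathbf{u} = 0\,\mathbf{u} = \mathbf{0} \;\Longrightarrow\; (-1)\,\mathbf{u} = -\mathbf{u},

where the first two use cancellation (add the additive inverse of a · 0, respectively 0u, to both sides) and the third uses the uniqueness of additive inverses.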

I'm not sure. You are right that it is somewhat important to know these facts, but we can not (for space reasons) put everything that is important. Also, we have to maintain balance of elementary and advanced material. That said, I would propose adding a precise citation to the relevant paragraph, pointing to a book or so that has these (and further) elementary consequences. Jakob.scholbach (talk) 15:55, 25 January 2009 (UTC)

2nd paragraph of lead

I think the first paragraph has improved substantially, although it still needs work. I would like to make a couple of comments about the 2nd paragraph, which I feel misses opportunities. What I would like it to say is that the basic vector space properties are too weak to give any very interesting consequences, and that the additional structure needed to make them interesting is a norm—a concept of length. With a norm, you get a concept of distance, and therefore a topology. One particularly important way to get a norm is by means of an inner product. Thus you get Banach spaces, inner product spaces, and Hilbert spaces, with increasing levels of mathematical structure. Looie496 (talk) 18:11, 24 January 2009 (UTC)
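As a side note for readers following this thread (standard facts, stated only for orientation, assuming a real or complex inner product space): an inner product induces a norm, a norm induces a metric, and a metric induces a topology,

\|v\| := \sqrt{\langle v, v \rangle}, \qquad d(u, v) := \|u - v\|,

so convergence and completeness make sense; completeness with respect to a norm gives a Banach space, and completeness with respect to a norm arising from an inner product gives a Hilbert space.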

This is not true. Plain old vector spaces are plenty interesting. For example, you don't need norms or inner products for some topics in differential geometry, such as the theory of differential forms. Nor do you need them for some convex geometry. It's very useful to look at homology and cohomology groups with coefficients in a field, and these are vector spaces. And as Stca74 pointed out above, over finite fields there are no non-trivial norms, so the idea of a Banach or Hilbert space is uninteresting. (The p-adic functional analysis I've looked at is very weird.) Ozob (talk) 23:50, 25 January 2009 (UTC)
This is exactly what I would reply, too. Jakob.scholbach (talk) 08:44, 26 January 2009 (UTC)

Mentioning nationalities of mathematicians in "History"

I've seen that some names are preceded by a nationality ("French mathematicians René Descartes and Pierre de Fermat"), while others aren't ("Bolzano introduced"). This should be made consistent. Were this a more popular topic, we'd already have attracted the anger of nationalist fellow citizens of Möbius, Cayley etc. But I'm reluctant to fix it myself because there would be the burden of deciding whether to mention all or no nationalities. What do you think? There is a similar issue for given names, but a good compromise for this would be using initials (e.g. R. Descartes and B. Bolzano). -- Army1987 – Deeds, not words. 13:18, 25 January 2009 (UTC)

I guess we should remove all nationalities. Unless we want to make it a point that, say, the (fictional) Icelandic school of algebraists was instrumental in pushing forward v.sp., the nationalities play no role. I would also remove all given names, for brevity's sake. Jakob.scholbach (talk) 13:52, 25 January 2009 (UTC)