
User talk:Quondum/Archive 1


Welcome

Welcome note (to me)

Welcome!

Hello, Quondum, and welcome to Wikipedia! Thank you for your contributions. I hope you like the place and decide to stay. Here are some pages that you might find helpful:

I hope you enjoy editing here and being a Wikipedian! Please sign your messages on discussion pages using four tildes (~~~~); this will automatically insert your username and the date. If you need help, check out Wikipedia:Questions, ask me on my talk page, or ask your question on this page and then place {{helpme}} before the question. Again, welcome! strdst_grl (call me Stardust) 16:16, 10 May 2010 (UTC)

Vagueness of definition of Clifford algebras

I invite interaction on [[1]] (moved from this page) from knowledgeable (especially mathematically trained) persons on the definition of Clifford algebras. Quondumtalkcontr

Template in signature

Which template are you using in your signature? Because you're introducing 3 parser functions every time you sign your name, a practice which is actually forbidden. I'm asking so that we can safesubst the template. Magog the Ogre (talk) 10:53, 2 November 2011 (UTC)

I'm using {{SUBST:su|}}. I'm a rookie at this, and there seems to be no obvious "user manual" for building signatures. I'd be happy to change it if you let me know what alternative I should use for a similar signature. Quondumtalkcontr 11:07, 2 November 2011 (UTC)

OK, I made the template substitutable, so it shouldn't be a problem anymore. Magog the Ogre (talk) 17:21, 21 November 2011 (UTC)

Poynting

https://en.wikipedia.org/w/index.php?title=Poynting_vector&diff=461715242&oldid=461714499

I think "through a surface" is more intuitive than "per unit area", but it gets wordy to include both. — Omegatron (talk) 05:16, 21 November 2011 (UTC)

As a first approximation, yes, but "through a surface" does not give enough information on its own and in particular does not imply that the area of the surface is relevant, and hence does not really say what it is meant to say. Encyclopaedic statements must (IMO) not sacrifice correctness purely to make them pedagogical. I think "directional energy flux density" is intuitive enough without the clarification; I did not remove the clarification because some may find it helpful. Quondumtalkcontr 05:33, 21 November 2011 (UTC)

Geometric algebra bases

  • After searching through literature, I couldn't find any other authors that agree with my source (Bishop and Goldberg) on using "orthonormal" in the extended sense of allowing -1,0 and 1, so I did change Geometric algebra to describe that clearly, rather than overload the term "orthonormal". Overloading it is practical but only if it becomes well known in the literature.
  • I have to disagree about changing the wording in the description of the standard basis. Every construction I've read so far adheres to "strictly ascending indices" for "standard bases". Allowing permutations in each element does result in a basis but that's overcomplicated. Rschwieb (talk) 19:24, 27 November 2011 (UTC)
  1. Yes, I think it's nicely done. The standard overloading allowing ±1 is bad enough.
  2. I think the wording "forms a standard basis" as you have it is semantically flexible enough to imply "is one way of forming...", which is fine. I was being unnecessarily picky: it is clear that showing the allowable variation at this point will be confusing at best. Quondumtalkcontr 19:47, 27 November 2011 (UTC)
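To make "strictly ascending indices" concrete (a standard example, not taken from any particular source): for generators e1, e2, e3 the standard basis of the geometric algebra is

    \{\,1,\ e_1,\ e_2,\ e_3,\ e_1e_2,\ e_1e_3,\ e_2e_3,\ e_1e_2e_3\,\},

with each basis blade written with strictly ascending indices; a permuted product such as e_2e_1 is not listed separately, since it equals −e_1e_2.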

Extended inner products

I am positive you will be interested in this[2] if you haven't seen it already. Rschwieb (talk) 03:23, 29 November 2011 (UTC)

This is a lovely treatise on the subject of the product generalizations – definitely worth making a section of as you did (though I had a go at "adjusting" it, as you'll notice). It mirrors so closely what I've been saying that I feel I must have read it before. It may be worth noticing that the scalar product defined by Dorst is different from the one in Clifford algebra#The Clifford scalar product. This one is nicer in that it does match the bilinear form on every candidate generating subspace. I felt it was necessary to put in the deprecation of the "original" inner product: it is the best-known, and it makes sense for an encyclopaedia to define it and clarify its status. I'll debate the issue of direct sum in Talk:Geometric_algebra#Use of direct sum ⊕ in section Pseudoscalars, since this is of general interest. Quondumtalkcontr 15:51, 29 November 2011 (UTC)
Ok sounds like it was a success then. I have to depend on you to know which versions are more common... I have so few resources! Rschwieb (talk) 20:16, 29 November 2011 (UTC)

Footnote, Speed of light

Hi. Could you explain the part of your recent edit summary that says, "Still a minor issue: the column heading "Distance" doesn't apply." --Bob K31416 (talk) 18:34, 3 December 2011 (UTC)

I've partly explained in Talk:Speed of light#Trimming list of signal times. I'll put a bit more detail here. The column is headed "Distance", but no indication is given that the distance changed manyfold while the light was in transit. For argument's sake, at the time the light was emitted by the furthest observed galaxy it may have been only 1 billion light years away, and now it is perhaps 100 billion light years away (illustrative figures: they're complete guesses). Cosmology is not part of the article, but is necessary for understanding this. So using this as an illustration of the speed of light seems a little problematic. Quondumtalkcontr 20:34, 3 December 2011 (UTC)
Well, that's assuming that distance can only mean ‘cosmological proper distance’, when it can have other meanings as well, and in that context it's obvious which one is meant. (FWIW, I've seen the Wikipedia article about that galaxy, and the comoving distance is 31.7 billion light-years and hence it was 31.7/(1 + z) = 2.8 billion light-years back then.) ― A. di M.​  21:00, 3 December 2011 (UTC)
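For anyone checking that arithmetic: the proper distance at emission is the comoving distance divided by 1 + z; the redshift value below is inferred from the figures quoted above rather than sourced independently.

    d_\text{emission} = \frac{d_\text{comoving}}{1+z} \approx \frac{31.7\ \text{Gly}}{1 + 10.3} \approx 2.8\ \text{Gly}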
Obvious perhaps to a cosmologist, definitely not to the average reader of the article. And light travel distance is exactly what I was calling tautological. Thanks for the better figures. Quondumtalkcontr 21:21, 3 December 2011 (UTC)

SoL animation

Hi. Just double checking something. Did you look at the section Speed_of_light#Astronomy before making your comments? --Bob K31416 (talk) 17:45, 6 December 2011 (UTC)

Yes, I did. It makes the simple point that the finite travel time of light puts observations in the past depending on distance, sometimes substantially, and notes the observational benefits thereof. This much is relevant and popularly understood. It does not make any reference to red shift etc. So if we bring in factors such as recession, red shift, expansion of the universe and subtleties such as illustrated in the proposed animation, we may be in for a lot of explaining: these cosmological factors probably come as a package deal because they're so tightly connected with each other. See Metric expansion of space#Understanding the expansion of Universe for a nice graphic illustrating exactly the same thing as the animation does in a different form, plus a reasonable explanation. Quondumtalkcontr 18:20, 6 December 2011 (UTC)
I can understand that the animation has more information than what is presently in the section, since after all, that was its purpose. But not to the extent that you are mentioning, for example red shift??? And the "nice graphic" that you were referring to is extremely difficult to understand and in no way compares to the simplicity of the animation. It's as if you have had some history regarding these issues or some discussions other than the recent ones on the SoL talk page. Is that the case? --Bob K31416 (talk) 02:07, 7 December 2011 (UTC)
I can imagine red shift or similar concepts having to be introduced simply to explain what the animation illustrates. History as you suggest? I'd say no. Perhaps I express my opinions too intensely, but I regard these as opinions merely to be considered, no more than that. Since the inference here seems to be that something other than a collective editorial decision is at issue, I'll stand back from the Speed of light article. Quondumtalkcontr 06:10, 7 December 2011 (UTC)
Actually, I was planning on standing back from the SoL article. : ) Sometimes interactions through just writing, without the added communication of body/face language can be problematic. I don't know if the following will be helpful but let me say that I felt like I was interacting with two considerably different personalities in your messages. At first your messages seemed to me to be quite reasonable and open to considering others' ideas, then your messages seemed to be aggressive and closed to my ideas to the extent that the messages seemed somewhat hostile, and now your messages here seem to be returning back to the personality that I originally perceived. Now this is just my impression, which is all I can express, and I hope you recognize it is not meant to be argumentative or critical but rather to give you an idea of my perception of your messages.
Regarding the problem of digression in general from the topic of SoL, this is a consideration that I have had myself and I have expressed it in my messages and actions at SoL. A concrete example is when I trimmed the paragraph about the recent neutrino experiment. Regarding your concern about the animation leading to a digression, one might consider doing the same thing I did with the neutrino experiment in SoL, i.e. refer the reader to another article instead of discussing the information in the other article here.
But from your messages, I think there may be some additional problem(s) that you have with the animation which may or may not be apparent to yourself, but that's just speculation on my part. Please note that my comments at talk SoL regarding resistance weren't directed just towards you but were just as much directed to the other two editors on that thread, although one editor withdrew his comment.
SoL has had a troubled editing past with some editors having their accounts blocked for long times. I don't know if you were around then, but I know that some other editors currently active were, and that may be influencing their actions, or their intrinsic nature may have contributed to the troubles in the past and may continue to surface. As far as how I handle these situations, I recognize that Wikipedia is a big place with a diversity of editing environments, my participation is completely voluntary, and that we are all human with all the strengths and weaknesses of that condition. Regards, --Bob K31416 (talk) 13:08, 7 December 2011 (UTC)
The way I express myself may be variable; the degree of variability is difficult to judge subjectively. I think that we were each failing to respond to some points the other made. The occasional misinterpretation may have played a part.
I'm with you on using links.
On the animation, I'm not wild about it. But this is secondary to relevance.
My first edit on SoL or its talk page was on 2011-10-28, so no, I wasn't involved. I only became aware of the article a little earlier. I enjoy Wikipedia as an environment for collaboration and learning, including the diversity of personalities. I think this is largely due to its inherently voluntary nature. Quondumtalkcontr 15:01, 7 December 2011 (UTC)
Thanks. That helps me understand the situation better. Bye, --Bob K31416 (talk) 15:13, 7 December 2011 (UTC)

News?

Hi, hope your holidays went well. I was wondering if you had any recommendations for computer programs for GA? I saw the list on the webpage but it doesn't tell me much. Rschwieb (talk) 22:46, 10 January 2012 (UTC)

My own experience in this regard has been entirely negative. I have not yet found a single package or library that I could get running, partly through lack of perseverance, I suppose. I am not prepared to buy Matlab or similar runtime requirements for what is an exploratory exercise, and with C/C++ libraries I have not figured out how to use them (identify the interface and how to instantiate them with my non-Linux system) before losing patience. I also have experience with graphics interfaces that is indistinguishable from nil. The closest I have come is a little experimentation in writing my own C++ classes for a CA, including writing a fairly efficient inversion routine for general multivectors in a real CA with a generating vector space of up to 5 dimensions. It does not help that my brain feels like it has turned to mush every time I try to derive a projection of a vector onto a bivector from purely geometric principles. So, sorry, not much help from me, unless you want to co-build something from the ground up. I would be interested in anything you find. — Quondum 04:58, 11 January 2012 (UTC)
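For what it's worth, the target of that derivation has a compact algebraic form; this is the standard decomposition of a vector a with respect to an invertible blade B, stated here without the geometric derivation (⌟ denotes the left contraction):

    P_B(a) = (a \,\lrcorner\, B)\,B^{-1}, \qquad a = \underbrace{(a \,\lrcorner\, B)\,B^{-1}}_{\text{projection onto }B} \;+\; \underbrace{(a \wedge B)\,B^{-1}}_{\text{rejection from }B}.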
I have a general lack of experience with computer algebra systems, which I'd like to change. I've been looking into the free systems available online to become more familiar with them. Most of my experience is with Matlab, through institutions I teach at. I've never really worked with a graphical interface (other than graphing calculators and on Matlab) but I suppose if one wants to learn geometry it should be incorporated into the program. Mainly I think of a program doing tedious computations. Or as a fast way to check my tediously done hand computations. Rschwieb (talk) 11:51, 11 January 2012 (UTC)
Writing a program occasionally also helps me to clarify subtle thoughts about abstractions. But in this context, a benefit would be as a visualization tool. In your position, I'd be writing for Matlab. What were you wanting to do with a GA computer program? — Quondum 12:26, 11 January 2012 (UTC)
To be honest I have no solid goal in sight! I'm still in the beginning stages of getting tensor calculus, relativity, electromagnetism and differential geometry under control. I lack programming experience and experience with computer algebra systems, and I guess I just wonder what I'm missing out on. I'm looking to enhance my research capabilities with these things. The most interesting project I carried out in Matlab was to build a few strange finite rings that I found in a paper. I'd especially like to be able to look at infinite objects this way in other CAS's somehow. Rschwieb (talk) 16:56, 11 January 2012 (UTC)
There seem to be many CASs listed on Wikipedia, but the sample of interesting ones seem to produce a lot of dead links. One that looks interesting for my purposes is SymbolicC++, but it apparently does not yet support anything particularly abstract; it only goes as far as sets, quaternions, vectors and matrices. No CA. But it does have symbolic manipulations, including non-commutative algebra. For it you need a C++ compiler. And it can be extended by writing your own code. — Quondum 18:11, 11 January 2012 (UTC)

So many things to learn and so little time... Rschwieb (talk) 22:35, 11 January 2012 (UTC)

Do you have access to a C++ compiler? I've downloaded and built SymbolicC++ on Visual C++ 2010 (with a minor hiccup), but so far it is showing promise. I have yet to actually make it do something. They make mention of working with exterior algebras using the CAS, so perhaps we could apply it to geometric algebra. Do you know of a TeX-like text format that accurately encodes expression semantics? — Quondum 11:54, 12 January 2012 (UTC)
I've got a C++ compiler on a disk here somewhere, but I have no idea how to use it. I just recently read that Xcas and Axiom have LaTeX compatibility, but I don't know any details. Rschwieb (talk) 12:21, 12 January 2012 (UTC)
As far as the expression format is concerned, I was wanting a format to store and transfer expressions in so that they could be algebraically manipulated after input. LaTeX does not seem to be suitable (it discards huge amounts of semantic information that humans still infer). I imagine some XML format will be necessary. It's not important at this stage. Producing LaTeX for display should not be too difficult. If you have a Windows system and are interested in this route, you might want to get Visual C++ 2010 (no cost from Microsoft, just bandwidth). I'll play around with it a bit and see what I can get it to do. — Quondum 12:42, 12 January 2012 (UTC)
OK, I'll grab a copy of that. It sounds functional! I have almost nil experience storing expressions: the closest I got was that Matlab allows you to save variables to a file for retrieval later. What I want to find is a program that can multiply generators symbolically. In the past, I've just numbered the elements of the finite ring/group I was looking at, and made multiplication and addition tables to deduce the ideals. Really, that, and some sort of special tools for examining ideals and modules would be great. I wonder what capabilities these things have for looking at infinite rings and groups... Rschwieb (talk) 14:04, 12 January 2012 (UTC)

GeoGebra is also another fairly well known geometry program. It's written in Java. Does that have any potential to be adapted for GA? Rschwieb (talk) 14:32, 12 January 2012 (UTC)

My impression from glancing around at GeoGebra and what it links to is that it is aimed at high school students: Geometry, algebra, graphs, differentiation, integration. It seems to be a teaching tool rather than a research tool. My guess is that even with the source, to adapt it to GA would be a huge task.
I've had a little more of a look at SymbolicC++. It has some rough edges, and aspects of it are perhaps a little crude. For example, it seems to handle all symbolic manipulation as strings, not as optimized data structures, forcing a huge amount of parsing. This results in it being tremendously slow. There are one or two bugs (e.g. its very-big-integers, also handled as decimal digit character strings, crash when a subtraction yields zero – easily fixed, but symptomatic of software that is not very mature). It is not too obvious how one specifies general rules, such as cos²x + sin²x = 1 for any expression x of a given type; their example specifies it for a given variable, and it only works with the expression being that variable. They implement a Grassmann algebra in an example (which is trivially modifiable to a Clifford algebra) by specifying the manipulation rules (by means of substitutions) for the standard basis. After specifying the result of every pairwise multiplication between elements of the basis of the vector space, they throw an expression (the product of four 1-vectors expressed in terms of the basis) at it to simplify, and as you'd expect, the result is a 4-vector. I've got a feeling that it does an awful lot of brute-force substitution using rules, and that it will become very slow when the number of rules and/or the size of the data gets large. The tensor example they have does not treat tensors as an algebra, but rather as a large number of components. I'm progressively starting to think that SymbolicC++ was a thesis project and not a serious research tool. Oddly, they've published a book about it. Maybe I've missed something. — Quondum 16:53, 12 January 2012 (UTC)
PS I've just located a review of the SymbolicC++ book by Francis Glassborow, a person who is in my estimation highly competent in the C++ field. His verdict: "not recommended". In fact, he slates it, though he does not go into the CAS side much. So my impression from looking at the code was not far off. It seems this is one for the dustbin. — Quondum 17:18, 12 January 2012 (UTC)
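To illustrate that the basis-product bookkeeping SymbolicC++ does by string substitution can be done directly, here is a minimal C++ sketch of my own (not SymbolicC++): basis blades are represented as bitmasks, the sign of a product comes from counting transpositions, and the diagonal metric array turns the Grassmann case into a Clifford one. All names here are invented for the sketch.

    #include <cstdio>
    #include <string>

    // Minimal basis-blade arithmetic for a diagonal-metric Clifford algebra (here Cl(5)).
    // A basis blade is a bitmask over the generators e1..e5; bit i set means e(i+1) is a factor.
    // Setting metric[i] to 0 would make e(i+1) behave as a Grassmann (exterior-only) generator.
    static const int DIM = 5;
    static const int metric[DIM] = {+1, +1, +1, +1, +1};

    static int popcount(unsigned x) { int n = 0; for (; x; x >>= 1) n += x & 1u; return n; }

    // Sign picked up when interleaving the factors of blade a with those of blade b
    // into a single ascending-index product (each transposition contributes -1).
    static int reorder_sign(unsigned a, unsigned b) {
        int swaps = 0;
        for (a >>= 1; a; a >>= 1) swaps += popcount(a & b);
        return (swaps & 1) ? -1 : +1;
    }

    // Geometric product of two basis blades: the result blade is a XOR b
    // (repeated generators square to metric[i] and drop out); the sign accumulates.
    static unsigned blade_product(unsigned a, unsigned b, int &sign) {
        sign = reorder_sign(a, b);
        for (int i = 0; i < DIM; ++i)
            if (a & b & (1u << i)) sign *= metric[i];
        return a ^ b;
    }

    static std::string blade_name(unsigned m) {
        if (!m) return "1";
        std::string s = "e";
        for (int i = 0; i < DIM; ++i)
            if (m & (1u << i)) s += char('1' + i);
        return s;
    }

    int main() {
        // The product of four distinct 1-vectors, e1 e2 e3 e4, reduced one factor at a time:
        unsigned acc = 0x1;                        // e1
        int total_sign = 1, s = 0;
        const unsigned rest[] = {0x2, 0x4, 0x8};   // e2, e3, e4
        for (unsigned v : rest) {
            acc = blade_product(acc, v, s);
            total_sign *= s;
        }
        std::printf("e1 e2 e3 e4 = %s%s\n", total_sign < 0 ? "-" : "", blade_name(acc).c_str());

        unsigned r = blade_product(0x2, 0x1, s);   // e2 e1 = -e12 (anticommutation check)
        std::printf("e2 e1 = %s%s\n", s < 0 ? "-" : "", blade_name(r).c_str());
        return 0;
    }

This is only the blade level; a full multivector would be an array of 2^DIM coefficients indexed by these masks, with products distributed over the array.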

GA tools

So what tools can we expect to have at our hands, in a good GA program? You can delete the stuff above here and expand this list:

  • Given two multivectors, compute their geometric/wedge/contraction product
  • Given an invertible multivector, compute the inverse
  • Given generators of a subspace of V, determine a blade/pseudoscalar
  • Given a vector and a rotated version, determine the corresponding rotation multivector
  • Given a basis and a rotated version, determine the corresponding rotor
  • Given a rotation plane/bivector and an angle, determine the corresponding rotor (i.e. exponentiation)
  • Rotation, Projection, and reflection operations
  • (more)

ALSO take a look at another C++ CAS Plural Rschwieb (talk) 17:22, 12 January 2012 (UTC)

The Singular/Plural website has so little on it that I find it difficult to judge. This suggests that it does not yet have much of a user base.
Your approach to the list raises a few questions. Would you want something that can do straightforward numeric manipulations (e.g. +, −, ×, inverse)? Or do you want symbolic manipulation (I would want that, especially coordinate-free symbolic manipulation)? Display for visualization? All of the operations on your original list are straightforward to implement numerically in a crude program or calculator, except for inverses in higher dimensions and handling ill-conditioned cases. Exponentiation and logarithm are probably a bit tricky (though a Taylor series would do the job for exponentiation).
Something like Geometric Algebra for Computer Science may be good if you have time and money, and do not want the CAS side. — Quondum 18:30, 12 January 2012 (UTC)
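On the exponentiation item: for a Euclidean-signature plane no series is actually needed at run time, because a unit bivector squares to −1 and the exponential collapses to a closed form (one common sign convention shown; the orientation of B̂ fixes the sense of rotation):

    R = e^{-\hat{B}\theta/2} = \sum_{k\ge 0}\frac{(-\hat{B}\theta/2)^k}{k!}
      = \cos\frac{\theta}{2} - \hat{B}\sin\frac{\theta}{2},
    \qquad v \mapsto R\,v\,\tilde{R}.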
As I mentioned before, I'm not sure what to expect from a program. I never really feel the urge to see graphics so far. Part of this is just finding out what can be done. Rschwieb (talk) 21:07, 12 January 2012 (UTC)
I am having difficulty inferring what would be of interest to you. You will obviously know that addition, subtraction and multiplication in terms of numeric components will be straightforward and computationally stable. The standard basis CA/GA representation of vectors, rotations and the like is one of the most stable of all representations; it does not exhibit gimbal lock, but there are still some inherently ill-conditioned problems. Provided that the input/output requirements remain minimal, I could customize a "calculator", adding what we need. Eventually graphic capability may be possible. Entering symbolic expressions should not be difficult, but manipulating them symbolically is another story. Some of the things such as "determine a rotor" you will want to deal with algebraically. — Quondum 08:41, 13 January 2012 (UTC)

I should give an example of what I was thinking of for symbolic computation. I would like it to be able to take a list of symbols "a,b,c" and a list of relations (say ab=ba, ac=b ...) and then accept inputs consisting of powers and products of a's, b's and c's, and then be able to use the relations to multiply and sort the generators alphabetically. Programs like Matlab do this of course when they handle polynomials, but that's just one particular example of a group/ring, and I wonder how to implement more. Especially those with infinitely many elements. Rschwieb (talk) 12:11, 13 January 2012 (UTC)

SymbolicC++ does pretty much this. You give it relations like you describe, and you can turn off commutativity. Using these relations, it can then simplify a symbolic expression. How about giving me an example of defining relations and several expressions you want simplified, and we can use it as a trial? — Quondum 12:44, 13 January 2012 (UTC)
A simple example would be to implement the multiplication table for a dihedral group of a given size. Maybe a more sophisticated application would be this: for a free algebra on given generators, with given relations, given an element x, determine the form of elements in the right principal ideal xA.
Another thing: I'm looking for a list of programming projects that one can exercise in any language. The idea is that they should help me learn the generic aspects of programming necessary in C++ and maybe Python. Rschwieb (talk) 14:02, 13 January 2012 (UTC)
I've managed to coax it to produce the following table for Dih5.

            Dihedral group with n = 5 (Dih5)

            rules:
            [ r^(5) == 1,
              s^(2) == 1,
              s*r == r^(-1)*s,
              r^(-1) == r^(4) ]

            [ 1 r r^(2) r^(3) r^(4) s r*s r^(2)*s r^(3)*s r^(4)*s]
            [ r r^(2) r^(3) r^(4) 1 r*s r^(2)*s r^(3)*s r^(4)*s s ]
            [ r^(2) r^(3) r^(4) 1 r r^(2)*s r^(3)*s r^(4)*s s r*s ]
            [ r^(3) r^(4) 1 r r^(2) r^(3)*s r^(4)*s s r*s r^(2)*s]
            [ r^(4) 1 r r^(2) r^(3) r^(4)*s s r*s r^(2)*s r^(3)*s]
            [ s r^(4)*s r^(3)*s r^(2)*s r*s 1 r^(4) r^(3) r^(2) r ]
            [ r*s s r^(4)*s r^(3)*s r^(2)*s r 1 r^(4) r^(3) r^(2) ]
            [r^(2)*s r*s s r^(4)*s r^(3)*s r^(2) r 1 r^(4) r^(3) ]
            [r^(3)*s r^(2)*s r*s s r^(4)*s r^(3) r^(2) r 1 r^(4) ]
            [r^(4)*s r^(3)*s r^(2)*s r*s s r^(4) r^(3) r^(2) r 1 ]

This will work for any size group, if you're prepared to wait. I can probably get it to substitute the 2n names back after it has worked out each entry. You will notice the redundant (last) rule: this is because it appears to work by literal substitution. You have to provide rules for literal substitutions, and it does not seem to be able to apply inference to make related substitutions. Rather pathetic, but it can at least generate tables.
To get the right principal ideal xA, I presume one would generate the ring A as a set, and then left-multiply it by x. Redundant elements in the result would then be eliminated from the set. One might be able to do this with SymbolicC++ with small finite rings. But it is really a bit pathetic for serious use.
On programming problems, almost all languages can do anything the others can; it's simply a case of what is more natural in each. I have no experience of Python. They all take a while to learn to use with some proficiency. — Quondum 20:11, 13 January 2012 (UTC)
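For comparison, the same table can be produced without a symbolic engine by keeping every element in the normal form r^i s^j; the relation s r = r^(-1) s is what the j = 1 branch below implements. A quick C++ sketch of my own, not SymbolicC++:

    #include <cstdio>
    #include <string>

    // Dihedral group Dih_n with presentation <r, s | r^n = 1, s^2 = 1, s r = r^(-1) s>.
    // Every element has the normal form r^i s^j with 0 <= i < n and j in {0, 1};
    // encode it as the integer i + n*j.
    static const int N = 5;

    static int mul(int a, int b) {
        int i = a % N, j = a / N;                    // a = r^i s^j
        int k = b % N, l = b / N;                    // b = r^k s^l
        int exp = (j == 0) ? (i + k) % N             // r^i r^k s^l
                           : ((i - k) % N + N) % N;  // r^i s r^k s^l = r^(i-k) s^(1+l)
        int flip = (j + l) % 2;
        return exp + N * flip;
    }

    static std::string name(int a) {
        int i = a % N, j = a / N;
        std::string s;
        if (i == 0 && j == 0) return "1";
        if (i == 1) s = "r";
        else if (i > 1) s = "r^" + std::to_string(i);
        if (j) s += s.empty() ? "s" : "*s";
        return s;
    }

    int main() {
        for (int a = 0; a < 2 * N; ++a) {
            for (int b = 0; b < 2 * N; ++b)
                std::printf("%-8s", name(mul(a, b)).c_str());
            std::printf("\n");
        }
        return 0;
    }

Changing N gives Dih_n for any n; the row and column ordering here matches the listing above (row element times column element).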
Hmm, you know, we should be looking at software developed for studying Rubik's cube. While it's not infinite, it's certainly big and complicated enough to require specialized tools. Rschwieb (talk) 12:39, 14 January 2012 (UTC)
Yes, for anything serious I don't think that SymbolicC++ is useful - it seems to do little more than a find-and-replace string editor could do with suitable rules. To deal with Rubik's cube needs something far more sophisticated due to the size of the group. To get to know what can be done with each of the packages will be laborious. Wikipedia does not give a realistic sense of what they are. So I don't think we've achieved much so far. — Quondum 13:22, 14 January 2012 (UTC)

Euler's formula

"I'm not sure whether x = π is the ideal choice for the illustration, though I'm aware that it is simply what was available as a GIF." I made the GIF for Euler's identity of course, not really for Euler's formula. I'm happy to make a new one with a different x. It's very easy, I just change one or two lines of the program. What value of x do you suggest (and why)? :-) --Steve (talk) 13:35, 17 January 2012 (UTC)

For purposes of giving a general sense for the Euler identity, I'd say it should end up as an obviously general complex value (i.e. not on or close to any of the axes, and hence x should not be a multiple of π/2). For reasons of simplicity of illustration and understanding, it would probably best fall in the first quadrant (0 < x < π/2). Also in the first quadrant the scaling would be better, showing more detail. Putting x = 1 feels good and fits well with the illustration in the template {{E (mathematical constant)}}, but you may want to use a more "nonspecial" value, such as 1.1 or 1.2. The template diagram seems to be about 1.2 radians. Oh, and while you're at it, it would be nice if the name N in the diagram could be changed to n to fit the formula in the text. — Quondum 15:04, 17 January 2012 (UTC)
On second thoughts, one that people may be able to relate to easily is x = π/3, because it satisfies all the above and can be easily analysed in terms of trigonometry and geometry/Pythagoras. — Quondum 18:09, 17 January 2012 (UTC)
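For the record, x = π/3 lands on a value that is easy to verify by hand, which is the point of the suggestion; the limit below is presumably what the animation depicts:

    e^{i\pi/3} = \cos\tfrac{\pi}{3} + i\sin\tfrac{\pi}{3} = \tfrac{1}{2} + \tfrac{\sqrt{3}}{2}\,i,
    \qquad \lim_{n\to\infty}\left(1 + \tfrac{i\pi/3}{n}\right)^{\!n} = e^{i\pi/3}.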
[3] Thoughts? :-) --Steve (talk) 15:24, 21 January 2012 (UTC)
I certainly think it works well – I find it quite intuitive and pretty as a geometric visualization of the limiting process. It gets a thumbs-up from me. — Quondum 05:57, 22 January 2012 (UTC)

I've put up a GA Discussion page

I found another fellow outside of wikipedia interested in discussing GA. I've been corresponding by email, and I'm setting up this page for all of us to discuss things. (I don't mean to limit the discussion to that page, I just wanted a place to start.) Rschwieb (talk) 19:57, 30 January 2012 (UTC)

Thanks for the invite. I've added it to my watchlist. BTW, what about keeping most of the chat on the associated talk page, to act as a journal, with the user page holding the distillation/reference material that we are talking about, much like in the main article space? That way one rarely has to search back through the discussion. — Quondum 07:12, 31 January 2012 (UTC)
Good advice which I will plan on following. Rschwieb (talk) 17:43, 31 January 2012 (UTC)
Have you ever talked to User:Chessfan? Based on my reading of his contribution history he's quite active in topics of geometric algebra. Even better, he appears to have a balanced view of the state of GA today, and he doesn't appear to be a crank. Unless you have any bad reports on him I was considering inviting him to the page. Rschwieb (talk) 16:46, 1 February 2012 (UTC)
BTW, I'll still be looking at our old talk page. I figured it would be good to keep that one as a "two person" page, and this new one could be the community page. It will no doubt be a little bit more chaotic. Rschwieb (talk) 16:57, 1 February 2012 (UTC)
No, I've never had any interaction with Chessfan, but have seen the signature occasionally. A glance suggests a lot of work on primarily three-dimensional representations and rotations, and a liking of GA. I would welcome the additional input. Any chaos can be managed perhaps by splitting pages into fora and/or topics if this should ever prove necessary. — Quondum 17:12, 1 February 2012 (UTC)

CGA representation of objects

You're right. This phrase jumped out at me too, when I looked at the page a couple of days ago for the first time in months. As currently written the phrase is not good, and can indeed be expected to trip people up.

So you're quite right. What directly corresponds to a sphere in the original space is a 4-vector blade (or tetravector, quadvector, ... ? -- 4-vector usually means something else), which can be composed by wedging together vectors g(x) corresponding to four points x on the sphere.

But, per your table on my talk page, as well as the "CGA representation", it's almost as important to hold in mind the "CGA dual representation" which are just their duals in the CGA space. So the original sphere also has a dual representation which is the vector orthogonal to the quadvector we've just been talking about. Wedging together two such vectors (using the usual wedge product) gives a bivector, which is the dual representation of the intersection of the two spheres.

It's a while since I looked at this, so I can't remember exactly what's in Richard Wareham's PhD. I do remember finding the treatment in Perwass's quite useful and concise.

One thing that could also be brought out more is the way that we can now use GA sandwich structures for transformations of the objects that makes so much of this possible -- that the way p∧q transforms is equivalent to wedging together the transformations of p and q, which is one thing that leads to these equations having such simple and consistent forms. Jheald (talk) 12:40, 4 February 2012 (UTC)

Geometric algebra

I sent you an email through the WP email system. Do let me know if it got through all right. Jheald (talk) 19:07, 4 February 2012 (UTC)

Yes, thanks. I'll reply directly to the email so that you have my direct email address. — Quondum 19:16, 4 February 2012 (UTC)
How does the wiki email work? And what, may I ask, prompted you to modify the contraction symbols at the GA page? Nothing wrong with it, just curious if the new look appears more commonly somewhere. Rschwieb (talk) 13:44, 17 February 2012 (UTC)
When you're on someone's user page, expand the "Toolbox" option on the left, and you should find "E-mail this user" if the user has wikimail enabled in their options. This gives a way of contact that eventually results in the direct email addresses being known to both cooperating parties, but without publishing these on Wikipedia. Your first email comes from the email address you have set under your own preferences.
I previously came across the symbols looking more like I have modified them (though I'd have to dig to locate the reference again); if people object of course they can be changed back. Dorst is the first (only?) one I found with the extremely vertically stretched version; I assumed Dorst et al had the same problem we have with our <math> TeX: there is no suitable symbol. A search of Unicode Supplementary Mathematical Operators yields U+2A3C ⨼ INTERIOR PRODUCT and U+2A3D ⨽ RIGHTHAND INTERIOR PRODUCT. I'm taking a bit of a leap, but I'm assuming these are actually where Dorst's symbols were derived from, and they could only find the TeX \rfloor and \lfloor symbols (which I dislike because they're liable to clash visually with other bracketing symbols, most particularly with the ones they're copies of). I also think this is semantically a good overload of these Unicode symbols. The symbol I've used is somewhere in between (imagine the shock to the system if I'd used the very flattened symbols shown here), and is also on the wiki edit bar for use in normal inline text formulae. — Quondum 14:19, 17 February 2012 (UTC)
Pertti Lounesto (2001) Clifford Algebras and Spinors uses differently shaped contraction symbols from Dorst's, though I'm pretty sure I first saw it elsewhere. A few others have adopted Dorst's symbols. Many use the centred dot for their preferred inner product extension, whether this be the left contraction, the "fat dot" inner product or Hestenes's quirky inner product. The field is clearly very young: no-one but Perwass seems to have noticed the massive simplification (to the Clifford group and general versor equations) that results from defining the "fundamental reflection operation on a vector" to be a ↦ nan instead of a ↦ −nan. And only Dorst et al and Macdonald seem to have bothered with the question of the relative merits of the various possible extensions to the inner product. — Quondum 06:45, 18 February 2012 (UTC)
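Spelled out for a unit vector n (n² = 1), the two conventions differ only in which mirror is meant:

    n\,a\,n = 2(a\cdot n)\,n - a \quad\text{(reflection through the line spanned by }n\text{)},
    \qquad
    -n\,a\,n = a - 2(a\cdot n)\,n \quad\text{(reflection in the hyperplane orthogonal to }n\text{)}.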

book recommendation

I'm finally getting around to skimming a book I bought a while ago: Emmy Noether's Wonderful Theorem by Dwight E. Neuenschwander. You might be interested, and it is very affordable. The intro says it's "written for physicists...[abstract algebra] is another language, one less accessible to a student more at home with energy, momentum and inertial reference frames." Boy does that sum up how I feel about the gap between me and physics! I am currently trying to google for documents on "how to think about physics" and "How to think like a physicist"... Rschwieb (talk) 00:45, 16 March 2012 (UTC)

Many thanks for this. I have ordered it and should have it in 3–4 weeks (delivery is kinda slow in this part of the world...). I have long been intrigued by Noether's theorem, but have generally bogged down with excessively technical approaches. We'll see how readable I find the book – I'm a layman when it comes to physics just as in maths, and it sounds like a fairly solid physics background is needed. Anyhow, maybe we'll have some interesting discussions. — Quondum 10:45, 16 March 2012 (UTC)
Yeah, don't read too fast though :) I'm going to be very slow about it. I wish I could duplicate my learning process of algebra with physics, but I really don't remember very much about learning algebra. My theory right now is that if you solve enough problems from a text, you will eventually duplicate the thinking process of the author. Rschwieb (talk) 13:54, 16 March 2012 (UTC)
You'll have a headstart on me, and I'm sure I'll bog down on the exercises, which I'll need to do. — Quondum 15:33, 16 March 2012 (UTC)
Started collecting some of my notes at User:Rschwieb/Physics_notes. Rschwieb (talk) 15:05, 16 March 2012 (UTC)
I've put it on my watchlist. Pose any questions you like there if you want comment. It's looking very good so far; to nitpick about trivia (e.g. referring to "surface" presupposes a 2D manifold) at this point would detract from where it's headed. — Quondum 15:33, 16 March 2012 (UTC)
Hrm, I can tell a lot of articles use the term in this narrow sense. What do they like to call n-dimensional surfaces? Rschwieb (talk) 16:15, 16 March 2012 (UTC)
I'm no authority on this, but people like Penrose seem to avoid ambiguous terms such as "surface". With an n-manifold, terms like hypersurface (having n−1 dimensions), hypervolume (implying n dimensions), or k-surface or n-volume are used. Other authors may take different approaches, but would probably define their use of a term if there was any ambiguity. — Quondum 16:32, 16 March 2012 (UTC)
As an afterthought, the way you've used it, the word "manifold" would be used, e.g. it becomes necessary to "flatten" the manifold at points. — Quondum 17:12, 16 March 2012 (UTC)
I got through the first half of the book, past the main theorem. I didn't pause to do any exercises yet, because I think I need to take the time to do some simpler ones. Got to learn to think in terms of energy and momentum I suppose! By the way, I'm looking for another thing I linked you to, and I can't seem to remember where I put it. Do you remember where the link to a big quadratic forms paper is? I think the author was Conrad? Rschwieb (talk) 01:57, 12 April 2012 (UTC)
Coincidentally, I've just received the book. I've read the first two chapters. The preface makes it feel daunting ("The reader I have in mind is..."), but the "primary and auxiliary questions" are exactly right to interest me, so I expect it to be rewarding. You placed the link [4] at User:Quondum/sandbox/Trial_ideas#Orthogonality. Another link in the same section may also be of interest, but probably not as good. — Quondum 05:41, 12 April 2012 (UTC)

Question : rotation formula

Why did you undo it? 46.49.3.65 (talk) 18:03, 29 March 2012 (UTC)

You are referring to [5]. If you read the description carefully, you should see. For example, given the point (x,y) = (0,1), rotating anticlockwise by θ = π/2 radians (or 90°) produces the point (−1,0). This corresponds to the new coordinates (x cos θ − y sin θ, x sin θ + y cos θ), not the formula you had changed it to (which corresponds to a clockwise rotation rather than the stated counterclockwise rotation). — Quondum 18:26, 29 March 2012 (UTC)
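Since the confusion came from transcribing the formula into code, here is a minimal sketch of the counterclockwise (active) rotation described above; the function and variable names are mine:

    #include <cmath>
    #include <cstdio>

    // Active counterclockwise rotation of the point (x, y) about the origin by theta radians.
    void rotate_ccw(double x, double y, double theta, double &xr, double &yr) {
        xr = x * std::cos(theta) - y * std::sin(theta);
        yr = x * std::sin(theta) + y * std::cos(theta);
    }

    int main() {
        const double pi = std::acos(-1.0);
        double xr, yr;
        rotate_ccw(0.0, 1.0, pi / 2, xr, yr);      // the example above: expect (-1, 0)
        std::printf("(%.3f, %.3f)\n", xr, yr);
        return 0;
    }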

Thanks for taking the time to explain it with an example. Now it makes sense. My bad, I was rotating the X axis counterclockwise while you are rotating the (x, y) point. And the formulas I wrote are certainly right in this case.

P.S. I spent like one hour debugging my code (I copied the formulas without reading the sentence properly). Thanks again one more time; people like you are awesome for keeping Wikipedia clean. 46.49.3.65 (talk) 19:22, 29 March 2012 (UTC)

You're welcome, and thanks for the compliment. I thought it might be something like that – what is being rotated makes all the difference. Happy editing. — Quondum 19:38, 29 March 2012 (UTC)

Thanks for cleaning up the article after me, you did fine! =) F = q(E+v×B) ⇄ ∑ici 23:12, 31 March 2012 (UTC)

My sandbox

While I appreciate your corrections, I should explain the sandbox page will generally be incomplete and there will be inevitable mistakes. A note will be added to the top of that page. Thanks F = q(E+v×B) ⇄ ∑ici 07:21, 2 April 2012 (UTC)

There is no implication that your sandbox should be either complete or correct – pretty much by definition. I find it difficult not to "correct" something (not always correctly!) when it seems it can trivially be made more rigorous or systematic. So when I make simple changes, it is more to do with my own "nervous tics" than with the quality of what I change. I'm happy to be guided on how much "interference" you will tolerate in your user area; what is not intended is any implied criticism of what you've done. — Quondum 07:41, 2 April 2012 (UTC)
That's very altruistic of you! If it's knowledgeable editors like yourself and Rschwieb (who I know), I have no problem with you making edits to the sandbox, since they will be improvements (provided edit summaries are filled in, but you two do that anyway so no problem). On the contrary, an IP randomly showed up and added an AfC template in this edit [6] (don't ask, I just reverted it...). =) F = q(E+v×B) ⇄ ∑ici 07:57, 2 April 2012 (UTC)
The IP seems to have had a warped sense of humour. I'll probably be tempted to make tweaks, especially in the section with the tensor formulation of relativity – largely because it is exactly the kind of reference I occasionally seek to get my signs right and the like. As to "knowledgeable", I hope it is understood that I have only a cursory undergrad exposure and the rest is informal reading and probably overconfidence in the way I express myself. I'd be most comfortable if you treated me as at undergrad level, even though some time has lapsed since then. — Quondum 10:49, 2 April 2012 (UTC)
Well, I am also a 2nd yr undergrad, but you (and Rschwieb) certainly have more experience than I have (which is important). Anyway to the point - please do feel free to edit my sandbox. Admittedly, some parts I added very quickly should be mostly correct from memory, but (as you say) some signs/symbols will be mixed up or cut and pasted, and I might have been too lazy to correct them at the time, intending to correct and amplify those bits later. Before all else, the structure of the page has been and is being developed: sections are still experimental, and I've concentrated on that rather than on getting every detail correct.
Btw, Rschwieb liked it also - so the talk page for it will be started and we can discuss things specifically about the page there, if it's easier. Best wishes for now, F = q(E+v×B) ⇄ ∑ici 19:16, 2 April 2012 (UTC)

Index notation at wikiproject maths

Thanks for finishing and fixing my incomplete edits and errors...

Just out of interest, what do you think of the presentation of index notation here on Wikipedia? It can and should be better. I have yet to see an article which includes the summary in that box; even if readers don't understand the notation, at least it's in one place so they can see what will appear in some equations, and it makes it easier for editors to link there...

F = q(E+v×B) ⇄ ∑ici 11:36, 10 April 2012 (UTC)

Never mind - you answered there so we'll keep it there. Thank you for the response. F = q(E+v×B) ⇄ ∑ici 11:42, 10 April 2012 (UTC)

Nice overview of quaternions in physics

You may have already seen this, and the content is probably very old news to you, but I thought it would be worth bringing it up. I've been reading the Joachim Lambek article mentioned in the quaternion article (If Hamilton had prevailed), and IMO it's quite a good presentation. A mathematician I admire (T.Y. Lam) admires Lambek, and now I can see why. The explanation is very good. I'm seeing now that before I learned very much Clifford algebra, I should have spent time boning up on everything about quaternions. Rschwieb (talk) 16:55, 17 April 2012 (UTC)

Unfortunately I do not have access to the article. I suspect you'll find me a little prejudiced on the matter (though I'm open to persuasion) – Cl3,0(ℝ) seems to be more natural than quaternions for a geometric purpose, because quaternions need a slightly artificial (and clumsy) mapping to fit the purpose. The historical failure of quaternions to prevail over Gibbs's algebra is something I do find unfortunate, though, and I would still like to see Cl3,0(ℝ) prevail as the geometric algebra of choice. From the historical perspective, the geometric application of quaternions is IMO a worthwhile object of study. — Quondum 18:15, 17 April 2012 (UTC)
Every time I try to absorb what everyone wrote at User:Rschwieb/GA Discussion/Comparison of methods: rotations, I still find it too abstract for me at this point. I wish I could see a plain quaternion example (I've suggested a "baby" problem at the page). Could you find time to supply the concrete representation of the problem in quaternions and its solution? I would really appreciate it. Rschwieb (talk) 12:57, 27 April 2012 (UTC)
Not surprising: we'd not provided much by way of a quaternion treatment. I'm not familiar with quaternions per se, and so have rather embedded the quaternions in the GA that I have a feel for, and have put my working where you posed the problem. Let me know if I've missed something either logical or pedagogical needed from your perspective. Correctness verification is left as an exercise. — Quondum 21:40, 28 April 2012 (UTC)
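For anyone reading along, one consistent identification of the quaternions with the even subalgebra of Cl3,0(ℝ) is the following; sign conventions differ between authors, so take this as an illustration rather than the choice used on that page:

    i \leftrightarrow -e_2e_3, \qquad j \leftrightarrow -e_3e_1, \qquad k \leftrightarrow -e_1e_2,

which reproduces i² = j² = k² = ijk = −1; a unit element q of the even subalgebra then rotates a vector v by the sandwich v ↦ q v q⁻¹.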
I skimmed it and it looks great. I will be sure to read it thoroughly (after I get done grading tests, and designing final exams *sigh*). I am thankful you wrote down some of your thought processes too, because I need those. Rschwieb (talk) 13:57, 29 April 2012 (UTC)

"Does not imply"

My thinking is that "does not imply" means "does not prove" in some situations (like a math proof) and "does not suggest" in other situations (like normal conversation). I don't think "does not suggest" is appropriate ... there is a gentle suggestion. The only known explanations of charge quantization (in my understanding) are monopoles and GUTs, and the latter require monopoles anyway. That doesn't mean there isn't an unknown explanation of charge quantization that does not imply monopoles ... but anyway charge quantization is at least a suggestion that there are monopoles. So anyway I think "does not prove" is better. :-) --Steve (talk) 12:01, 11 May 2012 (UTC)

In the logical context, "does not prove" means "is not a logically consistent argument of an implication", and here we are talking about deducibility. Hence, assuming a formal interpretation as a logician/mathematician/physicist would apply, "does not imply" is actually what we want to say.
In the informal context, "does not imply" has a problem as you say, but then "does not prove" suffers from similar problems – it means there is no convincing evidence, also a weak statement.
Who the typical reader is is also debatable. I tend to prefer precise language especially for math and physics articles, but I'm not the typical reader in this sense, so I'm happy to let others decide provided there is limited scope for actual confusion. — Quondum 13:31, 11 May 2012 (UTC)

Hi, this article is awesome, the first time I have seen such a complete and transparent summary for this concept.

The Teamwork Barnstar
This is to be shared between:


Well done and thanks to you all, and sorry this is so late (I would have awarded this earlier but don't get on WP much anymore). Best, Maschen (talk) 16:32, 20 May 2012 (UTC)

Many thanks Maschen – this was certainly an enjoyable collaboration. Kudos to F=q(E+v^B) for spearheading and driving the article. — Quondum 14:46, 27 May 2012 (UTC)

The ongoing fight at Bell's theorem

Hi Quondum, how have you been? I've been fairly busy lately, just dropping in once in a while. I managed to catch this kerfuffle over Joy Christian at the math and physics projects. It makes me a little queasy to see he is using geometric algebra in his proofs[7]. I sure hope he doesn't make geometric algebra even more unpopular... It might actually be a good exercise to see who did the algebra right and who didn't. Actually I just checked, and this guy does come off as a pro crackpot, so reading it might be a headache. Rschwieb (talk) 01:01, 1 June 2012 (UTC)

I've just had a nice hike along the Mozambican coast, so feeling very rested. The paper seems to make some early mistakes, so I did not go into detail. For example, the choice of the term "righthanded frame of basis bivectors" seems spurious. The choice of basis of the ℝ⁸ space of Cl3,0(ℝ) is arbitrary, subject only to it being a basis of the vector space. The choice of the term "handedness" in this context is in any event ill-advised, and has no relevance in a Clifford algebra without an externally imposed sense of handedness (unlike in vector calculus). My first impression is that some significance is being attributed to the arbitrary choice of basis, as though that confers some fundamental mathematical distinction to the space for which the basis is being chosen. I have no idea how this is extended to relate to Bell's theorem. This sort of nonsense is hopefully simple-minded enough that it will not impinge on GA.
In all, the fight should not be about the technicalities, but should focus on the repeated pattern of using Wikipedia for self-publication of fringe science by various people. The argument of prejudice against Joy Christian is immaterial, and does not trump WP policies. If references to Christian's work get edited in repeatedly, blocking should be invoked administratively. But I wouldn't bother arguing anything but the simple WP policies. — Quondum 10:25, 1 June 2012 (UTC)
Now that it has come up, I find myself wondering who, if anybody, is aligned with him in the GA community. In fact, I find myself curious about the entire spectrum of crackpottery in physics, now. It's kind of scary to learn that a listener could inadvertently take a pseudoscientific conversation seriously, if the speaker was sneaky enough about it.
I've been finding Penrose's book very helpful, although I have not been able to read it lately. Occupational demands are catching up with me, making my ambitious self-education plan difficult to execute. Rschwieb (talk) 13:09, 1 June 2012 (UTC)
I saw only one editor (Fred Diether with edits only on that talk page) in active support, though I only scanned through briefly. So: no real alignment. The proliferation of pseudoscience is widespread, and extends to maths, nutrition etc. I guess one could say it is scary how many people will follow the fringe directions. It is pretty pointless worrying about protecting people from it; it makes much more sense to make more trustworthy stuff accessible for those not predisposed to a fringe bias, and allow the crackpottery to find its own equilibrium. WP seems to be pretty good at providing such a pool of information.
I too have not been reading or editing much of late, though I cannot blame work for this. — Quondum 15:26, 1 June 2012 (UTC)

programming

I've forgotten what programming languages you have tried your hand at... could you remind me? Thanks! Rschwieb (talk) 13:06, 7 June 2012 (UTC)

C/C++ is my primary programming area. My marginal/rusty experience in a raft of others allows me to interpret, but not write in them. — Quondum 13:21, 7 June 2012 (UTC)
I really hope to brush up my distant memory of C++ soon, so I may start asking you questions :P I have not touched it in 10 years. Rschwieb (talk) 21:32, 7 June 2012 (UTC)
Feel free – I'll probably enjoy it. Email will naturally be the best medium. I might give a bit much detail about semantics in C++; if so just rein me in. — Quondum 05:29, 8 June 2012 (UTC)

Where tensor densities come from

Since you seem to have an aversion to tensor densities, I would like to explain how the idea arises naturally from tensors. Sometimes people have to work with tensors which are anti-symmetric in several indices. For example, suppose A^{abc} is anti-symmetric in three indices. Rather than speak about 64 = 4³ components which have only 4 independent values among them, it is easier to redefine A as a tensor density A_d where d is the unique dimension different from a, b, c (assuming that these dimensions are all distinct from each other, else the tensor is zero). Thus the new A_d is equal to (1/6)ε_{abcd}A^{abc}. Here I have defined ε_{abcd} ≡ δ^{0123}_{abcd} using the generalized Kronecker delta. Thus tensor densities can be regarded as tensors which have hidden anti-symmetric indices. The rules for transforming tensor densities arise from that concept, but have been generalized further. JRSpriggs (talk) 07:04, 28 June 2012 (UTC)
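To make the counting explicit (with the normalization ε_{abcd} ≡ δ^{0123}_{abcd} as above): total antisymmetry in three indices over four dimensions leaves only

    \binom{4}{3} = 4 \quad\text{independent components, e.g. } A^{012},\ A^{013},\ A^{023},\ A^{123},

and, for instance, A_3 = (1/6)ε_{abc3}A^{abc} = A^{012}, since the six nonvanishing terms run over the permutations of (0,1,2) and each contributes A^{012}.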

What you describe is the Hodge dual, except that the Hodge dual uses the tensor equivalent of the Levi-Civita symbol. It is not surprising that a tensor density results when a tensor density is artificially introduced (as in this case, by selecting components of the generalized Kronecker delta).
I could go on at length to demonstrate why I believe tensor densities can always be eliminated from any fully covariant theory.
Thanks for fixing my error on the generalized Kronecker delta, and your addition. As you may see from my sandbox, I think it is notable enough for a main article. — Quondum 11:25, 28 June 2012 (UTC)
The tensor density is not "artificially introduced". The numerical values of the four components of A_d are exactly the same as those which appear three times each among the components of A^{abc}, along with three each which are their negatives and forty zeros. So this is merely selecting a subset of components which represent the independent information in the original A.
If someone naively attempted to use Maxwell's equations in a curvilinear coordinate system or in the presence of significant amounts of gravity, then he would find that they still work in that context (except for the constitutive equations which relate D to E and H to B). How can this be explained in your ideology? It only makes sense when one realizes that some of the quantities are tensor densities.
So please see that I am not adding tensor densities to make life harder. Rather I am finding that they already exist and permitting their use to avoid the extra and unnecessary work of converting everything to ordinary tensors. Why multiply by the metric when you do not have to? Why use the Levi-Civita connection when partial derivatives will suffice? JRSpriggs (talk) 21:30, 29 June 2012 (UTC)
I accept that tensor densities may be useful for simplifying certain expressions, as can generally equivalently be done by a suitable choice of basis such that g=−1 without use of densities. However, inserting mathematical tricks like this into articles such as Riemannian curvature tensor remains undesirable in my opinion. I think Tensor density is the place to discuss such things; as a concept it is notable enough to merit an article.
I am finding this somewhat exhausting, and intend to leave you and the other editors to it. — Quondum 10:59, 30 June 2012 (UTC)
I am sorry that my attitude disturbs you. I wish we could cooperate on these topics.
g=-1 is a coordinate condition which may be inconsistent with some other desirable coordinate conditions. So it is not always practical to use it. JRSpriggs (talk) 18:32, 1 July 2012 (UTC)
Hi JRSpriggs and Quondum, I'd like to interject with a few comments. I'm not very clear on the use of tensor densities, and I don't know all the sections concerned. During an intensive self-study of tensors a while back, I did not encounter tensor densities at all. I trust the usefulness of tensor densities (in some places), but I would be worried if they showed up in too many places. Their scope in WP should reflect the scope of use in the literature (as much as possible). Tensor densities definitely deserve an article of their own: is there any clear reason they need to appear in any another article? Rschwieb (talk) 18:51, 1 July 2012 (UTC)

I feel that the articles on Covariant derivative, Lie derivative, and the Riemann curvature tensor would be incomplete, if they failed to mention the effects of those operations on tensor densities. The existing references to it are quite short, not obtrusive. JRSpriggs (talk) 20:01, 1 July 2012 (UTC)

Due to things unrelated to Wikipedia I should not be getting into debates now. I would normally have enjoyed a lively debate on such a topic, but not so now. So I apologize for the spill-over into my interaction of Wikipedia, and I beg leave to step out of the whole thing, possibly to return to the topic when I am more myself. — Quondum 21:14, 1 July 2012 (UTC)
OK, I hope your personal situation improves. JRSpriggs (talk) 23:03, 1 July 2012 (UTC)
Ditto what JRS said, Q. And thanks both for the crash course on tensor densities. Rschwieb (talk) 01:40, 2 July 2012 (UTC)

TB

Yeah.--Gilderien Chat|List of good deeds 18:42, 15 July 2012 (UTC)

Muha!

I used geometric algebra thinking to solve a Project Euler problem! [8] Not a particularly deep connection, but I would not have thought of my solution without our discussions. Rschwieb (talk) 18:57, 24 July 2012 (UTC)

Nice. How did you do it? Not so obvious to me, but after some thought I guess I'd find AB∧AO, BC∧BO, CA∧CO, and if they all had the same sign (orientation), the origin O is contained in the triangle, otherwise not. On a related thought, have you discussed the idea of learning GA instead of vector algebra with anyone? — Quondum 20:59, 24 July 2012 (UTC)
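Spelling that test out (a sketch of the orientation argument just described, with the 2D wedge product reduced to its single scalar coefficient; the sample triangles are illustrative, not the Project Euler data):

    #include <cstdio>

    // Scalar coefficient of the 2D wedge product u ^ v: positive when (u, v) is
    // counterclockwise oriented, negative when clockwise, zero when parallel.
    static double wedge(double ux, double uy, double vx, double vy) {
        return ux * vy - uy * vx;
    }

    // The origin O lies strictly inside triangle ABC exactly when AB^AO, BC^BO and
    // CA^CO all carry the same orientation (sign); a zero means O lies on an edge.
    static bool origin_in_triangle(double ax, double ay, double bx, double by,
                                   double cx, double cy) {
        double w1 = wedge(bx - ax, by - ay, -ax, -ay);   // AB ^ AO
        double w2 = wedge(cx - bx, cy - by, -bx, -by);   // BC ^ BO
        double w3 = wedge(ax - cx, ay - cy, -cx, -cy);   // CA ^ CO
        return (w1 > 0 && w2 > 0 && w3 > 0) || (w1 < 0 && w2 < 0 && w3 < 0);
    }

    int main() {
        std::printf("%d\n", origin_in_triangle(1, 0, 0, 1, -1, -1));   // contains O: prints 1
        std::printf("%d\n", origin_in_triangle(1, 1, 2, 1, 1, 2));     // does not: prints 0
        return 0;
    }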
Yes, exactly, I used an orientation argument as you described. The script took only a second to do all 1000 triangles!
I've pretty much given up on learning GA without the traditional approach. I'm really enjoying the Road to Reality still, and I think at this point I have to bite the bullet and learn both! That's ok too... Rschwieb (talk) 15:26, 25 July 2012 (UTC)
You make it sound like you find that learning the traditional approach facilitates learning GA. I like to leave out the cross product and the clumsy methods of rotation, but general arguments about vectors are pretty much the same. What is it that you need from this approach? I gather you are learning both simultaneously. — Quondum 17:57, 25 July 2012 (UTC)
I think it would be more accurate to say that both the traditional approach and GA are helping me learn differential geometry. Right now my picture is so incomplete that anything new helps me, so I may change my mind in the future as I learn more about both. I still have not learned much about rotation, and there is quite a road ahead of me. I'm trying to take the necessary baby steps with Penrose's introduction. After I absorb everything he mentions about geometry, the complex numbers and quaternions etc, then I think I'll know how to digest and organize the heavy stuff in my mind. I have a long way to go before I really understand geometry, Klein's algebra-through-geometry, Lie algebra and finally representation theory. Rschwieb (talk) 19:08, 25 July 2012 (UTC)
Well, good luck. Some of your goals would be interesting to me, and some perhaps over my head. And while I like Penrose's presentation of the topics, I retrospectively find him placing a little more emphasis on the traditional areas, such as complex numbers, at the expense of Clifford algebras. Though I can't blame him: had he done otherwise, he would have lost his readership. One thing I must say: I found the journey he takes one on exhilarating. I learned a huge amount very pleasurably from that book. — Quondum 19:59, 25 July 2012 (UTC)
Just read Penrose's spin on tensors and I can't say that it's as good as I had hoped... maybe more will be said later on! Rschwieb (talk) 16:40, 27 July 2012 (UTC)
I'd be interested in your perspective on this. I'm travelling at the moment and don't have his book to hand, so might not be able to give a sensible opinion. I don't remember having an issue with this, but I did have a pretty decent grounding in tensor fundamentals that might have helped me through the shortcomings. — Quondum 02:30, 28 July 2012 (UTC)

Do you think a connection should be drawn in geometric algebra to the interior product of differential forms? It looks like it is the (left or right) contraction. Rschwieb (talk) 12:34, 13 August 2012 (UTC)

I think you may have something here (there is a lot of similarity, and I think I've seen the connection made in discussions before). If the correspondence is exact (as it may be with the left contraction), it definitely must be made. It will take some study on my part to understand the interior product adequately before making this edit. There is a similar misnomer that should be highlighted better: GA's "outer product" is not an outer product as the term is used in other branches of mathematics (other than as an inheritance in the quotient algebra), but is the exterior product of differential forms and, more specifically, of the exterior algebra, as implied by the link. One thing is clear to me: the two terms are not used in the same way as elsewhere, and this should be made clearer in the article.
I am concerned by the statement in the article Interior product "...on the exterior algebra of differential forms on a smooth manifold": I suspect this is defined in any exterior algebra, and does not require manifolds, smoothness or differential forms.
There is a related similarity of an operation that makes me cautious: the Hodge dual versus the Clifford dual. These are identical except for a sign depending on the grade, and the same difference may occur here. — Quondum 13:52, 13 August 2012 (UTC)
PS: The interior product appears to be defined only with a 1-vector as the left argument, which is rather more restrictive than GA's left contraction, so this would have to be indicated. Another difference is that a distinction is made between a p-form and a p-vector, unlike in GA (i.e. the interior product is defined even in the absence of a metric tensor, whereas GA implicitly assumes one). Thus, in the exterior algebra, ⨼ : V × Λ^k(V) → Λ^(k−1)(V), whereas in GA, ⨼ : Λ^j(V) × Λ^k(V) → Λ^(k−j)(V)
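For what it's worth (assuming the usual conventions in the respective articles, and writing gr B for the grade of B), the identification is suggested by both operations obeying the same graded Leibniz rule when the left argument is a vector; the main structural difference is the one noted above, namely that ι_v needs only the dual pairing while ⌟ uses the metric:
<math display="block">\iota_v(\alpha \wedge \beta) = (\iota_v \alpha)\wedge\beta + (-1)^{\deg\alpha}\,\alpha\wedge(\iota_v \beta),
\qquad
a\,\lrcorner\,(B \wedge C) = (a\,\lrcorner\,B)\wedge C + (-1)^{\operatorname{gr}(B)}\,B\wedge(a\,\lrcorner\,C).</math>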

Empty comments

Thanks for letting me know. I'll try to think of something. -- Magioladitis (talk) 13:01, 14 August 2012 (UTC)

Dyadic/tensor/outer products...

Hi. Referring to WP Maths and talk:outer product, do you think merging dyadic product + outer product + tensor product is a good idea? Most people seem to favour or not oppose blending dyadic product + outer product (including yourself), but tensor product is a very heavy and abstract article; it seems only you propose to blend tensor product into anything...

Sorry if I sounded personally direct; I'm just a little curious about your thoughts, and would like to bring this to your attention. I don't intend to argue with you. (It may also be better to reply at WP Maths rather than here.) Best. Maschen (talk) 21:25, 21 August 2012 (UTC)

Sure thing, I've added my 2¢ there. — Quondum 08:50, 22 August 2012 (UTC)

Dashes

In your edit to International System of Units you made some errors in changing hyphens to other marks. I don't know if you used any automation to assist you, but if so, your automation is in need of repair.

In one case, you changed a hyphen to an n-dash within a URL.

In another case you changed metre-kilogram-second to metre–kilogram–second when WP:HYPHEN seems to indicate the original version was correct.

In another case you changed electron-volts to electron–volts. The National Institute of Standards and Technology indicates on page 31 of Guide for the Use of the International System of Units (SI) that when a compound unit is formed by multiplication of two other units, and the unit names are spelled out, the names of the units should be separated by a space (preferred) or a hyphen. Jc3s5h (talk) 23:26, 31 August 2012 (UTC)

Your critique is much appreciated. My changes were manually made. I do not entirely follow you though, and would like to understand your position on contentious points.
  • With respect to the URL, the modification was in the display text and not in the URL itself, and thus does not fall under the obvious rule of not modifying a URL. I have not checked the correctness of my modification as a piece of text, though, and will leave that unchallenged.
  • I do not follow you when you say that metre–kilogram–second should use hyphens rather than n-dashes according to WP:HYPHEN. Here we are referring to a compound of three items of equal weight (without mutual modification as in light-year). Examples of similar use include (quoted from the guideline) "In some cases, like diode–transistor logic, the independent status of the linked elements requires an en dash instead of a hyphen", and (taken from article names) centimetre–gram–second system of units and metre–tonne–second system of units. Here we are, after all, referring to the mks system, exactly parallel with my last two examples.
  • I agree with you that I made a mistake with electron-volts. Here we have electron acting as a modifier.
  • On your last point, it seems that the intent of NIST's focus on the hyphen–n-dash distinction is purely the avoidance of confusion and not a stylistic guideline, and thus I would not take it as the final word on separating multiplied units when the full names are used. I find no WP guideline for this case, but I'd like to see it covered. Note, however, that the NIST guide's own usage of hyphens appears to be inconsistent with WP:HYPHEN (e.g. "centimeter-gram-second (CGS) units" on p. 10), and that the WP guideline takes precedence.
Quondum 06:36, 1 September 2012 (UTC)
After looking again at the MOS, I think you might be right about meter–kilogram–second. As for electron-volt, I don't think of it as electron modifying volt. Rather, I think of it as being a name for a unit of energy, the full description of which is "the absolute value of the charge on an electron multiplied by the volt" so I am inclined to apply the unit multiplication rule.
The status of the Guide for the Use of the International System of Units (SI) (Special Publication 811), at least in the US, is a bit hard to decide. There is also The International System of Units (SI) (Special Publication 330), which is an English translation of the BIPM's official publication; it has the same advice on page 40. The BIPM's English translation agrees, on page 131.
The confusing part is that US law says the US has adopted SI, as interpreted by the Secretary of Commerce, as its preferred system of measurement for commerce and trade. First, Special Publications 330 and 811 were written; the first more in the style of a regulation, the second more in the style of a guide. Then the Secretary of Commerce designated them as the official interpretation of SI for the United States. But due to the history of being written first and adopted as official later, it is hard to tell what parts are style advice and what parts are enforceable law. Jc3s5h (talk) 12:15, 1 September 2012 (UTC)
In any case, I don't believe the case of a compound unit composed of other units multiplied together can be compared to a noun being used to modify another noun. For example, in physics, a meter-newton is the same as a newton-meter, although the latter is more customary. But to use the example from WP:HYPHEN, "gas-phase reaction" is correct and "phase-gas reaction" is not. Jc3s5h (talk) 12:15, 1 September 2012 (UTC)
We seem to be essentially in agreement on the usage. I am still uncertain about the semantics of electron-volt considered as a product of units: your interpretation would treat "electron" as the name of a unit of charge, yet it is referred to as the non-SI unit "electronvolt" in SP330 (and, it seems, "electronvolt" or "electron volt" in WP), it is not written e V or e⋅V, "volt-electron" seems simply wrong, and eV is essentially defined as a unit of energy, even though it can be defined via 1 eV = e ⋅ 1 V, where e (italicized) is the elementary charge and is technically a constant rather than a unit. This was what underlay my interpretation of "electron" modifying "volt", but I do not insist on it.
I am persuaded that hyphenating (or spacing) a product of units is appropriate, even in the WP context, given the existing authoritative documentation. I agree that the product is not one noun modifying another but two nouns of equal weight, with order not being significant (though this does not argue for hyphens). Since the WP guidelines refer back to the SI article for this, this rule should be added to that article (though I see that it does occur in WP:Manual_of_Style/Dates_and_numbers#Unit_names). — Quondum 13:52, 1 September 2012 (UTC)

some GA articles

Hi Quondum. I noticed you have made some recent contributions related to geometric algebra in physics. You may be interested in editing gauge theory gravity or Riemann–Silberstein vector if you are familiar with the topics. Teply (talk) 06:05, 3 September 2012 (UTC)

Judging by your comments on the RS vector page, perhaps I should also call your attention to my recent edits of Mathematical descriptions of the electromagnetic field#Geometric algebra (GA) formulation. Indeed the B-B ref tends to view this as complex Gibbs instead of APS (or STA), but in principle it doesn't really matter because it's all the same equations either way. Please by all means edit the articles in such a way as to be clear to you and everyone else! Teply (talk) 09:41, 3 September 2012 (UTC)
I'll give it a go, though it may take a while. The reason I commented rather than edited was that I am not very familiar with the dominant interpretation and have my own biases, but I guess a little creative interpretation (i.e. reading between the lines) will not go far wrong. The changes to Mathematical descriptions... are a definite improvement, though the section intro still implies that the APS formulation is the GA formulation; with your changes this'll be easy to fix. I'll still have to get a firmer handle on the RS "vector", though. — Quondum 11:28, 3 September 2012 (UTC)
The term "RS vector" was invented by B-B himself relatively recently. The alternative was to try to name the article "EM field" (likely to be confusing at best or possibly provoke an edit war) or worse "Faraday field" (yuck). RS is more historically honest and appears to be accepted by a handful of other authors. The unfortunate part, as you point out, is the use of "vector" instead of something more general. I'm not a fan of the "paravector" terminology when all-encompassing "multivector" will suffice regardless of APS/STA representation. If you like, you can think of RS "vector" as shorthand for RS "multivector". Incidentally, the translation of Silberstein's original uses "bivector" though I can't comment on how accurate the translation is. Teply (talk) 16:57, 3 September 2012 (UTC)
If it's ok I will update this table, after the excellent work done by Teply and Quondum. Maschen (talk) 17:05, 3 September 2012 (UTC)
Cool. I see you've already done it. — Quondum 18:03, 3 September 2012 (UTC)

Hi. Thank you for your recent edits. Wikipedia appreciates your help. We noticed though that when you edited Gradient, you added a link pointing to the disambiguation page Vector (check to confirm | fix with Dab solver). Such links are almost always unintended, since a disambiguation page is merely a list of "Did you mean..." article titles. Read the FAQ • Join us at the DPL WikiProject.

It's OK to remove this message. Also, to stop receiving these messages, follow these opt-out instructions. Thanks, DPL bot (talk) 12:30, 5 September 2012 (UTC)

I DABed vector → Euclidean vector, if it's ok, and am now in the process of cleaning that article up also. Maschen (talk) 12:37, 5 September 2012 (UTC)
Cool, thanks for taking care of it. I must get into the habit of checking the links that I create/change. This DPL bot is a boon. — Quondum 14:48, 5 September 2012 (UTC)

progress through Road to Reality

I have to admit I've given up reading the last half of chapter 21! Until now I could follow closely or even skip stuff, but here I have to admit defeat for the time being. I'm jumping to 22: "Quantum algebra, geometry and spin", but I think I might have to give up on 23 also. Chapter 24: "Dirac's electron and antiparticles" sounds like something I'll have to get through to understand you better, so I'm going to attempt that too. Chapter 25: "The standard model of particle physics" I should definitely read, and I should at least have a go at chapter 26: "Quantum field theory" so I can get a taste of it. The contents of chapters 27-34 don't really jump out at me as anything I'm interested in yet, but I might have to browse a few sections (maybe I will do 33: "Twistor theory" after all). Any "can't afford to miss" topics you would recommend among those I plan to omit? Rschwieb (talk) 16:50, 18 September 2012 (UTC)

The tone does change somewhat towards the end, and it becomes heavy going if you don't already have a good background in the topics covered. I battled with §20, as I've never got the "feel" of Lagrangians and Hamiltonians. §21 has a point of interest: the underlying nature of the Heisenberg uncertainty principle; it is worthwhile, if you understand Fourier transforms, to understand why the position and momentum of a particle cannot be simultaneously known. Most of the rest I wouldn't lose sleep over; I slogged through most of it, absorbing little. I didn't get much from his description of twistors; he seems to give an overview rather than any mathematical detail that you'd appreciate. The core of it seems to be a kind of mapping between Minkowski space and twistor space, a bit like a kind of transform (light rays in normal space to points in twistor space). I didn't quite figure out what the benefits were, nor what its topology was. I was thinking that §22.8 relates to interesting objects: spinors, which I think are best first studied in the form of rotations. However, §11 is probably better for that. (Have you properly understood Fig 11.4? And the topological distinction between a 2π and a 4π rotation?) §24 (along with §11.5) are perhaps of particular significance to me because they represent the start of a journey of discovery for me: the road to geometric algebra. It's also nice to get a historical perspective on how physicists think. One can think of obscure formalisms such as the Pauli matrix representation, the Dirac matrices and the algebra of physical space, then realize they are opaque isomorphic representations of the spacetime algebra. The Dirac equation and Maxwell's equations written in terms of GA make one realize the power of a well-chosen algebra. If nothing else, I hope you come away with a solid sense of GA, rotations/spinors, and the principle of superposition in quantum mechanics. — Quondum 20:04, 18 September 2012 (UTC)
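To spell out the Fourier-transform point for the simplest case (a Gaussian wavepacket; the transform convention below is chosen only for this sketch):
<math display="block">\psi(x) \propto e^{-x^2/4\sigma^2}
\;\Longrightarrow\;
\hat\psi(k) = \int \psi(x)\,e^{-ikx}\,dx \propto e^{-\sigma^2 k^2},
\qquad
\Delta x = \sigma,\quad \Delta k = \frac{1}{2\sigma},\quad \Delta x\,\Delta p = \frac{\hbar}{2},</math>
so squeezing the position-space width necessarily broadens the momentum-space width, which is the uncertainty relation in its minimal form.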

Hamiltonians and Lagrangians are new to me too, and I think they are one of the big pieces absent from my toolkit for looking at physics. I did find diagrams 11.3 and 11.4 eye opening and interesting, although I can't safely say I totally absorbed them yet. Rschwieb (talk) 12:34, 19 September 2012 (UTC)

Those diagrams are what I mentally hang onto when trying to understand rotations – plus the mapping of an object's orientation in 3-space onto a point of the 3-sphere, with the "north pole" (if I may call it that) corresponding to a reference position (no rotation), and the "south pole" to any rotation through 2π. Antipodal points are identified and represent the same object orientation, but whereas a path around the 3-sphere can be shrunk to a point, a path to the antipodal point cannot. To rephrase, the 3-sphere (with a suitable multiplication operation) is isomorphic to the set of rotors (opposite points not being identified, since there are two topologically distinct instances of each orientation, or two rotors per SO(3) group element). Penrose's fig 11.4(c) corresponds to something different. (I hope I'm not talking rubbish here; it seems to make sense to me.) — Quondum 14:20, 19 September 2012 (UTC)
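A quick numerical way to see the "two rotors per SO(3) element" statement, using unit quaternions as a stand-in for rotors of 3-space (the helper functions and the sample vector are ad hoc choices for this sketch):
<syntaxhighlight lang="python">
import math

def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def rotate(q, v):
    """Rotate the 3-vector v by the unit quaternion q via q v q*."""
    q_conj = (q[0], -q[1], -q[2], -q[3])
    _, x, y, z = qmul(qmul(q, (0.0,) + tuple(v)), q_conj)
    return (x, y, z)

theta = 2.0
q = (math.cos(theta / 2), 0.0, 0.0, math.sin(theta / 2))  # rotation by theta about the z axis
neg_q = tuple(-c for c in q)                               # the antipodal rotor

v = (1.0, 2.0, 3.0)
print(rotate(q, v))      # rotated vector
print(rotate(neg_q, v))  # identical: q and -q give the same SO(3) rotation
</syntaxhighlight>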

If GA is geometric, where are the diagrams!?

I'm sorry if this becomes exhausting, although the previous collaboration on exterior algebra and alternating forms was very enjoyable, so I held back raising this for a while to give you a break.

Thanks to you and Rschwieb, I became increasingly interested in GA; clueless a year ago, now can follow bits of it. (Ever had the feeling of a branch of mathematics "grow" on you the first time you come to learn it?) ^_^

If the operations can be interpreted geometrically, would it help to spruce up the article by drawing things in low dimensions, in particular the geometric (!) interpretation of a multivector and the geometric product? If you (or Rschwieb) have any ideas I'd be more than happy to draw them for the article, exactly as you two describe. As you mentioned previously, this would be better than attacking non-obvious/abstract tensors/spinors all the time. Thanks again...

P.S. Unfortunately I haven't a single book on GA for now... so nothing to base interpretations from except the WP article itself. Maschen (talk) 00:13, 19 September 2012 (UTC)

GA, as the name suggests, has a lot of scope for geometric interpretation, with a lot of benefit. And yes, the article would be improved immensely with carefully chosen diagrams, but these may be challenging. Looking through Geometric Algebra for Physicists (Doran & Lasenby), I see comparatively few diagrams; this might suggest that the exercise is not very easy. The Road to Reality (Penrose) also does little in this regard. There is certainly scope for illustrating a vector, bivector, wedge product, representation of a subspace using a pseudoscalar, orientations in low dimensions, linear transformations (including reflections and rotations, as applied to blades) etc.
This article actually lacks a proper section devoted to geometric interpretation (currently bundled into Examples and applications; I see this as a major section in its own right). In this sense, the article still needs a lot of hard work to get it properly structured and focused.
It would be nice if you could define the pictures in 3-D, i.e. if you can model the objects in 3-D and only as a final step render them, including perspective. Having easy control of various parameters (orientation, perspective, position etc.) would be useful to allow later tweaks as one sees the results, though in most cases the diagram complexity should be on par with what you've already done. Perspective rendering takes advantage of important subliminal cues to our visual systems for interpretation of the third dimension.
There is also a major potential pitfall, which I'd like to guard against: turning the article into a textbook. — Quondum 07:22, 19 September 2012 (UTC)
It's true that the book you cited (found as a pdf here) has few diagrams, and it's not surprising if the focus is on abstract formulae, since it is for "geometric emphasis to operations".
Could you be more specific about what other entities would be good to draw (aside from n-vectors and the exterior product)? How would one interpret a pseudoscalar/pseudovector, for instance?
Be warned I'm not quite as good in 3d as 2d (or 3d projected into 2d), but will still try.
Also, adding just a couple images couldn't possibly make this article textbooky? If anything hopefully more understandable.
Maybe I'll research GA more myself to interpret the operations first, then come back to it... Maschen (talk) 07:58, 19 September 2012 (UTC)
No, some images will not make it textbooky, and will enhance it. I was only suggesting that, given that the article needs work on its content (in conjunction with diagrams) to bring it up to standard, my temptation would be to put too much explanation into the article.
The 2d/3d aspects are largely related to what software you use to generate the images. CAD-like programs would allow you to define objects (lines, planar figures, 3d figures), and allow manipulations such as scaling/rotation of these or groups of these. The perspective (angle of view, closeness) would then be separately controlled. The package automatically deals with obscuring of parts of objects, transparency and so on. If one defines everything as figures on a 2d plane pre-distorted for perspective (also possible in the same package), almost all of this is lost.
An example might be the use of a bivector as the representation of a 2d subspace (in Representation of subspaces). The plane could be shown as a 2d sheet in 3d space spanned by two vectors that form a basis. The bivector would be the wedge product of the basis vectors, but the subspace would be shown with a ragged edge (to suggest extending to infinity); the magnitude and sign of the bivector are unimportant for this, but the sign can be used if the orientation is significant.
Another example may be how a rotation/reflection (actually any linear transform) acts on a vector and a bivector (in a paired before-and-after diagram). It has the same effect on a bivector as if the vectors were individually operated on before wedging them to make the bivector, and this ties in directly with the use of a bivector to represent a 2d subspace of vectors and how the rotation affects the vectors within it.
Projection and rejection of a vector onto a subspace defined by a bivector is an obvious candidate for a diagram. A plane in 3d, with a vector with the classic parallelogram showing the split. Axes may contribute to the 3d effect. And in GA, since the metric (quadratic form) is intrinsically part of the definition, there is no problem about implying orthogonality (unlike with dual vector spaces); angles are always well-defined.
A downside is that a geometric interpretation of the algebraically well-behaved geometric product itself is probably beyond diagramming. One has to stick to the operations with direct interpretations: the wedge product, projection, etc. Even the dot product and, generally, the contractive products are more easily understood in terms of a projection than as geometric operations in their own right. — Quondum 08:55, 19 September 2012 (UTC)
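In case it helps with planning the projection/rejection figure, a small numerical sketch of the split (it uses a Gram–Schmidt basis of the plane rather than the GA contraction formula, purely for brevity; the vectors are arbitrary examples):
<syntaxhighlight lang="python">
import numpy as np

# Two vectors spanning the plane of the bivector a ∧ b, and a vector c to split.
a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 2.0, 0.0])
c = np.array([1.0, 1.0, 2.0])

# Orthonormal basis of span{a, b} via Gram-Schmidt.
e1 = a / np.linalg.norm(a)
u = b - np.dot(b, e1) * e1
e2 = u / np.linalg.norm(u)

c_par = np.dot(c, e1) * e1 + np.dot(c, e2) * e2  # projection onto the plane
c_perp = c - c_par                                # rejection (normal component)

print(c_par, c_perp)                         # [1. 1. 0.] and [0. 0. 2.]
print(np.dot(c_perp, a), np.dot(c_perp, b))  # both ~0: the rejection is orthogonal to the plane
</syntaxhighlight>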
Thanks, that helps, I will absorb and get round to it in time.
I'm already aware of 2d/3d software rendering in different programs, and coincidentally have TurboCAD and can draw lines/planes/Euclidean solids though haven’t touched it for a few years so will need to re-familiarize. Also recently found POV-ray which may help, though need to familiarize with this also. Unfortunately the renders will be non-svg (which isn't a problem but is not preferable)...
Maschen (talk) 09:37, 19 September 2012 (UTC)
Sounds like a lot of work, which strictly isn't necessary for the kind of illustrations needed in GA and similar articles. However, if you are particularly interested in producing lots of varied quality drawings into the future, the labour might be rewarding to you. POV-ray seems to produce phenomenal pictures, though it seems a lot of work and skill would be needed to produce such pictures. It may also be worthwhile seeing what other editors use (e.g. Sam Derbyshire seems to generate excellent math-related pictures), though they may have access to non-free software that cannot be obtained too easily. Don't forget the alternative: keep doing exactly what you're doing; you produce good pics matching the need. I was really just wishfully thinking that it'd be nice to be able to do tweaks with little effort; if this means a complete revamp of the tools, I don't call this "little effort". — Quondum 11:34, 19 September 2012 (UTC)
Don't worry about any labour, although for now I'm a little busy and can't edit WP much for a while.
I agree and envy other editors like Sam Derbyshire who can create fantastic images and animations... mine are pale in comparison...
Anyway I'll get to it eventually. Thanks for your concern. Best, Maschen (talk) 13:10, 19 September 2012 (UTC)

Hi - just to let you know, I managed to get possession of Clifford Algebra to Geometric Calculus by G. Sobczyk and D. Hestenes themselves (loaned from the uni library), and plan to read it very carefully before making any attempts. Maschen (talk) 18:15, 24 September 2012 (UTC)

That'll be great. Use some care: some of their work seems to have been cleaned up by others; you only need to look at Hestenes's "inner product" to see why. Hopefully they'll be able to add to geometric interpretation, though. And they're great to use for reference. — Quondum 18:33, 24 September 2012 (UTC)
...... Hi, apologies for the very, very long time span. I'm sorry, but most of Hestenes + Sobczyk was largely abstract and only a few geometric things here and there were described in words... The only geometric things I could get were things we already know, like the parallelepiped interpretation for n-vectors, and geometric product commutativity (commutative => parallel plane elements, anticommutative => perpendicular plane elements?), though the way the geometric product was introduced seemed circular to me... Other books related to GA were about Clifford algebra and other abstract stuff which I couldn't follow...
Some of the things you have mentioned above, like projection/rotation/reflection/any linear transform for vectors and bivectors, and plane elements in 3-space etc., are easy enough, and I have drawn them in the usual SVG elsewhere, though I would like to tweak them first. I should be able to get them on WP by the end of tomorrow, if not the end of the week; just busy (like everyone)... Best, Maschen (talk) 16:56, 8 November 2012 (UTC)

Here are two which contain essentially the main ideas you proposed above. Hope they're on the right lines (excuse mathematical pun).

In 3d space, a bivector a ∧ b defines a 2d plane subspace (light blue, extends infinitely in indicated directions). Any vector c in the 3-space can be projected onto and rejected normal to the plane, shown respectively by c∥ and c⊥.
Linear transforms (first a rotation R about the indicated axis, then a scaling S; the composite transform is rotation followed by scaling, RS) of, top: a vector, bottom: a 3-vector; the transform of the entire 3-element corresponds to the transformation of each constituent vector from (u, v, w) to (u′, v′, w′). A similar interpretation is true for n-vectors of higher grade.

Maschen (talk) 00:23, 10 November 2012 (UTC)

The first diagram showing projection looks good. The second has merit, but it bothers me that the linear transformation being applied is not at all clear. For this, it would be necessary to use one that will be readily recognized. For this purpose, I would suggest a transformation that incorporates some form of scaling: either a simple rotation (say, around the axis of vision) with a contraction (by a factor of, say, 2), or a unidirectional compression (e.g. compression by a factor of 2 on the vertical axis). The diagram gives the effect on a vector and a trivector. Do you think it would be any clearer adding a bivector? — Quondum 06:46, 10 November 2012 (UTC)
Thanks for the feedback; tweaked. The point of using a trivector is that we have already used a bivector for the plane, and the bivector transformation can be seen as part of the trivector (the reader can consider any pair of vectors and a face of the parallelepiped); furthermore, the caption can compensate.
If it's ok I combined the transformations you suggested into the diagram to illustrate the composite transformation feature, though if you think the diagram is now too big, I'll reduce it back to the "before/after" shots for only the composite transformations. Maschen (talk) 09:09, 10 November 2012 (UTC)
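On the caption's point that transforming the 3-element amounts to transforming each constituent vector: numerically, the oriented volume u ∧ v ∧ w simply picks up a factor of the determinant of the map (the matrices below are an arbitrary rotation-then-scaling example, not the ones in the figure):
<syntaxhighlight lang="python">
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])  # rotation about the z axis
S = np.diag([1.0, 1.0, 0.5])                           # scaling (vertical compression)
M = S @ R                                              # composite: rotate, then scale

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
w = np.array([1.0, 1.0, 1.0])

vol_before = np.linalg.det(np.column_stack([u, v, w]))             # u ∧ v ∧ w as an oriented volume
vol_after = np.linalg.det(np.column_stack([M @ u, M @ v, M @ w]))  # transform each vector, then wedge

print(vol_after, np.linalg.det(M) * vol_before)  # equal: the trivector scales by det(M)
</syntaxhighlight>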

To prevent further clutter I placed the diagrams in a new section Talk:Geometric algebra#Diagrams. Best, Maschen (talk) 17:47, 10 November 2012 (UTC)

Representation theory of the Lorentz group

I've moved this discussion to the User talk:YohanN7/Representation theory of the Lorentz group#Discussion from Quondum's talk page. — Quondum 06:36, 4 October 2012 (UTC)

complex sesquilinear form

Since we were going off topic, I decided to reply here, instead. Yeah, any complex sesquilinear form could certainly be considered as a real bilinear form, but I was most interested in keeping the full complex view.

I would be tempted to call the result a "twisted" algebra structure, in analogy with twisted polynomial and group rings. I'm not really sure what that entails. I guess the most reasonable extension of the axiom would be: .

I would be really surprised if this wasn't considered in detail somewhere... Rschwieb (talk) 16:22, 26 October 2012 (UTC)

You're probably right. My own exposure to "twisted" as applied to mathematical structures is indistinguishable from nil, but the term makes some kind of intuitive sense. My own approach relates more to my observation that real geometric algebras (or algebras isomorphic to them) replacing well-entrenched complex algebras in physics suggests that the real algebras actually make more "sense", and that the complex algebras were simply used first because they have some of the structure of the Clifford and Lie algebras. For example, the Pauli matrices happen to be equivalent to the real algebra of physical space, but the latter actually makes more sense from a direct interpretation of vectors. Ditto the Dirac matrices versus the spacetime algebra. You're probably right about these complex "twisted" algebras having been investigated, but I tend to doubt their mathematical significance relative to some other approaches. But I'm speaking out of turn, since I don't really know. — Quondum 19:46, 26 October 2012 (UTC)
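To make the Pauli example concrete (standard relations, recalled here only to support the isomorphism claim): the Pauli matrices satisfy the Clifford relation for Euclidean 3-space, and the matrix imaginary unit appears as the unit pseudoscalar of the real algebra, which is why the complex matrix formalism and the algebra of physical space carry the same information:
<math display="block">\sigma_i \sigma_j + \sigma_j \sigma_i = 2\delta_{ij} I, \qquad \sigma_1\sigma_2\sigma_3 = iI.</math>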

Revert on Gradient page

Hi, on September 5th you reverted my change to the Gradient page stating that directional derivatives are defined for unit vectors, with a comment stating "Removing restriction to *unit* vectors where inappropriate: the directional derivative is defined for *any* vector.", with "directional derivative" linking to the Wiki page for the term. As you'll see in the page you linked, directional derivatives are indeed defined for unit vectors, not in general.

(Also, please forgive me if this is not the correct way to contact you about this; I have little experience with intra-Wikipedia discussions :))

Flebron (talk) 04:31, 28 October 2012 (UTC)

This is a perfectly good way to contact me (you have no way of knowing whether I'm still watching the article's talk page), though a discussion relating to a specific article would normally be best conducted there, since it could then attract the attention of others with an interest in the article. In this instance, though, it is (IMO) the article Directional derivative that needs debate, so we could take it there if appropriate.
My immediate reaction is that the page Directional derivative is somewhat unclear and needs clarification. There seem to be two distinct definitions in general use, in one of which the vector is normalized (but not required to be a unit vector, so that the derivative is nevertheless defined for non-unit vectors), where
<math display="block">\nabla_{\mathbf{v}} f(\mathbf{x}) = \lim_{h \to 0} \frac{f(\mathbf{x} + h\,\mathbf{v}/|\mathbf{v}|) - f(\mathbf{x})}{h},</math>
and the other where
<math display="block">\nabla_{\mathbf{v}} f(\mathbf{x}) = \lim_{h \to 0} \frac{f(\mathbf{x} + h\,\mathbf{v}) - f(\mathbf{x})}{h}.</math>
Unfortunately, the two definitions are only equivalent for unit vectors. IMO, the definition without normalization is undoubtedly far better (more useful and elegant mathematically, and it is defined in important contexts where normalization is impossible). And pertinently, the second definition is the most appropriate for the gradient (∇), since the first fails in many instances where the gradient is nevertheless defined.
In my own (perhaps limited) reading, the restriction to a unit vector has been externally imposed if desired (i.e. it has not been part of the definition of the directional derivative), and I have often seen it used when the vector is not a unit vector. Here is an example quote (from Doran & Lasenby. Geometric Algebra for Physicists. p. 168.) of a definition that does not assume normalization:
The operator properties of ∇ with any vector a results in the directional derivative in the a direction. That is,
<math display="block">a \cdot \nabla\, F(x) = \lim_{\epsilon \to 0} \frac{F(x + \epsilon a) - F(x)}{\epsilon},</math>
where we assume that the limit exists and is well defined.
This seems to be typical in physics texts, and in my mind this definition is undoubtedly more convenient and useful, even if the word "directional" suggests the other definition. The unfortunate fact remains, however, that there are two different definitions in general use. This should be highlighted in Directional derivative. — Quondum 06:28, 28 October 2012 (UTC)
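A numerical illustration of the inequivalence (the function f and the vector v below are arbitrary choices for this sketch): the normalized definition ignores the magnitude of v, while the unnormalized one scales linearly with it.
<syntaxhighlight lang="python">
import math

def dir_deriv_normalized(f, x, v, h=1e-6):
    """Approximates lim (f(x + h v/|v|) - f(x)) / h with a small finite h."""
    norm = math.sqrt(sum(c * c for c in v))
    step = [xi + h * vi / norm for xi, vi in zip(x, v)]
    return (f(step) - f(x)) / h

def dir_deriv_unnormalized(f, x, v, h=1e-6):
    """Approximates lim (f(x + h v) - f(x)) / h with a small finite h."""
    step = [xi + h * vi for xi, vi in zip(x, v)]
    return (f(step) - f(x)) / h

f = lambda p: p[0] ** 2 + 3.0 * p[1]  # f(x, y) = x^2 + 3y, so grad f = (2x, 3)
x = [1.0, 2.0]
v = [2.0, 0.0]                        # a non-unit vector

print(dir_deriv_normalized(f, x, v))    # ~2: component of grad f along the unit vector v/|v|
print(dir_deriv_unnormalized(f, x, v))  # ~4: grad f . v, larger by the factor |v| = 2
</syntaxhighlight>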