Talk:Dot product/Archive 1
This is an archive of past discussions about Dot product. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Circular proof?
In the Law of Cosines article, it is proved using vector dot products. And over here in dot products, a dot b = a b cos theta is proved using the Law of Cosines. Shouldn't somebody fix this? --Orborde 07:47, 9 September 2005 (UTC)
- Yes, not here though. Proving Law of Cosines using dot products is silly. All of vector calculus depends on basic trigonometry, not the other way around.
I don't know what you're talking about: the Law of Cosines is just the definition of the dot product!!!! (someone didn't understand their high school lessons)
- Of course, they're basically the same thing. The issue is the circularity of the proofs. Pfalstad 19:01, 4 February 2006 (UTC)
I suppose that there is a logical error in the definition of the dot product. The definition part of this article says that the dot product is the sum of the products of the corresponding components of the two vectors. The problem is that in the section "Geometric interpretation", cos(theta) is calculated from the algebraic definition, so it does not give a proof of the algebraic definition. Here is an explanation of why the dot product of two vectors equals the sum of the products of the corresponding components and also equals the product of their lengths with the cosine of the angle between them.
Let A-> and B-> be two vectors originating at the origin (0,0) (the notation -> means vector), let alpha be the angle from the positive direction of the x-axis to vector A, and let beta be the angle from the positive direction of the x-axis to vector B. Then A-> : (Ax, Ay) with Ax = |A->|*cos(alpha) and Ay = |A->|*sin(alpha), and B-> : (Bx, By) with Bx = |B->|*cos(beta) and By = |B->|*sin(beta), and theta = alpha - beta is the angle between vector A and vector B. Now
Ax*Bx + Ay*By = |A->|*cos(alpha)*|B->|*cos(beta) + |A->|*sin(alpha)*|B->|*sin(beta) = |A->|*|B->|*( cos(alpha)*cos(beta) + sin(alpha)*sin(beta) ) = |A->|*|B->|*cos(alpha-beta) = |A->|*|B->|*cos(theta).
I suppose that this is the right proof of the definition of the dot product. If I am right, please correct the definition part of this article, thanks!
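For illustration, here is a small Python sketch (my own, with made-up lengths and angles) that checks the identity above numerically:

```python
import math

# Example lengths and angles (made-up values, not from the comment above)
len_a, alpha = 3.0, math.radians(70)
len_b, beta = 2.0, math.radians(25)

# Components as in the derivation: Ax = |A| cos(alpha), Ay = |A| sin(alpha), etc.
ax, ay = len_a * math.cos(alpha), len_a * math.sin(alpha)
bx, by = len_b * math.cos(beta), len_b * math.sin(beta)

componentwise = ax * bx + ay * by                   # sum of products of components
geometric = len_a * len_b * math.cos(alpha - beta)  # |A| |B| cos(theta)

print(componentwise, geometric)  # both ≈ 4.2426
```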
I saw a better proof in the article than the one I have shown above, one that extends to any two vectors in N dimensions: it is in the section "Proof of the geometric interpretation". I think it would be better to place that section next to the Geometric interpretation, before the other sections. With that, the logical error I talked about above is fixed.
Merge?
This needs some major merging and clarification of terminology with the article inner product space. For example, this article never defines what a dot product is, esp. over an arbitrary vector space. Then it defines the dot product in terms of the "angle" between two vectors, which is never defined. This is a circular definition. The typical way is to define the cosine between two vectors in terms of the inner product. Then all of what follows here is just the special case when V is R^n with the usual dot product. Personally, I think there could be two articles, the inner product space one giving the abstract algebraic formulation, and a more concrete geometric one focusing just on R^n or C^n, aimed more at people who might just use it for calculus or physics classes. Revolver 00:00, 1 Apr 2004 (UTC)
- I agree. Markus Schmaus 15:37, 27 July 2005 (UTC)
A quote from the main article between the horizontal rules:...
Properties
The definition has the following consequences:
- the dot product is commutative, i.e. a·b = b·a.
- two non-zero vectors a and b are perpendicular if and only if a·b = 0
- the dot product is bilinear, i.e. a·(rb + c) = r (a·b) + (a·c)
From these it follows directly that the dot product of two vectors a = [a1 a2 a3] and b = [b1 b2 b3] given in coordinates can be computed particularly easily:
- a·b = a1b1 + a2b2 + a3b3
I'm afraid that I don't see how "it follows directly".
It can be easily shown with 2D vectors that this is true using "cos(A-B)=cosA.cosB+sinA.sinB", but I'm not sure how to extend this to 3D (or more D) vectors.
Is it worth expanding the article to show this/these derivation(s)? -- SGBailey 21:56, 2003 Nov 16 (UTC)
Given a basis of perpendicular unit vectors, it does follow at once.
Charles Matthews 09:53, 19 Nov 2003 (UTC)
- I accept that it works. I just don't see how the three consequences "directly" give the ax.bx + ay.by + az.bz construct. I used to know this stuff 30 years ago - sigh. -- SGBailey 2003-11-18
In order to generalize the statement concerning linearity, I would suggest swapping the term "bilinear" for "sesquilinear", which also covers the complex case. -- SEggl 2009-06-09
I dislike the first couple of paragraphs now. They dive into too much depth without giving a general overview first. However, I'm not able to edit them without losing much of the content of those paragraphs. The equations for the dot product and its description want to come before the stuff about vector spaces and fields. -- SGB 2004-03-24
I would like to point out that most of this article is about *real* vector spaces; when the field is not the reals, things are different. For example, when the characteristic is nonzero, say Z/5Z, there are vectors in the 2-dimensional vector space (Z/5Z)² such that <x,x>=0, for example x=(1,2), without having <x,y>=0 for all y. And there is no way to define the notion of norm or of angles between lines (there are 6 of them). All the things about angles should be left to the real case only. -- Christian Mercat 2004-04-01
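A small sketch (mine, not part of the comment above) illustrating the Z/5Z example: the vector x = (1, 2) pairs to zero with itself, yet not with every other vector:

```python
# Bilinear form <u, v> = u1*v1 + u2*v2 computed over Z/5Z
def form_mod5(u, v):
    return (u[0] * v[0] + u[1] * v[1]) % 5

x = (1, 2)
y = (1, 0)
print(form_mod5(x, x))  # 0 -> <x, x> = 1 + 4 = 5 = 0 (mod 5)
print(form_mod5(x, y))  # 1 -> yet <x, y> is not 0 for all y
```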
I agree with some of the concerns mentioned above. First, as far as I know, the term dot product is only used in real vector spaces. Over an arbitrary field, the commonly used terminology is bilinear form. Even over the complex numbers, dot product sounds a little too conversational; in this case I would instead say inner product, and refer to it as a Hermitian form.
Apart from terminology, the article currently features considerable confusion between the real and arbitrary field cases. Dmharvey Talk 21:58, 3 Jun 2005 (UTC)
shouldn't the last property above be?
Fork?
I just wonder, why was the subpage dot product/Temp created? Making forks is not considered a good idea on Wikipedia. Do you plan to eventually overwrite the dot product article? Maybe you should have started by listing your objections on the talk page. Either way, to me it looks like stealth editing, and in this context I am not sure that is necessary. Oleg Alexandrov 15:39, 27 July 2005 (UTC)
- I originally created the page as a proposal for a re-write in response to the discussion at Talk:Inner product space#Separate_inner_product_page.3F but I haven't had much time to do the re-write. Anyway, it looks like someone just re-wrote everything, so dot product/Temp is pointless now. Don't worry, no ulterior motives. --Dan Granahan 18:00, 27 July 2005 (UTC)
Target group
I believe the target group of this article is not mathematicians, but mainly engineers and high school students.
I used length instead of norm, as it is a more down to earth concept and I knew about (spatial) vectors and their length long before I heard about norms. Similarly, I first gave the definition for three-dimensional vectors, as I believe many people looking for this article will be only interested in this case and have never seen a Σ sum before. Markus Schmaus 11:57, 29 July 2005 (UTC)
- I very much agree with Markus. People, please try to keep things simple. This is a general purpose encyclopedia, and the most frequent complaint is that mathematicians write things here only for themselves, thus shutting everybody out. Thanks. Oleg Alexandrov 15:19, 29 July 2005 (UTC)
I'm curious what level of "dumbed-down" simplicity we are trying to achieve. For instance, switching from norm to length in the Geometric interpretation section seems unnecessary given we have terms such as Euclidean space and projection in the same paragraph. In my opinion, keeping phrasing such as
- ||a|| and ||b|| denote the norm (or length) of a and b
was reasonable and in no further need of generalization or simplification. After all, I've seen the notation ||a|| for the length of a vector since 8th grade or whenever kids are first taught about vectors. I just want to avoid excessive simplifying. --Dan 20:35, 10 August 2005 (UTC)
- I don't see any harm in excessive simplification. Dot product is a pretty simple, basic concept, and if someone is looking it up, it's likely they are very mathematically unsophisticated. They might not even be in 8th grade yet; maybe they saw dot product on a computer graphics site somewhere. The ||a|| notation is still there, it's just further down under "Generalization". And the link to inner product space contains even more rigor and fancy notation for those who are so inclined. Pfalstad 20:56, 10 August 2005 (UTC)
The person writing the proof of the geometric interpretation didn't use ||a|| but a for the length of a vector, and I think it is the more basic notation. If you convince me that high school students are more familiar with the other, I have no problem with it. It will be much harder to convince me that using norm instead of length in the geometry section is a good idea. Norm generalizes length, and the norm of a Euclidean vector is its length. Markus Schmaus 22:49, 10 August 2005 (UTC)
- Fair enough. You make a good argument that norm shouldn't be used in that section. As for the notation for length, I guess that's really just trivial as long as it's consistent. Personally, I'm used to the ||a|| notation instead of a, but I suppose they're both just as acceptable. The only thing that might be worth changing is the picture, which uses a different notation than the text. --Dan 03:23, 11 August 2005 (UTC)
Distributive law
If you know that the distributive law holds for dot product, "it follows directly."
I'll prove why.
a·b = (x1 i + y1 j + z1 k) · (x2 i + y2 j + z2 k) =
(x1*x2)i·i + (x1*y2)i·j + (x1*z2)i·k +
(y1*x2)j·i + (y1*y2)j·j + (y1*z2)j·k +
(z1*x2)k·i + (z1*y2)k·j + (z1*z2)k·k
Since i · i = cos 0 = 1, i · j = cos 90 = 0, i · k = cos 90 = 0,
j · j = cos 0 = 1, j · k = cos 90 = 0,
k · k = cos 0 = 1,
We can simplify the above equation as
(x1*x2)(1) + (x1*y2)(0) + (x1*z2)(0) +
(y1*x2)(0) + (y1*y2)(1) + (y1*z2)(0) +
(z1*x2)(0) + (z1*y2)(0) + (z1*z2)(1)
= (x1*x2) + (y1*y2) + (z1*z2)
but my question is... how do we prove that the distributive law for dot product holds??? —The preceding unsigned comment was added by 129.97.235.130 (talk • contribs) .
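For what it's worth, here is a quick numerical sanity check (my own sketch, with made-up components) that the nine-term expansion above collapses to the component formula:

```python
import itertools

a = (2.0, -1.0, 3.0)  # example components x1, y1, z1 (made up)
b = (0.5, 4.0, -2.0)  # example components x2, y2, z2

# i·i = j·j = k·k = 1, and the dot product of two distinct unit vectors is 0
basis_dot = {(m, n): 1.0 if m == n else 0.0
             for m, n in itertools.product(range(3), repeat=2)}

nine_term_expansion = sum(a[m] * b[n] * basis_dot[m, n]
                          for m, n in itertools.product(range(3), repeat=2))
component_formula = sum(am * bm for am, bm in zip(a, b))

print(nine_term_expansion, component_formula)  # both -9.0
```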
Distributive law
I'll show you babe! Grrr...
We are trying to prove this: a·(b + c) = a·b + a·c.
Now let:
a = (a1, a2, a3),
b = (b1, b2, b3) and c = (c1, c2, c3),
and so b + c = (b1 + c1, b2 + c2, b3 + c3).
now to the juicy bit:
a·(b + c) = a1(b1 + c1) + a2(b2 + c2) + a3(b3 + c3) = a1b1 + a1c1 + a2b2 + a2c2 + a3b3 + a3c3 = (a1b1 + a2b2 + a3b3) + (a1c1 + a2c2 + a3c3) = a·b + a·c.
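A numerical spot-check of the distributivity claim (my own sketch, arbitrary example vectors):

```python
a = (1.0, -2.0, 4.0)  # arbitrary example vectors
b = (3.0, 0.5, -1.0)
c = (-2.0, 2.0, 5.0)

dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
b_plus_c = tuple(bi + ci for bi, ci in zip(b, c))

print(dot(a, b_plus_c))       # 12.0
print(dot(a, b) + dot(a, c))  # 12.0
```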
QUESTION: Didn't you just use the distributive law while proving distributive law above?
- Yes, we used the distributive law for real numbers to prove the distributive law for dot products. Nothing wrong with that. Pfalstad 18:01, 11 August 2006 (UTC)
Transpose matrix?
a^t b, and yet vector "a" was not modified; only the transpose of the vector "b" was implemented instead of the vector "a", or am I missing something. --anon
- a^T b means transpose of a times b. Oleg Alexandrov (talk) 23:33, 22 June 2006 (UTC)
so why was the transpose of "b" taken instead of "a"? thanx —The preceding unsigned comment was added by 209.176.23.253 (talk • contribs) .
- The transpose matrix thing shouldn't appear in the intro. I dunno where else it can go, but it's confusing to matrix newbs and needs to be moved. Fresheneesz 19:53, 10 August 2006 (UTC)
My question is whether the correct expression should be (a,b) = (b^t)a, because otherwise the multiplication will yield a matrix instead of a number. a^t being a nx1 vector times b being a 1xn vector will yield an nxn matrix. Can you verify it?130.54.130.229 09:16, 21 December 2006 (UTC)
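A quick shape check may help here (my own sketch with NumPy): for column vectors, a^T is 1×n and b is n×1, so a^T b is 1×1 (a scalar), while a b^T is the n×n outer product:

```python
import numpy as np

a = np.array([[1.0], [3.0], [-5.0]])  # n×1 column vectors (example values)
b = np.array([[4.0], [-2.0], [-1.0]])

print((a.T @ b).shape)  # (1, 1): a^T b is effectively a scalar, here 3.0
print((a @ b.T).shape)  # (3, 3): a b^T is the n×n outer product
```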
Complex dot product and more general definition
The definition of a dot product for complex vectors is more general, and I'm thinking it wouldn't be a bad idea to simply make that the main definition at the top. Most people learning about the dot product wouldn't be confused if a note about the complex conjugate were given: "for real numbers, the complex conjugate of a number is equal to that number, and so that operation may be ignored in the case of real numbers". Comments? Fresheneesz 20:17, 11 August 2006 (UTC)
- It would be a very bad idea. The whole article is about the real dot product, and all the applications in physics and geometric intuition are about that. The generalization to complex numbers would be sterile and confusing. The fully general dot product is described in the inner product article. Oleg Alexandrov (talk) 20:39, 11 August 2006 (UTC)
- Agree with Oleg. Pfalstad 21:32, 11 August 2006 (UTC)
I would like to add that the definition as it is right now includes complex vectors, but a lot of the rest of the article is about real vectors only, without explicitly stating so. This is misleading, as for example the dot product of complex vectors is not commutative. —Preceding unsigned comment added by 130.237.43.57 (talk) 12:02, 14 April 2008 (UTC)
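To illustrate the point about complex vectors (a sketch of mine using NumPy's vdot, which conjugates its first argument): swapping the arguments gives the complex conjugate, not the same value:

```python
import numpy as np

a = np.array([1 + 2j, 3 - 1j])  # example complex vectors
b = np.array([2 - 1j, 0 + 4j])

# np.vdot conjugates its first argument: sum(conj(a_i) * b_i)
print(np.vdot(a, b))  # (-4+7j)
print(np.vdot(b, a))  # (-4-7j): the complex conjugate, not the same value
```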
- Perhaps we could add a note that most of the article refers to real-valued vectors (since this is what most enquirers will wish to learn about), but with a link to vector spaces for a more general mathematical treatment of vectors? dbfirs 12:21, 14 April 2008 (UTC)
I was looking at the article on four vectors. When I tried to link to scalar product, I found to my annoyance that it was redirected to this article which never discusses the scalar product in the context of tensor analysis. I was unsure how to include that in the main article. So I am going to put some additions here so that it could be included after deliberation of its relevance here on the talk page.
The concept of a dot product can be extended to some less obvious coordinate systems, such as curvilinear coordinates. We need to define a metric g_ij. This metric contains all the relevant information corresponding to the coordinate system and the geometry of space. Then we can extend the definition of a scalar product using tensor notation as a·b = g_ij a^i b^j, where the repeated indices i and j are summed over.
An important aspect to note here is that this generalised scalar product can be extended to incorporate higher dimensions like time in relativity. This representation also explicitly shows that a scalar quantity has the property of being invariant under any transformation.
A lot more could be said about this but given the discussion above about the level of technicality in the article I considered it prudent to leave out more specifics. I am also adding a hidden note about this proposal in the main article. Other users please feel free to add to the above.--Fatka (talk) 04:29, 27 October 2008 (UTC)
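As an illustration of the proposal above (a sketch of mine, using the Minkowski metric with signature (+, -, -, -) as an assumed example), the generalised scalar product g_ij a^i b^j can be computed with a single Einstein-summation call:

```python
import numpy as np

# Assumed example metric: Minkowski metric with signature (+, -, -, -)
g = np.diag([1.0, -1.0, -1.0, -1.0])

a = np.array([2.0, 1.0, 0.0, 3.0])   # components a^i of an example four-vector
b = np.array([1.0, 4.0, -2.0, 0.5])  # components b^j

scalar = np.einsum('ij,i,j->', g, a, b)  # g_ij a^i b^j, summed over i and j
print(scalar)  # 2*1 - 1*4 - 0*(-2) - 3*0.5 = -3.5
```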
Move proof to separate page
The proof is a bit lengthy and unweildy in this article. Also, I doubt most readers care about it. I think it would be a lot more concise and easy to read if a link to the proof was given in the section on the geometric interpretation. Comments? Fresheneesz 01:07, 13 August 2006 (UTC)
- I wouldn't move it.. People don't have to read that far if they aren't interested. It's not a very long article. Pfalstad 02:19, 13 August 2006 (UTC)
Error in image
The label below the image shows |A|cos(theta) . It is missing the |B| factor —The preceding unsigned comment was added by 200.117.138.199 (talk • contribs) 09:52, 8 September 2006.
- That label is for the scalar projection of A on B, not A • B. I've added a caption that hopefully clarifies this. --Mrwojo 07:46, 5 March 2007 (UTC)
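To make the distinction concrete (my own sketch, example vectors not taken from the figure): the scalar projection of A on B is |A| cos(theta) = (A · B)/|B|, which in general differs from A · B itself:

```python
import numpy as np

A = np.array([2.0, 3.0, 1.0])  # example vectors, not taken from the figure
B = np.array([4.0, 0.0, 0.0])

dot = A @ B                                  # A · B = 8.0
scalar_projection = dot / np.linalg.norm(B)  # |A| cos(theta) = 2.0
print(dot, scalar_projection)
```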
error in included gif?
In the /*Geometric Interpretation*/ section, the equation included as a .gif uses sin(theta) where it should use cos(theta) - I don't know how to edit/fix that. —The preceding unsigned comment was added by 24.61.195.42 (talk) 09:20, 22 February 2007 (UTC).
- This was apparently vandalism that was later fixed. --Mrwojo 20:50, 5 March 2007 (UTC)
History
Since the dot product is a relatively simple mathematical tool, I think the article could be improved further by adding a short History section. Who described the dot product first? Who has made contributions to it? And so on. Sorry for the bad English - I'm Danish --Bilgrau 18:06, 11 March 2007 (UTC)
- Good idea. For prospective researchers, the term inner product apparently comes from Grassmann's Ausdehnungslehre (1844). [1] Dot product and dot notation appear to be from Vector Analysis (Edwin Bidwell Wilson 1902). [2][3] --Mrwojo 20:31, 11 March 2007 (UTC)
Um, "4", right?
It seems to me that the answer to the example is '4', not '3' (1)(4) + (3)(-2) + (-5)(-1) =
4 + -6 + 6 = 4
—The preceding unsigned comment was added by 63.226.32.16 (talk) 18:04, 3 May 2007 (UTC).
- No, since (-5)(-1)=5 Kuteni 19:00, 15 August 2007 (UTC)
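For anyone checking the arithmetic, a one-liner (mine) with the example vectors from this thread:

```python
a = [1, 3, -5]
b = [4, -2, -1]
print(sum(x * y for x, y in zip(a, b)))  # 4 - 6 + 5 = 3
```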
The dot product is not a binary operation
Unless I'm very mistaken, the dot product can't be a binary operation on vectors, because it is not closed. I've removed the words from the opening sentence. MrHumperdink 21:14, 6 July 2007 (UTC)
- I don't believe closedness is a necessary characteristic of binary operations. For instance, Wolfram MathWorld distinguishes between a Binary Operation (any function that applies to two quantities) and a "Binary Operation on a set A" (a function f : AxA -> A), which they also call a Binary Operator.
- The wikipedia article you link to also supports this interpretation, suggesting that whether this is a requirement or not is not universally agreed upon.
- I'd support restoring the sentence. JulesH 16:13, 27 October 2007 (UTC)
- Rereading, this of course means it can be described as a binary operation, but not a binary operation on vectors. The phrasing "a binary operation from vectors to scalars" might be appropriate. JulesH 16:16, 27 October 2007 (UTC)
Proof
Shouldn't the proof include a proof of distributivity of the dot product, seeing as it relies on this property, and it isn't proven anywhere else in the article? JulesH 16:13, 27 October 2007 (UTC)
Assessment comment
Should include three-dimensional case, less formally written. Overall, too dry. Arcfrk 11:08, 26 May 2007 (UTC)
Last edited at 11:08, 26 May 2007 (UTC). Substituted at 21:14, 4 May 2016 (UTC)