
Wikipedia:Reference desk/Archives/Mathematics/2010 September 14

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


September 14

Vector as an integrand

I'd like to know: is it possible for the integrand to be vector-valued? For example ∬n dσ, where n is a unit normal vector. Re444 (talk) 12:44, 14 September 2010 (UTC)[reply]

Absolutely it's possible to integrate vectors. I'd only be guessing at what your notation means, though. Why is it a double integral? What are you integrating over? n is normal to what? And what's dσ?
But if I had to guess, I'd guess that you're integrating over a surface, and dσ is an element of area. Sure, you can integrate that. I probably wouldn't write it as a double integral, though, because that suggests that you're doing two integrations, whereas here you're doing only one (albeit in two dimensions). --Trovatore (talk) 09:54, 14 September 2010 (UTC)[reply]
Yes, in my example the integration is over a surface in 3D. Until now I have only run into vector integral equations whose integrands reduce to scalars, as in the divergence and Stokes theorems. In the case of vector integrands, where do they usually appear and how should we deal with them? By decomposing them into their components? And one more question: I've read in a fluid mechanics text that, according to Gauss' theorem, my example integral equals zero for any closed surface. But the integrand in Gauss' theorem is a scalar, not a vector. Re444 (talk) 12:44, 14 September 2010 (UTC)[reply]
Certainly you can decompose to components: if $\mathbf{n} = n_1\mathbf{e}_1 + n_2\mathbf{e}_2 + n_3\mathbf{e}_3$, then $\iint \mathbf{n}\,d\sigma = \mathbf{e}_1\iint n_1\,d\sigma + \mathbf{e}_2\iint n_2\,d\sigma + \mathbf{e}_3\iint n_3\,d\sigma$ by linearity. I don't immediately see how Gauss' theorem applies to the case of integrating a closed surface's normal vector, but it does seem that it should be, by a projection argument (per dimension). --Tardis (talk) 14:46, 14 September 2010 (UTC)[reply]
Something like: $\oint_S \mathbf{e}_i\cdot\mathbf{n}\,d\sigma = \int_V \nabla\cdot\mathbf{e}_i\,dV = 0$ for each fixed direction $\mathbf{e}_i$. But can it be done without decomposing the vectors? -- 1.46.187.246 (talk) 16:31, 14 September 2010 (UTC)[reply]
I think the obvious generalization of the definition of the Riemann integral should work in this context, without ever having to give your vectors a coordinate system. There might be some picky details to work out as to what is meant by "refinement", but I expect it should all work.
If you want to do the (mostly) more general Lebesgue integrals, I expect that to get even pickier, but again, still work. --Trovatore (talk) 18:10, 14 September 2010 (UTC)[reply]
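(A minimal numerical sketch of my own, not part of the thread, assuming the closed surface is the unit sphere: each Cartesian component of ∬n dσ is computed as an ordinary scalar surface integral on a midpoint grid, and all three come out approximately zero, consistent with the closed-surface claim discussed above. Grid sizes and names are arbitrary.)

```python
# Illustrative numerical check, assuming the closed surface is the unit sphere:
# integrate the outward unit normal n componentwise, as in the replies above.
# On the unit sphere, n = (sin t cos p, sin t sin p, cos t) and dsigma = sin t dt dp.
import numpy as np

n_t, n_p = 400, 800                                  # grid resolution (arbitrary choice)
t = (np.arange(n_t) + 0.5) * np.pi / n_t             # polar-angle midpoints in [0, pi]
p = (np.arange(n_p) + 0.5) * 2 * np.pi / n_p         # azimuth midpoints in [0, 2*pi]
T, P = np.meshgrid(t, p, indexing="ij")
dA = np.sin(T) * (np.pi / n_t) * (2 * np.pi / n_p)   # area of each grid cell

normal = np.stack([np.sin(T) * np.cos(P),            # n_x
                   np.sin(T) * np.sin(P),            # n_y
                   np.cos(T)])                       # n_z

integral = (normal * dA).sum(axis=(1, 2))            # one scalar surface integral per component
print(integral)                                      # all three components are ~0
```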
Beautiful! thanks everybody. Re444 (talk) 10:19, 15 September 2010 (UTC)[reply]

Vector Problem

Hi. If I have two known vectors, n1 and n2, and I know scalars d1 and d2, can I find the vector a given the following equations: dot( a, n1 ) = d1 and dot( a, n2 ) = d2?

I feel that this should be very simple - perhaps by taking the dot product of each equation?

Incidentally, it is to find the perpendicular distance from the origin to a line intersecting two planes, with normals n1 and n2 and perpendicular distances to origin d1 and d2. Just stuck on this point - any help appreciated!

Thanks. 128.16.7.149 (talk) 16:33, 14 September 2010 (UTC)[reply]

I assume that a is a vector that you are attempting to find. Then $\mathbf{a}\cdot\mathbf{n}_1 = a_1 n_{1,1} + a_2 n_{1,2} + a_3 n_{1,3} + \cdots$. There isn't a lot you can do to reduce it: however many components the vectors have, that is how many unknowns you will have.
I think you will find more luck looking at eigenvectors. It appears that you want d1 to be an eigenvalue of n1 that will produce the eigenvector a. But, I could be reading this question completely wrong. -- kainaw 17:46, 14 September 2010 (UTC)[reply]
What's "an eigenvalue of n1" mean, since is a vector (not a matrix)? --Tardis (talk) 17:55, 14 September 2010 (UTC)[reply]
That is my confusion about the question. Related to your answer below, how can arbitrary-length vectors produce the results the questioner is looking for? Either the questioner is misusing the term "vector", or a is not a vector (which makes the "dot" implication of "dot product" confusing), or there is some other issue that I am not reading correctly. He states that he is looking for a perpendicular vector (distance), which made me think of eigenvectors. -- kainaw 18:09, 14 September 2010 (UTC)[reply]
You can always find the dot product of two vectors if they are of the same dimension. Without Emil's addition below, the OP has two equations, and a number of unknowns equal to the dimension of the vectors. Thus, if the vectors are in 2 dimensions, there will probably be a unique solution. In higher dimensions you will have more variables and thus many solutions. -- Meni Rosenfeld (talk) 18:31, 14 September 2010 (UTC)[reply]
By the rowwise interpretation of matrix multiplication, your two equations are equivalent to $\begin{pmatrix}\mathbf{n}_1^{\mathsf T}\\\mathbf{n}_2^{\mathsf T}\end{pmatrix}\mathbf{a}=\begin{pmatrix}d_1\\d_2\end{pmatrix}$. Whether that system has a unique solution or not depends first on the dimension of your vectors: if they're 3-vectors, you can get only a parameterized family of solutions, because the matrix is 2×3 (thus not square, thus not invertible). If they're 2-vectors, then for almost all $\mathbf{n}_1,\mathbf{n}_2$ there is a unique solution $\mathbf{a}$. (If they're 1-vectors (scalars), then for almost all $d_1,d_2$ there is no solution, but that's probably not the case you had in mind.) --Tardis (talk) 17:55, 14 September 2010 (UTC)[reply]
If I understand it correctly, the OP is talking about 3-dimensional space, and his a is also orthogonal to n1 × n2, which gives the third linear equation needed to make it regular: $(\mathbf{n}_1\times\mathbf{n}_2)\cdot\mathbf{a}=0$. The system can then be solved by any method mentioned in the system of linear equations article.—Emil J. 18:26, 14 September 2010 (UTC)[reply]
Another way to state the extra condition is that a is a linear combination of n_1 and n_2. If I write $\mathbf{a}=u_1\mathbf{n}_1+u_2\mathbf{n}_2$, where u_1 and u_2 are scalars, then (using the fact that n_1 and n_2 are normals, which I take to mean they have norm 1) the equation above simplifies to the 2-dimensional system $\begin{pmatrix}1&\mathbf{n}_1\cdot\mathbf{n}_2\\\mathbf{n}_1\cdot\mathbf{n}_2&1\end{pmatrix}\begin{pmatrix}u_1\\u_2\end{pmatrix}=\begin{pmatrix}d_1\\d_2\end{pmatrix}$.—Emil J. 18:52, 14 September 2010 (UTC)[reply]
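(A short sketch of my own, not from the thread, assuming 3-dimensional vectors: stack n1, n2 and n1 × n2 as the rows of a 3×3 matrix with right-hand side (d1, d2, 0), as EmilJ suggests, and solve the linear system. The helper name and the example planes are illustrative only.)

```python
# Illustrative sketch of EmilJ's method, assuming 3-dimensional vectors; the
# helper name and the example planes below are my own, not from the thread.
import numpy as np

def point_on_line(n1, n2, d1, d2):
    """Solve a.n1 = d1, a.n2 = d2 and a.(n1 x n2) = 0 for the vector a.

    Fails (singular matrix) if n1 and n2 are parallel, i.e. the planes
    do not intersect in a single line.
    """
    n1, n2 = np.asarray(n1, dtype=float), np.asarray(n2, dtype=float)
    A = np.vstack([n1, n2, np.cross(n1, n2)])        # rows n1, n2, n1 x n2
    return np.linalg.solve(A, np.array([d1, d2, 0.0]))

# Example: planes z = 1 and y = 2 (unit normals); the point of their
# intersection line nearest the origin is (0, 2, 1), at distance sqrt(5).
a = point_on_line([0, 0, 1], [0, 1, 0], 1.0, 2.0)
print(a, np.linalg.norm(a))
```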

Thanks all for your helpful contributions. There are still clearly things I need to learn. The deceptively simple trick here was decomposing the vectors into their components, then simply solving through inverting the matrix (thanks Emil). I think I was distracted by trying to solve the problem using vector operations (dot, cross etc) only. Thanks again. 128.16.7.149 (talk) 09:35, 15 September 2010 (UTC)[reply]

Orthogonality of Fourier basis for L2[R]

The Integral transform article says that any basis functions of an integral transform have to be orthogonal, and then says that the basis functions of the Fourier transform are $e^{iux}$. But these basis functions don't seem to be orthogonal (the inner product of two distinct basis functions, over R, doesn't seem to be 0. I know that you have to take the conjugate of one of the basis functions for inner products in a Hilbert space).

So is the kernel of the Fourier transform actually an orthonormal basis for L2[R]? Is there a different definition for orthogonality when operating over the entire real line? (The basis functions don't seem to be square integrable, for instance).

-- Creidieki 17:06, 14 September 2010 (UTC)

(Since no one actually familiar with the subject gave an answer, I'll try.) I'm afraid the article is oversimplifying the picture. The Fourier transform on R does not directly define an operator from L2 to L2, but from L1 to L∞. The "basis functions" are also only L∞ (moreover, notice that L2 has countable dimension as a Hilbert space, whereas there are continuum-many of these "basis functions"). In order to get an operator on L2, you take the Fourier transform on L1 ∩ L2, and (uniquely) extend it by continuity to all of L2. This works because L1 ∩ L2 is a dense subset of L2, the transform takes L1 ∩ L2 functions to L2, and it is a bounded linear mapping. This kind of construction by extension from a dense subspace works in general for any locally compact abelian group in place of R, see Pontryagin duality.—Emil J. 10:49, 15 September 2010 (UTC)[reply]
The Plancherel theorem is relevant to EmilJ's answer. You said: "But these basis functions don't seem to be orthogonal (the inner product of two distinct basis functions, over R, doesn't seem to be 0)." If the "u" you quote is allowed to range over all integers, then the fact that the basis functions form a (countable) orthonormal set is an easy computation. I think that the mistake you are making is to say that the inner product is defined by integration over the entire real line - it is not. The inner product is defined by integration over the interval [0, 2π]. (Of course, when I say "the inner product is defined by integration over" I refer to the "integration" in the following rule for the inner product of two functions:
$\langle f, g\rangle = \frac{1}{2\pi}\int_0^{2\pi} f(x)\,\overline{g(x)}\,dx,$
where f and g are usually taken to be complex-valued, measurable, 2π-periodic functions. (Equivalently, they are complex, measurable functions defined on the unit circle in the complex plane.).) Hope this helps! PST 23:59, 19 September 2010 (UTC)[reply]
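(A quick numerical sketch of my own, not from the thread, assuming the inner product PST describes and basis functions e_m(x) = e^{imx} for integer m; the grid size and function names are arbitrary.)

```python
# Quick numerical check of the orthonormality PST describes, under the assumed
# inner product <f, g> = (1/(2*pi)) * integral over [0, 2*pi] of f(x)*conj(g(x)) dx,
# with basis functions e_m(x) = exp(i*m*x) for integer m.
import numpy as np

x = np.linspace(0.0, 2 * np.pi, 2048, endpoint=False)   # uniform grid on [0, 2*pi)

def inner(m, n):
    """Approximate <e_m, e_n>; averaging a periodic integrand over one period
    on a uniform grid is a very accurate quadrature here."""
    return np.mean(np.exp(1j * m * x) * np.conj(np.exp(1j * n * x)))

print(abs(inner(3, 3)))   # ~1: each basis function has norm 1
print(abs(inner(3, 5)))   # ~0: distinct basis functions are orthogonal
```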

Octahedral Symmetry

OK, so the group G of both orientation-preserving and orientation-reversing symmetries of a regular octahedron has order 48. If we then consider the set D consisting of the lines joining pairs of opposite vertices (so there are three such lines) and want to find the stabiliser of one of the lines, do we just bash out the orbit-stabiliser theorem and say that the size of the stabiliser of a line is the size of G (48) divided by the size of the orbit (3), giving us 16? And is a question asking you to 'identify the stabiliser' likely to mean just find the order of the stabiliser, or something more complex? Thanks asyndeton talk 19:15, 14 September 2010 (UTC)[reply]

They probably want you to actually say which group of order 16 it is. (There aren't that many to choose from.) Rckrone (talk) 03:58, 15 September 2010 (UTC)[reply]
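(A brute-force sketch of my own, not from the thread, assuming the octahedron has vertices at ±e1, ±e2, ±e3, so that its full symmetry group is the set of 48 signed 3×3 permutation matrices. Counting the elements that map the vertex line through ±e3 to itself recovers the stabiliser order 16.)

```python
# Brute-force check, assuming the octahedron has vertices at +/-e1, +/-e2, +/-e3,
# so that its full symmetry group is the 48 signed 3x3 permutation matrices.
import numpy as np
from itertools import permutations, product

group = [np.diag(s) @ np.eye(3)[list(perm)]           # sign flips times a permutation matrix
         for perm in permutations(range(3))
         for s in product([1, -1], repeat=3)]
assert len(group) == 48                               # |G| = 3! * 2^3

e3 = np.array([0.0, 0.0, 1.0])
stabiliser = [g for g in group
              if np.allclose(g @ e3, e3) or np.allclose(g @ e3, -e3)]
print(len(stabiliser))                                # 16 = 48 / 3, as orbit-stabiliser predicts
```

Each matrix found this way has block form diag(A, ±1) with A a 2×2 signed permutation matrix, which is one concrete starting point for identifying which group of order 16 it is, as Rckrone suggests.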