Wikipedia:Reference desk/Archives/Mathematics/2016 June 11

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


June 11

Analyzing the multiple choice test

Consider a multiple choice test with $n$ questions, each having two choices, one of which is correct.

Suppose that $x$ answers are known, and so the remaining $n-x$ answers are unknown.

All the known answers are answered correctly, but the unknown answers are guessed randomly.

The number of correctly answered questions is $y$, and so the number of incorrectly answered questions is $n-y$.

The odds are

$$2^x \binom{n-x}{y-x}$$

because the number of ways of choosing $y-x$ correct guesses out of $n-x$ unknown answers is the binomial coefficient $\binom{n-x}{y-x}$, and guessing $n-x$ times provides the factor $2^{-(n-x)}$. The constant factor $2^{-n}$ is omitted.

This odds-table for $n = 10$ is computed by J:

  5":(2^x)*|:(!/~)n-x=.i.1+n=.10   NB. row x, column y: 2^x * C(n-x, y-x)
   1   10   45  120  210  252  210  120   45   10    1
   0    2   18   72  168  252  252  168   72   18    2
   0    0    4   32  112  224  280  224  112   32    4
   0    0    0    8   56  168  280  280  168   56    8
   0    0    0    0   16   96  240  320  240   96   16
   0    0    0    0    0   32  160  320  320  160   32
   0    0    0    0    0    0   64  256  384  256   64
   0    0    0    0    0    0    0  128  384  384  128
   0    0    0    0    0    0    0    0  256  512  256
   0    0    0    0    0    0    0    0    0  512  512
   0    0    0    0    0    0    0    0    0    0 1024
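As a spot check of a single entry (the pair x = 3, y = 5 is an arbitrary choice): the formula gives 2^3 · C(7,2) = 168, matching row x = 3, column y = 5 of the table.

   (2^3) * 2 ! 7   NB. 2^x * C(n-x, y-x) at x=3, y=5
168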

The mean value ± the standard deviation of the number of correctly answered questions is

$$y \simeq \frac{n+x}{2} \pm \frac{\sqrt{n-x}}{2}$$

as $y-x$ has a binomial distribution. But this is uninteresting. I want to know the mean value ± the standard deviation of the number of known answers, $x$, knowing the observed number of correctly answered questions, $y$. It can be computed by brute force from the odds-table, but I want a simple formula like the one above. This is why I am requesting your assistance.
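For reference, the brute-force computation from the odds-table can be sketched in J as follows (the observed value y = 6 is an arbitrary choice, and the names post, mu, sigma are introduced here for illustration):

   xs =. i. 1+n =. 10
   odds =. (2^xs) * |: (!/~) n-xs        NB. the same odds-table as above
   post =. c % +/ c =. 6 {"1 odds        NB. normalized column y = 6
   mu =. +/ post * xs                    NB. posterior mean of x
   sigma =. %: (+/ post * *: xs) - *: mu
   mu , sigma
2.55451 1.70879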

The challenge is to simplify the sums

$$S_k = \sum_{x=0}^{y} x^k\, 2^x \binom{n-x}{y-x}, \qquad k = 0, 1, 2.$$

Then it is easy to solve

$$\mu = \frac{S_1}{S_0}, \qquad \sigma^2 = \frac{S_2}{S_0} - \mu^2$$

for $x \simeq \mu \pm \sigma$.

Perhaps the sum

$$T_k = \sum_{x=0}^{y} \binom{x}{k}\, 2^x \binom{n-x}{y-x}$$

is easier to simplify analytically. Then

$$S_0 = T_0, \qquad S_1 = T_1, \qquad S_2 = 2T_2 + T_1.$$
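A quick numerical sanity check of the sums $S_0, S_1, S_2$ against the brute force above (a sketch, reusing the arbitrary values n = 10 and y = 6):

   n =. 10 [ y =. 6
   o =. (2^xs) * (y-xs) ! n-xs =. i. 1+y    NB. odds of each x given y
   'S0 S1 S2' =. (+/ o) , (+/ xs * o) , +/ (*: xs) * o
   (S1 % S0) , %: (S2 % S0) - *: S1 % S0    NB. same mu and sigma as brute force
2.55451 1.70879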

Bo Jacoby (talk) 09:33, 11 June 2016 (UTC).

What's your prior probability for $x$? Just a uniform distribution? Jheald (talk) 14:00, 11 June 2016 (UTC)

Yes. Prior to the observation the possibilities are equally credible. Bo Jacoby (talk) 15:42, 11 June 2016 (UTC).

With an appropriate mapping of parameters, can you relate your posterior distribution to a negative binomial distribution? How close can you get? Jheald (talk) 14:14, 11 June 2016 (UTC)

After the observation of $y$ the only possibilities are $x = 0, 1, \ldots, y$. A negative binomial distribution allows $x > y$. Bo Jacoby (talk) 15:42, 11 June 2016 (UTC).

A tiny simplification: substitute $j = y - x$ and $m = n - y$.

Bo Jacoby (talk) 07:39, 12 June 2016 (UTC).

Your series are hypergeometric, and they don't factor. What sort of answer are you hoping for? --JBL (talk) 16:03, 13 June 2016 (UTC)

Thanks! I am hoping for a reasonable computational complexity (as I don't want to count when computing 28+94). Bo Jacoby (talk) 16:39, 13 June 2016 (UTC).

You already have a single sum of simple-to-compute summands. How much simpler can an answer be? (I don't understand what 28 and 94 have to do with anything.) --JBL (talk) 15:46, 14 June 2016 (UTC)

The formula

$$y \simeq \frac{n+x}{2} \pm \frac{\sqrt{n-x}}{2}$$

estimates the number of correctly answered questions from the number of known answers. I want a similar formula

$$x \simeq \mu \pm \sigma$$

to estimate the number of known answers from the number of correctly answered questions. (Forget about 28 and 94. I unsuccessfully tried to illustrate the difference between counting and computing.) Bo Jacoby (talk) 21:28, 14 June 2016 (UTC).

And you have exact formulas for mu and sigma, which involve ratios of single sums of simple-to-compute summands. So, something about "ratios of single sums of simple-to-compute summands" is unsatisfactory to you. But this is a pretty nice kind of answer (particularly because the sums don't factor). So could you make a clear statement of what properties an answer should have to be satisfactory? --JBL (talk) 23:32, 14 June 2016 (UTC)

When the formula

$$y \simeq \frac{\sum_y y\, \binom{n-x}{y-x}}{\sum_y \binom{n-x}{y-x}} \pm \sqrt{\frac{\sum_y y^2\, \binom{n-x}{y-x}}{\sum_y \binom{n-x}{y-x}} - \left(\frac{\sum_y y\, \binom{n-x}{y-x}}{\sum_y \binom{n-x}{y-x}}\right)^2}$$

is simplified into

$$y \simeq \frac{n+x}{2} \pm \frac{\sqrt{n-x}}{2},$$

why can't the formula

$$x \simeq \frac{S_1}{S_0} \pm \sqrt{\frac{S_2}{S_0} - \left(\frac{S_1}{S_0}\right)^2}$$

also be simplified? Bo Jacoby (talk) 20:28, 15 June 2016 (UTC).

Because the sum $\sum_y \binom{n-x}{y-x} = 2^{n-x}$ (essentially, a ${}_1F_0$ hypergeometric function) and the other sums in the first expression have transformation identities that allow them to be written as products, and then the products cancel. But the series that appear in your example do not factor, so cancellation of the same kind doesn't happen.
(Three things about my comments: (1) I do not consider myself an expert in this area, and in particular I don't have anything to say about why some series factor and others don't. (2) I do not claim that I have provided a rigorous proof of impossibility of anything. (3) It's easy to see that your S_k functions can't factor nicely in general by setting k and y to be small positive integers; you'll get polynomials in n that do not factor nicely.) --JBL (talk) 21:36, 15 June 2016 (UTC)

Thanks! How does one find the hypergeometric expressions? Bo Jacoby (talk) 22:54, 15 June 2016 (UTC).

If you mean how to find identities, I suggest the book of Gasper and Rahman or these articles: hypergeometric function, hypergeometric identity, generalized hypergeometric function. If you mean how to express one of your series as a hypergeometric function in its standard form, I would be happy to do an illustrative example of your choosing. --JBL (talk) 16:31, 16 June 2016 (UTC)

I found

$$\sum_{x=0}^{y} \binom{n-x}{y-x} = \binom{n+1}{y},$$

which is promising. Can you similarly reduce these?

$$\sum_{x=0}^{y} 2^x \binom{n-x}{y-x}, \qquad \sum_{x=0}^{y} x\, 2^x \binom{n-x}{y-x}, \qquad \sum_{x=0}^{y} x^2\, 2^x \binom{n-x}{y-x}$$

Bo Jacoby (talk) 20:23, 16 June 2016 (UTC).
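The found identity is easy to confirm numerically in J (again with the arbitrary values n = 10 and y = 6):

   n =. 10 [ y =. 6
   xs =. i. 1+y
   (+/ (y-xs) ! n-xs) , y ! n+1   NB. both sides give C(n+1, y)
462 462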

These are, respectively, $\binom{n}{y}\,{}_2F_1(1,-y;-n;2)$, $2\binom{n-1}{y-1}\,{}_2F_1(2,1-y;1-n;2)$, and $2\binom{n-1}{y-1}\,{}_3F_2(2,2,1-y;1,1-n;2)$. (And these can be written lots of other ways, e.g., using the Euler and Pfaff transformations.) The hypergeometric function ${}_2F_1(a,b;c;z)$ with $z = 1$ is summable to a product (essentially, this is Vandermonde's identity); that's why your first example is nice. The other three examples have $z = 2$, which is not summable to a product in general. --JBL (talk) 20:50, 16 June 2016 (UTC)
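These identifications can be checked by summing the terminating hypergeometric series directly. A sketch in J for the first one, assuming the parameter identification above; poch is a helper introduced here for the rising factorial, with n = 10 and y = 6 as before:

   poch =. 4 : '*/ x + i. y'          NB. rising factorial (x)_y
   n =. 10 [ y =. 6
   k =. i. 1+y                        NB. the series terminates at k = y
   S0 =. +/ (2^xs) * (y-xs) ! n-xs =. i. 1+y
   F =. +/ (1 poch"0 k) * ((-y) poch"0 k) * (2^k) % ((-n) poch"0 k) * !k
   S0 , F * y!n                       NB. both give 1486
1486 1486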

Thank you very much! Bo Jacoby (talk) 08:48, 17 June 2016 (UTC).

Rank deficient variant of matrix orthogonality

An $n \times n$ complex matrix $X$ satisfies this rank deficient variant of the orthogonality condition

$$X^{\mathrm{T}} X = \operatorname{diag}(1, \ldots, 1, 0),$$

where $X^{\mathrm{T}}$ is the transpose of $X$. I'd ultimately like to construct the most general solution for $X$, but any pointers or suggestions are as welcome as a solution. Does anyone have any good ideas? Are there any possibly related concepts that you think could help? 92.12.167.39 (talk) 17:55, 11 June 2016 (UTC)

A large class of solutions is obtained by replacing the last column of an orthogonal matrix with 0's. In fact if we were talking about real matrices instead of complex, or using X*X (conjugate transpose) instead of XtX, then I believe those would be the only solutions. But the possibility of non-zero self-orthogonal vectors throws off my reasoning some. --RDBury (talk) 19:45, 11 June 2016 (UTC)
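This construction is easy to check numerically; a minimal sketch in J with a real 2×2 rotation (the angle 0.7 is an arbitrary choice):

   mp =. +/ . *                       NB. matrix product
   Y =. 2 2 $ (2 o. 0.7), (- 1 o. 0.7), (1 o. 0.7), 2 o. 0.7
   X =. Y *"1 (1 0)                   NB. replace the last column with 0's
   (|: X) mp X                        NB. diag(1,0), up to rounding
1 0
0 0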
I think I have the step I was missing, so here goes. The first n-1 columns of X are orthonormal; the first step is to show there is an nth column which extends them to an orthogonal basis for C^n. Let the first n-1 columns be u1, ... , un-1. These are linearly independent: to see this, assume a1u1+...+an-1un-1=0, and dot both sides with ui to get ai = 0 for each i. Since the vectors are linearly independent we can add an nth vector v to make a basis for C^n. Now let un = v - (v⋅u1)u1 - ... - (v⋅un-1)un-1, following the usual (Gram-Schmidt) process for forming an orthogonal basis. Then un is orthogonal to the other ui and u1, ... , un is a basis, but I need to show un⋅un ≠ 0 before going on. (This is where it would be easier if we were talking about real matrices.)
Write each uj as y1je1+...+ynjen, where the ei are the standard basis vectors. (In fact the first n-1 columns of Y = (yij) are the same as the first n-1 of X, but I won't need that.) Expanding out the dot products, ui⋅uj = y1iy1j+...+yniynj and so the matrix of dot products is (ui⋅uj) = YtY. The left hand side, given what we know about the ui's, is diag(1, ... , 1, un⋅un). The determinant of the left hand side is then un⋅un. But Y is a nonsingular matrix so its determinant is non-zero, and so the determinant of the right hand side is |Y|^2 ≠ 0; therefore un⋅un ≠ 0.
Square roots always exist in C so divide un by √(un⋅un) to make Y an orthogonal matrix, i.e. YtY = YYt = I. (Actually an orthogonal matrix is defined to be over the reals but I'll just stretch the definition a bit.) The first n-1 columns of the original X are u1, ... , un-1 and the last column is some other vector w. Since the ui's form a basis, write w=b1u1+...+bnun. From XtX=diag(1, ... , 1, 0) we have ui⋅w = 0 for i<n and w⋅w = 0. Expanding, bi=0 for i<n, so w=bnun, and w⋅w = bn^2 = 0 so bn = 0 and so w = 0. Therefore X is formed by replacing the last column of Y with 0's. Note that the argument fails if we're given XtX=diag(1, ... , 1, 0, ... , 0) with more than one 0 on the diagonal. --RDBury (talk) 22:05, 11 June 2016 (UTC)
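The Gram-Schmidt step with the bilinear (non-conjugating) dot product can also be exercised numerically; a sketch in J, where dot, u1, u2 are names introduced here and the starting vectors are arbitrary:

   dot =. +/ . *                 NB. bilinear dot product over C (no conjugate)
   u1 =. 3 0j4 % %: _7           NB. (3,4i)/sqrt(-7), so u1 dot u1 = 1
   v =. 1 1
   u2 =. v - (v dot u1) * u1     NB. one Gram-Schmidt step; u1 dot u2 = 0
   u2 dot u2                     NB. nonzero, as the determinant argument guarantees
1j3.42857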
PS. The above reduces the problem to finding the elements of On(C). I'm not sure what is known about this group; I think On(R) and Un(C) are better known. Do we have an article on it? --RDBury (talk) 22:11, 11 June 2016 (UTC)
Thanks RDBury, that seems convincing to me. Regarding complex orthogonal matrices: I've seen them come up in discussions of Lie theory, and we have some brief comments at classical group and orthogonal group. If it helps, I think that based on the Lie group connection, you can construct these matrices by taking a parameterization of SOn(R) in terms of a set of real parameters (for example Euler-like angles) and analytically continuing them. 92.12.167.39 (talk) 09:16, 12 June 2016 (UTC)
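The analytic continuation idea can be illustrated in the same sketch as before: take the rotation angle to be imaginary, which gives a matrix in O2(C) that is orthogonal but not unitary, and zero its last column as above:

   mp =. +/ . *                    NB. matrix product
   t =. 0j1                        NB. imaginary angle: cos(i) = cosh 1, sin(i) = i sinh 1
   Y =. 2 2 $ (2 o. t), (- 1 o. t), (1 o. t), 2 o. t
   (|: Y) mp Y                     NB. the identity (up to rounding): Y is complex orthogonal
1 0
0 1
   X =. Y *"1 (1 0)                NB. zero the last column
   (|: X) mp X                     NB. diag(1,0) again
1 0
0 0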