Wikipedia:Reference desk/Archives/Mathematics/2009 January 26
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
January 26
Turkish numerals
Barely mathematical, but still. . . .
Here's a retailer's ad for a pocket watch for the Ottoman market. The main dial has roman numerals in black around the outside. Inside these are other markings in red. Most of these markings are identical, but something looking like "0" corresponds to black "V", and I happen to remember that the Arabic-as-used-by-Arabs-numeral for 5 looks like "0". Oh, here's another, earlier example.
Googling for "turkish numerals" seems to bring either mentions of watch designs or explanations of the words in Turkish for numbers. Perhaps I'm googling poorly today, but can somebody point me to an informative link for this system of Turkish notation (surely much simplified and stylized on watch faces), even if the page is within an encyclopedia that anyone can edit? -- Hoary (talk) 01:04, 26 January 2009 (UTC)
- Miscellaneous or humanities might be better. Just a guess but don't they go off praying at about that unearthly hour of the morning? Dmcq (talk) 13:35, 26 January 2009 (UTC)
- ...that they don't get killed today. At least that's what I'd be praying for in Gaza at an unearthly hour of the morning.
- Please...--pma (talk) 21:35, 26 January 2009 (UTC)
- Just what I'd been thinking. Many wristwatches with Roman or [what we normally think of as] Arabic numerals only give some of these (e.g. just 12, 3, 6, 9) and use simple strokes for the rest; presumably these faces choose among the numbers for their religious/cultural significance rather than for design or symmetry. -- Hoary (talk) 01:31, 27 January 2009 (UTC)
- I just had a look and they are obviously a stylized version of the numerals in Ottoman Turkish alphabet. Not sure why you couldn't find it as googling "Turkish numerals" gave me a good web reference as the first hit. Dmcq (talk) 22:45, 26 January 2009 (UTC)
- Numerical digit is the wiki page. Clearly looking like the Hindu-Arabic numeral system. --Salix (talk): 23:12, 26 January 2009 (UTC)
- If you look closely, you'll find that some of the figures that look like simple strokes (in the form of wedges) are accompanied by little crescents; these are stylized forms of Arabic '2', '3' and '6'. (I can't explain the '4'.) —Tamfang (talk) 05:31, 27 January 2009 (UTC)
Thank you all for the links. WP seems to say all, except for the (of course maths-irrelevant) matter of selection among the numerals for watch-face design. -- Hoary (talk) 01:31, 27 January 2009 (UTC)
zero divisors in non-commutative rings
The following question arose in a homework problem I am stuck on. Suppose R is a non-commutative ring and x in R is nonzero. For every y in R, does there necessarily exist nonzero z in R such that zy is in Rx? (If the answer is yes, please don't give the proof, just tell me if the proof is easy or not.) I see from zero divisor that left and right zero divisors do not necessarily coincide, which (if it were true) would give z = x as a solution. Eric. 131.215.158.184 (talk) 01:34, 26 January 2009 (UTC)
- Actually, my comment about zero divisors is completely off anyhow. I'm working on solving my original problem (if every left R-module is free, prove R is a division ring) without using the above question, because I suspect it isn't true. Eric. 131.215.158.184 (talk) 01:49, 26 January 2009 (UTC)
- In a domain, your first question asks: For every nonzero x,y in R, does there exist nonzero a,b in R such that ax = by? In other words, must Rx and Ry intersect non-trivially? The answer to this is no (see "uniform dimension" and Goldie's theory of classical rings of quotients). For instance, take R to be the free algebra on x and y over a field. This ring also has the property that every left ideal is free, but is not a division ring. This shows you will have to consider more than left ideals for your original problem. JackSchmidt (talk) 12:29, 26 January 2009 (UTC)
- The ring of integers also has that property. Algebraist 01:33, 27 January 2009 (UTC)
- (That every left ideal is free, but not that there is a direct sum of nonzero left ideals, which can never happen in a commutative domain.) JackSchmidt (talk) 04:03, 27 January 2009 (UTC)
- For the record, here's the proof (which I got with a few subtle hints from a friend): let m be a maximal left-ideal of R, then R / m is a simple left R-module, which must be isomorphic to R (since it is free over R and a quotient of R and nonzero). Thus R is itself a simple left R-module, and a division ring. (I can't believe how long that took me.) Eric. 131.215.158.184 (talk) 03:28, 27 January 2009 (UTC)
- Yay. A ring where every left module is free is a division ring, but your proof even shows that if every cyclic module is free, then the ring is a division ring. If you have classified rings where every left module is projective, try classifying rings where every cyclic left module is projective. The proof is similar, but does not reduce immediately to simple modules. JackSchmidt (talk) 04:03, 27 January 2009 (UTC)
- Oh hey, there is a small problem. There are rings R where R^2 is a nonzero quotient of R that is free and not isomorphic to R (rings where R is not isomorphic to R^2, but R is isomorphic to R^3, see invariant basis number and Leavitt's work). However, you can argue directly that R/m is isomorphic to R since it is free and simple (so indecomposable). JackSchmidt (talk) 04:08, 27 January 2009 (UTC)
- A question occurs: this proof uses the fact that R has a maximal left ideal. As such, it relies firstly on the fact that R has a multiplicative identity, and secondly on the axiom of choice. Are either of these assumptions necessary? Algebraist 21:58, 27 January 2009 (UTC)
- There are no non-unital rings such that every module is free on some subset. Every abelian group (with zero multiplication) is a module, and the homomorphisms between two such modules are just the abelian group homomorphisms. Hence a module with two elements and zero multiplication always exists and is never free. Even the regular module need not be free on any subset. If one instead defines a free module to be one isomorphic to a direct sum of copies of the regular module, then again there will be some abelian group not of this form. Sometimes people are interested in rngs with local units (like a ring of compact operators), and then one requires that modules M for a rng R satisfy RM=M and Rm=0 implies m=0. However, having nontrivial idempotents tends to produce non-free modules as it produces direct sum decompositions of modules. JackSchmidt (talk) 23:16, 27 January 2009 (UTC)
- I can't tell whether you need the axiom of choice here, but it should be mentioned that the converse implication is equivalent to AC. That is, if every left module over a division ring (or just a vector space over a field) is free, then the axiom of choice holds. — Emil J. 16:08, 28 January 2009 (UTC)
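For anyone reading along later, here is a compact restatement of the corrected argument above, written out only as a sketch (it assumes R is unital, and the existence of a maximal left ideal m uses Zorn's lemma, as discussed):
- R/m is a nonzero cyclic left R-module, so by hypothesis it is free: R/m is isomorphic to a direct sum of copies of R.
- R/m is simple, hence indecomposable, so the direct sum has exactly one summand and R/m ≅ R.
- Therefore R is simple as a left module over itself: its only left ideals are 0 and R.
- So for every nonzero a in R we have Ra = R, giving some b with ba = 1; since b is also nonzero there is c with cb = 1, and then a = (cb)a = c(ba) = c, so ab = 1 as well. Every nonzero element is invertible, and R is a division ring.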
electric watts to amps
An electric appliance draws 1500 watts; what would the amp draw be? What is the formula for converting watts to amps? 74.72.185.229 (talk) 02:02, 26 January 2009 (UTC)
- Watts=volts×amps
- -- Hoary (talk) 02:23, 26 January 2009 (UTC)
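- To make that concrete (a worked example only; the supply voltage isn't stated in the question, so 120 V is just an assumption for a North American outlet): amps = watts / volts, so I = 1500 W / 120 V = 12.5 A. On a 230 V supply the same 1500 W appliance would draw about 1500 / 230 ≈ 6.5 A.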
What is 0/0?
What is 0/0? The Successor of Physics 04:40, 26 January 2009 (UTC)
- It's not normally defined. See Division by zero. Joeldl (talk) 04:54, 26 January 2009 (UTC)
- In FORTRAN, it's NaNQ, which I've always thought of as "Not A kNown Quotient" (what, no article ?). StuRat (talk) 16:26, 26 January 2009 (UTC)
- Well, we have an article on the standard NaN from IEEE 754 floating-point arithmetic. We could redirect NaNQ there if it is similar (I don't know Fortran). — Emil J. 16:38, 26 January 2009 (UTC)
- Actually, according to [1], NANQ is a nonstandard notation for a quiet IEEE NaN, which is supposed to be denoted as NAN, NAN(), or NAN(Q) in standard Fortran 2003. (Incidentally, this also shows that the letters stand for "Not A Number, Quiet".) — Emil J. 16:48, 26 January 2009 (UTC)
- Ah, thanks for the redirect and explanation. StuRat (talk) 18:41, 26 January 2009 (UTC)
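- Incidentally, you can watch a quiet NaN appear outside Fortran too. A minimal sketch, assuming NumPy is installed (plain Python's 0.0/0.0 raises ZeroDivisionError instead of returning a NaN):

import numpy as np

# IEEE 754 division of zero by zero produces a quiet NaN rather than an
# error; NumPy would normally emit a RuntimeWarning, silenced here.
with np.errstate(invalid="ignore"):
    q = np.float64(0.0) / np.float64(0.0)

print(q)            # nan
print(q == q)       # False: a NaN compares unequal to everything, itself included
print(np.isnan(q))  # True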
- Remark. Sometimes you find "0/0" written just to indicate a particular type of limit, like in the sentence: "the limit of f(x)/g(x) is an indeterminate form of type 0/0." It just means that both the numerator and denominator of the fraction vanish at the point where you take the limit. The phrases "indeterminate form of type ∞/∞", or 0·∞, etc., have an analogous meaning: they do not refer to any special value. To evaluate the limit you may use the so-called de l'Hôpital's rule --pma (talk) 21:24, 26 January 2009 (UTC)
Using L'Hopital's rule to find the derivative of the function that raises its argument to the nth power constitutes circular reasoning. Sometimes one must point out this error in freshman calculus courses. Michael Hardy (talk) 23:55, 26 January 2009 (UTC)
- Yes; in fact, I too am not so fond of l'Hôpital's rule. In elementary courses weaker students get confused, and apply it without checking the hypotheses, or apply it to compute a limit that is explicitly a derivative, like sin(x)/x as x->0. And in more advanced mathematics no one would use it, because the little-oh and big-oh notation is more practical. --pma (talk) 17:29, 27 January 2009 (UTC)
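- To spell out the circularity in that example: lim_{x->0} sin(x)/x is exactly the difference quotient (sin(x) - sin(0))/(x - 0) as x->0, i.e. the definition of the derivative of sin at 0. Applying l'Hôpital's rule turns it into lim_{x->0} cos(x)/1 = 1, but knowing that the derivative of sin is cos already presupposes the value of the very limit being computed, so nothing has been proved.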
There is another way of looking at this problem. Consider a piece of electrical wire with a DC current flowing through it.
V = I * R
or
I = V / R
Now if you replace the wire with a superconductor and then measure V (the voltage drop across the wire) and R (the resistance of the superconductor), you will get V = 0 and R = 0.
So what is the current flowing through the superconductor? Well you can measure the current and it is (drum roll please)
It is any amount of current you choose to send through the superconductor.
Hence 0/0 can be any value at all: it is indeterminate. 122.107.205.162 (talk) 10:34, 31 January 2009 (UTC)
What is 0^0?
What is 0^0? The Successor of Physics 04:45, 26 January 2009 (UTC)
- In some cases it's most convenient to define it as 1, and some people adopt this definition in all cases. In other cases, it may be most convenient to leave it undefined. See Exponentiation#Zero to the zero power. Joeldl (talk) 04:56, 26 January 2009 (UTC)
- It's undefined because to get from x^n to x^(n-1), you have to divide x^n by x. So to get from 0^1 to 0^0, you divide 0^1 by 0. And, as we all know, division by 0 = undefined. Moral of the story: we stick with Roman numerals, which don't even have a character for zero. Problem solved. flaminglawyer 06:37, 26 January 2009 (UTC)
- That argument doesn't hold up. Your statement is only true for x ≠ 0. Not only that, but it can't be made to work for 0. For example, according to your rule, to get from 0^5 to 0^4, you divide by 0. Since 0^5 is 0, this means 0^4 should be 0 divided by 0, which is undefined. So 0^4 should be undefined. But we all know that it's not; everybody agrees that 0^4 is 0. Joeldl (talk) 06:50, 26 January 2009 (UTC)
- You can also consider this: the function x^y is continuous (and even more) as a function of all x>0 and all y real, but there is no way to extend it continuously to include (0,0) in its domain. No matter what value we agree for 0^0, the resulting function will not be continuous in the pair (x,y) at (0,0). That is why an extension is not so important after all. But, depending on the case, it may be useful to extend it by 0 or by 1. --pma (talk) 11:40, 26 January 2009 (UTC)
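- A quick numerical illustration of that: approach (0,0) along two different paths and x^y tends to two different values, so no single choice of 0^0 restores continuity. A minimal sketch in plain Python (nothing beyond the standard behaviour of ** is assumed):

# Path 1: x = y = t, so x^y = t^t, which tends to 1 as t -> 0+.
# Path 2: x = 0, y = t > 0, so x^y = 0^t, which is identically 0.
for t in [0.1, 0.01, 0.001, 0.0001]:
    print(f"t = {t:<7}  t^t = {t ** t:.6f}  0^t = {0.0 ** t:.1f}")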
In power series, it's 1. For example,
e^x = x^0/0! + x^1/1! + x^2/2! + x^3/3! + ...
and when x = 0 the first term of this series is 0^0/0!, and all other terms are 0. In many contexts in probability and combinatorics, 0^0 is 1. In any case in which 0^0 must be construed as an empty product, it's 1. Michael Hardy (talk) 23:53, 26 January 2009 (UTC)
- I just got in way over my head... flaminglawyer 22:46, 29 January 2009 (UTC)
Variable Changing in Multiple Integrals
What is the general formula for variable changing in multiple integrals? The Successor of Physics 04:50, 26 January 2009 (UTC)
- See Integration by substitution. Joeldl (talk) 04:59, 26 January 2009 (UTC)
- The integral of the product of f(g(x)) with the derivative of g(x) (if of course g is a differentiable function), taken with respect to x, equals the integral of f(u) with respect to u. Just let u = g(x).
- Exactly; so the formula for change of variables for differentiable functions of one variable reduces to the derivative of a composition, via the fundamental theorem of calculus. The situation for multiple integrals is less elementary. The formula, for an injective differentiable change of variables g defined on U ⊆ R^n, is:
- ∫_{g(U)} f(y) dy = ∫_U f(g(x)) |det Jg(x)| dx,
- that holds for any f in L^1(g(U)); here |det Jg(x)| is the absolute value of the determinant of the Jacobian matrix of g; you can check that for n = 1 and g increasing you find again the 1-variable formula. In the particular case of f = the constant function 1, you get the Lebesgue measure of g(U) as:
- |g(U)| = ∫_U |det Jg(x)| dx;
- conversely, the above formula easily implies the one for general f, just by linearity and by taking limits. Also, if g is an affine map, that is g(x) = Ax + b where A is a matrix, you have an even more particular case:
- |g(U)| = |det A| |U|.
- Note that here |U| and |g(U)| denote n-dimensional Lebesgue measure, while |det A| is the absolute value of the number det A! (and "!" just denotes an exclamation mark). For physical applications it is maybe important to recall that, for an n×n matrix A, |det A| is exactly the factor by which the (n-dimensional) volume of U changes after applying A, as shown by the above formula |g(U)| = |det A| |U|. Again, the particular case of g = an affine map implies the general case of g differentiable, with some work indeed, but you can get an idea of how it works if you imagine subdividing the domain U into several small cubes, so small that g is approximately affine on each cube, so that |g(U)| is approximated by a sum over all the cubes: a sum that becomes the integral when you take the limit. By the way, a very efficient way to prove everything in one stroke is to obtain the first formula by the joint application of the Radon-Nikodym theorem and the Lebesgue differentiation theorem, as you will learn if you happen to study these topics. Still, this is not yet the most general formula; you may want g to be non-injective, and you may also want to have the determinant without absolute value on the RHS. Even more generally, you can consider the change of variables in the context of Riemannian manifolds: all of the above can be suitably extended (it is then called the "area formula"). For your purposes however, the most useful cases are probably n = 2, in particular the integration in polar coordinates, and n = 3, in particular the integration in spherical coordinates or in cylindrical coordinates. --pma (talk) 21:04, 26 January 2009 (UTC)
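- As a small numerical sanity check of that Jacobian factor in the n = 2 polar-coordinates case, here is a sketch assuming NumPy is available; the integral below has exact value π(1 - e^(-1)) ≈ 1.986, and both computations should match it to a couple of decimal places:

import numpy as np

# Integrate f(x, y) = exp(-(x^2 + y^2)) over the unit disk in two ways.
n = 1000

# 1) Cartesian midpoint rule on a grid, keeping only points inside the disk.
xs = np.linspace(-1.0, 1.0, n, endpoint=False) + 1.0 / n   # cell midpoints
x, y = np.meshgrid(xs, xs)
inside = x**2 + y**2 <= 1.0
cartesian = np.sum(np.exp(-(x**2 + y**2)) * inside) * (2.0 / n) ** 2

# 2) Polar coordinates g(r, t) = (r cos t, r sin t), with Jacobian |det Jg| = r.
rs = np.linspace(0.0, 1.0, n, endpoint=False) + 0.5 / n
ts = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False) + np.pi / n
r, t = np.meshgrid(rs, ts)
polar = np.sum(np.exp(-r**2) * r) * (1.0 / n) * (2.0 * np.pi / n)

print(cartesian, polar, np.pi * (1.0 - np.exp(-1.0)))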
Successor of physics: I don't want to sound rude in any way, but could you please be a bit more specific with your questions? If you are more specific, we can answer your question appropriately and in greater depth. --PST 23:01, 26 January 2009 (UTC)
Definition for |x|
What is the definition for |x|? Normally |x| is the value without considering the 'i's and - signs, but according to the vector formula |x| = +(x^2)^0.5 (1) the 'i's are kept, and in other sources even formula (1) changes to |x| = ±(x^2)^0.5 (2), in which even the plus or minus sign is uncertain! So what is the definition for |x|? The Successor of Physics 05:00, 26 January 2009 (UTC)
- What kind of thing is x? Joeldl (talk) 05:02, 26 January 2009 (UTC)
- x can be anything. The Successor of Physics 05:15, 26 January 2009 (UTC)
- I mean, x is a real number, right? In that case, I would define |x| (the absolute value of x) as being x if x ≥ 0, and -x otherwise. It is also the case that the formula |x| = +(x^2)^0.5 is always correct. In my opinion, this would be a poor definition of |x| because it's complicated. Joeldl (talk) 05:30, 26 January 2009 (UTC)
If x is any complex vector (which includes the real numbers and real vectors), I just think of |x| as the length of the vector, or the distance the corresponding point is from the origin. This is probably the simplest interpretation that is easy to explain. It is geometric.-Looking for Wisdom and Insight! (talk) 07:09, 26 January 2009 (UTC)
- If x can be anything, then |x| can mean any of a large number of things. In each case it's some sort of measure of the size of x, but the details vary a lot. See absolute value, absolute value (algebra), norm (mathematics), determinant and cardinality for starters. Algebraist 11:07, 26 January 2009 (UTC)
Successor, it should be pointed out that |x| = +(x^2)^0.5 does not work for anything, as you suspected. If by "anything" you mean any complex number, as you seem to do, then |x| = +(x·x*)^0.5, where x* is the complex conjugate of x, works for anything. (You do see how the i's disappear in this formula?) The first one is just the special case of this when x and x* happen to be equal, that is when x is real. -- Jao (talk) 13:41, 26 January 2009 (UTC)
- To summarize, "|x|" is a useful notation that you will find in a great number of different situations; usually the writer will state exactly what it stands for, or it will be clear from the context, so you really don't have to worry about it! In most of these cases (but not all) |x| is used to represent a certain non-negative real number depending on "x", and the function has special properties (as it is in the mentioned cases: absolute value, modulus, norm, valuation). Also, if A is a matrix, |A| is used to denote the determinant of A, even though det(A) is usually preferred, because |A| may be better used to denote a matrix norm. But |X| is also used to denote cardinality, and also measure (especially Lebesgue measure). Sometimes, with a completely different and less standard meaning, if X is a set endowed with a structure, some authors use |X| to denote the underlying set, so if G is a group, |G| is the set of its elements; if X is a graph, |X| is the set of its vertices, etc. Again, if you find it, they will state what they mean. --pma (talk) 19:00, 26 January 2009 (UTC)
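- For what it's worth, the most common of these meanings are each a one-liner in Python; a small sketch (assuming NumPy for the vector and matrix cases):

import numpy as np

print(abs(-3.5))                        # absolute value of a real number: 3.5
print(abs(3 + 4j))                      # modulus of a complex number: 5.0
print(np.linalg.norm([1.0, 2.0, 2.0]))  # Euclidean norm of a vector: 3.0
print(np.linalg.det([[1.0, 2.0],
                     [3.0, 4.0]]))      # determinant of a matrix: -2.0 (up to rounding)
print(len({1, 2, 3}))                   # cardinality of a finite set: 3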
Many-body problems
As a physics student, here's a 'deep' one for the mathematicians out there. I'm wondering: Why are many-body problems so difficult? Now, since I know a thing or two about solving them, I already know the superficial answer: It's a difficult mathematical problem. My question is rather, why is it a difficult mathematical problem? Obviously, not all superficially complicated or 'difficult' systems translate into difficult mathematical problems. Math has a way of finding clever, non-intuitive (physically) ways to solve stuff. Yet many-body problems have been around for centuries, and Math doesn't seem to have found any major 'shortcuts'. So I'm assuming there's some deep-seated mathematical difficulty surrounding them - and wondering what that might be. Anyone care to shed some insight on this? --130.237.179.182 (talk) 13:23, 26 January 2009 (UTC)
- I would say there are two main reasons: (1) large number of degrees of freedom; (2) non-linearity, resulting in sensitive dependence on initial conditions and chaotic behaviour. Successful approaches to many-body problems, such as the virial theorem and the equipartition theorem, all seem to reduce the number of degrees of freedom by considering statistical averages. Gandalf61 (talk) 13:42, 26 January 2009 (UTC)
- And let us also say that the N-body problem is not a single problem, but in fact a source of problems. In order to answer these problems and to understand the various aspects of the motion of N bodies (stability, periodicity, integrability, chaos, ...), mathematicians have been led to invent new theories and give new impulse to existing ones (calculus of variations in all its branches, nonlinear functional analysis, topological and differentiable dynamical systems, ergodic theory, KAM theory, ...). It is symptomatic that Poincaré got King Oscar II's prize not for solving the given problem, but for important new contributions to the subject: in particular, for stating new problems. --pma (talk) 15:18, 26 January 2009 (UTC)
- I think the main difficulty with N-body problems is that you need to know what the first N-1 bodies are doing before you can work out what the Nth body is doing, but you need to know what the Nth body is doing in order to work out what the first N-1 bodies are doing. Therefore, any kind of direct approach won't work and you need to either do it approximately or find some kind of clever trick. --Tango (talk) 17:57, 26 January 2009 (UTC)
- That doesn't explain why the 2-body problem is easy, though. Algebraist 17:59, 26 January 2009 (UTC)
- When referred to the centre of mass, a two-body problem turns into a single-body problem. And dealing with just one thing does tend to be easier. Dmcq (talk) 20:05, 26 January 2009 (UTC)
- You still need a clever trick with the 2-body problem, it's just a very simple one: You can use symmetry very easily with a two body problem by just assuming one of the bodies is stationary, giving you a 1-body problem, which is obviously easy. --Tango (talk) 21:44, 26 January 2009 (UTC)
- There is a joke popular among physicists: "In Newtonian mechanics, we can't solve the 3-body problem. In quantum mechanics, we can't solve the 2-body problem. In quantum field theory, we can't solve the 1-body problem. And in string theory, they can't even solve the 0-body problem!" This may be taken as a gross insult or a sober assessment of the situation, depending on your biases. Tesseran (talk) 17:40, 27 January 2009 (UTC)
It is not strange that most problems are difficult, but it is almost a miracle that some problems are not. Bo Jacoby (talk) 18:02, 27 January 2009 (UTC).
Roman numerals on watch dials
I guess this is the right section - apologies if it isn't. On a watch with Roman numerals, why is four always shown as IIII rather than IV? Pavel (talk) 18:18, 26 January 2009 (UTC)
- See Roman numerals#Calendars and clocks. PrimeHunter (talk) 18:20, 26 January 2009 (UTC)
- Uncle Paul, I would add that I have often seen IIII instead of IV in other contexts too; in fact, additive forms like XXXX and CCCC are used as well. I think that particularly for watches IIII makes sense, because the numerals are written all around the dial, and IV may be confused with VI since both of them appear almost upside down. --pma (talk) 22:42, 26 January 2009 (UTC)
Lim sup of number of divisors
Let d(n) denote the number of divisors of the positive integer n (so d(p) = 2 for a prime p). In divisor function#Approximate growth rate, one learns that:
- limsup d(n)/n^ε = 0 for every real number ε > 0
For a related function, the sum of divisors function σ (so σ(p)=p+1 for a prime p), one also learns that:
- limsup σ(n)/(n*log(log(n))) = C
where C is some finite, nonzero constant.
I would like a similar statement for d(n):
- limsup d(n)/f(n) = C
where C is some finite, nonzero constant and f(n) is a nice positive function, preferably a "monomial" in n, log(n), log(log(n)), etc. where I don't mind real number powers. What is a choice for f(n) that works?
I think it may be true that:
- limsup d(n)/log(n)^K = ∞ for every real number K > 0
I'm not really sure what to stick between n^ε (much too large) and log(n)^K (much too small). JackSchmidt (talk) 18:39, 26 January 2009 (UTC)
- I can't answer your question, but I can confirm your thought on log(n)^K. According to primorial, the kth primorial is n = exp[(1+o(1)) k log k], and of course d(n) = 2^k, so d(n)/log(n)^M is 2^k/[(1+o(1)) k log k]^M, which clearly goes to infinity. Algebraist 19:04, 26 January 2009 (UTC)
- Excellent, thanks for the theoretic confirmation. I've been using numerical experiments for the most part, and have been doing suspicious things like assuming all primes are equal to 2 for my "theoretical proofs". It may even be that
- limsup d(n)/exp( log(n)^(1−ε) ) = ∞ for every real number ε > 0
- but this is purely based on numerical experiments with tiny ε like 0.24. I'll see if I can prove this from the primorial estimates. JackSchmidt (talk) 19:35, 26 January 2009 (UTC)
- Yes the primorial estimate works nicely to show exp( log(n)^(1−ε) ) is much too small too. Also shows it might be tricky to specify the rate of growth nicely as I have J = I*log(I), and need to estimate I in terms of J. JackSchmidt (talk) 19:59, 26 January 2009 (UTC)
- Notice that in order to have a large d(N), for a given size of N, squarefree is good but doesn't seem optimal (think of replacing the highest prime factor p of N with a power of 2 very close to p). So the idea should be N = 2^a 3^b 5^c ..., with a choice of exponents a > b > c ... slightly concentrated on the first ones. I see here [2] that for n = k!, d(n) has an estimate d(n) ~ c0 log(n)/(loglog(n))^2 (Erdős et al.); however this seems worse than the primorial, because it implies d(n)/log(n)^2 = o(1) for factorials whereas for the primorial Algebraist has d(n)/(log(n))^M divergent for all M. Thus k! has too high powers of low primes and the corresponding exponents a > b > c ... are too concentrated. Fixing the first prime factors p1, ..., pk (k large enough), and the size of N, I imagine that an optimal choice may be suggested by maximizing D(a1, ..., ak) := (1+a1)···(1+ak) under the constraint log(N) = a1 log(p1) + ... + ak log(pk). --pma (talk) 14:10, 27 January 2009 (UTC) (Oops, sorry! The quoted estimation was about log(d(n)) not d(n), of course )
- By finishing the computation above, the primorial gives the lower bound log d(n) ≥ (log 2 + o(1)) log(n)/log(log(n)) for infinitely many n. I think I recall seeing somewhere (Apostol?) that this bound is in fact asymptotically optimal, i.e., log d(n) ≤ (log 2 + o(1)) log(n)/log(log(n)) for all n. If this is true, it is still too crude to give you a bound on d(n) of the form you want, but it gives a similar bound on log d(n): limsup (log d(n))/f(n) = log 2, where f(n) = log n/log log n. — Emil J. 14:41, 27 January 2009 (UTC)
- And, of course, I ~ J/log J. — Emil J. 14:46, 27 January 2009 (UTC)
- As a matter of fact, the paper linked above by PMajer confirms that limsup (log d(n)) log(log(n)) / log(n) = log 2, even with a stronger explicit error estimate, and attributes the result to Wigert. — Emil J. 15:02, 27 January 2009 (UTC)
- Thanks! This sort of bound should work perfectly,
- d(n) ≤ exp( C log(n) / log( log( n ) ) ) for all n
- d(n) ≥ exp( D log(n) / log( log( n ) ) ) for infinitely many n
- d(n) = 2 for infinitely many n
- This shows the wide range of d(n) while still bounding it closely, which is what I need. JackSchmidt (talk) 15:10, 27 January 2009 (UTC)
- One more thing: an elegant elementary proof of the upper bound is sketched here. — Emil J. 17:13, 27 January 2009 (UTC)
- Thanks, this is also nice. The last step of the proof brings back nightmares from PDE, but otherwise the proof is extremely clear and elementary. I'm currently taking a combinatorics class so have some more integer sequences to practice approximating. It will be refreshing to have these sorts of arguments seem natural at some point.
- Do you think that a basic analytic number theory book might be a good place to get more comfortable with these techniques? Any recommendations? I'm only interested in being comfortable with rates of growth of naturally occurring sequences of positive integers. JackSchmidt (talk) 17:40, 27 January 2009 (UTC)
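Since the thread leans on numerical experiments anyway, here is a small self-contained sketch (pure Python with trial-division primes, so only suitable for modest sizes; nothing beyond the standard library is assumed) that watches the quantity from Wigert's limsup at the primorials, where d(n) = 2^k is known exactly:

import math

def primes_up_to(limit):
    """All primes <= limit by simple trial division (fine for small limits)."""
    ps = []
    for m in range(2, limit + 1):
        if all(m % p for p in ps):
            ps.append(m)
    return ps

# At the k-th primorial n = 2*3*5*...*p_k we have d(n) = 2^k, so the ratio
# log d(n) * loglog(n) / log(n) should drift down toward log 2 ~ 0.693
# (Wigert's limsup), although the convergence is quite slow.
n = 1
for k, p in enumerate(primes_up_to(1000), start=1):
    n *= p
    if k % 20 == 0:
        ratio = k * math.log(2) * math.log(math.log(n)) / math.log(n)
        print(f"k = {k:3d}   log d(n) * loglog(n) / log(n) = {ratio:.3f}")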
Notation
What is that pesky thing above the equals sign, right there?
Seans Potato Business 21:17, 26 January 2009 (UTC)
- Context? Algebraist 21:19, 26 January 2009 (UTC)
- I don't know what the LHS means, but maybe the exclamation mark just stays there to signal that the equality is not trivial. Or it could refer to some other fact explaining the equality (like "identity (!)" or "formula (!)"). Especially if it comes from a blackboard, and not from a book! (In fact now I remember a guy in a seminar putting a "?" on the equals sign of a certain equality to be proven, and after proving it, he replaced it with a "!") --pma (talk) 21:31, 26 January 2009 (UTC)
- It could also be a form of indicating either that it is so by definition, or that we are going to define it as such. Confusing Manifestation(Say hi!) 22:39, 26 January 2009 (UTC)
- But how could that equality be a definition? Algebraist 22:41, 26 January 2009 (UTC)
- It's a definition of 0, clearly ;) pma (talk) 22:47, 26 January 2009 (UTC)
- Alternatively, maybe it means "not equal to" as C-style != -- SGBailey (talk) 23:25, 26 January 2009 (UTC)
- That would be very strange... ≠ is pretty universal. --Tango (talk) 01:41, 27 January 2009 (UTC)
- As for context, these are the differential equations describing the consumption of a Substrate with an Enzyme to produce a Product. If the initial quantity of substrate is large, then as the enzyme approaches its maximal turnover rate, the quantity of ES (enzyme bound to substrate) approaches a constant, and so d[ES]/dt can be approximated as 0. During this time d[E]/dt is also approximately zero (since they are negatives of each other). This lasts until the substrate is exhausted.
- So I'm not sure how to describe that use of an exclamation point, but I imagine that the author is using it to specify that period of time with maximal turnover rate (i.e., maximal [ES]). As d[ES]/dt is approximately zero for an extended period of time, the dynamics are considerably simpler. Eric. 131.215.158.184 (talk) 04:24, 27 January 2009 (UTC)
- The equations Seans posted come from Michaelis-Menten kinetics. I propose we find who edited that equation and interrogate him or her.
- More seriously, the exclamation point has no special meaning beyond that indicated in the article:
- "The first key assumption in this derivation is the quasi-steady-state assumption (or pseudo-steady-state hypothesis), namely that the concentration of the substrate-bound enzyme ([ES]) changes much more slowly than those of the product ([P]) and substrate ([S]). This allows us to set the rate of change of [ES] to zero and also write down the rate of product formation."
- So it seems that Confusing Manifestion is right. Eric. 131.215.158.184 (talk) 04:35, 27 January 2009 (UTC)
- Indeed, it looks like it indicates a hypothesis. --Tango (talk) 11:16, 27 January 2009 (UTC)
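For completeness, here is the standard quasi-steady-state calculation that the marked equation is flagging, written out only as a sketch (the rate constants k1, k-1, k2 below belong to the usual textbook Michaelis-Menten scheme, not to anything stated in the question): the scheme is E + S <-> ES (forward rate k1, reverse rate k-1) followed by ES -> E + P (rate k2), so
d[ES]/dt = k1[E][S] - (k-1 + k2)[ES] ≈ 0 (the flagged assumption).
Substituting [E] = [E]_0 - [ES] and solving gives [ES] = [E]_0[S]/(K_M + [S]) with K_M = (k-1 + k2)/k1, and hence the familiar rate law
d[P]/dt = k2[ES] = V_max[S]/(K_M + [S]), where V_max = k2[E]_0.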