Talk:Nyquist–Shannon sampling theorem/Archive 3
This is an archive of past discussions about Nyquist–Shannon sampling theorem. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Archive 1 | Archive 2 | Archive 3
Figs 3, 4 and 8 -- very unclear
These figures are not very clear and should be tidied up by someone more knowledgeable than myself.
Problems:
1. The term "images" is introduced without explanation. From my mostly-forgotten understanding of Shannon's theorem, it appears to me that these "images" are similar to what are called "sidebands" in radio communications. Whatever "images" are, they should be explained either in the text or the figures.
2. The lettering on the frequency scale is unclear, particularly for Fig 3. For example, what is supposed to be made of "-f+ BB"? Some of the lettering should be moved above the scale to get it out of the way of the others. —Preceding unsigned comment added by Guyburns (talk • contribs)
- Fig 3 was an .svg file and was reverted to the .png file it previously was. They tell us that vector-graphics versions of the same drawn image are better because they are scalable without sampling artifacts, but in fact, because of some screw-up, the .svg files never appear the same uploaded as they were created, as one can see from the comments of the image creator at Commons. The letters were jammed together. The .png file is better.
- Images are not quite the same as "sideband" like in single sideband or double sideband in AM communications. If your reference point is that of an amateur radio operator or similar, images from sampling are like what happens with what we used to call a "crystal calibrator", which began as a 100 kHz signal and then passed through a nonlinearity to create images of that 100 kHz at integer multiples of 100 kHz. The sampling operation is a similar kind of operation: it takes the original spectrum and creates copies of that original spectrum centered at integer multiples of fs. Those copies are the images, and an ideal brickwall filter can remove the images while leaving the original spectrum unchanged. 70.109.185.199 (talk) 03:23, 27 April 2010 (UTC)
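To make the "images" concrete, here is a minimal numerical sketch (an editor's illustration, not part of the original exchange; it assumes NumPy is available). A 100 Hz tone sampled at fs = 1 kHz is modeled as an impulse train on a finer 8 kHz grid by zero-stuffing; the spectrum of that impulse train shows the original line at 100 Hz plus image copies centered at every integer multiple of fs.
```python
import numpy as np

fs = 1000.0        # original sampling rate, Hz
f0 = 100.0         # tone frequency, Hz
up = 8             # model the impulse train on an 8*fs grid
N = 1000           # number of original samples (f0*N/fs is an integer, so no leakage)

n = np.arange(N)
x = np.cos(2 * np.pi * f0 * n / fs)      # the sampled 100 Hz tone

# Zero-stuffed version: nonzero only at the original sample instants.
# This approximates the ideal impulse-train model of sampling.
x_impulses = np.zeros(N * up)
x_impulses[::up] = x

spectrum = np.abs(np.fft.rfft(x_impulses))
freqs = np.fft.rfftfreq(N * up, d=1.0 / (fs * up))

# The 8 strongest spectral lines: the original tone plus its images around k*fs.
peaks = np.sort(np.round(freqs[np.argsort(spectrum)[-8:]]))
print(peaks)   # [ 100.  900. 1100. 1900. 2100. 2900. 3100. 3900.]
```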
Dubious external link
'Undersampling and an application of it' looks a little dubious to me. It starts out quite interesting but further down some rambling starts. I'm not sure if an encyclopedia should link to this site. Even if it is legit, I don't think it's entirely on topic. In accordance with wiki guidelines to avoid external links I'd vote to remove it (and possibly use it as a reference rather than an external link in an article more focused on undersampling, in case it meets the quality guidelines). 91.113.115.233 (talk) 08:00, 18 August 2010 (UTC)
Angular frequency vs. Frequency
I think the article should show equivalent forms of the sampling theorem stated in terms of angular frequency, as many textbooks use this convention. I realize it's simple to convert, but still... 173.206.212.10 (talk) 03:39, 23 November 2010 (UTC)
- You might be right, but sometimes I wish we would stamp out nearly all of the use of angular frequency in EE lit, because either the Fourier Transform is not "unitary" (a scaling difference between forward and inverse F.T.) or there is this awful scaling factor in both forward and inverse. Having a unitary transform with no scaling factor in front makes it easy to remember how specific transforms are scaled (like the rect() and sinc() functions) and makes theorems like Parseval's and duality much simpler. 71.169.180.100 (talk) 06:57, 23 November 2010 (UTC)
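For readers who have not run into the scaling issue being described, the three common conventions are (an editor's note, stating only standard definitions):
- Unitary, ordinary frequency f: X(f) = ∫ x(t)·e^(−i2πft) dt, x(t) = ∫ X(f)·e^(i2πft) df (no scale factors at all)
- Non-unitary, angular frequency ω: X(ω) = ∫ x(t)·e^(−iωt) dt, x(t) = (1/2π) ∫ X(ω)·e^(iωt) dω
- Unitary, angular frequency ω: X(ω) = (1/√(2π)) ∫ x(t)·e^(−iωt) dt, x(t) = (1/√(2π)) ∫ X(ω)·e^(iωt) dω
The first form is the one the comment above is advocating; the other two are the "scaling factor on one side" and "scaling factor on both sides" alternatives.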
The Sampling Process Section
The article currently states: "In practice, for signals that are a function of time, the sampling interval is typically quite small, on the order of milliseconds, microseconds, or less."
This is not really true - it depends on the "practice" to which you are referring. What about long-term studies? Moreover, this sentence is not really helpful. It doesn't add any useful or insightful information to the article. — Preceding unsigned comment added by Wingnut123 (talk • contribs) 16:46, 22 March 2011 (UTC)
Sentence from intro removed
I removed the following sentence from the introductory section. It is not really related to the Nyquist-Shannon theorem and furthermore it is false.
- A signal that is bandlimited is constrained in how rapidly it changes in time, and therefore how much detail it can convey in an interval of time.
Using results from Robert M. Young, An Introduction to Nonharmonic Fourier Series, Academic Press, 1980, one can show without much trouble that the following is true:
- For every B>0, every f∈L2([a,b]) and every ε>0, there exists a function g∈L2(R) which is band-limited with bandwidth at most B and such that ‖f − g‖L2([a,b]) < ε.
So band-limited functions can change extremely rapidly and can convey arbitrarily large amounts of detail in a given interval, as long as one doesn't care about what happens outside of the interval. AxelBoldt (talk) 22:56, 15 October 2011 (UTC)
- Your point is taken, and the sentence should probably be removed (if not reworded). However, I think your example might actually weaken your argument. After all, g is not chosen uniformly over all B, a, and b. Moreover, your f is taken from L2, which constrains the behavior of the function substantially. So even though the wording of the phrase you removed was poor, I think there is still a relevant sentiment which could be re-inserted that does not go against your example (perhaps something about the information content of a bandlimited signal being captured entirely (and thus upper bounded) by a discrete set of samples with certain temporal characteristics). —TedPavlic (talk/contrib/@) 05:09, 16 October 2011 (UTC)
- I've never liked that sentence much either, since it has no definite meaning. Even the information rate is not limited to be proportional to B, unless you include noise, so it's not clear what is intended by "how much detail it can convey". Dicklyon (talk) 05:13, 16 October 2011 (UTC)
- g is not chosen uniformly over all B, a, and b.
- True, g must depend on B, the bandwidth we desire, and on a and b, since that's the time-interval we are looking at. In a sense that is the whole point: if you focus solely on one time interval, any crazy behavior can be prescribed there for a band-limited function, and furthermore you can require the bandwidth to be as small as you want.
- f is taken from L2, which constrains the behavior of the function substantially
- That's correct, but L2[a,b] has a lot of detailed and extremely rapidly changing stuff in it. For example, you could encode all of Wikipedia as a bit string in an L2[0,1] function, where a 1 is encoded as a +∞ singularity and a 0 is a -∞ singularity. Choosing your ε wisely, you will find a band-limited g (with bandwidth as small as you want!) that still captures all the craziness that is Wikipedia.
- AxelBoldt (talk) 18:44, 16 October 2011 (UTC)
No, the point is that "constrained in how rapidly it changes in time" relates to the size of the function. And indeed, the L2-norm of the derivative of a band-limited function (indeed any derivative) is bounded by the product of (a power of) the bandwidth and the L2-norm of the function itself.
Or the other way around: given such a band-limited approximation for the restriction to an interval, the behavior outside of the interval can and typically will be explosive. And more so with increasing accuracy of the approximation--LutzL (talk) 15:32, 22 November 2011 (UTC)
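The bound alluded to above is presumably Bernstein's inequality (an editor's note, stated in ordinary-frequency form): if f ∈ L2(R) is band-limited to [−B, B] hertz, then
‖f′‖2 ≤ 2πB·‖f‖2, and more generally ‖f^(k)‖2 ≤ (2πB)^k·‖f‖2,
which follows directly from Plancherel, since differentiation multiplies the spectrum by i2πν with |ν| ≤ B. This is the precise sense in which a band-limited function is "constrained in how rapidly it changes" only relative to its own size.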
Question
Isn't it the case that in practice, due to the possibility of accidentally sampling the ‘nodes’ of a wave, frequencies near the limit will suffer on average an effective linear volume reduction of 2/pi? — Preceding unsigned comment added by 82.139.90.173 (talk) 04:57, 6 March 2012 (UTC)
- In practice, "the limit" is chosen significantly above the highest frequency in the passband of the anti-aliasing filter, to accommodate the filter's skirts. So I think the answer is "no". And I have no clue how you arrived at the 2/π factor. It might help to explain that.
- --Bob K (talk) 05:42, 6 March 2012 (UTC)
It depends on the filters used. If you reconstruct with square pulses instead of sincs (or zero-order hold instead of impulses into a sinc filter), then you get a rolloff at Nyquist that's equal to an amplitude gain of 2/pi, which comes from evaluating the sinc in the frequency domain, since that's the transform of the rect. It's nothing to do with "accidentally sampling the nodes". Dicklyon (talk) 05:50, 6 March 2012 (UTC)
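To spell out where the 2/π comes from (an editor's note): a zero-order hold of width T has frequency response magnitude
|H(f)| = |sin(πfT)/(πfT)|,
the transform of the rectangular pulse. At the Nyquist frequency f = fs/2 = 1/(2T) this evaluates to sin(π/2)/(π/2) = 2/π ≈ 0.637, i.e. a droop of about 3.9 dB, independent of where the samples happen to fall on the waveform.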
New section by Ytw1987
New editor User:Ytw1987 has been adding a bunch of stuff on nonuniform sampling and nonuniform DFT here and elsewhere, all sourced to one book by Marvasti. It's probably not bad stuff, but it's big and complicated, not well wikified, badly styled, and smacks of WP:SPA or WP:COI. If someone else has the time to help assess the new material, and advise him on how to make it more suitable, that would be great. Dicklyon (talk) 19:17, 4 July 2012 (UTC)
The new material is now in Nonuniform sampling, which seems like a more appropriate place for it. It needs work, if anyone is up for it. Dicklyon (talk) 23:51, 5 July 2012 (UTC)
- Good solution. --Bob K (talk) 15:17, 6 July 2012 (UTC)
Issues with section on Shannon's proof
There are some issues with the proof outlined in the section. It is not clear what is assumed about the function f. The context is the Hilbert space L^2(R) but, a priori, the argument doesn't hold for elements of L^2(R). For one thing, pointwise evaluation doesn't make sense for elements of L^2. Also, the very first equation, f(t) = (1/2π) ∫ F(ω)·e^(iωt) dω over [−2πW, 2πW],
assumes the Fourier inversion formula holds for f, which again does not hold for general elements of L^2. For a counterexample, take the sinc function; the integral does not converge. This only works if f is assumed to have slightly better decay at infinity, i.e., to be in L^1.
This can be cleaned up as follows:
- If f in L^2 has Fourier transform lying in the Hilbert subspace L^2([−W, W]), then the well-definedness of the Fourier transform implies that f = g almost everywhere for a continuous function g.
- The Stone-Weierstrass theorem shows the family of exponentials {(2W)^(−1/2)·e^(iπkν/W) : k ∈ Z} is an orthonormal basis for L^2([−W, W]). So their inverse Fourier transforms form an orthonormal basis for the L^2 elements of bandwidth limit W.
- One then directly computes the Fourier coefficients in the t-domain, obtaining the L^2-series f(t) = Σn f(n/(2W))·sinc(2Wt − n). I have a reference somewhere that says the equality in fact holds pointwise, but I am not sure how that goes.
From the mathematical point of view, loosely speaking, the theorem holds because one is dealing with a compact set [-W, W] in the frequency domain. This leads to a situation similar to what we have for the circle, whose Pontryagin dual is the discrete set Z. Mct mht (talk) 00:37, 12 July 2012 (UTC)
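One way to write out the steps sketched above (an editor's illustration of the standard argument, using W in hertz and the unitary ordinary-frequency transform):
- If F ∈ L2 with supp F ⊆ [−W, W], then F ∈ L1 (Cauchy–Schwarz on a finite interval), so f(t) = ∫ from −W to W of F(ν)·e^(i2πνt) dν is well defined and continuous.
- The functions e_k(ν) = (2W)^(−1/2)·e^(iπkν/W), k ∈ Z, form an orthonormal basis of L2([−W, W]).
- ⟨F, e_k⟩ = (2W)^(−1/2) ∫ from −W to W of F(ν)·e^(−iπkν/W) dν = (2W)^(−1/2)·f(−k/(2W)), i.e. the basis coefficients are (up to scale) the samples of f.
- Expanding F in this basis and transforming back term by term gives f(t) = Σn f(n/(2W))·sinc(2Wt − n), with sinc(x) = sin(πx)/(πx), the series converging in L2 and, with a little more work, uniformly.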
- If you think it's important to know what assumptions Shannon was making, it would be good to check his papers before just rewriting his proof and calling your proof his, no? Dicklyon (talk) 07:10, 19 July 2012 (UTC)
- I don't want to get into an edit conflict. That section, as is, is not clean at all. Shannon, being an engineer, doesn't state any assumptions in his paper. It doesn't make sense to talk about a "proof" in the absence of even a clear statement. We don't call Fourier's justifications of theorems bearing his name "proofs" either, and wouldn't teach those "proofs" to students. It's questionable whether a word-by-word reading of Shannon's arguments belongs in the article.
- Both the statement of the sampling theorem I gave and proof outlined is very standard in the harmonic analysis literature. The article is currently wanting mathematically. Hopefully something will be done about it, while preserving other points of view. Mct mht (talk) 16:48, 19 July 2012 (UTC)
Signals are in practice continuous functions, and so is f. The Fourier integral exists for any compactly supported L2-function F; I don't get the insistence on L1 in this context, even if it is a tradition. The integral on the right hand side gives a continuous function in t. (Again, F has compact support. This is the stated assumption.) -- Shannon was a mathematician; cryptography and cybernetics were still mathematical topics in his time (or Hardy would be an engineer too). The theorem and proof in his article are short sketches of commonly known facts, serving to introduce the concept of orthogonality of signals and "dimension per time" of a band-limited transmission channel. As a sketch, his treatment of Fourier theory is exact enough. Please do also note that strict proofs that are drowned in technicalities are not covered by the guidelines of the mathematics project in Wikipedia. Short proofs or sketches that illuminate a topic are the exception.--LutzL (talk) 18:29, 19 July 2012 (UTC)
- That any compactly supported L^2-function F also lies in L^1, by Hölder's inequality, is the point. If it's merely in L^2, then there is no inversion in the sense of the Fourier inversion formula. On L^2, the (inverse, in this case) Fourier transform is not given by a formula, but via a density argument. It is a fact (needed in this case) that on the intersection of L^1 and L^2, this agrees with the usual integral formula on L^1.
- Shannon's sketch indeed works with a little care, and it also happens to be pretty standard. That was the intention of the edit. Also, the proof doesn't come close to the "too technical" threshold, in my opinion: argue that the inversion works a la Shannon, identify a natural orthonormal basis using Stone-Weierstrass, go back to time domain, done. Short and sweet. Mct mht (talk) 13:31, 20 July 2012 (UTC)
- There is no inversion involved; the formula is the definition of a bandlimited function as the reverse Fourier transform of a compactly supported function. That the Fourier series is the representation in an orthogonal basis in L2([-W,W]) is a standard fact; there is, in this given context, nothing to "construct" or to refer to "Stone-Weierstrass" for (which is the last part in one of the proofs of the completeness of the basis; a nice proof, but one that belongs in the Fourier series article. This is Wikipedia, this is the internet, links exist for a reason). So indeed you are trying to load up the proof or sketch thereof with unnecessary technical "graffiti".--LutzL (talk) 13:43, 20 July 2012 (UTC)
- Of course there is inversion involved. Sure, the inverse Fourier integral is defined, since the spectrum lies in L^1 also. Inversion comes in precisely because one is trying to recover the original signal from the spectrum. Also, without knowing you have an orthonormal basis, simply taking it as a definition is pretty pointless; you can't even assume you are not losing any information on the spectrum in the L^2 sense. I am happy to leave the article alone but that is ignorant. Mct mht (talk) 14:56, 20 July 2012 (UTC)
common misconception surrounding digital audio
Could someone add this to the article? Basically, it says that most people think (I certainly did, and was surprised to learn otherwise) that sampling is by its very nature inexact (no doubt prompted by pictures where a stair-stepped, jaggedy line is overlaid on a smooth sinusoid), but the theorem says (it does, doesn't it?) that the digital signal contains just enough information to faithfully restore the analog signal. Уга-уга12 (talk) 19:06, 26 July 2012 (UTC) I was going to put this into our List of common misconceptions article, but it said there that the misconception must be sourced both regarding the subject matter AND the fact that it's a misconception (but how do you prove something is a misconception short of conducting a survey? Intuitively, however, it seems clear that a lot of people think this way about digitization) Уга-уга12 (talk) 19:12, 26 July 2012 (UTC)
- The theorem does not hold in general. See for instance Poisson summation formula, which says that when you take a discrete sample, on the integers, in the frequency domain and take the inverse Fourier transform, you get the periodized sum of the original signal. The enabling assumption here is that the signal is bandlimited. Loosely speaking, when your signal is non-zero only on a band of frequencies, say [-π, π], its Fourier transform is like a function on the circle and therefore is determined by its Fourier coefficients, a discrete set. Mct mht (talk) 12:20, 27 August 2012 (UTC)
- The theorem does hold in general. I think what you're saying is that sampling without bandlimiting doesn't give perfect reconstruction. And the theorem does not suggest that it would. Dicklyon (talk) 14:37, 27 August 2012 (UTC)
- "In general" means no restrictions, as in given any signal in, say, L^1, perfect reconstruction, in some suitable sense, is possible by sampling a countable set of values. Sampling theorem (this one or any other one) or not, this is clearly too much to hope for, since in the L^1 case, even Fourier inversion doesn't apply in general. Mct mht (talk) 15:19, 27 August 2012 (UTC)
- That sounds like splitting hairs. The theorem states the conditions under which it is valid: "If a function x(t) contains no frequencies higher than B hertz..." Maybe you can argue that these are not realistic conditions for real signals (see section below), but I think the article should make it clear that a bandlimited signal can be sampled and reconstructed without error. The misconception is that sampling produces an approximation of the signal. It is exact. The only caveat is that any energy above the Nyquist frequency in the sampled signal is either lost (in the anti-aliasing filter) or causes aliasing distortion. --Kvng (talk) 17:08, 27 August 2012 (UTC)
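As a concrete illustration of the "exact, not approximate" point (an editor-supplied sketch assuming NumPy, not part of the original discussion): a band-limited test signal is sampled at fs and then re-evaluated at off-grid times with the Whittaker–Shannon sinc sum. The only error comes from truncating the infinite sum to a finite record, so it is measured in the middle of the window, far from the edges.
```python
import numpy as np

fs = 100.0                  # sampling rate, Hz
T = 1.0 / fs
N = 4000                    # finite record of samples (40 seconds)
t_samp = np.arange(N) * T

# Band-limited test signal: all components well below fs/2 = 50 Hz.
def x(t):
    return (np.sin(2 * np.pi * 7.3 * t)
            + 0.5 * np.cos(2 * np.pi * 19.1 * t + 0.4)
            + 0.2 * np.sin(2 * np.pi * 33.7 * t))

samples = x(t_samp)

# Whittaker-Shannon interpolation at off-grid times near the middle of the record.
t_test = 20.0 + np.linspace(0.0, 1.0, 37)
sinc_matrix = np.sinc((t_test[:, None] - t_samp[None, :]) / T)
reconstructed = sinc_matrix @ samples

# The error is tiny (typically well below 1e-3 here) and shrinks as the record grows;
# it is due only to truncating the infinite sum, not to sampling itself.
print(np.max(np.abs(reconstructed - x(t_test))))
```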
- No, in general means just that. E.g. the Hahn-Banach theorem holds for Banach spaces in general; the spectral theorem does not hold for bounded operators on Hilbert spaces in general, for there exist counter-examples. In this case, the theorem is violently wrong in general. Even for bandlimited signals which are very regular, the sampling interval must be no bigger than 1/(2W). One can prove the following: given any δ > 1/(2W), there exists a bandlimited function f of Schwartz class such that f(kδ) = 0 for any integer k. Thus no information whatsoever can be recovered. Mct mht (talk) 23:55, 1 September 2012 (UTC)
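A standard construction of the kind described above (an editor's example, not necessarily the one intended): pick δ > 1/(2W), so that 1/(2δ) < W, and let g be any nonzero Schwartz function whose Fourier transform is supported in [−η, η] with 0 < η < W − 1/(2δ). Then
f(t) = sin(πt/δ)·g(t)
is a Schwartz function, is band-limited to [−W, W] (its spectrum sits in ±[1/(2δ) − η, 1/(2δ) + η]), and satisfies f(kδ) = sin(πk)·g(kδ) = 0 for every integer k. So sampling every δ seconds, i.e. at any rate below 2W, cannot distinguish f from the zero signal.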
- It seems like you've moved on to a different issue. The article discusses the problem of reconstructing signals exactly at Nyquist. Safer to say "Contains no frequencies higher than or equal to B hertz..." Can we talk about reconstruction here in a way readers would understand? We can't use "the Hahn-Banach theorem holds for Banach spaces in general; the spectral theorem does not hold for bounded operators on Hilbert spaces in general" in the article. --Kvng (talk) 14:16, 4 September 2012 (UTC)
- No, exactly the same issue, read what I said please. I was explaining to you what "in general" means, naming as examples two theorems---one holds in general, the other doesn't. They are not (immediately) relevant to the sampling theorem. As for reconstruction, I just gave you a scenario above where no reconstruction of any kind is possible. Mct mht (talk) 01:04, 8 September 2012 (UTC)
Can we talk about reconstruction here in a way readers would understand?
The fact is, Kvng, that this article had, at one time, a far simpler and more elucidating mathematical treatment of sampling and reconstruction that closely matched what nearly any rigorous DSP or communications text would have. This article is a prime example of how some, many, articles in Wikipedia get worse, not better, as time proceeds. Another example, not coincidentally, is the related article Poisson summation formula. Sometimes people try to hold the line on it, but a couple of editors have taken ownership of the article and the rest of us have given up trying to keep the article useful and informative. Best I can suggest is to buy a good textbook or check one out in an engineering library at some university, if you have access to such. 70.109.181.175 (talk) 18:07, 4 September 2012 (UTC)
- I haven't given up on this article. Can you provide a pointer to a past version that you believe was better than the current version? --Kvng (talk) 19:20, 4 September 2012 (UTC)
- This is the last version with a decent derivation of the sampling and reconstruction, and this version of PSF is the last with a derivation. The current versions of both are abysmal and proof that Wikipedia articles do not always "advance" in quality with time. 71.169.179.221 (talk) 02:01, 11 September 2012 (UTC)
- Poisson summation formula is an interdisciplinary topic. An engineer's pov on the formula is not the same as a number theorist's. That article right now is pretty basic and can go both ways. Mct mht (talk) 01:04, 8 September 2012 (UTC)
- There certainly are hairs to be split here. Where Nyquist (or whoever) said "If a function x(t) contains no frequencies higher than B hertz, it is completely determined by giving its ordinates at a series of points spaced 1/(2B) seconds apart", they must have known that the "series of points" would have to be infinite, and that there would be problems in the general case of unbounded total energy, even if the max is limited, due to the failure of the sinc function to be uniformly summable, or whatever you call it. But if we're going to split those hairs in the article, it should be by reference to a good source that does so. I think we'd be on safer ground with a theorem that states that reconstruction is possible to within any positive error epsilon, or something like that. Dicklyon (talk) 06:36, 2 September 2012 (UTC)
Timelimiting, bandlimiting
Statements about timelimiting in the lead here and in Aliasing, and probably other places on WP, are potentially misleading as they make it sound like the sampling theorem can't be applied to real signals because of an infinite-time requirement.
What the theorem really says is that only bandlimited signals can be accurately reconstructed, full stop. The timelimiting issue is really about the nature of bandlimited signals, not about a requirement for infinite sampling. If you timelimit a bandlimited signal, it is no longer a bandlimited signal. E.g., a timelimited sinusoid is a pulse-modulated sinusoid and will have out-of-band components from the pulse and from the modulation process. Or, looking at it from the other direction, if you want to do perfect bandlimiting of a signal, you need to be able to process that signal over infinite time.
I believe the statements implying a requirement for an infinite number of samples should be reworded as a reminder that, for real signals, perfect bandlimiting is impractical, and that with imperfect bandlimiting the sampling theorem warns you'll get aliasing. --Kvng (talk) 19:51, 26 August 2012 (UTC)
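A small numerical sketch of this point (editor-supplied, assuming NumPy): a pure 10 Hz tone is band-limited, but the same tone gated to a 1-second window has spectral energy at all frequencies (the transform of the rectangular window), so some of it necessarily lies above any chosen Nyquist frequency.
```python
import numpy as np

fs_fine = 10000.0                       # fine grid standing in for continuous time
t = np.arange(0.0, 8.0, 1.0 / fs_fine)
tone = np.sin(2 * np.pi * 10.0 * t)     # 10 Hz sinusoid

gated = tone * (t < 1.0)                # time-limited to the first second

spectrum = np.abs(np.fft.rfft(gated))
freqs = np.fft.rfftfreq(len(t), d=1.0 / fs_fine)

# Fraction of the gated tone's energy above 20 Hz (twice the tone frequency).
band = freqs > 20.0
frac = np.sum(spectrum[band] ** 2) / np.sum(spectrum ** 2)
print(f"fraction of energy above 20 Hz: {frac:.2e}")   # nonzero: no longer band-limited
```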
Narrow POV
This is an interdisciplinary topic, engineering and mathematics. As I commented above, an engineer's pov is not the same as that of someone from pure mathematics. Right now there is very little mathematics in the article, not even a precise statement of the theorem. Starting with a clean statement of the theorem, a lot of immediately relevant facts can (and should, IMO) be mentioned in an expository article for a mathematical audience. Some examples:
- The Paley-Wiener theorem. It says that bandlimited functions are the restrictions of entire functions of exponential type. Properly formulated, the Nyquist-Shannon-Whittaker sampling theorem gives you smoothness (infinite differentiability) but this is a much stronger conclusion.
- The fact that bandlimited functions form a reproducing kernel Hilbert space, and it's a pretty special one. Not only are pointwise evaluations elements of the dual, there's a countable set that forms a dual basis. Again, given a proper formulation of the theorem, this is just a corollary and can be mentioned at no cost.
- More generally, one can talk about the sampling theorem for tempered distributions whose Fourier transforms have compact support. There is an analogous statement, but it is more involved. This should be mentioned, with the details probably referred to a different article.
- There is also a sampling theorem for L2 elements whose Fourier transform lies on the half-line [0, ∞). This is the sampling theorem for the Hardy space H2 due to Alberto Calderón. Part of the folklore of Littlewood-Paley theory. The statement is almost identical to the Nyquist-Shannon-Whittaker theorem, but the Fourier transform is replaced by the continuous wavelet transform. This should be mentioned, again with details referred to its own article.
Maybe the solution here is to have two separate articles, Shannon-Nyquist sampling theorem (signal processing) and Shannon-Nyquist-Whittaker sampling theorem (mathematics), addressing different audiences with different backgrounds. Ideally, there would be some kind of harmony/correspondence between the two. The two articles could complement each other and together could be informative for both communities. Mct mht (talk) 01:36, 8 September 2012 (UTC)
- You see a reason why the math can't be accommodated in the present article? POV content forks are frowned upon. Dicklyon (talk) 01:49, 8 September 2012 (UTC)
- The proposal has merit if you buy into the idea that engineers and mathematicians see things fundamentally differently. I don't. Convince me. --Kvng (talk) 14:38, 8 September 2012 (UTC)
- But they do. See the sine wave used as a counterexample in the article. Mathematically it is nonsense. Engineers love it. Long discussion a long time ago in the archives.--LutzL (talk) 09:36, 10 September 2012 (UTC)
- Can you be more specific? Which section in the article? What should I search for in the archives? --Kvng (talk) 19:57, 10 September 2012 (UTC)
- Sections Aliasing, where the argument is reasonable, and Critical frequency, with the bogus claim that f_s=2B must be excluded. In theory, sampling at the critical frequency is fine, in practice due to time limiting and quantization errors, sampling occurs well above the critical frequency. And I would think the entire top of archive 1 contains clashes of mathematical and engineering POV.--LutzL (talk) 08:18, 11 September 2012 (UTC)
- Please explain how you can tell the difference between the samples of cos(2πBt + θ) and of cos(θ)·cos(2πBt), taken at t = n/(2B), for any value of θ and integer n. Or are you saying that we have to exclude finite power (and infinite energy) signals like sinusoids? 70.109.182.158 (talk) 04:26, 15 September 2012 (UTC)
- Rephrasing Critical frequency is unnecessary and might add confusion. The point is that the samples of cos(2πBt + θ) and cos(θ)·cos(2πBt) are identical when the sample interval is 1/(2B). --Bob K (talk) 14:06, 15 September 2012 (UTC)
- So you see, the last two answers are prime examples of the non-mathematical engineering POV. If you want extended answers, please refer to the discussions in archive 1. In short, pure cosine or sine waves do not have finite L2-norm or "energy", so the example would only be admissible with amplitude zero, where sampling is no problem.--LutzL (talk) 10:28, 16 September 2012 (UTC)
- And if I had said the samples of and are identical (for any ) when the sample interval is 1/(2B), there would be no POV difference? --Bob K (talk) 11:46, 16 September 2012 (UTC)
- These time-limited functions are no longer band-limited, thus in a certain sense even worse as counter-examples.--LutzL (talk) 11:57, 16 September 2012 (UTC)
I guess I have questions for both of you. BobK, the second equality in Eq 1 of the article is no longer shown to be true, as it once was. And Poisson summation formula has become sorta useless since October 2010. Do you know why?
- IMO, the best place to support Eq 1 is the PSF article. It seems your beef is with PSF, not Sampling Theorem. That discussion is at Talk:Poisson_summation_formula#Simplification_of_the_summation_formula. If you think that article is now "sorta useless", that might be the best example yet of the POV difference we are discussing here.--Bob K (talk) 13:29, 17 September 2012 (UTC)
- My beef is that this equation:
- Σk X(f − k·fs) = T·Σn x(nT)·e^(−i2πfnT)
- (assuming X(f) denotes the Fourier transform of x(t))
- ...is not justified in the article, when at one time it used to be (without any reference to PSF). It's an example of how Wikipedia articles do not always improve with time. Sometimes they get worse.
- If the justification was in the wrong article (which it obviously was), then removing it and referencing the correct article is not an example of "how Wikipedia articles do not always improve with time". It's an example of cross-referencing. --Bob K (talk) 23:01, 17 September 2012 (UTC)
- Regarding Poisson summation formula, it is not coincidental that "improvements" made by the very same editor crippled that article too after October 2010. That critical mathematical fact is not derived in either article. They got worse instead of better. 71.169.182.56 (talk) 21:22, 17 September 2012 (UTC)
- As stated at Talk:Poisson_summation_formula#Simplification_of_the_summation_formula, the "critical mathematical fact" is: Σk X(f − k·fs) = T·Σn x(nT)·e^(−i2πfnT)
- And its derivation (though not required by Wikipedia, AFAIK) is at: Poisson_summation_formula#Applicability
- --Bob K (talk) 23:01, 17 September 2012 (UTC)
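For readers following this exchange, the argument behind that identity runs roughly as follows (an editor's sketch of the standard derivation, not a quotation of either article). The sum S(f) = Σk X(f − k·fs) is periodic in f with period fs = 1/T, so formally it has a Fourier series S(f) = Σn cn·e^(−i2πfnT), and its coefficients bring back the samples of x:
cn = T ∫ from 0 to 1/T of S(f)·e^(i2πfnT) df = T ∫ over R of X(f)·e^(i2πfnT) df = T·x(nT),
using x(t) = ∫ X(f)·e^(i2πft) df and the fact that the shifted copies of X tile the whole frequency line. Hence Σk X(f − k·fs) = T·Σn x(nT)·e^(−i2πfnT), which is the Poisson summation formula in the form used by Eq. 1; the mathematical fine print is about when the interchange of sum and integral is legitimate.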
... And LutzL, the L2 norm you refer to is
‖x‖2 = ( ∫ |x(t)|^2 dt )^(1/2)
and the associated inner product is
⟨x, y⟩ = ∫ x(t)·y*(t) dt
Now it turns out that a lot of serious professionals consider samples of signals of infinite energy and finite power.
Are you saying that the sampling theorem is of no use to these people? 71.169.182.56 (talk) 03:39, 17 September 2012 (UTC)
- And sorry if my remark sounded too strict. I'm not against the critical frequency section and the functions used. They provide, similar to Nyquist's reasoning, an elementary, hands-on motivation for the sampling theorem. The argument provided is just not mathematically exact, so these examples do not contribute to the mathematical statement of the sampling theorem.--LutzL (talk) 12:05, 16 September 2012 (UTC)
- I think we can find better examples of POV difference. For instance, I must confess that I have no interest in the lower half of section Poisson_summation_formula#Applicability, starting with "Conditions that ensure Eq.3 is applicable are that ƒ is a continuous integrable function which satisfies...". --Bob K (talk) 13:09, 16 September 2012 (UTC)
- The proposal of course has merit, whether or not the POV differences are "fundamental". The frown on "content forks" also has obvious merit. And both have their disadvantages. A metric that I don't think has yet been mentioned, either now or in the archive, is: Which policy is most attractive to a potential contributor? Thus: Which policy attracts the most contributors? Speaking only for myself, I am more likely to contribute to an article written for engineers. And I imagine I'm not alone.--Bob K (talk) 15:57, 10 September 2012 (UTC)
- Of course it should also (primarily?) be about the readers. --Kvng (talk) 19:57, 10 September 2012 (UTC)
- I think you should cover it all here; start the article off as simply as possible, and then gradually wind the complexity (and mathematics) level up towards the end. TooComplicated (talk) 14:47, 17 September 2012 (UTC)
- Yep, that also has merit... and disadvantages. Among the disadvantages are things like notational conflicts, for instance:
- radians/sample or cycles/sample ?
- or ?
- or ?
- At what point in the article do you switch from one comfort zone to the other?
- --Bob K (talk) 22:19, 17 September 2012 (UTC)
- That's up to you.
- If only it were that simple. :-) --Bob K (talk) 22:39, 17 September 2012 (UTC)
- Any article is a compromise. Or mention both notations if they're both in use.
- TooComplicated (talk) 22:32, 17 September 2012 (UTC)
The issue, raised by Kvng, of whether engineers and mathematicians see things fundamentally differently has already been addressed above. The answer was yes, very much so. I will add a little to it. Take just the statement of the theorem, for example:
- For me it is significant that a bandlimited L2 function is automatically continuous, except possibly on a set of measure zero. In general, an element of L2 need not be continuous at all. Also important is the fact that in the Whittaker-Shannon formula, the convergence of the series holds not only in the L2 sense but also uniformly; those two modes of convergence are very different.
- For an engineer, signals that arise in practice are already continuous (I was told this above). And (correct me if I am wrong here) I suppose that when he computes the series in the Whittaker-Shannon formula, he can see or hear the partial sums resemble the original signal more and more. So in what mathematical sense the convergence holds is unimportant.
For an engineer, (my interpretation, hopefully fair) this is merely a formal representation of what is encountered in practice. If the partial sums resemble more and more the original signal 99% of the time in practice, then it's OK to say the series "converges." In mathematics, there is no "in practice." You have to know what it is exactly you are to prove (what is the class of functions under consideration? in what sense does the convergence hold?), then one either has a proof or one doesn't.
The same can be said about these two POV's about Fourier analysis in general, even at the basic level. What engineers call the fundamental frequencies are called characters of the underlying group (which could be the real line, the circle, or the integers) in math. The context is Pontryagin duality. All Fourier transforms ("continuous", "discrete", "discrete time", etc.) are the same transform, but with different groups. Harmonic analysis is then about what one can say if we relax some assumptions on the group. Only in the context of precise, abstract formulations can one talk about abstract generalities. Engineers don't care about such things.
These two competing POV's are tricky to accommodate in the same article. See for example Hilbert transform, which is probably guilty in the other direction: the engineering part seems unaware of the mathematics above. The point of the proposal was that both POVs should be covered and that there be some communication between the two. This is just my take. Mct mht (talk) 06:11, 23 September 2012 (UTC)
- In the final analysis, the essential difference between the two POVs (mathematician vs. engineer) is how we behold the Dirac delta function. Engineers play fast and loose with it. Engineers don't mind a naked delta function, i.e. one not inside an integral; mathematicians need to always see it in an integral. We all know that a normal function that is zero almost everywhere has an integral of zero, but engineers like to think of the Dirac delta as being a function that is zero almost everywhere, yet its integral is 1. That is, IMO, the bottom line about the two POVs. 70.109.190.91 (talk) 02:44, 24 September 2012 (UTC)
Aliasing sinusoids whose frequencies are half the sampling rate
In the Critical frequency section, I want to change this:
- But for any θ such that |cos(θ)| < 1, x(t) and xA(t) have different amplitudes and different phase. Ambiguities such as that are the reason for the strict inequality of the sampling theorem's condition.
to this:
- But for any θ such that |cos(θ)| < 1, x(t) and xA(t) have different amplitudes and different phase (any sinusoid will alias xA if its amplitude is that of xA multiplied by sec(2πθ) where θ ≠ (k+1/2)π for integer k). Ambiguities such as that are the reason for the strict inequality of the sampling theorem's condition.
This directly relates the amplitude of any aliasing sinusoid as a function of the phase θ, rather than indirectly with the definitions x(t) = cos(2πBt + θ), xA(t) = cos(θ)·cos(2πBt). (This allows you to make this sort of graph.) If it's not immediately obvious to you why the amplitude gets divided by cos(2πθ) rather than just cos(θ), then the definitions alone aren't doing the idea justice. X-Fi6 (talk) 03:58, 19 November 2012 (UTC)
- We already have that sort of graph in the article (starting with File:CriticalFrequencyAliasing.png in 2006). It's not clear how the more "direct" expression makes it more clear. It's also not clear why you put that 2pi in there; the angle theta is in radians, I would presume, and multiplying radians by 2pi can't make any sense. Dicklyon (talk) 05:02, 19 November 2012 (UTC)
- My interpretation of the suggestion is that he would have the section changed to something like this:
- To illustrate the necessity of fs > 2B, consider the sinusoids:
- With fs = 2B or equivalently T = 1/(2B), the samples are given by:
- That sort of ambiguity is the reason for the strict inequality of the sampling theorem's condition.
- --Bob K (talk) 18:09, 21 November 2012 (UTC)
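For reference, the two-line computation behind that ambiguity (an editor's note, using the article's definitions x(t) = cos(2πBt + θ) and xA(t) = cos(θ)·cos(2πBt)): at the critical rate the sample instants are t = n/(2B), so
x(n/(2B)) = cos(πn + θ) = cos(πn)cos(θ) − sin(πn)sin(θ) = (−1)^n·cos(θ)
xA(n/(2B)) = cos(θ)·cos(πn) = (−1)^n·cos(θ)
and the two distinct waveforms produce exactly the same sample sequence for every θ.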
New WikiProject Signal Processing
Wikipedia:WikiProject Signal Processing is a stub of a new project. We could use help, especially from someone with experience setting up project templates, or patient enough to figure out how to cobble them from another project. Dicklyon (talk) 19:24, 7 March 2013 (UTC)
requirements for the Shannon theorem should be included into the article
the Shannon theorem only applies to:
- smooth functions
- square integrable functions
for reference see Google Books. --Biggerj1 (talk) 21:21, 4 May 2013 (UTC)
- But the band-limited condition already implies smooth. And I'm not convinced the square integrable condition is necessary, though it certainly makes it easier to prove. Dicklyon (talk) 22:14, 4 May 2013 (UTC)
- Ostensibly, it might be easier to require abs-integrable (which is stricter) to guarantee that any of the Fourier integrals converge, but even if you stick with the less strict square-integrable, the F.T. of non-time-limited sinusoids don't really exist. But, with the magic of dirac impulse functions, we can pretend that the F.T. of sinusoids do exist.
- Personally, on this, on Dirac delta and on Dirac comb, I think the rigor that you get in a typical undergrad EE text should be sufficient. That is where you can treat the Dirac impulse as a "function" that is a limit of nascent delta functions, all with unit area. But we know the mathematicians are unhappy with that. But I don't think it helps most people trying to learn the concepts to be that anal about it, which is why most undergrad EE courses don't object to the Dirac delta being the "unit impulse function". 71.169.184.238 (talk) 00:23, 5 May 2013 (UTC)
- Isn't square integrable more strict than absolute integrable? Either way, a sinusoid would be out. But sinusoids work fine with sampling and exact reconstruction, as long as they're not right at the Nyquist frequency, I think. You don't need the existence of Fourier transforms for the theorem to be true, so it doesn't really matter whether or not you like Dirac deltas. At least, that's my view as an engineer; but as you suggest, mathematicians may not agree. Dicklyon (talk) 03:37, 5 May 2013 (UTC)
- Oh, I see you're right. The sinc function is square integrable, but not absolute integrable and that's the rub. It means the sinc is not BIBO stable, as I had noted in the sinc filter article already. Dicklyon (talk) 03:41, 5 May 2013 (UTC)
The theorem obviously applies to sinusoids, unless you can tell me what "smooth" function other than cos(2π f t), f < B, is bandlimited (< B) and fits these samples: cos(2π f n/(2B)), all integer values of n. Seems to me that both square integrable and abs-integrable are overly restrictive... sufficient but not necessary, which is likely the reason that sinusoid-loving EEs ignore them.
--Bob K (talk) 11:57, 5 May 2013 (UTC)
Oh, perhaps I should have asked for clarification... Are you saying "smooth and integrable", or, "smooth or integrable"?
--Bob K (talk) 14:04, 5 May 2013 (UTC)
You need both square integrability and smoothness. Both are provided by the band-limited condition. And no, you can't sample a sinusoid. That is, of course you can sample it, but the reconstruction formula will not converge. Since the reconstruction formula is essentially a representation in an orthonormal basis, it converges in the L2 sense exactly if the coefficient sequence is square summable, and the sample sequence of a sinusoid is not square summable because of its periodicity. But that was already discussed ad nauseam years ago, so consult the archive pages to find the points of view of the participants.--LutzL (talk) 15:59, 5 May 2013 (UTC)
- Since you just said that a sinusoid is not bandlimited (because it's not square integrable), I can see why it was discussed ad nauseam. But I'm not going to look... better things to do. And I see that it's the reconstruction part of the theorem that you think requires square integrability. (Thanks for that.) Maybe so, but I still think that this part also applies to sinusoids:
- "If a function x(t) contains no frequencies ≥B hertz, it is completely determined by giving its ordinates at a series of points spaced 1/(2B) seconds apart."
- --Bob K (talk) 17:29, 5 May 2013 (UTC)
- The problem with finding interpolated values of a sinewave from samples via the Whittaker formula (dot product with sinc filter) is that the dot-product series has only conditional convergence; if you evaluate it by moving outward from the middle you'll approach the right answer, but other orders of evaluation may not do so. As for a sinewave not being "bandlimited", that's just because he took the space of bandlimited functions to be a subset of square-integrable functions. So, we engineers and mathematicians will continue to go our own ways. I'm with you (in most cases). Dicklyon (talk) 17:53, 5 May 2013 (UTC)
- Reviewing archive 1, which I now recall better, I'd summarize LutzL's point as being that since the reconstruction formula is not absolutely convergent if the signals are not in L2, the proof of the theorem doesn't apply there and therefore the theorem doesn't hold there. Or maybe he didn't even say the latter clause. Either way, the fact that the proof doesn't work outside of L2 does not mean that the theorem doesn't hold outside of L2; it means it might or might not hold, and should motivate us to find a better proof to cover a wider set. Not being a mathematician, I expect I won't see the subtle pitfalls of this approach, but consider proving it by specifying that the sinc series be evaluated from the middle outward, in which case the series will approach a limit, I'm pretty sure. Or if that doesn't work, make a windowed version of the sinc and look at the limit as the window width goes to infinity. I'll be a monkey's uncle if that limit doesn't exist for sinewaves below the Nyquist frequency, for DC, and for WSS processes, which are all outside of L2. Dicklyon (talk) 20:07, 5 May 2013 (UTC)
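A quick numerical check of the "evaluate from the middle outward" idea (editor-supplied, assuming NumPy): sample cos(2π·0.3·t) at unit rate, so the tone is below the Nyquist frequency of 0.5, and form symmetric partial sums of the sinc series around the evaluation point. The partial sums approach the true value as the window grows, consistent with conditional (symmetric) convergence even though the signal is not in L2.
```python
import numpy as np

f0 = 0.3        # tone frequency in cycles/sample, below the Nyquist frequency 0.5
t = 0.4         # off-grid evaluation point
true_value = np.cos(2 * np.pi * f0 * t)

for N in (10, 100, 1000, 10000, 100000):
    n = np.arange(-N, N + 1)                       # symmetric window of sample indices
    partial = np.sum(np.cos(2 * np.pi * f0 * n) * np.sinc(t - n))
    print(N, partial, abs(partial - true_value))   # the error decreases as the window grows
```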
- Returning to the start of this thread, which asserts that the Shannon theorem only applies to functions that are differentiable and square integrable: a more precise statement is that those are merely sufficient conditions (as far as we know). It has not been proven that those are necessary conditions. And indeed, we routinely sample sinusoids and linear combinations thereof all the time. That important point should stand high above the "ad nauseam" academic issues, not vice versa. -Bob K (talk) 23:01, 6 May 2013 (UTC)
- True, but we also routinely sample signals that are not bandlimited; nothing prevents it. There's still an interesting mathematical question about the theorem; that is, what can be proved about when perfect reconstruction is possible. I think we're all comfortable noting that the conditions of this theorem are sufficient, but not necessary. Then the interesting question, mathematically, and maybe even to us engineers, is whether we can come up with a looser set of sufficient conditions, not including the signal being in L2. I think it's there, but I'm not enough of a mathematician to know whether my hand-waving outline of a proof could work rigorously. Dicklyon (talk) 23:58, 6 May 2013 (UTC)
To be mathematically clear, the theorem, or rather its careful proof (unfortunately not quite the one in the article), says that any band-limited square-integrable function:
- is equal almost everywhere to a smooth (C∞-)function, and
- can be sampled as specified, where the convergence is uniform, therefore a fortiori L2.
So smoothness is actually a consequence of the band-limited assumption. Mct mht (talk) 23:34, 6 May 2013 (UTC)
Notation / Language
The notation for this article, from a mathematician's POV, is terrible.
Since it is the job of mathematicians to discover mathematical concepts and communicate them via proofs, definitions, etc., (i.e., we construct the language of mathematics so others can use it), our experience with notation is far superior to that of engineers when it comes to communication of ideas - that is what we do.
As such, I think we should use the standard mathematical notation (f for function, k for frequency, etc) since that is consistent with the vast majority of mathematical texts as well as a large number of engineering texts on this subject. — Preceding unsigned comment added by 142.136.0.102 (talk) 16:05, 27 June 2013 (UTC)
- Maybe you should point out the changes you want, and link a reference that does it that way, so we can see what you're proposing. Let's not make this a mathematicians-versus-engineers dispute, as each group is pretty good at communicating to their own. Dicklyon (talk) 16:36, 27 June 2013 (UTC)
the "technical" tag
The tag says
This article may be too technical for most readers to understand. Please help improve this article to make it understandable to non-experts, without removing the technical details. The talk page may contain suggestions.
So I am creating this place for suggestions. My own assessment is different. I think there are probably too many "technical details", such as critical frequency, non-baseband sampling, non-uniform sampling, undersampling, and multivariable sampling, that could be relegated to the See Also list if we need to make the article less intimidating. The actual discussion of the theorem itself has excellent illustrations and is (in my opinion) readily understandable to those with an understanding of the Fourier transform, which seems to be a universally accepted prerequisite. If the tagger is suggesting that we explain the sampling theorem without use of frequency-domain concepts, that would be highly unconventional. Unfortunately, I think the Fourier transform article suffers much more than this one from technical overkill (from the non-experts' point of view). It would be a good candidate for a 2-pronged approach, meaning an experts' version and a dumbed-down version for the rest of us. But that doesn't seem to be the way Wikipedia likes to do things.
--Bob K (talk) 11:49, 9 November 2013 (UTC)
- The "2-pronged approach" is, in fact, the way Wikipedia articles should work, according to the WP:UPFRONT guideline: include highly technical information for experts -- but put it later in the article, *after* the up-front information for the rest of us. --DavidCary (talk) 03:41, 19 December 2013 (UTC)
Thanks. It looks to me like we've done that. So I don't have any suggestions to improve the article.--Bob K (talk) 15:46, 19 December 2013 (UTC)
I am having a difficult time understanding this approach. It sounds absurd. It is a technical subject. What do you expect? This is an encyclopaedia article. Tutorials or further background material should be linked. I don't see any requirement for having technical articles at the level of a layman in Wikipedia. mcyp (talk) 09:14, 31 December 2013 (UTC)
- Are you wishing to discuss WP:UPFRONT? The best place for that is probably Make_technical_articles_understandable. --Bob K (talk) 04:18, 1 January 2014 (UTC)
@Bob: I totally agree that introductions should be well written and clear for a general audience. But in some subjects, like this one, it is difficult to explain without using jargon and technical concepts. Thank you for the pointers. I will read the guidelines first. mcyp (talk) 04:15, 3 January 2014 (UTC)
Confusing: Shannon's version of the theorem
That there is a paragraph after the verbatim quote of Shannon's version of the theorem is testament to how confusing the original is, particularly with regard to what is meant by 1/2W.
Is it really necessary to include the confusing original text?
The use of 'cps' rather than hertz, and W rather than B, confounds things further.
Wootery (talk) 17:37, 28 August 2014 (UTC)
- Looks like IP 71.169.182.71 has taken care of the W vs. B (and f(t) vs. x(t)) symbol confusion. I hope that edit sticks. 207.136.224.138 (talk) 18:00, 29 August 2014 (UTC)
References in "other discoverers"
The quoted text goes back to Meijering, for which http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=993400 is given as the URL. This, however, includes different references, so the numbering is wrong in Wikipedia. And worthless as well, as the numbers are not explained and the actual references are not given. 131.234.247.64 (talk) 07:18, 13 November 2014 (UTC)
Contradiction in lede
The lede says "It establishes an upper limit on signal bandwidth or a lower limit on the sample rate that permits a discrete sequence of samples to capture all the information from a continuous-time signal." which suggests a necessary condition. It later says "The theorem does not preclude the possibility of perfect reconstruction under special circumstances that do not satisfy the sample-rate criterion." This seems like a confusion about whether the theorem gives a necessary condition, versus a sufficient condition. The body of the article suggests sufficient, so the lede paragraph probably needs to be edited to reflect that. 2620:0:1000:157D:84A4:C4BB:A81E:EA36 (talk) 16:28, 21 April 2015 (UTC)
- I've copyedited the lead to be more precise. The discrepancy appears to be related to additional constraints on the signal that could be exploited in the reconstruction; the absence of known constraints on the signal other than bandwidth is necessary for the theorem to be applied in the sense of prescribing a necessary condition for perfect reconstruction. —Quondum 18:14, 21 April 2015 (UTC)
- I made some more edits to make it more precise by avoiding stating a converse, which is not true in general. 2620:0:1000:157D:FC67:567C:2428:191 (talk) 20:05, 22 April 2015 (UTC)
- I like it. —Quondum 20:41, 22 April 2015 (UTC)
Empty citations in historical background section
The paragraph on the historical background is full of pseudo-LaTeX-style citations in brackets that are superfluous because there are no equivalent entries in the references section. Either we remove them entirely, or replace them with <ref>...</ref>
citations. Dicklyon (talk · contribs) added this paragraph more than 10 years ago; I hope he can help fix it by adding actual citations. --bender235 (talk) 23:54, 13 September 2016 (UTC)
- I'm not sure when I can get to this, but you can help by saying exactly which citations are messed up or missing, and whether they were in my 10-year-old version or not. Dicklyon (talk) 04:26, 14 September 2016 (UTC)
White washing
"The name Nyquist–Shannon sampling theorem honors Harry Nyquist and Claude Shannon. The theorem was also discovered independently by E. T. Whittaker, by Vladimir Kotelnikov, and by others."
Well, it wasn't discovered by "others"; it was discovered 15 years before anybody else by Kotelnikov in 1933. But not worth mentioning, right? Wikipedia rules!--2.242.113.76 (talk) 02:07, 17 April 2015 (UTC)
- The historical background section goes into a bit more detail on the various inventors/discoverers, dates, etc., and cites secondary sources about the history. I think the guy who really gets shortchanged the most is Küpfmüller, 1928. Dicklyon (talk) 04:20, 17 April 2015 (UTC)
- FYI, Ryogo Kubo mentioned Nyquist's theorem as early as 1957, which is earlier than the sources provided in the article at the moment (Journal of the Physical Society of Japan, vol. 12, no. 6, 1957). Dafer45 (talk) 19:20, 7 January 2017 (UTC)
External links modified
Hello fellow Wikipedians,
I have just modified 3 external links on Nyquist–Shannon sampling theorem. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
- Added archive https://web.archive.org/web/20100208112344/http://www.stanford.edu/class/ee104/shannonpaper.pdf to http://www.stanford.edu/class/ee104/shannonpaper.pdf
- Corrected formatting/usage for http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=1455040
- Corrected formatting/usage for http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=1455576
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}}
(last update: 5 June 2024).
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—InternetArchiveBot (Report bug) 22:42, 2 December 2017 (UTC)
External links modified
Hello fellow Wikipedians,
I have just modified one external link on Nyquist–Shannon sampling theorem. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
- Added archive https://web.archive.org/web/20130926031230/http://www.ieee.org/publications_standards/publications/proceedings/nyquist.pdf to http://www.ieee.org/publications_standards/publications/proceedings/nyquist.pdf
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}}
(last update: 5 June 2024).
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—InternetArchiveBot (Report bug) 16:43, 12 January 2018 (UTC)