Talk:Kell factor
This article is rated C-class on Wikipedia's content assessment scale.
Vs. Nyquist
This sounds like it's related to the Nyquist frequency - am I right in this assumption? Peter S. 14:51, 18 November 2005 (UTC)
- Kell is built on top of Nyquist, but they're separate ideas --Dtcdthingy 04:56, 28 May 2006 (UTC)
- It's my impression that they're different measures of the same issue, except Kell doesn't involve nearly as much math (and seems, to me, vaguely-defined by comparison). Kell factor is applied only to image processing - in audio and radio you cannot ignore the frequency domain and thus cannot avoid the math. (comment by 216.191.144.135).
- Thank you :-) Peter S. 23:20, 16 August 2006 (UTC)
- Your impression is wrong. The Nyquist frequency sets an upper bound on the highest frequency a sampled signal may contain while still being perfectly reconstructible. For practical purposes this upper bound is too high. Multiplying it by the Kell factor gives a more reasonable estimate. 91.41.80.71 (talk) 19:55, 3 January 2009 (UTC)
- Perfectly reconstructed with a perfect reconstruction filter. The Kell factor accounts for, and the article hints at, the fact that digital displays have no reconstruction filter. 2601:640:10A:3BB3:20D:4BFF:FEBB:69BF (talk) 22:40, 25 January 2021 (UTC)
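The point made above can be sketched numerically: a sinusoid near the Nyquist limit is recovered almost exactly by ideal (sinc) reconstruction, but a filterless "display" that just holds each sample shows a large beat-like error. This is a minimal one-dimensional illustration; the signal frequency and window size are arbitrary assumptions.

```python
import numpy as np

fs = 1.0                       # sample rate: 1 scan line per unit distance
f = 0.45 * fs                  # signal at 90% of the Nyquist limit (fs/2)
n = np.arange(-2000, 2001)     # sample (scan-line) positions
x = np.cos(2 * np.pi * f * n)  # the sampled signal

def sinc_reconstruct(t):
    """Ideal reconstruction: the perfect low-pass filter assumed by the
    sampling theorem (truncated to the window above)."""
    return np.sum(x * np.sinc(t - n))

t_test = np.linspace(-5, 5, 101)          # evaluate far from window edges
truth = np.cos(2 * np.pi * f * t_test)

ideal = np.array([sinc_reconstruct(t) for t in t_test])
err_ideal = np.max(np.abs(ideal - truth))

# "Display" reconstruction: sample-and-hold, i.e. each sample shown as a
# flat scan line with no reconstruction filter -- the case the Kell
# factor accounts for.
hold = np.array([x[np.argmin(np.abs(n - t))] for t in t_test])
err_hold = np.max(np.abs(hold - truth))

print(err_ideal < 0.05)  # True: ideal filter recovers the signal almost exactly
print(err_hold > 0.5)    # True: filterless display shows large beat-like error
```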
Both the Article and this Discussion Are Ambiguous and Conflicting
Before writing this discussion I read numerous articles posted on the Internet including the subject Wikipedia article. What I found is a lot of ambiguity and conflicting information.
Let’s begin with the definition as stated in the subject article. The definition states that the Kell Factor is related to the resolution of a “discrete display device.” What might be very significant to this definition is that it does not include the camera or scanning device and/or the relationship between the camera or scanning device and the discrete display. Presently, I don’t know whether this is true or not, but this distinction will become more relevant as this discussion continues.
Next, and probably worth noting up front is that the definition implies that the effective resolution of a display is possibly less than that suggested by the resolution inherent in the design. Specifically, in the case of a television display, the implication is that the effective resolution will be less than the vertical line resolution of the television display.
Next, the defining discussion includes no explanation for what causes this factor to have a value other than precisely one. Of course, the possibility does exist that the cause was never investigated or determined by Raymond Kell or anyone else, although that would seem doubtful.
Next, in the opening discussion of the subject article the writer claims that the Kell Factor has no fixed value. This suggests that the value is dependent upon (a) a variable parameter, (b) several parameters that are independent of each other, or (c) a relationship between parameters that is other than a simple quotient of two variables. For example, the resolution of a telescope based on the Rayleigh Criterion is a ratio of the wavelength of the light to the diameter of the optics (lens or mirror). Although the resolution determined by this criterion is not fixed, the relationship expressing the criterion is fixed. It is also based on a theoretical proof.
Next, the discussion implies that displayed resolution is dependent upon the spot size and/or the Gaussian intensity distribution of the electron beam used in the display. Regarding the spot size, this would appear to make more sense than the prospect of the resolution being dependent on simply the vertical-line resolution. For example, if two dots that are produced by two adjacent horizontal sweeps are positioned in vertical alignment, then increasing the size of the two electron-beam spots would cause increasing overlapping of the dots being displayed. Also, with a sufficient increase in the dot size the adjacent dots would eventually become inseparable and indistinguishable. This is similar in concept to the common example used to illustrate the Rayleigh Criterion.
Regarding the electron beam spot having Gaussian intensity distributions, I’m not sure if that is true. The spot created by the electron beam is created by using a magnetic lens that has an apparent diameter and focal length that is similar in principle to that of an optical lens. In an optical model, if an equal intensity of light falls entirely on the light collecting optics, the focused spot will have equal intensity across its area. So this would probably be expected to be true for the electron beam as well. Also, although there is such a thing as Gaussian beam divergence related to beam propagation, this phenomenon (a) relates to divergence typically noted in lasers having very narrow beams, (b) is relevant to very narrow beam widths that are within orders of magnitude of the propagation wavelength, and (c) does not address irregularities in beam intensity over the spot area.
Next, regarding the first example that discusses the image containing black and white stripes placed in front of the camera, I found several problems. First, this example is more related to the problems encountered in scanning by the camera than to the problems of the “discrete display” (as I noted earlier). Note, however, that the third sentence uses the term “effective resolution of the TV system,” which is inconsistent with the initial definition that the Kell factor applied only to the discrete display.
Second, the explanation in the first example appears to establish or relate the cause for achieving a lower resolution in the camera scan to be related to the difficulties of aligning the stripes on the card (image) to the scan lines of the camera rather than to something else. This rationale is strongly noted by the words “since it is unlikely the stripes will line up perfectly with the lines on the camera’s sensor,” as if to suggest that it is because of the camera operator’s inability to align the stripes that this causes the Kell Factor.
Regarding the discussions on this “talk” page, there are conflicting discussions over the relationship of the Kell Factor to the Nyquist Criterion. In the opening discussion both Peter S. and Dtcdthingy suggest that the Kell Factor is simply another term or application of the Nyquist Criterion. The writer of the section titled “Interlace” states that “the Kell factor, while determined empirically, is still a manifestation of the Nyquist effect.” However, under the section titled “CCD Display” Jluff states “Kell is empirically determined and not derived. It is not related to Nyquist.” It cannot be much more conflicting than this.
Regarding what I have read in other articles on the Internet, some writers suggest that Raymond Kell was simply attempting to express the relationship between what he and others had observed in assessing the resolution of television sets in the early days of television. None of the articles described the details or any rationale explaining the cause. One textbook with two references to documents naming Raymond Kell as a contributing author states that Kell’s reasoning was that an image simply had to be over-sampled to be reliably represented on the display. More specifically, it states that if the scanning resolution were not greater than the resolution contained in a pattern, then the effects of phasing could dominate the overall representation seen on the display. Accordingly, it is not until the scanning resolution is around 1.5 times the pattern resolution that the pattern is sufficiently relieved of the noted effects of phasing.
For those unfamiliar with the term phasing, phasing is simply a single word label for the effects caused by moving the scanning device with respect to the pattern. In one case, if the resolutions are identical, then shifting the alignment of a scanner over a black and white striped pattern can provide a range of representations from alternating stripes to a uniform gray solid. The writer also made the comment that Kell had been working with interlaced displays at the time, so his ideas became associated with problems in interlacing, when in fact they were not. The bottom line on this article is that it basically attributes the Kell Factor to scanning problems directly related to the Nyquist Criterion and not directly to the display as suggested in the subject Wikipedia article.
Regarding any additional comments that I may have gleaned from my attempt to gain an understanding of the Kell Factor, I have two: The first relates to the fact that pretty much every discussion that attempts to explain the Kell Factor attempts to explain it in terms of scanning a striped pattern without success and then increasing the scan resolution to about 140 percent (to provide 70 percent of the pattern resolution) and thereby achieving success. This phenomenon is simply related to the Nyquist Criterion, and although the Nyquist Criterion only requires a minimum of two samples for sampling one period of a sine wave, other wave shapes and patterns are certain to require additional samples.
Accordingly, even for a repeating-striped pattern (which is analogous to a square wave), two samples per period can be insufficient when the effects of phasing become dominant and/or when the number of periods or repeats is sufficiently low in number as to not allow a recurring cycle in the resulting scanned pattern. For example, two samples taken from one period of a sine wave may provide little or no information while 199 or 201 samples taken from 100 periods is likely to provide full amplitude, frequency, and phase information (if the waveform is known to be a sinusoid). And, although a Nyquist approach can be used to analyze scanning patterns, it may not necessarily be the best or the only approach to actually measuring resolution. And, in fact, using the Nyquist Criterion may be completely misleading simply because of the effects of phasing. As a rule, it is always best to understand all the details when applying theories, models, standards, and criteria to observations.
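The phase dependence described above is easy to demonstrate numerically. A minimal sketch, using a sinusoidal "stripe pattern" sampled at exactly two samples per period (the sample count and pattern are illustrative assumptions): the recovered swing depends entirely on the alignment between stripes and scan lines.

```python
import numpy as np

# Sampling a sinusoidal stripe pattern at exactly two samples per period
# (the Nyquist rate). The recovered amplitude depends on the phase
# alignment between the stripes and the scan lines.
n = np.arange(100)  # scan-line indices

def sampled_swing(phase):
    samples = np.cos(np.pi * n + phase)  # one sample per half-period
    return samples.max() - samples.min()

aligned = sampled_swing(0.0)           # stripes line up with scan lines
misaligned = sampled_swing(np.pi / 2)  # shifted by a quarter period

print(round(aligned, 3))     # 2.0: full black/white swing
print(round(misaligned, 3))  # 0.0: uniform gray
```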
However, and although this is not my field of expertise, if I were to devise a method to evaluate the resolution of only a display, I would probably alternate lines and simply verify that the alternating lines displayed as alternating lines. Next, if I were to devise a method to evaluate the resolution of a system that included a scanner or camera and a display, I would probably apply either two adjacent lines or two adjacent dots of various widths and vary the spacing between them. Because of the effects caused by the alignment of the position of the scan lines over the pattern, I would probably provide both a best-case and worst-case resolution, where the best case would correspond to a scan line falling on center between the two pattern lines or dots and the worst case would correspond to two scan lines falling symmetrically on both sides of the center line between the two pattern lines or dots. It may be helpful to draw a diagram of this to see why this is true.
My second comment relates to black and white television displays. First, black and white television sets do not contain a mask, so that a full electron beam produces full illumination by striking the phosphors on the viewing screen. Second, because the subject electron beam is typically focussed to provide a round spot, and because sweeping a round spot across a screen provides a line with non-uniform intensity across its width, black and white televisions would not be expected to provide sweep lines having uniform intensity. Third, to provide for uniform intensity and minimal banding, it would appear that scan lines would be required to overlap around 13 percent (one minus the square root of three over two). Accordingly, it would be expected that this increase in line width would thereby decrease the resolution over that suggested by counting only the number of lines per unit width.
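The arithmetic behind the quoted 13 percent figure can be checked directly. This only evaluates the commenter's stated expression ("one minus the square root of three over two"), not the geometry behind it.

```python
import math

# Evaluating "one minus the square root of three over two" to check the
# ~13 percent overlap figure quoted above (arithmetic check only).
overlap = 1 - math.sqrt(3) / 2
print(round(overlap * 100, 1))  # 13.4 percent
```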
Also, a similar effect analogous to that described above could be attributed to the effect of altering the width of the scan lines as well. Tighter or more narrowed scan lines would provide scanned values closer to the values in the center of the path or band being scanned, while wider scan lines would provide scan values that are affected by the overlapping of scans and by the values of the adjacent bands.
All in all, it is certainly interesting that this phenomenon is so poorly understood and explained by so many who are willing to state what is apparently more opinion based than fact. Possibly we need to follow the references to Raymond Kell and his writings and actually read what he wrote.
BillinSanDiego (talk) 02:33, 26 March 2008 (UTC)
- Thanks for the great write-up. I'll have to come back to it later when I have a bit more time on my hands. I came to the Kell Factor page totally by accident. Damienivan (talk) 20:53, 28 January 2015 (UTC)
interlace
Note that the presence or absence of interlacing is irrelevant to Kell factor.
Is this correct? I thought that interlace was the reason that Kell Factor on a TV was .7, while a CCD scan viewed on a computer could be up to .9. Algr 08:42, 30 March 2006 (UTC)
- The number for Kell factor is a matter of perception. I suppose you might come up with a higher number if interlacing is absent (because the picture in general looks better), but there's no direct relationship --Dtcdthingy 04:56, 28 May 2006 (UTC)
Interlacing is important. Images must be filtered in the vertical dimension to avoid temporal aliasing, and thus interlaced images have lower resolution. This is a contentious issue, so someone from the television industry may be inserting erroneous comments into articles.
- Yes, I'm a shill for the TV industry. You got me. No, that statement was in there because I've heard a few people (including in reference works) claim Kell factor exists to account for interlacing, which I hope you agree is inaccurate. I stand corrected on it being irrelevant. --Dtcdthingy 23:47, 24 October 2006 (UTC)
The Kell factor, while determined empirically, is still a manifestation of the Nyquist effect. In any sampled system the maximum frequency is determined by the sin(x)/x loss. In a CRT the vertical scan is sampled, whereas the horizontal is continuous. Therefore the maximum horizontal resolution can be reduced to compensate for vertical sampling losses. Since interlace increases apparent vertical resolution but does not double it, the effective vertical resolution upon which the Kell factor is calculated depends on whether the scan is interlaced or not. Since modern LCD screens are progressive scan (even when fed with an interlaced signal), use sampling in both horizontal and vertical directions, and do not use scanning but simultaneous clocking, they do not have a Kell factor (or the Kell factor is 1), since the sampling losses are the same in each direction. This is the reason why no Kell factor is used for modern HD television signals. The reason there is so much disagreement about the Kell factor is that all explanations are true - they are all different ways of seeing the same effect, and all are equally valid. BM 82.40.211.149 19:59, 11 February 2007 (UTC)
ClearType?
This sounds related to ClearType as well. —Ben FrantzDale 03:25, 28 May 2006 (UTC)
- Nope, nothing to do with it. --Dtcdthingy 04:56, 28 May 2006 (UTC)
- I may not understand Kell factor entirely, but I'm surprised you say this. I guess font hinting is actually more related to this than ClearType. As I understand it, font hinting takes advantage of the fact that the resolution of a display device goes up when the signal you want to display happens to be in phase with the pixels on the screen (the extreme being if I want to draw a one-pixel square at (2,2), an LCD display can do this exactly whereas the same square cannot be drawn accurately centered at (2.5,2.5)). This sounds like Kell factor to me. Am I mistaken? —Ben FrantzDale 12:04, 30 May 2006 (UTC)
- Possibly yes. They're both concerned with the problem of picture details not lining up with the pixel grid. Kell is simply a method of estimating its effects, whereas font hinting is a way to minimize it. --Dtcdthingy 15:41, 30 May 2006 (UTC)
- I just found a fascinating paper that relates Kell factor with subpixel (ClearType) rendering: "Subpixel Image Scaling for Color Matrix Displays" by Michiel A. Klompenhouwer and Gerard de Haan. —Ben FrantzDale 19:12, 30 May 2006 (UTC)
CCD display
The article makes two references to a "CCD display". What is this meant to refer to? --Dtcdthingy 00:05, 24 October 2006 (UTC)
- It means the author needs to be beaten with the cluestick. 87.11.30.175 22:16, 22 January 2007 (UTC)
Text by 87.11.30.175:
- This makes no sense whatsoever, since CCD is a sensor technology, not a display technology. From the explanation given in the second paragraph, it's clear that the Kell factor is related to the acquisition process, not the display process. Thus it makes no difference whether you're talking about a cheap crappy old TV or the brand-spanking-new HDTV you just bought. You should be comparing the cameras used to acquire video, and the format it is encoded in.
This was moved from the article. —Ben FrantzDale 22:30, 22 January 2007 (UTC)
Kell is empirically determined and not derived. It is not related to Nyquist. Kell determined the value in 1934 using expert and non-expert viewers and a PROGRESSIVELY scanned image; thus it is unrelated to interlace, but rather was an attempt to come up with a way to relate horizontal and vertical resolution. The value used is affected by the type of scanning and the MTF of the total system. Kell factor was first determined when cameras used tubes with Gaussian scanning beams, and CRT displays with a similar Gaussian beam distribution. Modern CCD cameras and digital displays act differently due to the physics of the process of acquiring and displaying the image. The best way to relate resolution is using MTF (Modulation Transfer Function), which is discussed in the Gary Tonge citation added to the article. Jluff 02:06, 28 January 2007 (UTC)
- I changed the article to more clearly state that the fixed-pixel nature of the image sensor and display are the issue. But can anyone jump in with a citation for the 0.90 figure? --Damian Yerrick (talk | stalk) 18:44, 11 March 2007 (UTC)
Cleanup/disputed
I've removed these two tags until someone makes an actual complaint about something that needs changing --Dtcdthingy 21:39, 12 February 2007 (UTC)
Answers?
I've followed the links around and, without digging up Kell's original paper, I can't find any description of how to measure the Kell factor of a given display. I assume you display a resolution test pattern of increasing spatial frequency and have an observer pick the perceived cutoff frequency, then go on to compute the Kell factor accordingly. Is that right? Isn't that essentially finding the frequency for which the MTF is some given percent, at least if you measure MTF as the minimum MTF over all phases of input signals?
Relating this to the Nyquist limit: The Nyquist–Shannon sampling theorem says that the sampling rate must be strictly greater than twice the maximum signal frequency. That seems to imply that the Kell factor cannot be 1, at least not without phase aliasing of up to 90° in the highest-frequency component. —Ben FrantzDale (talk) 02:16, 3 April 2008 (UTC)
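The "minimum MTF over all phases" idea above can be made concrete with a small numerical sweep. This is a sketch under stated assumptions (point sampling of a pure sinusoid, no reconstruction filter, a coarse phase grid), not a definitive measurement procedure.

```python
import numpy as np

n = np.arange(1000)  # scan-line (sample) indices

def min_modulation(k):
    """Worst-case modulation over phase for a point-sampled sinusoid.
    k = spatial frequency / Nyquist frequency (sampling at 2x Nyquist)."""
    phases = np.linspace(0, np.pi, 50)
    amps = []
    for phi in phases:
        s = np.cos(np.pi * k * n + phi)
        amps.append((s.max() - s.min()) / 2)
    return min(amps)

print(min_modulation(0.5) > 0.7)   # True: below Nyquist, modulation stays high
print(min_modulation(1.0) < 0.05)  # True: at Nyquist, worst-case phase erases it
```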
How is the Nyquist frequency related to the beat frequency?
How is the Nyquist frequency related to the beat frequency? Especially if we use digital image data and simply cut off the harmonics? (A better question: harmonics of what, exactly?)
Suppose we have some movie file on a PC and connect the PC to an HDTV via DVI; DVI transfers the data without any risk of interference with the signal's own harmonics. Then the HDTV's LCD panel changes the subpixels' voltage to adjust their intensity (saturation) and thus the pixels' color. Where could any kind of beat present itself in this situation? What am I missing? Thanks for answering! -- PAStheLoD (talk) 19:49, 28 December 2009 (UTC)
Kell Factor is related to reconstruction error in sampled images.
Much of this material is indeed confusing... so please allow me to attempt to explain what the issue is, and what the Kell factor was about.
First, if one reviews the Shannon-Nyquist theorem, one learns that one may sample a band-limited image at twice the band limit of the signal and then, using proper low pass reconstruction filters, exactly recover the original band-limited signal. Everyone is familiar with the problem of aliasing that occurs when a signal is sampled at below twice the band limit. Once that occurs, no filtering can distinguish between the real original signal and the aliased signal now found in the samples.
So, assuming a properly sampled image, can we reconstruct the signal with exactly the same number of reconstruction points? YES and NO... Yes, if we have a perfect low pass filter after the reconstruction. Interestingly, this is exactly where displays can fail. In CRTs, the scan lines were effectively sampling and reconstructing the image along the vertical axis. The Gaussian beam spot provided some filtering, but often not enough, as people would try to sharpen the image by making the beam spot smaller, helping to increase the horizontal resolution, which in a purely analog TV system was limited only by the bandwidth of the total system. So, with small beam spots, the CRT lines did not sufficiently filter out the effect of the sampling scan lines themselves. These lines would beat with vertical image frequency components. Those components that were just below the Nyquist limit would suffer very high distortion. So, Kell noted this distortion and determined that, for the typical beam spot, limiting the bandwidth of the image to 0.7 of the reconstruction lines would effectively eliminate this moiré beat pattern.
Fast forward to modern digital displays: now we have images that are sampled and reconstructed in both the vertical and the horizontal. Further, with flat panel displays using sharp-edged pixels (or rather groups of subpixels), there is no low pass reconstruction filter at all! The only filter may be the viewer's own eyes, with their own optical modulation transfer function, which for a high resolution display viewed from a distance might filter out the moiré caused by the lack of a low pass reconstruction filter.
So... now we come to another point... we can increase the number of reconstruction points and use a digital low pass reconstruction filter. This is exactly what subpixel rendering can provide. One can use the separate RGB subpixels (or RGBW in the case of the PenTile RGBW system, http://www.nouvoyance.com ). This will reduce the appearance of the moiré artifact such that the Kell factor effectively reaches 1, i.e. the full Nyquist limit.
If one does not use either subpixel rendering or an optical low pass filter, the Kell factor is needed and is indeed around 0.7 for such digital imaging systems. BTW, the Kell factor can be determined mathematically by taking the difference between:
Moiré_max = [1 − cos(πK/2)] / [1 + cos(πK/2)]
Moiré_min = |sin(πK) − 1| · [1 − cos(πK/2)] / [1 + cos(πK/2)]
where K is the ratio of the spatial frequency to the Nyquist limit frequency.
The difference between these two functions is the amplitude of the moiré beat pattern at each value of K. The highest value of K at which this difference equals zero is the Kell factor. If someone is ambitious, they might plot this and use it in the article.
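Taking the formulas above at face value (the transcription in this thread may be imperfect, so treat this as a sketch rather than a derivation), a quick numeric check of their basic behavior looks like this:

```python
import numpy as np

# Numeric check of the moiré-amplitude formulas as transcribed above.
# K = spatial frequency / Nyquist limit frequency.
def moire_max(K):
    return (1 - np.cos(np.pi * K / 2)) / (1 + np.cos(np.pi * K / 2))

def moire_min(K):
    return np.abs(np.sin(np.pi * K) - 1) * moire_max(K)

K = np.linspace(0, 1, 101)
beat = moire_max(K) - moire_min(K)  # amplitude of the moiré beat pattern

print(round(float(beat[0]), 3))     # 0.0: no beat at zero frequency
print(round(moire_max(1.0), 3))     # 1.0: full moiré at the Nyquist limit
```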
You can read more about these issues in Chapter 12: Image Reconstruction on Color Subpixelated Displays by Candice H. Brown Elliott, in Mobile Displays, Applications and Technology, SID Series, Wiley Press.
99.183.244.198 (talk) 19:48, 5 March 2010 (UTC)Sunbear —Preceding unsigned comment added by 99.183.244.198 (talk) 19:41, 5 March 2010 (UTC)
- So basically the problem is when the sampling frequency is insufficient, because at the reconstruction stage the scanning latencies sometimes just "add up" and aliasing happens. Kell figured out that by sampling even more (or limiting the original signal's band) this can be avoided. (ORLY?)
- Now, the article should explain the why instead of the how, in my opinion, and clarify the problem.
- The first example is just an application of the factor (multiplying, woah) and an incomplete explanation of the problem. The second example fails to address the whys regarding the points it tries to make, therefore not helping again. The third one uses a different number (0.9) without any word on why it's the right number to use. Only the fourth example helps in understanding the problem; however, it should start by showing how the original undersampling problem occurs in the case of DSLRs.
- I don't think we have bad examples, or this is a poorly written article, it's just not fine-tuned to people who want to understand it, instead it favors those who just want to use it. PAStheLoD (talk) 20:42, 6 March 2010 (UTC)
- I don't mean to offend, but this is BS. Engineers just blindly applied techniques used for analog transmission without understanding why they were used. The Nyquist theorem does not apply to digital images, as digital images consist of discrete values, not waves of any kind. If this sampling theorem BS is thrown away, pixels are treated as discrete values, and a simple gamma-correct average is used for downscaling, perfectly sharp images with no artifacts can be created. I can give you examples if you don't believe me.--90.179.235.249 (talk) 23:11, 9 June 2010 (UTC)
It seems that the 2 images used in the article were produced without taking "gamma correction" for sRGB space into account. https://en.wikipedia.org/wiki/SRGB#The_sRGB_transfer_function_.28.22gamma.22.29 This introduces another visual defect unrelated to the Kell factor. If you look at the current version of the image on screen from a few meters, you will not see tiny vertical lines, but you will see alternating light-dark stripes. To illustrate only the effect related to the Kell factor, fix the gamma (for sRGB). Then you will see only monotone gray from a larger distance. I have gamma-corrected these images, but I cannot manage to upload them successfully, so somebody else has to do it and then possibly delete my post to keep the talk concise. The original image can be gamma-corrected (almost correctly to sRGB) e.g. in IrfanView - open Kell_factor1.png, then Menu/Image/Color corrections... / Gamma correction: - Set: 2.20, then apply, save, replace original. --Jan Malec (talk) 00:44, 17 April 2017 (UTC)
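The gamma point above can be illustrated with a tiny comparison: averaging a black/white stripe pattern down to a single gray in encoded (gamma) space gives a visibly wrong result. A plain 2.2 power law is used here as a stand-in for the exact sRGB transfer function.

```python
# Downscaling a black/white stripe pattern to a single gray by averaging.
# A 2.2 power law approximates the sRGB transfer function.
black, white = 0.0, 1.0      # encoded (gamma-space) pixel values

naive = (black + white) / 2  # averaging encoded values directly

# Correct: decode to linear light, average, then re-encode.
linear = (black ** 2.2 + white ** 2.2) / 2
correct = linear ** (1 / 2.2)

print(round(naive * 255))    # 128: noticeably darker than it should be
print(round(correct * 255))  # 186: the gray the fine stripes should match
```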
Kell factor: Its meaning and its purpose still a total mystery!
So what exactly IS the Kell factor and what is it good for? After a few hours of reading this article, the links it provides, and several books on AV technology, it seems to me like Kell factor means, "Hey, we have too little resolution to properly resolve our picture, so let's make it even smaller!" Is that what it is? If I move the line card by less than a pixel and everything turns grey, what advantage would there be if I'll make my resolution even smaller? Wouldn't that result in even more cases of grey, as in, the lines would have to be wider to even be recognizable? Or is this some sort of anti-aliasing in order to prevent moire patterns?
Also: Is there any relation between the Kell factor dating from something like the 1940s and the fact that we have non-square pixels in modern digital video? Some explanations sound like it, as they're basically saying the Kell factor means that the relative vertical and horizontal resolutions (relative as in per given length, such as dots per inch) can't be identical. --91.65.185.199 (talk) 14:22, 9 May 2011 (UTC)
Aliasing Ratio
FYI, ISO 12233 defines aliasing ratio, which seems to be closely related, as:
- The ratio of the "maximum minus the minimum response" for the white bars within a burst to the "average modulation level" (equal to the average white-bar signal minus the average black-bar signal) within the burst provides the aliasing ratio for that particular spatial frequency burst.
so that seems like a contemporary (as of 2000) way to describe the phase-dependence of amplitude modulation. —Ben FrantzDale (talk) 15:46, 30 May 2013 (UTC)
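The quoted definition can be turned into a toy computation. All bar values below are invented purely to illustrate the formula; they are not measurements from any real chart.

```python
import numpy as np

# Toy illustration of the ISO 12233 aliasing-ratio definition quoted
# above, applied to a sampled bar burst (values invented for the sketch).
white_bars = np.array([0.90, 0.70, 0.85, 0.65])  # sampled white-bar peaks
black_bars = np.array([0.10, 0.30, 0.15, 0.25])  # sampled black-bar troughs

# Average modulation level: average white minus average black.
modulation = white_bars.mean() - black_bars.mean()

# Aliasing ratio: (max minus min white-bar response) / average modulation.
aliasing_ratio = (white_bars.max() - white_bars.min()) / modulation

print(round(float(aliasing_ratio), 2))  # 0.43 for these made-up values
```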
Original Kell Paper?
I found a reference to Raymond Kell's 1934 paper. I believe it is called: "An Experimental Television System," R. D. Kell, A.V. Bedford and M. A. Trainer, Proceedings of the Institute of Radio Engineers, Vol. 22, Issue No. 11, November 1934, p. 1246. A further explanation can be found in an article titled "A Determination Of Optimum Number Of Lines In A Television System," Kell, Bedford and Fredendall; RCA Review, Vol. 5, Issue 1, [July, 1940,] pp 8-30. --L. Robert Taylor (talk) 07:51, 9 September 2016 (UTC)