Talk:Vertical blanking interval
This article is rated Start-class on Wikipedia's content assessment scale.
Someone could probably reword that bit about buffer transfer in computer graphics systems during VBI so that it would be a bit more understandable to an outsider. --Russvdw 7/12/06
VBI really available for DVI?!
I have looked for facts regarding how VBI is related to DVI as stated in this article. I haven't found anything on the net and suspect that VBI is superseded by the DVI and HDMI digital standards through HDCP. Anyone who knows this for a fact or can refer to a source, please correct the article to either point to a source that DVI has VBI or remove the reference to it; it confuses things. --Starbar 12:38, 28 September 2006 (UTC)
- Yes, the blanking interval is still there. As stated in the article, it allows programmers and hardware designers to do offline processing during that time. It's a ton easier to ping-pong different buffers than to try to handle "true" streaming data. HDCP has nothing to do with it; part of the article simply states what can happen in the VBLANK time. Page 32 of the TMDS data sheet, which is the protocol for a DVI connection, shows the timing of active video vs. blanking. — RevRagnarok Talk Contrib 14:04, 28 September 2006 (UTC)
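For concreteness, here is a back-of-the-envelope calculation of how long that interval lasts (a hedged sketch using the published VESA 640x480@60 timing numbers; the same arithmetic applies to any DVI/TMDS mode):

<syntaxhighlight lang="c">
/* How long is the vertical blanking interval?  The numbers below are the
 * well-known VESA 640x480@60 timing; plug in any other mode's totals. */
#include <stdio.h>

int main(void) {
    const double pixel_clock_hz = 25.175e6; /* 640x480@60 dot clock       */
    const int h_total  = 800;   /* pixels per line, including blanking    */
    const int v_total  = 525;   /* lines per frame, including blanking    */
    const int v_active = 480;   /* visible lines                          */

    double line_s   = h_total / pixel_clock_hz;
    double vblank_s = (v_total - v_active) * line_s;
    double frame_s  = v_total * line_s;

    printf("line time: %.2f us\n", line_s * 1e6);    /* ~31.78 us */
    printf("vblank:    %.2f ms\n", vblank_s * 1e3);  /* ~1.43 ms  */
    printf("frame:     %.2f ms (%.2f Hz)\n", frame_s * 1e3, 1.0 / frame_s);
    return 0;
}
</syntaxhighlight>

So roughly 1.4 ms of every 16.7 ms frame is blanking, which is the "offline processing" window described above.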
- Yes, yes, you are correct, and I understand that there is a VBI interval. I was looking for VBI data such as Closed Caption (CC), Wide Screen Signaling (WSS), and CGMS-A copy protection, as well as the European Teletext data transmission over VBI. You are right, of course: there is a well-defined VBI even for DVI/HDMI. But is it still used to transfer these data packets on the hidden video lines? I guess not, and maybe that would be worth mentioning from a technical standpoint. The end user never sees the difference since it is burnt in by the receiver STB. --Starbar 14:17, 3 October 2006 (UTC)
- Good point. I added a brief comment to this article stating that DVI and HDMI have a vertical blanking interval, but cannot carry closed caption (CC) text, with a link to another article that goes into more detail. --DavidCary (talk) 05:36, 10 February 2021 (UTC)
Wording of first paragraph is confusing
Surely the first paragraph should read "the time difference between the end of the first line of one frame or field of a raster display, and the beginning of the next" - "beginning" and "end" appear to be swapped round. Colindente 17:19, 2 November 2007 (UTC)
Not true
Quote: "The pause between sending video data is used in real time computer graphics to perform various operations on the back buffer before copying it to the front buffer instead of just switching both pointers,..."
That's simply not true. Back buffer graphic operations can be performed at any time. Because they occur in the back buffer, nothing will be visible on the screen. The only operations that need to be synchronized with the VSYNC are the switching of the two (or more) pointers (page flipping) or drawing directly to the screen. BTW, there is no explanation anywhere of what "the two pointers" are.
There is also no explanation of why software should wait for VSYNC. The problem is that, if the image on the screen is changed while the screen refresh is in progress, only the last part of the new image will be displayed; the rest of the screen will keep displaying the old image (till the next refresh). That causes visible tearing, which flickers when it repeats every frame. By waiting for the VSYNC, an application can make sure the new image is displayed all at once. (A minimal sketch of both points follows this comment.)
Under normal circumstances nobody will use the refresh rate for timing. (Being a Windows programmer, I can only speak about that operating system.) There is an API function called Sleep (requiring very little resources) allowing timing with a resolution as short as 16 ms. That matches the monitor refresh rate (and may originate from it). For shorter timings (one-millisecond resolution), a multimedia timer can be used (see the timeSetEvent function, and the second sketch after this comment). Multimedia timers are resource-intensive and should be used only if they're REALLY necessary.
I think the article should mention that rendering frames faster than the refresh rate is pointless: the monitor will only display 50-60 frames per second (the refresh rate), no matter how many requests it receives. (Some monitors can be set to a higher refresh rate.) —Preceding unsigned comment added by 86.126.212.48 (talk) 06:58, 20 May 2010 (UTC)
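To make "the two pointers" concrete, here is a minimal double-buffering sketch in C. wait_for_vblank() and set_scanout_address() are hypothetical stand-ins for whatever a given platform really provides (a status-register poll, an interrupt, or an API call); they are stubbed out here so the sketch compiles:

<syntaxhighlight lang="c">
/* A minimal, self-contained sketch of the front/back buffer pointers
 * and why only the flip waits for VSYNC. */
#include <stdint.h>
#include <string.h>

enum { WIDTH = 320, HEIGHT = 240 };

static uint8_t buffer_a[WIDTH * HEIGHT];
static uint8_t buffer_b[WIDTH * HEIGHT];

static uint8_t *front = buffer_a; /* scanned out by the display hardware */
static uint8_t *back  = buffer_b; /* drawn into by the program           */

/* Hypothetical platform hooks: no-op stubs so the sketch compiles. */
static void wait_for_vblank(void)           { /* poll or IRQ goes here */ }
static void set_scanout_address(uint8_t *p) { (void)p; }

/* Drawing touches only the back buffer, so it can safely run at any
 * time: nothing here is visible on screen yet. */
static void render_frame(uint8_t color) {
    memset(back, color, WIDTH * HEIGHT);
}

/* Only the flip must be synchronized.  Swapping mid-refresh would show
 * the old image above the beam position and the new image below it
 * (tearing), which is the artifact described above. */
static void flip(void) {
    uint8_t *tmp;
    wait_for_vblank();
    tmp   = front;
    front = back;
    back  = tmp;
    set_scanout_address(front);
}

int main(void) {
    uint8_t c;
    for (c = 0; c < 60; ++c) { /* one simulated second at 60 Hz */
        render_frame(c);
        flip();
    }
    return 0;
}
</syntaxhighlight>

Note that, exactly as argued above, render_frame() needs no synchronization at all; only flip() waits for the blanking interval.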
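And a minimal sketch of the two Windows timing options mentioned above. Sleep, timeBeginPeriod, timeSetEvent, timeKillEvent, and timeEndPeriod are the actual Win32/winmm calls; whether the system really honours the requested 1 ms resolution is hardware- and version-dependent:

<syntaxhighlight lang="c">
/* Coarse timing with Sleep() versus a 1 ms multimedia timer.
 * Build (MSVC): cl timers.c winmm.lib */
#include <windows.h>
#include <mmsystem.h>
#include <stdio.h>
#pragma comment(lib, "winmm.lib")

static volatile LONG ticks = 0;

/* Runs on a worker thread once per millisecond while the timer lives. */
static void CALLBACK tick(UINT id, UINT msg, DWORD_PTR user,
                          DWORD_PTR r1, DWORD_PTR r2) {
    (void)id; (void)msg; (void)user; (void)r1; (void)r2;
    InterlockedIncrement(&ticks);
}

int main(void) {
    MMRESULT timer;

    timeBeginPeriod(1);  /* request 1 ms system timer resolution    */
    timer = timeSetEvent(1, 0, tick, 0, TIME_PERIODIC);

    Sleep(1000);         /* coarse wait: granularity around 16 ms   */
    printf("callbacks in 1 s: %ld\n", ticks); /* ~1000 if honoured  */

    timeKillEvent(timer);
    timeEndPeriod(1);
    return 0;
}
</syntaxhighlight>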
Programmers of Commodore 64 computers used double buffering extensively -- and it was very easy to accomplish. Frame buffering was done in system RAM, and the video controllers could easily be set to read from a specified address. In addition, the video processor generated a vertical blanking interrupt at the beginning of vertical retrace. The programmer could service this interrupt and switch to the back buffer simply by changing the address of the frame buffer in the video chip. Also, since the system was simple and hardware driven, it was not necessary to use higher level drivers to accomplish the task. Smooth sprite animation was a hallmark of Commodore games. This may also be true of Atari computers at the time -- I wrote only for C-64. —Preceding unsigned comment added by TwangGuru (talk • contribs) 22:24, 30 November 2010 (UTC)
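For illustration, here is a hedged C sketch of the flip TwangGuru describes, written for the cc65 cross-compiler (cl65 -t c64). The original comment describes an interrupt-driven flip; for brevity this version simply polls the VIC-II raster register instead. The register addresses are the documented VIC-II locations, but the exact raster line at which blanking begins differs between PAL and NTSC, and placing the second screen at $0800 assumes that memory is free:

<syntaxhighlight lang="c">
/* Double buffering on the C64 by repointing the VIC-II, for cc65.
 * Raster line 255 polled below is past the visible area on both PAL
 * and NTSC; assumes $0800-$0BFF is available as the second screen. */
#include <string.h>

#define VIC_RASTER (*(volatile unsigned char *)0xD012) /* raster line, low 8 bits */
#define VIC_MEMPTR (*(volatile unsigned char *)0xD018) /* screen/charset pointers */

static unsigned char *const screens[2] = {
    (unsigned char *)0x0400,  /* default screen RAM        */
    (unsigned char *)0x0800,  /* second buffer (see above) */
};
static unsigned char visible = 0;

/* Draw into whichever screen the VIC-II is NOT currently displaying. */
static void render(unsigned char c) {
    memset(screens[visible ^ 1], c, 1000); /* 40x25 screen codes */
}

/* Wait for vertical blanking, then repoint the VIC-II at the freshly
 * drawn buffer.  The high nibble of $D018 selects screen RAM in units
 * of $0400 within the current 16 KB VIC bank. */
static void flip(void) {
    while (VIC_RASTER != 0xFF) ;  /* crude vblank wait */
    visible ^= 1;
    VIC_MEMPTR = (VIC_MEMPTR & 0x0F)
               | (unsigned char)(((unsigned)screens[visible] >> 10) << 4);
}

int main(void) {
    unsigned char frame = 0;
    for (;;) {
        render(frame++ & 0x07);  /* cycle through a few screen codes */
        flip();
    }
    return 0;
}
</syntaxhighlight>

Changing the high nibble of $D018 is the entire "switch to the back buffer" step mentioned above: one register write, no copying.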
- I agree with the above that Wikipedia *should* say a few words about "what the two pointers are", why software should wait for VSYNC, etc.
- Would some other article, such as screen tearing or double buffering, be a better place to document those things than this "Vertical blanking interval" article? --DavidCary (talk) 03:30, 10 February 2021 (UTC)