Bill Dirks wrote:
> Chris Worley wrote:
> > My video hardware and frame buffer are tightly integrated (a capture
> > engine that receives decoded video input on the front end of a
> > non-visible portion of the frame buffer, and an overlay engine that
> > blends the video and graphics on the back end for output).  I think
> > this tight integration will be used by many cards in the future (video
> > and graphics cards will be the same, as with the ATI All-In-Wonder
> > and Matrox Marvel).
> >
> > The frame buffer memory used by the video hardware must be accounted
> > for and managed from user space, but it should be the V4L driver that
> > responds with the amount of memory currently required by the video
> > hardware.
> > [...]
> > Should the V4L2 specification be adjusted for this consideration?  Or,
> > how best would this be implemented?
>
> This is normally done by creating custom ioctls.

No.  This is not different from DMA-to-userspace (which we need a new
API / API extension for anyway).  The application just passes a
userspace pointer to the driver and asks it to transfer the video data
to that location.  That might be simply malloc()ed memory, an MIT-SHM
segment, a pointer to the mmap()ed video framebuffer, whatever...

  Gerd

--
Get back there in front of the computer NOW.  Christmas can wait.
		-- Linus "the Grinch" Torvalds, 24 Dec 2000 on linux-kernel
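
[A minimal sketch of the userspace-pointer idea Gerd describes, written
against the V4L2_MEMORY_USERPTR streaming I/O that later became part of
V4L2.  The thread above predates that extension, so the device name,
frame size, and exact ioctl sequence here are illustrative assumptions,
not something taken from the thread; format negotiation (VIDIOC_S_FMT)
is omitted for brevity.]

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/videodev2.h>

	int main(void)
	{
		int fd = open("/dev/video0", O_RDWR);   /* assumed device node */
		if (fd < 0) { perror("open"); return 1; }

		/* Ask the driver to use buffers that live in application memory. */
		struct v4l2_requestbuffers req;
		memset(&req, 0, sizeof(req));
		req.count  = 1;
		req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
		req.memory = V4L2_MEMORY_USERPTR;
		if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0) { perror("VIDIOC_REQBUFS"); return 1; }

		/*
		 * The "userspace pointer": plain malloc()ed memory here, but it
		 * could just as well be an MIT-SHM segment or the mmap()ed video
		 * framebuffer.  Note that many drivers require page-aligned
		 * buffers for userptr I/O; the size is an assumed 640x480 YUYV frame.
		 */
		size_t length = 640 * 480 * 2;
		void *buffer = malloc(length);

		/* Queue the buffer: the driver transfers the video data to this address. */
		struct v4l2_buffer buf;
		memset(&buf, 0, sizeof(buf));
		buf.type      = V4L2_BUF_TYPE_VIDEO_CAPTURE;
		buf.memory    = V4L2_MEMORY_USERPTR;
		buf.index     = 0;
		buf.m.userptr = (unsigned long)buffer;
		buf.length    = length;
		if (ioctl(fd, VIDIOC_QBUF, &buf) < 0) { perror("VIDIOC_QBUF"); return 1; }

		enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
		ioctl(fd, VIDIOC_STREAMON, &type);

		/* Dequeue: on return the captured frame is already in `buffer`. */
		if (ioctl(fd, VIDIOC_DQBUF, &buf) == 0)
			printf("captured %u bytes into the userspace buffer\n", buf.bytesused);

		ioctl(fd, VIDIOC_STREAMOFF, &type);
		free(buffer);
		close(fd);
		return 0;
	}

The point of the design is that the driver never has to export or manage
its own capture memory for this case: the application decides where the
data lands, so no custom ioctl per card is needed.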