Should the V4L2 API be modified to return the amount of frame buffer
memory used by the video hardware?
Different video devices have different frame buffer allocation
requirements, not just when different video formats are selected
(e.g., YUV422 vs. RGB), but also for other considerations such as
multiple frames, scaling, etc.
I manage the non-visible portion of the frame buffer the way regular
memory is managed: the visible graphical region is reserved, and the
remainder can be used for other video/graphics-related purposes. For
example, I can place bitmaps in the non-visible part of the frame
buffer, and use hardware-accelerated routines to blit them between
the visible and non-visible portions of the frame buffer.
This frame buffer memory allocation must be managed in user space.
My video hardware and frame buffer are tightly integrated: a capture
engine on the front end receives decoded video input into a
non-visible portion of the frame buffer, and an overlay engine on the
back end blends the video and graphics for output. I think this tight
integration will be used by many cards in the future (video and
graphics cards will be the same device, as with the ATI All-In-Wonder
and the Matrox Marvel).
The frame buffer memory used by the video hardware must be accounted
for and managed from user space, but it should be the V4L driver that
responds with the amount of memory currently required by the video
hardware.
This might also be useful for those applications that memory map
captured video.
Should the V4L2 specification be adjusted for this consideration? Or
how would this best be implemented?
Thanks,
Chris