> OK - then how about keep the REQBUFS(0) (or something similar), and
> scrap this stuff about locking down a buffer for 1 second, etc.  That is
> starting to look like a horrible kludge, with a lot that can go wrong.

The current API was designed with mmap()ing buffers as the only way to do
streaming capture using multiple buffers.  This implies some limitations
which I simply want to get rid of.  One of the limits is that you _have_
to use the same buffers again and again.

> Could you possibly include a short draft of your spec, just so that I
> can see it all together (possibly add how you want to handle multiple
> planes) - I would really appreciate it.

Ok.  First step obviously is to set the format you want to capture
(VIDIOC_S_FMT).  Then ask the driver for buffers using REQBUFS, indicating
whether you want to use your own userspace memory buffers or mmap() the
memory from the driver.  In the second case you'll have to mmap() the
buffers here.  Then you can queue the buffers you acquired using REQBUFS
with VIDIOC_QBUF, start capture with VIDIOC_STREAMON, get/requeue buffers
using VIDIOC_DQBUF/VIDIOC_QBUF until you are done, and finally call
VIDIOC_STREAMOFF and REQBUFS(0).  munmap() the driver memory if needed.

In case the driver supports multiple opens, the driver should lock down
capture resources between REQBUFS(n) and REQBUFS(0) I think.

struct v4l2_requestbuffers
{
	int			count;
	__u32			type;
	__u32			flags;
	__u32			size;
	__u32			reserved[2];
};

Two new fields: flags (we need one to indicate we are going to use
userspace memory) and size (the size of the driver memory block the
application should mmap() in case it uses driver memory).

struct v4l2_buffer
{
	int			index;
	__u32			type;
	__u32			flags;
	stamp_t			timestamp;
	struct v4l2_timecode	timecode;
	__u32			sequence;
	__u32			datasize;	/* for compressed */
	__u8			*data;
	__u8			*data2;
	__u8			*data3;
	__u8			*data4;
	__u32			reserved[3];
};

While looking at it: Why *both* timestamp and timecode?  Isn't that
somewhat redundant?  Any news on the media time code?  Wasn't there
something planned for 2.5 (alsa?)?  If so we should take care.  If we are
going to break stuff we should at least try to get *everything* right, so
we don't have to break apps again if possible...

For userland buffers the application has to fill in the data* pointers for
the QBUF ioctl.  The application can use the same memory areas all the
time, but it doesn't have to.  If it is going to reuse the buffers it
should set a flag (V4L2_BUF_ATTR_REUSE?) as a hint to the driver.  The
driver may use this to keep some state information cached / the buffer
locked / whatever.  The driver can also completely ignore that flag...

For driver buffers the driver will return valid (userspace) pointers on
DQBUF (which will obviously point into the memory block mmap()'ed by the
application).  The driver is absolutely free here in how it manages its
memory.  It can return pointers to a different location every time if it
wants (although that probably isn't very useful).  The driver must
guarantee that any buffer returned to the application is valid until the
application frees the buffer (either by calling STREAMOFF or by requeuing
the buffer using QBUF).

For compressed images data + datasize will be used.  Uncompressed packed
pixel formats use just data; planar formats can additionally use data2 ...
data4.  I think 4 planes should be enough.  Both yuv and rgb have three
planes, so we have one spare in case we want to use an alpha channel some
day.  Packed pixel formats can use v4l2_pix_format.bytesperline to specify
any padding they want.
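To make the proposed flow a bit more concrete, here is a rough sketch of a
capture loop using driver-allocated (mmap()ed) memory with the draft
structs above.  Take it with a grain of salt: the header name,
V4L2_BUF_TYPE_CAPTURE, the mmap() offset of zero and the "flags == 0 means
driver memory" convention are just assumptions for illustration, and error
handling is left out.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/videodev2.h>	/* assumed to carry the draft structs above */

int capture_frames(const char *devname, int nframes)
{
	struct v4l2_format fmt;
	struct v4l2_requestbuffers req;
	struct v4l2_buffer buf;
	unsigned char *base;
	unsigned int size;
	int fd, i, type;

	fd = open(devname, O_RDWR);
	if (fd < 0)
		return -1;

	/* 1) set the capture format */
	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_CAPTURE;	/* assumed type name */
	/* ... fill in width/height/pixelformat here ... */
	ioctl(fd, VIDIOC_S_FMT, &fmt);

	/* 2) ask for driver buffers; flags left zero = driver memory (assumed) */
	memset(&req, 0, sizeof(req));
	req.count = 4;
	req.type  = V4L2_BUF_TYPE_CAPTURE;
	ioctl(fd, VIDIOC_REQBUFS, &req);
	size = req.size;

	/* 3) mmap() the block the driver advertised via req.size (offset 0 assumed) */
	base = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

	/* 4) queue all buffers, then start streaming */
	for (i = 0; i < req.count; i++) {
		memset(&buf, 0, sizeof(buf));
		buf.index = i;
		buf.type  = V4L2_BUF_TYPE_CAPTURE;
		ioctl(fd, VIDIOC_QBUF, &buf);
	}
	type = V4L2_BUF_TYPE_CAPTURE;
	ioctl(fd, VIDIOC_STREAMON, &type);

	/* 5) dequeue, process, requeue until done */
	for (i = 0; i < nframes; i++) {
		memset(&buf, 0, sizeof(buf));
		buf.type = V4L2_BUF_TYPE_CAPTURE;
		ioctl(fd, VIDIOC_DQBUF, &buf);
		/* buf.data points somewhere into the mmap()ed block */
		printf("frame %d at %p\n", i, (void *)buf.data);
		ioctl(fd, VIDIOC_QBUF, &buf);
	}

	/* 6) stop streaming, drop the buffers, unmap the driver memory */
	ioctl(fd, VIDIOC_STREAMOFF, &type);
	req.count = 0;
	ioctl(fd, VIDIOC_REQBUFS, &req);
	munmap(base, size);
	close(fd);
	return 0;
}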
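For the userland-buffer case a QBUF call for a planar YUV format would
look roughly like this -- again just a sketch, with V4L2_BUF_TYPE_CAPTURE
assumed and V4L2_BUF_ATTR_REUSE being nothing more than the hint flag
proposed above:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>	/* assumed to carry the draft structs above */

/* Queue one application-owned planar YUV buffer.  REQBUFS is expected to
 * have been called before with the (still unnamed) "userspace memory"
 * flag set. */
int queue_user_yuv_buffer(int fd, int index,
			  unsigned char *y, unsigned char *u, unsigned char *v)
{
	struct v4l2_buffer buf;

	memset(&buf, 0, sizeof(buf));
	buf.index = index;
	buf.type  = V4L2_BUF_TYPE_CAPTURE;	/* assumed type name */
	buf.flags = V4L2_BUF_ATTR_REUSE;	/* hint: we'll requeue the same planes */
	buf.data  = y;				/* packed formats would fill data only */
	buf.data2 = u;
	buf.data3 = v;				/* data4 stays NULL, spare for alpha */
	return ioctl(fd, VIDIOC_QBUF, &buf);
}

The driver gets everything it needs from the pointers themselves, so the
application is free to pass different memory on every QBUF if it drops the
REUSE hint.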
Planar formats can't have padding (any objections to this?  If so: why?
I can't think of any useful application for this...).

  Gerd

-- 
Get back there in front of the computer NOW. Christmas can wait.
	-- Linus "the Grinch" Torvalds,  24 Dec 2000 on linux-kernel