On 5 Dec 2003, Gerd Knorr wrote:

> Laurent Pinchart <laurent.pinchart@xxxxxxxxxxx> writes:
> > The second way to implement the hardware multiplexing would be for the
> > driver to return the input number in the v4l2_buffer structure passed to
> > DQBUF. The application would also need a way to enable or disable a
> > specific input. The problem with that approach is that it leads to a
> > change in the API. On the other hand, this method uses four times fewer
> > input buffers than the first method.

When you have one channel split into four devices, as in your first way, the frame rate for each device is one quarter of what it is when there is only one device. So you should be able to get away with the same total number of input buffers, e.g. one device at 60 fps with 32 buffers, or four devices at 15 fps each with 8 buffers each.

> This one. Haven't found time yet to look into this in detail (your mails
> are still sitting in my inbox waiting ...). The basic idea is to make one
> of the reserved ints in v4l2_buffer an input number and add a new bit in
> the flags field to indicate that the new input field is used. To use the
> new feature, apps must set the flag and fill the input field when queuing
> buffers via QBUF. That should be backward compatible.

What about overlay? Say someone wanted to start four instances of xawtv on one device and have each one overlay one of the time-domain-multiplexed channels into a different window. With this method you can't do that, because:

a) xawtv (or anything else, for that matter) doesn't support multiple overlay windows.

b) the V4L2 API doesn't support multiple overlays at all, and there is no simple way to use a reserved int in some structure to add that.

For mmap capture, it may not be hard to add this in a backward-compatible manner, but no software supports it. From the point of view of a hardware manufacturer or user, it's a huge drawback if your hardware's time-domain multiplexing doesn't work with nvrec, motion, mencoder, and so on.
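
For what it's worth, here is a rough sketch of how the QBUF/DQBUF flow could look under Gerd's proposal. The names are made up purely for illustration (V4L2_BUF_FLAG_INPUT and the "input" member standing in for the new flag bit and the reserved int it would reuse); none of this is in videodev2.h today, so read it as a sketch of the idea, not the actual interface:

/* Illustrative only: mirrors the proposal, not the real videodev2.h.
 * V4L2_BUF_FLAG_INPUT and the "input" member are hypothetical names
 * for the new flag bit and the reserved int it would reuse. */
#include <stdio.h>
#include <string.h>

#define V4L2_BUF_FLAG_INPUT  0x0400  /* hypothetical "input field is valid" bit */

struct v4l2_buffer_sketch {
    unsigned int index;
    unsigned int flags;
    unsigned int input;   /* proposed: one of the reserved ints */
    /* ... the rest of v4l2_buffer stays as it is ... */
};

/* Queue buffer 'index' and ask the driver to fill it from a specific
 * time-multiplexed input (this is where VIDIOC_QBUF would be issued). */
static void queue_for_input(struct v4l2_buffer_sketch *buf,
                            unsigned int index, unsigned int input)
{
    memset(buf, 0, sizeof(*buf));
    buf->index  = index;
    buf->flags |= V4L2_BUF_FLAG_INPUT;   /* opt in to the new field */
    buf->input  = input;
}

/* After VIDIOC_DQBUF the application checks which input the frame came
 * from.  A driver that does not implement the extension leaves the flag
 * clear, so old applications and old drivers keep working unchanged. */
static unsigned int frame_input(const struct v4l2_buffer_sketch *buf)
{
    return (buf->flags & V4L2_BUF_FLAG_INPUT) ? buf->input : 0;
}

int main(void)
{
    struct v4l2_buffer_sketch buf;

    queue_for_input(&buf, 0, 2);   /* buffer 0, multiplexed input 2 */
    printf("buffer 0 queued for input %u\n", frame_input(&buf));
    return 0;
}

Note that this only covers mmap capture; as said above, it does nothing for the overlay case, where there is no per-buffer structure to hang an input number on.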