How to support hardware-controlled time multiplexing?




Hi everybody,

I'm having a problem writing a V4L2 driver (or rather helping a coworker to do 
the job) for a high-end frame grabber.

The frame grabber has 3 completely independent channels. It offers concurrent 
raw (YUV/RGB) and MJPEG capture on each channel, as well as hardware-controlled, 
optimized time multiplexing between up to 4 sources per channel.

Each channel is handled by a device node in /dev/v4l/. Each node can be opened 
multiple times, and two concurrent captures (raw and MJPEG) can be started from 
two different opens.
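
To make that concrete, here is a minimal user-space sketch of the two 
concurrent captures on one node. The node name and frame size are placeholders, 
and error handling is omitted:

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
	/* Two opens of the same channel node. */
	int fd_raw = open("/dev/v4l/video0", O_RDWR);
	int fd_jpeg = open("/dev/v4l/video0", O_RDWR);

	struct v4l2_format fmt;
	memset(&fmt, 0, sizeof fmt);
	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	fmt.fmt.pix.width = 720;
	fmt.fmt.pix.height = 576;

	/* The first open captures raw YUV... */
	fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
	ioctl(fd_raw, VIDIOC_S_FMT, &fmt);

	/* ...while the second one captures MJPEG concurrently. */
	fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_MJPEG;
	ioctl(fd_jpeg, VIDIOC_S_FMT, &fmt);

	/* Buffer setup and VIDIOC_STREAMON would follow on each fd. */
	return 0;
}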

We are now trying to find the best way to implement time multiplexing.

The V4L2 API states (http://bytesex.org/v4l/spec/common.html#AEN154) that

"V4L2 drivers should not support multiple applications reading or writing the 
same data stream on a device by copying buffers, time multiplexing or similar 
means. This is better handled by a proxy application in user space. When the 
driver supports stream sharing anyway it must be implemented transparently. 
The V4L2 API does not specify how conflicts are solved. "

This makes sense, as software time multiplexing is better handled in user 
space. Now, the board offers hardware-controlled time multiplexing, which 
ensures that sources are switched during vertical blanking, and tries to 
achieve the maximum throughput by modifying the order of the sources on the 
fly during multiplexing. This gives better performance than software-controlled 
time multiplexing, as the switching time between sources is reduced.

In my opinion (but I can be wrong), this hasn't been taken into account when 
designing V4L2.

We have two ways to implement that in the driver.

The first one is to let the user open the device up to 4 times, selecting a 
different input each time, and start streaming on those 4 instances. This may 
seem like a good idea (it certainly looks nice and clean), but I'm not sure it 
is a good solution. If an application opens the device and starts streaming, 
another application could open the same device and start streaming too, which 
would lead to a "spontaneous" framerate drop for the first application. Is 
that a good idea? I doubt it (but again I can be wrong :-).
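
For reference, the first approach would look roughly like this from the 
application side (the node name is a placeholder, and buffer setup and error 
handling are omitted):

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
	int fd[4];
	int input;

	for (input = 0; input < 4; input++) {
		/* One open per source, each bound to a different input. */
		fd[input] = open("/dev/v4l/video0", O_RDWR);
		ioctl(fd[input], VIDIOC_S_INPUT, &input);

		/* VIDIOC_REQBUFS / mmap() / VIDIOC_QBUF setup per instance... */

		enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
		ioctl(fd[input], VIDIOC_STREAMON, &type);
	}

	/* The hardware then time-multiplexes between the 4 streaming inputs. */
	return 0;
}

The framerate drop problem follows directly: each additional streaming open 
silently takes time slots away from the opens already running.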

The second way to implement the hardware multiplexing would be for the driver 
to return the input number in the v4l2_buffer structure passed to DQBUF. The 
application would also need a way to enable or disable a specific input. The 
problem with that approach is that it requires a change to the API. On the 
other hand, this method uses 4 times fewer input buffers than the first one 
(when using mmap'ed streaming I/O), and doesn't modify the framerate behind 
the application's back.
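
To illustrate, here is what the capture loop could look like under that 
proposal. Note that the input field does not exist in the current v4l2_buffer 
structure; it stands for exactly the API extension discussed above, and 
process_frame() is an arbitrary placeholder:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

void process_frame(void *data, unsigned int size, unsigned int input);

/* One shared set of mmap'ed buffers serves all 4 sources. */
void capture_loop(int fd, void *buffers[])
{
	struct v4l2_buffer buf;

	for (;;) {
		memset(&buf, 0, sizeof buf);
		buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
		buf.memory = V4L2_MEMORY_MMAP;
		ioctl(fd, VIDIOC_DQBUF, &buf);

		/* buf.input (the proposed field) tells which source filled
		   the buffer, letting the application demultiplex frames. */
		process_frame(buffers[buf.index], buf.bytesused, buf.input);

		/* The buffer returns to a single pool shared by all sources. */
		ioctl(fd, VIDIOC_QBUF, &buf);
	}
}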

I'm not convinced by either of those methods. The first one seems wrong 
somehow, and the second one is not compatible with the current V4L2 API.

Any advice or help or comment would be appreciated.

Laurent Pinchart



