Re: How to support hardware controlled time multiplexing ?




> When you have one channel split into four devices, like your first way, the
> framerate for each device is one quarter what it is when there is only one
> device.  So you should be able to get away with the same total number of
> input buffers.  e.g. one device at 60 fps with 32 buffers, or four devices
> at 15 fps each with 8 buffers each.

That's right, but if four applications open the device at the same time, they 
will all request 32 buffers, not 8, as they don't know about each other.

> > This one.  Haven't found time yet to look into this in detail (your
> > mails are still sitting in my inbox waiting ...).  The basic idea is to make
> > one of the reserved ints in v4l2_buffer an input number and add a new
> > bit for the flags field to indicate that the new input field is used.
> > To use the new feature, apps must set the flag and fill the input field
> > when queuing buffers via QBUF.  That should be backward compatible
>
> What about overlay?  Say someone wanted to start four instances of xawtv on
> one device and have each overlay one of the time domain multiplexed
> channels into a different window?
>
> With this method you can't do this because:
>  a) xawtv (or anything else for that matter) doesn't support multiple
> overlay windows.
>
>  b) the V4L2 api doesn't support multiple overlays at all, and there is no
>     simple way to use a reserved int in some structure to add that.

Multiplexing inputs in the time domain isn't about creating "virtual" devices 
and expanding a cheap card into a four-channel frame grabber. It is rather 
about using all the possible inputs of a frame grabber for dedicated 
applications such as remote monitoring. We definitely should not try to create 
4 devices out of a single one. The idea here is to use all the capabilities 
of the device.

> For mmap capture, it may not be hard to add this in a backward compatible
> manner.  But no software supports this.  From the point of view of a hardware
> manufacturer or user, it's a huge drawback if your hardware time domain
> multiplexing doesn't work with nvrec, motion, mencoder, and so on.

Why would you want to encode from a single time multiplexed stream? I doubt 
MPEG encoding would achieve outstanding results if 4 video inputs are 
multiplexed into the stream. And all existing software will keep working with 
the hardware: since it won't set the extra bit in v4l2_buffer.flags, the 
driver will not time multiplex inputs.

Laurent Pinchart



