Re: v4l2 + kernel




> >  * virt_to_bus() is gone in 2.4.x.  It is not available any more for
> >    all architectures (i386 does still work, sparc doesn't for example).
> >    You have to use the PCI API (see Documentation/DMA-mapping.txt)
> >    instead.  Looks like the mm helper functions are obsolete now...
> 
> Rats, and they were soooo convenient.  Right now I would say leave them
> in, but make it generate a syslog message informing the user that the
> application is out of date the first time they are used.

Hmm, a compile-time warning would be better IMHO, as it is a driver
development issue.  But I have no idea how to do that ...


> >  * What exactly is the purpose of the v4l2_q_* functions?  Any reason
> >    not to use the generic lists (linux/lists.h) instead?
> 
> v4l2_q functions include locking.  I suppose they could be implemented
> on top of lists, but the current implementation is pretty clean and
> solid.

Is this useful?  My driver code often looks like this:

spin_lock(&btv->lock);
/* list operations, maybe multiple */
/* other stuff which needs run locked (tell the hardware about
   the new buffer, ...) */
spin_unlock(&btv->lock);

so there is no point in using list ops which include locking ...


> >  * What exactly was the reason to have one mmap() call for every buffer
> >    instead of one big chunk like in v4l1?  As the drivers have to
> >    support one big chunk anyway for full v4l1 backward compatibility,
> 
> Not really - only if the apps don't check the number of buffers returned
> by GMBUF (if they do, you get full v4l1 compatibility, but only 1
> buffer...)

The point of using mmap()ed buffers is being able to do double buffering.
Just one buffer isn't really useful ...


> >    why not handle v4l2 the same way?  That would make things a bit easier
> >    I think.  And for dma-to-userland support we have to touch the API
> >    anyway ...
> 
> I'm not sure about this one.  You probably find that there is some
> hardware that captures into on-board memory at some fixed locations?

You can let the applications map the whole memory block then.

> Not really sure, but the flexibility is good.

I don't think having multiple mappings gives you more flexibility as
you can do in the nopage handler whatever you want.  It's just a virtual
mapping which has nothing to do with the physical memory locations.


> Wouldn't one single block be _worse_ for dma-to-userland?  Restricting
> the user to one big buffer, instead of pointing the driver to multiple
> independent buffers, or do you plan to differentiate the two techniques
> (in which case, why kill the ability to mmap multiple buffers)?

I'm thinking about using a (userspace pointer, length) tuple to reference
the buffer memory.

For a kernel buffer the driver has to fill this.  It will point into the
memory chunk mapped by the application first for obvious reasons, and the
driver even can pick fixed locations if it wants.

For userland buffers the application has to specify that tuple:  either
register a fixed set of buffers first (QUERYBUF) or fill them for every
QBUF.  Not sure yet which way is better.  Of course the latter is more
flexible, but allocating new buffers all the time isn't for free (neither
for the v4l driver nor for the application which might want to put stuff
directly into X11 MIT SHM blocks for example).

  Gerd




