Hello Gerd,

> > With this, Justin's approach using a "videodevX" would
> > not be needed any more.
>
> What is bad with this?
>
> > You could load both "videodev.o"
> > and "videodev2.o" and use your old radio-drivers for
> > v4l and the new v4l2-driver for your video-hardware
> > for example.
>
> So you can with "videodevX.o".

Sure, but it's additional code, so it adds unnecessary complexity.
Still, "videodevX.o" is not the problem -- as long as it only
dispatches applications to the right interface, that's ok. The
thing that bothers me most is the backward compatibility. (see
below)

> > Nobody has said: well, you have to add a compatibility layer
> > to ext2, so it can work with old ext-partitions as well.
> > It was a complete overhaul, too.
>
> It's a different story. Your filesystem is either ext or ext2,
> period. The v4l2 driver has to handle both v4l1 and v4l2
> interfaces *at the same time* to be able to support both old
> and new applications.

And that's the thing I don't understand. In my eyes it should be
either v4l or v4l2 -- period. 8-)

v4l2 should not provide a half-working compatibility layer (look
at mmapping and capturing in general) for old v4l applications.
With that, you will never be able to make a clear statement like:
"v4l is obsolete, use v4l2 instead."

v4l2 should be integrated into the kernel -- I think we all agree
on that. But in my eyes the integration should be as tight as
possible. So, assigning a new major number is one solution, as is
using "videodevX.o".

> > What about memory in regions
> > that cannot be addressed via DMA?
>
> return -EINVAL on QUERYBUF?

Another question: how can an application force memory to be
allocated in a DMA-accessible region? (Perhaps this is trivial,
but I have never used such a machine...)

> > Have these things been solved?
>
> IMHO they have nothing to do with the API design.

Surely not, but if these things are not working, designing the
API is pointless IMHO.

> Having overlay (i.e. watch TV / check camera with xawtv) and
> capture work in parallel is useful.
> I expect this will be used more frequently than having two
> applications do capture in parallel (at least with the common
> TV cards, which can capture from a single input source only).

What exactly do you mean by "work in parallel"? I can only think
of this: "capture" has a higher priority than overlay, e.g. if
there are two opens (one doing overlay, one doing capture),
starting capture disables overlay. When capturing is stopped,
overlay is enabled again. If one app is capturing, starting
capture with the other app fails.

> > If multiple capturing opens should be supported, this should
> > be done using a userspace library anyway. There is no need
> > to add the complexity to every other driver out there --
> > some simple hooks + userspace library are much better.
>
> (1) Supporting multiple opens is not mandatory.
> (2) If a driver decides to support multiple opens it has to
>     keep multiple contexts anyway. I can't see how switching
>     them with "simple hooks" rather than with multiple
>     filehandles makes it easier for the driver.

The driver does not have to maintain a copy of all settings for
each open. Plus, not every driver needs to contain the code for
multiple opens -- only the existing functions have to be
supported. All relevant aspects can be set via calls like S_FREQ,
S_FMT and S_CHAN, and the userspace library can keep a copy of
all these parameters. Whenever an open wants to capture, the
library checks whether this is currently possible and then uses
the standard calls to set everything up according to that open's
parameters.

But after thinking this over, I must admit that it will most
probably not work. One big problem is that the library cannot
request multiple buffers and then let the different apps mmap
them. This could only be done by heavily modifying both the
existing drivers and the existing v4l2 API. That is nothing I
want either...

> Gerd

CU,
Michael.
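P.S.: to make sure we are talking about the same thing, here is a
minimal userspace sketch of the priority scheme I describe above
(capture preempts overlay, overlay resumes when capture stops, and
a second capture open fails). The struct and function names are
hypothetical, not from any existing driver:

```c
/* Sketch of the capture-over-overlay priority scheme.
 * All names here are made up for illustration only. */
#include <assert.h>
#include <stdbool.h>

struct dev_state {
	int  capture_owner;   /* open currently capturing, -1 if none */
	int  overlay_owner;   /* open that requested overlay, -1 if none */
	bool overlay_active;  /* overlay actually running right now */
};

/* Overlay request is always remembered, but overlay only runs
 * while nobody is capturing. */
static void start_overlay(struct dev_state *d, int fh)
{
	d->overlay_owner  = fh;
	d->overlay_active = (d->capture_owner == -1);
}

/* Starting capture preempts overlay; a second open trying to
 * capture fails (a real driver would return -EBUSY). */
static int start_capture(struct dev_state *d, int fh)
{
	if (d->capture_owner != -1 && d->capture_owner != fh)
		return -1;
	d->capture_owner  = fh;
	d->overlay_active = false;
	return 0;
}

/* Stopping capture re-enables overlay if it was requested. */
static void stop_capture(struct dev_state *d, int fh)
{
	if (d->capture_owner == fh) {
		d->capture_owner  = -1;
		d->overlay_active = (d->overlay_owner != -1);
	}
}
```

So overlay is a "best effort" consumer that simply yields to the
single allowed capture stream.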