On Thursday, March 21, 2002 9:38 AM, Ronald Bultje wrote:
> On Thu, 2002-03-21 at 14:10, Joe Burks wrote:
> > There are a lot of good reasons for a camera *NOT* to do all the
> > filtering in firmware.
>
> And for exactly that reason, you shouldn't do it in a base lib as
> well, but let the application find a suitable way of doing it, as it
> feels like doing itself.

See Alan's response. There are two major reasons to provide these
higher levels of video processing at some level below the application.

1) Common consistent interface

To quote from http://www.linuxpower.org/display.php?id=216:

"You cannot satisfy the basic goal of a video I/O API--device
independence--unless there is a common set of scales, pixel formats,
aspect ratios, and crops that are supported by all devices"

While I don't necessarily agree with all of the article, I agree that
device independence is the goal. This doesn't preclude device
dependence...you can always give applications a way to establish what
the particular device can and can't do in hardware. It does mean that
you need to provide a way for all devices to do a common subset of
things in a standard way.

The big question is what that common subset is. My answer would be: as
much as possible. Like I said, applications that *want* to implement
their own custom filters can, but most won't want to, and will
therefore have nothing that you don't provide. Compare and contrast:
how many V4L apps implement gamma correction, or even cropping? How
many would benefit?

2) Hardware assistance/acceleration

Here we are back to the device independence point again. To make any
hardware acceleration practical, there needs to be a way for
applications to use it without having to know all the capabilities of
all devices. This is only practically accomplished by having software
take up the slack on devices that don't have the hardware filters (see
the sketch further down). And again, you can (and probably should)
always provide info to the application about what is done in hardware
and what is not, so the application can make intelligent choices.

> How many people don't have Xv? I don't (nvidia's XFree driver doesn't
> do Xv and I don't want to run nvidia.com's driver). Other people feel
> fine with it. Hence my point, everyone's needs are different!

Yes, everyone's needs are different; so what? Please all of the people
all of the time, then: you can do that and still provide standard
mechanisms for video processing stages.

> So, if it isn't there, hack it in the applications. But don't let the
> driver or quasi-driver (libv4l) do it. What about closedsource apps
> (not that we care) that have terribly fast asm/mmx/sse-algorithms
> (but proprietary). Should we just make their live impossible?

This is a horrible idea! Do you realize how *annoying* it is when 5
applications say they do the same thing but all have different
results? Why not provide these very commonly used facilities, and let
BigMoney Corp just not use them in their applications if they think
they have something better?

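To make the hardware assistance point a little more concrete, here is
the sort of thing I have in mind for a single stage. All of the names
here (vid_device, vid_set_gamma, has_hw_gamma and so on) are made up
purely for illustration; this is not a proposal for actual libv4l
entry points, just a sketch of the software-takes-up-the-slack idea:

#include <math.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical device handle.  The only parts that matter for this
 * sketch are a capability flag saying whether the device can do gamma
 * correction itself, and a hook the driver backend fills in when it
 * can. */
struct vid_device {
    int has_hw_gamma;
    int (*set_hw_gamma)(struct vid_device *dev, double gamma);
};

/* Software fallback: precompute an 8-bit lookup table and run the
 * frame through it.  'pixels' is assumed to be 8-bit samples (grey,
 * or each byte of packed RGB). */
static void sw_gamma(uint8_t *pixels, size_t n, double gamma)
{
    uint8_t lut[256];
    int i;

    for (i = 0; i < 256; i++)
        lut[i] = (uint8_t)(255.0 * pow(i / 255.0, 1.0 / gamma) + 0.5);
    for (; n > 0; n--, pixels++)
        *pixels = lut[*pixels];
}

/* One entry point for applications.  Returns 1 if the hardware did
 * the work (and will keep doing it for future frames), 0 if the
 * library corrected this frame in software, so the caller can still
 * make an informed choice. */
int vid_set_gamma(struct vid_device *dev, uint8_t *frame, size_t n,
                  double gamma)
{
    if (dev->has_hw_gamma && dev->set_hw_gamma(dev, gamma) == 0)
        return 1;
    sw_gamma(frame, n, gamma);
    return 0;
}

The application makes one call either way, and can find out (from the
return value, or from a capability query up front) whether the device
or the library did the work.
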
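Along the same lines, and looking ahead to the list of stages I give
at the end of this message, the library could also describe the whole
set to applications in one place. Again, every name below is invented
for the sake of the example; the point is only that "what is provided"
and "what the device accelerates" are two separate masks:

/* Illustrative only: the common processing stages as a bitmask. */
enum vid_stage {
    VID_STAGE_CROP        = 1 << 0,
    VID_STAGE_SCALE       = 1 << 1,
    VID_STAGE_GAMMA       = 1 << 2,
    VID_STAGE_DEINTERLACE = 1 << 3,
    VID_STAGE_ROTATE      = 1 << 4,
    VID_STAGE_CSC         = 1 << 5   /* color space conversion */
};

struct vid_pipeline_caps {
    unsigned int provided;    /* stages the library offers on this device */
    unsigned int accelerated; /* the subset the device does in hardware   */
};

An application that cares can compare the two masks and make an
intelligent choice; one that doesn't can simply enable a stage and let
the library worry about where it runs.
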
To quote Scott Meyers (Effective C++, Second Edition, pages 79-81):

"Trying to figure out what functions should be in a class interface
can drive you crazy. ... Try this: aim for a class interface that is
_complete_ and _minimal_. A _complete_ interface is one that allows
clients to do anything they might reasonably want to do. That is, for
any reasonable task that clients might want to accomplish, there is a
reasonable way to accomplish it, although it may not be as convenient
as clients might like. A _minimal_ interface, on the other hand, is
one with as few functions in it as possible, one in which no two
functions have overlapping functionality. ... It is often justifiable
to offer more than a minimal set of functions. If a commonly performed
task can be implemented much more efficiently as a member function,
that may well justify its addition to the interface. If the addition
of a member function makes the class substantially easier to use, that
may be enough to warrant its inclusion in the class."

While I agree that V4L is minimal, I argue that it is *not* complete.
The tasks of scaling, cropping, gamma correction, de-interlacing,
rotation and color space conversion (and possibly others...for example
closed captioning/logo overlay support) are so common in video
applications that a video API lacking the ability to configure and/or
provide them is incomplete.

-Scott Tillman
 Viewcast - Osprey division