On Thursday 21 March 2002 03:43 am, Ronald Bultje wrote:
> Imo, a driver just does what the device is designed for. If the device
> is designed to give a bad image, then let the driver give a bad image.

How many cameras and other image acquisition systems rely on software
filtering? Are you happy with those devices being of inferior quality
on Linux? The quality of the image coming off a camera or scanner is
directly proportional to how much processing power the manufacturer
decided to put on the device. What is the cost of increasing the
computing power on the device vs. the cost of producing a CD with a
driver on it? If corrections need to be made, which is cheaper to
upgrade? There are a lot of good reasons for a camera *NOT* to do all
the filtering in firmware.

> So, let's see... Scaling. How are you gonna scale? There's various ways
> to do this, either of which has advantages over some of the others. So,
> which are we gonna use? And how are we going to program it? If I have a
> better but uglier way, which won't be put in libv4l, then what?

How is it done now? The recommendation I saw was "we could just let xv
do it". How many apps are perfectly happy to let xv manipulate their
images for display?

> Application developers are supposed to be smart people who know what
> they're doing and can write smart apps. Sure, they can use each others
> code and make a generic library for some functions, but these functions
> should in no way be forced upon the developer. If someone has better
> code, let him use that.

How many current v4l applications support gamma correction? How do I
express the need for this sort of post-processing to the v4l
application? Why aren't application developers pumping out more v4l
apps with advanced post-processing capabilities? How about obvious
post-processing tasks like rotation? It's easy to rotate a camera 90
degrees to get better scene composition, yet few v4l apps have that
feature.
Heck, my live webcam is mounted to a bracket on the ceiling; that image
needs 180 degrees of rotation. If I wanted to stream live video from
that camera, it'd look horrible.

> Everyone's demands are different. Too different to make one generic
> library and force everyone to use it.

Not sure where that came from... I'm suggesting the need for a
user-space segment of a driver, because the acquired image needs
post-processing before it is ready for the app requesting it, and
doing this in the kernel is a Bad Thing. A library could help with
things like gamma correction, called from the user-space segment of
the driver, but the driver writer could decide at his discretion that
he prefers some other algorithm.

> It's not the kernel or driver or whatever core library's function to do
> all that. The application should do that. So, build a nice application
> for webcam grabbing that can grab images and improve their quality. But
> never force people upon using one method or the other.

I do not want to write my own gqcam, xawtv, zapping and gnomemeeting
that can do post-processing. I want to be able to feed the
applications that somebody else is pouring effort into with a good
video stream. I want a driver that already understands the hardware
and knows what steps to take to make that video of the highest
practical quality. If we want the application to have the option of
saying "give me an unfiltered video stream", I have no problem with
that. Then it could use its own gamma, rescaling, contrast, whatever.
But I'm willing to bet that few applications will make use of it,
based on the fact that the apps available now could do this and
generally don't.