> How many cameras and other image acquisition systems rely on
> software filtering? You are happy with those devices being of inferior
> quality on Linux?

The quality of the image coming off a camera or scanner is not determined
by where the filtering is done. If the quality is inferior then you got the
code wrong.

> There are a lot of good reasons for a camera *NOT* to do all the filtering in
> firmware.

Ditto in the kernel driver.

> How is it done now? The recommendation I saw was "we could just let xv do
> it". How many apps are perfectly happy to let xv manipulate their images for
> display?

For many applications it's ideal, and it's efficient too. For other stuff:
if I'm doing something like video stream encoding, I want to do the scaling
in specific ways and tiled, which you won't be doing in a generic driver.
If I'm doing high-quality processing, I probably want to FFT the image in
both directions and filter it too.

> composition, few v4l apps have that feature. Heck my live webcam is
> mounted to a bracket on the ceiling, that image needs 180 degrees of
> rotation. If I wanted to stream live video from that camera, it'd look
> horrible.

Rotation is hard unless you happen to have square pixels at both source and
destination. It only _seems_ easy.

> If we would want the application to have the option of saying "give me an
> unfiltered video stream", I have no problem with that. Then they could use
> their own gamma, rescaling, contrast, whatever. But I'm going to bet
> that few applications are going to make use of it. I base this on the fact
> that the apps available now could do this and generally don't.

Actually they all pretty much do. They open the kernel interface and use it
rather than going via a preprocessing library. Once you have a good
library, most people would use it.
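To make the "their own gamma, rescaling, contrast" point concrete, here is a
minimal sketch of the kind of per-frame processing an application can do on a
raw stream entirely in userspace. The function names (`gamma_lut`,
`apply_lut`, `flip180`) and the flat 8-bit greyscale buffer are my own
illustrative assumptions, not any V4L API; the 180-degree case also shows why
that particular rotation is the one easy one, since it is a plain reversal of
the pixel order:

```python
def gamma_lut(gamma):
    """Build a 256-entry lookup table for gamma-correcting 8-bit samples."""
    return [round(255 * (i / 255) ** (1.0 / gamma)) for i in range(256)]

def apply_lut(frame, lut):
    """Apply a per-pixel lookup table to a flat greyscale frame."""
    return [lut[p] for p in frame]

def flip180(frame):
    """Rotate a flat row-major greyscale frame by 180 degrees.

    For a single-channel buffer this is just reversing the pixel order,
    which is why 180 degrees is the easy case; arbitrary-angle rotation
    with non-square pixels needs real resampling.
    """
    return frame[::-1]

# Example: a tiny 3x2 "frame" processed entirely by the application.
frame = [0, 64, 128, 192, 255, 32]
corrected = apply_lut(frame, gamma_lut(2.2))
upside_down = flip180(corrected)
```

The point is that none of this needs to live in the driver: an app that wants
unfiltered frames can do exactly this itself, and a good shared library would
make it the path of least resistance.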