On Thu, 2002-03-21 at 22:16, Billy Biggs wrote:
> Joe Burks (joe-v4l@xxxxxxxxxxx):
>
> > I don't think the majority of time would be spent writing the
> > conversion function. I bet I could google for a RGB->YUV or
> > YUV->RGB algorithm and have the result converted to code, tested
> > and debugged within an hour.
>
> The problem with this argument is that the resulting code will
> likely be subtly wrong. Remember:
>
> 1. Conversion from Y'CbCr->R'G'B' is very lossy

And it is slow, since it is done pixel by pixel.

> 2. The conversion code is much more complex if you care about quality

Generally, it should not be done in integers.

> 3. There are about a million variants of YUV colorspaces and the V4L
>    API isn't sufficient to correctly describe them :)

The worst variants come from OEMs who say "No, our firmware for the
DSP is not broken, it's just that we invented Yet Another YUV
colorspace" :-) I have a camera for which I still can't figure out
what it sends. I even have the decoder, but the colors are not quite
right... and I fear that without the OEM's help (the exact
transformation matrix) it is not even theoretically possible to fix
it.

> So one would imagine that all the webcam apps would do the long,
> slow, but accurate conversion code, probably using some expensive
> processing filters, and that the playback applications would, where
> necessary, do the fast but very inaccurate MMX transforms you see in
> the DVD apps.

I have yet to see working code like that which is suitable for kernel
space.

> You'd also expect that eventually the webcam apps would be sharing
> so much code with all these expensive filters that eventually they'd
> do a library. Maybe they're all waiting for someone to write this
> code for them? Is that what you're saying?

The kernel-space library is already available (the usbvideo module).
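For the record, here is a minimal sketch of what one such conversion
looks like. It assumes the BT.601 "studio swing" coefficients (Y' in
16..235, Cb/Cr in 16..240); every other YUV variant needs a different
matrix, which is exactly the problem described above:

```c
#include <math.h>

/* Clamp a float to the 0..255 range of an 8-bit channel. */
static unsigned char clamp255(double v)
{
    if (v < 0.0)   return 0;
    if (v > 255.0) return 255;
    return (unsigned char)(v + 0.5);
}

/*
 * One-pixel Y'CbCr -> R'G'B' conversion using BT.601 studio-swing
 * coefficients.  These constants are an assumption: other YUV
 * variants use a different matrix and different ranges, and getting
 * them wrong produces the "subtly wrong" results mentioned above.
 */
void ycbcr601_to_rgb(unsigned char y, unsigned char cb, unsigned char cr,
                     unsigned char *r, unsigned char *g, unsigned char *b)
{
    double yv  = 1.164 * (y - 16);
    double cbv = cb - 128;
    double crv = cr - 128;

    *r = clamp255(yv + 1.596 * crv);
    *g = clamp255(yv - 0.813 * crv - 0.392 * cbv);
    *b = clamp255(yv + 2.017 * cbv);
}
```

Note that it is floating point and per pixel, which is why I say it is
slow and does not belong in the kernel.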
I intentionally do not expand its conversion capabilities because even
the little that is there now should eventually be removed, as Alan has
vaguely hinted more than once :-)

> > That is why I'm lobbying for a user space (middle tier, if you
> > will) driver component that will keep the kernel space portion of
> > the driver simple and make the kernel hackers happy, make the
> > application side simple so they're happy, and make the user
> > experience good.
>
> I intend to help erik and wim with libcolorspace (codecs.org), but I
> worry apps will either care too much about fretting over
> dependencies (nobody will run my app if they have to download a
> library) or over style (eww, but that lib is in C!) or something
> else just as silly.

Dependencies are a well-known issue, but they are not as frightening
as you make them out to be. App developers can even include the
library in their code, as long as the license(s) permit that.

> If you do something that's userspace but transparent to any user of
> the driver API, you'll piss off anyone who cares about full
> framerate video recording (which is pretty much everyone these
> days). 16ms per field is not a lot of time to be switching around
> doing conversions. Not only that, you're likely to offend any
> quality watchers.

Not that many USB 1.1 cameras come even *close* to 30 fps :-)

> > Well, as you just said there are already a number of drivers that
> > do colorspace conversion in the kernel (that's why there is a
> > YUV->RGB macro in ibmcam.c). In fact cpia.c (my personal favorite
> > sample v4l driver - it's the place I started when writing mine)
> > does color space conversion to RGB555, RGB565, RGB24, RGB32, GREY,
> > YUYV, UYUV, and YUV422 (I think it might be YUV420 natively).
>
> Well that sucks. Why do these drivers have to totally screw those of
> us who care about quality?

CPiA was the first USB video driver, ever. I copied big chunks of it
when it had only RGB24 output. That's what we have now.
> > Basically the email attached to that one says "Making a driver
> > compatible with the existing v4l applications is forbidden".
>
> Forbidden because no app supports all the color space encodings.

Actually, only a few of the best apps will survive. The rest will die
because, without a third-party library, nobody will be able to cope
with so many different formats coming from so many different
drivers...

> > Basically v4l is broke, and nobody is allowed a work around to fix
> > it (except those who already managed to get in).

v4l was bttv-centric, and it still is.

> Application writers need to either get over the fact that they
> should be building a lib since none of the kernel folks have.
> libcolorspace is a step in that direction.

That is definitely true. Other people wanted (a year ago) to write
such a lib, but apparently nobody had time to do it.

I can remove the YUV->RGB conversion from the ibmcam driver in less
than 10 minutes (plus some time to figure out which output format I
should use). However, there are some questions:

- How will I paint into the output buffer? I want to draw numbers,
  lines and colored pixels there. I assume I have only the Y channel
  to easily play with?

- How do I implement the software controls (contrast, hue,
  saturation) for cameras that don't have them in hardware? In RGB, I
  can do 2 out of those 3 more or less reasonably.

I think it would be very safe to make this change in the 2.5 tree.
But it is worth mentioning that in some modes the camera sends an RGB
stream, and that will result in RGB24 output. No conversions, though.

Cheers,
Dmitri