I'm writing a Linux driver for a USB webcam that is currently unsupported. I've just finished the last of the necessary reverse engineering and have a working user-mode utility to capture images, so I'm looking to make a proper driver for it, which of course includes all the necessary V4L bindings. I've run into two problems:

1) The pixels aren't square.
2) V4L doesn't recognize the camera's unique color encoding scheme. The encoding is similar to YUYV (or UYVY) but not the same.

My inclination was to convert the raw data into square-pixel RGB24 before passing it on. However, that has two problems of its own:

1) Scaling an image in the kernel driver takes time and may be a no-no.
2) I don't know whether kernel code already exists to do this manipulation, or whether I should just copy the code from my favorite image manipulation program (GIMP or ImageMagick). I didn't want to be accused of being a kernel newbie, even though I am.

If I don't do the color manipulation in kernel code, I have no matching VIDEO_PALETTE_* entry for video_picture.

Also, looking through the V4L API draft on the building #3 page, I noticed that the only "exposure" control V4L offers is "brightness", kind of like a TV. So should brightness work like it does on a TV? I believe brightness on a regular TV adjusts the image gamma, so should I include gamma correction in my kernel driver?

The video_picture struct also contains several other bits and pieces the camera doesn't support (like hue, colour and contrast). Should I include support for those in the kernel driver? Most of the things in video_picture sound like things that should be done after image acquisition (in application code), but I assume they are there because other cameras offer them.

Also, three things affect camera exposure: aperture (or iris, for the film-industry-conscious among you), shutter speed, and gain.
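To make the color problem concrete, here is the conversion I currently do in my user-mode utility for the standard YUYV case (an integer BT.601 approximation); my camera's encoding is close to this but not identical, and the function names are just my own:

```c
#include <stdint.h>

/* Clamp an int into the 0..255 byte range. */
static uint8_t clamp8(int v)
{
	return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v;
}

/* One Y/U/V sample to R,G,B using an integer BT.601 approximation. */
static void yuv_to_rgb(int y, int u, int v, uint8_t *rgb)
{
	int c = y - 16, d = u - 128, e = v - 128;

	rgb[0] = clamp8((298 * c + 409 * e + 128) >> 8);           /* R */
	rgb[1] = clamp8((298 * c - 100 * d - 208 * e + 128) >> 8); /* G */
	rgb[2] = clamp8((298 * c + 516 * d + 128) >> 8);           /* B */
}

/* Each 4-byte YUYV macropixel (Y0 U Y1 V) expands to two RGB pixels
 * that share one chroma pair. n_pixels must be even. */
void yuyv_to_rgb24(const uint8_t *src, uint8_t *dst, int n_pixels)
{
	int i;

	for (i = 0; i < n_pixels; i += 2) {
		yuv_to_rgb(src[0], src[1], src[3], dst);     /* Y0 + U,V */
		yuv_to_rgb(src[2], src[1], src[3], dst + 3); /* Y1 + U,V */
		src += 4;
		dst += 6;
	}
}
```

This is cheap per pixel, but it still has to touch every byte of every frame, which is part of why I'm nervous about doing it (plus rescaling) inside the kernel.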
While it's hard for me to envision a consumer-end camera offering aperture control, the camera I'm working on does offer shutter speed and gain. (I do have a high-end consumer camcorder with aperture control, but it uses FireWire.) V4L offers no way to control these, and therefore no way to set correct exposure. I'm assuming that, unlike my camera, most cameras set exposure automatically. So, should I try cooking up my own automatic exposure setting (again, in the kernel driver), or should I write a utility that uses the ioctl interface to set these parameters outside of V4L applications?

Thanks in advance,
-Joe
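P.S. By "cooking up my own automatic exposure setting" I mean something like the feedback loop below, shown here as plain user-mode C. The names, target value, and one-step adjustment policy are all just my own illustration, not anything from V4L or the hardware:

```c
#include <stddef.h>
#include <stdint.h>

/* Mean luminance of a luma-only buffer (0..255 per sample). */
static unsigned mean_luma(const uint8_t *luma, size_t n)
{
	unsigned long sum = 0;
	size_t i;

	for (i = 0; i < n; i++)
		sum += luma[i];
	return (unsigned)(sum / n);
}

/* One control step per frame: raise or lower the gain by one unit
 * when the measured mean falls outside a deadband around the target,
 * otherwise hold. The caller passes the hardware's gain range so the
 * result never leaves [gain_min, gain_max]. */
int adjust_gain(int gain, int gain_min, int gain_max,
		unsigned mean, unsigned target, unsigned deadband)
{
	if (mean + deadband < target && gain < gain_max)
		return gain + 1;
	if (mean > target + deadband && gain > gain_min)
		return gain - 1;
	return gain;
}
```

The same step function could drive shutter speed first and fall back to gain, which is roughly what I'd want; the open question is whether a loop like this belongs in the driver at all or in a separate utility talking to it via ioctl.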