> 1) Scaling an image in the kernel driver takes time and may be a no-no

Don't scale it - you damage the data as well. You have a good point though,
and people are working on updating the API for video in the kernel. The
point about pixel shape is a very relevant one.

> If I don't do the color manipulation in kernel code, I have no matching
> VIDEO_PALETTE_* video_picture entry.

The reason we complain to people about format conversions is that most
drivers don't need them. If your camera produces a YUYV-like format,
convert to a supported YUV one; if it's RGB based, then RGB888 is great.
YUV->RGB in the kernel is generally not nice.

> TV. So should brightness work like a TV - I believe that brightness on a
> regular TV adjusts the image gamma. Should I include gamma correction in my
> kernel driver?

If the camera doesn't have controls, don't implement them. The reason for
this is that apps then know they should do it themselves, know they can
expect a performance hit from it, and can plan appropriately.

> support in the kernel driver for those? Most of the things in video_picture
> sound like things that should be done post image acquisition (in application
> code), but I assume they are there because other cameras offer them.

Exactly.

> consumer camcorder that offers aperture control - but it uses firewire). V4L
> offers no way to control these, therefore no way to set correct exposure.
> I'm assuming that unlike my camera here, most cameras automatically set
> exposure themselves. So, should I try cooking up my own automatic exposure
> setting (again in the kernel driver), or should I write a utility which works
> with the ioctl interface to set these parameters outside of V4L applications?

For the moment add a private ioctl, but this is also relevant to fixing
the API.