Re: Questions about writing a new webcam driver (3Com Homeconnect)

Quoting Joe Burks <joe-v4l@xxxxxxxxxxx>:

> I'm writing a Linux driver for a USB webcam that is currently unsupported.  I 
> just finished the last of the necessary reverse engineering and have a 
> working usermode utility to capture images so I'm looking to make a proper 
> driver for it, which of course includes all the necessary v4l bindings.  I've 
> run into two problems:
> 
> 1) These pixels aren't square.

You can't do rescaling in the kernel. It's too complicated.

> 2) V4L doesn't recognize its unique color encoding scheme.

Most webcams have their own unique, and usually weird, encodings. You are
expected to convert to the nearest standard format, as Alan suggests. If
your camera can produce datastreams in several wildly different formats
and switches between them depending on image size, then you have to decide
what to do about it: either do the conversion in some cases, or switch
the output format (not good if the image format changes while the camera
is open and running...)

> The color encoding is similar to YUYV (or UYUV) but not the same.  My 
> inclination was to convert the raw data into square RGB24 before passing it 
> on.  However there were two problems with this:

Conversion into RGB24 makes sense only if it is the nearest approximation
of the native datastream. As you describe it, it isn't - so you'd be
better off with some YUV format. But then you can't do software image
controls...

> 1) Scaling an image in the kernel driver takes time and may be a no-no

You can do deinterlacing in the kernel if you really want. The usbvideo
module has a procedure for that - but you are not required to use it,
*and* it works only on RGB24 images. I don't even want to think what it
would take to average two YUV-encoded pixels.
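
For illustration, the line averaging itself is simple on RGB24 because
every component is one byte; roughly like this (a hypothetical sketch of
the idea, not the actual usbvideo routine):

/* Average two adjacent RGB24 scanlines into one output line.
 * Hypothetical sketch of the idea, not the usbvideo code. */
static void average_rgb24_lines(const unsigned char *line0,
                                const unsigned char *line1,
                                unsigned char *out, int width)
{
        int i;

        for (i = 0; i < width * 3; i++)
                out[i] = (unsigned char)((line0[i] + line1[i]) / 2);
}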

Rescaling the image (to compensate for non-square pixels) would involve
"throwing" each of your pixels onto a grid of "regular" pixels and adding
the covered portions to the pixels underneath. I can think of a large
table that controls the process, or a fast formula instead. But you would
probably want floating point for that... do it in userspace somehow?
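
If you do try it without an FPU, integers can stand in for floating
point; for instance a nearest-neighbour horizontal stretch in 16.16
fixed point (an illustrative sketch only - the "covered portions"
version would accumulate fractional coverage the same way):

/* Stretch one RGB24 scanline from src_w to dst_w pixels (dst_w >= src_w)
 * with nearest-neighbour sampling in 16.16 fixed point - no FPU needed.
 * Illustrative sketch only. */
static void stretch_rgb24_line(const unsigned char *src, int src_w,
                               unsigned char *dst, int dst_w)
{
        unsigned int step = ((unsigned int)src_w << 16) / dst_w;
        unsigned int pos = 0;   /* source position, 16.16 fixed point */
        int x;

        for (x = 0; x < dst_w; x++) {
                const unsigned char *p = src + (pos >> 16) * 3;

                dst[x * 3 + 0] = p[0];
                dst[x * 3 + 1] = p[1];
                dst[x * 3 + 2] = p[2];
                pos += step;
        }
}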

> 2) I don't know whether or not some kernel code already exists to do this 
> manipulation or if I should just copy the code from my favorite image 
> manipulation program (gimp or ImageMagick).  I didn't want to be accused of 
> being a kernel newbie, even though I am.

YUV-related conversions generally require matrix transformations with
real coefficients. But you cannot have floating point operations in the
kernel (because, at the very least, not all CPUs have an FPU). This means
that if you do the conversion you do it in integers, which is not very
precise, and you cannot use CPU-specific instructions (MMX etc.) that are
easily available to userland apps.
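
To make "in integers" concrete: the usual CCIR-601 YUV-to-RGB matrix can
be approximated with coefficients scaled by 256. A sketch (not taken from
any existing driver):

/* CCIR-601 YUV to RGB with coefficients scaled by 256, so the whole
 * conversion stays in integer arithmetic. Sketch only. */
static inline unsigned char clamp255(int v)
{
        return (unsigned char)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

static void yuv_to_rgb(int y, int u, int v,
                       unsigned char *r, unsigned char *g, unsigned char *b)
{
        int cu = u - 128;
        int cv = v - 128;

        *r = clamp255(y + ((359 * cv) >> 8));            /* 1.402 * 256 */
        *g = clamp255(y - ((88 * cu + 183 * cv) >> 8));  /* 0.344, 0.714 */
        *b = clamp255(y + ((454 * cu) >> 8));            /* 1.772 * 256 */
}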

> If I don't do the color manipulation in kernel code, I have no matching 
> VIDEO_PALETTE_* video_picture entry.

Your driver should produce the nearest VIDEO_PALETTE_YUVxxx instead.

> Also, looking through the v4l API draft at the building #3 page, I noticed 
> that the only "exposure" control v4l offers is "brightness", kind of like a 
> TV.  So should brightness work like a TV - I believe that brightness on a 
> regular TV adjusts the image gamma.  Should I include gamma correction in my 
> kernel driver?

The usbvideo module has software implementations of some of those controls
for RGB24; at least you can see how they work. A YUV brightness control
would need to do a lot of calculations... you probably don't want to do
that. To change the brightness of a colored YUV pixel you'd need to change
all three components; otherwise changing Y alone just shifts the color.
It will work for a monochrome image, though (because UV=[128,128]
translates into itself).
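
For contrast, on RGB24 a brightness control is just an offset applied to
every byte, which is why it is cheap to do in software. The idea (a
sketch, not usbvideo's actual code):

/* Apply a signed brightness offset to an RGB24 frame, clamping each
 * component to 0..255. Sketch of the idea, not the usbvideo code. */
static void rgb24_brightness(unsigned char *frame, int npixels, int delta)
{
        int i, v;

        for (i = 0; i < npixels * 3; i++) {
                v = frame[i] + delta;
                frame[i] = (unsigned char)(v < 0 ? 0 : (v > 255 ? 255 : v));
        }
}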

> Also, three items affect camera exposure: Aperture (or Iris for 
> those film industry conscious among you), Shutter speed, and gain.  While 
> it's hard for me to envision a consumer end camera having Aperture control, 
> the camera I'm working on offers shutter speed and gain (I do have a high end 
> consumer camcorder that offers aperture control - but it uses firewire).  V4L 
> offers no way to control these, therefore no way to set correct exposure.  
> I'm assuming that unlike my camera here, most cameras automatically set 
> exposure themselves.  So, should I try cooking up my own automatic exposure 
> setting (again in the kernel driver), or should I write a utility which works 
> with the ioctl interface to set these parameters outside of V4L applications?

Many of these settings are optional and can be passed to the driver as
module parameters - or accessed through the /proc filesystem. The usbvideo
module offers this interface already.
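
For example, exposing an initial gain setting as a module parameter in
the usual 2.4 style (the parameter name here is made up):

/* Hypothetical module parameter for an initial sensor gain. */
static int init_gain = 128;
MODULE_PARM(init_gain, "i");
MODULE_PARM_DESC(init_gain, "Initial sensor gain (0..255)");

Then "modprobe mydriver init_gain=200" sets it at load time.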

Generally, you should use the usbvideo module (if you can) because it
already contains 2000+ lines of v4l code that you most certainly don't
want to duplicate. You need to write a "minidriver" that only deals with
your device and nothing but your device, and leave the rest to the shared
code. It's time to put an end to cut-and-paste practices :-) We already
have thousands of lines of "similar-but-slightly-different" code that
has to be maintained individually; that hurts.

Examples of minidrivers that use usbvideo are ibmcam, webcamgo, and
ultradrv. If you need to add something to usbvideo to support your class
of cameras, patches are welcome!
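
From memory, the overall shape of a minidriver entry point looks roughly
like the following - but check the callback-table fields and the exact
usbvideo_register() signature against usbvideo.h and ibmcam.c rather than
trusting this sketch:

/* Rough sketch of a usbvideo minidriver init, patterned after ibmcam.
 * Names and signatures are from memory - verify against usbvideo.h
 * before using. */
static struct usbvideo *cams;

static int __init mycam_init(void)
{
        usbvideo_cb_t cbTbl;

        memset(&cbTbl, 0, sizeof(cbTbl));
        cbTbl.probe       = mycam_probe;        /* recognize the device */
        cbTbl.videoStart  = mycam_video_start;  /* start isoc streaming */
        cbTbl.videoStop   = mycam_video_stop;   /* stop isoc streaming  */
        cbTbl.processData = mycam_process_data; /* decode your format   */

        /* usbvideo supplies the v4l bindings, buffering and ioctls. */
        return usbvideo_register(&cams, MAX_MYCAM, sizeof(struct mycam),
                                 "mycam", &cbTbl, THIS_MODULE);
}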

Cheers,
Dmitri

-- 
panic("esp_handle: current_SC == penguin within interrupt!");
(Panic message in the kernel.)
