Quoting Joe Burks <joe-v4l@xxxxxxxxxxx>:

> In fact if I read usbvideo.c right, any mini driver based on it will start
> streaming after a call to VIDIOCMCAPTURE.

It prepares the indicated frame and returns immediately. The camera sends
data continuously, and this data is stored in a ring queue. This data is
NOT moved into the frame buffer, because that would be the wrong context
(the queue runs in HCD interrupt).

> If you start streaming after a VIDIOCMCAPTURE ioctl, and block on VIDIOCSYNC
> we have a problem.

The most important operations in usbvideo.c are performed in VIDIOCSYNC
context:

	case VIDIOCSYNC:
		while (not done) {
			- wait for new "raw" data from the camera
			- call the datastream decoder, fill the frame buffer
		}
		postprocess the valid frame buffer (contrast etc. - optional)
		return 0;

> *or*
> VIDIOCSYNC does free the frame for reuse in which case there is a chance that
> the driver could be writing to the buffer while the application is trying to
> read from it.

In usbvideo the frame will be marked as unused, but it needs another
VIDIOCMCAPTURE call to mark it as FrameState_Grabbing. This means that
after the VIDIOCSYNC call the frame won't be touched by the driver until
the app explicitly asks to capture some more into it.

> So, if the answer is you call VIDIOCMCAPTURE to queue a frame to be read, and
> you call VIDIOCSYNC to wait for the frame to come in... Then the API doc is
> not just unclear but arguably completely incorrect, the bttv driver looks
> like it is broken, and the zapping application looks like it is broken too.

API.html:

>> Once the mmap has been made the VIDIOCMCAPTURE ioctl sets the image
>> size you wish to use (which should match or be below the initial query
>> size).

So far so good. usbvideo does that.

>> Having done so it will begin capturing to the memory mapped
>> buffer.

usbvideo is putting the camera data into another, internal buffer.
But for all practical purposes it will "start capturing" right away;
unless you wait too long, in which case some old frames will be lost, of
course. However, *nothing* will be put into the mmap'ed buffer yet,
because that cannot safely be done in the driver's context (unless it
runs its own thread) - it involves waiting for data. Been there, done
that, changed back to user (ioctl) context.

>> Whenever a buffer is "used" by the program it should called
>> VIDIOCSYNC to free this frame up and continue.

Not "used" but "consumed", I guess. Whatever. It can't work as described
in the paragraph above, simply because the driver and the camera need
time to collect the frame and put it into the frame buffer. This time
varies and can't be known beforehand. That is exactly why the app has to
issue the VIDIOCSYNC request - to *wait* until the entire buffer is
there. Additionally, the driver may need to perform some operations on
the *entire image* (inverting, mirroring, software adjustments etc.)

I guess the text in API.html is good for memory-mapped grabbers, which
offer instant access to the frame (which is already and always there).
This won't work for other, sequential devices.

Dmitri
--
The new Linux anthem will be "He's an idiot, but he's ok", as performed
by Monthy Python. You'd better start practicing.
	(Linus Torvalds, announcing another kernel patch.)