Gerd Knorr wrote:
> > > The core uses deep buffering to give some latitude to the synch
> > > routines, and absorb any transient latencies. Until v4l gets
> > > timestamping, the only way to buffer deeply with v4l would be to
> > > have a tight loop CSYNCing frames as soon as possible, and
> > > timestamping them, then copying them into a userspace buffer -
> > > which is a hideous waste of processing power.
>
> Having a thread which does nothing but CSYNC + gettimeofday() certainly
> helps to get more exact timestamps (one of the reasons why I've moved
> xawtv's compression code to another thread recently). But you don't
> need to copy the buffer. The v4l2 API allows up to 32 buffers (and so

I wasn't aware that the v4l2 API put any limits on this?

> does bttv when loaded with gbuffers=32), that should be enough ...

Yes, but this won't cope with transient latencies. If the capture
thread doesn't get run for a couple of hundred milliseconds, you can
develop quite a skew in the timestamps - that is why I thought of using
"shallow" kernel buffers and a deep userspace buffer.

But all of this is pointless (and there just is not one completely
functional solution). The current bttv 0.7.x implementation is already
timestamping the buffer - why don't we just add enhanced CSYNC
capabilities to get this timestamp back to the user? Something like:

struct video_timedbuffer {
	int index;
	unsigned long long timestamp;
	// possibly sequence numbers, buffer size, other useful information
	int reserved[8];	// for everything we forgot the first time round...
};

And an ioctl VIDIOCSYNC_TIMED which takes a struct video_timedbuffer as
an argument, and will sync the buffer with index "index", and then fill
in the rest of the information.

Or, maybe piggyback it on VIDIOCSYNC, using a negative index to trigger
the extra capabilities (which is nice, as drivers that do not support it
will simply return -EINVAL at this point, and we can fall back to the
traditional interface).
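
For completeness, the tight-loop workaround I was complaining about at
the top would look something like this - a rough sketch against the V4L1
mmap interface, where the ring_put() helper and the usual
VIDIOCGMBUF/VIDIOCMCAPTURE setup are assumed to live elsewhere:

#include <sys/ioctl.h>
#include <sys/time.h>
#include <linux/videodev.h>

/* hypothetical helper: copies one frame plus its timestamp into a
 * deep userspace ring buffer */
extern void ring_put(const void *frame, int size, const struct timeval *ts);

static void capture_loop(int fd, unsigned char *map,
			 const struct video_mbuf *mb, struct video_mmap *vm,
			 int frame_size)
{
	int frame = 0;

	for (;;) {
		struct timeval tv;

		/* block until the driver has filled this buffer, then
		 * timestamp it in userspace - already somewhat late */
		ioctl(fd, VIDIOCSYNC, &frame);
		gettimeofday(&tv, NULL);

		/* the copy that wastes all the processing power */
		ring_put(map + mb->offsets[frame], frame_size, &tv);

		/* requeue the kernel buffer and advance */
		vm->frame = frame;
		ioctl(fd, VIDIOCMCAPTURE, vm);
		frame = (frame + 1) % mb->frames;
	}
}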
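
And roughly how userspace would use the proposed interface, falling back
to the traditional one when the driver returns EINVAL (using the struct
video_timedbuffer from above; the ioctl number is of course made up,
since none of this exists yet):

#include <errno.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/time.h>
#include <linux/videodev.h>

/* proposed - not in any header yet, number picked at random */
#define VIDIOCSYNC_TIMED _IOWR('v', 99, struct video_timedbuffer)

static int sync_timed(int fd, int index, unsigned long long *ts)
{
	struct video_timedbuffer tb;
	struct timeval tv;

	memset(&tb, 0, sizeof(tb));
	tb.index = index;
	if (ioctl(fd, VIDIOCSYNC_TIMED, &tb) == 0) {
		/* driver supports it: use the kernel timestamp */
		*ts = tb.timestamp;
		return 0;
	}
	if (errno != EINVAL)
		return -1;

	/* old driver: traditional CSYNC, timestamp in userspace */
	if (ioctl(fd, VIDIOCSYNC, &index) < 0)
		return -1;
	gettimeofday(&tv, NULL);
	*ts = (unsigned long long)tv.tv_sec * 1000000ULL + tv.tv_usec;
	return 0;
}

-justin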