> The core uses deep buffering to give some latitude to the synch
> routines, and absorb any transient latencies. Until v4l gets
> timestamping, the only way to buffer deeply with v4l would be to have a
> tight loop CSYNCing frames as soon as possible, and timestamping them,
> then copying them into a userspace buffer - which is a hideous waste of
> processing power.

Having a thread which does nothing but CSYNC + gettimeofday() certainly
helps to get more exact timestamps (one of the reasons why I've moved
xawtv's compression code to another thread recently). But you don't
need to copy the buffer. The v4l2 API allows up to 32 buffers (and so
does bttv when loaded with gbuffers=32), which should be enough ...

Gerd

--
Gerd Knorr <kraxel@xxxxxxxxxxx>