Peter Lohmann wrote:

> Ideally, the specification of the expected data format would not depend
> on the size of the buffer provided. Also, since different parts of the
> world use the VBI region for different things, no assumption should be
> made as to the data format or which line the data would be on.
> Maybe we could specify flags for these so that the user program and
> driver can communicate the formats capable of slicing via the
> VIDIOC_QUERYCAP and the format desired or currently selected via the
> VIDIOC_G_FMT and VIDIOC_S_FMT IOCTLs.

Let's start with VIDIOC_QUERYCAP. Assuming we won't add a new device type,
v4l2_cap has to distinguish between raw VBI and sliced data, or flag both
capabilities. If, however, the sliced API works for other (radio, ...)
services as well, one should keep the raw VBI API strictly video and
create another general-purpose sliced data API.

Sliced formats known to the hardware must be enumerated. When I think
about it, v4l2_cap is probably the wrong place. IMO VIDIOC_QUERYCAP is
already overloaded with context-dependent information we don't need here;
it should focus on the kernel interface capabilities and not rough
hardware hints. Other device types have VIDIOC_ENUM ioctls to enumerate
formats, shouldn't we follow that major design principle? We could name
the available formats, if only to tell the application there is a format
it fails to interpret because it's a custom format or has been defined at
a later date. There would also be space for additional per-format details
we want to know, e.g. hardware limitations. (I wonder why v4l2_format has
a union and v4l2_fmtdesc hasn't?)

Adding BUF_TYPE_SLICED and a v4l2_sliced_format looks fine to me. I don't
feel comfortable re-using v4l2_vbi_format for a completely different job;
I always assumed we would in fact end up with a mostly incompatible API.
The disadvantage is more bloat compared to a bitmask in v4l2_cap and
extended semantics for v4l2_vbi_format.sample_format et al.

Can we agree a serial stream of payload is out of the question, because
it's vital to associate bytes with field and line numbers? That seems to
leave two options: a stream of variable-sized units of header + payload,
or data stored in a 2D array.

Suppose a raw-VBI-like direct mapping of scan lines to array lines. It
can be a quite large (or split) array when you truly don't care about the
source line, or when services are encoded far apart, but let's assume we
get away with a number of canonical VBI lines, say 25 per field. Empty
lines (not carrying any information) must be unambiguously marked, and
also typed if we allow format multiplexing, so a header seems
unavoidable, or flags at least. The array width can be determined
implicitly by the requested format, to reverse your idea, e.g. CC 2
bytes, System B Teletext 42 bytes, VPS 13 bytes. Or it could be
determined by the widest format requested, or some arbitrary constant,
e.g. 64 bytes. I'm only concerned we will regret the latter when new,
larger formats appear.

Suppose we make the buffer variable in height by adding field and line
numbers to each array line: no more gaps, and the app has to parse a
stream as described above, line by line. Fine for read() and write(), but
how would it work with streaming i/o, where the buffer size is
predetermined at mmap()ing time and one timestamp is associated with all
the buffer contents? Possibilities I see include:

a) Each buffer holds only one line of sliced data; prohibitively
   expensive.
b) We concatenate buffers, e.g. stamp #1, first ten lines; stamp #1, next
   ten lines; stamp #1, remaining five lines, rest unused; stamp #2, next
   frame... a packetized elementary stream. :-)
c) Buffers must provide enough room for one field or frame, e.g. 25 or
   even 625 lines.
d) Each line gets its own timestamp.
e) No streaming i/o, no kernel timestamping allowed. :-(
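To make this a bit more concrete, here is a rough sketch of what a
per-line record and an option c) frame buffer could look like. Every name
and number below (v4l2_sliced_line, v4l2_sliced_frame,
SLICED_LINES_PER_FIELD, the 48-byte payload) is made up for illustration;
nothing like it exists in the V4L2 headers, and the payload size is just
"widest format we know today, plus slack":

#include <linux/types.h>

/* Hypothetical per-line record for sliced data. All names and sizes
 * here are illustrative only. */
struct v4l2_sliced_line {
        __u32   id;        /* service identifier (Teletext B, CC, VPS, ...),
                              0 = empty line */
        __u32   field;     /* 0 = first field, 1 = second field */
        __u32   line;      /* canonical VBI line number within the field */
        __u32   reserved;  /* alignment / future use */
        __u8    data[48];  /* payload; widest format today is System B
                              Teletext at 42 bytes, plus some slack */
};

/* Option c): every buffer provides room for one full frame, 25 canonical
 * lines per field, two fields. One timestamp covers the whole buffer and
 * empty lines are simply marked with id == 0. With the layout above that
 * is 50 * 64 = 3200 bytes per frame. */
#define SLICED_LINES_PER_FIELD  25

struct v4l2_sliced_frame {
        struct v4l2_sliced_line line[2 * SLICED_LINES_PER_FIELD];
};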
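For comparison, with the variable-height layout the application side of
read() could stay equally simple. Again purely illustrative, reusing the
imaginary record from above; consume_sliced() is not a proposal for any
real interface:

#include <unistd.h>

/* Hypothetical consumer of a variable-height stream of sliced line
 * records: read() returns however many records were sliced, and the
 * application walks them, dispatching on the service id. */
static void
consume_sliced(int fd)
{
        struct v4l2_sliced_line buf[50];
        ssize_t n, i;

        n = read(fd, buf, sizeof buf);
        if (n <= 0)
                return;

        for (i = 0; i < n / (ssize_t) sizeof buf[0]; i++) {
                if (buf[i].id == 0)
                        continue;  /* empty or unused record */
                /* dispatch on buf[i].id (Teletext, CC, VPS, ...);
                   buf[i].field and buf[i].line say where it came from */
        }
}

The open question from above remains, of course: with mmap() streaming
there is no "however many records were sliced", since the buffer size is
fixed up front.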
One shouldn't forget about future kiobufs, but it doesn't seem to make a
big difference.

> The Philips SAA7114 can be programmed to expect the following data
> formats:
>
> WST625      Teletext EuroWST, CCST
> CC625       European Closed Caption
> VPS         VPS (video programming service)
> WSS         Wide Screen Signalling
> WST525      US teletext (WST)
> CC525       US closed Caption (line 21)
> Intercast   Raw
> Gen. Text   Teletext programmable
> VITC625     VITC/EBU Timecodes (Europe) programmable
> VITC625     VITC/SMPTE Timecodes (USA) programmable
> NABTS       US NABTS
> Japtext     Moji (Japanese)
> JFS         Japanese format switch (L20/22)

A remarkably versatile device.

Is bit/byte endianness across devices an issue? Probably not. How about
DMA, is it advantageous to store payload in adjacent locations? Are there
difficulties because DMA writes additional information, making padding
necessary?

How about output devices? The ones I'm familiar with are the SAA 7111A
and the 7125 (output); they both only support CC525 polling via i2c. It
should be safe to assume all video data services are, and will be, line
oriented and deliver data in small units, or does anyone have other news?
How about DVB, is it something to consider or not?

(Judging from the current pace, by the time v4l2 makes it into a stable
kernel I have no hope any analogue device will still remain in use... :-)

Michael