Alan Cox wrote:
> You want to support read so you can do things like
>   cat /dev/video | stream2jpeg | magicperlbits
> and the like
Ah, you've hit on my biggest gripe with V4L read: there is no way to
set the format or resolution, and even if you don't care, there's still
no standard way to find out how the data you just captured is formatted.
One solution is to write a suite of tiny apps that each issue the relevant
V4L ioctls, and assume that the driver preserves those settings across
open/close.
This is what ov511 used to do, but I got many complaints like "if I run
vidcat before I run gqcam, the image comes out different". Now I reset
the video parameters on open so that things are at least consistent, but
this effectively makes it impossible to write a hardware-independent
read-based script.
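For reference, one of those "tiny apps" would be little more than a couple
of V4L1 ioctls run before the read()-based tool; here is a rough sketch
(the 352x288 / RGB24 values are arbitrary examples, and error handling on
the "set" ioctls is omitted):

    /* Set the capture window and palette with V4L1 ioctls, then rely on
     * a later read()-based tool inheriting whatever the driver remembered,
     * which, as noted above, is exactly what can't be relied on. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/videodev.h>

    int main(void)
    {
            struct video_window win;
            struct video_picture pict;
            int fd = open("/dev/video0", O_RDWR);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            /* Read-modify-write the current window; only touch the size. */
            if (ioctl(fd, VIDIOCGWIN, &win) == 0) {
                    win.width = 352;
                    win.height = 288;
                    ioctl(fd, VIDIOCSWIN, &win);
            }

            /* Likewise for the picture parameters; only change the palette. */
            if (ioctl(fd, VIDIOCGPICT, &pict) == 0) {
                    pict.palette = VIDEO_PALETTE_RGB24;
                    ioctl(fd, VIDIOCSPICT, &pict);
            }

            close(fd);
            return 0;
    }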
The solution, IMHO, is to do something like "vidcat -d /dev/video0 -s
352x288 -f jpeg | magicperlbits", and forget about read altogether. Do
you see any disadvantage to that?
> If you have the entire frame in the internal buffer then that case becomes
> easy, but the cases where you have to remember where you are within the
> frame do become trickier (imagine a perl script reading the top of the
> frame, removing the black from letterboxing, and skipping the bottom by
> using block reads)
Agreed. Gerd kindly reminded me that partial reads are useful for more
than just getting at the data as soon as possible. I'll implement them
in ov511 (depending on how this discussion pans out).
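On the application side, partial reads would look something like this
sketch; note that the frame geometry and packing (352x288 RGB24 with 36
letterboxed rows top and bottom here, purely made-up numbers) still have
to come from somewhere, which is the formatting problem again:

    /* Pull one frame off the device in row-sized chunks and keep only the
     * non-letterboxed band in the middle.  All the geometry below is an
     * assumption the app has to get from somewhere else. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    #define WIDTH     352
    #define HEIGHT    288
    #define BPP       3          /* assume RGB24 */
    #define SKIP_ROWS 36         /* letterbox rows to drop, top and bottom */
    #define ROW_BYTES (WIDTH * BPP)

    int main(void)
    {
            unsigned char row[ROW_BYTES];
            int fd = open("/dev/video0", O_RDONLY);
            int i;

            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            for (i = 0; i < HEIGHT; i++) {
                    int n = 0;

                    /* A short read just means "come back for the rest of
                     * the row"; the driver is the one that has to remember
                     * our offset within the frame. */
                    while (n < ROW_BYTES) {
                            ssize_t r = read(fd, row + n, ROW_BYTES - n);
                            if (r <= 0) {
                                    perror("read");
                                    close(fd);
                                    return 1;
                            }
                            n += r;
                    }

                    if (i >= SKIP_ROWS && i < HEIGHT - SKIP_ROWS)
                            fwrite(row, 1, ROW_BYTES, stdout);
            }

            close(fd);
            return 0;
    }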
> I'm assuming the library is something like
>   convert_fromfmt_tofmt(from, to)
Not to nitpick, but wouldn't this be better:
convert(src, srcformat, dest, destformat)
That way, new formats could be added without changing the apps' code (or
even without recompiling it if we use a shared lib).
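To make that concrete, here is one hypothetical shape for it (none of these
names exist yet, and the width/height arguments are my addition; the real
thing would need the geometry in some form):

    /* A single convert() entry point that looks up (srcformat, destformat)
     * in a table inside the library, so new formats never touch the apps. */
    #include <stddef.h>

    typedef int (*convert_fn)(const void *src, void *dest,
                              int width, int height);

    struct converter {
            int srcformat;          /* e.g. a VIDEO_PALETTE_* value */
            int destformat;
            convert_fn fn;
    };

    /* Filled in by the library; apps never see this table. */
    static struct converter converters[] = {
            /* { VIDEO_PALETTE_YUV420P, VIDEO_PALETTE_RGB24, yuv420p_to_rgb24 }, */
            { 0, 0, NULL }
    };

    int convert(const void *src, int srcformat,
                void *dest, int destformat,
                int width, int height)
    {
            struct converter *c;

            for (c = converters; c->fn != NULL; c++) {
                    if (c->srcformat == srcformat && c->destformat == destformat)
                            return c->fn(src, dest, width, height);
            }
            return -1;      /* no converter registered for this pair */
    }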
Anyway, your idea is much better than what I had in mind. I was thinking
along the lines of:
grab_frame(buffer, desired_format)
but that takes a lot of control away from the app. I don't want anything
to stand in the way of app developers using the library; they will have
to use it if they want ov511 support.
--
Mark McClelland
mmcclell@xxxxxxxxxxx