Michael Hunold replied to my earlier mail:
>> The user would basically have to use ps|grep and kill every time
>> he or she wanted to switch the channel in a TV app. That would be
>> a huge step backwards, far behind v4l1.
> Please keep in mind that v4l1 basically meant bttv. It might have
> worked for bttv, but that was a policy that was not specified
> anywhere. Most of the time it was pure luck that it simply worked.
Point taken. The Bt8x8 is a very flexible chip and may have allowed
features that would have been hard or even impossible to implement on
less flexible hardware. However, it appears to me you're proposing that
driver writers shouldn't even have to try anymore, and should instead
just lower the general standards (with respect to the S_FREQ limitations,
which seem not to be dictated by your platform but rather to be
deliberate).
BTW, how do you implement a channel scan, including AFC, if you have
to stop capturing for every frequency change?
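For the record, this is roughly the loop I would like to be able to run
(band limits, file handle setup and the settle time are made up for the
example): capture keeps running the whole time, the loop merely hops the
frequency via VIDIOC_S_FREQUENCY and reads back VIDIOC_G_TUNER to check
signal strength and the AFC offset.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

/* scan a band in tuner units (62.5 kHz); capture is assumed to be running */
static void scan_band(int fd, unsigned int start, unsigned int end,
                      unsigned int step)
{
    unsigned int f;

    for (f = start; f <= end; f += step) {
        struct v4l2_frequency freq;
        struct v4l2_tuner tuner;

        memset(&freq, 0, sizeof(freq));
        freq.tuner = 0;
        freq.type = V4L2_TUNER_ANALOG_TV;
        freq.frequency = f;

        /* hop to the next frequency without stopping the stream */
        if (ioctl(fd, VIDIOC_S_FREQUENCY, &freq) < 0) {
            perror("VIDIOC_S_FREQUENCY");
            continue;
        }

        usleep(100 * 1000);   /* give the PLL some time to settle */

        memset(&tuner, 0, sizeof(tuner));
        tuner.index = 0;
        if (ioctl(fd, VIDIOC_G_TUNER, &tuner) == 0)
            printf("freq %u: signal %d, afc %d\n",
                   f, tuner.signal, tuner.afc);
    }
}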
> There are both technical problems and policy problems. If you
> change the norm, the maximum capture size can change, so you have
> to reprogram your capture engine. In that case, you have to check
> whether all capture buffers can handle the new size.
> Most of the time the size will shrink (say you have a fullscreen
> PAL capture and change to NTSC), but there is no way for the driver
> to tell the application "hey, I'm capturing at 640x480 now into
> your 720x576-sized buffer".
Please check out the event reporting mechanism I proposed on June 15th
here on the v4l mailing list ("Proposal for channel coordination
between multiple v4l2 users").
But actually I was only asking for *allowing* the norm change (i.e.
don't harass the application user with stupid error messages), not
necessarily for continuing to capture for the other processes if the
hardware can't cope with that. You could just as well give them an
error code and stop. The apps can then query the new norm and
reconfigure their buffers accordingly.
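To spell out what I mean by "query the new norm and reconfigure", the
application side could look roughly like this (the function name is mine,
buffer re-allocation is only hinted at):

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* called after a capture call failed because another process
 * switched the norm behind our back */
static int reconfigure_after_norm_change(int fd)
{
    v4l2_std_id std;
    struct v4l2_format fmt;
    enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

    /* stop streaming before touching the buffer setup */
    ioctl(fd, VIDIOC_STREAMOFF, &type);

    /* which norm is the hardware on now? */
    if (ioctl(fd, VIDIOC_G_STD, &std) < 0)
        return -1;

    /* re-read the capture format; the frame size may have shrunk,
     * e.g. from 720x576 (PAL) to 720x480 (NTSC) */
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(fd, VIDIOC_G_FMT, &fmt) < 0)
        return -1;

    printf("new norm 0x%llx, frame size %ux%u (%u bytes)\n",
           (unsigned long long)std,
           fmt.fmt.pix.width, fmt.fmt.pix.height, fmt.fmt.pix.sizeimage);

    /* ... free the old buffers, then VIDIOC_REQBUFS and mmap again ... */

    return ioctl(fd, VIDIOC_STREAMON, &type);
}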
> There is a hidden policy in there that says "if the user changes the
> norm and the maximum size is lower, then simply write to the buffer
> with a smaller resolution". But hidden policies are always a bad
> thing.
See above, that's not what I asked for. My proposed policy is more
general: "comply with the requested parameters as well as possible;
if that's not possible, adapt the parameters instead of rejecting the
change." Also, don't try to be cleverer than the user or application
writer, e.g. don't reject S_FREQ because there might be "dropped
frames", which most people probably expect to happen anyway; if
they wanted to prevent them, they could still stop capturing.
(See also the newly introduced priority handling: a background
app should be able to avoid interfering with interactive/foreground
apps in any way.)
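For reference, I imagine the background side of that to look about like
this (assuming the priority interface keeps the names from the proposal):

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* a VBI/EPG harvester declares itself "background" so that it never
 * gets in the way of an interactive TV application on the same device */
static int declare_background(int fd)
{
    enum v4l2_priority prio = V4L2_PRIORITY_BACKGROUND;

    /* a foreground viewer would use V4L2_PRIORITY_INTERACTIVE instead */
    if (ioctl(fd, VIDIOC_S_PRIORITY, &prio) < 0) {
        perror("VIDIOC_S_PRIORITY");
        return -1;
    }
    return 0;
}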
> The problem with that is that the code has to be duplicated in
> each and every driver. It adds bloat to the driver that needs to be
> debugged. But kernel drivers should be small and simple; everything
> else should be done in userspace.
Sorry, which "code" are you referring to?
Anyway, I do accept the principle of keeping the kernel simple, but
one still has to look at the big picture, i.e. what hoops users would
have to jump through if the kernel doesn't support it. If you took
that principle to the extreme, you wouldn't need v4l at all; just use
something like the Win32 DScaler driver, which only does the PCI setup
and then gives userspace apps direct access to PCI registers and DMA.
That works very well for DScaler, but has a few drawbacks when you want
to run more than one video/vbi application at the same time.
> So why not write a v4l2-dispatch-daemon that lets your vbi
> application harvest data in the background, gives exclusive capture
> access to another application and manages channel switching and
> norm changes?
How does this solve the problem? Capturing would still be done by
the individual applications, so the daemon would basically have to ask
all other clients to stop capturing first, wait until they comply, and
only then allow the channel change. Imagine the overhead of that.
bye,
-tom