Re: Studio-grade hardware support?




On Sun, Apr 14, 2002 at 01:46:05AM -0500, Billy Biggs wrote:
> 
>   Well I'm assuming 29.97.  Since I don't know of a format that can
> record 352x240 @ 59.94 and keep track of the dominance of each field, I
> don't consider that an option.

Fair enough.  I thought perhaps you were referring to "ideally".

>   Consider MPEG2.  You could do 352x480 frames, interlaced, put in the
> dominance, and you get exactly what you describe later in your email.

So an MPEG2 decoder/playback is capable of displaying interlaced
fields @59.94?  That would still have to be dependent on the video
driver in the case of a TV-out (e.g. G400) situation though, right?
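
For my own notes: a 352x480 interlaced frame at 29.97fps carries two
fields, which gets you the 59.94 fields/sec, and the bitstream records
which field comes first.  The relevant syntax element names below are
from the MPEG-2 spec (ISO/IEC 13818-2), but the struct itself is just
a hypothetical sketch, not any real decoder's code.

#include <stdio.h>

/* MPEG-2 picture_coding_extension fields that carry interlace info;
 * the names follow the spec, the struct is made up for illustration. */
struct pic_ext {
    int picture_structure;   /* 3 == frame picture (both fields) */
    int top_field_first;     /* 1: display the top field first */
    int repeat_first_field;  /* used for 3:2 pulldown */
};

int main(void)
{
    struct pic_ext p = { 3, 1, 0 };
    /* Each 29.97fps frame yields two fields -> 59.94 fields/sec,
     * shown in the order the dominance flag dictates. */
    printf("show %s field first\n", p.top_field_first ? "top" : "bottom");
    return 0;
}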

>   You're using:
> 
>   5 - Vertical interpolation
>       Averages every current and next line, has a blurring effect.

Indeed.  I shoulda remembered they are on the manpage.

>   So that means that instead of seeing the raw data you get a blur
> between both fields, so motion just looks sort of blurry instead of
> harsh artifacts from looking at two frames at once.  I wouldn't think
> that would look too good, but it's definitely one (highly lossy)
> 'deinterlacing' option.

Right.  I think between the "combing" effect of not deinterlacing and
the blurring of interpolation, I would choose the latter.  I just find
the combing effect too annoying.
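
For anyone following along, my reading of method 5 is roughly the
following (a minimal sketch assuming 8-bit luma in a packed buffer;
not the player's actual code):

#include <stdint.h>

/* 'Vertical interpolation': every output line is the average of the
 * current and next source line, which blends the two fields together
 * (hence the blur). */
void vinterp_deinterlace(uint8_t *dst, const uint8_t *src,
                         int width, int height)
{
    for (int y = 0; y < height - 1; y++)
        for (int x = 0; x < width; x++)
            dst[y * width + x] =
                (src[y * width + x] + src[(y + 1) * width + x] + 1) / 2;

    /* The last line has no successor, so just copy it through. */
    for (int x = 0; x < width; x++)
        dst[(height - 1) * width + x] = src[(height - 1) * width + x];
}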

>   Which is basically recording to 59.94 @ 720x480 for each frame.  Lots
> of bandwidth.  I'd rather fix up mpeg2enc and use that on prerecorded
> stuff using a lossless codec, but that should be obvious from the code
> I've written so far (and useless for what you want to do, that is,
> realtime PVR!).

Right.  :-)
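
Just to put a number on "lots" (assuming packed 4:2:2 YUY2 at 2 bytes
per pixel, which is my assumption, not yours):

  720 x 480 x 2 bytes x 59.94 fps ~= 41 MB/s

which is indeed a lot of disk bandwidth.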

>   Well you can send it frames that are interlaced,

Just to make sure we are using the same nomenclature, when you say
"frames that are interlaced," do you mean sending a frame that is a
composition of the two fields or do you mean sending a frame that is
only one field, interlaced into every other line?

>   but you have no
> control over the dominance (which field is shown first),

Which seems to imply sending the two fields in one frame, because if
you were sending the two fields separately, the timing of them would
indicate dominance, no?
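
In case it helps pin the nomenclature down, here is a minimal sketch
of the first meaning, weaving two fields into one frame (hypothetical
helper, 8-bit luma).  Spatially the top field always lands on the
even lines, so the buffer alone can't say which field is first in
time, which I take to be your point about dominance.

#include <stdint.h>
#include <string.h>

void weave_fields(uint8_t *frame, const uint8_t *top, const uint8_t *bot,
                  int width, int field_height)
{
    for (int y = 0; y < field_height; y++) {
        /* Top field -> even lines, bottom field -> odd lines. */
        memcpy(frame + (2 * y) * width,     top + y * width, width);
        memcpy(frame + (2 * y + 1) * width, bot + y * width, width);
    }
}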

> nor do you get
> an interrupt to tell you when to blit the next frame.

The G400 supports "stacking" (or buffering) of frames into its memory
and it will display them at the framerate of the output device, i.e.
asynchronous operation instead of synchronous.  Does that not make the
lack of an interrupt a non-issue?

>   Because I'm a deinterlacer.  Each field I have to show as a frame.  I
> want to watch TV at full framerate.  So I interpolate each field to
> frame height and send it.  Is there something that isn't clear?

Ahhh.  I guess what was not clear is that I assumed you were using the
hardware scaler on the video card to stretch the 240 lines into 480.
If you could, it would cut your bandwidth requirements in half.

>   Yes that's correct.

But it would be half if you used the video card's hardware scaler to
double the vertical resolution.
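
For reference, my mental model of the software version of that
interpolation step is below (a hypothetical helper; the hardware
scaler would do exactly this work for free, halving what you have to
push over the bus):

#include <stdint.h>
#include <string.h>

/* Expand one 240-line field to 480 frame lines: copy each field line
 * and synthesize the line between it and the next by averaging. */
void bob_field(uint8_t *frame, const uint8_t *field,
               int width, int field_height)
{
    for (int y = 0; y < field_height; y++) {
        int ny = (y + 1 < field_height) ? y + 1 : y;

        memcpy(frame + (2 * y) * width, field + y * width, width);
        for (int x = 0; x < width; x++)
            frame[(2 * y + 1) * width + x] =
                (field[y * width + x] + field[ny * width + x] + 1) / 2;
    }
}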

>   Are you sure?

If I understand it correctly.

>   The G400 docs say that if you have the TV input version
> of the card (I didn't know there was one) or some sort of TV input thing
> onboard, then it can use the refresh of that to drive the monitor.  Is
> that what you're talking about?  It can't genlock from software as far
> as I can tell, nor can it genlock from another video capture card.

Hmmmm.  Maybe we are talking about slightly different things.  It was
my impression that you could buffer frames into the G400's memory and
not have to worry about timing, because it would keep the timing (at
the output device's rate, NTSC, PAL, etc.) and display new frames on
the vertical retrace, eliminating tearing and (theoretically) judder.
All you have to do is make sure that you are pushing frames into the
buffer at least as fast as the card is displaying them.

Maybe I am misunderstanding how this works.
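
Concretely, the loop I am picturing is something like this.  As far
as I can tell FBIO_WAITFORVSYNC is the ioctl matroxfb exposes;
everything else here (the device path, where the blit happens) is
pure assumption on my part, not working G400 code.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/matroxfb.h>

int main(void)
{
    int fb = open("/dev/fb0", O_RDWR);
    if (fb < 0) { perror("open"); return 1; }

    unsigned int crtc = 0;
    for (;;) {
        /* ...copy or page-flip the next frame into card memory... */

        /* Block until the vertical retrace so the flip never tears;
         * the card, not us, keeps the NTSC/PAL timing. */
        if (ioctl(fb, FBIO_WAITFORVSYNC, &crtc) < 0) {
            perror("ioctl");
            break;
        }
    }
    close(fb);
    return 0;
}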

>   I know that most TV graphics are generated for 720x480, and most DVDs
> store video as 720x480 images.

Right.  These numbers are familiar to me.

> However it is analog, so the scanlines
> read from your capture card might not match up.  I found that running
> tvtime with 720/scanline gives a lot more horizontal detail than
> 640/scanline,

Which would make sense.

> and if you go to like 960 which the bttv samples at itself
> you can see even more detail.  I think it would be neat to record at
> 960x480 on the bttv and then do a really slow interpolator and see if
> you can't get a better reproduction of the original source that way.

:-)

Well I did a 720x480 capture and played it back with MPlayer's -fs
option to my G400/TV-Out.  I got, predictably, black bands at the top
and bottom of the picture, because my framebuffer mode is set at
800x600.  I guess the goal would be to have a framebuffer ratio of
3:2 rather than 4:3.
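
Rough numbers on those bands (assuming MPlayer preserves the picture
shape and square pixels, which NTSC isn't quite): scaling 720x480 to
the full 800-pixel width gives 480 x 800/720 = 533 lines of picture
in a 600-line mode, i.e. about 33 black lines each at the top and
bottom.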

>   But again, I want to write a high quality VCR app before I write my
> PVR, so this is how I approach this stuff. :)

Where are you drawing your distinction between PVR and VCR?

>   I have some nice examples I show people.  One is of a scene from a
> special features film roll on the 'Lawrence of Arabia' DVD.  As
> movietime starts it takes a few seconds to lock onto the 3:2 pulldown
> sequence, but the image goes from kinda poor to ultra-clear and people
> go 'cooool'.

That is cool that you have that.

>   You should go to OLS.  There will be some video types there and I'll
> try and show off some demos and stuff.  http://ottawalinuxsymposium.org

Oh, I am going to be there!  How could I not be, being only 1.5-2 hrs
down the road?  :-)  Just checked out the webpage.  There are lots of
people I want to meet and some faces I will know, people I have
worked with, etc.  I don't think I realized how cool OLS is.

>   That's part of the problem.

I figgered it would be like this.  :-)

>   You need to do image heuristics to detect
> the sequence (except on well-mastered NTSC DVDs, which store the
> progressive 24fps stream and have flags to tell the player to perform
> the process).  I have code to do this and so do others, but since it's
> heuristics based, off-line versions can do a better job.

Of course.  I guess you would capture the NTSC and then
inverse-telecine it offline.  But if I were just going to watch the
movie and then delete it, does the difference in viewing pleasure
really warrant the process?  Automated, I suppose it would be
alright.
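
My naive picture of such a heuristic, for what it's worth: in a 3:2
pulldown sequence every fifth field is a repeat, so a per-field
difference metric should dip near zero once per five fields.  A rough
sketch (all names hypothetical; real detectors like yours or
DScaler's are much more careful about noise and cadence breaks):

#include <stdint.h>
#include <stdlib.h>

/* Sum of absolute differences between a field and the previous field
 * of the same parity; near zero for a repeated field. */
long field_diff(const uint8_t *cur, const uint8_t *prev, int n)
{
    long sad = 0;
    for (int i = 0; i < n; i++)
        sad += labs((long)cur[i] - (long)prev[i]);
    return sad;
}

/* 1 if the last five field diffs look like pulldown: exactly one
 * value far below the threshold. */
int looks_like_pulldown(const long diffs[5], long threshold)
{
    int low = 0;
    for (int i = 0; i < 5; i++)
        if (diffs[i] < threshold)
            low++;
    return low == 1;
}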

>   There are telecine detectors in higher end TVs.  For a neat (GPL'ed)
> Windows deinterlacer that does 59.94fps output and can detect pulldown,
> see http://deinterlace.sf.net/

Wow.  It would be nice if they abstracted things enough to separate
the display code from the rest so that the results could be used on
other platforms.  Maybe one day.  Maybe MPlayer can make use of the
win32 binaries.


>   Yep.

But not possible with today's drivers.

>   Um, well any video card with TV output that has some drivers started,
> I guess. :)  There is some code for TV out on some Radeons in cvs at
> gatos.sf.net, so that's a possibility, and for nVidia cards there is
> some code here: http://sourceforge.net/projects/nv-tv-out  but I'm not
> sure how to make sure I get a supported card.

A card where you have enough access to the hardware that you can
output a field at a time, interlaced, with proper dominance?

I have a Radeon.  :-)  I never liked the TV-Out judder it produced
though, and the only option for TV-Out was to use it in VESA mode (if
you don't want to use X11, which I don't if I don't have to).

b.

-- 
Brian J. Murrell


