Re: Xvideo extension artifacts




(From Mr. Cox)
> The hardware and video4linux api already can for many cards DMA into
> arbitary target buffers. If Xvideo doesnt support it fix the X end. For
> non DMA/mmap cases it would also be advantageous to support asking the
> X server to pull a frame across rather than via the client. Think for
> example of a voodoo2 wrapped as a v4l device being memcpy'd by the server
> into an overlay buffer and used to run glide games in a window..


So basically I would tell X the pixmap location for the window, and let X read 
from it arbitrarily?  Let me think about that for a second.  I would want 
control over which pixmap is copied by X, and exactly when.  Then it would 
work fine.

 Would this indeed be less work (for the CPU) than using shared memory 
segments with Xvideo?  Would I be able to use the same formats (YUV420, I420)?  

The beauty of the setup right now is that most of the work goes into 
compression, so minimizing that step is key.  A YUV420 image is already half 
the size of an RGB image, with no visual artifacts (especially if you are 
colorblind like me).  So the compressor operates on an image effectively half 
the size to begin with.  With the Xvideo extension, I can display that same 
image without any colorspace conversion.  

So right now I am using the v4l2 streaming interface, then memcpy into an 
XvImage, then display (and compress, if the queue is empty) that image.  I 
need access to each image, and I need control over what is captured next (or 
a memcpy for the compression system so I can use queueing).  That rules out 
direct display to the screen.  Also, I don't like telling the bttv driver to 
allocate 12 buffers on startup, because that memory doesn't go away when my 
application stops; it belongs to the kernel and cannot be used for anything 
else.  
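The "copy into another buffer" step described above might look something like 
the sketch below: pull a captured I420 frame out of the driver's mmap'ed 
buffer into a private queue slot, so the driver buffer can be re-queued 
immediately while compression takes as long as it needs.  The struct and 
function names here are hypothetical, not from any real API:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical frame holder for a planar I420 image. */
struct i420_frame {
    unsigned w, h;
    unsigned char *data;  /* w*h Y bytes, then (w/2)*(h/2) U and V bytes */
};

static size_t i420_bytes(unsigned w, unsigned h)
{
    return (size_t)w * h + 2 * ((size_t)(w / 2) * (h / 2));
}

static struct i420_frame *frame_alloc(unsigned w, unsigned h)
{
    struct i420_frame *f = malloc(sizeof *f);
    if (!f)
        return NULL;
    f->w = w;
    f->h = h;
    f->data = malloc(i420_bytes(w, h));
    if (!f->data) {
        free(f);
        return NULL;
    }
    return f;
}

/* Copy one frame out of the driver's buffer.  A single memcpy suffices
 * because the I420 planes are laid out contiguously. */
static void frame_copy_from_driver(struct i420_frame *dst,
                                   const unsigned char *drv_buf)
{
    memcpy(dst->data, drv_buf, i420_bytes(dst->w, dst->h));
}
```

With something like this, the driver buffer goes back into the capture queue 
right after the copy, and the compressing thread works on the private copy at 
its own pace.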

I am not familiar enough with the system you are proposing to say whether it 
would fit my needs, but I can tell you that I do need explicit access to each 
frame for as long as I want (compression is an arbitrarily long step; even if 
it weren't, say JPEG with fixed Huffman encoding, when the compressing thread 
gets CPU time is still arbitrary...).  So for now I copy into another 
buffer.  

I am worried that I would not have the same level of access to the frame as I 
had before.  There are more issues as well (decompression, and how I would 
tell the X server to read from MY buffer instead of the capture device's), on 
which I cannot comment.  Any input would be greatly appreciated.  Chris




