> And exactly this attitude (write incomplete software which simply fails
> in some cases) is one of the reasons why I don't want to do make the
> v4l2 API more complex.

That was not my attitude. My attitude is simply to use the software and hardware of the machine to their greatest potential, which means as efficiently as possible. If the capture card is capable of capturing directly to the display buffer, then that is obviously more efficient than capturing to an arbitrary buffer and then copying. It is the user who will determine the use of the buffers, so it would be nice if the user could supply them.

From Chris Pirazzi's "Video I/O on Linux: Lessons Learned from SGI" (http://www.linuxpower.org/display.php?id=216):

"Resist the urge to wrap buffers of data in any kind of "helper" object: let the application provide its own pointers to malloc()ed memory to receive the video data, or from which to read video data. Think of video I/O as a slightly augmented version of read() and write() rather than some high-level encapsulated object-oriented thing. Believe me, we've been there, and it's a waste of time."

Further down he states that constraints on the memory passed to the device are fine (byte alignment, etc.). It would be fair enough to state that the memory's real address needs to be below the 4G line. Then we should figure out a way to ask the OS for memory that is guaranteed to satisfy the constraints required by the hardware, or fail cleanly. This is where Mr. Cox could help out a bit.

As far as complexity is concerned, that complexity occurs at two points in the program: the allocation of the buffers and the de-allocation of the buffers. The complexity of setting up software that is supposed to be tuned and efficient is almost always manageable (check out the code for setting up and managing hardware buffers in DirectX); the complexity of a running system is where you get hurt. But as long as the driver is explicit in its error codes, who cares?
I take a bit of offense at the top comment, but I see your point. I will now see if there is a way to get X to display from my own arbitrary memory. This is where I don't know nearly enough about shared memory. I need the shared-memory system in Linux to simply assign an ID to the memory I got from the driver, so that I can hand that ID to X. But the problem is that, to get an ID, the system always seems to allocate its own memory. There has GOT to be a solution somewhere; it is just too logical not to work.

Chris