Hello there!
Can I use Spout as an input device?
When I list the devices with
ffmpeg -list_devices true -f dshow -i dummy
I don't see the virtual camera (SpoutCam).
Help me please.
Best regards.
Hello @DufeRob,
I have never tried SpoutCam with FFmpeg but it should be possible because it’s simply a DirectShow filter.
The command you listed works OK for me :
[dshow @ 000001705669c7c0] DirectShow video devices (some may be both video and audio devices)
[dshow @ 000001705669c7c0] "SpoutCam"
[dshow @ 000001705669c7c0] Alternative name "@device_sw_{860BB310-5D01-11D0-BD3B-00A0C911CE86}\{8E14549A-DB61-4309-AFA1-3578E927E933}"
The problem could be a 32-bit/64-bit issue. Download the latest Spout update (Update 3d) and follow the instructions in setup.pdf to register SpoutCam. This registers both the 32-bit and 64-bit versions on a 64-bit system, and FFmpeg should then detect it.
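Once it is registered, you can also check what formats FFmpeg sees for the device. Assuming it shows up under the name SpoutCam, something like this should list the resolutions and frame rates on offer:
ffmpeg -f dshow -list_options true -i video="SpoutCam"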
Great, thanks! It works now. With Spout Update 2 the camera was not listed. Thank you.
OK, that’s a good outcome. I had not thought about FFmpeg before. This opens up all sorts of possibilities.
For best performance the sender resolution width should be a multiple of 16. With SpoutCamSettings, select “Active sender” and 30 fps. SpoutCam will then match the sender resolution and the pixel copy will be most efficient.
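For a quick capture test at that rate, a command along these lines should ask DirectShow for the 30 fps mode (the output name and the -t 10 limit are just examples for a short test):
ffmpeg -f dshow -framerate 30 -i video="SpoutCam" -t 10 test.mkv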
I’m actually trying to use SpoutCam as an input and all I’m getting is a black screen. Is this a GPU issue?
Try running a sender before you start FFmpeg. It could also be a 32/64-bit problem as above. Whichever your copy of FFmpeg is, 32-bit or 64-bit, you could confirm SpoutCam is registered for that architecture by running another host program of the same type.
The latest release does not depend on OpenGL/DirectX texture linkage and should work on most graphics hardware. The command above lists SpoutCam for me, but I have not tried it as an input.
Edit -
Just for a test :
ffmpeg -f dshow -i video="SpoutCam" output.mkv
This records the demo sender OK at 30fps. But if that is not started first, I just get a recording of static. There were a few frame drops at 1920x1080 but hardware encoding kept up. Other encoding adjustments or buffering might help as well.
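If frames drop, something along these lines might help: -rtbufsize enlarges the DirectShow capture buffer and h264_nvenc hands encoding to the GPU. That encoder assumes an NVIDIA card and an FFmpeg build with NVENC enabled, so substitute whichever hardware encoder your build provides:
ffmpeg -f dshow -rtbufsize 256M -i video="SpoutCam" -c:v h264_nvenc output.mkv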
How would I go about using a Spout sender with FFmpeg? Let me clarify: I want to share a window from an application that's playing video. I don't have a Spout sender app or plugin, but FFmpeg can ‘see’ the window when it does something like a gdigrab (window grab). I'm interested in using TouchDesigner to receive the Spout ‘signal’, but I can't seem to wrap my head around how to do this.
I am not sure I understand what application is actually playing the video. Is it something like VLC or is it your own application using FFmpeg?
If FFmpeg reads the video, you can set it up as an input pipe with “image2pipe” to read individual frames and use that pixel buffer for “SendImage”.
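As a rough sketch of that idea, FFmpeg could grab the player window with gdigrab and write raw RGBA frames to stdout, and a small helper built with the Spout SDK would sit on the other end of the pipe, read one frame's worth of pixels at a time, and hand that buffer to SendImage. The window title, the scaled size and the helper name below are only placeholders:
ffmpeg -f gdigrab -framerate 30 -i title="My Video Player" -vf scale=1280:720 -f rawvideo -pix_fmt rgba pipe:1 | SpoutPipeSender 1280 720
The helper would simply loop, reading 1280 x 720 x 4 bytes per frame from stdin and passing that buffer to SendImage with the same width and height.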
There is a lot to consider here, but as it happens I do have a project that might help with using FFmpeg in this way. It depends on what you are doing and how that might fit in.