Using ffmpeg to record a Spout sender


I am currently using OBS to record to a file the output of a Spout sender.

I would like to stop using OBS and directly use the ffmpeg command line to record my Spout sender.

I have tried listing devices like so: ffmpeg.exe -list_devices true -f dshow -i dummy

But I don’t see my Spout sender.

I am using the latest version of Spout on Windows 10.

Is there any way to record a Spout sender simply by using the ffmpeg command line?

Or do I have to use SpoutRecorder (GitHub - leadedge/SpoutRecorder), a Windows console application to encode Spout input with FFmpeg?

I tried using it as a normal command line (.\SpoutRecorder.exe -start -t 10 E:\recordings\output.mp4), but there are many issues with it.

The “-t 10” standard ffmpeg option is not taken into account. The recording does not stop after 10 seconds.

The output destination is taken into account, but an intermediate file is created in DATA/Video, which does not seem to be changeable. In my case that is a system drive with very little space.

It completely messes up the console: the ffmpeg output is mixed in with the SpoutRecorder text-based user interface.

Also, I cannot simply use CTRL+C and the other usual ffmpeg command-line hotkeys, because the SpoutRecorder text-based user interface redefines them with ALT+Q and others.

And when I press ALT+Q, it closes the console entirely, which means I lose all the console output; each time I try different ffmpeg parameters I need to re-open a console window and set up the command again.

I have no need for a user interface, just a simple command.

Ideally I just need an ffmpeg option like “-f x11grab” that simply grabs the Spout sender.

Is there any way this is possible?

Thank you

Spout senders are not recognised directly by FFmpeg. The listing you show is for DirectShow devices. SpoutCam will show in the list if it has been registered. The latest release is here. Then you could record that using a simple command line.
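Once SpoutCam is registered, a recording command might look like the following sketch (the frame rate, size, duration, and encoder settings here are illustrative assumptions, not required values):

```shell
# Sketch: record the SpoutCam DirectShow device via ffmpeg's dshow input.
# -framerate and -video_size request a capture mode; match them to SpoutCamSettings.
# -t 10 is handled by ffmpeg itself, so the recording stops after 10 seconds.
ffmpeg -f dshow -framerate 30 -video_size 1920x1080 -i video="SpoutCam" -t 10 -c:v libx264 -preset ultrafast output.mp4
```

Because -t is processed by ffmpeg directly, the usual command-line behaviour (duration limits, CTRL+C to stop) works as expected, with no console UI in the way.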

The SpoutRecorder console project has now been replaced, and the replacement will be included as a demonstration program for the next Spout release, hopefully very soon.


Thank you very much for your help.

Is there any way to register any Spout sender as a DirectShow device without the use of SpoutCam?

I have the need to record 4K 60 fps videos and I am trying to limit the number of applications that are required to save some CPU, GPU and memory resources.

I see the new SpoutRecorder console uses openFrameworks, which is awesome.

Unfortunately it is still a console application and not a command-line application, so it suffers from all the pain points I mentioned in my previous post.

Thanks again for your help.

You can only register a Spout Sender as a DirectShow device by way of SpoutCam. A Spout sender is basically a shared texture and nothing to do with DirectShow or FFmpeg. A command line using SpoutCam is the only way without a host like a console or window application.

If there is an ffmpeg libavdevice for DirectShow (ffmpeg/blob/master/libavdevice/dshow.c) or for X11 raw capture (ffmpeg/blob/master/libavdevice/x11grab.c), can’t there be one for a “shared texture”?

I tried using SpoutCam + ffmpeg.

ffmpeg.exe -f dshow -rtbufsize 2147.48M -i video=SpoutCam -c:v libx264 -b:v 58000k -preset ultrafast

GPU 3D usage (as seen in Task Manager) of my 4070 goes to 100%, with frame drops. CPU is fine.

Using OBS with the same encoding method (libx264 / CPU encoding), GPU 3D usage goes to 70%, without frame drops.

There is clearly a penalty going through SpoutCam, which unfortunately makes it unusable for 4K 60 fps content.

(SpoutSettings are all at default values; in SpoutCamSettings, only the FPS is set to 60.)

I have created a ticket in the ffmpeg project: #10766 (Feature request - Spout capture).

All alternative solutions I tried are unfortunately inefficient workarounds.

Let’s hope somebody has an interest in this request.

SpoutCam is intended to emulate a webcam of typical dimensions, 1920x1080 maximum. OBS takes Spout input using the texture directly, but DirectShow requires a CPU image buffer and pixels are unloaded from a texture to CPU. 4K is too much data for efficient operation. Also 60fps is asking a lot. 30fps might work better.
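The data rate involved can be estimated with quick arithmetic (assuming 4 bytes per pixel for the RGBA shared texture; the DirectShow RGB buffer is 3 bytes per pixel, so slightly less):

```shell
# Uncompressed frame data that must cross from GPU to CPU each second:
echo "1080p60: $((1920*1080*4*60 / 1000000)) MB/s"   # ~500 MB/s
echo "4K60:    $((3840*2160*4*60 / 1000000)) MB/s"   # ~2 GB/s
```

Roughly four times the data moves per second at 4K60 versus 1080p60, which illustrates why the readback path that is fine for webcam-sized frames becomes demanding at 4K.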

Whether at 1080p or 4K, 30 or 60 fps, there is something in the way SpoutCam operates that makes it use more GPU resources (as seen in the 3D graph in Task Manager) than OBS or SpoutRecorder.

That means more energy used and less GPU resource available for other tasks, such as actually rendering 3D in games for example.

You describe OBS as using the texture directly. But that texture is also at some point downloaded to the CPU to be given to the input of ffmpeg (libx264 uses the CPU for encoding).

How is this different from what’s happening in SpoutCam?

SpoutCam must conform to the DirectShow protocol, and that means a CPU RGB image buffer rather than the GPU RGBA pixels of the shared texture. For maximum efficiency, SpoutCam uses a dual staging texture copy and SSE RGB image conversion as required for DirectShow. OBS takes the Spout texture directly and does not have to conform to the requirements of DirectShow. There is no more that can be done, other than to develop another solution.

Thanks for taking the time to go into the details. I am learning a lot.

Both functions you linked seem to run on the CPU, which would not explain why I am seeing 100% GPU usage versus 70% for OBS/ffmpeg.

I would have to profile it to find out exactly. I can say that while the SSE part is all CPU, as is the DirectShow processing, the two staging textures involve a GPU-to-CPU copy. At 4K you will have three 4K textures busy. That’s a lot of memory.
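For reference, a rough estimate of that texture footprint (again assuming 4-byte RGBA texels; an assumption for illustration):

```shell
# Approximate size of the 4K textures kept in flight:
echo "one 4K RGBA texture: $((3840*2160*4 / 1000000)) MB"     # ~33 MB
echo "three in flight:     $((3*3840*2160*4 / 1000000)) MB"   # ~100 MB
```

A total on the order of 100 MB is well within a modern GPU's memory, which suggests the bottleneck is the copy traffic and command queue rather than memory capacity.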

I have monitored GPU memory and never maxed it during my testing.

It is really the 3D usage graph that goes to 100%, which leads to poor performance in other applications that need the 3D engine, and to dropped frames in the output video file.

I think what might be happening is build-up of the D3D11 command queue. At 4K texture size, I can reproduce the high GPU usage.

It is possible that because the texture copy command “CopyResource” and the command queue “Flush” are asynchronous, the driver continues to queue the commands to the limit.

There is a method that waits for command queue completion and I have confirmed that this reduces GPU load. I can introduce this when time permits if more testing does not show any problems. However, this still will not provide a solution for recording at 4K.

Thanks for taking a stab at these highly technical details.

Using OBS and ffmpeg CPU h264 encoding, I now see 0% additional GPU 3D usage after pressing “Record” at 4K 60fps.

I used to have at least 50% GPU 3D usage; I simplified my scene workflow, among other things.

This means OBS is currently the best solution for my use case, but it is far from ideal.

It crashes, has a UI which consumes CPU and memory, and it is not a command line.

Considering the overhead of DirectShow as you mentioned, the ideal solution seems more and more to be a Spout libavdevice integrated into ffmpeg.

You could also try NDI “Application.NDIRecording.exe” that comes together with Studio Monitor. There is no documentation that I can find. Open a command console and type in the name for help.

Spout to NDI can convert 4K successfully and, with all options enabled, produces a YUV compressed image which is the most efficient for NDI.

NDI does not allow specifying target bitrates, quality (CRF parameters), or multi-threading options.

And it adds overhead because of the network layer.

It is not usable for my use case.

Thanks for trying to find alternatives.