Spout

NDI to Spout command line?

Hi there and thanks for your continuous work on this whole project!

Is there any way to start the NDI to Spout application with a commandline argument (or via some settings file) to specify which NDI source should be used?

(And: Are the sources for the “Spout to NDI” applications available online? I couldn’t find them.)

Cheers

I am sorry, but I have not made a repository for the project, although the underlying Openframeworks addon ofxNDI is open source.

As it happens, I have had contact from somebody else who needed a command line option for connecting with a required Spout sender. This is possible using certain functions in the Spout SDK and works quite well. It’s awaiting results of practical testing and has not been released yet.

This is with Spout to NDI, rather than NDI to Spout as you suggest. Although NDI does not have the same functions, it might be possible by other means. I will look into it, but can’t invest the time for a complete solution at the moment.
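For anyone who wants to experiment before the release, the mechanics are straightforward. A minimal sketch (the real option syntax hasn't been published, so the argument handling here is purely an illustration):

```cpp
#include <string>

// Sketch only: take the first command-line argument as the wanted sender
// name. The released applications' actual option syntax may differ.
std::string senderNameFromArgs(int argc, char* argv[]) {
    return (argc > 1) ? std::string(argv[1]) : std::string();
}

// With the Spout SDK, the receiver could then be locked to that name
// before the usual receive calls, e.g. (assuming a 2.007-style API):
//   receiver.SetReceiverName(name.c_str());
```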

Thanks for your quick answers!

@spout Can I bother you with another question regarding the NDI to Spout app: I see the process uses significant GPU time with 4K video. I wonder if we could lower the GPU use if the app didn't display anything itself?
I guess it’d be next to nothing but I have to deal with a hungry main application so every little bit would help :slight_smile:

Maybe you could consider putting the sources online as a starting point for endeavours like this.

Yes I see the GPU usage problem. I will see what I can do about that. Specifying the receiver could also be possible within the ofxNDI addon.

Great, thanks!

As for the ofx addon: I've actually dabbled in ofx some years ago on the Raspberry Pi, but that was it. That is why I don't feel up to the task of creating a (performance-critical) app from the ground up, even if it just means plugging two plugins together :slight_smile: Fiddling with some existing sources would be much easier for me.

My version of ofxNDI is there already for an Openframeworks application. There are now many others (pity about the same name).

I find that GPU performance can be improved at the resolution you mention by sending only when a frame is received. For example, if NDI is sending at 30 fps, the Spout sender can follow that rate. Originally there was trouble with Test Patterns sending at 1 fps and flashing, but I can get around this.

However, setting the receiving name is a different problem because ofxNDI has been set up to use an index into the NDI sender list rather than a name. If a name is specified it has to be checked against the list… I am looking at this at the moment and have found so far that implementing this can get out of hand, with the code becoming badly bloated. It's a change needed in ofxNDI rather than in the application itself.
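The core lookup itself is simple enough; it's the surrounding waiting and fallback logic that tends to bloat. As a rough sketch (function name and list type are illustrative only, not actual ofxNDI code):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Sketch only: map a requested NDI sender name to an index in the
// sender list. Returns -1 if the name is not (yet) in the list, so the
// caller can keep waiting for it, or fall back to index 0.
int findSenderIndex(const std::vector<std::string>& senders,
                    const std::string& wanted) {
    for (std::size_t i = 0; i < senders.size(); ++i) {
        if (senders[i] == wanted)
            return static_cast<int>(i);
    }
    return -1; // not found
}
```

The complication is everything around this: the list changes as senders come and go, so the check has to be repeated and combined with a sensible default when the name never appears.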

There has been interest in ofxNDI for the rpi and changes have been contributed for Linux. Presumably it works OK but I have had no feedback.


Ah ok, so I guess right now it follows the vsync of the monitor? That’s good to know!

Do you know if this list is sorted by name in some way, or does it follow another pattern (e.g. sorted by start order)? If it's sorted by name, that'd already be enough for my use case, as we have a fixed number of NDI streams. But a general solution would be great of course.

Just tested it: Yes it is.

That might introduce some trouble, e.g. when monitor and NDI stream have equal framerates but are not in sync (which they never will be without some additional sync tech I guess).

So an option to only send when received as you said makes sense!

Here is a pre-release for testing:

Both “Spout to NDI” and “NDI to Spout” can be started from a command line. Details are in the documentation. GPU performance for “NDI to Spout” should be improved and perhaps the command line option will work for you.

As for index order, the NDI sender list is not sorted and comes as NDI produces it. I don’t know any details about that process.

@krisrok “NDI to Spout” only sends when received, but has to draw all the time, or flashing occurs due to different frame rates. To eliminate the draw time, minimize the application to the Taskbar and Openframeworks will not render.
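The effect of decoupling the two rates can be shown with a toy simulation (illustrative only; a 60 Hz draw loop against a 25 fps NDI source):

```cpp
// Toy simulation of one second of the draw loop: the window redraws at
// the monitor rate every tick (so nothing flashes), but a Spout frame is
// only sent when the NDI receiver actually delivered a new frame.
int countSends(int drawHz, int ndiFps) {
    double ndiInterval = 1.0 / ndiFps; // time between NDI frames
    double nextFrame = 0.0;            // when the next NDI frame arrives
    int sends = 0;
    for (int tick = 0; tick < drawHz; ++tick) {
        double t = static_cast<double>(tick) / drawHz;
        // draw happens on every tick regardless
        if (t >= nextFrame) {     // a new NDI frame is available
            nextFrame += ndiInterval;
            ++sends;              // only now send to Spout
        }
    }
    return sends;
}
```

Over one second the window is drawn 60 times but only 25 Spout sends occur, which is where the GPU saving comes from.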


Thanks so much, will report back next week!

I think it’s OK, so I hope it works for you.

Compared to “Spout to NDI”, there is an advantage: if there is a network drop-out, the application already waits until the sender returns.

All it needed was a way to specify that sender so that the application waits for it to start.

I have commented the logic of this in the CreateReceiver function of ofxNDI in ofxNDIreceive.cpp.

The connection is a bit slow but I can’t do much about that.

Hey there, I just tested the prerelease version and it doesn’t have the effect I expected, it still seems bound to the monitor’s refresh rate and the GPU is still as busy as in the version before.

My test case is sending a 3840p25 video via VLC's NDI output. The GPU is an RTX 2070 SUPER.
I naively read the GPU load values from Task Manager set to the high update interval, as I don't have a better tool at hand.

When I use “NDI to Spout” on a 60Hz display it consumes about 40-45% GPU when minimized. Interestingly, when it’s in windowed mode it fluctuates between 25-50%.

With the display set to 30Hz I get 10-11% GPU load. Minimized or not does not make a difference.

Those are my findings with the 2.009 pre-release. With 2.008 it performs exactly the same for me. I just noticed one tiny difference: when on 60Hz the load fluctuates no matter if the app is minimized or not :slight_smile:

That’s strange because the sending rate should now be exactly the same as that received from NDI. ReceiveImage does not return true unless video data is received. I have confirmed a significant drop in GPU usage using Task Manager and Performance Monitor. I get about 6% for a 4K NDI source. I will look at it again carefully though.

Strange indeed. How do you generate your NDI input?

This time I used OBS to output a 4K video to NDI, this way I can easily switch between 60fps and 10fps output.

60fps NDI stream and 60Hz display result in about 30% GPU load for “NDI to Spout”.
When I set OBS to 10fps it drops to around 15-20%.
But when I set my display from 60 to 30Hz the GPU load significantly drops to <10%.

I’ve repeated the same actions for 2.008 and 2.009 and recorded it with PerfMon:

2.008: (PerfMon recording)

2.009: (PerfMon recording)

Edit: We’ve seen this same behaviour on three PCs so at least it’s not just my machine :slight_smile:

Sorry to mislead, but my previous figures were based on a 1920x1080 source, not 4K. 3840x2160 does not do so well. Sending rate is the same as that produced by NDI up to a point. I see around 24 fps for a 4K source, so it doesn’t matter if it’s running at 60 fps, that rate will not be achieved for the Spout output.

The function that takes the time and consumes GPU resources is loading the texture with pixel data from the CPU; the OpenGL function is glTexSubImage2D. Unlike NDI, the data is uncompressed, so time per frame increases significantly with the 4K frame size.
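The arithmetic makes the cost clear. Assuming 4 bytes per pixel (RGBA), each 4K frame is about 33 MB of uncompressed pixel data, so even at 25 fps roughly 830 MB/s has to cross the CPU-to-GPU path:

```cpp
#include <cstdint>

// Bytes glTexSubImage2D has to upload for one full frame
// (assuming an uncompressed format such as 4-byte RGBA).
std::uint64_t bytesPerFrame(int width, int height, int bytesPerPixel) {
    return static_cast<std::uint64_t>(width) * height * bytesPerPixel;
}

// 3840 x 2160 x 4 = 33,177,600 bytes per frame;
// at 25 fps that is ~830 MB/s of texture upload.
```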

It certainly uses GPU resources but I can’t achieve any better. I have tried different methods such as receiving into pixels instead of a texture, but it isn’t as efficient and at some point the NDI pixels have to be uploaded to a GPU texture.

Changing the refresh rate of the monitor will change the Openframeworks draw cycle speed and the rate at which the NDI source is sampled. I am not sure how this would interact with the NDI source rate.