VMem leak, or am I doing something wrong?

I have my own video player and I'm using the latest version of Spout2. So I:

  1. Create the sender using CreateSender(senderName, Width, Height);
    But the final texture size may be different from the sender's Width/Height. It's actually dynamic.
  2. Play videos and send the GL texture using SendTexture(TextureID, TextureTarget, Width, Height, bInvert, HostFBO);

Everything is OK if I play only one file, no matter how long it plays, but when I open other videos with different resolutions (4K or 8K) one by one, I get a VMem leak until I hit a GL 1285 error. Same behavior for 8 GB and 16 GB cards, so my hardware is not the problem.
So the question is: should I recreate the sender when the video dimensions change, or is there another way to clean it up? Should I do the same with receivers?

Hello Igor

I will run some GPU memory leak checks. Preliminary tests don’t show a leak.

Meanwhile, there certainly would be a problem if the width and height you use for the sender are not the same as the texture dimensions. It’s important that the “SendTexture” width and height arguments match the texture dimensions.

When these dimensions change, SendTexture updates the sender size to the width and height passed in. You don’t have to update the sender within your application as long as the changed texture dimensions are passed to SendTexture.
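
For illustration, a per-frame call along these lines is all that should be needed; the SpoutSender wrapper class, the header name and the function below are placeholders to adapt to your own player code, not taken from it:

```cpp
// Per-frame send sketched with the SpoutSender wrapper class
// (class/header names assumed; frameWidth/frameHeight are whatever
// the player's current decoded texture dimensions happen to be).
#include "SpoutSender.h"

void SendCurrentFrame(SpoutSender& sender, GLuint textureID,
                      unsigned int frameWidth, unsigned int frameHeight)
{
    // Passing the current dimensions is enough; SendTexture re-sizes the
    // sender when they differ from the previous frame.
    sender.SendTexture(textureID, GL_TEXTURE_2D, frameWidth, frameHeight,
                       false,  // bInvert
                       0);     // HostFBO
}
```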

Receivers detect a sender size change by testing “IsUpdated()” at each render cycle. When that returns true, the receiving texture must be re-sized.
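
For example, a receive cycle that copies into its own texture might look like the following sketch, using the 2.007-style receiver functions; the glTexImage2D re-allocation shown is just one way to re-size the receiving texture:

```cpp
// Receive cycle for a receiver that keeps its own copy of the texture.
// ReceiveTexture, IsUpdated, GetSenderWidth and GetSenderHeight are the
// 2.007 receiver functions; "myTexture" is the application's own GL texture.
void ReceiveFrame(SpoutReceiver& receiver, GLuint myTexture)
{
    if (receiver.ReceiveTexture(myTexture, GL_TEXTURE_2D)) {
        if (receiver.IsUpdated()) {
            // Sender size has changed: re-allocate the receiving texture
            // to the new sender dimensions before using it.
            glBindTexture(GL_TEXTURE_2D, myTexture);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
                         receiver.GetSenderWidth(), receiver.GetSenderHeight(),
                         0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
            glBindTexture(GL_TEXTURE_2D, 0);
        }
        else {
            // ... draw with myTexture ...
        }
    }
}
```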

Hello, nice to hear from you!
I changed my “sender” code to check whether the texture dimensions have changed; then I just release and recreate the sender. Now, when switching videos/pictures in the playlist, the “dedicated GPU memory” in Task Manager for my player goes up and down as it should :slight_smile: Now the problem is with receivers; dedicated GPU memory always grows for them…
There is only one cycle in the receiver: it checks whether the texture is received, then it calls
BindSharedTexture(), renders it and calls UnBindSharedTexture(). So it does not copy the shared texture to a new texture.
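
Roughly, that cycle looks like this (simplified; the no-argument ReceiveTexture() connection call here only stands in for my actual "texture is received" check):

```cpp
// Simplified version of the cycle described above: no local copy,
// just bind the sender's shared texture, draw, and unbind.
void DrawSharedTexture(SpoutReceiver& receiver)
{
    if (receiver.ReceiveTexture()) {          // connect / keep the link to the sender updated
        if (receiver.BindSharedTexture()) {   // bind the shared texture for drawing
            // ... render a textured quad here ...
            receiver.UnBindSharedTexture();
        }
    }
}
```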

BTW, I have just tried using UpdateSender() when the texture dimensions change, without releasing the sender, and it made no difference; the VMem leak remains. I tried both Nvidia and AMD hardware and both have the same problem, so it seems like my fault =)
The use case of this player is like the VIOSO one, or many others, if you know what I mean :wink:

Also, I launched both ofSpoutReceiver.exe and ofSpoutSender.exe from the SDK examples folder and started resizing the sender window back and forth, and got a VMem leak only for the receiver (dedicated VMem always grows); the sender seems OK here… That's strange…

I made a sender producing a texture starting at 640x360, increasing in size every second from 640 to 7680 width and proportional height, and repeating this cycle.

I also made a receiver using BindSharedTexture and UnBindSharedTexture exactly as you have mentioned and confirmed that “IsUpdated” is not necessary if there is no receiving texture.

I tested these programs, as well as the Spout demo receiver, to monitor GPU usage.

I monitored using Task Manager > Performance > GPU, GPU-Z, and within the sending and receiving applications using the OpenGL memory extensions, in my case ATI. There are similar extensions for NVIDIA, but I have not tested those.
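
For reference, the in-application check is just a glGetIntegerv of the vendor memory-info value, along these lines (extension-presence checks omitted for brevity; values are reported in KB):

```cpp
// In-application VRAM check via the vendor OpenGL memory-info extensions.
#include <windows.h>
#include <GL/gl.h>

#define GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX 0x9049  // GL_NVX_gpu_memory_info
#define GL_TEXTURE_FREE_MEMORY_ATI                       0x87FC  // GL_ATI_meminfo

// Returns the currently available video memory in KB.
GLint AvailableVideoMemoryKB(bool bNvidia)
{
    GLint kb[4] = { 0, 0, 0, 0 };
    if (bNvidia)
        glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, kb);
    else
        glGetIntegerv(GL_TEXTURE_FREE_MEMORY_ATI, kb); // kb[0] = total free pool
    return kb[0];
}
```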

In all cases, I monitored Dedicated Memory Usage and observed the expected increase and decrease, but did not find any increase over time. I ran the test for over an hour.

I tried using ofSpoutReceiver.exe and ofSpoutSender.exe from the examples folder but do not see any increase in GPU usage when the sender is re-sized.

There has to be a reason for what you observe, but I can’t duplicate your findings. I am wondering if it is a GPU hardware/driver problem with the OpenGL/DirectX interop extensions.

Please try the DirectX examples Tutorial04 (sender) and Tutorial07 (receiver). This will eliminate the OpenGL/DirectX interop.

What is your graphics hardware and driver version?

I currently have two rigs. The first has two Quadro A5000s with the NV 531.41 New Feature driver, on Win11.
The second is a single 5700 XT, on Win10 (btw I will test it). The first time I faced the problem, my rig was equipped with an RTX 2700 + 5700 XT (both had the latest drivers, Game Ready for NV), later (in a week or so) with a Quadro RTX 4000 + 5700 XT, and a week after that with the two A5000s.

Tutorials 4 + 7 give:
The receiver (cube with the sender texture) is OK; VMem goes up and down. But after roughly 15 clicks on the “Maximize/Restore” window button, the sender starts to consume 1.5 GB of VRAM. I have a 4K monitor. That's with the two Quadro A5000s.
upd: Checked the second rig with the single 5700 XT and got the same behavior for the sender.

That's my rig: 3 GB after ~30 clicks. Don't forget to manually add the “Dedicated VRAM” column to your Task Manager. I have two columns, “Dedicated VRAM” and “Shared VRAM”; Shared VRAM shows nothing. Just to eliminate any kind of misunderstanding. I'm not a native English speaker, as you can see. :blush:

upd: I see a lot of threads for the sender, btw. 1468 here (NV screenshot; AMD's does not have this column). Checked: growing by ~20 with every sender window resize.
upd2: User Objects are also increasing, and so is dedicated RAM (not VRAM): AMD screenshot shows 1 681 140 K (there is no such column in the NV screenshot) and 2 628 552 K for VRAM.
Hope you understand what is what. In short, the first columns are all about RAM and the last columns before “command prompt” are all about GPU and VRAM.

I think what might be happening is DirectX deferred destruction when the textures and other objects are released. I checked that all objects are released and have also introduced a context flush after every release which should destroy any objects whose destruction has been deferred.
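
In outline the change amounts to something like this; the function and variable names below are placeholders for illustration, not the actual SDK code:

```cpp
// Release the D3D11 objects and then flush the immediate context so that
// deferred destruction is processed right away rather than at the driver's leisure.
#include <d3d11.h>

void ReleaseAndFlush(ID3D11DeviceContext* pImmediateContext,
                     ID3D11Texture2D*& pSharedTexture)
{
    if (pSharedTexture) {
        pSharedTexture->Release();
        pSharedTexture = nullptr;
    }
    if (pImmediateContext) {
        // Without the flush, the driver can hold the released texture
        // until it decides to empty its deferred-destruction queue.
        pImmediateContext->Flush();
    }
}
```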

I used the Tutorial04 sender for testing and observed an increase and decrease of GPU dedicated memory for each size change up and down, and this does not grow. The number of threads and objects is steady.

I have uploaded the revised programs to the Beta branch. The code changes are also up to date in this branch.

Please try Tutorial04.exe from here and let me know if there is any difference.

Tutorial04 works well now. No resource leak found.

I also tried both Beta branch demos, SpoutReceiver.exe and SpoutSender.exe… I manually resized the sender's window several times.
User objects - OK
Threads - OK
Dedicated RAM and VRAM - not OK (leak)
Both executables are affected.

and for
SpoutLibraryReceiver.exe
SpoutLibrarySender.exe
User objects - OK
Threads - OK
Dedicated RAM and VRAM - not OK (leak: RAM/VRAM decreases, but the level it drops back to keeps growing)
Both executables are affected here as well.

Thanks for your time and attention to this. I think we have found the problem. We are not in control of what the driver decides, but I don't mind if dedicated RAM decreases!

Thank you! Glad to be useful.

Have you tried using the glInvalidate… functions for textures and buffers? Maybe that would help?
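
Something like this is what I mean (just a sketch using the OpenGL 4.3 / ARB_invalidate_subdata entry points, which need a GL loader such as GLEW or glad; I have no idea whether it would actually affect this leak):

```cpp
// Tell the driver the contents of a texture and a buffer are no longer needed.
void InvalidateResources(GLuint textureID, GLuint bufferID)
{
    glInvalidateTexImage(textureID, 0);  // mark mip level 0 as discardable
    glInvalidateBufferData(bufferID);    // mark the buffer's data store as discardable
}
```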

Thanks for the suggestion, but I didn’t test that because we eliminated OpenGL by using the Tutorial04 example and found a conclusive result, so I think that DirectX deferred object destruction is at the heart of the problem.

It could still be a useful option on the OpenGL side if the problem persists, so meanwhile please use the Beta branch code and keep an eye on any leaks you might observe.

I tested my player again with the new beta.
The leak still exists, however it's not so gigantic now :slight_smile: I don't see a dedicated RAM leak anymore, but dedicated VRAM still leaks a bit for receivers (it is really a problem when you have 8, 10 or more of them). I kept the complete sender recreation in my player for when the texture dimensions change and monitored only the receivers. So my receivers' VRAM leaks with GL. I tried releasing/recreating the receiver when the texture dimensions change, but it doesn't help at all.

If you tell me what and how to test, I'll test it for you! :slight_smile:
I understand that others more likely use a fixed-size framebuffer in their projects and just don't see the problem. Resolume, for example, uses a fixed-size composition. We have direct contact with the guys from VIOSO, so I'll ask them whether they have met this problem or not, but I think they also use a fixed-size FB + more DX than GL.

ps. I use the static lib to build my own DLL, btw, so could we also have a beta /MT static build, not just the dynamic one? I don't like additional DLL dependencies :slight_smile:

OK, I will look into it.
The library files are recent but have not been re-built with the changes yet. You can use CMake to re-build them; there is a “Building the libraries” PDF documentation, and you have the /MT build option there.

I rebuilt spout_static.lib manually with /MT from the beta as you told me, thanks!
I removed the code from the player that checks whether the texture dimensions have changed and recreates the sender, and it works really great now, with or without UpdateSender(): no VMem leak detected! But all GL receivers still feel sick :slight_smile: (I also tested ofSpoutReceiver.exe with my player.)

I think we have found a driver leak. This was reported back in 2021. At that time it was resolved by a driver update, but it seems to have returned. I ran the test again and found exactly the same D3D debug result with AMD Radeon, but no leak with GeForce. I will investigate further.

I would test the production branch of NV drivers, but things like:
Open Issues in Version R525 U7

  • [MultiGPU][Mosaic] Black bars and tearing intermittently occurring

just scare me, because this is the key feature :slight_smile:

PS to my previous post: the sender is OK with NV as I mentioned above, but the AMD 5700 XT is not OK; sender and receiver both still leak. I just tested the 2nd AMD rig with the new edits.
Will test AMD's 23.4.3 driver and come back. :smiley:
23.4.3 is bad too.

Back in 2017 this problem was reported for Intel drivers. It was supposedly resolved, but in that post there are some useful test programs and I will see what I can do with this.

I decided to check some older NV drivers: the 470 series driver (474.30) does not work for me at all. Win11 + Quadro A5000.
R515 U7 (518.03) WHQL works, but has the same problem with receivers (receivers leak) as the latest New Feature R530 U2 (531.41) Quadro driver.
Can you open a ticket, if you don't mind?

I got my old ThinkStation P320 Tiny working. It has a Quadro P600 + i7 7700T integrated graphics (Intel HD 630). I updated the Intel graphics driver to the latest 101.2121. It does not report dedicated GPU VRAM, though; it just shows 0 KB. It uses dedicated RAM, and after resizing the ofSpoutSender.exe window, the dedicated RAM of ofSpoutReceiver.exe always slowly decreases back to low values. So I can conclude the Intel graphics driver does not have the leak here with OpenGL.
The Quadro P600 with an old 516.xx driver leaks in OpenGL, so Nvidia at least has had this bug for a long time.