VMem leak or am I doing something wrong?

Thanks for this testing. It is certainly an old issue, but at the time it was resolved. To isolate the exact problem, a basic sender and receiver have to be created without using OpenFrameworks, and I will look into that now.

My player and receivers all use one simple class to create the OpenGL context, so I create it manually without any framework.
Currently I create a simple temporary context with these pixel format options:
PFD_DRAW_TO_WINDOW
PFD_SUPPORT_OPENGL
PFD_SWAP_EXCHANGE
PFD_SUPPORT_COMPOSITION
PFD_DOUBLEBUFFER

and then just recreate the context with wglCreateContextAttribsARB
with these attributes:
[WGL_CONTEXT_MAJOR_VERSION_ARB, 4, WGL_CONTEXT_MINOR_VERSION_ARB, 5, WGL_CONTEXT_RELEASE_BEHAVIOR_ARB, WGL_CONTEXT_RELEASE_BEHAVIOR_FLUSH_ARB, WGL_CONTEXT_FLAGS_ARB,
WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB, WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB, 0, 0]

so I end up with a 4.5 core profile context.
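Roughly, the whole setup looks like this (a simplified sketch, not my exact class; the window handle and error handling are left out):

```cpp
#include <windows.h>
#include <GL/gl.h>
#include "wglext.h"   // WGL_CONTEXT_* tokens and PFNWGLCREATECONTEXTATTRIBSARBPROC

// Pixel format for the temporary context
PIXELFORMATDESCRIPTOR pfd = {};
pfd.nSize      = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_SWAP_EXCHANGE
               | PFD_SUPPORT_COMPOSITION | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;

HDC hdc = GetDC(hwnd); // hwnd: an existing window
SetPixelFormat(hdc, ChoosePixelFormat(hdc, &pfd), &pfd);

// Temporary context, only needed to load wglCreateContextAttribsARB
HGLRC hrcTemp = wglCreateContext(hdc);
wglMakeCurrent(hdc, hrcTemp);
PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
    (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");

// Recreate as a 4.5 core profile context with flush-on-release
const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB,    4,
    WGL_CONTEXT_MINOR_VERSION_ARB,    5,
    WGL_CONTEXT_RELEASE_BEHAVIOR_ARB, WGL_CONTEXT_RELEASE_BEHAVIOR_FLUSH_ARB,
    WGL_CONTEXT_FLAGS_ARB,            WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
    WGL_CONTEXT_PROFILE_MASK_ARB,     WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
    0, 0
};
HGLRC hrc = wglCreateContextAttribsARB(hdc, NULL, attribs);
wglMakeCurrent(NULL, NULL);
wglDeleteContext(hrcTemp);
wglMakeCurrent(hdc, hrc);
```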

Thanks for sharing this. I have used a function “CreateOpenGL()” that is already in the library and this does the job. I made a bare-bones sender and receiver and eliminated OpenFrameworks but the problem persists.

I went on to make a leak test program and have narrowed it down as far as I can.

I find that the leak occurs when “OpenSharedResource” is used to obtain the texture pointer from the share handle of a texture created by another process, and that texture is then linked to an OpenGL texture using the WGL_NV_DX_interop2 functions. The functions that cause the leak are wglDXRegisterObjectNV / wglDXUnregisterObjectNV.
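Schematically, the receive path that triggers it looks like this (a simplified sketch, not the actual leak test source; the D3D11 device, the sender’s share handle, the interop device from wglDXOpenDeviceNV and the OpenGL texture are assumed to already exist, and the interop functions are loaded with wglGetProcAddress):

```cpp
#include <d3d11.h>   // plus the same windows/GL/wglext headers as above

// 1. Get the sender's texture from its share handle
ID3D11Texture2D* pSharedTexture = nullptr;
pD3DDevice->OpenSharedResource(dxShareHandle,
                               __uuidof(ID3D11Texture2D),
                               (void**)&pSharedTexture);

// 2. Link it to an OpenGL texture with the NV_DX_interop2 functions
HANDLE hInteropObject = wglDXRegisterObjectNV(hInteropDevice, pSharedTexture,
                                              glTextureID, GL_TEXTURE_2D,
                                              WGL_ACCESS_READ_ONLY_NV);

// ... wglDXLockObjectsNV / use the GL texture / wglDXUnlockObjectsNV ...

// 3. Clean up - this is where memory is not recovered
wglDXUnregisterObjectNV(hInteropDevice, hInteropObject);
pSharedTexture->Release();
```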

Strangely, the leak still occurs if a separate shared texture is produced by OpenSharedResource and copied to the texture that is linked to OpenGL. For reasons unknown, there is no leak if the linked texture is produced with a keyed mutex (D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX).
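For reference, the keyed-mutex variant of the intermediate texture looks roughly like this (illustrative only; width, height, pD3DDevice, pImmediateContext and pSenderTexture are assumed):

```cpp
// Intermediate texture created with a keyed mutex instead of plain
// D3D11_RESOURCE_MISC_SHARED (same headers as the previous sketch)
D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = width;
desc.Height           = height;
desc.MipLevels        = 1;
desc.ArraySize        = 1;
desc.Format           = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;
desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
desc.MiscFlags        = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX;

ID3D11Texture2D* pLinkedTexture = nullptr;
pD3DDevice->CreateTexture2D(&desc, nullptr, &pLinkedTexture);
// ... link pLinkedTexture to the OpenGL texture with wglDXRegisterObjectNV ...

// Per frame: copy the sender's texture into it under the mutex
IDXGIKeyedMutex* pMutex = nullptr;
pLinkedTexture->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&pMutex);
pMutex->AcquireSync(0, INFINITE);
pImmediateContext->CopyResource(pLinkedTexture, pSenderTexture);
pMutex->ReleaseSync(0);
pMutex->Release();
```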

This behaviour is the same on my old hardware: AMD Radeon R200 (driver 21.5.2) and Nvidia GeForce GTX 640 (driver 474.30). More recent drivers for this hardware are not available.

The leak test program allows these options to be tested.

0 - Create new D3D11 texture
Create a D3D11 texture within the application and link it to an application OpenGL texture (no leak).

1 - Open D3D11 texture from sender handle
Create a D3D11 texture using OpenSharedResource from the handle of a shared texture from another process (leak).

2 - Open sender texture from sender handle
Create a separate texture using OpenSharedResource from the sender’s texture handle but don’t link it with OpenGL. This isolates the OpenSharedResource function (no leak).

3 - Copy sender texture to D3D11 texture
Create a separate texture using OpenSharedResource and copy that to another shared texture that is linked to OpenGL (leak).

4 - Copy sender texture to D3D11 texture with Keyed mutex
Create a separate texture using OpenSharedResource, copy to another shared texture created with a Keyed Mutex that is linked to OpenGL (no leak).

It would be valuable if you could run the leak test program and basic sender on your more up-to-date hardware. The source code and Visual Studio project files are included.

Tested with the “coala sender” included in the ZIP :slight_smile:
All tests ran for more than 1000 iterations.

NV A5000 (531.41)
0 - no vmem leak (RAM seems to grow slightly, or the console app just uses some RAM for the console output with each iteration)
1 - huge vmem leak
2 - no vmem leak
3 - seems like a small vmem leak: grows from 17 MB to 34 MB after ~3000 iterations.
4 - no vmem leak

AMD Radeon RX 5700 XT (23.4.3)
0 - no vmem leak (RAM seems to grow slightly, or the console app just uses some RAM for the console output with each iteration)
1 - just an epic vmem leak, three times bigger than the NV one.
2 - no vmem leak
3 - huge vmem leak straightaway
4 - no vmem leak

Thanks for that. It confirms the results I find.
There is a difference for Nvidia with Option 3, but that is not a solution.
I have a few more flush tests to do.

I found that although DirectX flush prevents deferred release of textures, this is not effective for the GL/DX interop.
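In other words, something like the following recovers D3D11 texture memory but does nothing for the interop side (pImmediateContext and pSharedTexture as in the earlier sketch):

```cpp
// Flush the immediate context after releasing D3D11 objects so that
// deferred destruction actually completes...
pSharedTexture->Release();
pImmediateContext->Flush();
// ...but this has no effect on memory tied up by objects that have been
// registered with wglDXRegisterObjectNV and then unregistered.
```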

I added OpenGL memory reporting and ran the program using Option 1 for over 4 hours. As before, the OpenGL memory showed no increase. However, Task Manager showed over 5,000,000,000 KB, which is clearly impossible with a 3072 MB graphics card.

I found this Microsoft article and it would seem that Task Manager can’t be used as a reliable measure in this case. The count is increased but never resolved. The same observation came up last year with Vulkan/DX memory objects.

I added memory reporting to the minimal OpenGL sender and receiver programs. It would be useful if you could test the reported memory for the receiver when the sender is repeatedly maximized. Also please try GPU-Z. The “Sensors” page shows a “Dedicated memory” reading.
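The reporting is based on the vendor memory-info extensions, roughly along these lines (a sketch only, assuming GLEW or equivalent for the tokens; the helper name is mine):

```cpp
#include <GL/glew.h>

// Current available GPU memory in KB, or 0 if neither vendor extension
// (GL_NVX_gpu_memory_info / GL_ATI_meminfo) is available.
GLint AvailableVideoMemoryKB()
{
    while (glGetError() != GL_NO_ERROR) {}  // clear any stale errors

    GLint kb = 0;
    // Nvidia
    glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &kb);
    if (glGetError() == GL_NO_ERROR && kb > 0)
        return kb;

    // AMD/ATI: four values, the first is the free texture pool memory in KB
    GLint ati[4] = { 0, 0, 0, 0 };
    glGetIntegerv(GL_TEXTURE_FREE_MEMORY_ATI, ati);
    if (glGetError() == GL_NO_ERROR)
        kb = ati[0];
    return kb;
}
```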

Edit -

I forgot to give you a new download link

The receiver from the download above shows just a black screen on the NV card when the sender is running; is that normal? AMD is OK here.

upd: currently it looks like this (more than 100 resizes of the sender’s window):

Not a fresh GPU-Z, I just used the one already on this rig.
upd: I see the same behavior when testing my player. VMem gradually fills to the top and then I get OpenGL Error: 1285 (Out of memory). When I remove the line that sends the texture, everything is OK.

upd2: kept clicking; now “current available memory” is -255904 KB (yes, with a minus).
upd3: I ran a second receiver and it reported that almost all vmem was free, but after a few clicks I got just a solid marsh-green colour instead of the texture in both the receiver and the sender, and it seems the first receiver just closed.

So what I see here is that the basic sender and receiver can somehow keep operating even when dedicated memory is full (not for long), but OpenGL can’t create new resources and you get the 1285 error.

Receiver black screen - don’t know. It’s a test program but I can’t see any errors. Try the Spout demo receiver, which is stable. If that shows a black screen, the fault is in the sender.

At least you can get a memory report independent of Task Manager. Your image shows a steady increase in memory usage. I see no memory increase at all for the sender.

The sender of the previous download is equivalent to leak test Option 0 and the receiver to Option 4. I have rebuilt them so that the receiver is equivalent to Option 1, which is the default. The console memory report is now easier to read.

I have also revised the leak test. Of interest for the sender is Option 0: does memory usage increase? There is also a new Option 5, which is equivalent to Option 0 but changes the texture size every cycle. Memory should increase with the larger size and reduce again by the same amount. I have confirmed that this happens with Nvidia and there is no increase over time. But for ATI I see no change at all, which could be a fault with the OpenGL diagnostics.
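Schematically, each Option 5 cycle does something like this (a simplified sketch, not the actual test source; it reuses the illustrative names from the earlier sketches):

```cpp
// Alternate the texture size each cycle, recreating the linked texture
static bool large = false;
large = !large;
const UINT width  = large ? 3840 : 640;
const UINT height = large ? 2160 : 360;

// Release the previous interop link and texture
wglDXUnregisterObjectNV(hInteropDevice, hInteropObject);
pD3DTexture->Release();

// Create a new shared D3D11 texture at the new size and re-link it
D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = width;
desc.Height           = height;
desc.MipLevels        = 1;
desc.ArraySize        = 1;
desc.Format           = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;
desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
desc.MiscFlags        = D3D11_RESOURCE_MISC_SHARED;
pD3DDevice->CreateTexture2D(&desc, nullptr, &pD3DTexture);

hInteropObject = wglDXRegisterObjectNV(hInteropDevice, pD3DTexture,
                                       glTextureID, GL_TEXTURE_2D,
                                       WGL_ACCESS_READ_WRITE_NV);

// Reported memory should drop for the 3840x2160 texture and come back for
// 640x360, with no long-term drift if nothing is leaking.
```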

Download link.

Option 0:

NV
all lines look like this:
New texture ( 640x 360) 27822 : NV : total : 23249056 max : 23271456 min : 22842464 change : 0 KB
VMem in Task Manager is stable.

AMD
“change” always = -1024 KB
“total” decreases
“min” decreases to zero at around 10k iterations, then the console output becomes slow and broken.
VMem in Task Manager is stable.

Option 5:

NV
New size ( 640x 360) 5262 : NV : total : 22783320 max : 23065048 min : 22751640 change : 31680 KB
New size (3840x2160) 5263 : NV : total : 22751640 max : 23065048 min : 22751640 change : -31680 KB
VMem in Task Manager is stable.

AMD
New size ( 640x 360) 344 : ATI : total = 2532659 max = 8344883 min = 2532659 change = -1024 KB
New size (3840x2160) 345 : ATI : total = 2499891 max = 8344883 min = 2499891 change = -32768 KB
New size ( 640x 360) 346 : ATI : total = 2498867 max = 8344883 min = 2498867 change = -1024 KB
New size (3840x2160) 347 : ATI : total = 2466099 max = 8344883 min = 2466099 change = -32768 KB

console output is broken… and then…

New size ( 640x 360) 3130 : CreateSharedDX11Texture ERROR - [0xE] : Unlisted error
GLDXLeakTest null D3D11 texture
New size (3840x2160) 3131 : CreateSharedDX11Texture ERROR - [0xE] : Unlisted error
GLDXLeakTest null D3D11 texture

Driver crashes.

OK, that’s positive for Nvidia. What happens for options 0 and 5 in Task Manager?

Edit
I didn’t notice your Task Manager stable comment. I will look into the implications of that.

For AMD I will investigate.

Option 5 for AMD
VMem is also stable in Task Manager, but you can see the result above.

also for NV
Demo sender SpoutSender.exe + WinSpoutGLreceiver.exe = black screen in receiver’s window.
WinSpoutGLsender.exe + WinSpoutGLreceiver.exe = black screen in receiver’s window.

WinSpoutGLsender.exe + demo SpoutReceiver.exe = ok

For AMD all combinations work. (Maybe that’s why we don’t see the leak with NV? :slight_smile: )

I found a bug in WinSpoutGLsender: the image buffers were not released. In SpoutSettings, un-check “Buffer”. Option 0 will no longer report a memory increase if that’s the cause.

I will try to reproduce and isolate the 1024 KB increase with AMD. Leave WinSpoutGLreceiver for now.

I already had buffer off on both rigs. BTW, today I ran SpoutSettings on the AMD rig, changed nothing, applied, and then got a garbage screen in WinSpoutGLreceiver.exe instead of the koala :slight_smile: just like the black screen on the NV rig. The demo apps work though. Is buffer off by default? I think so. So if I just copy some Spout apps somewhere without running settings (there are no registry entries for Spout, for example), do they use some default settings? The path for selecting a Spout sender in the demo apps, for example, is absent. Could this somehow influence texture transfer behaviour?

Sorry for the delay in attending to this. The errors you see are most likely bugs in the OpenGL demo programs, and these are not useful any more. The “Buffer” option is on by default with 2 buffers. The missing path to “SpoutPanel” for selecting senders just means that the selection dialog will not appear; it will not affect texture transfers. I believe the problem is related to the interop unregister function, where objects are not released, but I cannot reproduce the OpenGL memory leak or create a definitive test. I will follow up with a private forum message.