Linux+MESA+OpenGL: Measuring amount of allocated video memory of an application


Problem

I am currently looking for a command or tool (or something else) that allows me to easily measure the amount of allocated video memory of an OpenGL application running on Linux. Specifically, I am working with the Software-MESA drivers, and the goal is to measure the impact of some code and library changes in a big application.

I already know that there are some big differences between my program versions, but right now the only way I have found to read out "used memory" is via graphics-card-specific commands like nvidia-smi. That is neither portable nor easy to automate: these tools need manual interpretation, and they also require me to run everything on that hardware - not with the software MESA drivers!


So here are the questions in a nutshell (more details below):

  • Is it even possible to do this accurately given the X11+OpenGL architecture on Linux?
  • What are portable options (or maybe also non-portable options) to measure video memory use on Linux?

Is it even possible to do this accurately given the X11+OpenGL architecture on Linux?

I wonder about this because, as far as I understand it, the graphics context is requested from X11 (or your framebuffer or whatever - I am using Xvfb), which then associates the OpenGL context with your application. As a result, and depending on the OpenGL implementation, memory allocation might happen at the level of the X11 server, where it cannot easily be attributed to an individual application. Then again, this might all be different with the software implementation of MESA OpenGL, where everything is done inside the application?! I do not actually know...

What should be possible in any case is to approximate an application's OpenGL-related video memory use by tracing the OpenGL API calls and summing up all the buffer allocations (see the sketch below). I assumed that GALLIUM_HUD would have an option for that, but I could not find any video-memory-related option in the list when calling GALLIUM_HUD=help glxgears - only a bunch of sensors, temperatures, fps, cpufreq... no memory measurements at all.
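To illustrate what I mean by summing up allocations at the API level, a rough sketch (entirely my own; trackedBufferData is a made-up wrapper name, and this ignores frees, re-allocations, textures and driver overhead):

#define GL_GLEXT_PROTOTYPES 1
#include <GL/gl.h>
#include <stddef.h>

static size_t g_allocatedBytes = 0;

/* Hypothetical wrapper: call this everywhere instead of glBufferData
   and keep a running total of the requested buffer sizes. */
void trackedBufferData(GLenum target, GLsizeiptr size,
                       const void *data, GLenum usage)
{
    g_allocatedBytes += (size_t)size; /* does not track deletes/resizes */
    glBufferData(target, size, data, usage);
}

size_t trackedTotalBytes(void) { return g_allocatedBytes; }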

Maybe there is something else to do this, something I missed?

What are portable options (or maybe also non-portable options) to measure video memory use on Linux?

For measuring "general video memory use" on Linux, I know of tools that work with my NVIDIA card, for example nvidia-smi.

However, these tools depend on me having an NVIDIA card, and I would like to avoid the "works on my dev machine" issue. Also: they rely on NVIDIA-specific features, while my deployed application is supposed to use the Software-MESA drivers - and the effective memory load might differ between the two OpenGL implementations!

As such, these tools do not work very well for my case. They are good enough as an approximation, of course, and the only thing I know I can use right now!

But maybe there are others? Is there some "trick" to trace or valgrind the memory use of Software MESA? Maybe some other tools or options at the OpenGL layer? Maybe someone knows something that allows a better form of profiling than interpreting nvidia-smi outputs manually!
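(What I have in mind is something along these lines - assuming llvmpipe allocates its buffers in ordinary system memory, so a heap profiler like Massif would see them:

$ LIBGL_ALWAYS_SOFTWARE=1 valgrind --tool=massif ./my_application
$ ms_print massif.out.<pid>

But I do not know whether that actually captures everything the driver allocates.)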

That is it, help is welcome! I will go back to tracing nvidia-smi logs until then -.-

1 Answer

genpfault

Use ATI_meminfo and/or NVX_gpu_memory_info to get GL free memory at program startup ('A'), then again whenever you want the current allocation ('B'). A - B is then your actual allocated memory.
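A minimal sketch of that A/B measurement (the enum values below come from the two extension specs; both report sizes in KB, and you should check the extension string before querying - note that glGetString(GL_EXTENSIONS) is only valid in compatibility contexts, use glGetStringi in a core profile):

#include <GL/gl.h>
#include <string.h>

/* From the GL_NVX_gpu_memory_info spec: */
#define GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX 0x9049
/* From the GL_ATI_meminfo spec (glGetIntegerv writes 4 values: total
   free, largest free block, total aux. free, largest aux. block): */
#define VBO_FREE_MEMORY_ATI 0x87FB

/* Currently available video memory in KB, or 0 if neither extension
   is present. Call at startup (A) and again later (B); A - B is the
   amount your application has allocated in the meantime. */
static GLint availableVideoMemoryKB(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    GLint kb[4] = { 0, 0, 0, 0 };
    if (ext && strstr(ext, "GL_NVX_gpu_memory_info"))
        glGetIntegerv(GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, kb);
    else if (ext && strstr(ext, "GL_ATI_meminfo"))
        glGetIntegerv(VBO_FREE_MEMORY_ATI, kb); /* kb[0] = total free */
    return kb[0];
}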

Mesa 22.3.3 on a Radeon RX 6700 XT supports both ATI_meminfo and NVX_gpu_memory_info:

$ glxinfo
...
Extended renderer info (GLX_MESA_query_renderer):
    Vendor: AMD (0x1002)
    Device: AMD Radeon RX 6700 XT (navi22, LLVM 15.0.6, DRM 3.49, 6.1.0-5-amd64) (0x73df)
    Version: 22.3.3
    Accelerated: yes
    Video memory: 12288MB
    Unified memory: no
    Preferred profile: core (0x1)
    Max core profile version: 4.6
    Max compat profile version: 4.6
    Max GLES1 profile version: 1.1
    Max GLES[23] profile version: 3.2
Memory info (GL_ATI_meminfo):
    VBO free memory - total: 11410 MB, largest block: 11410 MB
    VBO free aux. memory - total: 15549 MB, largest block: 15549 MB
    Texture free memory - total: 11410 MB, largest block: 11410 MB
    Texture free aux. memory - total: 15549 MB, largest block: 15549 MB
    Renderbuffer free memory - total: 11410 MB, largest block: 11410 MB
    Renderbuffer free aux. memory - total: 15549 MB, largest block: 15549 MB
Memory info (GL_NVX_gpu_memory_info):
    Dedicated video memory: 12288 MB
    Total available memory: 28301 MB
    Currently available dedicated video memory: 11410 MB
OpenGL vendor string: AMD
OpenGL renderer string: AMD Radeon RX 6700 XT (navi22, LLVM 15.0.6, DRM 3.49, 6.1.0-5-amd64)
OpenGL core profile version string: 4.6 (Core Profile) Mesa 22.3.3
OpenGL core profile shading language version string: 4.60
...

However, when forcing software rendering via LIBGL_ALWAYS_SOFTWARE=true those extensions are not reported; the best it can do is GLX_MESA_query_renderer's GLX_RENDERER_VIDEO_MEMORY_MESA (the "Video memory:" line):

$ LIBGL_ALWAYS_SOFTWARE=1 glxinfo
...
Extended renderer info (GLX_MESA_query_renderer):
    Vendor: Mesa/X.org (0xffffffff)
    Device: llvmpipe (LLVM 15.0.6, 256 bits) (0xffffffff)
    Version: 22.3.3
    Accelerated: no
    Video memory: 32026MB
    Unified memory: yes
    Preferred profile: core (0x1)
    Max core profile version: 4.5
    Max compat profile version: 4.5
    Max GLES1 profile version: 1.1
    Max GLES[23] profile version: 3.2
OpenGL vendor string: Mesa/X.org
OpenGL renderer string: llvmpipe (LLVM 15.0.6, 256 bits)
OpenGL core profile version string: 4.5 (Core Profile) Mesa 22.3.3
OpenGL core profile shading language version string: 4.50
...
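For reference, GLX_RENDERER_VIDEO_MEMORY_MESA only reports the total video memory, not the current allocation, so it cannot replace the A/B measurement above. A sketch of querying it programmatically (the token value and entry point are from the GLX_MESA_query_renderer spec; a current GLX context is required):

#include <GL/glx.h>
#include <stdio.h>

#ifndef GLX_RENDERER_VIDEO_MEMORY_MESA
#define GLX_RENDERER_VIDEO_MEMORY_MESA 0x8187
#endif

typedef Bool (*QueryCurrentRendererIntegerMESA)(int attribute,
                                                unsigned int *value);

/* Prints the same "Video memory" value glxinfo shows, in MB. */
static void printVideoMemoryMB(void)
{
    QueryCurrentRendererIntegerMESA query =
        (QueryCurrentRendererIntegerMESA)glXGetProcAddressARB(
            (const GLubyte *)"glXQueryCurrentRendererIntegerMESA");
    unsigned int mb = 0;
    if (query && query(GLX_RENDERER_VIDEO_MEMORY_MESA, &mb))
        printf("Video memory: %uMB\n", mb);
}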