I'm making a Rust desktop app that acts as a kind of 3D model viewer. I need to render both a 3D scene and GTK widgets to control some rendering parameters. I want it to integrate well with the user's desktop and theme, and I'm mainly targeting Ubuntu, hence the choice of GTK. I may eventually port it to other widget toolkits and even other OSes.
The 3D scene would be rendered with wgpu, a cross-platform implementation of the WebGPU API. I believe wgpu currently renders through Vulkan on Linux, but this is abstracted away: the entry point for choosing a rendering surface accepts anything that implements raw_window_handle::HasRawDisplayHandle, which includes an XlibDisplayHandle, an XcbDisplayHandle, a WaylandDisplayHandle or a GbmDisplayHandle, to name a few Linux-related technologies.
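For reference, here's roughly what that entry point looks like (a minimal sketch against wgpu ~0.15 and raw-window-handle 0.5; the exact signature has moved around between releases):

```rust
use raw_window_handle::{HasRawDisplayHandle, HasRawWindowHandle};

// Anything that can hand out raw window/display handles can back a
// surface, regardless of which windowing system produced them.
fn surface_from<W>(
    instance: &wgpu::Instance,
    window: &W,
) -> Result<wgpu::Surface, wgpu::CreateSurfaceError>
where
    W: HasRawWindowHandle + HasRawDisplayHandle,
{
    // SAFETY: `window` must outlive the returned surface.
    unsafe { instance.create_surface(window) }
}
```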
Now, from what I understand, GTK is much higher-level than any of these, and it won't be as easy as just having some kind of "surface" widget that can give me one of these handles. So I have two ideas:
- Render to an offscreen buffer, then hand the image over to GTK. This creates a GPU→CPU round-trip that may not be desirable, but could be acceptable for a 3D model viewer (see the readback sketch after this list).
- Find a way to create a borderless window positioned relative to another window, and attach it to act as the rendering surface (see the second sketch after this list). This means I'd have to implement it for every single display server out there, but I could publish that as a library to help others and receive contributions in return.

Actually, GdkWindow seems to solve this problem for me. That's what GtkGLArea uses internally.
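For the first idea, here's a hedged sketch of the GPU→CPU readback step (written against roughly wgpu 0.16; the `ImageDataLayout` field types differ slightly across versions). It assumes the scene was already rendered into `texture`, created with `TextureUsages::COPY_SRC` and a 4-byte-per-pixel format such as `Rgba8UnormSrgb`:

```rust
// Hypothetical readback step for idea 1: copy the rendered texture into a
// mappable buffer, wait for the copy, and return tightly packed RGBA bytes.
fn read_texture_to_rgba(
    device: &wgpu::Device,
    queue: &wgpu::Queue,
    texture: &wgpu::Texture,
    width: u32,
    height: u32,
) -> Vec<u8> {
    let bytes_per_pixel = 4u32;
    let unpadded = width * bytes_per_pixel;
    // wgpu requires bytes_per_row to be a multiple of 256, hence the padding.
    let padded = (unpadded + 255) & !255;

    let buffer = device.create_buffer(&wgpu::BufferDescriptor {
        label: Some("readback"),
        size: (padded * height) as u64,
        usage: wgpu::BufferUsages::COPY_DST | wgpu::BufferUsages::MAP_READ,
        mapped_at_creation: false,
    });

    let mut encoder =
        device.create_command_encoder(&wgpu::CommandEncoderDescriptor::default());
    encoder.copy_texture_to_buffer(
        texture.as_image_copy(),
        wgpu::ImageCopyBuffer {
            buffer: &buffer,
            layout: wgpu::ImageDataLayout {
                offset: 0,
                bytes_per_row: Some(padded),
                rows_per_image: Some(height),
            },
        },
        wgpu::Extent3d { width, height, depth_or_array_layers: 1 },
    );
    queue.submit(Some(encoder.finish()));

    // Block until the copy is done, then strip the row padding.
    let slice = buffer.slice(..);
    slice.map_async(wgpu::MapMode::Read, |_| {});
    device.poll(wgpu::Maintain::Wait);
    let data = slice.get_mapped_range();
    let mut pixels = Vec::with_capacity((unpadded * height) as usize);
    for row in data.chunks(padded as usize) {
        pixels.extend_from_slice(&row[..unpadded as usize]);
    }
    pixels
}
```

The resulting bytes could then go into a gdk_pixbuf::Pixbuf (e.g. via Pixbuf::from_bytes) and from there into a GtkImage or be painted in a GtkDrawingArea.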
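For the second idea / the GdkWindow route, here's a sketch of what extracting native X11 handles from a realized GTK 3 widget might look like. gdk_x11_window_get_xid and gdk_x11_display_get_xdisplay are real GDK 3 functions (from gdk/gdkx.h); I'm declaring them by hand here rather than assuming a particular -sys crate, the gtk-rs method names match the 0.15-era bindings, and a Wayland session would need the corresponding gdk_wayland_* calls instead:

```rust
use glib::translate::ToGlibPtr;
use gtk::prelude::*;
use raw_window_handle::{
    RawDisplayHandle, RawWindowHandle, XlibDisplayHandle, XlibWindowHandle,
};
use std::os::raw::{c_ulong, c_void};

extern "C" {
    // GDK 3 X11 backend functions, declared by hand.
    fn gdk_x11_window_get_xid(window: *mut gdk_sys::GdkWindow) -> c_ulong;
    fn gdk_x11_display_get_xdisplay(display: *mut gdk_sys::GdkDisplay) -> *mut c_void;
}

fn x11_handles(widget: &gtk::Widget) -> (RawWindowHandle, RawDisplayHandle) {
    // The widget must be realized, otherwise it has no GdkWindow yet.
    let gdk_window = widget.window().expect("widget not realized");
    let display = widget.display();

    let mut wh = XlibWindowHandle::empty();
    wh.window = unsafe { gdk_x11_window_get_xid(gdk_window.to_glib_none().0) };

    let mut dh = XlibDisplayHandle::empty();
    dh.display = unsafe { gdk_x11_display_get_xdisplay(display.to_glib_none().0) };

    (RawWindowHandle::Xlib(wh), RawDisplayHandle::Xlib(dh))
}
```

To feed these to wgpu you'd wrap the pair in a small struct implementing HasRawWindowHandle and HasRawDisplayHandle. One caveat: handing the XID of a GTK-managed window straight to Vulkan means GTK and wgpu both think they own its contents, so rendering into a dedicated child GdkWindow is probably safer.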
What could I do?
Note: I'm doing this in Rust, but any C solution will help me. It's not really hard to translate the code and/or call into C from Rust, since most Rust libraries that wrap C libraries provide escape hatches to work with the underlying C pointers directly.
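For instance, the gtk-rs bindings expose the underlying pointers through the glib translate traits, so a C-oriented answer is usually straightforward to port:

```rust
use glib::translate::ToGlibPtr;

// Borrow the underlying C pointer from a gtk-rs wrapper ("none" transfer:
// no ownership change; the pointer is only valid while `window` lives).
fn as_c_pointer(window: &gdk::Window) -> *mut gdk_sys::GdkWindow {
    window.to_glib_none().0
}
```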