I am a mathematician who wants to program a geometric game.
I have the exact coordinates, and math formulae, of a few meshes I need to display and of their unit normals.
I need only one texture (colored reflective metal) per mesh.
I need to have the user move pieces, i.e. change the coordinates of a mesh, again by a simple math formula.
So I don't need to import 3D files, but rather I can compute everything.
Imagine a kind of Rubik's Cube: the cube coordinates are computed, and cubelets are rotated by the user. I have the program functioning in Mathematica.
I am having a very hard time, for sleepless days now, trying to find exactly how to display a computed mesh in SceneKit - with each vertex and normal animated separately.
ANY working example of, say, a single triangle with computed coordinates (rather than a stock provided shape), displayed with animatable coordinates by SceneKit would be EXTREMELY appreciated.
I looked more, and it seems that individual points of a mesh may not be movable in SceneKit. One feature I like about SceneKit (unlike OpenGL) is that it can tell me which objects are under the user's finger. Can one mix OpenGL and SceneKit in a single project?
I could take over from there....
Animating vertex positions individually is, in general, a tricky problem. But there are good ways to approach it in SceneKit.
A GPU really wants to have vertex data all uploaded in one chunk before it starts rendering a frame. That means that if you're continually calculating new vertex positions/normals/etc on the CPU, you have the problem of schlepping all that data over to the GPU every time even just part of it changes.
Because you're already describing your surface mathematically, you're in a good position to do that work on the GPU itself. If each vertex position is a function of some variable, you can write that function in a shader, and find a way to pass the input variable per vertex.
There are a couple of options you could look at for this:
Shader modifiers. Start with a dummy geometry that has the topology you need (number of vertices & how they're connected as polygons). Pass your input variable as an extra texture, and in your shader modifier code (for the geometry entry point), look up the texture, apply your function, and set the vertex position to the result. (See the first sketch after this list.)
Metal compute shaders. Create a geometry source backed by a Metal buffer, then at render time, enqueue a compute shader that writes vertex data to that buffer according to your function. (There's skeletal code for part of that at the link; the buffer-backed setup is also sketched after this list.)
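To make the first option concrete, here's a minimal sketch of a geometry-entry-point shader modifier. To keep it short, it drives the displacement with a single scalar uniform plus SceneKit's built-in `u_time` instead of a per-vertex texture, but the wiring is the same; `amplitude` is a name I've made up, fed in through key-value coding:

```swift
import SceneKit

// Geometry entry point: displace each vertex along its normal as a function of time.
// `amplitude` is a custom uniform (any name works); `u_time` is provided by SceneKit.
let geometryModifier = """
uniform float amplitude;
#pragma body
_geometry.position.xyz += _geometry.normal * amplitude * sin(u_time);
"""

let sphere = SCNSphere(radius: 1)   // any "dummy" geometry with the topology you need
sphere.shaderModifiers = [.geometry: geometryModifier]
sphere.firstMaterial?.setValue(0.2, forKey: "amplitude")  // bind the input variable via KVC

let node = SCNNode(geometry: sphere)
```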
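And a sketch of the setup for the second option: a single triangle (as you asked for) whose positions and normals live in a Metal buffer, which a compute shader — not shown here — could rewrite each frame before SceneKit draws. The `Vertex` struct, the interleaved layout, and the initial coordinates are illustrative assumptions:

```swift
import SceneKit
import Metal

// Interleaved per-vertex layout; a compute kernel would use a matching struct on the GPU side.
struct Vertex {
    var position: SIMD3<Float>
    var normal: SIMD3<Float>
}

let device = MTLCreateSystemDefaultDevice()!
let vertexCount = 3
let stride = MemoryLayout<Vertex>.stride
let vertexBuffer = device.makeBuffer(length: vertexCount * stride,
                                     options: .storageModeShared)!

// Initial, CPU-computed data for one triangle facing +z; a compute pass can overwrite it later.
let initial = [
    Vertex(position: SIMD3( 0,  1, 0), normal: SIMD3(0, 0, 1)),
    Vertex(position: SIMD3(-1, -1, 0), normal: SIMD3(0, 0, 1)),
    Vertex(position: SIMD3( 1, -1, 0), normal: SIMD3(0, 0, 1)),
]
initial.withUnsafeBytes {
    vertexBuffer.contents().copyMemory(from: $0.baseAddress!, byteCount: $0.count)
}

// Geometry sources that read straight out of the Metal buffer.
let positionSource = SCNGeometrySource(buffer: vertexBuffer, vertexFormat: .float3,
                                       semantic: .vertex, vertexCount: vertexCount,
                                       dataOffset: MemoryLayout<Vertex>.offset(of: \.position)!,
                                       dataStride: stride)
let normalSource = SCNGeometrySource(buffer: vertexBuffer, vertexFormat: .float3,
                                     semantic: .normal, vertexCount: vertexCount,
                                     dataOffset: MemoryLayout<Vertex>.offset(of: \.normal)!,
                                     dataStride: stride)

let indices: [UInt16] = [0, 1, 2]
let element = SCNGeometryElement(indices: indices, primitiveType: .triangles)
let triangle = SCNGeometry(sources: [positionSource, normalSource], elements: [element])
let triangleNode = SCNNode(geometry: triangle)

// In SCNSceneRendererDelegate's renderer(_:willRenderScene:atTime:), encode a compute pass
// that writes new positions/normals into vertexBuffer; SceneKit reads the updated data.
```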
Update: From your comments it sounds like you might be in an easier position.
If what you have is geometry composed of pieces that are static with respect to themselves and move with respect to each other — like the cubelets of a Rubik's cube — computing vertices at render time is overkill. Instead, you can upload the static parts of your geometry to the GPU once, and use transforms to position them relative to each other.
The way to do this in SceneKit is to create separate nodes, each with its own (static) geometry for each piece, then set node transforms (or positions/orientations/scales) to move the nodes relative to one another. To move several nodes at once, use node hierarchy — make several of them children of another node. If some need to move together at one moment, and a different subset need to move together later, you can change the hierarchy.
Here's a concrete example of the Rubik's cube idea. First, creating some cubelets:
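(A sketch, not production code: the box size, spacing, and material settings are arbitrary choices of mine, and `scene` is assumed to be your SCNScene.)

```swift
var cubelets: [SCNNode] = []
let spacing: Float = 1.05   // a small gap between the unit cubes

for x in -1...1 {
    for y in -1...1 {
        for z in -1...1 {
            let box = SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0.05)
            box.firstMaterial?.lightingModel = .physicallyBased
            box.firstMaterial?.metalness.contents = 1.0   // the "colored reflective metal" look
            box.firstMaterial?.roughness.contents = 0.2

            let cubelet = SCNNode(geometry: box)
            cubelet.position = SCNVector3(Float(x) * spacing,
                                          Float(y) * spacing,
                                          Float(z) * spacing)
            scene.rootNode.addChildNode(cubelet)
            cubelets.append(cubelet)
        }
    }
}
```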
Next, the process of doing a rotation. This is one specific rotation, but you could generalize this to a function that does any transform of any subset of the cubelets:
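(Again a sketch: the helper-node name and the layer-selection test are mine, chosen to match the spacing above; what matters is the re-parent, animate, bake-back pattern explained below.)

```swift
// 1. Gather the cubelets in the layer to rotate (here, the top layer by world y)
//    and re-parent them to a helper node. The helper sits at the root with an
//    identity transform, so nothing moves yet.
let rotationNode = SCNNode()
scene.rootNode.addChildNode(rotationNode)
let layer = cubelets.filter { $0.position.y > 0.5 }
layer.forEach { rotationNode.addChildNode($0) }

// 2. Animate a quarter turn of the helper node; all nine cubelets ride along.
SCNTransaction.begin()
SCNTransaction.animationDuration = 0.3
SCNTransaction.completionBlock = {
    // 3. Cleanup: bake the helper's rotation into each cubelet's own transform,
    //    then hand the cubelets back to the root node and discard the helper.
    for cubelet in layer {
        cubelet.transform = cubelet.worldTransform   // capture the composed transform...
        scene.rootNode.addChildNode(cubelet)         // ...then re-parent; world position is preserved
    }
    rotationNode.removeFromParentNode()
}
rotationNode.eulerAngles.y = .pi / 2
SCNTransaction.commit()
```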
The magic part is in the cleanup after the animation. The cubelets start out as children of the scene's root node, and we temporarily re-parent them to another node so we can transform them together. Upon returning them to be the root node's children again, we set each one's local transform to its worldTransform, so that it keeps the effect of the temporary node's transform changes. You can then repeat this process to grab whatever set of nodes are in a (new) set of world-space positions and use another temporary node to transform those.
I'm not sure quite how Rubik's-cube-like your problem is, but it sounds like you can probably generalize a solution from something like this.