I want to build a complex, generic synthesizer with the Web Audio API, and I am not sure how to handle each note:
- A given synthesizer can be built by connecting multiple AudioNodes, and a complex sound can be made of potentially dozens of AudioNodes.
- After playing a given note, all oscillator nodes must be discarded, re-created, and re-connected to the rest of the modules, since an oscillator cannot be started twice.
- Likewise, the nodes that connect to those oscillators to control their parameters (LFOs, envelope generators, etc.) must be disconnected from the old oscillators and re-connected to the new ones.
- I want to write generic code that can handle any synthesizer graph without knowing its structure in advance.
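To make the "generic graph" idea concrete, here is a rough sketch of what I have in mind (the patch format and `buildGraph` are my own invention, not a Web Audio API): the synth is described as data, and every node, including modulator-to-AudioParam connections like an LFO into `osc.detune`, is built from that description without hard-coding the structure.

```javascript
// Hypothetical data-driven patch description: node specs plus connections.
// A connection target is either a node name, or "node.param" for modulation.
const patch = {
  nodes: {
    osc:   { type: 'oscillator', options: { frequency: 440 } },
    lfo:   { type: 'oscillator', options: { frequency: 5 } },
    depth: { type: 'gain',       options: { gain: 30 } },   // LFO depth in cents
    amp:   { type: 'gain',       options: { gain: 0.5 } },
  },
  connections: [
    ['lfo', 'depth'],        // node -> node
    ['depth', 'osc.detune'], // node -> AudioParam (modulation input)
    ['osc', 'amp'],
  ],
};

// Build every node fresh from the description; nothing about the
// synth's structure is known to this function in advance.
function buildGraph(ctx, patch) {
  const nodes = {};
  for (const [name, spec] of Object.entries(patch.nodes)) {
    const n = spec.type === 'oscillator' ? ctx.createOscillator()
                                         : ctx.createGain();
    for (const [p, v] of Object.entries(spec.options)) n[p].value = v;
    nodes[name] = n;
  }
  for (const [from, to] of patch.connections) {
    const [toNode, toParam] = to.split('.');
    // connect() accepts either an AudioNode or an AudioParam as target
    nodes[from].connect(toParam ? nodes[toNode][toParam] : nodes[toNode]);
  }
  return nodes;
}
```

With this shape, "re-create everything for a new note" is just calling `buildGraph` again with the same patch.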
So for every new note I am either forced to do very complex tracking of the node graph, or I can simply re-create all the nodes. My preference, of course, is the simple brute-force approach of re-creating every node for each note. For complex synthesizers, is that bad practice? Will it consume too much memory and/or CPU? What is the recommended approach for this scenario?
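Here is a hedged sketch of the brute-force approach I mean (`createVoice` and its shape are my own, not a Web Audio API): a factory that builds the whole voice graph from scratch on every note-on, and stops and disconnects everything on note-off so the old nodes can be garbage-collected.

```javascript
// Hypothetical per-note "voice" factory: every note builds a fresh graph.
// ctx is an AudioContext (or compatible object); dest is the output node.
function createVoice(ctx, dest, freq, now) {
  // Carrier oscillator: must be new per note, since start() is one-shot.
  const osc = ctx.createOscillator();
  osc.frequency.value = freq;

  // Per-note LFO modulating the carrier's detune, so its phase
  // restarts at note-on (the "trigger" behaviour I want).
  const lfo = ctx.createOscillator();
  lfo.frequency.value = 5;
  const lfoGain = ctx.createGain();
  lfoGain.gain.value = 25;        // modulation depth in cents
  lfo.connect(lfoGain);
  lfoGain.connect(osc.detune);

  // Simple amplitude envelope via a GainNode.
  const amp = ctx.createGain();
  amp.gain.setValueAtTime(0, now);
  amp.gain.linearRampToValueAtTime(1, now + 0.01);
  osc.connect(amp);
  amp.connect(dest);

  osc.start(now);
  lfo.start(now);

  return {
    stop(when) {
      amp.gain.linearRampToValueAtTime(0, when);
      osc.stop(when + 0.05);
      lfo.stop(when + 0.05);
      // After the carrier ends, disconnect so the graph can be GC'd.
      osc.onended = () => { amp.disconnect(); lfoGain.disconnect(); };
    },
  };
}

// Browser-only usage (guarded so the sketch also loads outside a browser):
if (typeof AudioContext !== 'undefined') {
  const ctx = new AudioContext();
  const voice = createVoice(ctx, ctx.destination, 440, ctx.currentTime);
  voice.stop(ctx.currentTime + 1);
}
```

My worry is whether doing this dozens of nodes at a time, possibly many notes per second, is acceptable.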
There is also the alternative of muting the synth at the end of a note, but then the oscillators keep running forever, and it leaves the "trigger" problem: on an LFO, I want the waveform to restart from the beginning when the note starts, and as far as I can tell that can only be done by creating a new oscillator.
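For reference, my understanding from the spec is that `start()` on an `OscillatorNode` (inherited from `AudioScheduledSourceNode`) may only be called once, and a second call throws an `InvalidStateError`, which is why re-triggering seems to require a fresh node. A sketch of what re-triggering would look like (`retriggerLfo` is my own helper name):

```javascript
// start() on an AudioScheduledSourceNode is one-shot: calling it a second
// time throws an InvalidStateError. So to make an LFO's phase restart at
// note-on, the old node is stopped and a brand-new one created in its place.
function retriggerLfo(ctx, oldLfo, target, when) {
  if (oldLfo) oldLfo.stop(when);      // the old oscillator can never restart
  const lfo = ctx.createOscillator(); // fresh node => phase starts at zero
  lfo.frequency.value = 5;
  lfo.connect(target);                // e.g. a GainNode feeding osc.detune
  lfo.start(when);
  return lfo;
}

// Browser usage (guarded so the sketch also loads outside a browser):
if (typeof AudioContext !== 'undefined') {
  const ctx = new AudioContext();
  const depth = ctx.createGain();
  let lfo = null;
  // each note-on replaces the LFO so its wave starts from the beginning
  lfo = retriggerLfo(ctx, lfo, depth, ctx.currentTime);
  lfo = retriggerLfo(ctx, lfo, depth, ctx.currentTime + 1);
}
```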