TL;DR: Does the NEAT algorithm allow its input/output layers to also evolve activation functions, or do they only use the identity?
I'm working on a custom neural network, largely inspired by NEAT (NeuroEvolution of Augmenting Topologies).
As far as my experience and knowledge go, input neurons in most networks don't transform the values they hold - they just pass them through (the identity function). Output-layer neurons can have activation functions that are preset based on the problem the network is trying to solve, usually identity, softmax, or sigmoid.
In the NEAT algorithm, do the input/output nodes evolve their activation functions, or are they fixed?
Yes, NEAT allows activation functions to "evolve": new nodes are inserted with a random activation function (you can choose which activation functions are available).
However, they don't "evolve" in the sense of the activation function changing continuously. Rather, different nodes can have different activation functions, and existing nodes can mutate to a different one.
https://neat-python.readthedocs.io/en/latest/config_file.html#defaultgenome-section
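For example, in neat-python this is controlled in the `[DefaultGenome]` section of the config file (option names are from the linked docs; the specific values here are just illustrative):

```ini
[DefaultGenome]
# activation assigned to newly created nodes
activation_default      = sigmoid
# probability that a mutation replaces a node's activation function
activation_mutate_rate  = 0.1
# pool of functions a node's activation can mutate to
activation_options      = sigmoid tanh relu
```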
Regarding your other statement:
Actually, activation functions other than the identity are the norm, and there is a lot of theory on this topic.
The gist of it is that deeper networks can be more expressive than shallow ones, but only thanks to nonlinear activations. If you use only the identity function as the activation, a network of arbitrary depth can be rewritten as a shallow network, because composing linear layers just produces another linear layer. So a deep network with identity activations has virtually no benefit over a shallow one.
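Here's a minimal NumPy sketch of that collapse (the layer sizes are arbitrary, chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two linear layers with identity activation in between (hypothetical sizes).
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
x = rng.normal(size=3)

# "Deep" network: second layer applied to the first layer's output.
deep = W2 @ (W1 @ x + b1) + b2

# Equivalent shallow layer: fold the weights and biases together.
W, b = W2 @ W1, W2 @ b1 + b2
shallow = W @ x + b

print(np.allclose(deep, shallow))  # True: the extra depth added nothing
```

With a nonlinearity (e.g. `tanh`) between the layers, no such collapse exists, which is exactly what makes depth worthwhile.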