What exactly does the bias do: shift the activation, or control firing?


I still do not understand what the bias is and when a neuron is activated, so I have some questions.

When exactly does an artificial neuron fire? Does the neuron also fire when the result of the activation function is < 0, or does it only fire for positive values?

As far as I know, the bias is supposed to shift the activation function. But how does this work? What am I not understanding?

The standard calculation with the bias looks like this:
multiply each input by its weight, sum the products, and add the bias.
So we have the calculation: x = a * w1 + b * w2 + c * w3 + ... + bias
After that, apply the activation function. For this example we use the sigmoid function: y = 1 / (1 + e^(-x)), where x is the value from the step before.
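The two steps above can be sketched as a small function (a minimal illustration; the names `neuron`, `inputs`, `weights`, and `bias` are just placeholders, not from any particular library):

```python
import math

def neuron(inputs, weights, bias):
    # Step 1: weighted sum of the inputs, plus the bias
    x = sum(i * w for i, w in zip(inputs, weights)) + bias
    # Step 2: sigmoid activation squashes x into the range (0, 1)
    return 1 / (1 + math.exp(-x))
```

For example, `neuron([1.0, 2.0], [0.5, 0.5], 0.0)` computes x = 1.5 and returns sigmoid(1.5) ≈ 0.82.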

But if I do it this way, no shift can arise. The bias only seems to affect how strongly the neuron fires.

In other videos I saw that with a bias, negative weighted sums can still fire through the ReLU function (for example: weighted sum + bias -> -0.5 + 1 = 0.5), but there was no mention of the function being shifted.

And in yet another video/blog I saw the bias written directly inside the activation function, for example in the sigmoid: y = 1 / (1 + e^(-x + bias)).
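These two forms are actually the same thing: adding the bias to the pre-activation sum and then applying the sigmoid is exactly a horizontal shift of the sigmoid curve. A short check (plain Python, no special library assumed):

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

bias = 2.0

# Without a bias, the sigmoid crosses 0.5 at x = 0.
assert abs(sigmoid(0.0) - 0.5) < 1e-12

# With the bias added before the activation, the 0.5 crossing
# point moves to x = -bias: the whole curve is shifted left.
assert abs(sigmoid(-bias + bias) - 0.5) < 1e-12

# Every point of the curve moves by the same amount: the value the
# biased neuron produces at x is the value the plain sigmoid
# produces at x + bias.
for x in (-1.0, 0.5, 3.0):
    assert abs(sigmoid(x + bias) - sigmoid(x + bias)) < 1e-12
```

So "add the bias to the weighted sum" and "shift the activation function" are two descriptions of the same operation.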

I am now completely confused about the bias. I hope you can help me.

1 Answer

Answered by cheersmate:

Artificial neurons (used in machine learning / ANNs) only take loose inspiration from biological neurons. They don't "fire" in the way that biological neurons do. Instead, they compute a scalar output from a vector of inputs (a mapping from input numbers to an output number). The bias adjusts the "sensitivity" with respect to inputs, i.e. sets the region of the nonlinear function in which the output will lie. It is not connected to "firing" or "firing rates" as there is no firing in ANNs.
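To make "sets the region of the nonlinear function" concrete, here is a small numeric sketch: the same input activity lands in different parts of the sigmoid depending on the bias (values chosen purely for illustration):

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

weighted_sum = 0.0  # identical input activity in every case
for bias in (-5.0, 0.0, 5.0):
    out = sigmoid(weighted_sum + bias)
    print(f"bias={bias:+.1f} -> output={out:.4f}")

# A large negative bias pins the output near 0 (the flat low region),
# bias 0 puts it in the steep middle (most sensitive to inputs),
# and a large positive bias pins it near 1 (the saturated region).
```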


To get simulated neurons which do fire, you need to move from machine learning to the field of computational neuroscience, where spiking neural networks (SNNs) are used. These model biological neurons more accurately: each neuron has defined firing times, so we can compute firing rates, etc.

The confusion arises because some researchers think of ANNs as a model for SNNs (under the assumption that firing rates capture all important aspects of neural activity). Then the activity of an artificial neuron is interpreted as a "firing rate" at a given time. This interpretation is neither necessary for using or understanding ANNs, nor is there a consensus that it is justified at all.