What is the reason for adding a bias term to the computation at each node of a neural network?

The rental-bike demand prediction code I am working with has no bias term, yet it trains successfully.

The framework also offers a NoBias setting, but I don't understand what NoBias is for.

  • Answer # 1

The role of the bias is to act as a threshold for passing a signal on to the next node. That threshold can be any real number, including zero. Strictly speaking, "no bias" in the code in question does not mean the bias is absent; it means the bias is zero.

Usually the bias is determined by learning, but it can be fixed in advance if prior analysis tells you what value it should take. So the setting means that the bias is fixed at zero and excluded from learning.

Presumably, when the explanatory variables are all zero it is desirable for the output to be zero as well, and fixing the bias at zero is a neat way to express that.
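To make the point concrete, here is a minimal sketch (not the questioner's actual code) of a fully connected layer in NumPy; the `b=None` case plays the role of a NoBias setting, i.e., a bias fixed at zero:

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.normal(size=(3, 4))   # weights of a 4-in, 3-out layer
b = rng.normal(size=3)        # bias vector

def linear(x, W, b=None):
    """Fully connected layer: y = W @ x + b; b=None means zero bias."""
    y = W @ x
    return y if b is None else y + b

x_zero = np.zeros(4)
print(linear(x_zero, W))      # no bias: all-zero input gives all-zero output
print(linear(x_zero, W, b))   # with bias: the output is b itself
```

With zero bias, an all-zero input always maps to an all-zero output, which matches the interpretation above.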

  • Answer # 2

This can be understood as expressing the "ease of firing" of the neuron being modeled.

In a nerve cell, the threshold at which ion channels open and close and a spike occurs is nearly constant, because it is dominated by the physical characteristics of the channels. The resting membrane potential, on the other hand, is the result of an equilibrium between the concentration gradient and the electrical potential gradient, so it changes when ion concentrations change.
A nerve cell fires when (resting membrane potential) + (weighted sum of inputs) > (threshold), that is, when (weighted sum of inputs) > (threshold) − (resting membrane potential). The bias term for each unit in a neural network corresponds to this (threshold) − (resting membrane potential) value.

If we assume a network of cells whose resting membrane potentials are all roughly the same, the spike generation and propagation model works even without the bias term.

  • Answer # 4

    I suggest thinking of it in terms of simple linear regression:

    y = ax + b

    Even when fitting this model, don't you need a parameter corresponding to b?
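A small sketch of this point with synthetic data: fitting y = ax + b by least squares, with and without the b parameter. When the true intercept is nonzero and b is omitted, the slope estimate is distorted as it tries to absorb the missing offset.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 5.0 + rng.normal(scale=0.1, size=x.size)  # true a=2, b=5

# With intercept: least squares over the design matrix [x, 1]
A = np.column_stack([x, np.ones_like(x)])
(a_fit, b_fit), *_ = np.linalg.lstsq(A, y, rcond=None)

# Without intercept: least squares over [x] alone (forces b = 0)
(a_nob,), *_ = np.linalg.lstsq(x[:, None], y, rcond=None)

print(a_fit, b_fit)   # close to the true a = 2 and b = 5
print(a_nob)          # inflated slope, compensating for the missing b
```

The same thing happens in a neural network layer: without a bias, the weights have to compensate for any offset in the data.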