TY - GEN
T1 - Anti-Hebbian synapses as a linear equation solver
AU - Zhang, Kechen
AU - Ganis, Giorgio
AU - Sereno, Martin I.
PY - 1997/12/1
Y1 - 1997/12/1
AB - It is known that Hebbian synapses, with appropriate weight normalization, extract the first principal component of the input patterns. Anti-Hebb rules have been used in combination with Hebb rules to extract additional principal components or to generate sparse codes. Here we show that simple anti-Hebbian synapses alone can support an important computational function: solving simultaneous linear equations. During repetitive learning with a simple anti-Hebb rule, the weights onto an output unit always converge to the exact solution of the linear equations whose coefficients correspond to the input patterns and whose constant terms correspond to the biases, provided that a solution exists. If there are more equations than unknowns and no solution exists, the weights approach the values given by the Moore-Penrose generalized inverse (pseudoinverse). No explicit matrix inversion is involved, and there is no need to normalize weights. Mathematically, the anti-Hebb rule may be regarded as an iterative algorithm for learning a special case of the linear associative mapping. Since solving systems of linear equations is a basic computational problem to which many other problems are often reduced, our interpretation suggests a potentially general computational role for anti-Hebbian synapses and for a certain type of long-term depression (LTD).
UR - http://www.scopus.com/inward/record.url?scp=0030653272&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0030653272&partnerID=8YFLogxK
U2 - 10.1109/ICNN.1997.611699
DO - 10.1109/ICNN.1997.611699
M3 - Conference contribution
AN - SCOPUS:0030653272
SN - 0780341228
SN - 9780780341227
T3 - IEEE International Conference on Neural Networks - Conference Proceedings
SP - 387
EP - 389
BT - 1997 IEEE International Conference on Neural Networks, ICNN 1997
T2 - 1997 IEEE International Conference on Neural Networks, ICNN 1997
Y2 - 9 June 1997 through 12 June 1997
ER -
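
Illustrative note (not part of the original record): a minimal Python sketch of the anti-Hebbian iteration described in the abstract, under the assumption that the update takes the form dw = eta * (b_i - w . a_i) * a_i, with each equation's coefficient vector a_i treated as an input pattern and its constant term b_i as the bias. The learning rate, loop structure, and variable names are hypothetical and not taken from the paper.

    # Sketch: anti-Hebbian (delta-rule-like) iteration for a consistent linear system A w = b.
    # Assumed update: with output y = w . a_i - b_i, the anti-Hebbian change is dw = -eta * y * a_i.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(3, 3))        # coefficients = input patterns (one row per equation)
    w_true = rng.normal(size=3)
    b = A @ w_true                     # constant terms = biases (system is consistent by construction)

    w = np.zeros(3)                    # synaptic weights onto the output unit
    eta = 0.05                         # small learning rate (assumed)
    for epoch in range(20000):         # repetitive presentation of the equations
        for a_i, b_i in zip(A, b):
            y = w @ a_i - b_i          # output unit's activity relative to the bias
            w -= eta * y * a_i         # anti-Hebbian update: no matrix inversion, no weight normalization

    print("learned:", w)
    print("exact  :", np.linalg.solve(A, b))   # the two agree to several decimals

In this consistent case the weights settle on the exact solution; per the abstract, with more equations than unknowns and no exact solution, the weights instead approach the pseudoinverse (least-squares) values.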