TY - GEN
T1 - Quantization effects of Hebbian-type associative memories
AU - Chung, Pau Choo
AU - Chung, Yi Nung
AU - Tsai, Ching Tsorng
PY - 1993/1/1
Y1 - 1993/1/1
N2 - Effects of quantization strategies in Hebbian-type associative memories are explored in this paper. The quantization strategies considered include two-level quantization, three-level quantization with a cut-off threshold, and linear quantization. The two-level strategy clips positive interconnections to +1 and negative interconnections to -1. The three-level quantization uses the same strategy in turning the interconnections into +1 or -1, except that it is applied only to those interconnections whose values are larger than a cut-off threshold; interconnections within the cut-off threshold are set to zero. Results indicate that three-level quantization with a properly selected cut-off threshold gives the network higher performance than two-level quantization. The performance of a network with linear quantization is also compared with that of a network with three-level quantization. It is found that linear quantization, although it preserves more network interconnections, does not significantly enhance network performance compared with three-level quantization. Hence, it is concluded that three-level quantization with an optimal threshold is a better choice when network implementations are considered.
AB - Effects of quantization strategies in Hebbian-type associative memories are explored in this paper. The quantization strategies considered include two-level quantization, three-level quantization with a cut-off threshold, and linear quantization. The two-level strategy clips positive interconnections to +1 and negative interconnections to -1. The three-level quantization uses the same strategy in turning the interconnections into +1 or -1, except that it is applied only to those interconnections whose values are larger than a cut-off threshold; interconnections within the cut-off threshold are set to zero. Results indicate that three-level quantization with a properly selected cut-off threshold gives the network higher performance than two-level quantization. The performance of a network with linear quantization is also compared with that of a network with three-level quantization. It is found that linear quantization, although it preserves more network interconnections, does not significantly enhance network performance compared with three-level quantization. Hence, it is concluded that three-level quantization with an optimal threshold is a better choice when network implementations are considered.
UR - http://www.scopus.com/inward/record.url?scp=0027211266&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0027211266&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:0027211266
SN - 0780312007
T3 - 1993 IEEE International Conference on Neural Networks
SP - 1366
EP - 1370
BT - 1993 IEEE International Conference on Neural Networks
PB - IEEE
T2 - 1993 IEEE International Conference on Neural Networks
Y2 - 28 March 1993 through 1 April 1993
ER -