Neural networks suffer from four major issues: speed, power consumption, area overhead, and fault tolerance. In this paper we develop a systematic approach to designing a low-power, compact, fast, and reliable neural network based on a redundant residue number system (RRNS). Residue number systems have previously been applied to neural network design, but not to the CORDIC-based activation functions, including the hyperbolic tangent, logistic, and softmax functions. Consequently, the entire neural network cannot be fully self-checked, and the extra conversion operations squander the reductions in time, power, and area. In our systematic approach we propose design rules that guarantee the checking rate without sacrificing the reductions in time, area, and power consumption. In experiments on three neural networks with 24-bit fixed-point operations on the MNIST handwritten digit data set, 3, 4, and 5 moduli are employed separately to achieve balanced improvements in power saving, area reduction, speed, and reliability. Experimental results show that power, time, and area are each reduced to about one third, and that the entire network, in any combination of software and hardware, is self-checking with an aliasing rate of only 0.39% and is TMR-correctable under the single-residue fault model.
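To make the self-checking idea concrete, the following is a minimal sketch of single-residue fault detection and correction in a redundant residue number system. The moduli set [7, 11, 13] with redundant moduli [17, 19] is a small hypothetical choice for illustration only; it is not the moduli set, bit width, or checking scheme used in the paper. A value is detected as faulty when its full CRT reconstruction falls outside the legitimate dynamic range, and a single corrupted residue is corrected by dropping each residue channel in turn and keeping the reconstruction that lands back in range.

```python
from math import prod

MODULI = [7, 11, 13, 17, 19]  # 3 information + 2 redundant moduli (hypothetical)
P = 7 * 11 * 13               # legitimate dynamic range is [0, P)

def crt(residues, moduli):
    """Chinese Remainder Theorem reconstruction."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # modular inverse (Python 3.8+)
    return x % M

def encode(x):
    """Represent x by its residues modulo each channel."""
    return [x % m for m in MODULI]

def detect(residues):
    """A fault is flagged when reconstruction leaves the legitimate range."""
    return crt(residues, MODULI) >= P

def correct(residues):
    """Drop each residue in turn; reconstructions that fall back inside
    the legitimate range are candidates for the original value."""
    candidates = set()
    for i in range(len(MODULI)):
        mods = [m for j, m in enumerate(MODULI) if j != i]
        res = [r for j, r in enumerate(residues) if j != i]
        x = crt(res, mods)
        if x < P:
            candidates.add(x)
    return candidates

r = encode(123)
assert not detect(r)        # clean codeword passes the check
r[1] = 5                    # corrupt the residue on the mod-11 channel
assert detect(r)            # fault is detected
assert correct(r) == {123}  # and the original value is recovered
```

A reconstruction from a faulty residue set can occasionally land back inside the legitimate range by chance; the probability of such an undetected fault corresponds to the aliasing rate the abstract reports.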