This paper presents a vision-cognitive mobile robot for use in earthquake rescue, integrating a vision-cognitive model with image processing techniques. Most robots rely only on non-vision sensors to perceive an unknown environment. To make the mobile robot more intelligent and act more like a human, a cognitive model with self-learning ability has been developed based on Cell Assemblies (CAs) of fatiguing Leaky Integrate and Fire (fLIF) neurons. The model also integrates image processing techniques for object recognition. Its advantages include short-term and long-term memory, which allow it to imitate human thinking and decision making. The vision-cognitive mobile robot was tested on several simulated schemes in a virtual disaster-area environment. The experimental results show that the robot produces correct action commands for the different schemes. This study thus offers one solution to obstacle avoidance and sense perception for mobile robot applications.
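
To make the fLIF neuron concrete, the following is a minimal illustrative sketch of a single fatiguing Leaky Integrate and Fire neuron of the kind used in Cell Assembly models: activation leaks each step, the neuron fires when activation exceeds a fatigue-raised threshold, and fatigue grows on firing and recovers during silence. All parameter values and names here (`decay`, `threshold`, `fatigue_up`, `fatigue_down`) are assumptions for illustration, not the settings used in this paper.

```python
class FLIFNeuron:
    """Illustrative fatiguing Leaky Integrate and Fire neuron (parameters assumed)."""

    def __init__(self, decay=2.0, threshold=4.0,
                 fatigue_up=1.0, fatigue_down=0.5):
        self.decay = decay                # leak divisor applied each step
        self.base_threshold = threshold   # resting firing threshold
        self.fatigue_up = fatigue_up      # threshold rise after firing
        self.fatigue_down = fatigue_down  # threshold recovery when silent
        self.activation = 0.0
        self.fatigue = 0.0

    def step(self, net_input):
        """Integrate input with leak; fire if above the fatigued threshold."""
        # Leaky integration: previous activation decays, new input adds on.
        self.activation = self.activation / self.decay + net_input
        fired = self.activation >= self.base_threshold + self.fatigue
        if fired:
            # Firing resets activation and raises fatigue,
            # making immediate re-firing harder.
            self.activation = 0.0
            self.fatigue += self.fatigue_up
        else:
            # Fatigue recovers while the neuron is silent.
            self.fatigue = max(0.0, self.fatigue - self.fatigue_down)
        return fired


if __name__ == "__main__":
    neuron = FLIFNeuron()
    # Constant input: fatigue spaces out the spikes over time.
    spikes = [neuron.step(3.0) for _ in range(6)]
    print(spikes)  # → [False, True, False, True, False, False]
```

A Cell Assembly would connect many such neurons with learned weights so that reverberating activity persists after the stimulus is removed, giving the short-term memory behavior described above.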