Object detection plays an essential role in a wide range of measurement systems for traffic management, urban planning, defense, agriculture, and other domains. Methods based on Convolutional Neural Networks have achieved great improvements on detection tasks in natural-scene images, benefiting from their strong feature representations. However, because of the high object density, the small size of objects, and the intricate backgrounds, current methods achieve relatively low precision on aerial images. This work aims to obtain better detection performance in aerial images by designing a novel deep neural network framework called Feature Fusion Deep Networks (FFDN). The architecture incorporates a structural learning layer based on a graphical model, so that the network not only provides powerful hierarchical representations but also strengthens the spatial relationships between high-density objects. We demonstrate the great improvement of the proposed FFDN on the UAV123 data set and on another novel, challenging data set, the UAVDT benchmark. Objects that appear small, partially occluded, or partially out of view, as well as objects against dark backgrounds, can be detected accurately.
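The abstract does not specify how FFDN fuses features; as a rough, hypothetical illustration of the general feature-fusion idea it alludes to (upsampling a coarse, semantically rich feature map and concatenating it with a finer-resolution one to help detect small objects), a minimal NumPy sketch might look like the following. All function names and tensor shapes here are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def upsample_nearest(feat, factor):
    # feat: (C, H, W) -> (C, H*factor, W*factor) via nearest-neighbor repetition
    return feat.repeat(factor, axis=1).repeat(factor, axis=2)

def fuse_features(shallow, deep):
    # shallow: (C1, H, W) fine-grained, high-resolution map
    # deep:    (C2, H/k, W/k) coarse, semantically rich map
    k = shallow.shape[1] // deep.shape[1]          # spatial scale gap
    deep_up = upsample_nearest(deep, k)            # bring deep map to (C2, H, W)
    # concatenate along the channel axis -> (C1 + C2, H, W)
    return np.concatenate([shallow, deep_up], axis=0)

shallow = np.random.rand(64, 32, 32)   # hypothetical shallow-layer features
deep = np.random.rand(128, 8, 8)       # hypothetical deep-layer features
fused = fuse_features(shallow, deep)
print(fused.shape)                     # (192, 32, 32)
```

In practice such fusion is done with learned convolutions inside the network; this sketch only shows the shape bookkeeping that lets detail-preserving and semantic features coexist in one map.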
All Science Journal Classification (ASJC) codes
- Computer Science (all)
- Materials Science (all)