Domain Knowledge Powered Two-Stream Deep Network for Few-Shot SAR Vehicle Recognition
Synthetic aperture radar (SAR) target recognition faces the challenge that very little labeled data are available. Although few-shot learning methods have been developed to extract more information from a small amount of labeled data and thereby avoid overfitting, recent few-shot or limited-data SAR target recognition algorithms overlook the unique SAR imaging mechanism. In this study, a domain-knowledge-powered two-stream deep network (DKTS-N) is proposed, which incorporates SAR domain knowledge related to the azimuth angle, amplitude, and phase data of vehicles, making it a pioneering work in few-shot SAR vehicle recognition. The two-stream deep network extracts features from both the entire image and image patches, enabling more effective use of the SAR domain knowledge. To measure the structural information distance between the global and local features of vehicles, the deep Earth mover’s distance is improved to handle the features produced by the two-stream network. Considering the sensitivity of SAR vehicle recognition to the azimuth angle, a nearest neighbor classifier replaces the structured fully connected layer for K-shot classification. All experiments are conducted with the SARSIM dataset as the source task and the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset as the target task. The proposed DKTS-N achieves accuracies of 49.26% and 96.15% under the ten-way one-shot and ten-way 25-shot settings, respectively, with labeled samples randomly selected from the training set. Under the standard operating condition (SOC) and three extended operating conditions (EOCs), DKTS-N demonstrates substantial advantages in accuracy and time consumption over other few-shot learning methods on K-shot recognition tasks.
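To make the pipeline sketched in the abstract concrete, the PyTorch fragment below illustrates the three ingredients in miniature: a two-stream encoder that embeds the whole SAR chip (global stream) and its patches (local stream), an entropic Sinkhorn approximation standing in for the Earth mover’s distance between local feature sets, and a nearest-neighbor rule in place of a fully connected classification head. This is a minimal sketch under stated assumptions, not the authors’ DKTS-N implementation: the backbone, single-channel amplitude input, patch size, Sinkhorn parameters, and the alpha weighting between global and local distances are all illustrative choices, and the azimuth- and phase-related domain knowledge of the paper is not modeled here.

```python
# Illustrative sketch only -- not the authors' DKTS-N implementation.
# Assumed (not from the paper): backbone, single-channel input, patch size,
# Sinkhorn as an entropic approximation of the Earth mover's distance.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoStreamEncoder(nn.Module):
    """Global stream embeds the whole SAR chip; local stream embeds its patches."""

    def __init__(self, dim=64):
        super().__init__()
        def conv_block(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                 nn.BatchNorm2d(cout), nn.ReLU(),
                                 nn.MaxPool2d(2))
        self.backbone = nn.Sequential(conv_block(1, 32), conv_block(32, dim))

    def forward(self, x, patch=16):
        # Global feature: pooled embedding of the entire image.
        fmap = self.backbone(x)                                   # (B, D, H', W')
        global_feat = fmap.mean(dim=(2, 3))                       # (B, D)
        # Local features: embeddings of non-overlapping image patches.
        patches = F.unfold(x, kernel_size=patch, stride=patch)    # (B, patch*patch, N)
        B, _, N = patches.shape
        patches = patches.transpose(1, 2).reshape(B * N, 1, patch, patch)
        local_feat = self.backbone(patches).mean(dim=(2, 3)).reshape(B, N, -1)
        return global_feat, local_feat


def sinkhorn_emd(cost, n_iter=20, eps=0.1):
    """Entropic-OT approximation of the Earth mover's distance between two
    uniformly weighted sets of local features (cost: pairwise distance matrix)."""
    n, m = cost.shape
    mu = torch.full((n,), 1.0 / n, device=cost.device)
    nu = torch.full((m,), 1.0 / m, device=cost.device)
    K = torch.exp(-cost / eps)
    u = torch.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.t() @ u)
        u = mu / (K @ v)
    plan = torch.diag(u) @ K @ torch.diag(v)
    return (plan * cost).sum()


def nearest_neighbor_predict(query, support, support_labels, encoder, alpha=0.5):
    """Nearest-neighbor rule instead of an FC head: mix the global cosine
    distance with the local EMD-style distance and pick the closest support chip."""
    qg, ql = encoder(query.unsqueeze(0))
    scores = []
    for s in support:                                  # support could be grouped by azimuth
        sg, sl = encoder(s.unsqueeze(0))
        d_global = 1 - F.cosine_similarity(qg, sg).item()
        cost = torch.cdist(F.normalize(ql[0], dim=-1), F.normalize(sl[0], dim=-1))
        d_local = sinkhorn_emd(cost).item()
        scores.append(alpha * d_global + (1 - alpha) * d_local)
    return support_labels[int(torch.tensor(scores).argmin())]
```

In a K-shot episode, one would encode the K labeled support chips per class once and then label each query with nearest_neighbor_predict; in the paper itself the support set is additionally informed by azimuth-angle, amplitude, and phase knowledge, which this sketch omits.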