From Local to Global: Class Feature Fused Fully Convolutional Network for Hyperspectral Image Classification
Current mainstream networks for hyperspectral image (HSI) classification employ image patches as inputs for feature extraction. Spatial information extraction is therefore limited by the patch size, which prevents the networks from learning and reasoning effectively from a global perspective. Non-local networks, a common component for capturing long-range dependencies, rely on pixel-by-pixel information interaction and thus incur unaffordable computational costs and information redundancy. To address these issues, we propose a class feature fused fully convolutional network (CFF-FCN) with a local feature extraction block (LFEB) and a class feature fusion block (CFFB) that jointly exploit local and global information. The LFEB, built on dilated convolutions and a reverse loop mechanism, acquires local spectral–spatial features at multiple levels and delivers shallower-layer features for coarse classification. The CFFB calculates a global class representation to enhance pixel features, so robust global information is propagated to every pixel at low computational cost. By concatenating high-level local features with the re-integrated global features, CFF-FCN considers the full global class context and obtains a more discriminative representation. Experimental results on three real HSI data sets demonstrate that the proposed fully convolutional network outperforms multiple state-of-the-art deep-learning-based approaches, especially when the number of training samples is small.
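The following is a minimal sketch, not the authors' implementation, of the class-feature-fusion idea the abstract describes: per-class global representations are computed from a coarse classification map and then redistributed to every pixel before being concatenated with the local features. All tensor names, shapes, and the specific fusion-by-concatenation step are assumptions for illustration; the cost is linear in the number of pixels rather than quadratic as in a non-local block.

```python
# Hypothetical sketch of a class feature fusion step (assumed shapes and names).
import torch
import torch.nn.functional as F

def class_feature_fusion(features, coarse_logits):
    """features:      (B, C, H, W) pixel features from the local branch
       coarse_logits: (B, K, H, W) coarse class scores, K = number of classes
       returns:       (B, 2*C, H, W) pixel features enhanced with class context
    """
    B, C, H, W = features.shape
    K = coarse_logits.shape[1]

    # Soft class-membership weights for every pixel.
    probs = F.softmax(coarse_logits, dim=1).reshape(B, K, H * W)       # (B, K, HW)
    feats = features.reshape(B, C, H * W).permute(0, 2, 1)             # (B, HW, C)

    # Global class representation: weighted average of pixel features per class.
    class_centers = torch.bmm(probs, feats)                            # (B, K, C)
    class_centers = class_centers / (probs.sum(dim=2, keepdim=True) + 1e-6)

    # Propagate class context back to every pixel according to its class weights.
    class_context = torch.bmm(probs.permute(0, 2, 1), class_centers)   # (B, HW, C)
    class_context = class_context.permute(0, 2, 1).reshape(B, C, H, W)

    # Fuse local pixel features with the re-integrated global class features.
    return torch.cat([features, class_context], dim=1)
```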