
To date, however, no fusion tracking algorithm based on hyperspectral and RGB images has been proposed. In this paper, we propose a reliability-guided aggregation network (RANet) for hyperspectral and RGB tracking, which guides the combination of hyperspectral and RGB information according to modality reliability to improve tracking performance.
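RANet's aggregation details are not given in this excerpt, but the core idea of weighting each modality's response by an online reliability estimate can be sketched as follows. This is a minimal illustration rather than RANet itself: the function names and the use of the peak-to-sidelobe ratio (PSR) as the reliability score are assumptions.

```python
import numpy as np

def fuse_response_maps(resp_hsi, resp_rgb):
    """Fuse two single-channel tracker response maps, weighting each
    modality by a simple reliability score (peak-to-sidelobe ratio).

    Both inputs are 2-D arrays of correlation responses; the output is
    their reliability-weighted average plus the normalized weights.
    All names are illustrative, not RANet's actual API.
    """
    def psr(resp):
        # Peak-to-sidelobe ratio: a common confidence proxy for
        # correlation-filter trackers (sharp peak -> high reliability).
        peak = resp.max()
        mean, std = resp.mean(), resp.std()
        return (peak - mean) / (std + 1e-8)

    w_hsi, w_rgb = psr(resp_hsi), psr(resp_rgb)
    total = w_hsi + w_rgb + 1e-12          # guard against division by zero
    fused = (w_hsi * resp_hsi + w_rgb * resp_rgb) / total
    return fused, (w_hsi / total, w_rgb / total)
```

A sharper response map (higher PSR) thus dominates the fused result, which is the intuition behind reliability-guided aggregation: the less ambiguous modality drives the localization in each frame.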

Object tracking based on RGB images may fail when the color of the tracked object is similar to that of the background. Hyperspectral images, with their rich spectral features, can provide additional information for RGB-based trackers. The spatial-spectral features are integrated into correlation filters to perform hyperspectral tracking. To validate the proposed scheme, we collected a hyperspectral surveillance video (HSSV) dataset consisting of 70 sequences in 25 bands. Extensive experiments confirm the advantages and efficiency of the proposed FSSF for hyperspectral video tracking under challenging conditions of background clutter, low resolution, and appearance changes.
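The failure mode described above, where two materials share a color but differ spectrally, can be demonstrated numerically. The 25-band setting matches the HSSV dataset, but the Gaussian camera sensitivities and the spectra themselves are toy assumptions:

```python
import numpy as np

bands = np.linspace(400, 700, 25)          # 25 bands, as in the HSSV setting

def gaussian(mu, sigma):
    return np.exp(-((bands - mu) / sigma) ** 2)

# Hypothetical camera spectral sensitivities (toy R/G/B Gaussian curves).
csf = np.stack([gaussian(610, 50), gaussian(550, 50), gaussian(470, 50)], axis=1)

target = gaussian(550, 60)                 # toy target reflectance spectrum

# Build a metameric background: perturb the target spectrum inside the
# null space of the camera projection, so the RGB colors stay identical.
rng = np.random.default_rng(0)
v = rng.normal(size=25)
coef, *_ = np.linalg.lstsq(csf, v, rcond=None)
v -= csf @ coef                            # remove the RGB-visible component
v /= np.abs(v).max()
background = target + 0.3 * v

rgb_target, rgb_background = target @ csf, background @ csf
same_in_rgb = np.allclose(rgb_target, rgb_background)   # True by construction
spectral_gap = np.linalg.norm(target - background)      # clearly nonzero
```

Because the perturbation lies in the null space of the camera projection, the two surfaces are metamers: indistinguishable in RGB yet clearly separated in the spectral domain, which is exactly the case where an RGB tracker drifts and a hyperspectral one need not.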
However, the high dimensionality of hyperspectral images incurs high computational cost and makes discriminative feature extraction difficult. In FSSF, the real-time spatial-spectral convolution (RSSC) kernel is therefore updated in real time in the Fourier transform domain without offline training, quickly extracting discriminative spatial-spectral features.
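The RSSC formulation itself is not reproduced in this excerpt; the sketch below shows the generic mechanism it builds on, a MOSSE-style correlation filter whose closed-form numerator and denominator are running averages updated per frame in the Fourier domain, which is why no offline training stage is needed. All class and variable names are illustrative.

```python
import numpy as np

class FourierFilter:
    """MOSSE-style correlation filter updated online in the Fourier
    domain; a generic stand-in, not FSSF's actual RSSC kernel."""

    def __init__(self, lr=0.125, reg=1e-4):
        self.lr, self.reg = lr, reg
        self.A = self.B = None           # running numerator / denominator

    def update(self, patch, target_resp):
        """Closed-form per-frame update: no gradient descent, no
        offline training, just elementwise running averages."""
        F = np.fft.fft2(patch)
        G = np.fft.fft2(target_resp)     # desired response (e.g. a peak)
        A_new = G * np.conj(F)
        B_new = F * np.conj(F) + self.reg
        if self.A is None:
            self.A, self.B = A_new, B_new
        else:
            self.A = (1 - self.lr) * self.A + self.lr * A_new
            self.B = (1 - self.lr) * self.B + self.lr * B_new

    def respond(self, patch):
        """Correlate the current filter with a new patch; the peak of
        the returned map is the predicted target location."""
        F = np.fft.fft2(patch)
        return np.real(np.fft.ifft2((self.A / self.B) * F))
```

Because both the update and the response are elementwise operations after an FFT, the per-frame cost stays low even when the same scheme is applied independently across many spectral channels.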

Traditional object tracking in surveillance video based only on appearance information often fails in the presence of background clutter, low resolution, and appearance changes. Hyperspectral imaging exploits unique spectral properties as well as spatial information to improve tracking accuracy in such challenging environments. This paper presents a correlation filter object tracker based on fast spatial-spectral features (FSSF) to realize robust, real-time object tracking in hyperspectral surveillance video.

This paper investigates the problem of reconstructing hyperspectral (HS) images from single RGB images captured by commercial cameras, without paired HS/RGB supervision, aided by an $\ell_1$ gradient clipping scheme. In addition, we embed the semantic information of input RGB images to locally regularize the unsupervised learning, which is expected to encourage pixels with identical semantics to have consistent spectral signatures. Beyond quantitative experiments on two widely used datasets for HS image reconstruction from synthetic RGB images, we also evaluate our method by applying HS images recovered from real RGB images to HS-based visual tracking. Extensive results show that our method significantly outperforms state-of-the-art unsupervised methods and even exceeds the latest supervised method under some settings.

Traditional color images only depict color intensities in red, green, and blue channels, often causing object trackers to fail in challenging scenarios, e.g., background clutter and rapid changes of target appearance. Alternatively, the material information of targets contained in the many bands of hyperspectral images (HSIs) is more robust to these difficult conditions. In this paper, we conduct a comprehensive study on how material information can be utilized to boost object tracking from three aspects: dataset, material feature representation, and material-based tracking. In terms of dataset, we construct a collection of fully annotated videos containing both hyperspectral and color sequences of the same scenes. Material information is represented by a spectral-spatial histogram of multidimensional gradients, which describes the 3D local spectral-spatial structure in an HSI, and by the fractional abundances of constituent material components, which encode the underlying material distribution. These two types of features are embedded into correlation filters, yielding material-based tracking. Experimental results on the collected dataset show the potential and advantages of material-based object tracking.
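As a concrete illustration of the abundance-based feature mentioned above, fractional abundances can be estimated by nonnegative least-squares unmixing of each pixel against known material endmembers. The exact unmixing method is not specified in this excerpt; the projected-gradient solver below is a generic stand-in, and the endmember matrix is assumed to be given.

```python
import numpy as np

def fractional_abundances(pixel, endmembers, steps=2000):
    """Estimate nonnegative abundances a such that endmembers @ a ~ pixel,
    via projected gradient descent on the least-squares objective.

    `endmembers` has shape (bands, materials), one column per material.
    This is an illustrative solver, not the paper's actual method.
    """
    E = endmembers
    a = np.full(E.shape[1], 1.0 / E.shape[1])   # uniform initial mixture
    step = 1.0 / np.linalg.norm(E.T @ E, 2)     # 1 / Lipschitz constant
    for _ in range(steps):
        grad = E.T @ (E @ a - pixel)            # gradient of 0.5*||Ea - p||^2
        a = np.maximum(a - step * grad, 0.0)    # project onto a >= 0
    return a
```

Stacking these per-pixel abundance vectors over a patch gives a material-distribution map that, like the gradient histogram, can be fed into a correlation filter as just another multi-channel feature.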

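Returning to the HS-from-RGB reconstruction discussed earlier: its unsupervised training signal comes from re-projecting the recovered HS image back to RGB and comparing it with the input, so no paired ground-truth HS images are needed. The sketch below assumes a linear camera model with a known camera spectral response function; the function names and the plain L1 penalty are illustrative simplifications.

```python
import numpy as np

def reproject_rgb(hs_cube, csf):
    """Project an HS cube of shape (H, W, B) to RGB of shape (H, W, 3)
    through a linear camera spectral response function csf of shape (B, 3).
    Assumed linear imaging model; real pipelines add nonlinearities."""
    return hs_cube @ csf

def reprojection_loss(rgb_input, hs_recovered, csf):
    """Mean absolute difference between the input RGB and the RGB
    re-projected from the recovered HS cube: an unsupervised signal that
    is zero exactly when the reconstruction explains the observed RGB."""
    return np.abs(rgb_input - reproject_rgb(hs_recovered, csf)).mean()
```

In the full method this loss would be minimized over a network's output rather than a raw cube, with the semantic regularizer and the $\ell_1$ gradient clipping acting on top of it; the snippet only isolates the re-projection consistency term.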