Poster
Gait-X: Exploring X modality for Generalized Gait Recognition
Zengbin Wang · Saihui Hou · Junjie Li · Xu Liu · Chunshui Cao · Yongzhen Huang · Siye Wang · Man Zhang
Modality exploration in gait recognition has long been recognized as a core research topic, evolving from binary silhouettes to promising modalities such as parsing, mesh, and point clouds. These recent approaches agree that silhouettes are less affected by background and clothing noise, but argue that they lose too much valuable discriminative information. They seek to retain the strengths of silhouettes while extracting richer semantic or structural information through upstream estimation for better recognition. We agree with this principle but argue that upstream estimation is usually unstable and that the resulting modalities rely on pre-defined designs. Moreover, the crucial aspect of modality generalization remains underexplored. To address this, inspired by the stability and high-dimensional analysis offered by frequency decomposition, we propose Gait-X to explore how to flexibly and stably develop a gait-specific, generalized X modality from a frequency perspective. Specifically, 1) we replace upstream estimation with stable frequency decomposition and conduct a comprehensive analysis of how different frequency bands affect the modality and within-/cross-domain performance; 2) to enable flexible modality customization and mitigate the influence of noise and domain variations, we remove irrelevant low-frequency noise and suppress high-frequency domain-specific information to form our X modality; 3) to further improve generalization, we expand the representation across multiple frequencies to guide the model toward a balanced use of the whole spectrum. Extensive experiments on the CCPG, SUSTech1K, and CASIA-B datasets demonstrate superior within- and cross-domain performance.
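To make the band-filtering idea in 2) concrete, below is a minimal sketch (not the authors' code) assuming a 2D FFT as the decomposition and a single frame as input; the function name make_x_modality, the cutoff radii r_low/r_high, and the suppression factor alpha are illustrative assumptions, and the paper's actual transform and band boundaries may differ.

```python
# Hypothetical sketch of the frequency-band filtering behind the X modality:
# decompose a frame with a 2D FFT, drop a low-frequency band, attenuate the
# high-frequency band, and reconstruct. All cutoffs are made-up placeholders.
import numpy as np

def make_x_modality(frame, r_low=0.05, r_high=0.5, alpha=0.3):
    """Frequency-filter one (H, W) frame with values in [0, 1]."""
    h, w = frame.shape
    spectrum = np.fft.fftshift(np.fft.fft2(frame))  # DC term moved to center

    # Normalized radial distance of each frequency bin from the DC center.
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))

    # Remove the lowest band entirely, keep the mid band, damp the high band.
    mask = np.ones_like(radius)
    mask[radius < r_low] = 0.0    # remove low-frequency "noise" band
    mask[radius > r_high] = alpha # suppress high-frequency, domain-specific detail

    return np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real

# Example: filter a toy 64x44 silhouette-sized frame.
frame = (np.random.rand(64, 44) > 0.5).astype(np.float32)
x_frame = make_x_modality(frame)
print(x_frame.shape)  # (64, 44)
```

A deterministic frequency-domain mask of this kind is stable by construction, in contrast to upstream estimators (parsing, mesh, point clouds) whose failures propagate into the derived modality.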