
Poster

Beyond the Limits: Overcoming Negative Correlation of Activation-Based Training-Free NAS

Haidong Kang · Lianbo Ma · Pengjun Chen · Guo Yu · Xingwei Wang · Min Huang


Abstract:

Training-free Neural Architecture Search (NAS) has emerged as an efficient way to discover high-performing lightweight models with zero-cost proxies (e.g., activation-based proxies (AZP)). In this paper, we observe a new \textit{negative correlation phenomenon}: the correlation of AZP scores with architecture performance dramatically decreases, becoming negative, as the number of convolutions increases, significantly degrading the prediction performance of AZP on target architectures. No existing work focuses on this negative correlation or its underlying mechanism. To address this, through a deep analysis of the architectural characteristics scored by AZP, we propose a series of AZP design principles and reveal a potential reason for the above phenomenon: \textit{high non-linearity dramatically degrades the magnitude of the AZP score}. These findings show that existing AZP designs do not obey the proposed principles. Finally, grounded in these insights, we propose a simple yet efficient \underline{N}egative \underline{C}orrelation-\underline{D}efied (\textbf{NCD}) method, which utilizes stochastic activation masking (SAM) and non-linear rescaling (NIR) to effectively eliminate the negative correlation of AZP and significantly improve performance. Extensive experimental results validate the effectiveness and efficiency of our method, which outperforms state-of-the-art methods on 12 mainstream search spaces across 4 real-world tasks.
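The abstract does not specify how the AZP score or stochastic activation masking (SAM) is computed. Below is a minimal, illustrative sketch assuming a NASWOT-style proxy that scores a mini-batch by the diversity of post-ReLU binary activation patterns, with SAM approximated as random unit dropping before scoring; all function names and the masking scheme are hypothetical, not the paper's actual method.

```python
import numpy as np

def azp_score(activations):
    """Toy activation-based zero-cost proxy (AZP), NASWOT-style:
    score a network by the diversity of binary post-ReLU activation
    patterns over a mini-batch. `activations` has shape (batch, units)."""
    codes = (activations > 0).astype(np.float64)        # binary ReLU patterns
    # Pattern-agreement kernel: entry (i, j) counts units where
    # samples i and j have the same on/off pattern.
    k = codes @ codes.T + (1.0 - codes) @ (1.0 - codes).T
    _, logdet = np.linalg.slogdet(k)                    # diversity measure
    return logdet

def azp_score_masked(activations, keep_prob=0.8, rng=None):
    """Hypothetical stochastic activation masking (SAM): randomly keep
    a fraction of units before scoring, illustrating how masking could
    counter score collapse under deep stacks of non-linearities."""
    rng = np.random.default_rng(0) if rng is None else rng
    mask = rng.random(activations.shape[1]) < keep_prob
    return azp_score(activations[:, mask])

# Usage on a fake mini-batch of activations:
rng = np.random.default_rng(42)
acts = rng.standard_normal((8, 64))
base = azp_score(acts)
masked = azp_score_masked(acts, keep_prob=0.8, rng=rng)
```

A higher score indicates more distinct activation patterns across the batch, which training-free NAS uses as a cheap predictor of trained accuracy; this sketch only illustrates that scoring shape, not the NCD method itself.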
