Workshop

Representation Learning with Very Limited Resources: When Data, Modalities, Labels, and Computing Resources are Scarce

Hirokatsu Kataoka, Yuki M. Asano, Iro Laina, Rio Yokota, Nakamasa Inoue, Rintaro Yanagi, Partha Das, Connor Anderson, Ryousuke Yamada, Daichi Otsuka, Yoshihiro Fukuhara

306 A

Sun 19 Oct 4 p.m. PDT — 9 p.m. PDT

Modern vision and multimodal models depend on massive datasets and heavy compute, magnifying costs, energy use, bias, and copyright and privacy risks. The “DeepSeek shock” of January 2025 spotlighted the urgency of learning powerful representations under tight resource limits. Now in its third edition, our workshop continues to explore strategies for robust representation learning when data, labels, modalities, parameters, or compute are scarce. We focus on techniques that squeeze maximum performance from minimal resources, such as synthetic and distilled data, self-supervision, transfer learning, sparsity, and low-rank adaptation.
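Of the techniques listed, low-rank adaptation admits a compact illustration. The sketch below is a minimal NumPy example with hypothetical dimensions (a frozen 64x64 layer, adapter rank 4); it shows the core idea of updating a pretrained weight matrix through a low-rank factor pair, so that far fewer parameters need training than in full fine-tuning. It is an illustrative sketch of the general technique, not the method of any particular workshop paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen pretrained layer weight (d_out x d_in): never updated.
d_out, d_in, rank = 64, 64, 4
W = rng.standard_normal((d_out, d_in))

# Low-rank adapter factors: only A and B would receive gradients.
# B starts at zero so the adapted layer initially matches the frozen one.
A = rng.standard_normal((rank, d_in)) * 0.01
B = np.zeros((d_out, rank))

def adapted_forward(x):
    # y = W x + B (A x): the update B @ A has rank <= 4, so training touches
    # rank * (d_in + d_out) = 512 parameters instead of d_out * d_in = 4096.
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
# With B zero-initialized, the adapter is a no-op at the start of training.
assert np.allclose(adapted_forward(x), W @ x)
```

At rank 4 the adapter holds 512 trainable values against 4096 in the frozen matrix, an eighth of the cost; the gap widens as layers grow, which is why low-rank adaptation suits the compute-scarce settings the workshop targets.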