Poster
DAViD: Data-efficient and Accurate Vision Models from Synthetic Data
Fatemeh Saleh · Sadegh Aliakbarian · Charlie Hewitt · Lohit Petikam · Xiao-Xian · Antonio Criminisi · Thomas J. Cashman · Tadas Baltrusaitis
The state of the art in human-centric computer vision achieves high accuracy and robustness across a diverse range of tasks. The most effective models in this domain have billions of parameters and thus require extremely large datasets, expensive training regimes, and compute-intensive inference. In this paper, we demonstrate that it is possible to train models on much smaller but high-fidelity synthetic datasets, with no loss in accuracy and with higher efficiency. Using synthetic training data provides us with excellent levels of detail and perfect labels, while offering strong guarantees for data provenance, usage rights, and user consent. Procedural data synthesis also gives us explicit control over data diversity, which we can use to address unfairness in the models we train. Extensive quantitative assessment on real input images demonstrates the accuracy of our models on three dense prediction tasks: depth estimation, surface normal estimation, and soft foreground segmentation. Our models require only a fraction of the cost of training and inference when compared with foundation models of similar accuracy. We release our annotated synthetic dataset, SynthHuman, as well as our models, upon publication.