ICCV 2025 Keynotes

Tue 21 Oct 1:15 p.m. PDT

Black holes are cosmic objects so small and dense that nothing, not even light, can escape their gravitational pull. Until recently, no one had ever seen what a black hole actually looked like. Einstein's theories predict that a distant observer should see a ring of light encircling the black hole, which forms when radiation emitted by infalling hot gas is lensed by the extreme gravity near the event horizon. The Event Horizon Telescope (EHT) is a global array of radio dishes, linked together by a network of atomic clocks to form an Earth-sized virtual telescope that can resolve the nearest supermassive black holes, where this ring feature may be measured. On April 10, 2019, the EHT project reported success: we have imaged a black hole and seen the predicted strong gravitational lensing. In 2022, our team again saw this phenomenon toward the supermassive black hole at the center of our Milky Way galaxy. This talk will cover the background of the project, the technique, and the imaging strategies employed. Expansion of the global array to a next-generation EHT, enabling the capture of multi-color movies of black holes, will be discussed.
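The scale argument behind an Earth-sized virtual telescope reduces to the diffraction limit, θ ≈ λ/D: angular resolution improves with shorter observing wavelengths and longer baselines. As a back-of-the-envelope sketch (the ~1.3 mm wavelength and Earth-diameter baseline are published EHT figures; the function and numbers below are purely illustrative, not EHT code):

```python
import math

def diffraction_limit_uas(wavelength_m: float, baseline_m: float) -> float:
    """Diffraction-limited angular resolution, theta ~ lambda / D, in microarcseconds."""
    theta_rad = wavelength_m / baseline_m
    return math.degrees(theta_rad) * 3600 * 1e6  # rad -> deg -> arcsec -> microarcsec

# EHT observes at ~1.3 mm; its longest baselines approach Earth's diameter (~12,742 km).
theta = diffraction_limit_uas(1.3e-3, 1.2742e7)
print(f"Earth-sized array at 1.3 mm: ~{theta:.0f} microarcseconds")
# ~21 microarcseconds -- comparable to the ~40 microarcsecond ring reported for M87*
# in 2019, which is why only a planet-scale virtual telescope can resolve it.
```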


Sheperd Doeleman

Shep Doeleman is Founding Director of the Event Horizon Telescope (EHT) project and led the international team that made the first image of a black hole. He received his bachelor's degree from Reed College and a PhD in astrophysics from MIT, and spent a year in Antarctica conducting space-science experiments, where he got hooked on doing research in challenging circumstances. After serving as assistant director of MIT’s Haystack Observatory and receiving a Guggenheim Fellowship in 2012, he moved to the Harvard-Smithsonian Center for Astrophysics. There he co-founded the Black Hole Initiative – the first center dedicated to the interdisciplinary study of black holes – which is supported by the John Templeton Foundation. He now leads the next-generation EHT (ngEHT), which has a goal of making movies of black holes to answer the next set of big questions.


Wed 22 Oct 12:30 p.m. PDT

This talk tells the story of virtual unwrapping, conceived during the rise of digital libraries, computer vision, and large-scale computing, and now realized on some of the most difficult and iconic material in the world, the Herculaneum Scrolls, as a result of the recent phenomena of big data and machine learning. Virtual unwrapping is a non-invasive restoration pathway for damaged written material, allowing texts to be read from objects that are too damaged even to be opened. The Herculaneum papyrus scrolls, buried and carbonized by the eruption of Mount Vesuvius in 79 CE and then excavated in the 18th century, are original, classical texts from the shelves of the only library to have survived from antiquity. The 250-year history of science and technology applied to the challenge of opening and then reading them has created a fragmentary, damaged window into their literary and philosophical secrets. In 1999, with more than 400 scrolls still unopened, methods for physical unwrapping were permanently halted. The intact scrolls present an enigmatic challenge: preserved by the fury of Vesuvius, yet still lost. Using a non-invasive imaging approach, we have now shown how to recover their texts, rendering them "unlost." The path we have forged uses high-energy physics, artificial intelligence, and the collective power of a global, scientific community inspired by prizes, collaborative generosity, and the common goal of shared glory: reading original classical texts for the first time in 2000 years.
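As a rough illustration of the geometry involved (not the EduceLab pipeline itself, which segments 3-D X-ray CT volumes and applies learned ink detection), here is a minimal NumPy sketch that "unwraps" a synthetic 2-D slice of a rolled sheet: treat the sheet as a known spiral, sample intensities along it, and lay the samples out flat. All names and parameters are invented for illustration.

```python
import numpy as np

# Toy "virtual unwrapping" on a synthetic 2-D CT slice:
# 1) model the rolled sheet as an Archimedean spiral (segmentation stand-in),
# 2) sample volume intensity along the sheet (texturing),
# 3) lay samples out by path order (flattening).

def make_slice(n=256, turns=6):
    """Synthetic slice: bright spiral 'papyrus' with darker periodic 'ink' patches."""
    img = np.zeros((n, n))
    t = np.linspace(0.1, 2 * np.pi * turns, 20000)
    r = 2 + t * (n / 2 - 4) / (2 * np.pi * turns)   # radius grows with angle
    x = (n // 2 + r * np.cos(t)).astype(int)
    y = (n // 2 + r * np.sin(t)).astype(int)
    img[y, x] = 1.0
    ink = (t % 7.0) < 0.5            # periodic 'ink' along the sheet
    img[y[ink], x[ink]] = 0.5        # ink slightly attenuates the signal
    return img, x, y

def unwrap(img, x, y):
    """Sample intensities along the (known) spiral path -> a flat 1-D 'page'."""
    return img[y, x]

slice_img, xs, ys = make_slice()
page = unwrap(slice_img, xs, ys)
print("flattened samples:", page.shape, "ink fraction:", np.mean(page == 0.5).round(3))
```

The hard parts of the real problem are exactly what this toy assumes away: finding the tightly wound, crushed surface inside the volume, and detecting carbon ink that is nearly invisible in X-ray contrast, which is where the machine learning enters.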


Brent Seales

Dr. W. Brent Seales is the Stanley and Karen Pigman Chair of Heritage Science and Professor of Computer Science at the University of Kentucky. He earned a Ph.D. in Computer Science at the University of Wisconsin-Madison and has held research positions at INRIA Sophia Antipolis, UNC Chapel Hill, Google (Paris), and the Getty Conservation Institute. The Heritage Science research lab (EduceLab) founded by Seales at the University of Kentucky applies techniques in machine learning and data science to the digital restoration of damaged materials. The research program is funded by the National Science Foundation, the National Endowment for the Humanities, the Arts and Humanities Research Council of Great Britain, the Andrew W. Mellon Foundation, and Google. Seales is a co-founder of the Vesuvius Challenge, an international contest formed around the goal of the virtual unwrapping of the Herculaneum scrolls. He continues to work with challenging, damaged material (Herculaneum Scrolls, Dead Sea Scrolls), with notable successes in the scroll from En-Gedi (Leviticus), the Morgan MS M.910 (The Acts of the Apostles), and PHerc.Paris.3 and 4 (Philodemus / Epicureanism). The recovery of readable text from still-unopened material has been hailed worldwide as an astonishing achievement fueled by open scholarship, interdisciplinary collaboration, and extraordinary generosity of leadership.

Thu 23 Oct 12:30 p.m. PDT

Much of the information in the world is latent, not revealed without some action by the perceiver. What we see, for example, depends on our posture, on where we turn our heads and eyes, what we do with our hands, and where and how we move. In human infants and toddlers, the tight tie between momentary behavior and the momentary properties of the visual input leads to highly biased training data at multiple levels: edge statistics, mid-level properties, similarity distributions, semantic-level properties, and temporal properties. I will present findings from our analyses of the visual statistics of infant egocentric images (collected at the scale of daily life in the home) and argue that the quality of the training data is a key factor in the efficient visual learning of infants and toddlers. When the efficiency of human learning exceeds current understanding of learning mechanisms, theorists often posit intrinsic “inductive biases” in the learning machinery that constrain learning outcomes, enabling faster and more certain learning from complex, variable, and noisy training data. The visual statistics generated by infants and toddlers interacting with their everyday world reveal intrinsic constraints that directly bias not the inferences learned from noisy training data, but the training data itself. The findings provide insights into potential principles for designing training data that may support efficient learning even by machines whose learning mechanisms are unlike those of humans.
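As one concrete example of the kind of measurement the abstract mentions, here is a minimal NumPy sketch computing an edge-orientation histogram for a single frame. The 3x3 Sobel kernels are standard; the edge threshold and bin count are arbitrary choices of this sketch, and the real analyses run over large corpora of infant head-camera video rather than one toy image.

```python
import numpy as np

def edge_orientation_histogram(img: np.ndarray, n_bins: int = 8) -> np.ndarray:
    """Normalized histogram of local edge orientations -- one coarse 'visual
    statistic' of the kind summarized over egocentric image corpora."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # horizontal Sobel
    ky = kx.T                                                    # vertical Sobel
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    for i in range(3):                      # explicit 3x3 filtering, no SciPy needed
        for j in range(3):
            patch = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    mag = np.hypot(gx, gy)
    theta = np.arctan2(gy, gx) % np.pi      # orientation folded into [0, pi)
    strong = mag > mag.mean() + mag.std()   # arbitrary edge threshold
    hist, _ = np.histogram(theta[strong], bins=n_bins, range=(0, np.pi))
    return hist / max(hist.sum(), 1)

# Toy frame of vertical stripes: gradients point horizontally, so the
# histogram mass concentrates in the bin around theta = 0.
frame = np.tile(np.repeat([0.0, 1.0, 0.0, 1.0], 16), (64, 1))
print(edge_orientation_histogram(frame))
```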


Linda B. Smith

Linda B. Smith, Distinguished Professor at Indiana University Bloomington, is an internationally recognized leader in cognitive science and cognitive development. Taking a complex-systems perspective, she seeks to understand the interdependencies among perceptual, motor, and cognitive developments during the first three years of post-natal life. Using wearable sensors, including head-mounted cameras, she studies how the young learner’s own behavior creates the statistical structure of the learning environment, with a current focus on developmentally changing visual statistics at the scale of everyday life and their role in motor, perceptual, and language development. The work, extended through collaborations, has led to new insights in artificial intelligence and education. Smith received her PhD from the University of Pennsylvania in 1977 and immediately joined the faculty at Indiana University. Her work has been continuously funded by the National Science Foundation and/or the National Institutes of Health since 1978. She won the David E. Rumelhart Prize for Theoretical Contributions to Cognitive Science, the American Psychological Association Award for Distinguished Scientific Contributions, the William James Fellow Award from the Association for Psychological Science, the Norman Anderson Lifetime Achievement Award, and the Koffka Medal. She is an elected member of both the National Academy of Sciences and the American Academy of Arts and Sciences.