Keynote
The efficiency of learner generated experiences
Linda B. Smith
Much of the information in the world is latent, not revealed without some action by the perceiver. What we see, for example, depends on our posture, on where we turn our heads and eyes, on what we do with our hands, and on where and how we move. In human infants and toddlers, the tight tie between momentary behavior and the momentary properties of the visual input leads to highly biased training data at multiple levels: edge statistics, mid-level properties, similarity distributions, semantic-level properties, and temporal properties. I will present findings from our analyses of the visual statistics of infant egocentric images (collected at the scale of daily life in the home) and argue that the quality of the training data is a key factor in the efficient visual learning of infants and toddlers. When the efficiency of human learning exceeds current understanding of learning mechanisms, theorists often posit intrinsic “inductive biases” in the learning machinery that constrain learning outcomes, enabling faster and more certain learning from complex, variable, and noisy training data. The visual statistics generated by infants and toddlers interacting with their everyday world reveal intrinsic constraints that directly bias not the learned inferences drawn from noisy training data, but the training data itself. The findings offer insights into potential principles for designing training data that may support efficient learning even by machines whose learning mechanisms are unlike those of humans.