Human perceptual development commences with limited sensory capabilities, which mature gradually into robust proficiencies. Studies of late-sighted children, who skip these initial stages of development, as well as simulations with deep neural networks, indicate that these early degradations serve as scaffolds for development rather than constituting hurdles; conversely, dispensing with these degradations compromises later development. These findings inform our understanding of typical and atypical development and provide inspiration for the design of more robust computational model systems.
Newborns start life with poor visual acuity. Data from late-sighted individuals and computational simulations suggest that such early acuity limitations are adaptive: they may help instantiate extended spatial integration mechanisms, which are key for robust performance in visual tasks such as face recognition.
Akin to visual acuity, color sensitivity is initially poor in newborns. Experiments with late-sighted children and computational models suggest that these initial degradations have functional significance and could underlie our remarkable resilience to the color variations encountered later in life.
Considering visual acuity and color sensitivity together, training deep networks with a correspondingly joint progression suggests that the temporal confluence of spatial-frequency and color-sensitivity development also significantly shapes response properties characteristic of the division into parvocellular and magnocellular systems.
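To make the simulation setup concrete, below is a minimal sketch, assuming PyTorch/torchvision, of a training-time transform in which Gaussian blur (simulating low acuity) and desaturation (simulating poor color sensitivity) relax jointly as training proceeds. The class name, linear schedule, and parameter values are illustrative assumptions, not taken from the studies summarized above.

```python
from torchvision import transforms

class DevelopmentalTransform:
    """Simulates a newborn-to-adult input progression: heavy blur
    (low acuity) and desaturation (poor color sensitivity) that
    jointly relax over training epochs.

    Illustrative sketch; schedule constants are assumptions.
    """

    def __init__(self, total_epochs: int, max_sigma: float = 4.0):
        self.total_epochs = total_epochs
        self.max_sigma = max_sigma
        self.epoch = 0  # updated externally once per epoch

    def set_epoch(self, epoch: int) -> None:
        self.epoch = epoch

    def __call__(self, img):
        # Maturity runs from 0 (newborn-like) to 1 (adult-like).
        maturity = min(self.epoch / max(self.total_epochs - 1, 1), 1.0)

        # Acuity: blur strength decreases linearly with maturity.
        sigma = self.max_sigma * (1.0 - maturity)
        if sigma > 1e-3:
            img = transforms.GaussianBlur(kernel_size=21, sigma=sigma)(img)

        # Color sensitivity: saturation factor 0 yields grayscale,
        # 1 yields the original colors.
        img = transforms.functional.adjust_saturation(img, maturity)

        return transforms.functional.to_tensor(img)
```

In a training loop, one would call set_epoch at the start of each epoch; an ablation in which maturity is fixed at 1 from the outset corresponds to "dispensing with" the initial degradations.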
A human fetus is able to register environmental sounds, but this in-utero experience is restricted to the low-frequency components of auditory signals. Computational simulations suggest that such inputs yield temporally extended receptive fields, which are critical for tasks such as emotion recognition.
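To give a concrete sense of the input manipulation involved, here is a minimal sketch, assuming SciPy, of low-pass filtering a waveform to approximate the in-utero listening condition. The 400 Hz cutoff and filter order are illustrative assumptions, not values taken from the work above.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def simulate_prenatal_audio(signal: np.ndarray,
                            sample_rate: int,
                            cutoff_hz: float = 400.0) -> np.ndarray:
    """Low-pass filter a waveform so that only low-frequency
    components remain, roughly approximating the attenuated
    sounds a fetus receives in utero.

    Illustrative sketch; the cutoff frequency is an assumption.
    """
    # 4th-order Butterworth low-pass filter in second-order sections.
    sos = butter(4, cutoff_hz, btype="lowpass", fs=sample_rate,
                 output="sos")
    # Zero-phase filtering avoids introducing a temporal shift.
    return sosfiltfilt(sos, signal)

# Example: filter one second of white noise sampled at 16 kHz.
rng = np.random.default_rng(0)
noise = rng.standard_normal(16_000)
muffled = simulate_prenatal_audio(noise, sample_rate=16_000)
```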
Here, we review the ‘adaptive initial degradation’ hypothesis across visual and auditory domains. We propose that early perceptual limitations may be akin to the flapping of a butterfly’s wings, setting up small eddies that manifest in due time as significant salutary effects on later perceptual skills.
The processing of temporal information is crucial for making sense of the dynamic sensory environment we live in. Throughout my PhD, I have studied long-lasting visual temporal integration in typically developed adults. Extending this work, I now investigate how temporal regularities are extracted from the environment more broadly: in typical development, in congenitally blind children who gain sight later in life, and in autistic individuals. Integrating these perspectives has led to a broader theoretical framework highlighting temporal processing as foundational to perceptual organization.
In the sequential metacontrast paradigm, visual information is mandatorily integrated along motion trajectories for up to 450 ms. Here, we find that the extent of integration is determined by absolute time rather than by the number of elements presented, and can be further expanded by increases in the overall processing load.
Congenitally blind individuals who gained sight late in childhood were found to exhibit robust temporal order judgment capabilities several years after surgery, though not immediately. This work highlights significant neural plasticity for acquiring key temporal proficiencies in late childhood.
Recent predictive processing accounts of autism suggest difficulties with temporal prediction and processing. Using several behavioral tasks with a large cohort of autistic adults, we investigate their ability to detect temporal regularities in sensory streams as well as to predict and react to rhythmic events.
How does the developing nervous system extract meaning from complex sensory inputs across different modalities? Drawing from neuroscience, psychology, and computer science, we propose temporal regularity detection as a fundamental organizing principle, rendering perceptual organization tractable.