We previously showed that the flash-lag effect could be explained as a spatial bias caused by motion signals rather than by time delays (Eagleman and Sejnowski 2000). We analyzed several other motion illusions and showed that spatial bias in localization judgments gives a unified explanation for the flash-lag, flash-drag, flash-jump and Frohlich illusions (Eagleman and Sejnowski 2007). In another study, we showed that early cross-modal interactions between the visual and auditory cortices, assessed with event-related potentials, underlie a sound-induced visual illusion (Mishra, Fellous et al. 2006). These studies point toward a window of 80 ms within which interactions between signals in the cortex can influence perceptual and timing judgments. Spatial interactions within the visual cortex can also alter perceptual judgments: we developed a cortical model in which the tilt illusion is accounted for by changes in a system of divisive gain normalization (Schwartz, Sejnowski et al. 2009) whose function is to create a generative model of natural scene statistics (Schwartz, Sejnowski et al. 2006).
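The core mechanism of divisive gain normalization can be sketched in a few lines. This is a generic textbook form, not the published model: the exponent, the semisaturation constant, and uniform pooling over the whole population are illustrative assumptions.

```python
import numpy as np

def divisive_normalization(drive, sigma=1.0, n=2.0):
    """Each unit's response: its driven input raised to a power, divided
    by a normalization pool over the population. Parameter values are
    illustrative, not fits to the tilt-illusion data."""
    d = drive ** n
    return d / (sigma ** n + d.sum())

center_alone = divisive_normalization(np.array([2.0]))[0]
center_with_surround = divisive_normalization(np.array([2.0, 2.0]))[0]
# adding surround drive enlarges the normalization pool and suppresses
# the center response; contextual suppression of this kind is the sort
# of interaction that can bias perceived tilt
```

The key property is that a unit's gain depends on the activity of its neighbors, so context changes the response to an unchanged stimulus.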
Prior exposure to an image improves subsequent responses and is accompanied by reduced neural activity, but whether this suppression of neuronal activity with priming is responsible for the improvement in perception is unclear. In a model of visual cortex, we showed that perceptual priming could be explained by representation sharpening, in which connection strengths between neurons firing at high rates were increased and connection strengths to neurons firing at low and moderate rates were decreased (Moldakarimov, Bazhenov et al. 2010). This led to decreased interference between representations in higher visual areas and faster recognition. The model explained a wide range of psychophysical and physiological data observed in priming experiments, including antipriming phenomena and a decrease of power in the gamma band, and it produced sparse cortical codes for representing objects. Odor learning in insects may also depend on learning mechanisms that produce a sparse output representation. Building on previous models of the insect antennal lobe (Bazhenov, Stopfer et al. 2001; Bazhenov, Stopfer et al. 2001), we showed that activity-dependent plasticity at synapses from the antennal lobe drives the observed specificity, reliability, and expected persistence of odor representations in the mushroom bodies (Finelli, Haney et al. 2008).
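The sharpening rule can be sketched as a one-step weight update. This is a minimal sketch, not the published network: the firing-rate threshold, the learning rate, and the multiplicative form of the update are illustrative assumptions.

```python
import numpy as np

def sharpen(W, rates, high=0.8, eta=0.05):
    """One step of representation sharpening: strengthen connections onto
    neurons firing at a high rate, weaken connections onto neurons firing
    at low or moderate rates. Threshold and learning rate are illustrative,
    not values from the model."""
    gain = np.where(rates > high, 1.0 + eta, 1.0 - eta)
    return W * gain[:, None]   # scale each neuron's incoming weights

rng = np.random.default_rng(0)
W = rng.random((4, 4))                    # random initial weights
rates = np.array([0.9, 0.2, 0.95, 0.4])   # two neurons fire strongly
W_sharp = sharpen(W, rates)
# repeated exposure concentrates weight on the strongly driven neurons,
# sparsifying the representation
```

Iterating this update suppresses weakly driven neurons while a small high-rate subset retains its input, which is the sense in which priming yields a sharper, sparser code.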
Learning to associate visual features with probabilistic rewards involves areas of the visual system interacting with other areas that predict future reward. We asked subjects to choose between visual features to decide how to categorize an object and found that they chose features that optimize the information gain (Nelson, McKenzie et al. 2010). In an fMRI experiment, presentation of the more useful feature produced higher activity than the less useful feature in the ventral striatum (nucleus accumbens), the amygdala/hippocampus, and the cerebellar vermis (Filimon 2011). In a related probabilistic reward experiment, we showed that changes in the pattern of event-related potentials over the frontal cortex were consistent with a reward prediction error (Peterson, Lotz et al. 2011); Parkinson's patients off medication had impaired reversal learning, suggesting that the dopamine reward system was involved.
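The information-gain criterion has a standard definition: the expected reduction in category uncertainty from observing a feature. A minimal sketch, with made-up probabilities for a two-category, binary-feature case:

```python
import math

def entropy(p):
    # Shannon entropy in bits of a discrete distribution
    return -sum(x * math.log2(x) for x in p if x > 0)

def info_gain(prior, likelihoods):
    """Expected information gain from viewing one feature.
    prior[c] = P(category c); likelihoods[f][c] = P(feature value f | c)."""
    expected_posterior_entropy = 0.0
    for lik in likelihoods:
        p_f = sum(l * p for l, p in zip(lik, prior))          # P(value f)
        posterior = [l * p / p_f for l, p in zip(lik, prior)]  # P(c | f)
        expected_posterior_entropy += p_f * entropy(posterior)
    return entropy(prior) - expected_posterior_entropy

# a diagnostic feature yields far more gain than a weakly diagnostic one
prior = [0.5, 0.5]
useful = info_gain(prior, [[0.9, 0.1], [0.1, 0.9]])
useless = info_gain(prior, [[0.55, 0.45], [0.45, 0.55]])
```

An information-gain-optimal chooser would query the first feature, which is the pattern of choices the subjects showed.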
Bottom-up processing of visual input is combined with top-down control in visual search tasks. We have developed a new search task in which observers quickly learn to find hidden targets on a blank screen; the target location is chosen on each trial from a Gaussian distribution (Chukoskie 2005). We showed that asymptotic subject performance matches the theoretical optimum for this task. This new task allows us to examine top-down signals in the brain without interference from bottom-up visual processing. We are developing a temporal-difference reinforcement learning model to compare with human performance and to predict where in the brain to look for the top-down signals and how they change during learning. We plan to study activity in the human cortex during this task with fMRI. The task is also rapidly acquired by monkeys, and we will collaborate with Tom Albright at the Salk Institute to record from neurons in the parietal cortex of monkeys. Finally, we are collaborating with Andrea Chiba at UCSD to develop a rodent version of this search task (Alexander 2010).
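Since the model is still in development, the following is only an illustrative sketch of the task structure: a one-dimensional screen, a hidden Gaussian target distribution, and an epsilon-greedy fixation policy with a simple value update driven by reward prediction error. The discretization, learning rate, reward window, and exploration rate are all assumptions, not model parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pos = 50                    # discretized screen positions (illustrative)
target_mean, target_sd = 30, 3  # hidden Gaussian over target locations
V = np.zeros(n_pos)           # learned value of fixating each position
alpha, eps = 0.1, 0.2         # learning rate, exploration rate

for trial in range(2000):
    # epsilon-greedy choice of where to look on this trial
    pos = rng.integers(n_pos) if rng.random() < eps else int(np.argmax(V))
    target = int(np.clip(rng.normal(target_mean, target_sd), 0, n_pos - 1))
    reward = 1.0 if abs(pos - target) <= 2 else 0.0
    V[pos] += alpha * (reward - V[pos])   # prediction-error-driven update

# after learning, the most valued fixation position lies near the
# mean of the hidden target distribution
```

The learned value map concentrates near the Gaussian's mean, mirroring how observers' fixations converge on the hidden distribution.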
When there is a rotation of the visual field relative to the body, the initial pointing error of the arm is compensated over time by adaptation, which is restricted to a narrow range of directions around the training direction. We developed a population-coding model that updated the weights between narrow Gaussian-tuned visual units and motor units on each trial. The model reproduced experimental trial-by-trial learning curves for rotation adaptation and the generalization function measured postadaptation, suggesting that rotation adaptation occurs at synapses between neurons in posterior parietal cortex and motor cortex driven by a prediction error computed by the cerebellum (Tanaka, Sejnowski et al. 2009). We also developed a multi-rate model of adaptation that has a fast, trial-by-trial component and a slower component that matched the psychophysical data (Tanaka in press).
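The essentials of the population-coding account can be sketched as follows. This is a simplified illustration, not the published model: the tuning width, learning rate, and single linear readout are assumptions chosen to show how training at one direction yields narrow generalization.

```python
import numpy as np

def gaussian_tuning(directions, preferred, sigma=15.0):
    # narrow Gaussian tuning over movement direction (degrees), wrap-around
    d = np.abs(directions[:, None] - preferred[None, :])
    d = np.minimum(d, 360.0 - d)
    return np.exp(-d**2 / (2 * sigma**2))

preferred = np.arange(0.0, 360.0, 10.0)  # preferred directions of visual units
W = np.zeros(len(preferred))             # weights onto the motor readout
rotation, alpha = 30.0, 0.2              # imposed rotation (deg), learning rate
train = np.array([0.0])                  # single training direction

for trial in range(200):
    r = gaussian_tuning(train, preferred)[0]
    error = rotation - r @ W             # prediction error driving learning
    W += alpha * error * r / (r @ r)     # trial-by-trial weight update

# generalization function: compensation decays away from the trained direction
test_dirs = np.array([0.0, 45.0, 90.0])
compensation = gaussian_tuning(test_dirs, preferred) @ W
```

Because only units tuned near the trained direction are active during learning, the weight change, and hence the compensation, is confined to a narrow band of directions, matching the postadaptation generalization function.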
There has been a long-standing disagreement about how neurons in the primary motor cortex control arm movements. Some motor cortex neurons compute joint torques and muscle tensions, but others carry kinematic spatial information and encode hand-movement directions and velocities. This has led to an impasse, since motor cortex neuronal activities reflect such a broad mixture of movement-related variables that they do not explain how a desired trajectory is converted by motor cortex neurons into joint torques. We have shown that the equations of motion for reaching based on the spatial positions of limbs in Cartesian coordinates are considerably more concise and have physically intuitive interpretations (Tanaka, submitted). In this reference frame, joint torques are sums of vector cross products between the spatial positions of limb segments and their spatial accelerations and velocities. We simulated a model of a three-joint arm in a two-dimensional plane and compared the results of the model with recordings from neurons in the motor cortex during similar movements. Comparisons between the population vector of the model and the population vector from the cortex revealed a close correspondence in the distribution of preferred directions, the dependence on the workspace, and the rotation of the population vector. Finally, we showed how vector products can be used directly to compute the muscle tensions for the agonist and antagonist muscles in an arm. This computational framework for the function of the motor cortex greatly simplifies the control of multijoint limbs and can be scaled up to control all of the muscles in the body.
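The cross-product form of the torque computation can be illustrated in a stripped-down case: point-mass segments about a single joint, with link inertia and gravity omitted. This is a sketch of the general idea, not the three-joint model from the paper.

```python
import numpy as np

def cross2(u, v):
    # z-component of the planar cross product
    return u[0] * v[1] - u[1] * v[0]

def joint_torque(joint, mass_pos, mass_acc, masses):
    """Net torque about one joint for point-mass limb segments: the sum,
    over segments, of mass times the cross product between the segment's
    position relative to the joint and its spatial acceleration.
    Link inertia and gravity are omitted in this sketch."""
    return sum(m * cross2(r - joint, a)
               for r, a, m in zip(mass_pos, mass_acc, masses))

shoulder = np.array([0.0, 0.0])
r = np.array([1.0, 0.0])          # unit mass at the end of a unit link

# uniform circular motion: a = -omega^2 r points at the joint, so the
# cross product vanishes and no torque is needed to sustain the motion
tau_uniform = joint_torque(shoulder, [r], [-r], [1.0])

# a tangential acceleration does require torque: tau = m (r x a)
tau_speedup = joint_torque(shoulder, [r], [np.array([0.0, 2.0])], [1.0])
```

The intuition carried by the cross product is visible directly: only the component of acceleration perpendicular to the limb segment costs torque.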
In curved hand movements, the angular speed is proportional to the two-thirds power of the curvature. Several models have been proposed to explain the origin of this power law from optimality principles. However, we recently found that the law only holds for movement paths whose curvature changes quickly (such as ellipses), but not for paths with slowly changing curvature, such as spirals. We have developed a new model of smooth arm movements and have shown that it agrees with the results for spirals and makes many more predictions that we are in the process of testing (Huh 2011). The new model integrates the dopamine reward system (Montague, Dayan et al. 1996) with motor control and gives rise to a version of optimal stochastic control with an infinite horizon; that is, the movement ends at a fixed spatial target rather than at a fixed time. This new model makes testable predictions that should inspire new experiments.
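The power law can be checked numerically on the classic elliptical case. An ellipse traced at a constant parameter rate obeys the law exactly, so fitting the slope of log angular speed against log curvature recovers the exponent; the axis lengths and sampling below are arbitrary choices for the check.

```python
import numpy as np

# ellipse traced at constant parameter rate: x = a cos t, y = b sin t
a, b = 2.0, 1.0
t = np.linspace(0, 2 * np.pi, 10000, endpoint=False)
dx, dy = -a * np.sin(t), b * np.cos(t)     # velocity
ddx, ddy = -a * np.cos(t), -b * np.sin(t)  # acceleration

speed = np.hypot(dx, dy)
curvature = np.abs(dx * ddy - dy * ddx) / speed**3
angular_speed = speed * curvature

# the power law predicts log angular_speed is linear in log curvature
# with slope 2/3
slope = np.polyfit(np.log(curvature), np.log(angular_speed), 1)[0]
# slope ≈ 2/3
```

Running the same regression on a spiral traced by human subjects is where the law breaks down, which is the discrepancy the new model addresses.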
Sleep spindles are bursts of 11-15 Hz oscillations in the EEG during non-REM sleep. Previous models of spindles in cats focused on the thalamus, which is capable of generating spindles in the absence of the cortex (Destexhe, Bal et al. 1996). In collaboration with Igor Timofeev at Laval, we developed a model of interactions between the cortex and the thalamus that better explains experimental data on the initiation, duration and termination of sleep spindles (Bonjean in press). We recently extended the model to human sleep spindles (Fig. 3), which are highly synchronous across the scalp when measured with EEG but have low spatial coherence and exhibit low correlation with EEG signals when simultaneously measured with MEG. We used a computational model to explore the hypothesis that the MEG was picking up the deeper core system while the EEG was most sensitive to the superficial matrix system (Bonjean, submitted). The model included interactions between the two systems. We collaborated with Syd Cash at MGH, who recorded directly from the cortex of epilepsy patients with laminar electrodes and confirmed the predictions of the model.
Terry Sejnowski was the senior author on a Science article that explored the implications of advances in our knowledge of biological and machine learning for educational practice (Meltzoff, Kuhl et al. 2009). The goal is to improve the educational outcomes of children by making them better learners.