Ari Rosenberg

Position title: Associate Professor - Ph.D., Computational Neuroscience, University of Chicago

Email: ari.rosenberg@wisc.edu

Phone: (608) 265-5782 | Fax: (608) 265-5512

Address:

Research interests: Neural computations underlying 3D vision, multisensory integration, and the neural basis of autism


Visit the Rosenberg Lab

Positions Available
The lab is seeking highly motivated and industrious individuals to fill several key positions.

A technical position is available for a computer programmer interested in developing systems for real-time, dynamic (closed-loop) control of neuroscience experiments involving 3D visualizations. Demonstrated experience with OpenGL and with the software/hardware requirements of real-time control at millisecond precision is highly desired.

We are looking for graduate students with undergraduate or master’s level training in the sciences or engineering. Interested students can contact me and apply to the following Ph.D. programs:
Neuroscience Training Program
Physiology Graduate Training Program

Postdoctoral positions are also available. Individuals with experience in multi-electrode neural recordings and/or fMRI in behaving animals are particularly encouraged to apply.



3D vision

How do we perceive the three-dimensional (3D) structure of the world when our eyes only sense 2D projections like a movie on a screen? Estimating the 3D scene structure of our environment from a pair of 2D images (like those on our retinae) is mathematically an ill-posed inverse problem plagued by ambiguities and noise, and involving highly nonlinear constraints imposed by multi-view geometry. Given these complexities, it is quite impressive that the visual system is able to construct 3D representations that are accurate enough for us to successfully interact with our surroundings. A major area of research in the lab is devoted to understanding how the brain achieves accurate and reliable 3D representations of the world. A critical aspect of 3D vision is the encoding of 3D object orientation (e.g., the slant and tilt of a planar surface). By adapting mathematical tools used to analyze geomagnetic data (Bingham functions), we developed the first methods for quantifying the selectivity of visual neurons for 3D object orientation. Our work on this topic employs a synergistic, multifaceted approach combining computational modeling, neurophysiological studies, and human psychophysical experiments.
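As a rough illustration of this kind of 3D orientation coding, the sketch below evaluates a Bingham-style tuning function over the slant and tilt of a planar surface. It is a minimal toy example, not the lab's fitting code: the slant–tilt parameterization, the parameter names, and the baseline and gain terms are assumptions made purely for illustration.

```python
import numpy as np

def surface_normal(slant, tilt):
    """Unit normal of a planar surface. Slant: angle between the normal and the
    line of sight; tilt: direction of the slant in the image plane (radians)."""
    return np.array([np.sin(slant) * np.cos(tilt),
                     np.sin(slant) * np.sin(tilt),
                     np.cos(slant)])

def bingham_tuning(slant, tilt, M, Z, baseline=2.0, gain=20.0):
    """Bingham-style tuning curve: firing rate ~ exp(n' M diag(Z) M' n).
    M is a 3x3 orthogonal matrix of preferred axes and Z holds non-positive
    concentration parameters; the form is antipodally symmetric, so it handles
    unsigned orientations naturally. Baseline and gain are illustrative."""
    n = surface_normal(slant, tilt)
    C = M @ np.diag(Z) @ M.T
    return baseline + gain * np.exp(n @ C @ n)

# Toy cell whose preferred axes are the canonical axes and whose response is
# strongest for surfaces facing the observer (zero slant).
M = np.eye(3)
Z = np.array([-5.0, -5.0, 0.0])
print(bingham_tuning(np.deg2rad(0.0), 0.0, M, Z))   # near-preferred orientation
print(bingham_tuning(np.deg2rad(60.0), 0.0, M, Z))  # larger slant, weaker response
```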



Multisensory integration

Our visual system first encodes the environment in egocentric coordinates defined by our eyes. Such representations are inherently unstable in that they shift and rotate as we move our eyes or head. However, visual perception of the world is largely unaffected by such movements, a phenomenon known as spatial constancy. Perception is instead anchored to gravity, which is why buildings are seen as vertically oriented even if you tilt your head to the side. This stability of visual perception is a consequence of multisensory processing in which the brain uses gravitational signals detected by the vestibular and proprioceptive systems to re-express egocentrically encoded visual signals in gravity-centered coordinates. Vestibular deficits can thus compromise visual stability, and the absence of gravity in space can cause astronauts to experience disorienting jumps in the perceived visual orientation of their surroundings. A second area of research in the lab investigates where and how the brain combines visual information with vestibular and proprioceptive signals in order to achieve a stable, gravity-centered representation of the world. Our work on this topic relies on a combination of computational modeling and neurophysiological studies.
(Click here to play or download a movie illustrating one of our experimental protocols.)
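For intuition about the kind of transformation involved, the toy sketch below re-expresses an eye-centered direction in gravity-centered coordinates when the only difference between the two frames is a sideways head tilt (roll). It is a deliberately simplified illustration, not the lab's model: real eye and head orientations are three-dimensional, and the gravitational estimate itself comes from noisy vestibular and proprioceptive signals.

```python
import numpy as np

def roll_matrix(theta):
    """Rotation about the line of sight (roll) by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def eye_to_gravity(v_eye, head_roll):
    """Re-express an eye-centered direction in gravity-centered coordinates,
    assuming the frames differ only by the sensed head roll."""
    return roll_matrix(head_roll) @ v_eye

# A building edge that is vertical in the world:
head_roll = np.deg2rad(30.0)                       # head tilted 30 deg (sensed vestibularly)
edge_gravity = np.array([0.0, 1.0, 0.0])           # vertical in gravity-centered coordinates
edge_eye = roll_matrix(-head_roll) @ edge_gravity  # the tilted image the eyes receive
recovered = eye_to_gravity(edge_eye, head_roll)    # compensation restores the vertical
print(np.allclose(recovered, edge_gravity))        # True: spatial constancy in miniature
```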



Neural basis of autism

The prevalence of autism is growing at a dramatic rate, with a proportionate response from the scientific community. The extent to which this rise in prevalence reflects growing awareness, overdiagnosis, or a genuine increase in incidence is currently unclear. Considering that last year alone saw almost 4000 publications related to autism, it is perhaps not surprising that a number of controversies are emerging within the field. A third area of research in the lab aims to lift the fog surrounding this complex disorder. Given the heterogeneity of the genetic and environmental factors that may give rise to autism, as well as its phenotypic diversity, our approach takes the perspective that we can better understand autism by studying how the disorder affects neural computation. We are chiefly interested in determining whether the behavioral consequences of autism reflect alterations in canonical neural computations that occur throughout the brain, such as divisive normalization. To test this hypothesis, we are conducting psychophysical studies based on predictions of our recent computational work on the disorder. Our hope is that identifying where and how neural computations are altered in autism will provide a unique window of understanding into the disorder and may yield important insights into its treatment.
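To make the computation concrete, the sketch below implements the standard divisive-normalization equation, in which each neuron's driving input (raised to a power) is divided by a semi-saturation constant plus the pooled population activity. The pool_weight parameter is purely an illustrative knob for weakening the normalization pool and is not the lab's published model.

```python
import numpy as np

def divisive_normalization(drives, sigma=1.0, n=2.0, pool_weight=1.0):
    """Canonical divisive normalization: each driving input, raised to power n,
    is divided by a semi-saturation constant plus a weighted sum over the
    population. pool_weight = 1.0 gives the standard computation; values below
    1.0 weaken the normalization pool (an illustrative alteration only)."""
    d = np.asarray(drives, dtype=float) ** n
    return d / (sigma ** n + pool_weight * d.sum())

drives = np.array([1.0, 2.0, 4.0])
print(divisive_normalization(drives))                   # standard normalization
print(divisive_normalization(drives, pool_weight=0.5))  # weakened normalization pool
```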


See publications here.