Stephen David Lab

Stephen David, Ph.D.

 

Current Lab Members

Brad
Instructor
Brad Buran joined the lab as a postdoctoral fellow after previous training at the Liberman lab at Harvard and the Sanes lab at NYU and a brief stint in industry. He is interested in auditory perception and long-term plasticity resulting from hearing loss.

Jesyin
Postdoctoral Fellow

Jean
Postdoctoral Fellow

Daniela
Graduate Student
Daniela is a graduate student in the Neuroscience Graduate Program. She arrived at OHSU after completing a master's degree at the University of Trieste and a fellowship at the Italian Institute of Technology.

Zachary
Graduate Student
Zack is a student in the Neuroscience Graduate Program who joined the lab in June 2014. He is interested in sensory representation and the influence of behavioral state on perception.

Ulysses
Research Assistant 2

Seán
Research Assistant 2

Lab Projects

Humans and other animals are exquisitely adept at creating a coherent sense of the world from the complex patterns that continuously bombard their senses. Throughout development, we identify patterns in the sounds around us and learn to categorize and discriminate important signals, while ignoring irrelevant but often substantial noise. State of the art audio processing systems attempt to mimic these abilities, but even the most common sources of environmental noise severely confound automatic speech processors and distort the output of hearing aids and prosthetics. We are interested in understanding the neurophysiological and computational processes that underlie the remarkable abilities of the auditory brain, with an aim of improving engineered systems for sensory signal processing.

Along a different intellectual line, but in a similar spirit of understanding complex systems, we also study the history of neuroscience and, in particular, how mentorship influences the transmission and evolution of ideas.

Behavior-driven changes in the representation of sensory information

Consider how the brain might represent an important natural sound, such as a vocalization, in a noisy environment. In more peripheral areas (left), such as primary auditory cortex, responses to both the vocalization and background noise are distributed broadly across the neural population (red indicates neurons that respond more to the vocalization and blue those that respond more to the noise). As information passes to subsequent stages, responses to the noise are suppressed and a smaller subset of neurons becomes more selective for the vocalization. Finally, in brain areas that execute decisions or motor responses (right), a small set of neurons encoding an appropriate behavior respond categorically only to the vocalization. The goal of understanding this process of hierarchical feature extraction lies at the core of our research.

During normal behavior, important information can arrive from multiple sensory modalities, and the relevance of any given stimulus can change with behavioral demands. Thus the ability to robustly identify sensory events represents a combined effort of bottom-up multimodal representations that are modulated by top-down demands for information appropriate to the task at hand. To understand these processes, we conduct experiments that manipulate auditory attention and study how the cortex functions under these different behavioral conditions. Data from these studies are used to develop computational models that integrate top-down and bottom-up processing under realistic, natural conditions.

Neural representation of natural auditory and visual stimuli

We are also interested in basic questions of how sensory information is represented by cortical neurons, especially under the rich and varied conditions encountered in the natural environment. Neural representations can be characterized by models, i.e., mathematical equations that describe the relationship between a sensory stimulus and subsequent neural activity. Much of our understanding of auditory brain representations is derived from experiments that probed neural response properties with simple synthetic stimuli, such as tones and noise bursts. While these experiments have revealed much about representation, particularly in more peripheral brain areas, recent experiments measuring neural responses to speech and other vocalizations have shown consistently that models based on responses to simple synthetic stimuli do not predict responses to more complex natural stimuli. Instead, new and more comprehensive dynamical models are required to describe representations of these more behaviorally relevant stimuli.
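To give a flavor of what such a functional model looks like, the sketch below implements a minimal linear-nonlinear model with a spectro-temporal receptive field (STRF): a linear filter applied to the recent history of a sound spectrogram, followed by a rectifying output nonlinearity. All shapes and values here are toy illustrations, not the lab's actual code; real STRFs are estimated from recorded data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spectrogram: 30 frequency channels x 500 time bins.
# In practice this would be computed from a natural sound.
spectrogram = rng.random((30, 500))

# Hypothetical STRF: a linear filter spanning 30 frequency
# channels and a 15-bin history window (illustrative values only).
strf = rng.standard_normal((30, 15)) * 0.1

def predict_response(stim, strf):
    """Linear STRF filtering followed by half-wave rectification
    (a simple linear-nonlinear model of a neuron's firing rate)."""
    n_freq, n_lag = strf.shape
    n_time = stim.shape[1]
    pred = np.zeros(n_time)
    for t in range(n_lag, n_time):
        # Dot product of the filter with the recent stimulus history.
        pred[t] = np.sum(strf * stim[:, t - n_lag:t])
    return np.maximum(pred, 0)  # firing rates are non-negative

rate = predict_response(spectrogram, strf)
print(rate.shape)  # one predicted firing rate per time bin
```

Models of this form predict responses to simple synthetic stimuli reasonably well; the limitation noted above is that they fall short for natural sounds, motivating richer dynamical models.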

With the continuous increase in available computational power, we have the ability to test and compare a huge variety of increasingly complex models. This potential raises new questions: What is the best way to compare functional models of neurons? How should large and diverse neurophysiological datasets be stored so that they remain available for testing new models? The Neural Prediction Challenge, a collaboration with Jack Gallant and Frederic Theunissen at UC Berkeley, is a database of single neuron recordings from auditory and visual systems using natural stimuli. Interested researchers can download the data and compare the performance of their model against other models fit with the same data. A related project, STRFpak, is a software package providing model estimation and validation tools that can be applied to any neurophysiological data set.
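A common way to score competing models on shared data is the correlation between recorded and predicted responses on held-out trials. The toy sketch below (synthetic data, not from the challenge) shows the idea: the model whose predictions track the recorded response more closely earns the higher score.

```python
import numpy as np

def prediction_correlation(actual, predicted):
    """Pearson correlation between recorded and model-predicted
    responses; a standard score for comparing functional models."""
    return np.corrcoef(actual, predicted)[0, 1]

rng = np.random.default_rng(1)
# Toy "recorded" spike counts and two hypothetical model predictions,
# one tracking the data closely and one only loosely.
actual = rng.poisson(5.0, size=200).astype(float)
model_a = actual + rng.normal(0.0, 1.0, size=200)  # close fit
model_b = actual + rng.normal(0.0, 5.0, size=200)  # weak fit

score_a = prediction_correlation(actual, model_a)
score_b = prediction_correlation(actual, model_b)
print(score_a > score_b)
```

Scoring all submissions with the same held-out data and the same metric is what makes results on a shared database directly comparable.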

Effects of hearing loss on central auditory representations

Hearing loss is not just a problem with the ear. Although most difficulty with hearing is caused by damage to the ear's hair cells or to the auditory nerve, the reduction in auditory input can lead to substantial compensatory plasticity in the brain areas responsible for processing sounds. This plasticity is particularly large if the hearing loss happens early in development. In such situations, when the brain does not wire normally, the benefits of restored hearing later in life (through hearing aids or cochlear implants) can be limited, as the brain does not have the capability to process sounds normally. Recently, we have begun studying how natural sounds are represented in the auditory cortex after hearing loss and how hearing loss impacts the ability of top-down control systems to extract behaviorally relevant information from sounds.

History of neuroscience

Neuroscience is a new but rapidly growing field, drawing on ideas and methodologies from many other research areas, including biology, psychology, physics, mathematics and philosophy. Depending on their training, each neuroscientist brings a unique perspective to their research. In ongoing research, we are studying how academic mentorship, the hands-on training received at the doctoral and postdoctoral level, influences the work of individuals and how training in multiple disciplines allows for the synthesis of different approaches into the new techniques that define neuroscience as its own field.

Neurotree is a collaborative, open-access website that tracks and visualizes the academic genealogy of neuroscience. After nine years of growth driven by user-generated content, the site has captured information about the mentorship of over 45,000 neuroscientists. As a public resource, it has become a unique tool for a community of primary researchers, students, journal editors, and the press. The database captures a unique aspect of the history of the field, and it allows us to explore the evolution of new ideas and how mentorship has contributed to their development. We are exploring new ways to improve the quality of the existing data and ways to link Neurotree to other datasets, such as publication and grant databases. Inspired by Neurotree's example, genealogies have been launched for a number of other fields under the auspices of the Academic Family Tree, which aims to build a single genealogy across all academic fields.

See Dr. David's Laboratory of Brain, Hearing, and Behavior website.