Neural encoding of auditory features during music perception and imagery

Stephanie Martin, Christian Mikutta, Matthew K. Leonard, Dylan Hungate, Stefan Koelsch, Shihab Shamma, Edward F. Chang, José Del R. Millán, Robert T. Knight, Brian N. Pasley

Research output: Contribution to journal › Article › peer-review

30 Scopus citations

Abstract

Despite many behavioral and neuroimaging investigations, it remains unclear how the human cortex represents spectrotemporal sound features during auditory imagery, and how this representation compares to auditory perception. To assess this, we recorded electrocorticographic signals from a musically proficient patient with epilepsy in 2 conditions. First, the participant played 2 piano pieces on an electronic piano with the keyboard's sound output turned on. Second, the participant replayed the same piano pieces without auditory feedback and was asked to imagine hearing the music in his mind. In both conditions, the sound output of the keyboard was recorded, allowing precise time-locking between the neural activity and the spectrotemporal content of the music imagery. This novel task design provided a unique opportunity to apply receptive field modeling techniques to quantitatively study neural encoding during auditory mental imagery. In both conditions, we built encoding models to predict high gamma neural activity (70-150 Hz) from the spectrogram representation of the recorded sound. We found robust spectrotemporal receptive fields during auditory imagery, with substantial but not complete overlap in frequency tuning and cortical location compared to receptive fields measured during auditory perception.
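The receptive field modeling described above can be illustrated as a linear mapping from time-lagged spectrogram features to a neural response, typically fit with ridge regression. The sketch below is a minimal illustration of that general technique, not the authors' actual pipeline; the function names, lag count, and regularization value are assumptions for the example.

```python
import numpy as np

def lag_spectrogram(spec, n_lags):
    """Build a design matrix of time-lagged spectrogram frames.

    spec   : (n_times, n_freqs) spectrogram
    n_lags : number of past frames to include
    Returns a (n_times, n_freqs * n_lags) matrix, zero-padded at the start.
    """
    n_times, n_freqs = spec.shape
    X = np.zeros((n_times, n_freqs * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * n_freqs:(lag + 1) * n_freqs] = spec[:n_times - lag]
    return X

def fit_strf(spec, response, n_lags=10, alpha=1.0):
    """Fit a spectrotemporal receptive field by ridge regression.

    Solves (X'X + alpha*I) w = X'y for the weights mapping lagged
    spectrogram features to the neural response (e.g., high gamma power),
    then reshapes them into a (lags, freqs) receptive field.
    """
    X = lag_spectrogram(spec, n_lags)
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    w = np.linalg.solve(XtX, X.T @ response)
    return w.reshape(n_lags, spec.shape[1])
```

In an analysis like the one described, one such model would be fit per electrode in each condition, and frequency tuning compared between the perception and imagery receptive fields.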

Original language: English
Pages (from-to): 4222-4233
Number of pages: 12
Journal: Cerebral Cortex
Volume: 28
Issue number: 12
DOIs
State: Published - 1 Dec 2018

Keywords

  • auditory cortex
  • electrocorticography
  • frequency tuning
  • spectrotemporal receptive fields

