Tecumseh Fitch

Dept. of Behavioral and Cognitive Biology, Faculty of Life Sciences, University of Vienna

1. Comparative Neural Basis of Music and Speech (with Leonida Fusani & Ludwig Huber)

Certain perceptual and cognitive aspects of music and speech appear to be based on mechanisms shared with other animals, but the neural basis of these mechanisms, and the role of environmental stimuli in shaping them, remains unclear [1]. This project will use non-invasive neural recordings (EEG in pigeons and dogs [2], fMRI in dogs [3]) to gain a clearer picture of the mechanisms underlying music perception – both rhythmic and metrical structure and the harmonic structure of chord sequences, two aspects of “musical syntax” – along with speech processing (the perception of lexical stress and of vowels). Such methods have only recently been applied to animals, and have already provided important insights into shared mechanisms underlying complex auditory perception [4]. By comparing brain activations for music and speech, we can determine whether overlapping brain regions are activated, as in humans, or whether distinct neural circuits are involved. We will also examine the effect of the environment in dogs by studying animals with more or less exposure to music in their homes (e.g. dogs owned by music teachers vs. by owners who prefer silence at home); if the results are promising, this approach could later be extended to kennel- or pack-living dogs to titrate their exposure to human speech.

References:

(1) Fitch, W.T., & Martins, M.D. (2014). Hierarchical processing in music, language and action: Lashley revisited. Annals of the New York Academy of Sciences, 1316, 87-104.

(2) Törnqvist, H., Kujala, M.V., Somppi, S., Hänninen, L., Pastell, M., Krause, C.M., Kujala, J., & Vainio, O. (2013). Visual event-related potentials of dogs: a non-invasive electroencephalography study. Animal Cognition, 16(6), 973-982.

(3) Andics, A., Gácsi, M., Faragó, T., Kis, A., & Miklósi, Á. (2014). Voice-sensitive regions in the dog and human brain are revealed by comparative fMRI. Current Biology, 24, 1-5.

(4) Honing, H., Merchant, H., Háden, G.P., Prado, L., & Bartolo, R. (2012). Rhesus monkeys (Macaca mulatta) detect rhythmic groups in music, but not the beat. PLoS ONE, 7, e51369.


2. High-Throughput Bioacoustic Analysis of Social Communication in Geese (with Sonia Kleindorfer)

We will use continuous acoustic monitoring of greylag goose vocalizations (via wireless collars), combined with video, acoustic-camera (audio/video source localization), physiological, and behavioral data, to develop a detailed vocal repertoire of the greylag goose tied to its ethogram. We will process this huge volume of data using Python and Praat, together with data-science (hierarchical cluster analysis) and machine-learning (LASSO) approaches. We will then run playback experiments, using both behavioral responses (looking time) and physiological responses (heart-rate acceleration) as dependent measures, to address questions of social preference (grounded in social network analysis). Finally, we will address the causal role of vocalization in group behaviors, for example by determining whether group movements are preceded by particular vocalizations from particular individuals (e.g. using Granger causality and similar measures).
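To give a flavor of the repertoire-building step, the sketch below applies hierarchical cluster analysis to synthetic per-call acoustic features. This is a minimal illustration of the technique, not the project's actual pipeline: the feature set (duration, mean F0, spectral centroid) and all parameter choices are assumptions for the example.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Synthetic stand-in for measured acoustic features, one row per
# vocalization (hypothetical columns: duration, mean F0, spectral centroid).
rng = np.random.default_rng(42)
call_type_a = rng.normal(loc=[0.2, 400.0, 1.5], scale=0.05, size=(30, 3))
call_type_b = rng.normal(loc=[0.8, 900.0, 3.0], scale=0.05, size=(30, 3))
features = np.vstack([call_type_a, call_type_b])

# Standardize each feature so no single dimension dominates the distances.
z = (features - features.mean(axis=0)) / features.std(axis=0)

# Ward linkage builds the call hierarchy; cutting the tree into a fixed
# number of clusters yields candidate call-type labels for the repertoire.
tree = linkage(z, method="ward")
labels = fcluster(tree, t=2, criterion="maxclust")
```

In practice the number of clusters would not be fixed in advance but chosen from the dendrogram (or a criterion such as silhouette score), and the labels would then be validated against the ethogram and behavioral context of each call.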