Linguist Laura McPherson studies how some cultures embed language into instrumental music.
When she first began studying the tonal variations of Seenku, an endangered language spoken in the West African country of Burkina Faso, linguist Laura McPherson had a sudden thought: How does the meaning of Seenku change when people sing?
To find out, McPherson, an associate professor in the Department of Linguistics, asked a local language consultant to share some recordings of traditional Seenku songs with her. But instead of songs with words, McPherson was provided with the instrumental music of Burkinabè musician Mamadou Diabaté, who plays the traditional xylophone, or balafon.
"I was like, 'OK, this is cool, but it's not really what I'm studying; this doesn't have any language in it,'" McPherson recalls.
Then her language consultant told her: "I know what the xylophone is saying."
That stopped McPherson in her tracks.
"I said, 'Excuse me? What do you mean 'what the xylophone is saying'? How can a xylophone be saying anything?'"
McPherson reached out to Diabaté when she was in Vienna the following year, after a field trip to Burkina Faso was thwarted by the revolution that ousted long-time president Blaise Compaoré.
Their meeting kicked off a decade-long research collaboration and inspired McPherson's current focus: music as a "surrogate language" among Indigenous peoples such as the Sambla in Burkina Faso and the Hmong in China and Southeast Asia.
In these and other cultures, instruments including xylophones, flutes, and drums are used to "utter" specific messages to audiences without words.
"Most of the work that's been done on musical surrogate languages—instruments that can communicate language—has been from an ethnomusicology or an anthropology standpoint. Linguistics really hasn't done a lot of focusing on them," says McPherson, who was awarded a prestigious multi-year NSF CAREER grant in 2020 to study the connections between language and music.
"My goal is to combine linguistic analysis with these studies to understand which elements of language are being encoded, which structures are being used, how that is being encoded musically, how people understand them," McPherson says.
She also weaves in various cultural contexts. "For example, what settings are these used in? How do these settings help listeners understand messages? It's this extremely multifaceted study, which is taking me into many new terrains."
Talking xylophones
When Diabaté told McPherson that the xylophone she was hearing in his recordings was "talking," what exactly did he mean?
"For the most part, it's used to communicate with spectators," says McPherson, noting that the instrument is the cornerstone of Sambla music—present at marriages, funerals, and any sort of festival. It's traditionally played by three people, with the person playing the highest notes, the treble parts, serving as the one who "speaks" to the audience.
"It could be asking for money, because this is how they make their livelihood. So, for example, it might say, 'Hey, son of Gogo, come bring me a thousand francs; I haven't had anything to eat today,'" McPherson says. "Or it might tell somebody they need to get up and dance."
When musicians embed their spoken languages into music—from tones and pitch to rhythm and frequency—they're recording aspects of the spoken word that aren't necessarily taught in books, McPherson says. "In English, for example, the 't' in 'top' is different from the 't' in 'stop,' but no one taught you that, and you probably wouldn't be able to articulate it," she explains. "But when people are encoding their language on instruments, they're tapping into that knowledge."
An audience member could even approach the xylophone and ask it to play a specific song, and the xylophone would respond through music.
"The xylophone will respond, 'Why do you want me to play this song?' And the person will say, 'Because it's my father's song,'" McPherson says. "Or the xylophone might respond, 'If I do this, then you need to bring me two chickens.'"
The ways in which musical instruments are used, and the types of messages conveyed, vary by culture. The Hmong, for example, use a reed mouth organ called a qeej during funeral rites to communicate with the dead.
"So the qeej tells the souls of the dead that they are dead, that they need to pass over to their ancestors, and where they need to go," McPherson says. "It gives all these instructions to dead spirits."
Music, language, and what it means to be human
By studying these musical surrogate languages, McPherson hopes to "probe what people know about their languages" and use the insights to understand language—and the human experience—more broadly.
"Language is one of the key characteristics of human beings. Therefore, when we study how language works—the structure of human language—we're studying ourselves," says James Stanford, chair of the linguistics department. "Professor McPherson's innovative research at the intersection of language and music provides important new theoretical insights about the structure of human language, and new empirical perspectives about understudied cultural and linguistic systems."
McPherson regularly integrates her research into the classroom. This spring, for example, her undergraduate seminar welcomed guest musicians and speech surrogate practitioners from Nigeria, Burkina Faso, and Southeast Asia.
"It's deeply rewarding to enable students to speak firsthand with people who practice these amazing traditions," she says.
There are also practical applications for McPherson's work. She has just begun a pilot study on how the brain processes surrogate languages, with the hope that the resulting insights could help dementia patients communicate more effectively, even as the disease progresses.
"Sometimes people with dementia or Alzheimer's can sing, but they can't really speak anymore. So what happens with surrogate languages?" she asks. "What areas of the brain are lighting up? Are they language areas? Are they music areas? Is it both? Is it different?"
Diabaté contributed to the pilot study by undergoing EEG imaging last spring, while he was on campus teaching the Sambla balafon tradition to Dartmouth students as part of a course on the language-music connection co-taught by McPherson and Professor of Music Ted Levin.
"All his language areas are lighting up when he hears musical surrogate languages compared with just instrumental music; a lay person would not be able to tell the difference between them at all," McPherson explains. "Could studying surrogate languages be useful for helping people communicate?"
In April, McPherson was awarded an Alexander von Humboldt Research Fellowship for Experienced Researchers. The award will support a six-month stay as a visiting scholar at the University of Cologne in Germany, where she will study tone in interdisciplinary contexts.
Ultimately, McPherson hopes her research will help preserve endangered musical traditions and communication systems.
"So many of these are being lost, and I hope that in working with communities and documenting them, it inspires younger musicians to take pride in these systems, mainstream them, and pass them down," she says. "Because they really are genius."