The curious case of whistled languages and their lack of left-brain dominance

Whistled Turkish is processed equally by both sides of the brain.

Whistled Turkish is a non-conformist. Most obviously, it bucks the normal language trend of using consonants and vowels, opting instead for a bird-like whistle. But more importantly, it departs from other language forms in a more fundamental respect: it's processed differently by the brain.

Language is usually processed asymmetrically by the brain. The left hemisphere does the heavy lifting, regardless of whether the language in question is spoken, written, or signed. Whistled Turkish is the first exception to this rule, according to a paper in this week's issue of Current Biology. There's evidence that both hemispheres pitch in about the same amount of effort when processing the whistled words.

This evidence could contribute to our general understanding of how the brain works by answering some of the many mysteries about how and why we have asymmetrical processing. And perhaps very far down the road, this research could help stroke sufferers regain some of their lost communicative skills.

Talking with the birds

If you’ve ever tried to communicate in a kind of sing-song grunt when you have your mouth full—like asking your cousin to “please pass the mashed potatoes” by braying “eee aaa uh ash oh-ay-oh”—you can get a rough idea of what whistled languages are like. Whistled languages take the existing words of a language and approximate them in whistled form, while retaining much of the acoustic information, such as pitch and sentence stress, that would be present in a normal spoken utterance.

Although they’re nowhere near as common as spoken languages, a number of different whistled languages have emerged all around the world, usually in communities that live in considerable isolation. Whistles can be understood as far as 10 kilometres from the whistler, letting the signal carry much farther than spoken language. It’s even possible for a whistled form to be adapted from one language to another (much like a writing system).

The region in Turkey where the research was conducted has the topography typical of whistled-language communities. “When you’re living there, you recognise why it’s a good landscape for whistled Turkish,” lead author Onur Güntürkün told Ars. “It’s a very mountainous region, very steep with deep valleys, and to communicate with your neighbours, you have to climb down and up again.” Whistling, on the other hand, lets you communicate easily with someone on the other side of the valley.

Dying whistles

Landline and mobile phones have obviously had an impact on whistled Turkish, and they now dominate communication in this region as they do everywhere else. “You can’t gossip with whistled Turkish,” said Güntürkün. “If you want to have a private conversation, it’s better to use the phone.”

The result has been an overhaul of the social structure surrounding whistling. Whistling used to span the entire population, Güntürkün explained: children as young as six were already proficient, and both genders used the language. Now that technology has partially supplanted it, whistling has more to do with pride in local culture and the maintenance of tradition.

Boys in early adolescence tend to take an interest in whistled Turkish and learn it from adults, while girls don’t seem as interested, he reported. The older generation used whistling equally across genders, but the younger whistlers are all young men. It’s not really clear why this is the case.

There are efforts to keep the tradition alive, such as whistling fairs and government-funded awards, but Güntürkün isn’t optimistic about the future of whistling. “I think it’s doomed to die.” He might be wrong, he concedes, but he maintains that there has been a noticeable decline in the culture.

Two men using whistled Turkish to communicate about 700 meters across a valley in Turkey. Video credit: Onur Güntürkün.

Brain asymmetries

Whistled languages aren’t just fascinating in their own right: they’re a useful vehicle to study language perception in the brain. The left hemisphere is dominant in language processing—dominant, but not exclusive. Both hemispheres are active for language processing, but language tasks engage the left hemisphere more heavily than the right.

According to Güntürkün, the right hemisphere is involved in decoding properties like melody, pitch, and tone—the kinds of cues that let you know whether or not a sentence is sarcastic. His hypothesis was that whistling, as a highly melodic form of language, involves the right hemisphere more than other language forms, perhaps even to such an extent that whistled language breaks the left-hemisphere dominance trend.

Different language forms do show up differently in the brain. “It’s well-known that the form of the language matters,” said Joseph Devlin, a neuroscientist at University College London who wasn't involved in this research. “Sign language doesn’t look exactly like spoken language in the brain; reading Chinese doesn’t look exactly like reading English.” They’re different but still similar: they still all show left-hemisphere dominance, and the overlap is much larger than the difference, Devlin added.

To test Güntürkün's hypothesis, it's obviously necessary to figure out which hemisphere is processing the signal of whistled language. Conducting an fMRI study in mountainous rural Turkey wasn't really an option, so the researchers opted for a simpler method: a technique called “dichotic listening.” Participants were screened for their fluency in whistled Turkish, then given headphones and shown a set of written syllables. When they heard a syllable through the headphones, they had to point to that syllable on the list in front of them.

Sometimes, both headphones played the same syllable. At other times, each headphone played a different syllable. When this happened, the syllable the participant reported hearing was taken to indicate which hemisphere had processed the signal first. If the participant pointed to the syllable sent to the right ear, it suggested that the left hemisphere processed it first; if they pointed to the left-ear syllable, it suggested speedier right-hemisphere processing.

Generally, in dichotic listening experiments, you'd expect participants to report the right-ear syllable on around two-thirds to three-quarters of the conflicting trials, according to Güntürkün—a figure higher than chance and indicative of primarily left-hemisphere processing.

In this study, the results for the spoken syllables matched up with the norm; participants heard them through the right ear about two-thirds of the time. Whistled syllables, however, were heard equally through both ears, suggesting that the right hemisphere was working harder than with spoken language and showing no hemispheric dominance.
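To make the logic of the measure concrete, here is a minimal sketch (in Python, not the study's actual analysis code) of how ear advantage could be quantified from such trials. The data format, function names, and numbers are illustrative assumptions, chosen to mirror the pattern just described.

```python
# Hypothetical scoring sketch for a dichotic listening task.
# Not the study's analysis code: the data format and numbers below
# are illustrative assumptions.

def right_ear_advantage(reports):
    """Fraction of conflicting trials on which the participant
    reported the syllable played to the right ear. A right-ear
    report is taken as a proxy for faster left-hemisphere
    processing."""
    return sum(1 for ear in reports if ear == "R") / len(reports)

def laterality_index(reports):
    """(R - L) / (R + L): +1 means every report favoured the right
    ear (left-hemisphere dominance), 0 means no ear advantage."""
    right = sum(1 for ear in reports if ear == "R")
    left = len(reports) - right
    return (right - left) / (right + left)

# Made-up data mirroring the reported pattern: spoken syllables show
# a ~2/3 right-ear advantage, whistled syllables an even split.
spoken = ["R"] * 20 + ["L"] * 10
whistled = ["R"] * 15 + ["L"] * 15

print(right_ear_advantage(spoken), laterality_index(spoken))      # ~0.67, ~0.33
print(right_ear_advantage(whistled), laterality_index(whistled))  # 0.5, 0.0
```

A laterality index near zero, as in the whistled condition here, is what “no hemispheric dominance” looks like in this paradigm.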

Dichotic listening is a well-established technique, but it’s not the best method because it relies purely on behaviour, said Devlin. Tests like EEG or MRI can give a better sense of what’s actually going on in the brain, but they need much more equipment than just a laptop and headphones, making dichotic listening a good choice for this particular group of people. It’s not clear, he added, whether we can really assume that hearing a syllable in the right ear means that it’s being processed largely by the left hemisphere.

That said, because each participant in this study listened to both spoken and whistled syllables and behaved differently when responding to them, we can be fairly sure that something is going on, he explained. The right ear/left hemisphere advantage was there for the spoken syllables and not for the whistled ones. That’s meaningful, even if we can't interpret it fully right now.

Good science: more questions than answers

Güntürkün was quick to emphasise that this discovery isn't about to turn linguistic neuroscience upside down by challenging the overall evidence for left-hemisphere language dominance: “It doesn’t mean that everything we think about brain asymmetries has to be changed,” he said. What's being questioned is the idea that the form of the input plays no role in left-hemisphere dominance. We now have reason to think that, given the right kind of linguistic input, that dominance might not appear.

Because whistled languages are rare, studies like these are difficult and expensive to conduct, but more research will be needed to back up these results and to see whether they hold up across different testing techniques and different whistled languages. "This is a nice first step which would be further improved by more objective measures such as brain scans," said Keelin Murray, a researcher interested in the interplay between music and language who wasn't involved with this study.

Güntürkün has an idea for future research that's exciting but also possibly a pipe dream. He told Ars he'd like to see whether this finding has an application for stroke victims. People who’ve suffered a left-hemisphere stroke can lose their language abilities to a devastating extent. If whistled languages lean more heavily on the right hemisphere, speakers of whistled languages who suffer left-hemisphere strokes might still be able to communicate by whistling. And if that turns out to be the case, changing the form of language might help other left-hemisphere stroke victims as well.

This is still a far-off plan, and it relies on finding a very rare person: someone fluent in a whistled language who has suffered a particular kind of brain damage from a stroke. Finding the small sample of 31 fluent whistlers for this experiment was difficult enough—an impressive number given the constraints, Devlin commented—which means the stroke study might prove too difficult.

Part of what makes the results of this study intriguing, said Devlin, is the new questions they raise. What is it about Turkish whistling that affected the processing? What exactly is going on in the brain? Is the language itself processed differently, or is this caused by different acoustics, or by differences in working memory? “Like all good science, a step forward raises new questions that we didn’t even know to ask before the study was done,” he said.

Current Biology, 2015. DOI: 10.1016/j.cub.2015.06.067

Listing image by Onur Güntürkün
