A human researcher floats near her ship recording a series of whistles from a nearby grey-skinned creature with great dark eyes. Amid the chatter, the computer recognizes a waveform and whispers a word into the researcher’s ear: sargassum.
This isn’t first contact with an alien race—but it’s almost as cool and closer to home.
Denise Herzing and the Wild Dolphin Project (the longest-running study of its kind) have been collecting dolphin sounds and behaviors for the past 26 years. Recently, the group made up its own dolphin whistles, assigned each sound a definition, and played them to dolphins, hoping the sounds might be picked up and mimicked.
The team uses a system called Cetacean Hearing and Telemetry (CHAT), specially designed to send out sounds and recognize when a dolphin mimics them. It was one of these mimics that Herzing’s hydrophones recognized as the phrase for ‘sargassum’—a kind of brown algae and a favorite Atlantic spotted dolphin toy.
Though researchers have long listened to dolphins whistling and clicking, the diction bewilders the human ear. Using modern microphones, however, scientists can record the full range of frequencies dolphins use to communicate (some beyond human hearing), and computers can mine the data for patterns invisible to us.
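The kind of frequency analysis this relies on can be sketched with a naive discrete Fourier transform. The toy example below (an illustration only, not the project’s actual software) finds the dominant frequency in a synthetic whistle tone sampled at a hydrophone-like rate:

```python
import math

def dominant_frequency(samples, sample_rate):
    """Return the frequency (Hz) of the strongest bin in a naive DFT."""
    n = len(samples)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):  # skip DC and the mirrored half
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * sample_rate / n

# Synthetic 12 kHz "whistle" sampled at 96 kHz, an integer number of
# cycles per window so the energy lands in a single bin.
rate = 96000
tone = [math.sin(2 * math.pi * 12000 * t / rate) for t in range(128)]
print(dominant_frequency(tone, rate))  # -> 12000.0
```

A real analysis would use an FFT over sliding windows (a spectrogram), but the principle is the same: the machine sees the full frequency content, including components the human ear cannot.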
The recent result was the project’s first “translation” of a dolphin whistle, but the researchers are cautious about assigning too much meaning to it. It’s important to note that there was no indication the whistle was being used in context.
It does suggest, however, that CHAT—developed by Thad Starner, a technical lead on Google Glass and director of Georgia Tech’s Contextual Computing Group, along with a group of graduate students—is working.
The system uses a pair of hydrophones (underwater microphones) to record dolphin whistles and sends whistles via an underwater speaker. A computing device in a housing attached to a diver’s chest uses a series of pattern discovery algorithms to comb through the recorded sounds and relays any hits to a bone-conducting headset.
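One simple way to recognize a “hit” of this kind is template matching by normalized cross-correlation: slide a stored whistle template across a recording and flag the offset where correlation spikes. This is a minimal sketch of the idea, not CHAT’s actual algorithm:

```python
import math

def normalized_xcorr(template, signal):
    """Slide template over signal; return (best offset, best correlation).

    Correlation is normalized to [-1, 1], so a value near 1.0 means the
    window matches the template up to scale and offset.
    """
    n = len(template)
    t_mean = sum(template) / n
    t_dev = [x - t_mean for x in template]
    t_norm = math.sqrt(sum(d * d for d in t_dev))
    best_off, best_r = 0, -1.0
    for off in range(len(signal) - n + 1):
        window = signal[off:off + n]
        w_mean = sum(window) / n
        w_dev = [x - w_mean for x in window]
        w_norm = math.sqrt(sum(d * d for d in w_dev))
        if w_norm == 0:
            continue  # silent window: no correlation defined
        r = sum(a * b for a, b in zip(t_dev, w_dev)) / (t_norm * w_norm)
        if r > best_r:
            best_off, best_r = off, r
    return best_off, best_r

# A rising "chirp" template buried in silence at sample offset 30.
template = [math.sin(0.2 * i * i) for i in range(40)]
signal = [0.0] * 30 + template + [0.0] * 30
print(normalized_xcorr(template, signal))  # offset 30, correlation ~1.0
```

In practice the matching would run on spectral features rather than raw samples, and a detection threshold on the correlation score would decide what gets relayed to the diver’s headset.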
Starner and former graduate student David Minnen originally developed the software to detect interesting patterns in any data set. It successfully sifted out 23 of 40 signs in a sign language video, for example. Though the algorithms can identify patterns, it still takes a human to make associations and discover meaning.
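Pattern discovery of this kind can be illustrated with the simplest possible version: count every fixed-length subsequence in a symbol stream and keep the ones that recur. (Minnen’s actual motif-discovery algorithms are far more sophisticated; this is only a toy.)

```python
from collections import Counter

def recurring_motifs(sequence, length, min_count=2):
    """Count all subsequences of a given length; keep those that recur."""
    counts = Counter(tuple(sequence[i:i + length])
                     for i in range(len(sequence) - length + 1))
    return {motif: c for motif, c in counts.items() if c >= min_count}

# Toy symbol stream with a hidden repeated motif: 'a', 'b', 'c'.
stream = list("xabcyyabczabcw")
print(recurring_motifs(stream, 3))  # -> {('a', 'b', 'c'): 3}
```

The hard part the software cannot do is exactly what the article notes: deciding what a recurring motif *means* is left to the human researchers.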
Eventually, Herzing hopes to tease out fundamental units in the whistles and use them to make a common language of sounds the animals are more willing to use than artificially created human phrases. CHAT could result in better two-way communication with dolphins, but Herzing says it isn’t a translator.
“The word ‘translator’ conjures up images of some magical device that somehow utilizes some universally discovered patterns and translates words to the awaiting humans, something like the Babel fish for those that follow science fiction. Nothing could be further from the truth.”
The team’s work last summer was cut short when they lost track of the dolphins they were studying. On later inspection of the data, they found what looked like a number of mimicked whistles at higher than expected frequencies. They’ll widen the frequency range upon resuming the study this summer.
The project has broader implications too. Animal communication is a fascinating big data problem. Other scientists are training similar technology on primate research.
Using machine-learning algorithms, Michael Coen has uncovered 27 fundamental units in white-cheeked gibbon calls. Brenda McCowan of the University of California, Davis, used software to search 37,000 observations of rhesus macaque behavior associated with violent conflicts dubbed “cage war.” The analysis found the periodic addition of young adult males could improve stability as they gradually took on “policing” duties.
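Uncovering “fundamental units” is essentially a clustering problem: group many recorded calls so that similar ones fall into the same bucket. As a toy illustration (not Coen’s method), one-dimensional k-means can group calls by a single scalar feature such as peak frequency:

```python
def kmeans_1d(values, centers, iters=10):
    """Cluster scalar call features around the given initial centers."""
    centers = list(centers)
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            # Assign each value to its nearest center.
            idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        # Move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Hypothetical peak-frequency measurements (kHz) from three call "units".
calls = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8, 9.1, 8.9, 9.0]
units = kmeans_1d(calls, [min(calls), sum(calls) / len(calls), max(calls)])
print(units)  # centers settle near 1.0, 5.0, and 9.0
```

Real call data is high-dimensional (whole spectrogram shapes, not one number), and choosing the number of units is itself a research question, but the grouping intuition is the same.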
In a video interview last year, SETI scientist Laurance Doyle noted we have a growing collection of animal sounds. Cornell’s Macaulay Library, for example, houses 175,000 audio and 60,000 video recordings of birds and other animals. The Alaska Whale Foundation has tens of thousands of humpback whale recordings.
Doyle suggests creating an open source library of animal sounds with the digital tools to analyze them. We have plenty of data, he says; the problem is analysis. We may speed discoveries by releasing the information and crowdsourcing its study—an approach used in the early distributed computing platform SETI@home.
And as for aliens? Doyle believes learning to communicate with earthly species might help us better recognize intelligent extraterrestrial signals. ET may not speak English, but intelligent life may have information sharing structures in common—the frequency of basic repeated linguistic units (called phonemes), for example.
“We say ‘are we alone’ has been a question asked at SETI for a long time now,” Doyle says, “But the fact is we’ve got a million languages on planet Earth that are not human.”
Using a branch of mathematics called information theory, Doyle contends dolphins and humpback whales have language, and the phoneme distribution is similar to human linguistic distributions. He thinks SETI might use similar methods to create an intelligence filter to better sift extraterrestrial signals.
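Two of the information-theoretic measures Doyle describes are easy to sketch: the slope of a log-log rank-frequency plot (close to -1 for human languages, per Zipf’s law) and the Shannon entropy of the unit distribution. A minimal version, assuming the signal has already been segmented into discrete units:

```python
import math
from collections import Counter

def zipf_slope(symbols):
    """Least-squares slope of log(frequency) vs. log(rank).

    Human languages cluster near -1 (Zipf's law); Doyle reports similar
    distributions for dolphin whistles and humpback song units.
    """
    freqs = sorted(Counter(symbols).values(), reverse=True)
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def entropy_bits(symbols):
    """Shannon entropy of the unit distribution, in bits per symbol."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy sequence whose unit frequencies fall off as 1/rank (12, 6, 4, 3).
seq = "a" * 12 + "b" * 6 + "c" * 4 + "d" * 3
print(zipf_slope(seq))         # slope very close to -1
print(entropy_bits("abcd"))    # uniform 4-symbol source: 2.0 bits
```

A distribution with a Zipf-like slope doesn’t prove a signal is a language, but it is the kind of statistical signature an “intelligence filter” could check automatically.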
SETI director Gerry Harp agrees: “If we can’t understand what dolphin communication is about, how likely is it we’ll understand what’s coming from space? It should be easier to understand dolphins, so it should be a good test case to try out our signal analysis.”
Images by Bethany Augliere and M. Hoffmann Kuhnt courtesy of the Wild Dolphin Project.