Hush now: Baby talk has common features across languages and societies
Machine learning study combined with citizen science shows talking and singing to infants may have evolutionary roots
People sing and talk to young infants in a similar fashion across a range of diverse languages, locations, and societies, a machine learning and citizen science study has found.
The research claims this has implications for the evolution of language, even suggesting some common features with forms of animal communication. Led by Courtney Hilton, a postdoctoral fellow in the Department of Psychology at Harvard University, the team of 40 international collaborators collected 1,615 recordings of human speech and song from 21 societies across six continents.
Applying two LASSO regression machine learning models – one for speech and one for song – the researchers were able to classify recordings as infant-directed or adult-directed on the basis of their acoustic features. They found that acoustic features consistently differed between the two: infant-directed recordings had purer timbres, infant-directed songs were more subdued, and infant-directed speech was typically higher in pitch.
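The classification approach described above can be sketched in miniature. The snippet below is a hypothetical illustration, not the study's actual pipeline: it simulates a few acoustic features (the feature names, values, and separations are invented) and fits an L1-penalised (LASSO-style) logistic regression with scikit-learn to separate infant- from adult-directed recordings.

```python
# Hypothetical sketch of LASSO-style classification of infant- vs
# adult-directed vocalizations. All features and numbers are invented
# for illustration; the real study used many more acoustic features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400  # simulated recordings per class

# Columns: [median pitch (Hz), timbre-purity proxy, loudness variability].
# Infant-directed recordings are simulated with higher pitch and purer
# timbre, mirroring the direction of the reported differences.
adult = np.column_stack([
    rng.normal(180, 30, n),   # pitch
    rng.normal(0.4, 0.1, n),  # timbre purity
    rng.normal(0.6, 0.1, n),  # loudness variability
])
infant = np.column_stack([
    rng.normal(260, 30, n),
    rng.normal(0.6, 0.1, n),
    rng.normal(0.5, 0.1, n),
])
X = np.vstack([adult, infant])
y = np.array([0] * n + [1] * n)  # 0 = adult-directed, 1 = infant-directed

# The L1 penalty shrinks uninformative feature weights toward zero,
# so the fitted model highlights which features drive the distinction.
model = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
print("feature weights:", model.coef_[0])
```

On these cleanly separated simulated features the classifier scores well above chance; the interesting part in the real study is which acoustic features survive the L1 shrinkage.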
Languages sampled included Hadza, an East African hunter-gatherer language, the Kannada language from the Dravidian family in India, and more widely spoken languages such as English and Mandarin.
People were also able to tell the difference between infant- and adult-directed speech and song. In a citizen science study, 51,065 speakers of a variety of languages from 187 countries heard the recordings, and listeners could reliably judge whether vocalizations were directed at infants.
"Their intuitions were more accurate than chance, predictable in part by common sets of acoustic features and robust to the effects of linguistic relatedness between vocalizer and listener. These findings inform hypotheses of the psychological functions and evolution of human communication," the paper, published in Nature Human Behaviour, said.
The researchers argue the findings show that despite variation in language, music, and infant care practices worldwide, when people speak to an infant or sing to an upset baby, they change the way they speak and sing in "similar and mutually intelligible" ways across cultures.
"This evidence supports the hypothesis that the forms of infant-directed vocalizations are shaped by their functions, in a fashion similar to the vocal signals of many non-human species," the researchers said.
The researchers argue that in animals, sounds have converged across groups to show friendliness or approachability in close contact calls while other sounds have converged for alarm calls or signs of aggression.
"The use of these features in infant care may originate from signalling approachability to a baby but may have later acquired further functions more specific to the human developmental context," the paper said.
The research also shows that naive listeners are biased toward judging speech as addressed to adults and songs as addressed to babies.
"We speculate that listeners treated 'adult' and 'baby' as the default reference levels for speech and song, respectively, against which acoustic evidence was compared, a pattern consistent with theories that posit song as having a special connection to infant care in human psychology," the researchers said. ®