Newborns are sensitive to multiple cues for word segmentation in continuous speech.
Dev Sci. 2019 Jan 25:e12802
Authors: Fló A, Brusini P, Macagno F, Nespor M, Mehler J, Ferry AL
Before infants can learn words, they must identify those words in continuous speech. Yet, the speech signal lacks obvious boundary markers, which poses a potential problem for language acquisition (Swingley, 2009). By the middle of the first year, infants seem to have solved this problem (Bergelson & Swingley, 2012; Jusczyk & Aslin, 1995), but it is unknown whether segmentation abilities are present from birth, or whether they emerge only after sufficient language exposure and/or brain maturation. Here, in two independent experiments, we examined two cues known to be crucial for the segmentation of human speech: the computation of statistical co-occurrences between syllables and the use of the language’s prosody. After a brief familiarization of about 3 minutes with continuous speech, neonates showed differential brain responses, measured with functional near-infrared spectroscopy (fNIRS), on a recognition test to words that violated either the statistical (Experiment 1) or prosodic (Experiment 2) boundaries of the familiarization stream, compared to words that conformed to those boundaries. Importantly, word recognition in Experiment 2 occurred even in the absence of prosodic information at test, meaning that newborns encoded the phonological content independently of its prosody. These data indicate that humans are born with operational language processing and memory capacities and can use at least two types of cues to segment otherwise continuous speech, a key first step in language acquisition.
PMID: 30681763 [PubMed – as supplied by publisher]