Pre-lexical analysis
Pre-lexical analysis involves automatic peripheral perceptual processes which analyse the spoken input into linguistically relevant units. These units are then used in the course of word recognition and in further language comprehension processes. As we have seen in Chapter 7, the perceptual cues and processes that we use for these language-related tasks are special, and infants become selectively attuned to these cues very early on in their lives.
It is widely accepted that the speech input is analysed into phonemes. The phoneme is the smallest unit that when changed can result in a change in meaning by signalling a different word, as shown by minimal pairs (see sidebar). Since a difference in one phoneme can signal a difference in the word in question, it makes sense if the pre-lexical analysis of the speech input has the job of identifying these word-differentiating units.
But why not let the word be the unit of pre-lexical analysis, since this is after all the unit that is to be recognised? There are a number of very good and practical reasons why this would be ill-advised. To start with, in spoken language it is very often extremely difficult to know where one word finishes and the next begins – the segmentation problem we saw in Chapter 7. This is reflected in 'slips of the ear', or misperceptions of speech, which often involve the misplacement of word boundaries, e.g. when 'a coke and a Danish' is misheard as 'a coconut Danish' (Bond, 1999). So if we are often unable to determine the boundaries between words in speech, then the task of identifying whole word-sized units in the spoken input before we try to find them in the dictionary will be problematic.

In addition, word-by-word analysis of the input also implies that a word will not be recognised until its entire speech pattern has been identified. As we will see later in this chapter, there is plenty of convincing evidence that we frequently recognise a word before the speech signal corresponding to that word is complete.
The advantages of analysing speech pre-lexically into something like phonemes are not only that we will be able to start the process of recognition on the basis of a shorter portion of speech than if the analysis unit were an entire word, but also that there will be fewer units that we need to recognise. This leads to more efficient and more rapid processing. If we take New Zealand English as an example, we find that there are 44 phoneme units (24 consonants and 20 vowels; see Table 8.1). This is a considerably smaller number of units for an initial analysis system to deal with than the tens of thousands of words that we might know.
A number of researchers have questioned the assumption that phonemes are significant units of pre-lexical analysis (Marslen-Wilson & Warren, 1994). Some argue that phonemes are linguistic constructs, and that if they exist at all as psychologically valid entities, then this is a consequence of learning to read and making connections between the sounds of spoken words and the shapes of written words. Interestingly, this implies that illiterate speakers, and speakers whose writing system does not represent phoneme-sized chunks of sound, will not use phonemes in their processing of speech. Languages like Chinese, for instance, have ideographic writing systems, where a character stands for a word and there is little representation of individual sounds in the writing system. Indeed, if speakers of such languages are asked to perform tasks that involve phoneme-sized chunks of speech (e.g. a phoneme monitoring task, in which subjects might be instructed to press a button as soon as they hear a particular speech sound, such as /p/), they perform far worse than literate speakers of alphabetic languages like English (Morais, Cary, Alegria & Bertelson, 1979). This suggests that awareness of phonemes and the ability to perform phoneme-related tasks is in part a result of literacy in an alphabetic language. Since speakers who are not literate in an alphabetic language are nevertheless capable of recognising and understanding spoken words, this demonstrates that phonemes are not a necessary part of spoken word recognition.
Smaller units of pre-lexical analysis
Alternatives to the phoneme as a pre-lexical unit of analysis include both smaller and larger units than the phoneme. Amongst the smaller units is the phonetic feature, such as the voicing feature that distinguishes /p/ and /b/ in English. There are many cues to this contrast in voicing, and it would be possible to break down phonetic features into smaller units of difference, such as the voice onset time (VOT) considered in Chapter 7 as one perceptual cue to voicing. For illustrative purposes, however, let us remain at the level of phonetic features, and consider whether it makes sense to propose a unit of this type as an appropriate unit of pre-lexical analysis.
The feature that is shared by /m, n, ŋ/ and makes them a class distinct from other sounds in English is nasality. That is, during the production of these sounds the soft palate or velum at the back of the mouth is lowered, so that the passageway between the mouth and nose is open and air can flow out through the nose. Unlike many other languages, English does not have contrasting oral and nasal vowels; it would not be possible, for instance, to create a new word in English that differed from an existing word only by having a nasal vowel, i.e. a vowel during which air can pass through the nose. But vowels in English can be nasalised, meaning that their essentially oral (non-nasal) property can be modified by allowing air to pass through the nose during part or even all of the duration of the vowel. This has a noticeable effect on the sound of the vowel. One context in which this is likely to happen in the speech of many native English speakers is when the vowel is followed by a nasal consonant. So in the word soon, the lowering of the velum might start during the vowel. Listeners are able to utilise this information to anticipate what the final consonant is. We know this from the results of gating experiments, which show that a word like soon becomes identifiable as this word and not as, say, soup, suit or sued, during the vowel portion of the word, i.e. before the final /n/ consonant itself has been heard (Warren & Marslen-Wilson, 1987, 1988). On the other hand, in languages that contrast oral and nasal vowels, such as Bengali and Hindi, the nasal quality of a vowel informs the listener about the identity of the vowel and not necessarily about the following consonant. This is confirmed in gating experiments in those languages (Lahiri & Marslen-Wilson, 1991; Ohala & Ohala, 1995).
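To make the gating result concrete, the following is a schematic sketch in Python of how a nasalisation cue detected during the vowel could prune the set of word candidates before the final consonant has been heard. The candidate set and the cue labels are invented for this illustration and are not taken from the experiments cited.

# Schematic illustration (not a model from the cited studies): a cue detected
# during the vowel - nasalisation spreading from an upcoming nasal consonant -
# is used to discard candidates whose final consonant is inconsistent with it.

CANDIDATES = {"soon": "nasal", "soup": "oral", "suit": "oral", "sued": "oral"}

def filter_by_vowel_cue(candidates, vowel_is_nasalised):
    """Keep only candidates whose final consonant matches the nasalisation
    (or lack of it) detected during the vowel."""
    wanted = "nasal" if vowel_is_nasalised else "oral"
    return [word for word, final_type in candidates.items() if final_type == wanted]

print(filter_by_vowel_cue(CANDIDATES, vowel_is_nasalised=True))   # ['soon']
print(filter_by_vowel_cue(CANDIDATES, vowel_is_nasalised=False))  # ['soup', 'suit', 'sued']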
The early use of phonetic featural information in word recognition suggests that pre-lexical analysis might be in terms of phonetic features and not phonemes, and there are in fact models of spoken word recognition that argue for lexical access from such featural cues or similar sub-phonemic units of analysis (Klatt, 1989). It is difficult, however, to entirely disprove the possibility that what listeners are doing pre-lexically is using such features in order to generate predictions about phonemes in the speech stream, and then using these phoneme hypotheses to access word-forms from the mental lexicon. Phonetic features, after all, cannot exist on their own, whereas phonemes do have pronounceable realisations in the form of individual speech sounds.
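One way to picture the second of these possibilities, in which features are first combined into phoneme hypotheses that then drive lexical access, is the minimal sketch below. The feature labels and the tiny mapping table are assumptions made purely for illustration, not a claim about any particular model.

# Minimal sketch: detected feature values are combined into a phoneme
# hypothesis, which could then be passed on to lexical access.  The feature
# inventory here is a toy assumption.

FEATURES_TO_PHONEME = {
    ("stop",  "bilabial", "voiceless"): "p",
    ("stop",  "bilabial", "voiced"):    "b",
    ("nasal", "bilabial", "voiced"):    "m",
}

def phoneme_hypothesis(manner, place, voicing):
    """Return the phoneme consistent with the detected features, if any."""
    return FEATURES_TO_PHONEME.get((manner, place, voicing))

print(phoneme_hypothesis("stop", "bilabial", "voiceless"))  # 'p'
print(phoneme_hypothesis("nasal", "bilabial", "voiced"))    # 'm'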
Larger units of pre-lexical analysis
Pre-lexical units of analysis that are larger than the phoneme have also been suggested, either instead of or as a supplement to phonemes. Such units include diphones and syllables, especially stressed syllables. Because diphones include the transition from one phoneme-sized segment to the next, they encapsulate the variation in pronunciation of speech segments that results from the influence of neighbouring sounds. As we have seen, the /u/ sound in soon is likely to show nasalisation because of the following nasal. Similarly, the end of this /u/ sound in soon, and also in suit, will have a different quality from the quality it has in soup, because in the latter case the next consonant involves a lip closure, while in soon and suit it involves a closure between the tongue and the alveolar ridge (the bony structure just behind the top teeth).
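As a rough illustration of what a diphone-based recoding of the input would look like, the sketch below splits a segment string into overlapping pairs, so that each unit carries the transition between two neighbouring sounds. The '#' boundary symbol and the use of letters in place of phonetic symbols are simplifications assumed here.

# Rough sketch of a diphone recoding: each unit spans the transition from one
# segment to the next, so context-dependent variation (e.g. the different ends
# of /u/ before /n/ as against /p/) is captured inside the unit itself.

def diphones(segments):
    """Return overlapping segment pairs, including word-edge boundaries ('#')."""
    padded = ["#"] + list(segments) + ["#"]
    return [padded[i] + padded[i + 1] for i in range(len(padded) - 1)]

print(diphones("sun"))   # ['#s', 'su', 'un', 'n#']
print(diphones("sup"))   # ['#s', 'su', 'up', 'p#']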
The structure of syllables varies from language to language. As illustrated in Chapter 4, an English syllable can be a single vowel, a vowel with a single preceding or following consonant (my, up), or a vowel with multiple preceding and/or following consonants (strengths). Māori, on the other hand, has a more limited choice of syllable types. The Māori syllable has to have a vowel, which can be preceded, but not followed, by no more than one consonant. This means that if the unit of pre-lexical analysis is the syllable, then speakers of different languages will be monitoring the input for differently sized and structured units.
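The difference in syllable shape can be pictured as templates over consonant/vowel skeletons, as in the rough sketch below. The two templates drastically simplify the phonotactics of both languages and are assumed purely for illustration.

# Rough C/V-skeleton templates (deliberately simplified): English allows
# consonant clusters before and after the vowel, whereas the Māori template
# allows at most a single onset consonant and no syllable-final consonant.

import re

ENGLISH_SYLLABLE = re.compile(r"^C{0,3}VC{0,4}$")   # e.g. 'strengths' is roughly CCCVCCC
MAORI_SYLLABLE   = re.compile(r"^C?V$")             # (C)V only

def is_syllable(skeleton, template):
    """skeleton: a string of 'C's and 'V's standing for a candidate syllable."""
    return bool(template.match(skeleton))

print(is_syllable("CCCVCCC", ENGLISH_SYLLABLE))  # True  ('strengths'-like)
print(is_syllable("CV",      MAORI_SYLLABLE))    # True
print(is_syllable("CVC",     MAORI_SYLLABLE))    # False (no syllable-final consonant)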
There are also cross-linguistic differences in the positioning of stress on the syllables of words. Some languages have fixed stress, with the position of the stressed syllable of a content word entirely predictable: in Czech it is the first syllable that is stressed; in Polish it is the last but one syllable in the word. Other languages have no word stress at all. English has word stress, but while the stress position is fixed for each word and can therefore be indicated in a dictionary (with a comparatively small amount of variation between varieties of English), it is not in the same position for all words. For example, English contrasts the noun import, with first-syllable stress, and the verb import, with second-syllable stress. Nevertheless, corpus analyses of English have shown that a clear majority of content words – some 90 per cent – have a strong, i.e. stressed, first syllable (Cutler & Carter, 1987). One approach to spoken word recognition exploits this pattern, and suggests that in English a word search is started each time a strong or stressed syllable is encountered. This approach is known as the Metrical Segmentation Strategy or MSS (Cutler & Norris, 1988).
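In outline, the strategy amounts to a very simple procedure: launch a fresh lexical search at the onset of every strong syllable, and launch none at weak syllables. The sketch below is a minimal Python illustration of this idea only; the toy lexicon, the syllabification and the stress marking are assumptions made for the example, and letters stand in for phonemes.

# Minimal sketch of the Metrical Segmentation Strategy (MSS) for English:
# a new lexical search starts at the onset of every strong (stressed) syllable.
# The lexicon, syllabification and stress marks below are toy assumptions.

TOY_LEXICON = {"car", "carpet", "pet", "shop"}

def mss_searches(syllables):
    """syllables: list of (syllable, is_strong) pairs describing the input.
    Returns, for each strong-syllable onset, the lexical candidates that
    match the input from that point onwards."""
    searches = {}
    for i, (_, is_strong) in enumerate(syllables):
        if not is_strong:
            continue                        # weak syllables trigger no search
        rest = "".join(syll for syll, _ in syllables[i:])
        searches[i] = [w for w in sorted(TOY_LEXICON) if rest.startswith(w)]
    return searches

# 'carpet shop': searches start at strong 'car' and 'shop', but not at weak
# 'pet', so 'pet' is never proposed as the start of a word.
print(mss_searches([("car", True), ("pet", False), ("shop", True)]))
# {0: ['car', 'carpet'], 2: ['shop']}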
It has been argued that the MSS applies to all languages, but that the type of unit used for segmentation depends on the metrical or rhythmic structure of the language in question. Researchers have looked for evidence for the application of the MSS in a range of languages of differing rhythmic types. These include Dutch, which has a similar rhythmic structure to English, with an alternation between stressed and unstressed syllables, but without the same distinction between full and reduced vowels (Quené & Koster, 1998; van Zon & de Gelder, 1993); French, which has relatively simple syllable structures and a rhythmic pattern that is based more on each syllable, i.e. it has syllable-based rather than stress-based timing (Cutler, Mehler, Norris & Segui, 1986; Mehler, Dommergues, Frauenfelder & Segui, 1981); Japanese, where the rhythmic unit is the mora, a unit of syllable weight such that a long syllable (e.g. a syllable with a long vowel) has two moras, so that Tokyo has the four moras to-o-kyo-o (Otake, Hatano, Cutler & Mehler, 1993); and Finnish, where both stress and vowel harmony have been argued to play a role in word segmentation (Suomi, McQueen & Cutler, 1997; Vroomen, Tuomainen & de Gelder, 1998). Studies have also considered the acquisition of the MSS by infants exposed to different languages (for a review see van Kampen, Parmaksiz, van de Vijver & Höhle, 2008).
It has also been suggested that the MSS applies only in conjunction with another principle, the Possible Word Constraint (PWC), which is also claimed to be found across different languages (Norris, McQueen, Cutler, Butterfield & Kearns, 2001). This constraint ensures that the speech input is exhaustively segmented into words without leaving any residual sounds. For instance, it is more difficult for English listeners to detect the real word 'see' when they hear the nonsense word /siːʃ/ than when they hear the nonsense word /siːʃʌb/, because the residue 'sh' in the first case is not a possible word of English, but the residue 'shub' in the second case is a possible word, though it happens not to exist as a current word of English (Norris, McQueen, Cutler & Butterfield, 1997). In the application of the MSS to English, pre-lexical analysis needs to determine which syllables are strong and therefore likely to trigger a new word search. Evidence for the MSS has been claimed in experimental studies. In one experiment (Cutler & Norris, 1988), participants were asked to listen to a series of nonword stimuli and to press a button as quickly as possible whenever they heard a real word within a nonword stimulus. Participants were slower in making such a response for the stimulus mintave /mɪnteɪv/ than for the stimulus mintesh /mɪntəʃ/. The difference between these stimuli is that the first has two strong syllables, with the second such syllable starting at the /t/, while the second stimulus has one strong syllable and one weak syllable. It is claimed that participants start a second lexical search when they encounter the second strong syllable in the first stimulus, i.e. at the /t/ in mintave, so that the stimulus is segmented as min-tave, making it more difficult to recognise the word contained in that stimulus than in mintesh, where this new lexical search is not initiated. Different results are found for this type of experiment in different languages, and the proponents of this model argue that this is because there are different language-specific instantiations of the MSS, depending on the rhythmic structure of the language.
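Returning to the /siːʃ/ versus /siːʃʌb/ example, a rough sketch of how the PWC could act as a filter on candidate segmentations is given below. Approximating 'possible word' as 'contains a vowel' and using ordinary spelling in place of phonemic transcription are simplifications assumed for this illustration.

# Rough sketch of the Possible Word Constraint (PWC): a detected word is
# disfavoured if it would leave a residue that could not itself be a word,
# approximated here as a residue containing no vowel letter.  Spellings stand
# in for phonemic transcriptions.

VOWELS = set("aeiou")

def residue_is_possible_word(residue):
    """A residue counts as a possible word only if it contains a vowel."""
    return any(ch in VOWELS for ch in residue)

def pwc_allows(utterance, word):
    """Can `word` be segmented out of `utterance` without stranding an
    impossible residue at either edge?"""
    start = utterance.find(word)
    if start == -1:
        return False                      # the word is not in the input at all
    left = utterance[:start]
    right = utterance[start + len(word):]
    return all(r == "" or residue_is_possible_word(r) for r in (left, right))

print(pwc_allows("seesh", "see"))     # False: residue 'sh' contains no vowel
print(pwc_allows("seeshub", "see"))   # True:  residue 'shub' is a possible word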
As we have seen, the size of the units of pre-lexical analysis ranges across models of spoken word recognition from sub-phonemic units like phonetic features, through phonemes, to entire syllables. It is worth remembering, though, that even if the lexical search is based on a larger unit such as the syllable, this does not rule out the importance of a smaller unit in pre-lexical analysis, since it is possible to hypothesise the identity of a larger unit on the basis of information deriving from smaller units. Nevertheless, since the output of pre-lexical analysis provides the entities that will be used in retrieving words from the mental lexicon, the size of the unit has implications for the granularity of the lexical search process. On the one hand, larger units imply greater delays before a lexical search can be initiated (e.g. if the recognition system has to wait until a syllable, rather than a phoneme, has been identified). On the other hand, larger pre-lexical units will result in a smaller set of items produced by the search: on average, fewer words will begin with a particular syllable than with a particular phoneme.
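The point about the size of the candidate set can be illustrated with a toy lexicon, as in the sketch below; the lexicon entries and their first-phoneme and first-syllable codings are invented for the example.

# Toy illustration: grouping a small lexicon by its first phoneme produces
# larger candidate sets than grouping it by its first syllable.  Entries are
# (word, first phoneme, first syllable), all invented for illustration.

from collections import Counter

TOY_LEXICON = [
    ("soon",   "s", "soon"),
    ("soup",   "s", "soup"),
    ("suit",   "s", "suit"),
    ("sued",   "s", "sued"),
    ("super",  "s", "su"),
    ("mint",   "m", "min"),
    ("minute", "m", "min"),
]

by_first_phoneme  = Counter(ph  for _, ph, _   in TOY_LEXICON)
by_first_syllable = Counter(syl for _, _,  syl in TOY_LEXICON)

# A search initiated on the phoneme /s/ returns five candidates; a search
# initiated on the syllable 'soon' returns just one.
print(by_first_phoneme)    # Counter({'s': 5, 'm': 2})
print(by_first_syllable)   # e.g. Counter({'min': 2, 'soon': 1, 'soup': 1, ...})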