The relationship of visual and spoken language
Author: Paul Warren
Source: Introducing Psycholinguistics
Part and page: P226
2025-11-13
The similarities and differences between the processing of visual and spoken language stimuli are mentioned at various points in this book, as are the relationships between the two modalities during processing. In specific instances, such as the dual route model of reading aloud (see Chapter 9), the interaction of visual and phonological representations for words is crucial: the issue there was how we get from a visual input to a spoken output.
Some questions, however, remain. Just how much overlap is there between the representations for visual and spoken language? To what extent are the processes in the two modalities identical, similar or different? To what extent can we distinguish 'central' processes that are common to different forms of language from more 'peripheral' processes that are modality-specific? Examples of modality-specific differences will obviously include those that relate to the different media involved. For example, we can backtrack during reading and re-assess words that might have led us astray, as indicated in the garden path studies reviewed in Chapter 10, but this is not so easy in listening.

Figure 13.3 expands the model sketched in Figure 13.2 by adding components for visual lexical processing. The left-hand side of the figure repeats the input and output routes for spoken words, while the right-hand side adds routes for dealing with visual word recognition and production. Note the links between the orthographic input components and the phonological and articulatory output components, reflecting the different routes for reading aloud that were discussed in connection with the dual route model in Chapter 9.
Consider now evidence from studies of patients that shows how it is possible for spoken and written responses to the same stimulus to be inconsistent. Such evidence supports the notion that there are different output lexicons for spoken and written forms of words, and not just different peripheral output components for each modality. For example, when one patient was asked to identify a picture of peppers, he wrote 'tomato' but said 'artichoke'. In addition, there are patients who can provide written names for objects but who are very poor at providing the spoken word for the same objects. Some can write down words for which they can give neither a definition nor a pronunciation. Other patients show severe word finding difficulties (anomia) when speaking, yet they are able to write down the words that they are looking for. Others can write perfectly well to dictation, even of irregularly spelled words, but without understanding what they are hearing. Some patients with word deafness (see above) find that if they write down a word they have heard but not understood, then they can read what they have written and understand the word from that. These patterns of behaviour suggest a range of connections between modules, such as those suggested in Figure 13.3.
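The logic of these patient dissociations can be illustrated with a small sketch that treats the architecture as a directed graph of processing components and asks whether activation can still flow from input to output once a link is cut. The component names and links below are illustrative assumptions based on the prose, not the book's exact diagram for Figure 13.3.

```python
# Illustrative sketch: a modular lexical architecture as a directed graph.
# Component names and routes are assumptions, not the book's exact labels.
ROUTES = {
    "auditory_input": ["phonological_input_lexicon"],
    "orthographic_input": ["orthographic_input_lexicon",
                           "letter_sound_conversion"],
    "phonological_input_lexicon": ["semantic_system"],
    "orthographic_input_lexicon": ["semantic_system"],
    "semantic_system": ["phonological_output_lexicon",
                        "orthographic_output_lexicon"],
    "letter_sound_conversion": ["articulation"],  # non-lexical reading-aloud route
    "phonological_output_lexicon": ["articulation"],
    "orthographic_output_lexicon": ["writing"],
}

def reachable(start, goal, routes):
    """Depth-first search: can activation flow from start to goal?"""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node == goal:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(routes.get(node, []))
    return False

def lesion(routes, src, dst):
    """Copy of the architecture with one link cut, modelling an impairment."""
    return {k: [v for v in vs if (k, v) != (src, dst)]
            for k, vs in routes.items()}

# Intact system: semantics (e.g. from a recognised picture) can drive
# both spoken and written naming.
assert reachable("semantic_system", "articulation", ROUTES)
assert reachable("semantic_system", "writing", ROUTES)

# Anomia-like dissociation: cutting the link from semantics to the
# phonological output lexicon leaves written naming intact while
# spoken naming fails.
impaired = lesion(ROUTES, "semantic_system", "phonological_output_lexicon")
assert not reachable("semantic_system", "articulation", impaired)
assert reachable("semantic_system", "writing", impaired)
```

The point of the sketch is only that separate output lexicons, each with its own route from the semantic system, predict exactly the kinds of one-sided breakdowns reported in the patient studies; a single shared output lexicon would not.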
The architecture shown in Figure 13.3 is complex, but even so is something of a simplification. It seems that as more data become available from patient studies, so more and more complex interactions are required between the components in the figure in order to account for the data. It is of course possible that patients have developed strategies and alternative pathways for dealing with the difficulties they face, and that the post-trauma architecture may not accurately reflect the pathways used in so-called 'normal' language processing. Figure 13.3 is a simplification also in that it omits some detail that is almost certainly required for a full account of lexical processing.