Modularity within language processing
An ongoing debate within the psychology of language concerns modularity within language processing, or the extent to which there are separate modules for different processing types, such as the various production and comprehension processes outlined in the preceding chapters of this book. For instance, do listeners or readers recognise words using a word recognition system that is separate from the structural analysis of the sentences that those words make up? Or does the selection of the sound structure of a word during production occur as a separate processing stage from the selection of the lemma for that word based on the speaker’s intended meaning? A related question concerns the extent to which processing operations at these different linguistic levels (e.g. involving sounds, words, sentences) act independently of one another. That is, while there may be separate components responsible for different types of language analysis, these components may in fact interact strongly with one another, so that recognition of words might be affected by the extent to which different words fit the developing interpretation of a sentence.
Interestingly, both the advocates of a modular approach to language processing and those of a non-modular or interactive approach cite efficiency in processing as a motivation for their respective positions. From the modular perspective, having specialised components for particular tasks means that these can get on with their jobs without distraction. From a non-modular viewpoint, it is argued that knowledge arising from one set of processes can improve the efficiency of another set of processes, by helping to eliminate at an early stage any analyses that might prove to be unnecessary or misleading.
There are a number of key characteristics to a strongly modular approach (Fodor, 1983). The first is that the language processing system is divided into modules that are informationally encapsulated. The processes within each module are effectively sealed off from other processes. So each module takes a certain type of representation as its input, and derives from that an output that feeds the next module in the system. For example, a phoneme-based word recognition module would take as its input strings of phonemes that are the output of a pre-lexical analysis module, and would pass on complete word-forms to a syntactic sentence structure module. There is a strict linearity to the processing system, so that for instance the module dealing with syntactic processing receives input at the word level (i.e. the output of the word recognition system), and does not itself receive input from the phonetic or orthographic level. Note that this is a particularly strong view of modularity. Evidence that the modules are not encapsulated in this way, i.e. that there is leakage between the modules, or interaction between them, would be compatible with a weaker notion of modularity. That is, one where there are modules with specialised responsibilities, but which are not informationally encapsulated in the manner envisaged above.
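The feed-forward arrangement just described can be pictured as a simple pipeline. The toy sketch below (purely illustrative, in Python; the module names and the word-boundary convention are invented for the example and are not part of any psycholinguistic model) shows each "module" receiving only the output of the one before it. An interactive system would differ in allowing, for example, the developing sentence interpretation to feed back into word recognition.

```python
# Toy sketch only: module names and the '#' word-boundary convention are
# invented for illustration; this is not a model from the chapter.

def prelexical_analysis(signal):
    """Map an input signal onto a string of 'phonemes' (here, just characters)."""
    return list(signal)

def recognise_words(phonemes, sentence_context=None):
    """Group phonemes into word forms.

    In a strictly modular (encapsulated) system, sentence_context is never
    supplied: the module sees only the phoneme string handed to it by the
    pre-lexical module. An interactive system would pass the developing
    sentence interpretation in here and let it influence word choice.
    """
    words = "".join(phonemes).split("#")
    return words

def parse_sentence(words):
    """Build a (trivially simple) structure from the recognised word forms."""
    return {"words": words, "clause": " ".join(words)}

# Strictly linear flow: each module consumes only the previous module's output.
phonemes = prelexical_analysis("the#dog#bit#the#cat")
words = recognise_words(phonemes)      # no feedback from parsing or semantics
structure = parse_sentence(words)
print(structure)
```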
A second characteristic of modular systems is the automaticity of the operations of each module. If the module receives some input, then it is required to process that input and to at least attempt to generate an appropriate output. This characteristic is reflected in the apparently obligatory nature of word recognition. The Stroop effect in word reading (Stroop, 1935) is a classic example demonstrating this. This effect occurs when participants are asked to name the colour in which a word is printed, and find it difficult to ignore what the word means. As a result, if they see the word GREEN printed in red, then they experience interference from the meaning of that word (i.e. ‘green’) when trying to name the colour red.
As well as being informationally encapsulated and automatic, it is argued that each module contains processes which operate without us ordinarily being consciously aware of them. In fact, many of the lower level or more peripheral processes appear to be unavailable to conscious inspection.
It is argued that these and other properties of the modules contribute to the speed and accuracy of the processing system. Indeed, it is claimed that the sheer speed and accuracy with which we process language are factors that speak most strongly in favour of a modular system, since coordinating interactions between different processing components would make the language system sluggish and open to error.
Modularity and syntax
The modularity of components at different levels of the processing system is frequently discussed in connection with the role of syntactic and other information sources during sentence parsing. As we saw in Chapters 10 and 11, the garden path model of sentence processing claims that the initial analysis of a sentence is based on syntactic information associated with the input word string. Although a semantic interpretation is also constructed, it is argued that this provides input to the sentence-building process only once the syntactic parser has done as much as it is able and encounters difficulties, as in garden path experiences. This separation of syntax and semantics means that it should be possible to develop a representation in one of these domains without developing one in the other. Some support for the independence of syntactic and semantic information comes from the study of brain-damaged patients. An early and very influential study (Caramazza & Zurif, 1976) tested different groups of aphasic patients in a sentence–picture matching task that included sentences like that in (13.1). Broca’s aphasics typically made errors in which they would choose a picture of a cat biting a dog for this sentence, i.e. a reversal error in which the superficial order of the noun phrases is misunderstood to indicate who did what to whom. Wernicke’s aphasics, on the other hand, were likely to make an error that involved selecting a picture with a representation of a different noun or verb (e.g. of a dog chasing a cat).

This pattern of processing difficulties suggests a separation of syntactic and semantic processes, since one can be affected without significant impairment of the other. This conclusion is supported by production data (see sidebar). Broca’s aphasics have output that is non-fluent and consists mainly of content words with very little grammatical structure, and which is often referred to as agrammatic. Wernicke’s aphasics, on the other hand, produce fluent and grammatically well-formed sentences, but the sentences are rather empty of meaning, because these aphasics have great difficulty in finding words. This difficulty is reflected also in problems that such aphasics have in matching object names with pictures of those objects.
Language production and comprehension data from Broca’s and Wernicke’s aphasics also show that content words and function words can be distinguished, in that the former are more likely to be affected by Wernicke’s aphasia. Neurophysiological studies support this separation of word types. In fact, the processing of function words shows patterns of brain activity indicating that they are more localised in the syntactic areas of the left hemisphere than content words, which show broader patterns of activity, also involving right-hemisphere areas (for a summary of some of this fascinating research see Pulvermüller, 2007).
Subsequent studies of aphasia explored in more detail the notion of a syntactic deficit in Broca’s aphasia. Consider for instance a task where sentences such as (13.2) and (13.3) have to be matched to their corresponding pictures.

Broca’s aphasics find it much easier to do this sentence–picture matching task with sentences like (13.2) than with sentences like (13.3) (Byng, 1988). In (13.2) the content words – butcher, weighs, meat – give only one plausible interpretation of the sentence, irrespective of grammatical constraints such as word order. This is because meat does not typically weigh butchers. In the case of (13.3), if – as a result of brain damage – syntactic information is less readily available to guide the interpretation of the sentence, then the content words could be in either of two relationships, since firemen can weigh policemen and policemen can weigh firemen (see the discussion of reversible sentences in Chapter 11). Therefore, without a syntactic understanding of (13.3), Broca’s patients can have difficulty knowing that it is incorrect to match the sentence to a picture of a policeman weighing a fireman. But even though their syntactic processing may be impaired, such patients can bring semantic information and their world knowledge to bear in their interpretation of (13.2) and come up with a single interpretation without requiring syntactic word-order information about the relationships of the words to one another.
As further studies were conducted, it became clear that the syntactic deficit account of Broca’s aphasia was too simplistic. For instance, large-scale studies of aphasia showed that both Broca’s and Wernicke’s aphasics exhibit a similar rank ordering of different syntactic structures in terms of their levels of difficulty. As a consequence, there are some severely impaired Wernicke’s patients who show a syntactic deficit in comprehension without showing agrammatic speech, undermining the notion that syntactic comprehension difficulties are linked to a syntactic deficit that affects all types of language processing. Similarly, patients were studied who showed agrammatic speech but without syntactic comprehension deficits. In addition, Broca’s patients were discovered who, although they performed poorly on sentence–picture matching tasks, scored highly in grammaticality judgement tasks. (For a summary of these studies see Martin, Vuong & Crowther, 2007.)
Despite the doubt that is cast by findings such as these on the independence of syntactic and semantic processing, there are individual case studies that show quite clear dissociation, such as patient JG (Ostrin & Tyler, 1995). This patient had agrammatic speech, showed typical Broca’s behaviour on sentences with the same characteristics as (13.2) and (13.3), performed poorly in online tasks that depended on intact syntactic processing, but performed well in a semantic priming task.
Other evidence from aphasics suggests that the observed patterns of behaviour derive perhaps not from a complete loss of syntactic processing capability, but from a weakening of syntactic processing, which leads to greater reliance on other sources of information during processing. This would mean that when these other sources of information are less reliable, the aphasics would fall back on whatever residual syntactic processing they have. This was demonstrated in one study that used a plausibility task with normal controls and aphasics with syntactic comprehension deficits (Saffran, Schwartz & Linebarger, 1998). The materials included sentences such as those in (13.4) and (13.5), both of which the control participants easily marked as implausible.

The interesting finding in the patient data is that while patients made many errors on sentences such as (13.4), i.e. responding that this sentence is plausible even though it is clearly not, they made few such errors with sentences like (13.5). In the case of (13.4), semantic knowledge indicates that only one of the NPs could plausibly be the agent of the verb eat (i.e. the mouse), and the patients relied on this knowledge in their interpretation of the sentence, responding incorrectly that it was plausible (i.e. interpreting it as the mouse ate the cheese). With (13.5), however, such semantic knowledge is not as constraining, since both cats and mice can carry. In this situation, it is argued, the patients were forced to make use of their residual syntactic understanding of the sentence, with the NP in subject position being interpreted as the agent, rendering the sentence implausible.
What such findings suggest is that during comprehension we call on a range of information sources, and that while these may be separately represented and differentially affected by different types of brain damage, they nevertheless interact during sentence processing. We saw in earlier chapters a range of evidence that seems to speak against a strictly modular view of language processing and in favour of interactive or constraint-based approaches to sentence processing. For instance, in Chapter 11 it was pointed out that sentence processing in reading experiments is affected by a number of non-syntactic factors, including lexical preferences. So, it matters whether a verb is more likely to be followed by a noun phrase object or by a clause object. This is shown by the preferred interpretations and relative likelihood of garden path effects with saw in (13.6) and doubted in (13.7). The animacy of a subject noun is also important – if the subject is animate then verbs with transitivity ambiguity are more likely to be treated as transitive and therefore as requiring a following object, as shown by a comparison of (13.8) and (13.9).

These findings do not deny the possible existence of dedicated modules for specific language processing tasks. In contrast with the more strictly modular approach outlined above, however, it is argued that the modules are permeable and capable of sharing information with one another during processing.
In similar fashion, it was pointed out in Chapter 8 that words can be recognised earlier in supporting contexts than in isolation, suggesting that there is interaction between aspects of sentence processing and aspects of word recognition. In Chapter 3, we saw that the speech errors known as blends are likely to involve words that have both phonological and semantic similarity. This again indicates the interaction of different information sources during language processing.
The discussion of modularity above has focused on whether or not there is interaction between levels of the processing system, such as between syntactic and semantic processing or between word recognition and sentence interpretation. Another issue is whether there are separate modules for different types of processing within the same level of linguistic organisation. This issue is perhaps nowhere more acute than in the context of lexical processing, which provides the focus of the next two sections. In particular, the possibility has been raised that there is not just one mental lexicon, but two – one for production and one for comprehension. Or perhaps four, with a production lexicon and a comprehension lexicon for each of spoken and written language processing. It is highly unlikely that these lexicons would be entirely separate from one another, and so a crucial issue is the point at which lexical operations might become distinct from one another in different processing domains.