Date: 2023-12-25
Author: David Hornsby
Book or Source: Linguistics: A Complete Introduction
Page and Part: 167-8



The head parameter

A good example of a parameter within Principles and Parameters Theory is the head parameter, which determines the position of heads in a phrase. In a head-initial language like English, the head noun (N) of an NP comes before its complements:

leader of the gang

cards on the table

 

This is not just true for NPs: the head preposition of a PP also precedes its complements (in the bag, under the bridge), and verbs precede object complements in a VP (read a book, answered the question). The ‘head-first’ setting for this parameter therefore captures a number of independent facts about the syntax of English and of other languages with the same setting.

 

In Japanese, which is a head-final language, exactly the reverse pattern applies (data from Cook & Newson 2007: 44). It has postpositions, not prepositions, as heads of PPs:

kabe ni ‘on the wall’

wall on

Similarly, verbs come after their complements:

Nihonjin desu ‘I am Japanese’

Japanese am

E wa kabe ni kakatte imasu ‘The picture is hanging on the wall’.

picture wall on is hanging
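The head parameter can be thought of as a single ordering switch that a grammar sets once and then applies across phrase types. The following minimal sketch in Python is illustrative only (the phrase-builder function and word lists are our own assumptions, not part of Hornsby's text): one setting of the switch yields the English orders above, the opposite setting the Japanese ones.

```python
# A minimal sketch of the head parameter as a single ordering switch.
# The function and examples are illustrative assumptions, not a real grammar.

def build_phrase(head, complement, head_initial=True):
    """Order a head and its complement according to the head-parameter setting."""
    return f"{head} {complement}" if head_initial else f"{complement} {head}"

# English: head-initial, so heads precede their complements.
print(build_phrase("leader", "of the gang"))                 # leader of the gang   (NP)
print(build_phrase("in", "the bag"))                         # in the bag           (PP)
print(build_phrase("read", "a book"))                        # read a book          (VP)

# Japanese: head-final, so heads follow their complements.
print(build_phrase("ni", "kabe", head_initial=False))        # kabe ni        ('on the wall')
print(build_phrase("desu", "Nihonjin", head_initial=False))  # Nihonjin desu  ('I am Japanese')
```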

 

In the 1980s, principles and parameters were incorporated into the modules of government and binding (GB) theory, which set out universal structural conditions and placed constraints on the one remaining transformation, ‘move-α’ (move alpha), which essentially meant ‘move anything anywhere’. One of the GB modules, bounding theory, ruled out, for example, movement of elements out of certain constructions called islands. The minimalist programme aimed for still greater economy in the apparatus of generative theory by removing D- and S-structure altogether and allowing only very general constraints to interact with the abstract feature specifications of lexical items.

 

Chomsky’s generative paradigm has dominated theoretical linguistics and set the agenda for the subject for nearly six decades. But although it has been constantly updated and refined, it has never been uncontroversial, as we shall see later.

 

Chomsky has never been without his dissenters. Some, like Charles Hockett, whose 1968 critique The State of the Art marked a break with generativism, were initially sympathetic to Chomsky’s approach and goals. For others, such as Geoffrey Sampson and Larry Trask, Chomsky’s own break with the Descriptivists simply represented a wrong turn from which the subject has never recovered. While we cannot do justice to all the controversies here, we will highlight some areas in which generativism has faced persistent criticism, and some of the responses its supporters have offered.

 

A recurrent strain of criticism focuses, unsurprisingly, on Chomsky’s innateness hypothesis and the question of whether language learning is qualitatively different from other kinds of cognitive development. In particular, Chomsky has faced the accusation that his persuasive critique of behaviorism does not amount to evidence in favor of his own theory of a universal grammar. Generativists would counter that language acquisition is difficult to explain without some innate mental blueprint. It proceeds rapidly and at an early stage in development, irrespective of the child’s cognitive abilities in other areas. And while children do make errors, these are generally of an ‘intelligent’ kind, involving overgeneralization of rules which they have deduced for themselves – for example, those of plural and past tense formation as shown here:

I saw some sheeps on the hill.

Mummy readed my book.
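These ‘intelligent’ errors follow naturally if the child has internalized the regular rules but not yet the irregular exceptions. As a rough illustration, here is a minimal sketch assuming only the default ‘add -s’ and ‘add -ed’ rules (the function names are hypothetical, not a model of acquisition): applying the regular rules blindly reproduces exactly the forms above.

```python
# A minimal sketch: applying only the regular English rules, with no
# knowledge of irregular forms, reproduces the children's errors above.
# The rule functions are illustrative assumptions.

def regular_plural(noun):
    """Default plural rule: add -s."""
    return noun + "s"

def regular_past(verb):
    """Default past-tense rule: add -ed."""
    return verb + "ed"

print(regular_plural("sheep"))  # sheeps  (adult target: sheep)
print(regular_past("read"))     # readed  (adult target: read)
```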

 

Equally important are the kinds of mistake that children appear not to make. Imagine, for example, a robot attempting to make sense of pronoun use in English. It might notice that in a sentence like ‘Paul goes to London on Wednesdays’, the first word ‘Paul’ can be replaced by ‘he’. It might also learn that ‘he’ refers to male animates and ‘she’ to female ones. Applying a normal ‘trial and error’ approach to understanding the functioning of these two pronouns, it might then draw the obvious conclusion that the first word in a sentence can be replaced by a pronoun, a strategy which works well with proper names like John, Mary, David and so on in sentence frames like the one just quoted. But what if the subject is a noun phrase, as here?

1 The man goes to London on Wednesdays.

2 Our teacher goes to London on Wednesdays.

3 The tall man with a long beard and an umbrella goes to London on Wednesdays.

 

A similar approach would lead the robot, perfectly logically, to produce the following, ungrammatical sentences:

1 *He man goes to London on Wednesdays.

2 *He teacher goes to London on Wednesdays.

3 *He tall man with a long beard and an umbrella goes to London on Wednesdays.

 

Human children do not, however, behave like robots. They do not seem to make the ‘trial and error’ mistakes one might expect, quickly deducing instead that the whole subject noun phrase in each case can be replaced by ‘he’. This suggests an early grasp of the complex notion of structure dependency, which for Chomsky is explicable only in terms of an innate understanding of how natural languages are organized. Evidence for the innateness hypothesis was provided by a famous experiment in which Neil Smith, Ianthi-Maria Tsimpli and Jamal Ouhalla (1993) worked with Christopher, a man whose development had been delayed with respect to normal cognitive abilities such as learning to walk, but who had shown a remarkable aptitude for language acquisition. The researchers presented him with unfamiliar natural languages, which he learned without difficulty. But an invented language, Epun, which displayed structure-independent operations not found in natural languages, proved beyond his capabilities.
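To make the contrast concrete, here is a minimal sketch (the function names are hypothetical, and the subject noun phrase is supplied by hand rather than parsed, since the point is the substitution strategy, not parsing): the linear, structure-blind strategy reproduces the starred sentences above, while the structure-dependent strategy replaces the whole subject noun phrase.

```python
# A minimal sketch contrasting the robot's linear strategy with a
# structure-dependent one. Function names are illustrative assumptions;
# the subject NP is supplied by hand rather than parsed.

def linear_substitution(sentence):
    """Replace only the first word with 'He', ignoring phrase structure."""
    words = sentence.split()
    return " ".join(["He"] + words[1:])

def structure_dependent_substitution(sentence, subject_np):
    """Replace the entire subject noun phrase with 'He'."""
    return sentence.replace(subject_np, "He", 1)

sentence = "The tall man with a long beard and an umbrella goes to London on Wednesdays."
subject = "The tall man with a long beard and an umbrella"

print(linear_substitution(sentence))
# *He tall man with a long beard and an umbrella goes to London on Wednesdays.

print(structure_dependent_substitution(sentence, subject))
# He goes to London on Wednesdays.
```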

 

Chomsky has also been criticized for ignoring semantics because it does not lend itself to the formalization his theory requires. Commentators have notably challenged the assumption that native speakers can judge grammaticality without reference to meaning. Chomsky’s claim, for example, that ‘Furiously sleep ideas green colorless’ is not accepted by English speakers seems to rest, as Moore and Carling (1982: 81) point out, on the assumption that strings of the kind

adv    V    N    adj    adj

are ill-formed. However, a structurally identical sequence such as ‘Always dye shirts greenish blue’ is likely to be accepted, suggesting that acceptability is not judged solely on the basis of grammar. Chomsky’s assumption that native speaker intuitions are based on competence has also been challenged. As Palmer (1971: 159) has pointed out, some speakers reject sentences like the following:

He will have been being beaten.

 

It is not clear, however, on what basis this is rejected: competence or performance? In other words, are informants rejecting the combination of future marker will with perfect, progressive and passive on the grounds that the sentence is ungrammatical with respect to their internalized rule system (competence), or merely because the resulting sentence is complex and difficult to process (performance)? The basis for native speaker intuitions is certainly not as self-evident as Chomsky’s model suggests.

 

The regularity with which Chomsky has proposed and then abandoned generative frameworks has frustrated many, as have his sometimes opaque style and shifting terminology. It is certainly true that Chomsky’s own work is not always an easy read, but his ideas do have powerful and articulate champions such as Steven Pinker, who bring them persuasively to a wider audience, and a number of good, accessible introductions to Chomsky’s work are available. Nor is it necessarily a fair criticism that the model has changed so often: it is reasonable to expect any scientific endeavour to refine its assumptions in the light of new discoveries. Nonetheless, objections that the generative programme has become lost in its own obscure formalisms cannot lightly be dismissed.

 

As we saw above, the 1980s saw a decisive shift away from rules, and in favor of principles and parameters with greater explanatory power. By the early 2000s, however, as Newmeyer argued in an important article in 2004, the number of postulated ‘language parameters’ had mushroomed, and many were little more than ‘rules’ in disguise. Even the head-directionality parameter to which we alluded above turned out to be more problematical than first thought, as many languages are far from consistently ‘head-initial’ or ‘head-final’. Newmeyer concluded that parameters – an essential part of the generative framework for two decades – were in fact an unnecessary and unilluminating construct.

 

Critics have long argued that Chomsky was too quick to move from the empirical to the deductive phase of enquiry, i.e. that speculative theoretical edifices were built on knowledge of only a few languages and that the staggering diversity of human language was ignored or dismissed as unimportant.

 

Syntactic Structures in particular, which refers only to English, has been compared unfavorably to Bloomfield’s Language, which draws on a vast range of natural languages for exemplification. As typologists have consistently identified exceptions to putative linguistic universals, an obsession with formal models is seen to have diverted attention from the real business of linguistics, namely the study of languages.

A widely quoted 2009 article by Evans and Levinson attempted to refocus linguistic inquiry on the diversity, rather than the supposed universality, of language structure, arguing that there are ‘vanishingly few’ linguistic universals in the sense of features shared by all languages, and that those that can be found are not particularly illuminating. Alternatives to universal grammar have also been advanced as explanations for Evans and Levinson’s ‘vanishingly few’ universals, among them the concept of convergent evolution, i.e. a common adaptation to similar conditions in unrelated languages, comparable to the independent development of flight in insects, bats and birds, which secured an evolutionary advantage for all three groups.

 

The criticism that generativism ignores linguistic diversity (or, worse, is exclusively anglocentric), however, is no longer a fair one: its proponents draw increasingly on a wide range of languages, of vastly different genetic make-up. In fact, as the world’s languages die at an alarming rate, the need to study and document linguistic diversity is taking on a new urgency, keenly felt by generativists and non-generativists alike. This opens up new and fascinating questions for research: why, for example, does linguistic diversity appear to mirror biodiversity, with more languages spoken around the equator than in more temperate regions? (Papua New Guinea alone is home to some one in seven of the world’s languages.) Why do some languages have highly inflected grammars while others have apparently simpler systems and, indeed, is linguistic complexity in one area of the grammar always balanced by simplicity in another, as has traditionally been assumed (the equi-complexity hypothesis)?