Susanne Carroll’s AIT: Part 3

In this third part of my exploration of Carroll’s Autonomous Induction Theory (AIT), I’ll look at “categorization” and feedback. In what follows I try to speak for Carroll, and I apologise for the awful liberties I’ve taken with her texts. All the quotes come from Carroll (2002), unless otherwise cited.

Categories

A theory of SLA must start from a theory of grammar. When we look at the grammars of natural languages, we note that they differ in their repertoires of categories: words are divided into different segments, and sentences comprise different classes of words and phrases. But how? As a basic example, a noun is not reducible to the symbol ‘N’: word classes consist of sound-meaning correspondences, so a noun is a union of phonetic features, phonological structure, morphosyntactic features, morphological structure, semantic features and conceptual structure. As Jackendoff says, words are correspondences connecting levels of representation.
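Jackendoff’s point can be made concrete with a toy sketch (my illustration, not Carroll’s or Jackendoff’s formalism; the feature names are invented): a word is a bundle of linked representations at autonomous levels, not a bare category symbol.

```python
# Toy illustration (not Carroll's or Jackendoff's formalism): a "noun"
# is not the bare symbol N but a correspondence linking representations
# at several autonomous levels of the language faculty.
from dataclasses import dataclass

@dataclass
class LexicalItem:
    phonology: str       # e.g. a phonemic transcription
    morphosyntax: dict   # e.g. category and agreement features
    semantics: dict      # e.g. conceptual features

dog = LexicalItem(
    phonology="/dɒg/",
    morphosyntax={"category": "N", "number": "sg"},
    semantics={"ANIMATE": True, "KIND": "canine"},
)

# The word-class label "noun" is just one feature at one level;
# the word itself is the whole correspondence.
assert dog.morphosyntax["category"] == "N"
```

Nothing hangs on the details; the sketch only shows that “N” is one feature inside one level of a richer sound–meaning correspondence.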

UG

UG provides the representational primitives of each autonomous level of representation, as well as the operations which the parsers can perform. In other words, UG severely constrains the ways that the categories at different levels of representation are unified and project into hierarchical structure.

I-learning

A theory of i-learning explains what happens when a parser fails, and a new constituent or a new procedure must be learned.

In the case of category i-learning, UG provides a basic repertoire of features in each autonomous representational system. Features combine to form complex units at a given level: phonetic features combine to form segments (a timing unit of speech), morphosyntactic features combine to form morphemes (the basic units of the morphosyntax), and primitive semantic features combine to form complex concepts like Agent, Male, Cause, Consequence, and so on.

But UG is not the whole story: the acquisition of basic units within an integrative processor will reflect various constraints on feature unification within the limits defined by “unification grammars”.

Some of these constraints will presumably also be attributable to UG. What these restrictions actually consist of, however, is an empirical question and our understanding of such issues has come, and will continue to come, largely from cross-linguistic and typological grammatical research.
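The core idea of feature unification can be pictured with a minimal sketch (my rendering of the general idea, not Carroll’s definition or that of any particular unification grammar): two feature bundles unify only if they assign no conflicting values, and the result pools their information.

```python
# Minimal sketch of feature unification (illustrative only; not
# Carroll's formalism or any particular unification grammar).
def unify(a: dict, b: dict):
    """Merge two feature bundles; fail (return None) on a value clash."""
    result = dict(a)
    for feature, value in b.items():
        if feature in result and result[feature] != value:
            return None  # conflicting values: unification fails
        result[feature] = value
    return result

# Compatible bundles pool their features:
assert unify({"category": "N"}, {"number": "sg"}) == {"category": "N", "number": "sg"}
# Conflicting bundles cannot be unified:
assert unify({"number": "sg"}, {"number": "pl"}) is None
```

The empirical question Carroll points to is precisely which combinations a given grammar licenses, i.e. which clashes and co-occurrence restrictions hold at each level.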

Having constructed representations, learners then have to identify them as instances of a category. So SLA consists of learning the categories and the correspondence rules which apply to a specific L2. UG provides some correspondence rules which, in first language acquisition, are used by infants to learn the language specific mappings needed for rapid processing of the particularities of the L1 phonology and morphosyntax. These are carried over into SLA, as are all L1 correspondence rules, which leads to transfer problems.

AIT is embedded in a theory of the functional architecture of the language faculty and linked to theories of parsing and production. Autonomous representational systems of language work with constrained processing modules in working memory. When parsing fails, acquisition mechanisms try to fix the problem. A correspondence failure can only be fixed by a change to a correspondence rule, and an integration problem can only be fixed by a change to an integration procedure.
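The level-specificity of repair can be pictured with a toy dispatch (my sketch, not an implementation of AIT): each type of parse failure is routed to the only kind of change that can fix it.

```python
# Toy sketch (mine, not an implementation of AIT): a parse failure can
# only be repaired by a change to a mechanism at its own level.
def repair(failure_type: str) -> str:
    if failure_type == "correspondence":
        return "revise a correspondence rule"
    if failure_type == "integration":
        return "revise an integration procedure"
    raise ValueError(f"unknown failure type: {failure_type}")

assert repair("correspondence") == "revise a correspondence rule"
assert repair("integration") == "revise an integration procedure"
```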

Very importantly, evidence for acquisition comes in the form of mental representations, not from the speech stream, except in the case of i-learning of acoustic patterns of the phonetics of the L2. Carroll explains:

In this respect, this theory differs radically from the Competition Model and from all theories which eschew structural representations in favour of direct mappings between the sound stream and conceptual representations. If correct, it means that simply noting the presence or absence of strings in the speech stream is not going to tell us directly what is in the input to the learning mechanisms.

Lexis

The place of lexis needs special mention. Following Jackendoff, lexical items have correspondence rules linking phonological, morphosyntactic and conceptual representations of words.

Since the contents of lexical entries are known to be a locus of transfer in SLA, they will constitute a major source of information for the learning mechanisms in dealing with stimuli in the L2.

Carroll says (2001, p. 84): “In SLA, the major ‘bootstrapping’ procedure may well be lexical transfer”. She says this in the context of arguing for the limited effects of UG on SLA, and I wish she’d said more.

Summary

So, a theory of SLA must start with a theory of linguistic knowledge, of mental grammars. Then, it has to explain how a mental grammar is restructured. After that, a theory of linguistic processing must explain how input gets into the system, thereby creating novel learning problems, and finally, a theory of learning must show how novel information can be created to resolve learning problems. I’ve covered all this, however badly, but more remains.

More

On page 31 of Input and Evidence we get a reformulation of Carroll’s research questions.

I have to say that I see little of relevance in the next 300 pages, but the last three chapters do have a shot at answering them. I don’t think she does a good job of it, but that’s for Part 4. If you’re already exhausted, think how I feel about the task of telling you about it.

We must return again to Carroll’s most central claims (IMHO): that ‘input’ and ‘intake’ are badly defined theoretical constructs which make a bad starting point for any theory of SLA, and that consequent talk of ‘L1 transfer’, ‘noticing’, ‘negotiation of meaning’ and ‘output’ is similarly unsatisfactory. The starting point should be stimuli from the environment, not linguistic input (whatever that is), and we must then explain how these stimuli get represented and successfully transformed into developing interlanguages. This demands not just a property theory to describe what is being developed, but a much better model of the learning mechanisms and the reasoning involved than is presently on offer.

A taster

Long’s Interaction Hypothesis states that the role of feedback is to draw the learner’s attention to mismatches between a stimulus and the learner’s output, and that learners can acquire a grammar on the basis of the “negotiation of meaning.” But what is meant by these terms? For Carroll, “input” means stimulus, and “output” means what the learner actually says, so the claim is that the learner can compare a representation of their speech to a representation of the speech signal. Why should this help the learner in learning properties of the morphosyntax or vocabulary, since the learner’s problems may be problems of incorrect phonological or morphosyntactic structure? To restructure the mental grammar on the basis of feedback, the learner must be able to construct a representation at the relevant level and compare their output, at the right level, to that.

It would appear then, … that the Interaction Hypothesis presupposes that the learner can compare representations of her speech and some previously heard utterance at the right level of analysis. But this strikes me as highly implausible cognitively speaking. Why should we suppose that learners store in longterm memory their analyses of stimuli at all levels of analysis? Why should we assume that they are storing in longterm memory all levels of analysis of their own speech?… Certainly nothing in current processing theory would lead us to suppose that humans do such things. On the contrary, all the evidence suggests that intermediate levels of the analysis of sentences are fleeting, and dependent on the demands of working memory, which is concerned only with constructing a representation of the sort required for the next level of processing up or down. Intermediate levels of analysis of sentences normally never become part of longterm memory. Therefore, it seems reasonable to suppose that the learner has no stored representations at the intermediate levels of analysis either of her own speech or of any stimulus heard before the current “parse moment.” Consequently, he cannot compare his output (at the right level of analysis) to the stimulus in any interesting sense… Moreover, given the limitations of working memory, negotiations in a conversation cannot literally help the learner to re-parse a given stimulus heard several moments previously. Why not? Because the original stimulus will no longer be in a learner’s working memory by the time the negotiations have occurred. It will have been replaced by the consequences of parsing the last utterance from the NS in the negotiation. I conclude that there is no reason to believe that the negotiation of meaning assists learners in computing an input-output comparison at the right level of representation for grammatical restructuring to occur (Carroll, 2001, p. 291).

Preposterous, right, Mike?

Fun will finally ensue when, in Part 5, I get together with a bunch of well-oiled chums in a video conference session to defend Carroll’s insistence on a property theory and a language faculty against the usage-based (UB) hordes. Neil McMillan (whose Lacan in Lockdown: reflections from a small patio is eagerly awaited by binocular-wielding graffiti fans in Barcelona); Kevin Gregg (train schedule permitting); and Mike (‘pass the bottle’) Long are among the many who probably won’t take part.

References

Carroll, S. (2001) Input and Evidence. Amsterdam: Benjamins.

Carroll, S. (2002) I-learning. EUROSLA Yearbook 2, 7–28.

Gregg, K. R. (1993) Taking explanation seriously; or, let a couple of flowers bloom. Applied Linguistics 14(3), 276–294.
