This is the second part of my exploration of Susanne Carroll’s theory of SLA. Carroll’s work is important, IMHO, because it questions many of the constructs used by SLA theorists, including ‘comprehensible input’, ‘processing’, ‘i +1’, ‘noticing’, ‘noticing the gap’, ‘L1 transfer’, ‘chunk learning’, and many others. By examining Carroll’s work, I think we can throw light on all these constructs and come to a better understanding of how people learn an L2.
In Part One, I looked at Carroll’s adoption of Jackendoff’s Representational Modularity (RM) theory: a theory of the modular mind in which each module contains levels of representation organised in chains running from the lowest to the highest. The “lowest” representations are stimuli and the “highest” are conceptual structures. This leads to the hypothesis of levels.
Selinker, Kim and Bandi-Rao (2004, p. 82) summarise RM thus:
The language faculty consists of auditory input, motor output to vocal tract, phonetic, phonological, syntactic components and conceptual structure, and correspondence rules, various processors linking/regulating one autonomous representational type to another. These processors, domain specific modules, all function automatically and unconsciously, with the levels of modularity forming a structural hierarchy representationally mediated in both top-down and bottom-up trajectories.
And Carroll (2002) says:
What is unique to Jackendoff’s model is that it makes explicit that the processors which link the levels of grammatical representation are also a set of modular processors which map representations of one level onto a representation at another level. These processors basically consist of rules with an ‘X is equivalent to Y’ type format. There is a set of processors for mapping ‘upwards’ and a distinct set of processors for mapping ‘downwards’.
Bottom-up correspondence processors
a. Transduction of sound wave into acoustic information.
b. Mapping of available acoustic information into phonological format.
c. Mapping of available phonological structure into morphosyntactic format.
d. Mapping of available syntactic structure into conceptual format.
Top-down correspondence processors
a. Mapping of available syntactic structure into phonological format.
b. Mapping of available conceptual structure into morphosyntactic format.
Integrative processors
a. Integration of newly available phonological information into unified phonological structure.
b. Integration of newly available morphosyntactic information into unified morphosyntactic structure.
c. Integration of newly available conceptual information into unified conceptual structure.
(Jackendoff 1987, p. 102, cited in Carroll, 2002, p. 16).
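Read purely as a processing architecture, Jackendoff’s bottom-up chain can be caricatured as a pipeline of mapping functions, each taking a representation at one level and handing a representation to the next. The sketch below is my own toy illustration, not Jackendoff’s or Carroll’s machinery; the function names and the string “representations” are invented for exposition:

```python
# Illustrative only: Jackendoff's bottom-up correspondence processors
# caricatured as a pipeline of level-to-level mapping functions.
# The names and toy "representations" are invented for exposition.

def transduce(sound_wave):
    # (a) sound wave -> acoustic information
    return f"acoustic({sound_wave})"

def to_phonological(acoustic):
    # (b) acoustic information -> phonological format
    return f"phonological({acoustic})"

def to_morphosyntactic(phonological):
    # (c) phonological structure -> morphosyntactic format
    return f"morphosyntactic({phonological})"

def to_conceptual(morphosyntactic):
    # (d) syntactic structure -> conceptual format
    return f"conceptual({morphosyntactic})"

def bottom_up(sound_wave):
    # Each processor is autonomous: it sees only the output of the
    # level below it, never the raw stimulus several levels down.
    rep = sound_wave
    for processor in (transduce, to_phonological,
                      to_morphosyntactic, to_conceptual):
        rep = processor(rep)
    return rep
```

The point of the caricature is the autonomy: no processor peeks past its neighbouring level, which is exactly why, on Carroll’s account, raw “input” from the environment never reaches the grammar directly.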
The second main component of Carroll’s AIT is induction. Induction is a form of reasoning which involves going from the particular to the general. The famous example (given in Philosophy for Idiots, Thribb, 17, cited in Dellar, passim) is of swans. You define a swan and then search lakes to see what colour particular examples are. All the swans you see in the first lake are white, and so are those in the second lake. Everywhere you look, they’re all white, so you conclude that “All swans are white”. That’s induction. Hume (see Neil McMillan (unpublished) The influence of Famous Scottish Drunkards on Lacard’s psychosis; a bipolar review) famously showed that induction is illogical – no inference from the particular to the general is justified. No matter how many white swans you observe, you’ll never know that they’re ALL white, that there isn’t a non-white swan lurking somewhere, so far unobserved. Likewise, you can’t logically induce that because the sun has so far always risen in the East it will rise in the East tomorrow. Popper “solved” this conundrum by saying that we’ll never know the truth about any general theory or generalisation, so we just have to accept theories “tentatively”, testing them in attempts not to prove them (impossible) but, rather, to falsify them. If they withstand these tests, we accept the theory, tentatively, as “true”.
The assumption of all SLA “cognitive processing” transition theories is that the development of interlanguages depends on the gradual reformulation of the learner’s mental conceptualisations of the L2 grammar. These reformulations can be seen as following the path suggested by Popper to get to reliable knowledge:
P1 -> TT¹ -> EE -> P2 -> TT², etc.
P = problem
TT = tentative theory
EE = testing for empirical evidence which conflicts with TT
You start with a problem and you leap to a tentative theory (TT) and then you test it, trying to falsify it with empirical evidence. If you find such contradictory evidence, you have a problem, and you re-formulate the theory (TT²) which tries to deal with the problem, and you then test again, and round we go again, slowly improving the theory. Popper is talking about hypothesis testing and theory construction in the hard sciences (particularly physics), and while it’s a long way from describing what scientists actually do, it’s even further away from describing what L2 learners do in developing interlanguages. Nevertheless, it’s common to hear people describing SLA as hypothesis formation and hypothesis testing.
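For what it’s worth, Popper’s schema can be run mechanically. The toy sketch below is my own (with a deliberately silly swan example, and a “reformulation” step that is a dumb placeholder for the genuinely creative leap to a new conjecture): hold a tentative theory, test it against incoming evidence, and reformulate whenever a counter-example turns up.

```python
# Toy sketch of Popper's P1 -> TT -> EE -> P2 cycle, using the swan
# example. Entirely illustrative: the "reformulate" step here is a
# crude placeholder for the creative leap to a new theory.

def conjectures_and_refutations(theory, evidence_stream, reformulate):
    history = [theory]
    for observation in evidence_stream:
        if not theory(observation):           # EE: a refuting instance
            theory = reformulate(observation)  # P2 -> TT2: new conjecture
            history.append(theory)
    return history  # every surviving theory remains tentative

# TT1: "All swans are white."
all_white = lambda swan: swan == "white"

# A crude reformulation rule: widen the theory to admit the counter-example.
def widen(counter_example):
    allowed = {"white", counter_example}
    return lambda swan: swan in allowed

history = conjectures_and_refutations(
    all_white, ["white", "white", "black", "white"], widen)
```

Note what the sketch leaves out, which is exactly Carroll’s point: nothing here tells you where the new conjecture comes from, or why one reformulation is chosen over another.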
We could, I suppose, see the TT¹ as the learner’s initial interlanguage theory. Then, at any given point in its trajectory, the theory gets challenged by evidence that doesn’t fit (perhaps went when goed is expected, for example) and the problem is resolved by a new, more sophisticated theory, the TT². But it doesn’t work – interlanguage development is not a matter of hypothesis formation and testing in Popper’s sense, and I agree with Carroll that it’s “a misleading metaphor”. In her view, SLA is a process of “learning new categories of the grammar, new structural arrangements in on-line parses and hence new parsing procedures and new productive schemata” (Carroll, 2001, p. 32). Still, Hume’s problem of underdetermination remains – the inductions that learners are said to make aren’t strictly logical. (“Just saying” (McMillan, ibid)).
So anyway, Carroll wants to see SLA development (partly) as a process of induction. The most respectable theory of induction is inference to the best explanation, also known as abduction, and I think Lipton (1991) provides the best account, although Gregg (1993) does a pretty good job of it in a couple of pages (adeptly including a concise account of Hempel’s D-N model, by the way). Carroll, however, ducks the issues and follows Holland et al. (1986), who define induction as a set of procedures which lead to the creation and/or refinement of the constructs which form mental models (MMs). Mental models are “temporary and changing complex conceptual representations of specific situations”. Carroll gives the example of a Canadian’s MM of a breakfast event, versus, say, the very different one of a Japanese MM breakfast event. MMs are domains of knowledge, schemata, if you like, and Carroll makes lots of use of MMs which I’m going to skip over. She then goes into considerable detail about categorising MMs, and then proceeds to “Condition-action rules” which govern induction. These are competition rules which share ideas from abduction inasmuch as they say “When confronted with competing solutions to a problem, choose the most likely, the best ‘fit’ ”.
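Carroll doesn’t formalise the competition, but the idea of condition-action rules competing on “best fit” has an obvious computational shape: collect the rules whose conditions match the current situation and fire the one with the highest fit. The sketch below is my own gloss, not Holland et al.’s or Carroll’s machinery; the rules and the fit scores are invented:

```python
# Illustrative condition-action rule competition: of the rules whose
# conditions match the situation, the one with the best "fit" wins.
# Rules and scores are invented; Carroll gives no such numbers.

def best_fit(rules, situation):
    candidates = [r for r in rules if r["condition"](situation)]
    if not candidates:
        return None  # no rule matches: one trigger for new learning
    return max(candidates, key=lambda r: r["fit"])["action"]

rules = [
    {"condition": lambda s: s.endswith("ed"),
     "fit": 0.6, "action": "parse as regular past"},
    {"condition": lambda s: s == "went",
     "fit": 0.9, "action": "parse as irregular past of 'go'"},
]
```

The “no rule matches” branch is the interesting one for Carroll: it is precisely the failure of every available rule to fit that sets learning in motion.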
Carroll (2001, p. 170) finally (sic) defines induction as a process
leading to revision of representations so that they are consistent with information currently represented in working memory. Its defining property is that it is rooted in stimuli made available to the organism through the perceptual system, coupled with input from Long Term Memory and current computations. … the results of i-learning depend upon the contents of symbolic representations.
Carroll’s theory of learning rests on i-learning (as opposed to ‘I-language’ in Chomsky’s sense, which has very little to do with it, and one can only wish she’d chosen some other term, rather like Long’s unhappy choice of “Focus on FormS”). I-learning depends on the contents of symbolic representations being computed by the learning system.
At the level of phonetic learning, i-learning will depend on the content of phonetic representations, to be defined in terms of acoustic properties. At the level of phonological representation, i-learning will depend on the content of phonological representations, to be defined in terms of prosodic categories, and featural specification of segments. At the level of morphosyntactic learning, i-learning will depend upon the content of morphosyntactic representations. And so on.
So, it seems, i-learning goes on autonomously within all the parts of Jackendoff’s theory of modularity, not just in the conceptual representational system. (I take it that this is where Carroll’s ‘competition’ comes in – analysing a novel form involves competition among various information sources from different levels.) Anyway, the key point is that i-learning is triggered by the failure of current representations to “fit” current models in conjunction with specific environmental stimuli.
I usually don’t comment on my choice of images, but the above image shows Goethe on his death bed. His wonderful dying words were, according to his doctor, Carl Vogel, “Mehr Licht!” And I can’t help sharing this anecdote. In my first seminar, in my first term of my first year at LSE, I read a paper presided over by Imre Lakatos, one of the finest scholars I’ve ever met, and later a friend who committed perjury in court to help me avoid being found guilty of a criminal charge. The paper was about German developments in science, and I mentioned Goethe, whose name I pronounced ‘Go eth’. Lakatos was drinking a coffee at the moment when I said “Go eth” and reacted very violently. He spat the coffee out, all over the alarmed students sitting round the table in his study, jumped to his feet, and shouted hysterically: “I fail to understand how anybody who’s been accepted into this university can so hopelessly mispronounce the name of Germany’s most famous poet!”
I use Goethe’s dying words here to refer to Carroll’s 2002 paper, which really does throw more light on her difficult-to-follow 2001 work.
In her (2002) account of i-learning, Carroll argues that researching the nature of induction in language acquisition requires the notion of a UG, which describes the properties of grammatical knowledge shared by all human languages. The psycholinguistic processes which result in this knowledge are constrained by UG – which, she insists, doesn’t mean that “UG is thereby operating on-line in any fashion or indeed is stored anywhere to be consulted, as one might infer from much generative SLA research” (Carroll, 2002, p. 11).
Carroll goes on to say that a speaker’s I-language consists of a particular combination of universal and acquired contents, so that a theory of SLA must explain not only what is universal in our mental grammars, but also what is different both among speakers of the same E-language and among the various E-languages of the world.
In order to have a term to cover a UG-compatible theory of acquisition, as well as to make an explicit connection to I-language, I suggest we call such a theory of acquisition a theory of i(nductive)-learning, specifically the Autonomous Induction Theory (Carroll, 2002, p. 12).
In other words, while Chomsky is concerned with explaining I-language, Carroll is concerned with explaining the much wider construct of i-learning; she wants to integrate a theory of linguistic competence with theories of performance. So, it goes like this:
The perception of speech, the recognition of words, and the parsing of sentences in the L2 require the application of unit detection and structure-building procedures. When those procedures are in place, speech processing is performed satisfactorily. But when the procedures are not available (e.g., to the beginning L2 learner), speech processing will fail, forcing the learner to fall back on inferences from the context, stored knowledge, etc. But, of course, beginners have very few such resources to draw on, and so interpretation of the stimulus will fail, which is when i-learning mechanisms will be activated.
When speech detection, word recognition, or sentence parsing fail, … only the i-learning mechanisms can fix the problem. They go into action automatically and unconsciously (Carroll, 2002, p. 13).
To start with then, the learner hears the speech stream as little more than noise. Comprehension depends on their learning the right cues to syllable edges and the segments which comprise the syllables. Only once these cues to the identification of phonological units have been learned can word learning begin. After that, form-extraction processes which map some unit or other of the phonology onto a morphosyntactic word will allow the learner to hear a form in the speech stream, but still without necessarily knowing what it means. After that, when learners can identify words and know what they mean, they might still lack the parsing procedures needed to use morphosyntactic cues to arrive at the correct sentence structure and hence arrive at the correct sentence meaning. Either they fail to arrive at any interpretation, or they arrive at the wrong one – their semantic representation isn’t the same as what was intended by the speaker. Finally, their i-learning allows them to get the right meaning – the parsers can now do their job satisfactorily.
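The cascade Carroll describes – established parsing procedures first, fallback inference from context and stored knowledge second, i-learning as the automatic last resort when both fail – can be caricatured as follows. Every name and data structure below is my invention for exposition, not anything in Carroll:

```python
# Caricature of Carroll's processing cascade: use an existing parsing
# procedure if one applies; otherwise fall back on inference from
# context; if that fails too, i-learning fires "automatically and
# unconsciously", revising the stock of procedures. All invented.

def process(stimulus, procedures, context_knowledge):
    if stimulus in procedures:           # established parsing routine
        return ("parsed", procedures[stimulus])
    if stimulus in context_knowledge:    # inference from context, etc.
        return ("guessed", context_knowledge[stimulus])
    # Parsing and inference both failed: i-learning is triggered,
    # and the learner ends up with a new parsing procedure.
    procedures[stimulus] = "new-parse(" + stimulus + ")"
    return ("i-learned", procedures[stimulus])

procedures = {"dog": "noun-parse"}          # what the learner can parse
context = {"cat": "guess-from-picture"}     # what context can rescue
```

The crucial feature, which the sketch preserves, is that i-learning changes the procedures themselves: the next time the same stimulus arrives, it is simply parsed.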
Recall what was said in Part 1: Krashen got it backwards! This is the real thrust of Carroll’s argument: input must be seen not as linguistic stuff coming straight from the environment, but rather as stuff that results from processes going on in the mind which call on innate knowledge. Furthermore: YOU CAN’T NOTICE GRAMMAR!
So there you have it. Except that, really, that’s nowhere near “it”. Carroll admits that her theory doesn’t explain what the acquisition mechanism does when parsing breaks down. She asks:
“How does the mechanism move beyond that point of parse, and what are the constraints on the solution the learner alights on? Why do the acquisition mechanisms which attempt to restructure the existing parsing procedures and correspondence rules to deal with a current parse problem often fail?”
The answers partly lie in Carroll’s investigation of “Categories and categorization” and partly in the roles of feedback and correction. In an early reformulation of her research questions in Input and Evidence, Carroll emphasises the importance of feedback and correction to her work, which points to her important contributions to examining the empirical evidence found in the SLA literature, and also highlights some of the ways in which this evidence has been (mis)used. All this will be discussed in Part 3, where I’ll also look at what Carroll’s AIT has to say about explicit and implicit learning, and about what some of today’s gurus in ELT might learn from Carroll’s work.
This is a blog post, not an academic text. I’m exploring Carroll’s work, and I’ve no doubt made huge mistakes in describing and interpreting it. I await correction. But I hope it will provoke discussion among the many ELT folk who enjoy shooting the breeze about important questions which have a big impact (or should I say ‘impact big time’) on how we organise and implement teaching programmes.
Carroll, S. (2001) Input and Evidence. Amsterdam: Benjamins.
Carroll, S. (2002) I-learning. EUROSLA Yearbook 2, 7–28.
Gregg, K. R. (1993) Taking explanation seriously; or, let a couple of flowers bloom. Applied Linguistics 14, 3, 276-294.
Lipton, P. (1991) Inference to the Best Explanation. London: Routledge.
Popper, K. R. (1972) Objective Knowledge. Oxford: Oxford University Press.
Selinker, L., Kim, D. and Bandi-Rao, S. (2004) Linguistic structure with processing in second language research: is a ‘unified theory’ possible? Second Language Research 20, 1, 77–94.