Problems in SLA: Is Emergentism the answer to them?

Mike Long’s (2007) book Problems in SLA is divided into three parts: Theory, Research, and Practice.

Part One

In chapter 1, “Second Language Acquisition Theories”, Long reviews some of the many approaches to theory construction in SLA and suggests that the plethora of SLA theories obstructs progress. In chapter 2, Long suggests that culling is required, and he uses Laudan’s “problem-solving” framework (e.g., Laudan, 1996) as the basis for an evaluation process. Briefly, theories can be evaluated by asking how many empirical problems they explain, giving priority to problems of greater significance, or weight. Long suggests that among the weightiest problems in SLA are age differences, individual variation, cross-linguistic influence, autonomous interlanguage syntax, and interlanguage variation.

Part Two 

Chapter 3 deals with “Age Differences and the Sensitive Periods Controversy in SLA”. Why do the vast majority of adults fail to achieve native-like proficiency in a second language? Long argues that maturational constraints, or “sensitive periods”, explain this. Chapter 4 deals with recasts. As we know, recasts are a controversial issue, but they play an important role in Long’s focus on form. Long gives his usual careful review of the research literature to date and concludes that recasts facilitate acquisition “without interrupting the flow of conversation and participants’ focus on message content” (p. 94).

Part Three

Chapter 5, “Texts, Tasks and the Advanced Learner”, discusses Long’s version of TBLT. Long claims that his TBLT is superior to “the traditional grammatical syllabus and accompanying methodology, or what I call ‘focus on forms’” (p. 121) because it respects, rather than contradicts, robust findings in SLA. Long gives particular attention to the methodological principles of “focus on form” (reactive attention to form while attention is on communication) and “elaborated input” (use of elaborated rather than simplified texts). Finally, chapter 6, “SLA: Breaking the Siege”, responds to three “broad accusations made against SLA research in recent years”. The charges are “sociolinguistic naiveté, modernism, and irrelevance for language teaching”. Long finishes with suggestions on how the siege might be broken.

Discussion 

The book packs a powerful punch. The references section is impressive (as usual); chapters 3, 4, and 5 are still very informative; and chapters 1, 2, and 6 are still a cogently argued case for a critical rationalist approach to SLA research and its application to ELT. A slight niggle is that Long’s discussion of theory construction and evaluation in chapters 1 and 2 is not entirely consistent with the rest of the book. There’s a possible conflict between chapters 3 and 4  – the claim that SLA is maturationally constrained (a view usually associated with “nativist” theories) sits uneasily with the claims made for recasts – and the absence of any mention of the interaction hypothesis adds a bit more doubt about exactly what Long himself regards as the best theory of SLA. Such doubts are dealt with in his (2015) book Second Language Acquisition and Task-Based Language Teaching.

Chapter 3 describes “A Cognitive-Interactionist Theory of Instructed Second Language Acquisition (ISLA)”. Note that this is a theory of Instructed SLA, where, Long says, “necessity and sufficiency are less important than efficiency. Provision of negative feedback, for example, might eventually turn out not to be a relevant factor in a theory of SLA, as argued persuasively by Schwartz (1993), but its empirical track record, to date, as a facilitator of rate and, arguably, level of ultimate attainment makes it a legitimate component – in fact, a key component – of a theory of ISLA”. Long claims that his “embryonic” theory addresses empirical problems concerning (i) success and failure in adult SLA, (ii) processes in IL development, and (iii) effects and non-effects of instruction. The explanation is based on an emergentist, or usage-based (UB), theory of language acquisition:

A plausible usage-based account of (L1 and L2) language acquisition (see, e.g., N.C. Ellis 2007a,b, 2008c, 2012; Goldberg & Casenhiser 2008; Robinson & Ellis 2008; Tomasello 2003), with implicit learning playing a major role, begins with initially chunk-learned constructions being acquired during receptive or productive communication, the greater processability of the more frequent ones suggesting a strong role for associative learning from usage. Based on their frequency in the constructions, exemplar-based regularities and prototypical morphological, syntactic, and other patterns – [Noun stem-PL], [Base verb form-Past], [Adj Noun], [Aux Adv Verb], and so on – are then induced and abstracted away from the original chunk-learned cases, forming the basis for attraction, i.e., recognition of the same rule-like patterns in new cases (feed-fed, lead-led, sink-sank-sunk, drink-drank-drunk, etc.), and for creative language use (Long, 2015, pp. 48–49).

I personally don’t find Ellis’ usage-based account plausible, and I still can’t quite get used to the fact that Long went along with it. I console myself with the fact that Long didn’t join the Douglas Fir group, and that he retained his commitment to the importance of sensitive periods and interlanguage development. Furthermore, warts and all, I think Long’s book on TBLT is the best book on ELT ever written. Having said all that, I want to go back to Long’s concerns about theory construction in SLA and suggest that they don’t justify his siding with Nick Ellis “and the UB (not UG) hordes”.

Laudan’s aim was to reply to criticism of Popper’s “naïve” falsification criterion. He tried to improve on the work of Lakatos (who had the same aim of defending Popper’s falsification criterion) by suggesting, firstly, that science is to do with problem-solving, and secondly, that science makes progress by evolving research traditions. This concern with research traditions is at the heart of Laudan’s endeavor, and I don’t think Long sufficiently recognizes its importance. Laudan talks about research traditions in science; Long wants to talk about theories of SLA. In my opinion, Laudan gives a poor account of research traditions in science, and Long makes poor use of Laudan’s criteria for theory evaluation.

Laudan says that the overall problem-solving effectiveness of a theory is determined by assessing the number and importance of empirical problems which the theory solves and deducting therefrom the number and importance of the anomalies and conceptual problems which the theory generates (Laudan, 1978: 68). In a later work, Laudan (1996) develops his “problem-solving” approach and offers a taxonomy. He suggests, first, that we separate empirical from conceptual problems, and that, as far as empirical problems are concerned, we distinguish between “potential problems, solved problems and anomalous problems”. ‘Potential problems’ constitute what we take to be the case about the world, but for which there is as yet no explanation. ‘Solved problems’ are that class of putatively germane claims about the world which have been solved by some viable theory or another. ‘Anomalous problems’ are actual problems which rival theories solve but which are not solved by the theory in question (Laudan, 1996: 79). As for conceptual problems, Laudan lists four problems that can affect any theory.
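Laudan’s verbal formula can be put schematically. The notation below is my own gloss, not Laudan’s (he offers no formalism): S(T), A(T) and C(T) stand for the sets of solved, anomalous and conceptual problems associated with a theory T, and each w is the weight, or importance, of a problem.

```latex
% Problem-solving effectiveness (PSE) of a theory T, glossing Laudan (1978: 68):
% sum the weights of the empirical problems T solves, then deduct the weights
% of the anomalies and conceptual problems T generates.
\mathrm{PSE}(T) \;=\; \sum_{i \in S(T)} w_i \;-\; \Biggl(\, \sum_{j \in A(T)} w_j \;+\; \sum_{k \in C(T)} w_k \Biggr)
```

Even in this schematic form, the assessment presupposes a way of enumerating the problems in each set and a way of assigning the weights, and Laudan supplies neither.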

Laudan claims that this “taxonomy” helps in the relative assessment of rival theories, while remaining faithful to the view that many different theories in a given domain might well have different things to offer the research effort. Laudan argues that it is rational to choose the most progressive research tradition, where “most progressive” means the maximum problem-solving effectiveness. Note first that Laudan refers to the most progressive research tradition, not theory. But the main problem is how we assess the problem-solving effectiveness of rival research traditions. In the end, we will be forced to compare different theories belonging to different research traditions, and then how does one count the number of empirical problems solved by a theory? For example, is the “problem of the poverty of the stimulus” to be counted as one problem or several? In principle, the number of problems could be infinite. And how are we to assign different weightings to theories? How much weight should we give to Schmidt’s Noticing Hypothesis, and how much to Long’s Interaction Hypothesis, for example? Laudan’s inability to suggest how we might go about enumerating the seven types of problems in his taxonomy that are dealt with by any given research tradition (itself not a clearly-defined term), or how these problems might then be weighted, seems a fatal weakness in his account.

Even if we ignore this weakness, I don’t think Long makes a persuasive case for the UB research tradition he favours. In the field of linguistics, the nativist, UG-led research tradition has an impressive record; I can’t think of any way that the UB theories of N.C. Ellis, Goldberg & Casenhiser, Robinson & Ellis, and Tomasello can be made to score higher than the UG-based theories of Chomsky (1959), White (1989), Carroll (2000), and Hawkins (2001), for example. I’ve argued elsewhere in this blog against the emergentist view, usually citing Eubank and Gregg (2002) and Gregg (2003). Let me just summarise one point Gregg makes here.

For emergentists, SLA is a matter of associative learning: on the basis of sufficiently frequent pairings of two elements in the environment, one abstracts to a general association between the two elements. The environment provides all the necessary cues for these associations to form. Gregg (2003) gives this example from Ellis: ‘in the input sentence “The boy loves the parrots,” the cues are: preverbal positioning (boy before loves), verb agreement morphology (loves agrees in number with boy rather than parrots), sentence initial positioning and the use of the article the’ (1998: 653). Gregg asks ‘In what sense are these ‘cues’ cues, and in what sense does the environment provide them?’ The environment can only provide perceptual information, for example, the sounds of the utterance and the order in which they are made. Thus, in order for ‘boy before loves’ to be a cue that subject comes before verb, the learner must already have the concepts SUBJECT and VERB. According to Ellis, if SUBJECT is one of the learner’s concepts, that concept must emerge from the input. But how can it? How can the learner come to know about subjects or agreement in English? What ‘cues’ are there in the environment for us to learn the concept SUBJECT so that later on we can use that concept to abstract SVO from other input sentences? As Gregg (2003, p. 120) puts it:

Not only is it unclear how ‘preverbal position’ could be associated with ‘clausal subject’ or ‘agent of verb’, it is also not clear that these should be associated (Gibson, 1992): For instance, in sentences like ‘The mother of the boy loves the parrots’ or ‘The policeman who followed the boy loves the parrots,’ ‘the boy’ is preverbal but is neither subject nor agent. In short, there is no reason to think that ‘comes before the verb’ is going to be useful information for a learner or a hearer, in the absence of knowledge of syntactic structure. But once again, the emergentist owes us an explanation of how syntactic structure can be induced from perceptual information in the input.

Likewise, it does not make sense to say that learners “notice” formal aspects of the language from the input – grammar cannot, by definition, be “noticed” from perceptual information in the environment.

I don’t doubt that Mike would have made short work of these criticisms had I managed to put them to him. I recently asked him if we could discuss Chapter 3 on Skype, but he was already too ill. While there are, in my opinion, almost insurmountable problems for an empiricist, usage-based theory of language learning to overcome, and while it follows that I don’t think Long resolves them, in his (2015) book SLA & TBLT, Long uses Laudan’s “Problems and Explanations” framework to address four problems, rather than present any full theory of SLA. He does so with his usual scholarship, and he is absolutely clear about the most important issue facing us when it comes to designing courses of English for speakers of other languages: learning a new language is “far too large and too complex a task to be handled explicitly” …. “implicit learning remains the default learning mechanism for adults”. You can see his quote in context in this post: Mike Long: Reply to Carroll’s comments on the Interaction Hypothesis.

References 

Carroll, S. (2000). Input and evidence: The raw materials of second language acquisition. Amsterdam: Benjamins.

Chomsky, N. (1959). Review of B.F. Skinner, Verbal behavior. Language, 35, 26–58.

Eubank, L. and Gregg, K. R. (2002). News Flash – Hume Still Dead. Studies in Second Language Acquisition, 24, 237-24.

Gregg, K. R. (2003). The state of emergentism in second language acquisition. Second Language Research, 19(2), 95–128.

Hawkins, R. (2001). Second language syntax: A generative introduction. Oxford: Blackwell.

Laudan, L. (1978). Progress and its problems: Towards a theory of scientific growth. University of California Press.

Laudan, L. (1996). Beyond positivism and relativism: Theory, method, and evidence. Oxford and New York: Westview Press.

White, L. (1989). Universal Grammar and second language acquisition. Amsterdam: Benjamins.

Mike Long

Mike died on Sunday morning. He will be greatly missed by the applied linguistics academic community, by the anarcho-syndicalist movement, by his wide circle of friends all over the world, and by the hundreds, including me, who owe their academic careers to his generous help.

Over a period of more than forty years, Mike had a massive influence on developments in psycholinguistics, instructed SLA, and TBLT. His CV is testament to his contributions to the field; it includes a huge volume of published material, progressive editorial work and teaching that changed the lives of so many. Mike was a brilliant, meticulous, scrupulously honest academic, who had a lifelong commitment to “l’éducation intégrale”, as he called it in his book SLA and Task-based Language Teaching. He begins Chapter 4: “Education of all kinds, not just TBLT … serves either to preserve or challenge the status quo, and so is a political act, whether teachers and learners realize it or not”. Mike combined the highest standards of intellectual rigor with a sustained fight against the status quo. He was a regular contributor to the Anarcho-Syndicalist Review, joined campaigns and picket lines fighting for the rights of intellectual workers, and made many donations to support the anarchist movement.

Mike was a dangerous man to sit next to, in a conference plenary, a committee meeting, a seminar, wherever eruptions of laughter were frowned on. His stage-whispered asides were often just too funny to keep the laughter in, and even in restaurants, I was once asked by waiters to keep the noise down, as Mike, crying with laughter himself, told one of his funny stories. He was a delight to be with, and I’m glad that he was so fond of Cataluña, which he visited as often as he could. He and his partner Cathy Doughty called their son Jordi, his favorite football team was FC Barcelona, he loved Priorat wines, and we often went up to the cemetery in Montjuic to pay our respects to Buenaventura Durruti and his fallen comrades.

Among lots of projects he gave his support to, Mike helped Neil McMillan and me put together a teacher education course on TBLT. He thoroughly approved of the SLB Cooperative, so he happily gave us advice and materials, recorded presentations, and took part in webinars with participants on the course.

Mike and I were in the middle of writing a book about the ELT industry when he got the sudden news of his illness. He spent the last few months of his life working on the book, and, helped by Cathy, we got nine of the fourteen chapters more or less done. Cathy and I will now try to finish it.

Mike was the best teacher I ever had, and a wonderful, generous friend. I’ll miss him terribly.

P.S. There’s a great website here in honour of Mike: IN MEMORY | Mike Long (wixsite.com).

Life on Twitter

I closed my Twitter account a few months ago because I felt Twitter was mostly a waste of time. An exchange with JB Gerard, where he accused me of racism, was the trigger I needed. I opened another account under the name of Benny28908382 to watch what was happening. On Feb 18th, I “broke cover” and joined in the discussions. I replied to a tweet by ELT’s super salesman, Dr Gianfranco Conti (international keynote speaker, professional development provider, winner of the 2015 Times Educational Supplement egg and spoon race, etc., etc.), who made one of his typically crass pronouncements about SLA. Here it is, with what followed.

Part One

Discussion

Note the slide into personal attack, and I recognise that I provoked it. But my remarks were not meant as an attack on Tim; I was talking to Tim as a scholar, and I referred to what I saw as his lack of scholarship on this occasion. My interest was in questioning Conti’s confident edict, and I was surprised at Tim’s responses. Neither of us gave the best expression of our views, but anyway, we hadn’t, till the last bit, lost sight of the preposterous claim of Dr. Conti. We were talking about different theories of SLA. I find Tim an interesting man to talk to, or at least, I did, and I was certainly not treating him as an idiot. I see (or rather, I saw) Tim as someone who needs encouragement to study more. It would be, IMHO, a waste of talent if Tim satisfied himself with what he’s learned so far about the fashionable usage-based view of SLA, and failed to delve deeper. My suggestion that he needed to read more was honestly well-intentioned, but I accept that three of my replies were a bit harsh.

On we go.

Part 2 

Discussion

Note how Tim dodges the issue. “I’m more talking about using an L2 than learning it”, he says. We were talking about learning an L2, were we not? And note that Tim’s final tweet shows a poor understanding of unconscious learning. Well, never mind; tweets are dashed off, and we can’t expect the rigour found in texts where the author has time to express themself more clearly.

The Fallout

The above exchange led to this, the following day:

Discussion

See what’s happened? Tim discovers that Benny is Geoff, and that’s enough for him to turn things into an attack on Geoff, all discussion of SLA long forgotten. The real content, what little there was of it, has been lost. Never mind that Conti’s original statement is wrong; never mind that Tim can’t explain working memory, never mind that efficacious ELT is at stake. No. The important thing now, for Tim, is to show that Geoff’s opinions are as nothing compared to the offence he gives to good people like himself, he the perfect representative of the good folk who make up the wonderful community of Twitter ELT.

Tim, cute, charming Tim, the darling of the rainbow warriors, the apotheosis of the young whelp and mediocre dancer, responds to “being treated like an idiot” with a typical Twitter hatchet job that shows a shameful disregard for the truth. In a way that would make hardened Daily Mail journalists cringe, he concocts a story aimed at discrediting me. Tim quite falsely states that I closed my Twitter account because I’d been revealed as a racist and that I later opened a sockpuppet account with the intention of being abrasive to people I don’t like. He accuses me of malpractice, and of manipulating social media in my fiendish drive to continue making “incredibly rude”, “obnoxious” attacks on carefully-selected targets.

Tim’s response to our exchange is that of a preening, self-righteous prig. He brings the dregs with him. Among messages of support, Hugh Dellar, with his usual, ironic disdain for context, suggests that the real cause of my criticisms of him is repressed lust, while Stirling Bannock (not his real name), vows never to read a word I say. May they all be happy together.

Dellar on Grammar

Dellar’s latest blog post is Part Nine of his views on ELT. It’s called Part Nine: the vast majority of mistakes really aren’t to do with grammar!

I’ll summarise it and then suggest that

1. Most mistakes in the oral and written production of students of English as an L2 are to do with grammar

2. Dellar’s view of how people learn English as an L2 is badly-informed and incoherent

3. Dellar’s approach to teaching English as an L2 is mistaken.

Summary of Dellar’s blog post

When he was younger, Dellar believed that the root cause of student error was essentially grammatical. It took him “quite some time” to realise that since students only did tasks that focused on the production of grammatical structures, it was unsurprising that their errors were grammatical. Dellar comments that “to extrapolate out from such experiences and to then believe that mistakes are mostly down to grammar is a fallacy of the highest order”.

To become more aware of the real issues that students face when learning English, Dellar says that teachers need to change tack and focus on tasks which require the production of language outside the narrow confines of what are essentially grammar drills of varying kinds.  Unfortunately, during these “freer slots”, teachers still pick up on grammar. “This is what we’re most trained to focus on, and the way most of us are still trained to perceive error, and old habits die hard”.

Dellar then discusses how he and his co-author and colleague, Andrew Walkley, started using Vocaroo (an online audio recorder) to record fifty chunks / collocations and send the link to all their students. “They’d then write them down as best they could, like a dictation; we’d send the original list and students would then write examples of how they think they might actually use each item – or hear each being used. These were emailed over and we’d correct them, comment on them, etc.”. To their dismay, Dellar and Walkley found that words that they felt they had “explained well, given extra examples of, nailed, as it were”, would come back “half digested, or garbled, or in utterly alien contexts with bizarre co-text”.

Dellar explains these disappointing results as follows:

What is really going on is that the new language is somehow slowly getting welded awkwardly onto the old; meanings in the broadest sense are largely understood, but contexts of use not yet clearly grasped.

He goes on:

This should not surprise, of course. The fact that students have encountered new items in class, seen them once or twice or even three times in some kind of context, possibly translated them and more or less grasped their meanings is simply evidence of the fact that they’ve not yet been primed anywhere near sufficiently. For fluent users who’ve grasped new items, there’s been encounter after encounter after encounter, with item and with co-text in context; for learners, this process has only just begun, and as a result the odds of priming from L1 being brought over when it comes to using the new items creatively is very high indeed.

It also tempers the expectation one should have of the power and value of correction. I’m under no illusion that the detailed comments and extensive correction / recasting I carry out on student efforts (see below) will somehow magically result in correct and fluent use henceforth. Rather, I see my work here simply as further efforts to prime and to draw attention to glitches, misconceptions, perennial misuses and so on; in short, I am merely a condensed and rather more focused part of the priming process.

What else you realise is the sheer futility of trying to explain much error through the filter of grammar. Take the first sentence shown below – The area has been deserted after a huge flooding 3 years ago. What’s a dogged grammar hound to do here? Point out that if we’re using AFTER when talking about something that happened three years ago, we’d generally use the past simple, so if we want to use the present perfect, it’d be better to use SINCE? If we’re talking about flooding, it’s usually uncountable and thus kill the A? Even if you were to do this, you’d still be left with: The area has been deserted since huge flooding three years ago, which still sounds very stilted and forced. Often, the only real solution to the morass of oddness these sentences throw one into is rather severe reworking, with options sometimes given, questions sometimes asked, and explanations often proffered.

Dellar concludes that when we’re teaching new vocabulary, we need to pay careful attention to “how well we’re priming students”. Limiting instruction and feedback to single ways of saying things, or short ungrammaticalised chunks / collocations gives students little chance of “really coming to terms with the ways in which new items are typically used with previously learned grammar and vocabulary, or the kinds of (often fairly limited) contexts in which items are used”.

Dellar finishes his blog post with this:

Any of you who ever have to deal with student writing as they prepare to do degrees or Master’s in English, where all the kinds of issues seen above are compounded with serious discoursal and structural issues, spelling problems, paragraphing anomalies, and so on will know what I mean when I claim that prevention is infinitely preferable to cure.

And that the medicine needed really isn’t all that much to do with grammar as we know it!

Discussion

Let’s start with language errors made by L2 learners. Dellar ignores the work done by researchers on this subject.

We can begin with contrastive analysis research, notably Fries (1945), which suggested that errors are the result of transfer from the L1. Then came research in the 60s which showed that errors were not simply explained by L1 transfer; the same errors were commonly made by all language learners, regardless of their L1. Corder’s (1967) seminal work argued that errors were indications of learners’ attempts to figure out an underlying rule-governed system. Corder distinguished between errors and mistakes: mistakes are slips of the tongue and not systematic, whereas errors are indications of an as yet non-native-like, but nevertheless systematic, rule-based grammar. Here, Corder is suggesting that learning an L2 is a cognitive process, not a mindless (sic – for behaviourists, the construct of mind is anathema) process of responding to stimuli from the environment, where learners work with their own ideas about the L2, which slowly approximate to a native speaker model. This “interlanguage development” theory received its first full expression in Selinker’s (1972) paper, which argues that L2 learners develop their own autonomous mental grammar with its own internal organising principles. Selinker uses the word “grammar”, as do all applied linguistic scholars, to refer to the system and structure of a language, concentrating on syntax, but including morphology, phonology and semantics.

Dellar claims that “the vast majority of mistakes really aren’t to do with grammar!”. He is, quite simply, wrong, as thousands of studies attest. Errors in the output of learners of English as an L2 are usually categorized in terms of lexical, grammatical, phrasing, and pragmatic errors, with punctuation added when looking at written texts. There is not a single study that I know of on this subject which doesn’t give grammatical errors as the most frequent type of error. Here’s an example.

MacDonald (2016) found from an examination of written texts in English of Spanish university students that grammar errors made up the majority of errors.

Dellar’s view of language learning

I’ve dealt with Dellar’s view of language learning in a separate post, so let me focus here on his use of “priming” as an explanation of how people learn an L2. There are, at the moment, two rival (and incompatible) views of second language acquisition (SLA). The first is that it’s a cognitive process involving the development of interlanguage, helped by innate knowledge of how language works. The second is that learning an L2 is the same as learning anything else, including the L1: it’s a learning process driven by responding to stimuli in the environment. This is a modern version of behaviourism, and it’s motivated by a modern type of empiricism: language use emerges from social interaction, and only very basic statistical operations in the mind, based on the power law principle, are enough to explain how people learn an L2. These usage-based theories come in various forms and are referred to under the umbrella term “emergentism”. I think the best exponent of this view is Nick Ellis.

Priming is mostly associated with emergentist theories of SLA; it stresses frequency effects. But it’s complicated. How does priming occur? Is it unconscious? Is Schmidt’s “Noticing” construct compatible with the construct of priming? In his blog post, Dellar says:

What is really going on is that the new language is somehow slowly getting welded awkwardly onto the old

While that’s not what anybody who argues for an interlanguage development view of SLA would claim (new language doesn’t get “awkwardly welded onto the old”), it sounds as if Dellar is suggesting that learners do develop an increasingly sophisticated model of the target language. If he is, then this clashes with his insistence that priming is what explains SLA. Dellar says

The fact that students have encountered new items in class, seen them once or twice or even three times in some kind of context, possibly translated them and more or less grasped their meanings is simply evidence of the fact that they’ve not yet been primed anywhere near sufficiently. For fluent users who’ve grasped new items, there’s been encounter after encounter after encounter, with item and with co-text in context; for learners, this process has only just begun, and as a result the odds of priming from L1 being brought over when it comes to using the new items creatively is very high indeed.

First, pace Dellar, “encounter after encounter after encounter, with item and with co-text in context” is not a necessary condition for learning a language, as the daily inventive output of English users makes clear. Millions of times a day, fluent users of English as an L2 use combinations of items that they’ve NEVER encountered before, not even once. If Dellar wants to adopt a strictly “priming”, usage-based view of SLA, then he has to explain this. As Eubank and Gregg (2002) say: “… it is precisely because rules have a deductive structure that one can have instantaneous learning… With the English past tense rule, one can instantly determine the past tense form of ‘zoop’ without any prior experience of that verb… If all we know is that John zoops wugs, then we know instantaneously that John zoops, that he might have zooped yesterday and may zoop tomorrow, that he is a wug-zooper who engages in wug-zooping, that whereas John zoops, two wug-zoopers zoop, that if he’s a Canadian wug-zooper he’s either a Canadian or a zooper of Canadian wugs (or both), etc. We know all this without learning it, without even knowing what ‘wug’ and ‘zoop’ mean”.

Second, Dellar wants to explain failure to learn “new items” of the L2 by appeal to insufficient priming. But that is not how a great many scholars (including Eubank and Gregg, of course, and a legion of others) would explain it, and it’s not how those in the emergentist camp would explain it either. Dellar says that learning depends on priming, without explaining what priming refers to. Elsewhere, Dellar has said that he uses the construct “priming” to refer to lexical priming, not structural or syntactic priming, and that he bases himself on Hoey’s 2005 book. Hoey says that priming amounts to this: “every time we use a word, and every time we encounter it anew, the experience either reinforces the priming by confirming an existing association between the word and its co-texts and contexts, or it weakens the priming, if we encounter a word in unfamiliar contexts” (Hoey, 2005). Note that there is absolutely no way that such a statement can be tested by appeal to empirical evidence; Hoey’s theory is circular. Until the construct of “priming” is operationally defined in such a way that statements about it are open to empirical refutation, it remains a mysterious construct that people like Dellar can use as they want. Furthermore, Dellar fails to explain how his insistence that ELT should focus on the explicit teaching of lexical chunks can be reconciled with Hoey’s insistence that lexical priming is a psycholinguistic phenomenon that refers to implicit, unconscious learning.

ELT 

Which brings us to the third matter: Dellar’s approach to teaching. We get a glimpse of it when he talks of “the sheer futility” of explaining error “through the filter of grammar”. Using the example of a student who wrote

The area has been deserted after a huge flooding 3 years ago

he asks “What’s a dogged grammar hound to do here?” and proceeds to lampoon the advice such grammar hounds might offer. He concludes that their answer

The area has been deserted since huge flooding three years ago

“still sounds very stilted and forced”, and he suggests that the text needs “rather severe reworking”, no doubt so as to include some of his beloved lexical chunks. Well, “The area has been deserted since huge flooding three years ago” sounds OK to me, and reading Dellar’s own work is enough to raise serious questions about his ability to judge the coherence and cohesion of written texts. In any case, I think most students would benefit more from the recast Dellar thinks the grammar hounds would arrive at than from Dellar’s own feedback, as evidenced in the examples he provides. What, one wonders, is the effect on a student of that kind of feedback? How does such severe reworking get welded onto the student’s current model of English? Dellar pours scorn on conventional grammar teaching, but his attempts to incorporate his own “bottom-up grammar” into his lexical approach are bewildering – see this recording.

Dellar’s preoccupation with the importance of lexical chunks informs his view of ELT. “Don’t teach grammar, teach lexical chunks” is the message. Rather than appreciate the fact that language learning is essentially a matter of implicit learning, and that any type of synthetic syllabus, be it grammar based or lexical chunk based, is fatally flawed, Dellar insists, like nobody else in the commercial field of ELT, that explicit teaching (of lexical chunks in context) should drive language learning. He talks about the problems he had in his attempts to teach students 50 lexical chunks a week, but what did he learn? Not that it’s an impossible task to teach learners the tens of thousands of lexical chunks native speakers use, nor even that there are principled ways of reducing the number. No, all he learnt was that the lexical chunks need to be embedded in context.

Dellar’s lexical chunks, served up every few days on his website, now number well over 200. What informs inclusion in this motley collection? And how are they all to be sufficiently “primed” so as to form part of the learner’s procedural knowledge of English?

For a fuller assessment of Dellar’s views of ELT, see separate posts here and here.

Finally, what about Dellar’s conclusion?

Any of you who ever have to deal with student writing as they prepare to do degrees or Master’s in English, where all the kinds of issues seen above are compounded with serious discoursal and structural issues, spelling problems, paragraphing anomalies, and so on will know what I mean when I claim that prevention is infinitely preferable to cure.

And that the medicine needed really isn’t all that much to do with grammar as we know it!

But is prevention better than cure when it comes to ELT?  Should teachers strive to prevent their students from making mistakes, rather than helping them to learn from mistakes? And, in the unlikely event that you reply “Yes, they should”, then what’s the preventive medicine? Learning by heart fifty randomly selected lexical chunks, along with contexts, every week?

 

References

Corder, S. P. (1967). The Significance of Learners’ Errors. International Review of Applied Linguistics in Language Teaching, 5, 161-170.

Eubank, L., & Gregg, K. (2002). News flash—Hume still dead. Studies in Second Language Acquisition, 24(2), 237-247.

Fries, C. C. (1945). Teaching and Learning English as a Foreign Language. Ann Arbor: University of Michigan Press.

Hoey, M. (2005). Lexical Priming: A New Theory of Words and Language. London: Routledge.

MacDonald, P. (2016). “We all make mistakes!”: Analysing an error-coded corpus of Spanish university students’ written English. Complutense Journal of English Studies, 24, 103-129.

Selinker, L. (1972). Interlanguage. International Review of Applied Linguistics in Language Teaching, 10, 209-241.

 

Radical ELT Part One

Recently I appealed for help in writing the final chapter of a book Mike Long and I are writing on ELT. The chapter is called “Radical ELT: Signs of struggle: Towards an alternative organization of ELT”, and I’d like to thank all those who have been in touch. I’ve had replies from lots of radicals, all doing great things to challenge the interlocking publishing, teaching, teacher-training, and testing hydra that makes up the current $200 billion ELT industry, an industry whose prime motivation, profit, leads inevitably to the commodification of education, with disastrous consequences for almost everybody concerned. Whoops! I should have said, perhaps, that they’re all making significant contributions to on-going attempts to change ELT practice in such a way that students and teachers benefit.

In this post, the first of a series dedicated to radicals working in ELT, I’d like to highlight the work being done by Nick Bilbrough.

The Hands Up Project (Click the link to go to their website)

Nick Bilbrough is the founder and main mover of this project, which aims to help kids in Gaza and the Occupied West Bank learn English. Five years ago, using simple video conferencing tools, he started connecting online to a small group of children in a library in Beit Hanoun, Gaza for weekly storytelling sessions. Now, to quote from the website, “the Hands Up Project works with over thirty different groups in Gaza. More than 500 kids a week now connect to volunteers around the world who work in collaboration with the local teacher to tell stories to each other, to play games and to do other activities to help bring the English that the children are learning to life”.

Just last week, Nick organized an online session where the winners of the “Toothbrush and other plays” competition were announced. I joined the 100+ people online for the event, and I have to say it was incredibly moving to watch so many kids from all over the world taking part, all of them doing their bit to support their friends in Gaza and the occupied West Bank. The authors and actors all had their say; kids from Brazil did a play written by kids in Gaza; the solidarity and human warmth of everybody involved was truly inspirational.

By far the most important thing about the Hands Up project is its brave political stance, its support for Palestinians and all those who are marginalized by the policies of the Israeli government. We should all speak out against the long-standing abuses of human rights, the illegal expansion of territory, and the apartheid policies of the Israeli government which are, shamefully, condoned by the US and UK governments, among so many others.

When I talked to Nick on the phone, he said he agreed with Scott Thornbury’s views of ELT, his dismissal of coursebooks, his emphasis on communicative practice. (Scott, by the way, is a trustee of the Hands Up project and has done a lot of work for them, “Invaluable! Nobody else could have done it”, Nick said.) I asked him why he called himself a radical. “Because I’m trying to give a voice to those without a voice” he said. “Empowerment” was a word he used a lot. And he didn’t know a better way of empowering learners than by storytelling and putting on plays.

Now, I don’t want to claim Nick as a trophy, a signed-up supporter of TBLT (although I reckon he’d be pleased to be counted as a signed-up supporter of Dogme), but it’s important to note that Nick and all those working on the Hands Up project reject current ELT practice. They care little for the CEFR, they care less for a PPP approach, and they care absolutely nothing for coursebooks. They DO English. They involve their students in storytelling, in shooting the breeze, and in the collaborative work of writing and putting on plays. That’s the focus of their work, of their classes, which together make up a coherent, exciting, alternative syllabus (sic). It would now be possible for Nick, a highly qualified and experienced teacher of English as an L2, with lots of potential commercial backers, to organize more conventional English courses, using coursebooks with all their bells and whistles, gladly donated by a savvy publisher. But Nick’s a radical, and so are all those in his growing team.

Can online teaching be a force for change?

Covid-19 has forced teachers of English as an L2 to switch to online platforms, Zoom being the most popular. One of the results has been a lot of discussion among teachers on social media about how best to adapt their practice to the new environment. Not surprisingly, most of the discussion is about technical issues, about the mechanics of how “usual”, “normal” classroom-based teaching practices can best be transferred to Zoom sessions. But I find it encouraging to see that a significant part of the discussion is about frustration at the unsatisfactory level and quality of student participation. And from that, I dare to suggest, as an unintended consequence, come questions about the efficacy of coursebook-driven ELT.

In a normal classroom session, the teacher is in the same place as a group of students, and certain things go unnoticed and unremarked: the teacher talks most of the time; leads the students through a set of activities which mostly involve them in working, often with their heads in the coursebook, on bits of the language; and gets them to engage in real communication among themselves for only very short periods of time. It’s normal! Still, everybody’s together, there’s often a good, shared atmosphere, and the skillful teacher moves around the students, checking and encouraging, making the classroom session friendly, purposeful, and well-structured.

But the online version of the same session is more likely to fall flat, and the lack of real communication among the group is thrown into stark relief. Typically, it’s the “production” part of the PPP methodology in online classes that doesn’t work, and, I suggest, that’s hardly surprising. If you use coursebooks in an online environment, their basic focus on talking ABOUT the language, on studying the language as an object, is magnified. Students perhaps feel more keenly that they’re there to study the language, to be told stuff, to learn that particular bit of the book.

The alternative is to use the online environment to talk IN the language and to organize the classes so that genuine communication among the group is the predominant ingredient of each session. Let’s suppose that the class is about job interviews. In a coursebook, this is, let’s say, Unit 3. The “Lead In” activity might be

“Have you ever been for a job interview? In pairs, talk about: What job was it? Who interviewed you? What happened?”.

The problem is that this activity is one of ten; it’s allotted 10 minutes, after which the REAL FOCUS of the lesson – selected bits of vocabulary and a grammar point (perhaps the present perfect) – is then developed through a series of activities, most of which involve students studying the language. How do you organize that on Zoom? Well, you use breakout groups for group discussion, but that takes up a small proportion of the total time, and it’s rightly perceived as peripheral to the “real” job of learning.

In contrast, in a TBLT course, a series of lessons deals with job interviews, if this is identified as a need for those doing the course. In the first lesson, we concentrate on relevant input, and simple productive tasks. In subsequent lessons, we get students to talk to each other about various parts of a job interview, slowly leading towards getting them to do a simulation of such an interview. Every lesson is organized around their using the language, talking to each other, where the teacher gives help with vocabulary and grammar reactively, when it’s needed.

In a Dogme course, if the students expressed an interest in job interviews, the focus of the lesson would be their discussion of the topic. There would be no pre-planned focus on particular grammar points or other formal aspects of the language, and MOST of classroom time would be devoted to communicative activities.

If you emphasise learning by doing, as you do in TBLT and Dogme approaches, then you prioritise student participation, and you make it clear that that’s what you expect students to do. As a result, the online Zoom sessions are much more likely to be perceived by students as events where talking to each other is the main point.

There is no doubt that interest in alternatives to coursebook-driven ELT has grown dramatically as a result of the rise in online teaching, which has inspired teachers to take a fresh look at what they’re doing. Why is teaching English online with a coursebook such a drag? Because the vital, ameliorating effects of teachers working their magic in a classroom can’t rescue it.

On the other hand, TBLT, where tasks, not linguistic items, are the units of analysis for syllabus design, lends itself to online teaching, because tasks naturally involve students more – they demand active student participation, as they do in the classroom too, of course. Tasks fit naturally with the way Zoom organizes interaction; Zoom is not the obvious home of a PPP methodology and the mentality that it encourages. Tasks can be organized on Zoom in such a way that they naturally lead to the kinds of interaction required for language learning – learning by doing.

Likewise, Dogme, in its rejection of coursebooks, its brave, inspirational insistence on the core values of communicative language teaching, offers an enticing alternative to Zoom sessions devoted to teaching “McNuggets”.

So, when we discuss online teaching, let’s not just discuss adapting what we do to an online platform. Let’s discuss radical change. Change from global, commodified, coursebook-driven ELT to local responses to local needs. There’s no doubt that Dogme and TBLT are leading the way, and I hope more teachers will join the rising numbers informing themselves about these exciting alternatives, which might just, at last, threaten the thirty-year-old hegemony of coursebooks and the related paraphernalia of the CEFR, high stakes proficiency exams, CELTA, and all that.