Teacher Trainers in ELT

This blog is dedicated to improving the quality of teacher training and development in ELT.

 

The Teacher Trainers 

The most influential ELT teacher trainers are those who publish “How to teach” books and articles, have on-line blogs and a big presence on social media, give presentations at ELT conferences, and travel around the world giving workshops and teacher training & development courses. Among them are: Jeremy Harmer, Penny Ur, Nicky Hockly, Adrian Underhill, Hugh Dellar, Sandy Millin, David Deubelbeiss, Jim Scrivener, Willy Cardoso, Peter Medgyes, Mario Saraceni, Dat Bao, Tom Farrell, Tamas Kiss, Richard Watson-Todd, David Hill, Brian Tomlinson, Rod Bolitho, Adi Rajan, Chris Farrell, Marisa Constantinides, Vicki Hollett, Scott Thornbury, and Lizzie Pinard. I appreciate that this is a rather “British” list, and I’d be interested to hear suggestions about who else should be included. Apart from these individuals, the Teacher Development Special Interest Groups (TD SIGs) in TESOL and IATEFL also have some influence.

What’s the problem? 

Most current teacher trainers and TD groups pay too little attention to the question “What are we doing?”, and the follow-up question “Is what we’re doing effective?”. The assumption that students will learn what they’re taught is left unchallenged, and trainers concentrate either on coping with the trials and tribulations of being a language teacher (keeping fresh, avoiding burn-out, growing professionally and personally) or on improving classroom practice. As to the latter, they look at new ways to present grammar structures and vocabulary, better ways to check comprehension of what’s been presented, more imaginative ways to use the whiteboard to summarise it, and more engaging activities to practice it.  A good example of this is Adrian Underhill and Jim Scrivener’s “Demand High” project, which leaves unquestioned the well-established framework for ELT and concentrates on doing the same things better. In all this, those responsible for teacher development simply assume that current ELT practice efficiently facilitates language learning.  But does it? Does the present model of ELT actually deliver the goods, and is making small, incremental changes to it the best way to bring about improvements? To put it another way, is current ELT practice efficacious, and is current TD leading to significant improvement? Are teachers making the most effective use of their time? Are they maximising their students’ chances of reaching their goals?

As Bill VanPatten argued in his plenary at the BAAL 2018 conference, language teaching can only be effective if it comes from an understanding of how people learn languages. In 1967, Pit Corder was the first to suggest that the only way to make progress in language teaching is to start from knowledge about how people actually learn languages. Then, in 1972, Larry Selinker suggested that instruction on the formal properties of language has a negligible impact (if any) on real development in the learner. Next, in 1983, Mike Long again raised the issue of whether instruction on the formal properties of language makes a difference in acquisition. Since these important publications, hundreds of empirical studies have been published on everything from the effects of instruction to the effects of error correction and feedback. This research has in turn resulted in meta-analyses and overviews that can be used to measure the impact of instruction on SLA. All the research indicates that the current, deeply entrenched approach to ELT, where most classroom time is dedicated to explicit instruction, vastly over-estimates the efficacy of such instruction.

So in order to answer the question “Is what we’re doing effective?”, we need periodically to revisit questions about how people learn languages. Most teachers are aware that we learn our first language(s) unconsciously and that explicit learning about the language plays a minor role, but they don’t know much about how people learn an L2. In particular, few teachers know that the consensus among SLA scholars is that implicit learning through using the target language for relevant, communicative purposes is far more important than explicit instruction about the language. Here are just four examples from the literature:

1. Doughty (2003) concludes her chapter on instructed SLA by saying:

In sum, the findings of a pervasive implicit mode of learning, and the limited role of explicit learning in improving performance in complex control tasks, point to a default mode for SLA that is fundamentally implicit, and to the need to avoid declarative knowledge when designing L2 pedagogical procedures.

2. Nick Ellis (2005) says:

the bulk of language acquisition is implicit learning from usage. Most knowledge is tacit knowledge; most learning is implicit; the vast majority of our cognitive processing is unconscious.

3. Whong, Gil and Marsden’s (2014) review of a wide body of studies in SLA concludes:

“Implicit learning is more basic and more important than explicit learning, and superior. Access to implicit knowledge is automatic and fast, and is what underlies listening comprehension, spontaneous speech, and fluency. It is the result of deeper processing and is more durable as a result, and it obviates the need for explicit knowledge, freeing up attentional resources for a speaker to focus on message content”.

4. ZhaoHong and Nassaji (2018) review 35 years of instructed SLA research, and, citing the latest meta-analysis, they say:

On the relative effectiveness of explicit vs. implicit instruction, Kang et al. reported no significant difference in short-term effects but a significant difference in longer-term effects with implicit instruction outperforming explicit instruction.

Despite lots of other disagreements among themselves, the vast majority of SLA scholars agree on this crucial matter. The evidence from research into instructed SLA gives massive support to the claim that concentrating on activities which foster implicit knowledge (by developing the learners’ ability to make meaning in the L2, through exposure to comprehensible input, participation in discourse, and implicit or explicit feedback) leads to far greater gains in interlanguage development than concentrating on the presentation and practice of pre-selected bits and pieces of language.

One of the reasons why so many teachers are unaware of the crucial importance of implicit learning is that so few teacher trainers talk about it. Teacher trainers don’t tell their trainees about the research findings on interlanguage development, or that language learning is not a matter of assimilating knowledge bit by bit; or that the characteristics of working memory constrain rote learning; or that by varying different factors in tasks we can significantly affect the outcomes. And there’s a great deal more we know about language learning that teacher trainers don’t pass on to trainees, even though it has important implications for everything in ELT from syllabus design to the use of the whiteboard; from methodological principles to the use of IT, from materials design to assessment.

We know that in the not so distant past, generations of school children learnt foreign languages for 7 or 8 years, and the vast majority of them left school without the ability to maintain an elementary conversational exchange in the L2. Only to the extent that teachers have been informed about, and encouraged to critically evaluate, what we know about language learning, constantly experimenting with different ways of engaging their students in communicative activities, have things improved. To the extent that teachers continue to spend most of the time talking to their students about the language, those improvements have been minimal.  So why do so many teacher trainers ignore all this? Why is all this knowledge not properly disseminated?

Most teacher trainers, including Penny Ur (see below), say that, whatever its faults, coursebook-driven ELT is practical, and that alternatives such as TBLT are not. Ur actually goes as far as to say that there’s no research evidence to support the view that TBLT is a viable alternative to coursebooks. Such an assertion is contradicted by the evidence. In a recent statistical meta-analysis by Bryfonski & McKay (2017) of 52 evaluations of program-level implementations of TBLT in real classroom settings, “results revealed an overall positive and strong effect (d = 0.93) for TBLT implementation on a variety of learning outcomes” in a variety of settings, including parts of the Middle-East and East Asia, where many have flatly stated that TBLT could never work for “cultural” reasons, and “three-hours-a-week” primary and secondary foreign language settings, where the same opinion is widely voiced. So there are alternatives to the coursebook approach, but teacher trainers too often dismiss them out of hand, or simply ignore them.

How many TD courses today include a sizeable component devoted to the subject of language learning, where different theories are properly discussed so as to reveal the methodological principles that inform teaching practice?  Or, more bluntly: how many TD courses give serious attention to examining the complex nature of language learning, which is likely to lead teachers to seriously question the efficacy of basing teaching on the presentation and practice of a succession of bits of language? Today’s TD efforts don’t encourage teachers to take a critical view of what they’re doing, or to base their teaching on what we know about how people learn an L2. Too many teacher trainers base their approach to ELT on personal experience, and on the prevalent “received wisdom” about what and how to teach. For thirty years now, ELT orthodoxy has required teachers to use a coursebook to guide students through a “General English” course which implements a grammar-based, synthetic syllabus through a PPP methodology. During these courses, a great deal of time is taken up by the teacher talking about the language, and much of the rest of the time is devoted to activities which are supposed to develop “the 4 skills”, often in isolation. There is good reason to think that this is a hopelessly inefficient way to teach English as an L2, and yet, it goes virtually unchallenged.

Complacency

The published work of most of the influential teacher trainers demonstrates a poor grasp of what’s involved in language learning, and little appetite to discuss it. Penny Ur is a good example. In her books on how to teach English as an L2, Ur spends very little time discussing the question of how people learn an L2, or encouraging teachers to critically evaluate the theoretical assumptions which underpin her practical teaching tips. The latest edition of Ur’s widely recommended A Course in Language Teaching includes a new sub-section where precisely half a page is devoted to theories of SLA. For the rest of the 300 pages, Ur expects readers to take her word for it when she says, as if she knew, that the findings of applied linguistics research have very limited relevance to teachers’ jobs. Nowhere in any of her books, articles or presentations does Ur attempt to seriously describe and evaluate evidence and arguments from academics whose work challenges her approach, and nowhere does she encourage teachers to do so. How can we expect teachers to be well-informed, critically acute professionals in the world of education if their training is restricted to instruction in classroom skills, and their on-going professional development gives them no opportunities to consider theories of language, theories of language learning, and theories of teaching and education? Teaching English as an L2 is more art than science; there’s no “best way”, no “magic bullet”, no “one size fits all”. But while there’s still so much more to discover, we now know enough about the psychological process of language learning to know that some types of teaching are very unlikely to help, and that other types are more likely to do so. Teacher trainers have a duty to know about this stuff and to discuss it with their trainees.

Scholarly Criticism? Where?  

Reading the published work of leading ELT trainers is a depressing affair; few texts used for the purpose of training teachers to work in school or adult education demonstrate such poor scholarship as that found in Harmer’s The Practice of English Language Teaching, Ur’s A Course in Language Teaching, or Dellar and Walkley’s Teaching Lexically, for example. Why are these books so widely recommended? Where is the critical evaluation of them? Why does nobody complain about the poor argumentation and the lack of attention to research findings which affect ELT? Alas, these books typify the general “practical” nature of TD programmes in ELT, and their reluctance to engage in any kind of critical reflection on theory and practice. Go through the recommended reading for most TD courses and you’ll find few texts informed by scholarly criticism. Look at the content of TD courses and you’ll be hard pushed to find a course which includes a component devoted to a critical evaluation of research findings on language learning and ELT classroom practice.

There is a general “craft” culture in ELT which rather frowns on scholarship and seeks to promote the view that teachers have little to learn from academics. Teacher trainers are, in my opinion, partly responsible for this culture. While it’s unreasonable to expect all teachers to be well informed about research findings regarding language learning, syllabus design, assessment, and so on, it is surely entirely reasonable to expect the top teacher trainers to be so. I suggest that teacher trainers have a duty to lead discussions, informed by relevant scholarly texts, which question common-sense assumptions about the English language, how people learn languages, how languages are taught, and the aims of education. Furthermore, they should do far more to encourage their trainees to constantly challenge received opinion and orthodox ELT practices. This, surely, is the best way to help teachers enjoy their jobs, be more effective, and identify the weaknesses of current ELT practice.

My intention in this blog is to point out the weaknesses I see in the works of some influential ELT teacher trainers and invite them to respond. They may, of course, respond anywhere they like, in any way they like, but the easier it is for all of us to read what they say and join in the conversation, the better. I hope this will raise awareness of the huge problem currently facing ELT: it is in the hands of those who have more interest in the commercialisation and commodification of education than in improving the real efficacy of ELT. Teacher trainers do little to halt this slide, or to defend the core principles of liberal education which Long so succinctly discusses in Chapter 4 of his book SLA and Task-Based Language Teaching.

The Questions

I invite teacher trainers to answer the following questions:

 

  1. What is your view of the English language? How do you transmit this view to teachers?
  2. How do you think people learn an L2? How do you explain language learning to teachers?
  3. What types of syllabus do you discuss with teachers? Which type do you recommend to them?
  4. What materials do you recommend?
  5. What methodological principles do you discuss with teachers? Which do you recommend to them?

 

References

Bryfonski, L., & McKay, T. H. (2017). TBLT implementation and evaluation: A meta-analysis. Language Teaching Research.

Dellar, H. and Walkley, A. (2016) Teaching Lexically. Delta Publishing.

Doughty, C. (2003) Instructed SLA. In Doughty, C. and Long, M. (eds.) The Handbook of Second Language Acquisition, pp. 256–310. New York, Blackwell.

Ellis, N. C. (2005) At the interface: Dynamic interactions of explicit and implicit language knowledge. Studies in Second Language Acquisition, 27(2), 305–352.

Long, M. (2015) Second Language Acquisition and Task-Based Language Teaching. Oxford, Wiley.

Ur, P. (2012) A Course in Language Teaching (2nd ed.). Cambridge, CUP.

Whong, M., Gil, K.H. and Marsden, H. (2014) Beyond paradigm: The ‘what’ and the ‘how’ of classroom research. Second Language Research, 30(4), pp. 551–568.

ZhaoHong, H. and Nassaji, H. (2018) Introduction: A snapshot of thirty-five years of instructed second language acquisition. Language Teaching Research, in press.


Broken Telephone

The latest spate of conferences has seen a lot of hand-waving in favour of “evidence-based” ELT. Among leading figures there’s a growing consensus that, like motherhood and being kind to animals, evidence-based teaching is a “good thing”. No need to get excited, though, because:

  1. All the evidence that challenges current ELT practice is still routinely ignored.
  2. Even the evidence that’s being aired is quickly adulterated.

Carol Lethaby’s talk at the recent Pavilion ELT Live! conference is a case in point. In this post I want to suggest that it’s the second link in an ongoing chain of the “Broken Telephone” game, in which scholarly respect for the careful presentation and use of evidence is slowly but surely abandoned.

Original Text 

The story starts in 2013, with the publication of a paper by Dunlosky et al. This paper reports on work done by cognitive and educational psychologists, and its aim is to evaluate findings of studies on 10 learning techniques. It’s important to note that this is not about L2 language learning (SLA) research, or instructed SLA research; it’s a meta-analysis of research into the explicit learning strategies used by students of content-based subjects across the curriculum. See the Tables below for details.

The paper divided the 10 techniques into 3 categories, as follows:

Techniques with “low utility” (little or no evidence supporting their efficacy)

  • highlighting and underlining content as you read;
  • writing a summary of a to-be-learned text;
  • visualising what you are reading in order to learn it;
  • using mnemonic devices such as the keyword technique to learn vocabulary;
  • re-reading a text to learn its contents.

Techniques with “high utility”

At the other extreme, distributed practice and practice testing are well known to be efficacious in certain types of explicit learning, and this is confirmed by the authors.

Techniques with “moderate utility”

  • elaborative interrogation,
  • self-explanation,
  • interleaved practice.

These got “moderate utility” assessments because the evidence is far too limited to warrant any firm claims for them. As the authors say,

these techniques have not been adequately evaluated in educational contexts.

adding that

the benefits of interleaving have only just begun to be systematically explored.

The conclusion is that

the ultimate effectiveness of these techniques is currently unknown.

First Round  

Fast forward to the 2018 IATEFL conference, where Patricia Harries and Carol Lethaby gave a talk called

Use your brain! Exploiting evidence-based teaching strategies in ELT.

A summary of the talk, by the authors, is included in the Conference Selections publication, edited by Tania Pattison. Having reminded everybody that there’s no evidence to support claims about learning styles (neuro-linguistic programming – NLP) or about putative left- and right-hemisphere learners, Harries and Lethaby summarised the findings of Dunlosky et al’s 2013 paper. The important thing for my argument is how they presented the findings on the three strategies which got a “moderate utility” evaluation.

Elaborative interrogation: Learners explain things to themselves as they learn, for example, saying why a particular fact is true. The technique is thought to enhance learning by helping to integrate new and existing knowledge. The Dunlosky et al (2013) paper stresses that there’s little evidence so far, and that more research is required.

Self-explanation: This is another questioning strategy. Here, students use questions to monitor and explain features of their own learning to themselves. Again, Dunlosky et al’s paper stresses that further research is needed before we can say there’s a good evidence base.

Interleaved practice: Interleaved practice involves the mixing up of different practice activities in a single session, and has been found to be effective in problem solving and categorisation. Yet again, more research is needed before any firm conclusions about its efficacy can be drawn.

Implications for ELT

When they went on to discuss the implications of these findings for ELT, Harries and Lethaby said that there were a number of ways that these techniques could be adapted. And here, I think, is where the limitations of the evidence start to get lost. In this, the eagerly awaited climax of the talk, the presenters slip away from “What if..?” to “This is what we can do now”. Particularly enthusiastic is their endorsement of using “prior knowledge” (see van Kesteren et al, 2014) to design pre-tasks in receptive skills development, to build spiral syllabuses, and to exploit previous L1 and L2 knowledge in vocabulary learning. This, despite the fact that research into the acquisition of new knowledge which might be integrated into pre-existing conceptual schemas has so far led to no firm findings. The presenters also talk up incorporating elaborative interrogation into teaching grammatical rules and structures, and using self-explanation to ask learners how they found their answers in language and skills tasks. They conclude:

By spacing the practice of language items and structures using increasingly greater time intervals and mixing up language practice and skills development, distributed and interleaved practice can help integrate elements, build automaticity and aid memorisation.

All of these claims can give the impression that they’re supported by strong and robust research evidence, whereas, in fact, they’re not. What Harries and Lethaby should have said was: “If we ever get convincing evidence that these techniques work in L2 learning, then we could use them in the following ways”.

To be fair, the presenters concluded:

Evidence-based strategies exist and language teachers need to be aware of them. Language teachers and learners already use many of these and it is beneficial to know that research supports this. There is, however, a danger in a whole-scale adoption of findings from research on content-based subject areas often done in inauthentic teaching situations.

Furthermore, there’s a video of Jeremy Harmer interviewing Lethaby at the conference. When asked to give an example of how evidence from neuroscience can make ELT practice more efficacious, Lethaby replied:

“There seems to be a place in the brain where new and old information connect. We can, perhaps, in the future, when we know more about how this is done, use it to help learners assimilate new information. But we have to be very cautious, we have to be careful. This is still only potential”. 

In brief, this strikes me as an interesting talk, which points to the future potential of three techniques and reaffirms the evidence for the efficacy of spaced practice and practice testing. We should note that the talk extrapolates from studies of content-based school courses where the focus is on the teaching of explicit knowledge. In the case of ELT, those of us who question a PPP, coursebook-driven approach consider such a focus to be fundamentally mistaken. Most importantly, what worries me is the discussion at the end, particularly the last bit I quoted, because it could easily be misinterpreted as saying that research evidence to date gives strong support to the claim that elaborative interrogation, self-explanation, and interleaved practice are efficacious tools for classroom use.

Second Round  

We come now to Lethaby’s presentation at the recent 2019 conference. Photos of slides from the talk, and tweets during and after it, suggest that Lethaby more or less repeated her 2018 IATEFL talk. But did she repeat all the caveats, I wonder? Did she make it abundantly clear that conjectures based on “a place in the brain where new and old information connect” are not evidence for the efficaciousness of anything, and that there’s still no good reason to believe that interleaving and elaborative interrogation are good L2 learning strategies? Or did she, as in 2018, rather over-egg the pudding again? What were the takeaways from Lethaby’s talk, I wonder, and did she do enough to prevent people from misinterpreting what she was saying?

Third Round 

Well, she certainly didn’t do enough to prevent Hugh Dellar from getting it all wrong. Soon after the start of the talk, Dellar tweeted:

@clethaby laying into some neuromyths that still blight our profession: left brain right brain nonsense, we only use 10% of our brains, students learn better when presented with info via preferred learning style etc. Great stuff.

Then:

Really great talk by @clethaby unpicking the truths and lies behind neuroscientific research in ELT. It’s a real cry for deeper engagement with the actual research and for a healthy degree of scepticism, as well as plea for real evidence-based teaching.

So Dellar has already forgotten what I’m sure Lethaby told him at the start of the talk, namely that she wasn’t reporting on neuroscientific research in ELT at all. Still, it’s good to see such a sincere endorsement of cries for deeper engagement with research.

And then, we read this:

 @clethaby suggests that far more effective is spaced practice and interleaving, practice testing and eleborative interrogation – explaining to someone else how/why you got the answer to something (esp. in a guided discovery manner). It helps embed new in old.

I assume that, apart from the last sentence, this is a garbled attempt to paraphrase what Lethaby actually said, and it resembles what she said in her 2018 talk, as quoted above. Still, it shows what happens when you discuss implications of unconfirmed findings without constantly repeating that the findings are unconfirmed.

And what does that last sentence – “It helps embed new in old” – mean? It sounds as if it ought to mean something and as if Dellar thinks he knows exactly what it means. Perhaps it means “Using these four techniques helps embed new knowledge in old knowledge”. But what does “embedding new knowledge in old knowledge” mean with regard to learning an L2? This is, I presume, whether Dellar knows it or not, a reference to the van Kesteren et al (2014) article, but it’s hopelessly mangled. How is one supposed to tell when “new knowledge” gets “embedded” in “old knowledge”? What happens to the new and old knowledge? Do the two types of knowledge move from the different places they were to that place in the brain where new and old knowledge connect? How do they connect, or rather “embed”? Does the new knowledge sort of snuggle in with the old knowledge, or does it all become some different kind of knowledge? How does this process differ from learning something new? And then, since we’re interested in evidence, how do we test whether, in any particular case where a student explains to someone else how/why they got the answer to something, the “new knowledge” is “embedded” in the “old knowledge”?

We started with a 2013 meta-analysis of studies on learning techniques aimed at explicit learning in content-based courses like biology and educational psychology. We end, six years later, with someone tweeting a paraphrase of what they thought they heard someone else say about neuroscientific research in ELT. The hearer jumbles together four different techniques, two of which we know little about, and confidently asserts that teachers can use them to “help embed new in old”, whatever that means.

Fourth Round  

The next step is likely to be that the tireless globetrotter will continue on his travels, now able to assure gullible young teachers all over the world that his views on ELT are backed by “actual research” and that they represent the highest form of “real evidence-based teaching”.

Finally

The tweets that followed the conference suggest that Carol Lethaby should re-double her efforts to avoid being misunderstood, and that all of us should re-double our efforts to carefully scrutinise what the so-called ELT experts tell us.

References

Dunlosky, J., K. A. Rawson, E. J. Marsh, M. J. Nathan and D. T. Willingham (2013) Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14/1: 4−58.

Lethaby also mentioned these sources in her 2018 IATEFL presentation:

Roediger, H. L. and Pyc, M. A. (2012) Inexpensive techniques to improve education: applying cognitive psychology to enhance educational practice. Journal of Applied Research in Memory and Cognition, 1/4: 242−248.

van Kesteren, M. T. R., M. Rijpkema, D. J. Ruiter, R. G. M. Morris and G. Fernandez. (2014) Building on prior knowledge: schema-dependent encoding processes related to academic performance. Journal of Cognitive Neuroscience 26/10: 2250−2261.

SLA research findings which should affect ELT

In response to a tweet from David Cullen, here’s a summary of SLA research that I think needs to be taken more seriously by the ELT community.

From time to time one sees well known “experts” on ELT refer to SLA research. The standard message is that researchers know nothing about real-world classroom practice, and that most of their findings are either irrelevant or unreliable. A few trinkets from SLA research are trotted out by the ELT experts as evidence of their scholarship, and they include these:

  • Using the L1 is OK.
  • Teaching lexical sets is not OK.
  • Guessing from context is not a reliable way of “accessing meaning”.
  • Spaced repetition is a must.
  • Getting in the flow really helps learning.

Such accounts of the research are, I think, cynically frivolous, so, within the confines of a blog post, let’s take a slightly more serious look.

The empirical study of how people learn a second language started in the 1960s, and I’ve written a series of posts on the history of SLA research, which you will find in the menu on the right of this page. Here, I’ll concentrate on theories which look at the psychological process of learning, and in particular on three areas:

  1. Interlanguage: how the learner’s mental representation of the target language develops.
  2. The roles of explicit and implicit knowledge and learning.
  3. Maturational constraints on adult SLA, including sensitive periods.

1. Interlanguages

Studies of interlanguage development were given a framework by Selinker’s (1972) paper, which argued that L2 learners develop their own autonomous mental grammar (IL grammar) with its own internal organising principles. Subsequent work on the acquisition of English morphemes, then studies of the developmental stages of negation, word order and questions in English, and then Pienemann’s studies of the learning of German as an L2, in which all learners adhered to a five-stage developmental sequence (see Ortega, 2009, for a review), built up an increasingly clear picture of interlanguage development.

By the mid 1980s it was clear that learning an L2 is a process whereby learners slowly develop their own autonomous mental grammar with its own internal organising principles. Thirty years on, after hundreds more studies, it is well established that acquisition of grammatical structures, and also of pronunciation features and many lexical features such as collocation, is typically gradual, incremental and slow. Development of the L2 exhibits plateaus, occasional movement away from, not toward, the L2, and U-shaped or zigzag trajectories rather than smooth, linear contours. No matter what the order or manner in which target-language structures are presented to them by teachers, learners analyze the input and come up with their own interim grammars, the product broadly conforming to developmental sequences observed in naturalistic settings. They master the structures in roughly the same manner and order whether learning in classrooms, on the street, or both. To be perfectly clear: the acquisition sequences displayed in IL development have been shown to be impervious to explicit instruction, and the conclusion is that students don’t learn when and how a teacher decrees that they should, but only when they are developmentally ready to do so.

2. The roles of explicit and implicit knowledge and learning

Two types of knowledge are said to be involved in SLA, and the main difference between them is conscious awareness. Explicit L2 knowledge is knowledge which learners are aware of and which they can retrieve consciously from memory. It’s knowledge about language. In contrast, implicit L2 knowledge is knowledge of how to use language and it’s unconscious – learners don’t know that they know it, and they usually can’t verbalize it. (Note: the terms Declarative and Procedural knowledge are often used. While there are subtle differences, here I take them to mean the same as explicit and implicit knowledge.)

Now here’s the important thing: in terms of cognitive processing, learners need to use attentional resources to retrieve explicit knowledge from memory, which makes using explicit knowledge effortful and slow: the time taken to access explicit knowledge is such that it doesn’t allow for quick and uninterrupted language production. In contrast, learners can access implicit knowledge quickly and unconsciously, allowing it to be used for unplanned language production.

While research into explicit and implicit L2 knowledge started out looking almost exclusively at grammar, we now have plenty of evidence that these constructs also relate to other areas of language, such as vocabulary, pronunciation, and pragmatics. For example, while much vocabulary learning involves learning items by explicit form-meaning mapping (“table” is “mesa” in Spanish), very important aspects of vocabulary knowledge, such as collocations and lexical chunks, involve implicit knowledge.

Three Interface Hypotheses

It’s now generally accepted that these two types of language knowledge are learned in different ways, stored separately and retrieved differently, but disagreement continues among SLA scholars about whether there is any interface between the two. The question is: can learners turn the explicit knowledge they get in classrooms into implicit knowledge? Or can implicit knowledge be gained only implicitly, while explicit knowledge remains explicit? Those who hold the “No Interface” position argue that explicit knowledge can never become implicit. Others take the “Weak Interface” position, which assumes that there is a relationship between the two types of knowledge and that they work together during L2 production. Still others take the “Strong Interface” position, based on the assumption that explicit knowledge can and does become implicit.

The main theoretical support for the No Interface position is Krashen’s Monitor theory, which has few adherents these days. The Strong Interface case gets its theoretical expression from Skill Acquisition Theory, which describes the process of declarative knowledge becoming proceduralised, and is most notably championed by DeKeyser. This general learning theory clashes completely with evidence from L1 acquisition and with the interlanguage findings discussed above. The Weak Interface position seems right to me and to most SLA scholars.

Implicit Knowledge is the driver of IL development 

Whatever their differences, there is very general consensus among scholars that SLA has a great deal in common with L1 learning, and that implicit learning is the “default” mechanism of SLA. Whong, Gil & Marsden (2014) point out that implicit knowledge is in fact ‘better’ than explicit knowledge: it is automatic and fast – the basic components of fluency – and more lasting, because it’s the result of the deeper entrenchment which comes from repeated activation. The more that any speech act draws on implicit knowledge, the less we need to rely on explicit knowledge. Doughty (2003) concludes:

In sum, the findings of a pervasive implicit mode of learning, and the limited role of explicit learning …, point to a default mode for SLA that is fundamentally implicit, and to the need to avoid declarative knowledge when designing L2 pedagogical procedures.  

One more twist. Research shows that, although there are doubts about its usefulness, explicit knowledge about the language of the kind teachers habitually offer in class is easy to learn. Conversely, the implicit language knowledge that is gained from engagement in real communicative activities is relatively hard to learn – evidence from Canadian immersion courses, for example, shows that when learners are focused exclusively on meaning, there are parts of the target language that they just don’t learn, even after hundreds of hours of practice. This leads Loewen (2015) to say:

The fact that explicit knowledge is relatively easy to learn but difficult to use for spontaneous L2 production, and that, conversely, implicit knowledge is relatively difficult to learn but important for L2 production is, I feel, one of the most important issues in ISLA and L2 pedagogy. It is essential for L2 learners and teachers to be aware of the different types of knowledge and the roles that they play in L2 acquisition and production. The implication is that the teaching of explicit language rules will not, by itself, result in students who are able to communicate easily or well in the language.

3. Maturational constraints on adult SLA, including sensitive periods

Our limited ability to learn a second language implicitly (in stark contrast to the way we learn our L1 as young children) brings us to the third area of SLA research I want to look at. Long (2007), in an extensive review of the literature, concludes that there are indeed critical periods for SLA, or “sensitive periods”, as they’re called. For most people, the sensitive period for native-like phonology closes between ages 4 and 6; for the lexicon, between 6 and 10; and for morphology and syntax, by the mid-teens. This is a controversial area, and I’ll be happy to answer any questions you might ask, but there’s general consensus that adults are partially “disabled” language learners who can’t learn in the same way children do. Which is where explicit learning comes in – the right kind of explicit teaching can help adult students learn bits of the language that they are unlikely to learn implicitly. Long calls these bits “fragile” features of the L2 – features of low perceptual saliency (because they’re infrequent, irregular, semantically empty, communicatively redundant, or involve complex form-meaning mappings) – and he says these are likely to be learned late, or never, without explicit learning.

Implications

It seems that the best way teachers can help students to learn an L2 is by concentrating on giving them scaffolded opportunities to use the L2 in the performance of relevant communicative tasks. During the performance of these tasks, where enhanced and genuine texts, both written and spoken, provide the rich input required, teachers should give students help with aspects of the language they’re having problems with through brief switches to what Long calls “focus on form” – anything from quick explicit explanations to corrective feedback in the form of recasts.

A relatively inefficacious way of organising ELT courses is to use a General English coursebook. Here, the English language is broken down into constituent parts which are then presented and practiced sequentially following an intuitive easy-to-difficult criterion. The teacher’s main concern is with presenting and practicing bits of grammar, pronunciation and lexis by reading and listening to short texts, doing form-focused exercises, talking about bits of the language, and writing summaries on the whiteboard. The mistake is to suppose that whatever students learn from what they read in the book, hear from the teacher, say in response to prompts, read again on the whiteboard, and maybe even read again in their own notes will lead to communicative competence. It won’t.

The mis-use of SLA findings

Numerous prominent members of the ELT establishment use either carefully selected or misinterpreted research findings in SLA to support their views. Here are three quick examples.

Catherine Walter’s (2012) article has been quoted by many. She claims that, while grammar teaching has been under attack for years,

evidence trumps argument, and the evidence is now in. Rigorously conducted meta-analyses of a wide range of studies have shown that, within a generally communicative approach, explicit teaching of grammar rules leads to better learning and to unconscious knowledge, and this knowledge lasts over time. Teaching grammar explicitly is more effective than not teaching it, or than teaching it implicitly; that is now clear. … Taking each class as it comes is not an option. A grammar syllabus is needed.                 

No meta-analysis in the history of SLA research supports these assertions. Not one, ever.

Likewise, Jason Anderson (2016), in his article defending PPP, says

while research studies conducted between the 1970s and the 1990s cast significant doubt on the validity of more explicit, Focus on Forms-type instruction such as PPP, more recent evidence paints a significantly different picture.

His argument is preposterous and I’ve discussed it here.

The meta-analysis most frequently cited to defend the kind of explicit grammar teaching done by teachers using coursebooks is Norris and Ortega’s (2000) meta-analysis of the effects of L2 instruction, which found that explicit grammar instruction (Focus on FormS) was more effective than the more discrete focus on form (FoF) approach Long recommends, through procedures like recasts. (Penny Ur is still using this article to “prove” that recasts are ineffective.) However, Norris and Ortega themselves acknowledged, and others like Doughty (2003) reiterated, that the majority of the instruments used to measure acquisition were biased towards explicit knowledge. As they explained, if the goal of discrete FoF is for learners to develop communicative competence, then it is important to test communicative competence to determine the effects of the treatment; explicit tests of grammar don’t provide good measures of implicit and proceduralised L2 knowledge. Furthermore, the post-tests in the studies used in the meta-analysis were not only grammar tests, they were grammar tests done shortly after the instruction, giving no indication of the instruction’s lasting effects.

Newer, post-2015 meta-analyses have used much better criteria for selecting and evaluating studies. The meta-analysis carried out by Kang et al. (2018) concluded:

 implicit instruction (g = 1.76) appeared to have a significantly longer lasting impact on learning … than explicit instruction (g = 0.77). This finding, consistent with Goo et al. (2015), was a major reversal of that of Norris and Ortega (2000).

As for Ur’s typically sweeping assertion that there is “no evidence” to support claims of TBLT’s efficaciousness, a meta-analysis by Bryfonski & McKay (2017) reported:

Findings based on a sample of 52 studies revealed an overall positive and strong effect (d = 0.93) for TBLT implementation on a variety of learning outcomes. … Additionally, synthesizing across both quantitative and qualitative data, results also showed positive stakeholder perceptions towards TBLT programs.

Enough, already! I hope this review of some important areas of SLA research and my comments will generate discussion.

References 

Bryfonski, L. and McKay, T. H. (2017) TBLT implementation and evaluation: A meta-analysis. Language Teaching Research.

Corder, S. P. (1967) The significance of learners’ errors. International Review of Applied Linguistics 5, 161-169.

Doughty, C. J. (2003). Instructed SLA: Constraints, Compensation, and Enhancement. In C. J. Doughty, & M. H. Long (Eds.), The Handbook of Second Language Acquisition (pp. 256-310). Malden, MA: Blackwell.

Kang, E., Sok, S. and Hong, Z. (2018) Thirty-five years of ISLA research on form-focused instruction: A meta-analysis. Language Teaching Research.

Krashen, S. (1977) The monitor model of adult second language performance. In Burt, M., Dulay, H. and Finocchiaro, M. (eds.) Viewpoints on English as a second language. New York: Regents, 152-61.

Loewen, S. (2015) Introduction to Instructed Second Language Acquisition. New York: Routledge.

Long, M. (2007) Problems in SLA. Lawrence Erlbaum.

Ortega, L. (2009) Sequences and processes in language learning. In Long, M. and Doughty, C. (eds.) Handbook of Language Teaching. Oxford: Wiley.

Seliger, H. (1979) On the nature and function of language rules in language teaching. TESOL Quarterly 13, 359-369.

Selinker, L. (1972) Interlanguage. International Review of Applied Linguistics 10, 209-231.

Wilkins, D. A. (1976) Notional Syllabuses. Oxford: Oxford University Press.

Whong, M., Gil, K.H. and Marsden, H. (2014) Beyond paradigm: The ‘what’ and the ‘how’ of classroom research. Second Language Research 30(4), 551-568.

Conference Round Up

The ELT community is surely fortunate to have such an experienced core group of experts to help them stay at the cutting edge of their profession, and perhaps doubly fortunate that this elite band so regularly and skillfully re-package and update their well-wrought, by now canonical, view of ELT. A few cynical discontents (“losers”, as one leading member of the elite so perceptively calls them) paint a picture of a commercially promoted cabal of head-in-the-sand, unscholarly publishers’ agents peddling the same worn-out baloney at one conference after another, year after year, in a way so vulgar and predictable that it makes the Eurovision Song Contest look classy and innovative. But away with the views of these pathetic moaners! “Progress Through Continuity” is the message which clearly resonates with teachers, so let’s take a look at the exciting plenaries soon to be served up by our experts as they lead us along carefully calibrated steps towards a better, more nuanced, understanding of coursebook-driven ELT.

Below, details of upcoming events in June.

15th June The Future of English Language Teaching 2019 Conference

Plenary: Penny Ur. Title: Applied linguistics research – A teacher’s perspective

Who better than Penny Ur, OBE, author of so many books and newspaper articles about ELT, to tell us about applied linguistics research! Ur has no problem confessing that she doesn’t know much about the research because she’s too busy with the nitty-gritty classroom stuff; as she so acutely observes, “researchers are not practitioners”, so what do they know, right? On this occasion Ur will skillfully re-package a plenary she first gave in 1992 about the complete irrelevance of SLA research to classroom practice, this time introducing data from a survey she read about in Piers Morgan’s column in The Mail on Sunday about how East London criminals pick up a smattering of Spanish while “holidaying” in Marbella, without any help at all from smart-alec academics.

21st June Foreign Language Educators Joint Conference in Collaboration with IATEFL TTEd SIG and TESOL Turkey

Plenary: Andy Hockley. No title given

A regular plenary speaker at ELT conferences over the past 15 years, Hockley’s mission is to help everybody in ELT with entrepreneurial aspirations to reach their goal. He does this by cleverly ringing the changes on a series of well-used homilies, starting from this gem: “It is alignment between people’s values and those of the organization which give meaning to work”. Just to bring out the true force of this, perhaps already worn-out, cliché, Hockley quotes Podolny, Khurana & Hill-Popper (2005): “To the extent that we understand our work within an organization as contributing to a goal or ideal that we value, our work will have meaning”. Amen to that. Hockley also talks perceptively about how to avoid being in charge of “a rudderless ship”, dealing instead “in a purposeful and coherent manner with management issues”. One can only wish that today’s UK politicians could heed Hockley’s wise counsel.

Hockley uses Carter McNamara’s organisational life cycles model (Birth, Youth, Midlife, Maturity) not just to reflect on his own life, but also to guide aspiring money-makers through managing the transition from a small organisation to a larger one. What’s great about Hockley is that he turns what might look like the dross of a typical pre-university business management course into a coherent set of easily grasped homilies and general pointers which offend nobody. The uncritical adoption of an uncaring neoliberal ideology which consolidates the power and wealth of a small minority to the detriment of the rest of society is cleverly understated, as are the realities of the ELT industry, where most workers are treated even worse than they are in most other private industries devoted to maximising profits. What we get instead is optimism, hope, and vision.

On this occasion, Hockley will skillfully re-package a plenary he first gave in 1992 about how to go from working as a zero-hours-contract teacher at the “English in Two Weeks” school in Brighton to owning a multinational chain of “English in Two Weeks” academies with a strong online presence.

26-28 June International Conference on Creative Teaching, Assessment and Research in the English Language, Malacca, Malaysia

Plenary Speaker: Diane Larsen-Freeman. In attendance: Alan Maley

It is entirely fitting that neither the title nor the content of this plenary, so eagerly awaited by the legion of Larsen-Freeman’s fans, has been made public. It can be no coincidence that the great illusionist Houdini refused to say what would happen at his own shows, or that he too often had problems deciding what to wear on stage. Who knows what this illustrious maestra will say; even her most ardent fan, our very own Scott Thornbury (how long before he himself, facing the daunting challenge of 2,997 appearances on stage, attempts the Chinese Water Torture Cell trick!), will not divulge.

What we do know is that Larsen-Freeman will skillfully re-package a plenary she first gave in 1992 about the sociolinguistically framed re-appraisal of the significance of patterns found in the remarks made by 3-year-old children when shown recently buttered toast. This time, it’s rumoured that data from that original study will be compared to data from a 2018 study of a 5-billion-word corpus of Larsen-Freeman’s presentations. The focus is on the frequency of use of the first person singular subjective case pronoun. (Spoiler: the kids lose by a country mile.)

An exciting first is that the one and only Alan Maley will be on stage during the presentation. State of the art attention-tracking sensors will measure how often Maley falls into deep sleep during the plenary.

12th June ESOL Practitioner Conference, City University Glasgow

Plenary Speaker: Hugh Dellar. Title: Grammar is dead! Long live grammar!

We leave the best till last. And who better than Hugh Dellar, a tireless globetrotter who already this year has clocked up enough air miles to significantly shorten the life of our planet, to talk about grammar! The tantalising title of his talk echoes his own dialectical thoughts on grammar, all done without any more knowledge of the subject than you could put on the back of an envelope, to use one of his top 100 favourite lexical chunks. Starting from the thesis that grammar is all, Dellar moved to the antithesis that grammar is nothing, and then to the daring synthesis that it’s all or nothing, depending on the publisher who’s paying you. In his plenary, due to lack of time, Dellar is unlikely to answer the question of how the prime mover of the lexical approach, the arch critic of grammar-based coursebooks like Headway, came to write a grammar-based coursebook like Roadmap, but what he’s guaranteed to do is make everybody question their grip on English. “What the fuck is he talking about?” is, once again, the question that will be buzzing round the auditorium as Dellar’s plenary hopelessly unravels. Oh how I wish I could be there!

On this occasion, Dellar will skilfully re-package a plenary he first gave in 1992 about the wealth of useful lexical chunks that can be incorporated into a fake coursebook text where somebody tries to order a “Full English” breakfast in McDonald’s. The updates include the grammaring of “Can I get…” and “Come again?”.

Well that’s all for now, folks. Enjoy the show, and here’s to the next round-up of conferences in the never-ending ELT circus.

Russ Mayne on Experts and Brian Tomlinson

In his latest blog post, tooth fairy expertise, Russ Mayne reminds us to be wary of uncritically accepting what we’re told by so-called experts. This time his target is Brian Tomlinson, and the argument goes like this (quotes in italics):

Step 1: Being a tooth fairy expert means that you actually don’t have any expertise. (Wrong, but never mind.)

Step 2: Somebody on Twitter said that Brian Tomlinson thinks “PPP is the worst way to learn a language”. (Note that Mayne doesn’t verify that Tomlinson actually said such a thing.)

Step 3: Tomlinson claims to be, and is widely-accepted as, an expert on ELT materials development.

Step 4: Tomlinson believes in the false notion of left-brained and right-brained learners. He also promotes writing materials which cater to students’ perceptual learning styles. He also seems to accept the idea that the brain is somehow underused and that we can unlock its full potential through good teaching.

Step 5: I can’t help but think we tend to listen to experts when they tell us things we want to hear. If we don’t like PPP then an expert saying it’s ‘the worst’ will sound very convincing to us. (Note this spurious “move” in the argument’s development.)

Now we get to the crunch. Mayne asks the pertinent question:

Step 6:  So is Tomlinson right that PPP is ‘the worst way to learn a language?’

But instead of answering it, he prefers to ask another question:

Step 7: Is PPP worse than, say, Suggestopedia? (This clumsy attempt to dodge the question pinpoints the weakness of Mayne’s post.)

Step 8: Mayne cites two authors to show that Suggestopedia is so much hogwash.

Step 9: Conclusion: So according to Tomlinson PPP ‘booo’, quantum kinaesthetic left-brained teaching ‘hurrah!’ Forgive me if I’m skeptical. Jason Anderson has done some interesting work on PPP (Mayne gives links to 2 articles) and he doesn’t seem to think it’s the worst way to teach English. For all it’s flaws, I’d put my money on PPP producing better results than Suggestopedia. I’m no expert though.  

Discussion 

Mayne’s modest confession that he’s no expert ends this lamentable display of poor scholarship and bad argumentation. If Mayne wants to argue the case for PPP, he should do so. If he wants to point out what he sees as weaknesses in Tomlinson’s understanding of second language learning, or weaknesses in Tomlinson’s criticisms of coursebooks like Headway and Outcomes, then likewise, he should do so. But to argue that Tomlinson’s reputation as an expert is what makes us agree with his criticisms of PPP, and that what he says about Lozanov is evidence enough that his opinion can’t be trusted, is both lazy and intellectually dishonest.

Mayne quotes from page 20 of Tomlinson’s (2014) book on Materials design. The first chapter of the 2014 book gives a carefully-considered discussion of Tomlinson’s views, which Mayne makes absolutely no attempt to fairly summarise. There are, in my opinion, parts of Tomlinson’s view of SLA which rely on mistaken views of the psychological process of second language learning, but they deserve more scrupulous attention than Mayne seems capable of giving them. Tomlinson has spent 30 years fighting against what he sees as the suffocating effects of the PPP methodology which today’s General English coursebooks encourage teachers to adopt. For Mayne to pick over Tomlinson’s work in order to offer up this sorry mess is pathetic.

As for Mayne’s towering defence of PPP, he says: it’s probably not as bad as Suggestopedia! He adds links to two articles by Jason Anderson, both of which are almost as unscholarly and badly argued as this piece by Mayne. See here for my review of them.

A Note on Popper’s Demarcation line between science and non-science

This is in response to Mura Nava’s request for “a short-winded view of the sci / non-sci” distinction, after I’d suggested that Raphael Berthele’s article was long-winded and confused.

Popper says falsifiability is the hallmark of a scientific theory, and that it allows us to draw a demarcation line between science and non-science: if a theory doesn’t make predictions that can be falsified, it’s not scientific. According to such a demarcation, astronomy is scientific and astrology is not: although astrologers can point to millions of examples of true predictions, they deny that false predictions constitute a challenge to their theory.

Popper insists that in scientific investigation we start with problems, not with empirical observations, and that we then leap to a solution of the problem we have identified – in any way we like. This second stage is crucial to an understanding of Popper’s epistemology: when we are at the stage of coming up with explanations, with theories or hypotheses, then, in a very real sense, anything goes. Inspiration can come from lowering yourself into a bath of water, being hit on the head by an apple, or by imbibing narcotics. It’s at the next stage of the theory-building process that empirical observation comes in, and, according to Popper, its role is not to provide data that confirm the theory, but rather to find data that test it.

Empirical observations should be carried out in attempts to falsify the theory: we should search high and low for a non-white swan, for an example of the sun rising in the West, etc. The implication is that the theory has to be formulated in such a way that empirical tests can be carried out: there must be, at least in principle, some empirical observation that could clash with the explanations and predictions that the theory offers. If the theory survives repeated attempts to falsify it, then we can hold on to it tentatively, even though we’ll never know for certain that it’s true. The bolder the theory (i.e. the more it exposes itself to testing, the more wide-ranging its consequences, the riskier it is) the better. If the theory doesn’t stand up to the tests, if it’s falsified, then we need to re-define the problem, come up with an improved solution, a better theory, and then test it again to see if it stands up to empirical tests more successfully. These successive cycles are an indication of the growth of knowledge (Popper, 1974).

Popper (1974: 105-106) gives the following diagram to explain his view:

P1 -> TT -> EE -> P2

P = problem    TT = tentative theory    EE = Error Elimination through empirical experiments which test the theory

We begin with a problem (P1), which we should articulate as well as possible.  We then propose a tentative theory (TT), that tries to explain the problem. We can arrive at this theory in any way we choose, but we must formulate it in such a way that it leaves itself open to empirical tests. The empirical tests and experiments (EE) that we devise for the theory have the aim of trying to falsify it. These experiments usually generate further problems (P2) because they contradict other experimental findings, or they clash with the theory’s predictions, or they cause us to widen our questions. The new problems give rise to a new tentative theory and the need for more empirical testing.

Popper thus gives empirical experiments and observation a completely different role to the one given to them by the empiricists and positivists: their job now is to test a theory, not to prove it, and since this is a deductive approach, it escapes Hume’s famous problem of induction. Popper takes advantage of the asymmetry between verification and falsification: while no number of empirical observations can ever prove a theory is true, just one such observation can prove that it is false. All you need is to find one black swan and the theory “All swans are white” is disproved.

Popper claimed that his research methodology, and the epistemology that informs it, is rationalist because it insists that we arrive at knowledge of the world, and in particular, that we build scientific theories to explain aspects of our world, by using our minds creatively. However, Popper also argues that we need to test these a priori ideas empirically. Hence, the two central premises of Popper’s argument are:

  • Knowledge advances by trying to solve problems in a rational way.
  • The role of empirical data is to test theories.

Positivism 

It’s worth noting here that “positivism” is not a synonym for a scientific method which relies on logic and empirical tests. It was a philosophical movement culminating in the writings of the Vienna Circle, and is, in my opinion, a good example of philosophers stubbornly marching up a blind alley. It is a fundamentally mistaken project, as Popper has, I believe, shown, and as Wittgenstein himself recognised in his later work (see Wittgenstein, 1953). Those critics of mainstream SLA research who label their opponents “positivists”, or who argue against “positivist science”, are either ignorant of the history of positivism or are making a straw man case which no present-day researcher in the field of SLA adopting a rationalist position need defend.

A clear distinction must also be made between empiricism as a philosophical position (a position taken by Skinner, for example, and currently flirted with by emergentists) and empirical evidence – observations that can be measured and checked. Popper rejects the strict epistemological claims of empiricism, and claims instead that scientific theories must have some empirical content and be open to empirical tests.

References 

Popper, K. R. (1959) The Logic of Scientific Discovery.  London: Hutchinson.

Popper, K. R. (1963) Conjectures and Refutations.  London: Hutchinson.

Popper, K. R. (1972) Objective Knowledge.  Oxford: Oxford University Press.

Popper, K. (1974) Replies to my critics. In P.A. Schilpp (ed.), The Philosophy of Karl Popper. La Salle, Ill.: Open Court.

Wittgenstein, L. (1953) Philosophical Investigations. Translated by G.E.M. Anscombe. Oxford: Basil Blackwell.

The Encounter and the end of ELT

Rob fuck-and-shit Sheppard’s version on Twitter of what I said to Scott at the recent InnovateELT conference needs correcting. Here’s what really happened:

Me: Scott!

Scott:  David! How good to see you. Loved your talk; I wish I could have been there.

Me:  You’re too kind. I’m Geoff, actually.

Scott:  Deaf? That explains everything!

Seriously though, Scott’s talk succinctly hit the nail on the head: simultaneous machine translation software radically affects the need to learn English.

In today’s world, English is a lingua franca, and that’s why close to 2,000,000,000 people are learning it. Devices are being designed that make it possible for speakers of different languages to communicate with each other in real time without the need to use English. Such is the rate of progress in this area of software and hardware development that in 10 years’ time, tourists, lawyers, doctors, bankers, etc., will simply not need to communicate with each other in English any longer. Much (not all, but a significant amount) of the communication that today is done in English by non-native English speakers will be done by people speaking their own language and having it translated by machines. English as a lingua franca might remain, but the need for most people to learn English in the way they do today will vanish.

So: functional needs are disappearing. But social needs remain. For the caring professions, human communication remains beyond the reach of machines. And gaming too – if you like that stuff. The implications are profound for all of us. I won’t go on. Here’s Scott. What I find touching about this video is Scott’s diffidence: the usual confident demeanour gives way to a hesitant, almost reluctant style. It’s as if he really doesn’t want to deliver his awful message. But he’s right, and we need to deal with the truth of what he’s saying right now.

What’s the Outcome of a Roadmap?

Selling coursebooks is a multi-billion-dollar business where profit, not educational excellence, is the driving criterion.

Pearson’s latest series of English coursebooks is called “Roadmap”, and it promotes its latest star product in just the same way as if it were selling toothpaste: it makes the same sort of absurd claims for its new, eight-level general English course for adults as Colgate makes for its latest re-branded toothpaste.

Just as Colgate’s new improved toothpaste has no demonstrated new beneficial effect on oral hygiene, so the Roadmap series has no new beneficial effect on L2 learning. There’s nothing new or improved to be found and no reason whatsoever to think that Roadmap is any better than any other coursebook.

Pearson spends millions of dollars on carefully crafted promotional bullshit aimed at maximising sales and profit with scant regard for the truth or for educational values. The criticisms made of coursebook-driven ELT apply just as forcefully to this series as to previous ones. The only difference is the way it’s been produced.

Everybody involved in making this series is demeaned. Once the project – Roadmap – is given the green light, some high-up executive in Pearson is put in charge and a “team” is formed.  All the work is cut up and dished out to people – most of them working on zero-hours contracts so as to minimise costs. Everybody works within the strict, suffocating confines of Pearson’s over-arching GSE framework. Everything they write is strictly scrutinised and revised to make sure that all texts and exercises use words and structures in line with the finely granulated, innumerable steps all students must take to ensure “real progress” in Pearson’s frightening world. It’s a badly paid, miserable, life-sucking nightmare. Still, somebody’s got to do it, right?

In its huge publicity campaign, Pearson is paying for authors to fly to different countries to promote the series. The authors,  dubious stars in the ELT firmament who know next to nothing about language learning, must agree to do whatever their paymasters dictate, and they’ll soon appear in promotional videos, standing in front of iconic landmarks like the sublime clock in Prague, ironically clutching a hopeless Roadmap, mouthing pointless platitudes which, in the vision of some coked-up marketing guru, prove that Roadmap is the latest must-have coursebook, regardless of the fact that it consists of an overpriced collection of dud materials leading students up a dystopian garden path to nowhere.     

Pearson claims that the Roadmap series is unique because “Every class is different, every learner is unique”. This is, quite simply, bullshit: an empty load of marketing nonsense. Nothing substantial justifies this vaunted claim: the same old crap is delivered in the same old way, the only difference being that skills development is marginally separated. The Roadmap series is the result of a manager delivering a corporate vision: lock-step progression in pseudo-scientifically measured steps towards a reified, mistaken, commodified version of language proficiency. Look at the sample units and weep. Roadmap is Pearson’s version of Colgate’s “everything’s-new-but-in-fact-nothing’s-changed” toothpaste – the latest attempt to package and sell a useless product to a gullible public who, were they better informed, would reject it as the preposterous crap that it is.

Two of the eight accredited authors of the Roadmap series are also the co-authors of the Outcomes series. Strangely, Andrew Walkley, one of the versatile duo, has recently been explaining how the new edition of Outcomes Beginner is a “different kind of Beginner-level book”. It’s not just based on Lewis’ lexical approach, with all the criticisms of grammar-based coursebooks that this implies, it also uses a “spiral syllabus” which, Walkley confidently claims, recycles material far more efficiently than the rest. Such are the conflicting claims made for the two coursebook series that one wonders how the same authors can put their names to both of them. Should a coursebook be made up of 10 units containing “three core lessons featuring grammar, vocabulary and pronunciation”, or should it, as Walkley suggests, reject “having one block followed by another block” in favour of teaching “a little about form A, followed by a little bit on form B (perhaps whilst also re-using what we learned about form A), followed by a little on form C (perhaps recycling something of A and/or B), before we return to study something more about form A, etc.”? It reminds me of Groucho Marx’s quip: “Those are my principles, and if you don’t like them, I have others”.