A Summary

This blog is mostly about the failure of teacher educators (TEs) in ELT to do their job well.

I here summarise three findings from psycholinguistic SLA research which have implications for ELT, and then review how some of today’s leading teacher educators have failed to deal with these findings.

Part 1: Implications for ELT of SLA research

1. Interlanguages

By the mid-1980s, research had made it clear that learning an L2 is a process whereby learners slowly develop their own autonomous mental grammar with its own internal organising principles. Today, after hundreds more studies, it is well established that acquisition of grammatical structures, and also of pronunciation features and many lexical features such as pre-fabricated lexical chunks, collocation and colligation, is typically gradual, incremental and slow. Development of the L2 exhibits plateaus, occasional movement away from, not toward, the L2, and U-shaped or zigzag trajectories rather than smooth, linear contours. No matter the order or manner in which target-language structures are presented to them by teachers, learners analyze the input and come up with their own interim grammars, the product broadly conforming to developmental sequences observed in naturalistic settings. The acquisition sequences displayed in IL development have been shown to be impervious to explicit instruction, and the conclusion is that students don’t learn when and how a teacher decrees that they should, but only when they are developmentally ready to do so.

2. The roles of explicit and implicit knowledge and learning

Two types of knowledge are said to be involved in SLA, and the main difference between them is conscious awareness. Explicit L2 knowledge is knowledge which learners are aware of and which they can retrieve consciously from memory. It’s knowledge about language. In contrast, implicit L2 knowledge is knowledge of how to use language and it’s unconscious – learners don’t know that they know it, and they usually can’t verbalize it. (Note: the terms Declarative and Procedural knowledge are often used. While there are subtle differences, here I take them to mean the same as explicit and implicit knowledge of the L2.)

In terms of cognitive processing, learners need to use attentional resources to retrieve explicit knowledge from memory, which makes using explicit knowledge effortful and slow: the time taken to access explicit knowledge is such that it doesn’t allow for quick and uninterrupted language production. In contrast, learners can access implicit knowledge quickly and unconsciously, allowing it to be used for unplanned language production.

Three Interface Hypotheses

While it’s now generally accepted that declarative and procedural knowledge are learned in different ways, stored separately and retrieved differently, disagreement among SLA scholars continues about this question: Can the explicit knowledge students get from classroom instruction be converted, through practice, into implicit knowledge? Those who hold the “No Interface” position answer “No”. Others take the “Weak Interface” position which argues that there is a relationship between the two types of knowledge and that they work together during L2 production. Still others take the “Strong Interface” position, based on the assumption that explicit knowledge can and does become implicit, and that explicit explanation of the L2 should generally precede practice. In this view, procedural knowledge can be the result of declarative knowledge becoming automatic through practice.

The main theoretical support for the No Interface position is Krashen’s Monitor theory, which has few adherents these days, despite the reappraisal discussed in my previous post. The Strong Interface case gets its theoretical expression from Skill Acquisition Theory, which describes the process of declarative knowledge becoming proceduralised and is most notably championed by DeKeyser. This general learning theory clashes with evidence from L1 acquisition and with the interlanguage findings discussed above. The Weak Interface position is adopted by most SLA scholars, including those who support the emergentist theory of SLA championed by Nick Ellis. Ellis argues that adult learners of English as an L2 are affected by their L1 in such a way that they don’t implicitly learn certain features of the L2 which clash with their L1 (see the section on maturational constraints below). Consequently, in this view, explicit instruction of a certain sort can draw attention to these features and thereby “re-set the dial”, allowing the usual implicit learning of further instances of these features to reinforce procedural knowledge.

Whatever their differences, there is today a consensus among scholars that implicit learning is the “default” mechanism of SLA. Whong, Gil & Marsden (2014) conclude that implicit knowledge is in fact ‘better’ than explicit knowledge: it is automatic and fast – the basic components of fluency – and more lasting because it’s the result of the deeper entrenchment which comes from repeated activation. Doughty (2003) concludes: “In sum, the findings of a pervasive implicit mode of learning, and the limited role of explicit learning …, point to a default mode for SLA that is fundamentally implicit, and to the need to avoid declarative knowledge when designing L2 pedagogical procedures.”

Neither Whong et al. nor Doughty challenges the important role that explicit knowledge plays in SLA. However, what they firmly reject, as do most of their colleagues, is the view that declarative knowledge is a necessary first step in the SLA process.

3. Maturational constraints on adult SLA

The limited ability of adults to learn a second language implicitly as children do brings us to “Critical Period” research. Long (2007), in an extensive review of the literature, concludes that there are indeed critical periods for SLA, or “sensitive periods”, as they’re now called. For most L2 learners, the sensitive period for native-like phonology closes between the ages of 4 and 6; for the lexicon (particularly lexical chunks, collocation and colligation) between 6 and 10; and for morphology and syntax by the mid-teens. While this remains a controversial area, there’s general consensus that adults are partially “disabled” language learners who can’t learn in the same way children do. And that’s where explicit learning comes in. As suggested above, the right kind of explicit teaching can help adult students learn bits of the language that they are unlikely to learn implicitly. Long calls these bits “fragile” features of the L2 – features of low perceptual saliency (because they’re infrequent, irregular, semantically empty, communicatively redundant, or involve complex form-meaning mappings) – and he says these are likely to be learned late, or never, without explicit learning.

Implications

From all this research, a picture of ELT emerges where teachers help students to develop their interlanguages by giving them scaffolded opportunities to use the L2 in communicative activities where the focus is on meaning. The Dogme approach does this, as do some types of immersion and CLIL courses, and so do strong versions of Task-based Language Teaching. During the performance of tasks, modified, enhanced, multi-modal written and spoken texts provide the rich input required, and teachers give students help with aspects of the language that they’re having problems with by brief switches to what Long calls “focus on form” – reactive attention to formal aspects of the L2 that the students indicate, through their production, are impeding their progress. In most forms of TBLT, tasks are divided into three stages – pre-task → task → post-task – and, as a general rule, we can say that the more priority is given to explicit instruction, the weaker the version of TBLT is.

From the research discussed, it follows that a relatively inefficacious way of organising ELT courses is to use a General English coursebook. Here, the English language is broken down into constituent parts or “items” which are then contextualized, explained, and practiced sequentially, following the scale laid down in the Common European Framework of Reference for Languages (CEFR), which is based not on empirical research into interlanguage development, but rather on teachers’ intuitive ideas of an easy-to-difficult progression in L2 learning. The teacher’s main concern is with explaining and practicing bits of grammar, pronunciation and lexis by reading and listening to short texts, studying boxes which summarise grammar points, doing follow-up exercises, talking about bits of the language, giving summaries, engaging in IRF (Initiation-Response-Feedback) exchanges with students, and then monitoring students’ activities which are supposed to practice what has been taught. Typically, in such courses, teachers talk for 70%+ of the time, and students’ speaking turns last for less than a minute.

For example, if we look at Unit 2 from Outcomes Intermediate, we see this:

  1. Vocab. (feelings) →
  2. Grammar (be, feel, look, seem, sound + adj.) →
  3. Listening (How do they feel?) →
  4. Developing Conversations (Response expressions) →
  5. Speaking (Talking about problems) →
  6. Pronunciation (Rising & falling stress) →
  7. Conversation Practice (Good / bad news) →
  8. Speaking (Physical greetings) →
  9. Reading (The man who hugged) →
  10. Vocabulary (Adj. Collocations) →
  11. Grammar (ing and ed adjs.) →
  12. Speaking (based on reading text) →
  13. Grammar (Present tenses) →
  14. Listening (Shopping) →
  15. Grammar (Present cont.) →
  16. Developing conversations (Excuses) →
  17. Speaking (Ideas of heaven and hell).

(Note that “Developing Conversations” are not oral activities.) Given that teachers must cover this unit in approx. 10 hours, and given the amount of work students are expected to do studying the language, how much opportunity will students get to use the language for themselves in spontaneous communicative exchanges which push their interlanguage development? Like most General English coursebooks, Outcomes Intermediate focuses on explicit teaching, based on the false assumption that students will learn what they’re taught in this way. The most usual defense of coursebooks (apart from their convenience) is that they are “adapted” in a myriad of ingenious ways by teachers. Mishan (2021) cites Bolster’s (2014, 2015) study of teachers using an English for academic purposes coursebook, which showed a spread of “25% to 100% of changes made to the published material, with an average percentage of adaptation of 64.5” (p. 20). If only around 35% of the coursebook’s content is used, one wonders just how convenient coursebooks really are! Teachers are to be congratulated for the way they ameliorate the deficiencies of coursebooks, but it remains the case that they are forced to follow the synthetic syllabus laid down in the coursebook they use, which means they are making impossible demands of students and spending far too much time on explicit teaching.

In brief, research suggests that L2 learning is mostly a process of the unconscious development of interlanguages, best helped by giving students opportunities to use the language in such a way that they work out for themselves how the L2 works. Teachers can best help this development by following a syllabus which supplies rich input and interesting, relevant tasks, and which counts on timely feedback and support from the teacher. The implication is that, when it comes to ELT, using an analytic syllabus will be more efficacious than using the synthetic syllabuses implemented in General English coursebooks. (For more on synthetic versus analytic syllabus types, see the two posts ‘Synthetic and Analytic Syllabuses 1’ and ‘Synthetic and Analytic Syllabuses Part 2’. See also the post ‘Why Teach Grammar’.)

Compare these two views. Carroll (1966: 96) articulated the “old” view:

Once the student has a proper degree of cognitive control over the structure of a language, facility will develop automatically with the use of the language in meaningful situations.

Hatch (1978: 404) was one of the first scholars to articulate the current view:

Language learning evolves out of learning how to carry on conversations. One learns how to do conversation, one learns how to interact verbally, and out of this interaction syntactic structures are developed.

Hatch’s work in SLA research was influential in promoting the communicative language teaching approach, an exciting new flame which burned brightly for a few years in the 1980s, only to be snuffed out by the arrival of modern coursebooks in the early 1990s. 

Part 2: The Contribution of Teacher Educators  

Teacher educators teach the teachers: they’re the purveyors of today’s lamentable Second Language Teacher Education (SLTE) programmes. They give “pre-service courses” for those wanting to start a teaching career, and “in-service courses” for those already teaching. Given the British Council’s (2015) conservative estimate of 12 million teachers working in ELT, training them is obviously a multi-billion dollar industry. The most popular pre-service courses in many parts of the world are CELTA and Trinity College’s Cert TESOL. Neither of these courses gives any serious attention to how people learn an L2 – the SLA research findings outlined above are largely ignored. Both concentrate on the practical job of preparing teachers, as best they can in the limited time available, to implement the synthetic syllabuses used in the vast majority of schools and institutions offering courses of English as an L2. While the teacher educators who run these courses are not obliged to recommend using a coursebook, in practice most of them do, and they use coursebooks for the teaching practice modules. In brief, both courses rest on the unquestioned, but demonstrably false, assumption that explicit teaching of the formal elements of the L2 is the key to efficacious teaching.

In the USA, China and other countries, teachers need to have a university degree and then do a postgraduate pre-service course. Some do a Masters in TEFL or TESOL, while others do a postgraduate Certificate or Diploma. In these programmes, more attention is paid to second language learning, but there is enormous variety among the programmes, making it difficult to generalise. Certainly in the USA, the pre-service courses seem more likely to have a positive effect on teaching practice than CELTA or the Cert TESOL. In China and other countries training non-native speaker (NNS) teachers, it seems that once they start their jobs, teachers often ignore what they were told about the importance of implicit learning and the value of a communicative language teaching approach. Two explanations are suggested. First, there is a strong tendency among teachers to teach the way that they themselves were taught. Second, most NNS teachers admit to having difficulties expressing themselves accurately and fluently in English. Ironically, their difficulties mainly spring from the way they were taught, but still, the combination of bias and insecurity pushes teachers to adopt a “teacher tells the class about the language” approach, where most of the time is dedicated to using a coursebook to instill declarative knowledge about English grammar, pronunciation and vocabulary.

As to in-service training, often referred to as Continuous Professional Development (CPD), there are literally thousands of private commercial concerns offering courses in every aspect of ELT, making this another multi-billion dollar part of the ELT industry.

Who are the teacher educators (TEs)? Right at the top, we have figures such as David Nunan and Jack Richards, both successful academics who have, over the past 40+ years, given hundreds of university courses and hundreds of plenaries at international conferences. They have worked as consultants for national governments, written more than 20 books each covering various aspects of learning and teaching an L2, and they have also both written more than a dozen series of General English coursebooks, some aimed specifically at the huge, expanding Chinese market. Both are multi-millionaires. Richards was always conservative, while Nunan only slowly grew to be so. In the 1980s at least, Nunan was an articulate, innovative scholar, as can be appreciated in some of the articles in his Learner-Centered English Language Education collection. Nunan also supervised the PhD dissertations of many who went on to make innovative advances in theories of SLA and in ELT practices. I attended courses given by Nunan which pushed me towards a TBLT approach, and I was impressed with his scholarship and his unfailing willingness to shoot the breeze with his students.

Whatever their academic records, both Richards and Nunan made significant contributions to the new generation of coursebooks in the late 1990s, when publishers responded to new conditions with a multi-million-pound revamp that ushered in the ‘global coursebook’ – a new, glossier, multi-component package aimed at the global market, but often carefully tweaked for more local teaching contexts. This “advance” effectively put an end to any version of CLT worth the name. To my knowledge, in the last 20 years neither Richards nor Nunan has given any courses on, or made any serious attempts to promote an interest in, the mounting evidence from SLA which I outlined above. As a result, I think they are partly responsible for the reactionary, commercially-driven character of current SLTE.

The majority of today’s most successful TEs are, like Richards and Nunan, coursebook writers. Alas, they have to content themselves with six-figure incomes – the money coming from their coursebook series is no longer enough to make them millionaires. This is thanks to publishers’ new business plans, where an overseeing editor designs the coursebook series and its components, and then farms out the work to the lucky winners who are chosen to do the real work for scraps. In exactly the same way as most employers in our neoliberal world treat their employees, the editor commissions “independent collaborators” to write various bits of the coursebooks following strict editorial guidelines for a set fee, and that’s all they see of the pie. While Richards and Nunan, like Mr. and Mrs. Soars of Headway fame, have already banked millions of dollars from sales of their coursebooks, and the royalties still roll in, more recent TE coursebook writers are less fortunate. They make a small fraction of the money that used to be made from a best-selling coursebook series, and they now have to fight in a much more competitive market than their predecessors when trying to boost their incomes by writing supplementary materials and “How to Teach” books. Like their predecessors, they get further monies from fees paid to them for a wide range of CPD offerings, ranging from conference plenaries to presentations, workshops and short courses on “How to improve your teaching” all over the planet, sold to the highest bidder.

Two examples of today’s TEs are Jeremy Harmer and Hugh Dellar. Harmer has published more than 30 books on ELT, and made a few rather unsuccessful attempts to get into the coursebook market. He is often referred to as “El Maestro”. His seminal work, The Practice of English Language Teaching, is now in its 5th edition, has sold millions, and is required reading for teachers doing not just CELTA but also DELTA courses. The book is also listed in the bibliographies of most Masters courses in TESOL / TEFL offered by universities around the world. The book is 550 pages long, yet just one small chapter is devoted to language learning – another chapter, devoted to classroom seating arrangements, is longer! The chapter on language learning misrepresents the work of most of the leading scholars of SLA, including Krashen, Pienemann, Gass, Long and N. Ellis. Suffice it to say, in summary, that Harmer has done very little to inform teachers about the matters discussed in Part 1 of this essay.

Dellar is the co-author of the Outcomes and Innovations series of coursebooks, and also of one of the books in the Roadmap series. His Teaching Lexically book, co-written with Walkley, offers by far the worst summary of how people learn an L2 that I’ve ever read. I’ve written a post that reviews the book, so let me just say here that the “explanation” of L2 learning it gives is ridiculous. It paves the way for an approach to ELT that is remarkable for its emphasis on teaching students about the language. No other teacher educator today insists as much as Dellar does on the importance of explicit learning.

Let’s look briefly at a few more prominent teacher educators.

Gianfranco Conti

Conti’s “MARS-EARS” framework for L2 teaching attempts to justify an “explain-first-and-practice-later” approach to teaching L2s. Conti has built himself into a brand: he spends enormous effort on promoting the brand, and he tours the world promoting himself and his method. Conti and Smith are co-authors of a book on memory which badly misrepresents research findings and blatantly promotes the “MARS-EARS” framework. See my post on the book, Memory and Teaching, and my post on Conti’s approach, Genius, for a fuller discussion.

Jason Anderson

I’ve discussed Anderson’s work in a few posts – put “Anderson” in the Search box on the right. Common to all Anderson’s work is a defense of coursebook-driven ELT. Anderson’s work relies on cherry-picking SLA research findings and has little depth or critical acumen.

Rachel Roberts

Roberts is a quintessential example of a TE. She now concentrates on wellness training, but she has a history of teaching teachers that spans decades. She has never, in her long career – described by the British Council as “illustrious” – given the slightest importance to SLA research. Examine her work and search in vain for any serious attention to how people learn an L2.  

Tyson Seburn

Seburn was until recently the coordinator of the IATEFL Teacher Development Special Interest Group. In my opinion, he’s a good example of all that’s wrong with the way teacher educators see their jobs. Like Roberts, Seburn has never shown any interest in SLA research or in its implications for ELT. For Seburn, teacher development is primarily about identity, about “how I came to be who I am”, “how to be the best person I can be”, and all that stuff.

Scott Thornbury

Here’s the exception, the star who shines in the dull, lack-luster TE firmament. I’ve done a few posts criticising Thornbury’s work – his Natural Grammar, his ill-informed criticisms of Chomsky, his wild attempts to describe and promote emergentist theories – but he remains a splendid beacon, shining through the fog, demanding change. He knows his stuff (mostly!) about SLA, and he’s a brilliant speaker, the best performer on the big conference stage since the wonderful John Fanselow. Thornbury has never published a coursebook series; indeed, he’s a leading critic of them, the one who coined the term “Grammar McNuggets”, which so acutely captures the way that coursebooks chop up, sanitise and process the life out of the English language.

Thornbury, along with his co-author Meddings, is the man behind Dogme, an approach to ELT that rejects the use of synthetic syllabuses and gives full recognition to the research findings outlined in Part 1 above. Thornbury gets it: he understands what research tells us about how people learn an L2, he recognizes that we learn by doing, and he strives to implement a radical alternative to ELT. He sponsors the Hands Up Project, he goes wherever he’s invited to talk to teachers about Dogme, and he somehow makes few enemies among those who work so hard to maintain the status quo. He’s my hero, as he is for tens of thousands of forward-looking teachers.

The Three Neils

Neil McMillan is the founder of the SLB cooperative. There’s an important political dimension to his work – involvement in the local community, social change, teachers’ rights – but when it comes to teaching, he walks the talk about a strong version of TBLT. I’m proud to have worked with him on courses for teachers interested in TBLT, and I’m looking forward to further projects with him, where we further explore how ELT can respond to local needs.

Neil Anderson and Neil McCutcheon are the co-authors of Activities for Task-based Learning. They start from a well-considered appreciation of SLA findings. They show a sensitive appreciation for the contexts in which teachers have to work, and they propose a variety of practical ways in which teachers can move towards a new, better way of doing their jobs. They’re pragmatists, realists you could say, but there’s an undeniably progressive tone to their work, and I’m sure that we’ll hear more from them soon. They’re inspiring, they give me hope.

Conclusion

ELT is a huge, multi-billion dollar industry. It’s not surprising that commercial interests shape the way it’s done. But it’s inefficacious: most students of English as an L2 fail to achieve communicative competence. To be clear: most students of English as an L2 leave the courses they’ve done without the ability to use English well enough to cope easily with the demands they meet when they try to use English in the real world. They’ve been cheated. They’ve been led through a succession of courses where they’ve been taught about the language and denied sufficient opportunities to use the language in ways that help them develop communicative competence.

Leading teacher trainers have a vested interest in protecting the inefficacious model of coursebook-driven ELT – they write coursebooks, after all. The way towards a more efficacious model of ELT depends on dismantling the current established paradigm, which is based on the CEFR scale of language proficiency. Learning English as a second language has very little to do with the imagined progression from A1 to C2 enshrined in the CEFR, and thus very little to do with the coursebook series which adopt the same daft idea of L2 learning.

ELT must change. It must recognize that learning English as an L2 is mostly done by using it, not by being told about it. Teacher trainers today, with a few exceptions, stand in the way of change.

References

Anderson, N. & McCutcheon, N. (2021) Activities For Task-Based Learning. Delta.

Carroll, J. B. (1966). The contribution of psychological theory and educational research to the teaching of foreign languages. In A. Valdman (Ed.), Trends in Language Teaching (pp. 93–106). McGraw-Hill.

Doughty, C. J. (2003). Instructed SLA: Constraints, compensation, and enhancement. In C. J. Doughty & M. H. Long (Eds.), The Handbook of Second Language Acquisition (pp. 256–310). Blackwell.

Dellar, H. & Walkley, A. (2017). Teaching Lexically: Principles and practice. Delta.

Harmer, J. (2015). The Practice of English Language Teaching (5th ed.). Pearson Education.

Hatch, E. (ed.). (1978). Second Language Acquisition: A Book of Readings. Newbury House.

Long, M. (2007). Problems in SLA. Lawrence Erlbaum.

Meddings, L. and S. Thornbury. (2009). Teaching Unplugged. Delta.

Mishan, F. (2021). The Global ELT coursebook: A case of Cinderella’s slipper? Language Teaching, 1-16.  

Whong, M., Gil, K. H., & Marsden, H. (2014). Beyond paradigm: The ‘what’ and the ‘how’ of classroom research. Second Language Research, 30(4), 551-568.     

A Review of Part 3 of “After Whiteness”

Part 3 of After Whiteness by Gerald, Ramjattan & Stillar is now free to view on the Language Magazine website. It’s as bullshit-rich and content-poor as the first two parts; another mightily-righteous, mini-sermon which has the authors standing on the same flimsy pedestal (a rickety construction of parroted bits of Marxism, punitive moral dictums on racism and straw-man arguments) in order to preach to the choir. It’s about as edifying as a Chick tract. I’ll give a summary of the article and then comment.    

The article has 5 sections.

Introduction

Part 1 looked at pedagogical ways of challenging Whiteness. Part 2 “re-imagined” training and labor in English language teaching. Part 3 will look at ideas for “how the broader ELT industry could evolve” if “Whiteness” were “successfully decentered”.

Action Research as a Goal

A post-Whiteness ELT, we’re told, should be part of a post-Whiteness world in which ELT practitioners “strive for some micro- or meso-level changes in their contexts to combat Whiteness”. The only information offered about these new-world “micro- or meso-level changes” concerns pronunciation teaching, where, in the post-Whiteness world, teachers would pay attention to “how their students’ racialization in society can shape external perceptions of their intelligibility and how these perceptions have material consequences”. One “material consequence” is alluded to:

white “foreign-accented” job applicants are typically perceived as more intelligible/employable than their racialized counterparts, thereby suggesting that there are racial hierarchies when it comes to assessments of employability in relation to speech accent (Hosoda and Stone-Romero, 2010).  

To challenge these inequalities, “teachers need to use their pedagogy”.  For example,

teachers and students could engage in some sort of action research where they interrogate and challenge local employers’ aversion to hiring racialized “foreign-accented” applicants, which has the potential to substantively shift hiring policies in students’ communities.

“some sort of action research”? Really?

The Un-Canon of Lived Experience

This section suggests that “the canon” of ideas about English should be “removed, but not replaced”. This involves using “extensive student-generated input” to “dismantle linguistic and racialized hierarchies within the conceptualization of English”. Students can be asked to note how their neighbors and relatives use English and share the data with classmates as part of an “epistemological shift”, aimed at overcoming the idea of the “ownership of English”. Widdowson (1994) (sic) is cited to support the claim that standardized English is the property of White native speakers from the global North who shape the language as they see fit. Such a “White supremacist, capitalist notion of language” must be replaced by the view that English belongs to nobody: it is a community resource. This illuminating example is offered:

… when we see the word prepone, a word in so-called Indian English meaning to move an event ahead of schedule (Widdowson, 1994), it is important to remember that this is not a “made-up” word but rather a concise and useful antonym for postpone. If you were teaching students who needed to interact with Indian English users, why would you not want to teach such an innovative word?

That last sentence is the funniest example of a rhetorical question I’ve seen for quite some time!

Teaching the Perceiving Subjects

The section begins by considering how to redress the deficit which results from “idealizing Whiteness and the ideologies that descend from it” (sic) through the imposition of standardized English. While teaching students different Englishes might help redress this deficit, the authors want to go much further. Why not, they ask, treat “minoritized varieties” as “the ideal”? They don’t explain what this radical proposal entails. What “minoritized varieties” would be included? How would these varieties form “the ideal”? What would it look like and sound like?

Moving quickly on, the authors ask the further question:

 “How might the White perceiving subject (Flores and Rosa, 2015) be taught to perceive more effectively?”

Again, they don’t explain what they’re talking about. What does the “White perceiving subject” refer to? Perhaps they assume that all readers are familiar with the Flores and Rosa (2015) article, or perhaps they’ve seen Flores’ helpful tweet:

The white listening subject is an ideological position that can be inhabited by any institutional actor regardless of their racial identity. It behooves all of us to be vigilant about how hegemonic modes of perception shape our interpretation of racialized language practices.

or perhaps they’ve read Rosa’s (2017) follow-up, where he explains that

the linguistic interpretations of white listening subjects are part of a broader, racialized semiotics of white perceiving subjects.

Anyway, let’s take it they mean that white subjects (whoever they are – noting that they’re not restricted to people with “white” features) should try to empathize with “racialized” people. Returning to the teaching of pronunciation, the authors suggest that teachers should be given time to practice listening to different Englishes so that they gain a certain level of experience with the population they might want to work with.  And this somehow demonstrates that until ELT practitioners are “freed from the monolingual cage they’re in”, so long as “raciolinguistic ideologies” are in place, the “racialized languagers” will always fail.

Conclusion

The authors admit that what they sketch out in their three articles is “something of a dream”, but they believe it can become reality “if we take the leap to a world that doesn’t yet exist”. Their ideas are born of love, not hatred. Their goal is to replace the “harmful, oppressive and, at heart, ineffective” practices which keep “racialized learners and languagers in their place below the dominant group”.

Discussion

What does this view of how the broader ELT industry could evolve if “Whiteness” were successfully “decentered” amount to?

The first section, on Action Research, doesn’t make any sense. In a “post-Whiteness world” where Whiteness has been swept away, surely there’s no longer any need for teachers to “strive for some micro- or meso-level changes in their contexts to combat Whiteness”, or to fight against job adverts that discriminate against NNSs. Apart from this incongruity, the only content in this section is the lame, undeveloped suggestion that teachers and students engage in “some sort of action research” whose goal is to challenge employers’ prejudice against “foreign-accented” applicants.

The “Un-Canon of Lived Experience” section proposes that the English language belongs to nobody: it’s a community resource. Apart from the bizarre example of promoting the use of the word “prepone”, this is little more than a motherhood statement until it’s properly developed. The authors assert that before we get to the hallowed post-Whiteness society, we must sweep away “Whiteness ideologies” which adopt a “White supremacist, capitalist notion of language”, and yet nowhere in their three-part series do they make any attempt to unpack the constructs of “ideology” or “capitalism” so as to explain what they mean when they say that language is a capitalist notion. Even less do they show any understanding of Marxism, or any other radical literature which makes coherent proposals for how capitalism can be overthrown and how that might lead to radical changes in education.

The “Teaching the Perceiving Subjects” section proposes that ELT should replace the teaching of standardized English with teaching English where “minoritized varieties” are used as “the ideal”. I’ve already suggested above that this is an empty proposal. Until the vague idea of making “minoritized varieties” form “the ideal” for English is properly outlined and incorporated into some minimum suggestions for new syllabuses, materials, pedagogic procedures and assessment tools, it’s no more than hand-waving rhetoric, typical of the lazy, faux-academic posturing which pervades the “After Whiteness” articles.

The re-education programme for “White perceiving subjects” doesn’t explain who the “subjects” are, and it doesn’t explain how they are to be re-educated; it sounds a bit scary to me, a bit too close to the views of Stalin, Mao and others determined to stamp out “wrong thinking”. Still, as usual, we’re not told what’s involved, except for the perfectly reasonable suggestion that teachers should be more aware of, and sympathetic to, different Englishes.

The three-part series of articles After Whiteness fails to present a coherent, evidence-supported argument. Students of instructed SLA will find absolutely nothing of interest here, unless they want to deconstruct the text so as to reveal the awful extent of its empty noise. Likewise, radical teachers looking for ways to challenge the commodification of education, fight racial discrimination, and move beyond the reactionary views of English and the offensive stereotyping which permeate ELT materials and practices will find nothing of practical use here. They should look, instead, to the increasing number of radical ELT groups and blogs that offer much better-informed political analyses and far more helpful practical support. In stark contrast to Gerald, Ramjattan & Stillar, such groups and individuals not only produce clear, coherent and cohesive texts, they also DO things – practical things that make a difference and push change in the ELT industry. The SLB Cooperative; ELT Advocacy, Ireland; the Gorillas Workers Collective; the Hands Up Project; the Part & Parcel project; the Teachers as Workers group; and the on-line blogs, social media engagement and published work of Steve Brown, Neil McMillan, Rose Bard, Jessica MacKay, Ljiljana Havran, Paul Walsh, Scott Thornbury, Cathy Doughty, David Block and Pau Bori are just a few counter-examples which highlight the feebleness of the dire, unedifying dross dished up in the After Whiteness articles.

Re-visiting Krashen

The first 2021 issue of the journal Foreign Language Annals has a special section devoted to a discussion of “Krashen forty years later”. The lead article, by Lichtman and VanPatten, asks “Was Krashen right?” and concludes that yes, mostly he was.

Lichtman and VanPatten look at three issues:

  • The Acquisition-Learning Distinction,
  • The Natural Order Hypothesis,
  • The Input Hypothesis.

And they argue that “these ideas persist today as the following constructs:

  • implicit versus explicit learning,
  • ordered development,
  • a central role for communicatively embedded input in all theories of second language acquisition”.

The following updates to Krashen’s work are offered:

1.  The Acquisition/learning distinction

The complex and abstract mental representation of language is mainly built up through implicit learning processes as learners attempt to comprehend messages directed to them in the language. Explicit learning plays a more minor role in the language acquisition process, contributing to metalinguistic knowledge rather than mental representation of language.

2. The Natural Order Hypothesis

This is replaced with the ‘Ordered Development Hypothesis’:

The evolution of the learner’s linguistic system occurs in ordered and predictable ways, and is largely impervious to outside influence such as instruction and explicit practice.

3. The Input Hypothesis

The principal data for the acquisition of language is found in the communicatively embedded comprehensible input that learners receive. Comprehension precedes production in the acquisition process.

Pedagogic Implications

Finally, the authors suggest 2 pedagogic implications:

1. Learners need exposure to communicatively embedded input in order for language to grow in their heads. … Learners should be actively engaged in trying to comprehend language and interpret meaning from the outset.

2. The explicit teaching, learning, and testing of textbook grammar rules and grammatical forms should be minimized, as it does not lead directly or even indirectly to the development of mental representation that underlies language use. Instructors need to understand that the explicit learning of surface features and rules of language leads to explicit knowledge of the same, but that this explicit knowledge plays little to no role in language acquisition as normally defined.   

Discussion

Note the clear teaching implications, particularly this: the explicit learning of grammar rules leads to explicit knowledge which plays “little to no role” in language acquisition.

What reasons and evidence do the authors give to support their arguments? They draw on more than 50 years of research into SLA by those who focus on the psychological process of language learning – on what goes on in the mind of a language learner. They demonstrate that we learn languages in a way that differs from the way we learn other subjects like geography or biology. The difference between declarative and procedural knowledge is fundamental to an understanding of language learning. The more we learn about the psychological process of language learning, the more we appreciate the distinction between learning about an L2 and learning how to use it for communicative purposes.

All the evidence of SLA research refutes the current approach to ELT which is based on the false assumption that learners need to have the L2 explained to them, bit by bit, before they can practice using it, bit by bit. All the evidence suggests that language is not a subject in the curriculum best treated as an object of study. Rather, learning an L2 is best done by engaging learners in using it, allowing learners to slowly work out for themselves, through implicit development of their interlanguages, how the L2 works, albeit with timely teacher interventions that can speed up the process.     

Translanguaging: A Summary  

Translanguaging is baloney. There’s almost nothing in all the dross published that you should pay attention to. It’s a passing fad, a blip, a mistake, a soon to be forgotten episode in the history of ELT and applied linguistics.

Translanguaging, as presented by Garcia, Flores, Rosa, Li Wei and others, is an incoherent, political dogma, i.e., ‘a principle or set of principles laid down by an authority as incontrovertibly true’. There’s no way you can challenge translanguaging: you either accept it or get branded as a racist, or a reactionary, or, what it really comes to, an unbeliever. When you read the works of its “top scholars”, you’re bombarded with jargon and obscurantist prose that disguises a disgraceful lack of command of the matters dealt with. These people demonstrate an abysmal lack of understanding of Marx, or Freire, or Foucault, for example, or of Halliday even. They demonstrate an ignorance of philosophy, of political thought, of the philosophy of science, and even of linguistics, for God’s sake. They’re imposters! They talk of colonialism, capitalism and neoliberalism as if the very use of the words is evidence enough that they know what the words mean. They nowhere – I repeat, nowhere – give any coherent account of their political stance. I bet they don’t know Gramsci from granola.

Furthermore, they give few signs of any understanding of how people learn languages, or of how ELT is currently organised and structured. Perhaps worst of all, they show a general ignorance of what’s actually going on in ELT classrooms; they contribute little of practical use to progressive teaching practice; and they are mostly silent when it comes to support for grassroot actions by teachers to challenge their bosses. They’re theorists, seemingly unconnected to the “praxis” they claim to champion. What, one has a right to ask, has translanguaging ever done to promote real change in the lives of those who work in the ELT industry?   

And that’s the top echelons – that’s the established academics! Go down a few hundred steps in the pecking order and take a look at what the academic wannabes like Ramjattan, Stillar, Gerald and Vas Bauler are doing. They don’t publish much in academic journals, but they’re busy on Twitter and other social media channels. I invite you to go to Twitter and see what they have to say. They delight their thousands of followers with a nauseating flow of “us-versus-them, we’re-right-they’re-wrong” tweets, plus blatantly self-promotional bits of junk about how their eagerly-awaited book is coming along, and polite requests for money to help them keep writing. This is where you’ll see translanguaging at its rawest: thousands of people, all in a bubble, all convinced of their righteousness, all “liking” the baloney churned out by their scribes – a motley crew of puffed-up imposters.

On Flores (2020) From academic language to language architecture: Challenging raciolinguistic ideologies in research and practice

Flores (2020) uses the term ‘academic language’ 35 times in the course of his article, and yet never manages to explain what it refers to. He claims that scholars (e.g. Cummins, 2000 and Schleppegrell, 2004) see academic language as “a list of empirical linguistic practices that are dichotomous with non-academic language”. Nowhere does Flores clearly state what an “empirical linguistic practice” refers to, and nowhere does he delineate a list of these putative practices. Meanwhile, Flores attributes “less precise” definitions to educators. For them, academic language “includes content-specific vocabulary and complex sentence structures”, while non-academic language is “less specialized and less complex”. Thus, Flores offers no definition of the way he himself is using the term ‘academic language’.

Seemingly unperturbed by this failure to define the key term in his paper, Flores sails on, using the clumsy rudder of “framing” to guide him. Flores asserts that scholars and educators use a “dichotomous framing” of academic and home languages, such that

academic language warrants a complete differentiation from the rest of language that is framed as non-academic.

Flores proceeds to claim that academic language is not, in fact, a list of empirical linguistic practices (as if anyone had ever succinctly argued that it was), but rather “a raciolinguistic ideology that frames the home language practices of racialized communities as inherently deficient” and “typically reifies deficit perspectives of racialized students”.

Academic Language versus Language Architecture

As an alternative to ‘academic language’, Flores outlines the perspective of “language architecture”, which

frames racialized students as already understanding the relationship between language choice and meaning through the knowledge that they have gained through socialization into the cultural and linguistic practices of their communities.

To illustrate this perspective in action, a lesson plan built around a “translingual mentor text” is offered, to serve “as an exemplar” for teachers. The text “incorporates Spanish into a text that is primarily written in English that students could use to construct their own stories”. The goal is for students “to make connections between the language architecture that they engage in on a daily basis and the translingual rhetorical strategies utilized in the book in order to construct their own texts (Newman, 2012)”.

Having described how a teacher implements part of the lesson plan (or “unit plan”, as he calls it), Flores comments

To be fair, proponents of the concept of academic language would likely support this unit plan.

But there’s a “key difference” – language architecture doesn’t try to build bridges; instead, it assumes that “the language architecture that Latinx children from bilingual communities engage in on a daily basis is legitimate on its own terms and is already aligned to the CCSS”.

Discussion

Flores’ 2020 article is based on a strawman version of Cummins’ term ‘academic language’ (see, for example, Cummins & Yee-Fun, 2007), which Cummins uses to argue his case for additive bilingualism. As noted in my earlier post Multilingualism, Translanguaging and Baloney, Cummins denies García and Flores’ assertion that his construct of additive bilingualism necessarily entails distinct language systems. In his 2020 paper, Flores wrongly imputes to Cummins the “dichotomous distinction” between “academic language” and “home languages”, where academic language is defined as “a list of empirical linguistic practices that are dichotomous with non-academic language”. Note that Flores also suggests that Cummins’ work perpetuates white supremacy, and that, by extension, all those (scholars and teachers alike) who see additive approaches to bilingualism as legitimate ways of attacking problems encountered by bilingual students are guilty of perpetuating white supremacy. It often seems, particularly from the rantings of some of Flores’ supporters on Twitter, that only tirelessly vigilant promotion of translanguaging (whatever that might entail) is enough to exempt anyone “white” from the accusation of racism.

Throughout his work, Flores implies that practically all language teachers in English-speaking countries (and perhaps further afield) treat “the home language practices of racialized communities” as “inherently deficient”. They are thus complicit in perpetuating white supremacy. Cummins has repeatedly denied Flores’ accusations against him, and I dare say that teachers would similarly regard Flores’ accusations as wrong and unfair, if not offensive. Flores’ article raises the following questions:

  1. Is it fair for Flores to accuse “white” teachers of perpetuating white supremacy by behaving as “white listening/reading subjects” who “frame racialized speakers as deficient”? Is that really what they do?
  2.  Are teachers’ extremely varied, nuanced and ongoing efforts to use rather than proscribe their students’ L1s through code-switching, translation and other means best seen as perpetuating white supremacy?
  3. Do teachers’ attempts to “modify” the “language practices” of their students provide convincing evidence of their complicity in perpetuating white supremacy?
  4. What exactly are the differences in terms of pedagogical practice between Flores’ example of a teacher using a predominantly English text containing Spanish words and the suggestions made by Cummins (2017)?
  5. What exactly is the “new listening/reading subject position” that Flores wants teachers to adopt? How does it become “central” to their work?
  6. What changes should they ask their bosses to make in the syllabuses, materials and assessment procedures they work with?
  7. And what are the implications for the rest of us, the majority, who work in countries where English is not the L1? Does Flores even recognise that we are in a context where many of his assumptions don’t apply?

[Image: a choir-master leading a rural congregation singing hymns – hand-colored woodcut of a 19th-century illustration]

Preaching to the choir

The fact that Flores gives even a rough sketch of translanguaging in action in his 2020 article is in itself worthy of note – anyone who has trudged through the jargon-clogged, obscurantist texts that translanguaging scholars grind out will know that such practical examples are hard to come by. It prompts the question “Who do Flores, and other leading protagonists such as García, Rosa, Li Wei, and Valdés, think they’re talking to?”. My suggestion is that they’re “preaching to the choir” – talking, that is, to a relatively small number of people who share their relativist epistemology, their socio-cultural sociolinguistic stance, and their muddled, poorly-articulated political views. Just BTW, I have yet to see a good outline of the political views of translanguaging scholars by ANY of them. The case of Li Wei is particularly stark. How does the author of Translanguaging as a Practical Theory of Language reconcile the views supported in that article (e.g., “there’s no such thing as Language”) with his job as the dean and director of a famous institute of education with a reputable applied linguistics department which sells all sorts of courses where languages are studied as if they actually existed? The answer, I suppose, is that few of those who might take offence at Li Wei’s opinions have even the foggiest idea of what he’s talking about.

Conclusion

I think it’s fair to say that translanguaging is, and will remain, irrelevant to all but the most academically inclined among the millions of teachers involved in ELT, because its protagonists, pace the titles of some of their papers, show little interest in practical matters. None of the important things that teachers concern themselves with – the syllabuses, materials, testing and pedagogic procedures of ELT – is addressed in a way that most of them would understand or find useful.

Philip Kerr, in his recent post Multilingualism, linguanomics and lingualism, uses Deborah Cameron’s (2013) description of discourses of ‘verbal hygiene’ to describe the work of the translanguaging protagonists. Cameron says that these ‘verbal hygiene’ texts are

linked to other preoccupations which are not primarily linguistic, but rather social, political and moral. The logic behind verbal hygiene depends on a tacit, common-sense analogy between the order of language and the larger social order; the rules or norms of language stand in for the rules governing social or moral conduct, and putting language to rights becomes a symbolic way of putting the world to rights (Cameron, 2013: 61).

He adds:

Their professional worlds of the ‘multilingual turn’ in bilingual and immersion education in mostly English-speaking countries hardly intersect at all with my own professional world of EFL teaching in central Europe, where rejection of lingualism is not really an option.

If teachers are to be persuaded to reject lingualism, they’ll need better, clearer arguments than those offered by Flores and the gang.

References

Cummins, J. (2017). Teaching Minoritized Students: Are Additive Approaches Legitimate? Harvard Educational Review, 87, 3, 404-425.

Cummins J., Yee-Fun E.M. (2007) Academic Language. In: Cummins J., Davison C. (eds) International Handbook of English Language Teaching. Springer International Handbooks of Education, vol 15. Springer, Boston, MA.

Flores, N. (2020) From academic language to language architecture: Challenging raciolinguistic ideologies in research and practice, Theory Into Practice, 59:1, 22-31,

Flores, N., & Rosa, J. (2015). Undoing Appropriateness: Raciolinguistic Ideologies and Language Diversity in Education. Harvard Educational Review, 85, 2, 149–171.

Rosa, J., & Flores, N. (2017). Unsettling race and language: Toward a raciolinguistic perspective. Language in Society, 46, 5, 621-647.

Coming Soon

I.V. Dim & F. Offandback (2022) Of Baps and Nannies and Texts that Go Off Overnight. Journal of Pre-School Languaging Studies, 1,1, 1 – 111.

Abstract

In this article, we offer an existentially-motivated, pluri-dimensional contribution to the on-going interrogation of reactionary expressions of whiteness, framed by colonial practices aimed at the perpetuation of white supremacy through the overdetermination of racialized otherness and deficit languaging policies which seek to misrepresent, muffle, gag, sideline and otherwise distort holistic ethnographic encounters with socially-constructed micro and macro narratives by marginalized communities which function inter alia to decentralize and destabilize whiteness and the misogynistic, reactionary, expressions of harmful views obstructing the free expression of emerging as yet unheard voicings of counter-hegemonic knowledges and lifeways. We adopt a “from the inside out” perspective, trialed and recommended by progressive tailors everywhere, which permits and encourages the framing of a challenge to two specific obstacles to the optimum development of the language and overall underdetermined educational praxis of toddlers attending pre-school educational and wellness centers in Hudson Yards, New York, and Hampstead, London. Using mixed and embedded ethnographic qualitatively authenticated and triangulated methodological procedures, we challenge the utility and ethicality of the use of standardized overdetermined academic language practices to implement the synchronic distribution of macadamia butter baps by plurilingual nannies, many of whom engage with the children in code-switching and other reactionary linguistic practices associated with the discredited practices associated with additive bilingualism.

Decolonialized educational praxis must center non-hegemonic modes of “otherwise thinking” which promote, encourage and legitimize the translanguaging instinct, consonant with multiple semiotic and socio-cultural adjustments which act as multi-sensory conduits guiding children towards a transformation of the present, anticipating reinscribing our human, historical commonality in the act of translanguaging and leading to the metamorphosis of language into a multilingual, multisemiotic, multisensory, and multimodal resource for sense- and meaning-making. Data include 3-D imaged representational olfactory enhanced modellings of the macadamia butter baps and multi-modal rich transcripts and 6th level avatar re-enactments of the nannies’ quasi-spontaneous interventions.

Quick Version of the Review of Li Wei’s (2018) Theory of Language

Li Wei (2018) seeks “to develop Translanguaging as a theory of language”.

Key Principles:

1 The process of theorization involves a perpetual cycle of practice-theory-practice.

2 The criterion for assessing rival theories of the same phenomena is “descriptive adequacy”. The key measures of descriptive adequacy are “richness and depth”.

3 “Accuracy” cannot serve as a criterion for theory assessment because no one description of an actual practice is necessarily more accurate than another.

4 Descriptions involve the observer including “all that has been observed, not just selective segments of the data”.

5 A theory should provide a principled choice between competing interpretations that inform and enhance future practice, and the principles are related to the consequentialities of alternative interpretations.

Section 3 is headed “The Practice”.

 Li Wei gives samples of conversations between multilingual speakers. The analysis of the transcripts is perfunctory and provides little support for the assertion that the speakers are not “mixing languages”, but rather using “New Chinglish” (Li 2016a), which includes

ordinary English utterances being re-appropriated with entirely different meanings for communication between Chinese users of English as well as creations of words and expressions that adhere broadly to the morphological rules of English but with Chinese twists and meanings.

His examples are intended to challenge the “myth of a pure form of a language” and to argue that talking about people having different languages must be replaced by an understanding of a more complex interweaving of languages and language varieties, where boundaries between languages and concepts such as native, foreign, indigenous, minority languages are “constantly reassessed and challenged”.

Section 4 is on Translanguaging

Li Wei leans on Becker’s (1991) notion of Languaging, which suggests that there is no such thing as Language, but rather, only “continual languaging, an activity of human beings in the world” (p. 34), and on ‘ecological psychology’, which challenges ‘the code view’ of language, and sees language as ‘a multi-scalar organization of processes that enables the bodily and the situated to interact with situation-transcending cultural-historical dynamics and practices’ (Thibault 2017: 78). Language learning should be viewed not as acquiring language, but rather as a process where novices “adapt their bodies and brains to the languaging activity that surrounds them”. Li Wei concludes “For me, language learning is a process of embodied participation and resemiotization.”

Li Wei makes two further arguments:

1) Multilinguals do not think unilingually in a politically named linguistic entity, even when they are in a ‘monolingual mode’ and producing one namable language only for a specific stretch of speech or text.

2) Human beings think beyond language and thinking requires the use of a variety of cognitive, semiotic, and modal resources of which language in its conventional sense of speech and writing is only one.

The first point refers to Fodor’s (1975) seminal work The Language of Thought. Li Wei offers no summary of Fodor’s “Language of Thought” hypothesis and no discussion of it, so the reader might not know that this language of thought is usually referred to as “Mentalese”, and is described very technically by Fodor so as to distinguish it from named languages.

Li Wei states: “there seems to be a confusion between the hypothesis that thinking takes place in a Language of Thought (Fodor 1975) — in other words, thought possesses a language-like or compositional structure — and that we think in the named language we speak. The latter seems more intuitive and commonsensical”. Yes, it does, but why exactly this is a problem (which it is!), and how Fodor’s Language of Thought hypothesis solves it (which many say it doesn’t), is not clearly explained.

As for the second argument, this concerns “the question of what is going on when bilingual and multilingual language users are engaged in multilingual conversations”. Li Wei finds it hard to imagine that they shift their frame of mind so frequently in one conversational episode let alone one utterance. He claims that we do not think in a specific, named language separately, and cites Fodor (1983) to resolve the problem. Li Wei misinterprets Fodor’s view of the modularity of mind. Pace Li Wei, Fodor does not claim that the human mind consists of a series of modules which are “encapsulated with distinctive information and for distinct functions”, and that “Language” is one of these modules. Gregg points out (see comment in unabridged version) that Fodor vigorously opposed the view that the mind is made up of modules; he spent a good deal of time arguing against that idea (see e.g. his “The Mind Doesn’t Work That Way”), the so-called Massive Modularity hypothesis. For Fodor, the mind contains modules, which is very different from the view Li Wei quite wrongly ascribes to him.

Li Wei goes on to say that Fodor’s hypothesis “has somehow been understood to mean” that “the language and other human cognitive processes are anatomically and/or functionally distinct”. Again, Fodor said no such thing. Li Wei does not cite any researcher who “somehow came to understand” Fodor’s argument about modular mind in such an erroneous way, and he does not clarify that Fodor made no such claim. Li Wei simply asserts that in research design, “the so-called linguistic and non-linguistic cognitive processes” have been assessed separately. He goes on to triumphantly dismantle this obviously erroneous assertion and to claim it as evidence for the usefulness of his theory.

Section 5: Translanguaging Space and Translanguaging Instinct

This section contains inspirational sketches which add nothing to the theory of language.

Translanguaging Space

Li Wei suggests that the act of Translanguaging creates a social space for the language user “by bringing together different dimensions of their personal history, experience, and environment; their attitude, belief, and ideology; their cognitive and physical capacity, into one coordinated and meaningful performance” (Li 2011a: 1223). This Translanguaging Space has transformative power because “it is forever evolving and combines and generates new identities, values and practices; … by underscoring learners’ abilities to push and break boundaries between named language and between language varieties, and to flout norms of behaviour including linguistic behaviour, and criticality” (Li 2011a,b; Li and Zhu 2013).

As an example of the practical implications of Translanguaging Space, Li Wei cites García and Li’s (2014) vision “where teachers and students can go between and beyond socially constructed language and educational systems, structures and practices to engage diverse multiple meaning-making systems and subjectivities, to generate new configurations of language and education practices, and to challenge and transform old understandings and structures”.

Translanguaging Instinct

Li Wei’s construct of a Translanguaging Instinct (Li 2016b) uses arguments for an ‘Interactional Instinct’, a biologically based drive for infants and children to attach, bond, and affiliate with conspecifics in an attempt to become like them (Lee et al. 2009; Joaquin and Schumann 2013).

“This natural drive provides neural structures that entrain children acquiring their languages to the faces, voices, and body movements of caregivers. It also determines the relative success of older adolescents and adults in learning additional languages later in life due to the variability of individual aptitude and motivation as well as environmental conditions”.

Li Wei extends this idea in what he calls a Translanguaging Instinct (Li 2016b) “to emphasize the salience of mediated interaction in everyday life in the 21st century, the multisensory and multimodal process of language learning and language use”. The Translanguaging Instinct drives humans to go beyond narrowly defined linguistic cues and transcend culturally defined language boundaries to achieve effective communication. Li Wei suggests that, pace the Minimalist programme (sic!), a “Principle of Abundance” is in operation in human communication. Human beings draw on as many different sensory, modal, cognitive, and semiotic resources as possible to interpret meaning intentions, and they read these multiple cues in a coordinated manner rather than singularly.

Li Wei’s discussion of the implications of the idea of the Translanguaging Instinct uses uncontroversial statements about language learning which have nothing relevant to add to the theory.

Discussion

So what is the Translanguaging theory of language? Despite endorsing the view that there is no such thing as language, and that the divides between the linguistic, the paralinguistic, and the extralinguistic dimensions of human communication are nonsensical, the theory amounts to the claim that language is a multilingual, multisemiotic, multisensory, and multimodal resource for sense- and meaning-making.

The appendages about Translanguaging Space and a Translanguaging Instinct have little to do with a theory of language. The first is a blown-up recommendation for promoting language learning outside the classroom, and the second is a claim about language learning itself, to the effect that an innate instinct drives humans to go beyond narrowly defined linguistic cues and transcend culturally defined language boundaries to achieve effective communication. Stripped of its academic obscurantism and the wholly unsatisfactory discussion of Fodor’s Language of Thought and his work on the modularity of mind, both bits of fluff strike me as being as inoffensive as they are unoriginal.

Theories

What is a theory? I’ve dealt with this in Jordan (2004) and also in many posts. A theory is generally regarded as being an attempt to explain phenomena. Researchers working on a theory use observational data to support and test it.

Li Wei adopts the following strategy:

1. Skip the tiresome step of offering a coherent definition of the key theoretical construct and content yourself with the repeated vague assertion that language is “a resource for sense- and meaning-making”,

2. Rely on the accepted way of talking about parts of language by those you accuse of reducing language to a code,

3. Focus on attacking the political naming of languages, re-hashing obviously erroneous views about L1s, L2s, etc., and developing the view that language is a multilingual, multisemiotic, multisensory, and multimodal resource.

He thus abandons any serious attempt at theory construction, resorts instead to a string of assertions dressed up in academic clothes and calls it a “theory of practice”. Even then, Li Wei doesn’t actually say what he takes a theory of practice to be. He equates theory construction with “knowledge construction”, without saying what he means by “knowledge”. Popper (1972) adopts a realist epistemology and explains what he means by “objective knowledge”. In contrast, Li Wei adopts a relativist epistemology, where objective knowledge is jettisoned and “descriptive adequacy” replaces it, to be measured by “richness and depth”, which are nowhere defined.

How do we measure the richness and depth of competing “descriptions”? Is Li Wei seriously suggesting that different subjective accounts of the observations of language practice by different observers are best assessed by undefined notions of richness and depth?

The poverty of Li Wei’s criteria for assessing a “practical theory” is compounded by his absurd claim that researchers who act as observers must describe “all that has been observed, not just selective segments of the data”. “All that has been observed”? Really?   

Finally, the good bits. I applaud Li Wei’s attempt, bad as I judge it to be, to bridge the gap between psycholinguistic and sociolinguistic work on SLA. And, as I’ve already said in my post Multilingualism, Translanguaging and Theories of SLA, there are things we can agree on. ELT practice should recognise that teaching is informed by the monolingual fallacy, the native speaker fallacy and the subtractive fallacy (Phillipson, 2018). The ways in which English is privileged in education systems are a disgrace, and policies that strengthen linguistic diversity are needed to counteract linguistic imperialism. Translanguaging is to be supported in as much as it affirms bilinguals’ fluent languaging practices and aims to legitimise hybrid language uses. ELT must generate translanguaging spaces where practices which explore the full range of users’ repertoires in creative and transformative ways are encouraged.

References

Cook, V. J. (1993). Linguistics and Second Language Acquisition.  Macmillan.

Ellis, N. (2002). Frequency effects in language processing: A Review with Implications for Theories of Implicit and Explicit Language Acquisition. Studies in SLA, 24,2, 143-188.

Gregg, K.R. (1993). Taking Explanation seriously; or, let a couple of flowers bloom. Applied Linguistics 14, 3, 276-294.

Gregg, K. R. (2004). Explanatory Adequacy and Theories of Second Language Acquisition. Applied Linguistics 25, 4, 538-542.

Jordan, G. (2004). Theory Construction in SLA. Benjamins.

Li Wei (2018) Translanguaging as a Practical Theory of Language. Applied Linguistics, 39, 1, 9 – 30.

Phillipson, R. (2018) Linguistic Imperialism. Downloadable from https://www.researchgate.net/publication/31837620_Linguistic_Imperialism_

Popper, K. R. (1972). Objective Knowledge.  Oxford University Press.

Schmidt, R., & Frota, S. (1986). Developing basic conversational ability in a second language: A case study of an adult learner. In R. Day (Ed.), Talking to learn: Conversation in second language acquisition (pp. 237-369). Rowley, MA: Newbury House.

See Li Wei (2018) for the other references.

Li Wei (2018) Translanguaging as a Practical Theory of Language

In his 2018 article, Li Wei seeks “to develop Translanguaging as a theory of language”. Along the way, he highlights the contributions that Translanguaging makes to debates about the “Language and Thought” and the “Modularity of Mind” hypotheses and tries to bridge “the artificial and ideological divides between the so-called sociocultural and the cognitive approaches to Translanguaging practices.”

Section 2

After the Introduction, Section 2 outlines the principles which guide his “practical theory of language for Applied Linguistics”. They’re based on Mao’s interpretation of Confucius and Marx’s dialectical materialism (sic). Here are the main points, with short comments:

1 The process of theorization involves a perpetual cycle of practice-theory-practice.

Amen to that.

2 The criterion for assessing rival theories of the same phenomena is “descriptive adequacy”. The key measures of descriptive adequacy are “richness and depth”.

No definitions of the constructs “richness” or “depth” are offered, no indication is given of how they might be operationalized, and no explanation of this assertion is given.

3 “Accuracy” cannot serve as a criterion for theory assessment: “no one description of an actual practice is necessarily more accurate than another because description is the observer–analyst’s subjective understanding and interpretation of the practice or phenomenon that they are observing”.

No definition is given of the term “accuracy” and no discussion is offered of how theoretical constructs used in practical theory (such as “languaging”, “resemiotization” and “body dynamics”) can be operationalised.

4 Descriptions involve the observer including “all that has been observed, not just selective segments of the data”.

No explanation of how an observer can describe “all that has been observed” is offered. 

5 The main objective of a practical theory is not to offer predictions or solutions but interpretations that can be used to observe, interpret, and understand other practices and phenomena.

No justification for this bizarre assertion is offered.  

6 Questions are formulated on the basis of the description and as part of the observer–analyst’s interpretation process. Since interpretation is experiential and understanding is dialogic, these questions are therefore ideologically and experientially sensitive.

No explanation of what this means is offered.

7 A theory should provide a principled choice between competing interpretations that inform and enhance future practice, and the principles are related to the consequentialities of alternative interpretations.

No explanation of exactly what “principles” are involved is offered, and no indicators for measuring “consequentialities” are mentioned.

8 An important assessment of the value of a practical theory is the extent to which it can ask new and different questions on both the practice under investigation and other existing theories about the practice.

Yes indeed.

Section 3 is headed “The Practice”.

 Li Wei explains that he’s primarily concerned with the language practices of multilingual language users, and goes on to give samples of conversations between multilingual speakers. The analysis of the transcripts is perfunctory and provides little support for the assertion that the speakers are not “mixing languages”, but rather using “New Chinglish” (Li 2016a), which includes

ordinary English utterances being re-appropriated with entirely different meanings for communication between Chinese users of English as well as creations of words and expressions that adhere broadly to the morphological rules of English but with Chinese twists and meanings.

His examples are intended to challenge the “myth of a pure form of a language” and to argue that talking about people having different languages must be replaced by an understanding of a more complex interweaving of languages and language varieties, where boundaries between languages and concepts such as native, foreign, indigenous, minority languages are “constantly reassessed and challenged”.

Section 4 is on Translanguaging

Li Wei starts from Becker’s (1991) notion of Languaging, which suggests that there is no such thing as Language, but rather, only “continual languaging, an activity of human beings in the world” (p. 34). Language should not be regarded ‘as an accomplished fact, but as in the process of being made’ (p. 242). Li Wei also refers to work from ‘ecological psychology’, which sees languaging as ‘an assemblage of diverse material, biological, semiotic and cognitive properties and capacities which languaging agents orchestrate in real-time and across a diversity of timescales’ (Thibault 2017: 82). Such work challenges ‘the code view’ of language, urges us to ‘grant languaging a primacy over what is languaged’, and to see language as ‘a multi-scalar organization of processes that enables the bodily and the situated to interact with situation-transcending cultural-historical dynamics and practices’ (Thibault 2017: 78). The divides between the linguistic, the paralinguistic, and the extralinguistic dimensions of human communication are thus “nonsensical”. So language learning should be viewed not as acquiring language, but rather as a process where novices ‘adapt their bodies and brains to the languaging activity that surrounds them’, and in doing so, ‘participate in cultural worlds and learn that they can get things done with others in accordance with the culturally promoted norms and values’ (Thibault 2017: 76). Li Wei concludes “For me, language learning is a process of embodied participation and resemiotization (see also McDermott and Roth 1978; McDermott et al. 1978; Dore and McDermott 1982; and Gallagher and Zahavi 2012)”.

Next, Li Wei explains that he added the Trans prefix to Languaging in order to not only have a term that captures multilingual language users’ fluid and dynamic practices, but also to put forward two further arguments:

1) Multilinguals do not think unilingually in a politically named linguistic entity, even when they are in a ‘monolingual mode’ and producing one namable language only for a specific stretch of speech or text.

2) Human beings think beyond language and thinking requires the use of a variety of cognitive, semiotic, and modal resources of which language in its conventional sense of speech and writing is only one.

The first point refers to Fodor’s (1975) seminal work The Language of Thought. Li Wei offers no summary of Fodor’s “Language of Thought” hypothesis and no discussion of it. So the reader might not know that this language of thought is usually referred to as “Mentalese“, and that very technical, but animated discussions about whether or not Mentalese exists, and if it does, how it works, have been going on for the last 40+ years among philosophers, cognitive scientists and linguists. Without any proper introduction, Li Wei simply states: “there seems to be a confusion between the hypothesis that thinking takes place in a Language of Thought (Fodor 1975) — in other words, thought possesses a language-like or compositional structure — and that we think in the named language we speak. The latter seems more intuitive and commonsensical”. In my opinion, he doesn’t make it clear why the latter view causes a problem, why, that is, “it cannot address the question of how bilingual and multilingual language users think without referencing notions of the L1, ‘native’ or ‘dominant’ language”, and he doesn’t clearly explain how Fodor’s Language of Thought hypothesis solves the problem. All he says is

If we followed the argument that we think in the language we speak, then we think in our own idiolect, not a named language. But the language-of-thought must be independent of these idiolects, and that is the point of Fodor’s theory. We do not think in Arabic, Chinese, English, Russian, or Spanish; we think beyond the artificial boundaries of named languages in the language-of-thought.

I fail to see how this cursory discussion does anything to support the claim that Translanguaging Theory makes any worthwhile contribution to the debate that has followed Fodor’s Language of thought hypothesis.

As for the second argument, this concerns “the question of what is going on when bilingual and multilingual language users are engaged in multilingual conversations”. Li Wei finds it hard to imagine that they shift their frame of mind so frequently in one conversational episode let alone one utterance. He claims that we do not think in a specific, named language separately, and cites Fodor (1983) to resolve the problem. Li Wei reports Fodor’s Modularity of Mind hypothesis as claiming that the human mind consists of a series of modules which are “encapsulated with distinctive information and for distinct functions”. Language is one of these modules. As Gregg has pointed out to me (see the comment below) “Fodor did not think that the mind is made up of modules; he spent a good deal of time arguing against that idea (see e.g. his “The Mind Doesn’t Work That Way”), the so-called Massive Modularity hypothesis. For Fodor, the mind contains modules; big difference” (my emphases). Worse, Li Wei says that Fodor’s hypothesis “has somehow been understood to mean” something that, in fact, Fodor did not say or imply, namely that “the language and other human cognitive processes are anatomically and/or functionally distinct”. Li Wei does not cite any researcher who somehow came to understand Fodor’s argument about modular mind in that way, but simply asserts that in research design, “the so-called linguistic and non-linguistic cognitive processes” have been assessed separately. He goes on to triumphantly dismantle this obviously erroneous assertion and to claim it as evidence for the usefulness of his theory.

Section 5: Translanguaging Space and Translanguaging Instinct

“The act of Translanguaging creates a social space for the language user by bringing together different dimensions of their personal history, experience, and environment; their attitude, belief, and ideology; their cognitive and physical capacity, into one coordinated and meaningful performance” (Li 2011a: 1223). This Translanguaging Space has transformative power because “it is forever evolving and combines and generates new identities, values and practices”. It underscores multilinguals’ creativity, “their abilities to push and break boundaries between named language and between language varieties, and to flout norms of behaviour including linguistic behaviour, and criticality — the ability to use evidence to question, problematize, and articulate views” (Li 2011a,b; Li and Zhu 2013).

A Translanguaging Space shares elements of the vision of Thirdspace articulated by Soja (1996) as “a space of extraordinary openness, a place of critical exchange where the geographical imagination can be expanded to encompass a multiplicity of perspectives that have heretofore been considered by the epistemological referees to be incompatible and uncombinable”. Soja proposes that it is possible to generate new knowledge and discourses in a Thirdspace. A Translanguaging Space acts as a Thirdspace which does not merely encompass a mixture or hybridity of first and second languages; instead it invigorates languaging with new possibilities from ‘a site of creativity and power’, as bell hooks (1990: 152) says. “Going beyond language refers to transforming the present, to intervening by reinscribing our human, historical commonality in the act of Translanguaging” (Li Wei, 2018, p. 24).

As an example of the practical implications of Translanguaging Space, Li Wei cites García and Li’s (2014) vision “where teachers and students can go between and beyond socially constructed language and educational systems, structures and practices to engage diverse multiple meaning-making systems and subjectivities, to generate new configurations of language and education practices, and to challenge and transform old understandings and structures”. Stirring stuff.

Li Wei’s construct of a Translanguaging Instinct (Li 2016b) draws on the arguments for an ‘Interactional Instinct’, a biologically based drive for infants and children to attach, bond, and affiliate with conspecifics in an attempt to become like them (Lee et al. 2009; Joaquin and Schumann 2013).

“This natural drive provides neural structures that entrain children acquiring their languages to the faces, voices, and body movements of caregivers. It also determines the relative success of older adolescents and adults in learning additional languages later in life due to the variability of individual aptitude and motivation as well as environmental conditions”.

Li Wei extends this idea in what he calls a Translanguaging Instinct (Li 2016b) “to emphasize the salience of mediated interaction in everyday life in the 21st century, the multisensory and multimodal process of language learning and language use”. The Translanguaging Instinct drives humans to go beyond narrowly defined linguistic cues and transcend culturally defined language boundaries to achieve effective communication. Li Wei suggests that, pace the Minimalist programme (sic!), a “Principle of Abundance” is in operation in human communication. Human beings draw on as many different sensory, modal, cognitive, and semiotic resources as possible to interpret meaning intentions, and they read these multiple cues in a coordinated manner rather than singularly.

In the meantime, the Translanguaging Instinct highlights the gaps between meaning, what is connected to forms of the language and other signs, and message, what is actually inferred by hearers and readers, and leaves open spaces for all the other cognitive and semiotic systems that interact with linguistic semiosis to come into play (Li Wei, 2018, p. 26).

Li Wei’s discussion of the implications of the idea of the Translanguaging Instinct might have been written by an MA student of psycholinguistics. Below is a summary, mostly consisting of quotes.  

Human beings “rely on different resources differentially during their lives. In first language acquisition, infants naturally draw meaning from a combination of sound, image, and action, and the sound–meaning mapping in word learning crucially involves image and action. The resources needed for literacy acquisition are called upon later”.

“In bilingual first language acquisition, the child additionally learns to associate the target word with a specific context or addressee as well as contexts and addressees where either language is acceptable, giving rise to the possibility of code-switching”.

“In second language acquisition in adolescence and adulthood, some resources become less available, for example resources required for tonal discrimination, while others can be enhanced by experience and become more salient in language learning and use, for example resources required for analysing and comparing syntactic structures and pragmatic functions of specific expressions. As people become more involved in complex communicative tasks and demanding environments, the natural tendency to combine multiple resources drives them to look for more cues and exploit different resources. They will also learn to use different resources for different purposes, resulting in functional differentiation of different linguistic resources (e.g. accent, writing) and between linguistic and other cognitive and semiotic resources. Crucially, the innate capacity to exploit multiple resources will not be diminished over time; in fact it is enhanced with experience. Critical analytic skills are developed in terms of understanding the relationship between the parts (specific sets of skills, such as counting; drawing; singing) and the whole (multi-competence (Cook 1992; Cook and Li 2016) and the capacity for coordination between the skills subsets) to functionally differentiate the different resources required for different tasks“.

One consequence of the Translanguaging perspective on bilingualism and multilingualism research is making the comparison between L1 and L2 acquisition purely in terms of attainment insignificant. Instead, questions should be asked as to what resources are needed, available, and being exploited for specific learning task throughout the lifespan and life course? Why are some resources not available at certain times? What do language users do when some resources become difficult to access? How do language users combine the available resources differentially for specific tasks? In seeking answers to these questions, the multisensory, multimodal, and multilingual nature of human learning and interaction is at the centre of the Translanguaging Instinct idea” (Li Wei, 2018, pp 24-25).

There’s hardly anything I disagree with in all this, apart from the dubious, forced connection made between all this elementary stuff and the “Translanguaging perspective”.

Discussion

So what is the Translanguaging theory of language? Despite endorsing the view that there is no such thing as language, and that the divides between the linguistic, the paralinguistic, and the extralinguistic dimensions of human communication are nonsensical, the theory amounts to the claim that language is a multilingual, multisemiotic, multisensory, and multimodal resource for sense- and meaning-making.

The appendages about Translanguaging Space and a Translanguaging Instinct have little to do with a theory of language. The first is a blown-up recommendation for promoting language learning outside the classroom, and the second is a claim about language learning itself, to the effect that an innate instinct drives humans to go beyond narrowly defined linguistic cues and transcend culturally defined language boundaries to achieve effective communication. Stripped of its academic obscurantism and the wholly unsatisfactory discussion of Fodor’s Language of Thought and his work on the modularity of mind, both bits of fluff strike me as being as inoffensive as they are unoriginal.

Theories

What is a theory? I’ve dealt with this in Jordan (2004) and also in many posts. A theory is generally regarded as being an attempt to explain phenomena. Researchers working on a theory use observational data to support and test it. Furthermore, it’s generally recognised that, pace Li Wei, we can’t just observe the world: all observation is “theory-laden”; as Popper (1972) puts it, there’s no way we can talk about something sensed and not interpreted. Even in everyday life we don’t – can’t – just “observe”, and those committed to a scientific approach to language learning recognize that researchers observe guided by a problem they want to solve: research is fundamentally concerned with problem-solving, and it benefits from a clear focus in a well-defined domain. Here’s an example of how this applies to theories of language:  

Chomskian theory claims that, strictly speaking, the mind does not know languages but grammars; ‘the notion “language” itself is derivative and relatively unimportant’ (Chomsky, 1980, p. 126).  “The English Language” or “the French Language” means language as a social phenomenon – a collection of utterances.  What the individual mind knows is not a language in this sense, but a grammar with the parameters set to particular values. Language is another epiphenomenon: the psychological reality is the grammar that a speaker knows, not a language (Cook, 1994: 480).

And here’s Gregg (1996)

… “language” does not refer to a natural kind, and hence does not constitute an object for scientific investigation. The scientific study of language or language acquisition requires the narrowing down of the domain of investigation, a carving of nature at its joints, as Plato put it. From such a perspective, modularity makes eminent sense (Gregg, 1996, p. 1).

Both Chomsky and Gregg see the need to narrow the domain of any chosen investigation in order to study it more carefully. So they want to go beyond the common-sense view of language as a way of expressing one’s thoughts and feelings (not, most agree, following Fodor, to be confused with thinking itself) and of communicating with others, to a careful description of its core parts and then to an explanation of how we learn them. Now you might disagree, in several ways. You might reject Chomsky’s theory and prefer, for example, Nick Ellis’ usage-based theory (see, for example, Ellis, 2002), which embraces the idea of language as a socially constructed epiphenomenon, and claims that it’s learned through social engagement where all sorts of inputs from the environment are processed in the mind by very general learning mechanisms, such as the power law of practice. But Ellis recognises the need to provide some description of what’s learned, and I defy most readers to make sense of Ellis’ ongoing efforts to describe a “construction grammar”. Or you might take a more bottom-up research stance and decide to just feel your way – observe some particular behaviour, turning over and developing ideas and moving slowly up to a generalization. But even then, you need SOME idea of what you’re looking for. Gregg (1993) gives a typically eloquent discussion of the futility of attempts to base research on “observation”.

Or you might, like Li Wei, adopt the following strategy:

1. Skip the tiresome step of offering a coherent definition of the key theoretical construct and content yourself with the repeated vague assertion that language is “a resource for sense- and meaning-making”,

2. Rely on the accepted way of talking about parts of language by those you accuse of reducing language to a code,

3. Focus on attacking the political naming of languages, re-hashing obviously erroneous views about L1s, L2s, etc., and developing the view that language is a multilingual, multisemiotic, multisensory, and multimodal resource.

 If so, you abandon any serious attempt at theory construction, resort to a string of assertions dressed up in academic clothes and call it a “theory of practice”. Even then, Li Wei doesn’t actually say what he takes a theory of practice to be. He equates theory construction with “knowledge construction”, without saying what he means by “knowledge”. Popper (1972) adopts a realist epistemology and explains what he means by “objective knowledge” (accepting that all observation is theory-laden). In contrast, we have to infer what Li Wei means by knowledge through the reason he gives for dismissing “accuracy” as a criterion for theory assessment, viz., as already quoted above, “no one description of an actual practice is necessarily more accurate than another because description is the observer–analyst’s subjective understanding and interpretation of the practice or phenomenon that they are observing”. This amounts to a relativist epistemology where objective knowledge is jettisoned and “descriptive adequacy” replaces it, to be measured by “richness and depth”, which are nowhere defined.

How do we measure the richness and depth of competing “descriptions”? For example, we have (1) Li Wei’s descriptions of conversational exchanges among his research participants, and (2) Schmidt and Frota’s (1986) description of an adult learner of Portuguese. The two descriptions of the learners’ utterances serve different purposes; they don’t amount to competing arguments, but how do we assess the descriptions and the analyses? How about: “I prefer (2) because the description of the weather outside was richer”. Are these two “descriptions” not better assessed by criteria such as their coherence and their success in supporting the hypothesis that informs their observations? Schmidt and Frota are addressing a problem about what separates input from intake (the hypothesis being that “noticing” is required), while Li Wei is addressing the problem of how we interpret code-switching, and his hypothesis is that it’s not a matter of calling on separately stored knowledge about two rigidly different named languages. Is Li Wei seriously suggesting that different subjective accounts of the observations of language practice by different observers are best assessed by undefined notions of richness and depth?

The poverty of Li Wei’s criteria for assessing a “practical theory” is compounded by his absurd claim that researchers who act as observers must describe “all that has been observed, not just selective segments of the data”. “All that has been observed”? Really?   

But wait a minute! There’s another criterion! “A theory should provide a principled choice between competing interpretations that inform and enhance future practice, and the principles are related to the consequentialities of alternative interpretations”. As noted, we’re not told what the “principles” are, and no indicators for measuring “consequentialities” are mentioned. Still, it’s more promising than the other criteria. And, of course, it’s taken from a well-respected criterion used by scientists anchored in a realist epistemology: ceteris paribus, the more a theory leads to the practical solution of problems, the better it is.

Finally, the good bits. I applaud Li Wei’s attempt, bad as I judge it to be, to bridge the gap between psycholinguistic and sociolinguistic work on SLA. And, as I’ve already said in my post Multilingualism, Translanguaging and Theories of SLA, there are things we can agree on. ELT practice should recognise that teaching is informed by the monolingual fallacy, the native speaker fallacy and the subtractive fallacy (Phillipson, 2018). The ways in which English is privileged in education systems are a disgrace, and policies that strengthen linguistic diversity are needed to counteract linguistic imperialism. Translanguaging is to be supported in as much as it affirms bilinguals’ fluent languaging practices and aims to legitimise hybrid language uses. ELT must generate translanguaging spaces where practices which explore the full range of users’ repertoires in creative and transformative ways are encouraged.

References

Cook, V. J. (1993). Linguistics and Second Language Acquisition.  Macmillan.

Ellis, N. (2002). Frequency effects in language processing: A Review with Implications for Theories of Implicit and Explicit Language Acquisition. Studies in SLA, 24,2, 143-188.

Gregg, K.R. (1993). Taking Explanation seriously; or, let a couple of flowers bloom. Applied Linguistics 14, 3, 276-294.

Gregg, K. R. (2004). Explanatory Adequacy and Theories of Second Language Acquisition. Applied Linguistics 25, 4, 538-542.

Jordan, G. (2004). Theory Construction in SLA. Benjamins.

Li Wei (2018) Translanguaging as a Practical Theory of Language. Applied Linguistics, 39, 1, 9 – 30.

Phillipson, R. (2018) Linguistic Imperialism. Downloadable from https://www.researchgate.net/publication/31837620_Linguistic_Imperialism_

Popper, K. R. (1972). Objective Knowledge.  Oxford University Press.

Schmidt, R., & Frota, S. (1986). Developing basic conversational ability in a second language: A case study of an adult learner. In R. Day (Ed.), Talking to learn: Conversation in second language acquisition (pp. 237-369). Rowley, MA: Newbury House.

See Li Wei (2018) for the other references.

Li Wei on Translanguaging

As translanguaging continues to attract attention, here’s a quick review of a recent contribution to the field by Prof. Li Wei. (Note that I’ve done two recent posts on translanguaging: Multilingualism, Translanguaging and Theories of SLA; and Multilingualism, Translanguaging and Baloney. The first one gives a quick description of the construct.)

Li Wei’s (2021) article “Translanguaging as a political stance: implications for English language education” makes several claims, the least disputable being that the naming of languages is a political act. Yes it is; and so is language teaching, and indeed all teaching – see, for example, off the top of my head, Piaget, Vygotsky, A. S. Neill, Dewey, Steiner, Marx, Freire, Illich, Gramsci, Goodman, Crookes, Long, … add your own favorites.

So what does Li Wei offer here as the best “political stance” for ELT? He offers translanguaging, which he’s already discussed in a series of published works (see, for example, Li Wei, 2018 and García et al., 2021). Why is it the best? Because it sees language as a fluid, embodied social construct, whatever that means. What does it offer in terms of new, innovative, practical implications for English language education? Nothing. Absolutely nothing.

The main points of Li Wei’s 2021 ELTJ article are these:  

1. “Named languages are political constructs and historico-ideological products of the nation-state boundaries”.

Comment: They most certainly are.

2. “Named languages have no neuropsychological correspondence…. human beings have a natural instinct to go beyond narrowly defined linguistic resources in meaning- and sense-making, as well as an ability, acquired through socialization and social participation, to manipulate the symbolic values of the named languages such as identity positioning” (Li 2018).

Comment: Typical academic jargon makes up a claim vague enough to have little force. The author’s highly disputable claims elsewhere have more force, for example, his (2018) assertion that the divides between the linguistic, the paralinguistic, and the extralinguistic dimensions of human communication are “nonsensical”.    

3. We should shift from “a fixation on language as an abstractable coded system” to attention to the language user.

Comment: Amen to that, except that Li Wei elsewhere defines language and comments on constructs such as negative transfer, errors and much else besides in ways that I find preposterous.

4. ELT should embrace the “active use of multiple languages and other meaning-making resources in a dynamic and integrated way”. Furthermore, “the languages the learners already have should and can play a very positive role in learning additional languages”.

Comment: Only the most reactionary among us could disagree! The problem is that scant attention is given to how this might affect teaching practice. Nothing in this article, supposedly aimed at teachers, says anything useful to teachers. Despite its title, the article ends with a short section on “English medium education: practical challenges” where aspirational, academically expressed bullshit is all that’s on offer.

How do teachers actually change their practice? Apart from the exhortations Li Wei makes for teachers to see different languages as less fully separated than they might suppose; to see students’ previously learned languages as assets; to resist any urge to correct errors too quickly; and to generally encourage a multilingualist environment (all of it useful advice), he says nothing about the implications, I mean the real classroom day-to-day implications, of all this theoretical posturing. In terms of the syllabuses, materials, testing and pedagogic procedures of ELT, how is the theory of translanguaging to be put into practice? Don’t expect answers from Prof. Li Wei.

On Twitter, I asked Li Wei, who had tweeted to advertise his ELTJ article, what this bit in his article meant:

“To regard certain ways of expressing one’s thought as errors and attribute them to negative transfers from the L1 is to create a strawman for raciolinguistic ideologies”.

He didn’t answer the question.

I then asked:

A Spanish L1 student of mine writes “Freud’s Patient X dreamed with his visit to the Altes Museum”.  If I attribute this error to negative transfer from the L1, how does it create a strawman for raciolinguistic ideologies?

He replied:

Raciolinguistic ideology would ‘expect’ L2 users to produce ‘deviations’ from ‘standard’ language when such ‘deviations’ are in fact ideolects which by definition are personal and sensitive to individual’s socialisation trajectory including language learning trajectory.

He suggests that I’m trapped in a raciolinguistic ideology, where idiolects get misinterpreted as “deviations” because of an allegiance to racist-drenched standard English.

In Spanish, they say

“Soñaba con … ” (I dreamed* with …),

 while in English we say

“I dreamed about …” or “I dreamed of  …”.  

(*”dreamt” is often used)

I think “I dreamed with you last night” is lovely. But it’s an error – it’s “marked”, as we say. “I dreamed with a visit to the Altes Museum” could be confusing to the reader/listener. How should teachers respond to such errors? They might well decide to let it go, but they might decide to do a recast, or to talk about the difference. What does Li Wei suggest teachers do? Well, they certainly shouldn’t pounce on it and make the student feel bad – we don’t need his theory to tell us that. Maybe his theory suggests that teachers should celebrate this particular error, talk about it, discuss other examples. What about other errors, such as “I have twenty years” (I’m twenty) or “He goed to the library” (He went to the bookshop) and millions more? I asked Li Wei on Twitter how he would advise teachers to deal with the “dream with …” error and he didn’t reply. I suggest that his reluctance stemmed from the fact that he’s trapped in his own daft “theory” which doesn’t want to recognise “errors” that students of English as an L2 make. The theory doesn’t like the use of the word “errors” and it doesn’t like the construct of negative transfer, either. Yet errors play a key role in interlanguage development, and negative transfer has been observed millions of times by researchers and teachers: it’s a fact which can’t – or at the very least shouldn’t – be proscribed because it offends the dictates of a half-baked theory.

Here’s a text that I’ve invented, which might have come from an overseas student doing an MA in Applied Linguistics.

Second language is foreign language or additional language and is learned in addition to first language. There is multiple uses of L2 for example tourism, business, study and other purposes (Jones and Smith, 2018). Acquire fluency in second language learning (SLL) can prove difficulty because of considerations of age, culture clash and other environmental factors. One example prevailing theory is critical period hypothesis (DeKeyser, 2000) which says young children have imprtant advantages over adults to acquire a L2.

How would a teacher versed in the theory of Translanguaging deal with this text? How would it differ from the treatment given by a teacher who is unaware of this theory? My point is simple: Translanguaging theory is yet to give any significant guidance to teachers’ practice. Why? Because its proponents are focused on theoretical concerns, particularly the promotion of a relativist epistemology and a peculiar way of observing phenomena through a socio-cultural lens.

Philip Kerr, who I’m sure would be anxious to insist that my views and his don’t coincide, comments in his recent post about translanguaging on the poverty of its practical results:

  • Jason Anderson’s (2021) Ideas for translanguaging offers “nothing that you might not have found twenty or more years ago (e.g. in Duff, 1989; or Deller & Rinvolucri, 2002)”.
  • Rabbidge’s (2019) book, Translanguaging in EFL Contexts, differs little from earlier works which suggest incorporating the L1.
  • The Translanguaging Classroom by García and colleagues (2017) offers “little if anything new by way of practical ideas”.

Those teachers who manage to make sense of Li Wei’s ELTJ article will be left without any idea about what its practical consequences are.   

Finally, a political comment of my own. There’s some brief stuff in the article about ELT as a commercially driven, capitalist industry, all of which has been far more carefully and interestingly discussed elsewhere. I wonder if Prof. Li Wei will ever give his full attention to the coursebook-driven world of ELT, to the producers of the CEFR, the high stakes exams like the IELTS, or the Second Language Teacher Education racket. There’s nothing new in Li Wei’s 2021 article. It’s a carefully confected, warmed-over, one-more-article-under-the-belt job. In my next post I’ll examine the more substantial 2018 article, “Translanguaging as a Practical Theory of Language”, where his discussion of multilingual students’ dialogues first appeared.

References can be found in Li Wei (2021), which is free to download – click the link at the start of this post.

The Enigma of the Interface

Trying to understand the process of SLA, some scholars have concentrated on the psychological aspects of learning. What goes on in the mind? How does a learner go from not knowing to knowing an L2? In this post, I discuss attempts to explain the psychological process of learning an L2 and the enigma of the interface between conscious and unconscious learning. Bill VanPatten said in his plenary at the BAAL 2018 conference that language teaching can only be effective if it comes from an understanding of how people learn languages. In my opinion, the question of the interface between implicit and explicit learning is vital to such an understanding; it has a direct bearing on the syllabuses, materials and assessment tools used in ELT programmes. If, as I’ll suggest, learning an L2 is predominantly an unconscious process which happens mostly when learners are focused on meaning, then current ELT syllabuses, materials and tests, which reject this conclusion, are bound to hinder rather than help efficacious teaching.  

Declarative and Procedural knowledge

One way of attempting to explain L2 learning from a psychological perspective is to make a distinction between declarative and procedural knowledge. The argument goes as follows. Unlike the knowledge required to know about geography or human anatomy, for example, knowing how to use an L2 for communication relies on unconscious procedural knowledge: knowledge of how to use the language for communicative purposes. Conscious declarative knowledge about the language plays a very minor role. For all a learner’s declarative knowledge, if they lack the (procedural) knowledge needed to use the language in real-time communicative events, they can do little with their declarative knowledge, except pass exams which test it. (This typical item from an English exam, “John — to Paris yesterday. A) goes; B) went; C) has gone”, tests declarative knowledge in much the same way as this item from a geography exam: “Paris is the capital of A) Belgium; B) France; C) Spain”.) Stories abound of primary and secondary school students in English-speaking countries who were taught French for many years, passed successive exams in French based on declarative knowledge about the language, but failed to put their knowledge to effective use when they visited Paris. Their declarative knowledge – their knowledge about French – didn’t help them much when it came to using French to get things done in France. They lacked procedural knowledge.
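
If a concrete analogy helps, here’s a toy sketch of the distinction in code. It’s entirely my own illustration, not a model from the SLA literature: the verbs, the stored “rules” and the step counts are invented, and the steps simply stand in for processing cost and attention.

```python
# Toy contrast between declarative rule look-up and proceduralized retrieval.
# Purely illustrative: the rules, items and step counts are invented.

PAST_TENSE_FACTS = {"go": "went"}   # explicit, verbalizable knowledge

def declarative_answer(verb):
    """Consult explicit knowledge step by step (slow, attention-demanding)."""
    steps = 1                          # recall that the task needs the past tense
    if verb in PAST_TENSE_FACTS:
        steps += 1                     # retrieve the stored irregular form
        form = PAST_TENSE_FACTS[verb]
    else:
        steps += 2                     # retrieve the "-ed" rule, then apply it
        form = verb + "ed"
    return form, steps

# After extensive practice the mapping is direct: no rule consultation at all.
PROCEDURALIZED = {"go": "went", "walk": "walked"}

def procedural_answer(verb):
    """Retrieve the form in one step (fast, automatic, unavailable to introspection)."""
    return PROCEDURALIZED[verb], 1

print(declarative_answer("walk"))   # ('walked', 3) - fine for an exam, too slow for conversation
print(procedural_answer("walk"))    # ('walked', 1)
```

The exam item above only probes the first kind of knowledge; getting things done in Paris needs the second.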

The declarative/procedural distinction is, of course, a theory, a tentative explanation of (among other things) the curious phenomenon of people who know a lot about an L2 but can’t put it to practical use, and it uses the theoretical constructs of declarative and procedural knowledge to provide the explanation. But, of course, these theoretical constructs need fleshing out. Which brings us to Krashen’s Monitor Model. Krashen argues that we learn an L2 in much the same way as we learn our L1s as infants, and he relies on Chomsky’s Universal Grammar (UG) theory to explain early language learning. Chomsky’s UG theory is difficult for non-specialists to understand, but in the simplest terms, UG theory claims that all languages share universal features, and they vary in terms of certain parameters. Grammaticality judgement experiments involving very large numbers of children over a period of more than forty years demonstrate that children know a great deal more about the languages they use than can be explained by looking at the language they have been exposed to (the Poverty of the Stimulus (PoS) argument). Chomsky argues that the best explanation for children’s extraordinary ability to use language by the time they’re 12 years old, and for the profound, intricate knowledge that underlies that ability, is that humans are hard-wired for language learning: it’s part of human nature. Innate knowledge about the general structure of language allows young children to “bootstrap” language encountered in the environment. They respond to input not as tabulae rasae, but rather as humans prepared for the job: input triggers the setting of parameters on the innate knowledge they already have of the deep structure of languages. I remain unconvinced by attempts in the last sixty years – chaos theories, connectionist models, emergentist theories – to answer the PoS argument, or to provide a better explanation than Chomsky’s, but let’s see.
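
For readers who want a feel for what “parameter setting” amounts to, here’s a deliberately crude toy sketch of my own. It isn’t Chomsky’s formalism; the head-direction parameter and the “votes” are simplifications invented for exposition. The point is only that a learner who already expects phrases to have heads and complements needs very little input to fix the setting, which is what “bootstrapping” is meant to capture.

```python
# Deliberately crude sketch of parameter setting: the learner already "knows"
# that phrases have heads and complements, and only has to decide from input
# whether heads come first or last. An expository toy, not an acquisition model.

def set_head_direction(observed_phrases):
    """observed_phrases: (head, complement_position) pairs, e.g. ('eat', 'after')."""
    votes = {"head-initial": 0, "head-final": 0}
    for head, complement_position in observed_phrases:
        if complement_position == "after":    # e.g. English: "eat [an apple]"
            votes["head-initial"] += 1
        else:                                 # e.g. Japanese: "[ringo o] taberu"
            votes["head-final"] += 1
    return max(votes, key=votes.get)

english_like = [("eat", "after"), ("on", "after"), ("think", "after")]
japanese_like = [("taberu", "before"), ("de", "before")]

print(set_head_direction(english_like))    # head-initial
print(set_head_direction(japanese_like))   # head-final
```

Without the prior expectation that there is a head-direction choice to make, the same handful of examples would tell the learner nothing, which is the PoS point in miniature.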

Back to Krashen’s Monitor Model. Adults learn an L2 in much the same way as children learn languages, by exposure to “comprehensible input”: language which they can broadly understand, even if there are unknown elements in it. They “acquire” language unconsciously, and what they learn consciously is metalinguistic knowledge – knowledge about the language – which is of extremely limited use. It’s perfectly possible to do without this metalinguistic knowledge, as the millions of people who settle in foreign countries and learn the new language without it attest. In the model, comprehensible input feeds the unconscious acquisition process, while consciously learned knowledge serves only as a Monitor for editing output.

Given its insistence on the very limited role played by conscious learning, Krashen’s theory is an example of the “No interface” view – there is a clear difference between implicit (unconscious) learning and explicit (conscious) learning. These types of learning go on in different parts of the mind; they hardly affect each other; and implicit learning is what matters. Krashen’s theory was heavily criticised, notably by Gregg (1984) and McLaughlin (1987), who both highlighted the model’s poorly defined constructs, which lead to circularity. Furthermore, most scholars considered that, whatever its merits, the model was too dismissive of the role explicit learning plays in L2 learning.

Next comes VanPatten’s Input Processing (IP) theory, which attempts to explain how learners turn input into intake by parsing input during the act of comprehension while their primary attention is on meaning. VanPatten’s model consists of a set of principles (see my blog post on Processing Input for a list of these principles) which interact in working memory, taking into account the fact that working memory has very limited processing capacity. Content lexical items are searched out first, since words are the principal source of referential meaning. When a content lexical item and a grammatical form both encode the same meaning and when both are present in an utterance, learners attend to the lexical item, not the grammatical form. Perhaps the most important construct in the IP model is “Communicative value”: the more communicative value a form has, the more likely it is to get processed and made available in the intake data for acquisition, and it’s thus the forms with no or little communicative value which are least likely to get processed and, without help, may never get acquired. In this theory, the processing is mostly unconscious, but explicit attention to some aspects of the L2 is seen as helpful, so, IMO, the IP theory belongs in the Very Weak Interface camp.
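
Here’s a toy rendering of two of those principles, the lexical-preference idea and the role of communicative value. The cue scores and the capacity limit are invented for illustration; this is not VanPatten’s formalism, just a sketch of the claim that content words get processed first and that forms with little communicative value may never make it into intake without help.

```python
# Toy sketch of two Input Processing ideas: lexical items are attended to before
# grammatical forms, and low-value forms may never get processed at all.
# The cue scores and the capacity limit are invented for illustration.

CUES = {
    "yesterday":     {"type": "lexical", "value": 0.9},  # content word encoding pastness
    "-ed":           {"type": "form",    "value": 0.3},  # redundant once 'yesterday' is heard
    "3rd-person -s": {"type": "form",    "value": 0.1},  # little communicative value
}

def intake(cues, capacity=1):
    """Process cues lexical-first, then by communicative value, within a capacity limit."""
    ordered = sorted(cues.items(),
                     key=lambda kv: (kv[1]["type"] != "lexical", -kv[1]["value"]))
    return [name for name, _ in ordered[:capacity]]

print(intake(CUES))                # ['yesterday'] - the past-tense form never gets processed
print(intake(CUES, capacity=2))    # ['yesterday', '-ed']
```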

William O’Grady proposes a ‘general nativist’ theory of first and second language acquisition which posits a modular acquisition device that does not include Universal Grammar. O’Grady says his work falls under the emergentist rubric, but since he sees the acquisition device as a modular part of the mind, he’s a long way from the real empiricists in the emergentist camp. Interestingly, O’Grady accepts that there are sensitive periods involved in language learning, and that the problems adults face in L2 acquisition can be explained by the fact that adults have only partial access to the (non-UG) L1 acquisition device. O’Grady describes a different kind of processor, doing more general things, but it’s still a language processor, and it’s still working not just on segments of the speech stream and on words, but on syntax; he thus conforms to the view that language is an abstract system governed by rules of syntax. Given his “partial access” view, I think O’Grady also belongs in the Very Weak Interface camp.

Swain’s (1985) famous study of French immersion programmes led to her claim that comprehensible input alone can allow learners to reach high levels of comprehension, but that their proficiency and accuracy in production will lag behind, even after years of exposure. Further studies gave more support to this view, and to the opinion that comprehensible input is a necessary but not sufficient condition for proficiency in an L2. Swain’s argument is that we must give more attention to output.

And now we come to Schmidt’s view that in order to learn an L2, we need to “notice” formal features of the input. This enormously influential – and enormously misinterpreted – hypothesis lies at the heart of what the heading of this post calls “the enigma of the interface”.

First, we have to appreciate that Schmidt uses the word “noticing” in a very special, technical way – it’s a theoretical construct not to be confused with the dictionary definition. “Noticing” has subsequently been used, citing Schmidt, in ways that Schmidt roundly rejected. See my post on Schmidt for a brief outline of how he came to form the construct and note Truscott’s (2015) remarks:

Perhaps more disturbing are efforts to use noticing as a theoretical foundation for grammar instruction in general, without concern for whether any given grammar point is or is not a legitimate object of noticing for Schmidt (e.g. R. Ellis, 1993, 1994, 1995; Long & Robinson, 1998; Nassaji & Fotos, 2004). A genuine application of Schmidt’s concept of noticing would have to take this issue seriously. Given the mismatch between the typically broad aims of grammar instruction and the relatively narrow scope of Schmidt’s noticing, it is perhaps not surprising that pedagogical approaches tend to be applications of noticing in name only.

 When others use the term ‘noticing’ and cite Schmidt as its source, they are claiming – whether the claim is explicit or not – that their use rests on this work, when in fact it typically does not. The result is a widespread belief that research and pedagogy in the area of L2 learning are now being guided by a firmly established concept, rooted in extensive review and analysis of research and theory in psychology. But this appearance is an illusion. The reality, whatever one thinks of Schmidt’s noticing, is that most of the relevant work is guided by nothing more than a loose, intuitive notion that consciousness is somehow important.

To the issue, then. Schmidt asks: Is it possible to learn formal aspects of a second language that are not consciously noticed? His answer, at least in the original (1990) version of the hypothesis, is “No”. Schmidt points to disagreement on a definition of “intake”. While Krashen seems to equate intake with comprehensible input, Corder distinguishes between what is available for going in and what actually goes in, but neither Krashen nor Corder explains what part of input functions as intake for the learning of form. Schmidt also notes the distinction Slobin (1985) and Chaudron (1985) make between preliminary intake (the processes used to convert input into stored data that can later be used to construct language) and final intake (the processes used to organise stored data into linguistic systems). Schmidt proposes that all this confusion is resolved by defining intake as:

that part of the input which the learner notices … whether the learner notices a form in linguistic input because he or she was deliberately attending to form, or purely inadvertently. If noticed, it becomes intake (Schmidt, 1990: 139).

Thus:

subliminal language learning is impossible, and … noticing is the necessary and sufficient condition for converting input into intake (Schmidt, 1990:  130).

I hesitantly give Rod Ellis’ model of Schmidt’s view (hesitantly because it’s been challenged by many):

This is, obviously, a Strong Interface view, the complete opposite of Krashen’s: conscious knowledge of the grammar of the L2 is a necessary and sufficient condition for L2 learning.

I’ve written various posts on Schmidt’s work (see, particularly, the series of three posts on “Encounters with Noticing”), so here I’ll give a very quick summary of objections to it.

Schmidt’s original hypothesis claimed that input can’t get processed without being “noticed”, and therefore all L2 learning is conscious. This claim is either trivially true, by adopting circular definitions of ‘conscious’ and ‘learning’, or obviously false. The claim that L2 learning is a process that starts with input going through a necessary stage in short-term memory where “language features” are “noticed” is untenable. It amounts to the claim that all language features in the L2 shuffle through short-term memory and if unnoticed have to re-present themselves. But how can language features present themselves, even once? As Gregg said in a comment on this blog:

Noticing is a perceptual act; you can’t perceive what is not in the senses, so far as I know. Connections, relations, categories, meanings, essences, rules, principles, laws, etc. are not in the senses.

Schmidt can’t expect us to accept that our knowledge of language is the result of “noticing” things from the environment that are presented to the senses because, quite simply, aspects of a language’s grammar, for example, are not there to be “noticed”. Furthermore, Ellis’s figure suggests that the three constructs on the top row – “noticing”, “comparing” and “integrating” – are what turn input into output and explain IL development. But where, according to the figure, is the noticing supposed to take place? And what is “short/medium-term memory”?

In his 2010 paper, Schmidt confirms the concessions made in 2001, which amount to saying that ‘noticing’ is not needed for all L2 learning, but that the more you notice the more you learn. He also confirms that noticing does not refer to “noticing the gap”. However, the hypothesis remains unsatisfactory, for the following reasons:

1. The Noticing Hypothesis, even in its weaker version, doesn’t clearly describe the construct of “noticing”. There is no way to discern what is and isn’t “noticed”.

2. The empirical support claimed for the Noticing Hypothesis is not as strong as Schmidt (2010) claims.

3. A theory of SLA based on noticing a succession of forms faces the impassable obstacle that, as Schmidt (2010) seemed finally to admit, you can’t “notice” rules or principles of grammar.

Gass: An Integrated Theory of SLA

Gass (1997), influenced by Schmidt, offers a more complete picture of what happens to input. She says it goes through stages of apperceived input, comprehended input, intake, integration, and output, thus subdividing Krashen’s comprehensible input into three stages: apperceived input, comprehended input, and intake. I don’t quite get “apperceived input”: Gass says it’s the result of attention, in a sense similar to Tomlin and Villa’s (1994) notion of orientation, and Schmidt says it’s the same as his noticing, which doesn’t help me much. In any case, once the intake has been worked on in working memory, Gass stresses the importance of negotiated interaction during input processing and eventual acquisition. I find the Gass model a rather unsatisfactory compilation of bits, but it suggests that L2 learning is predominantly a process of implicit learning, and so it takes a Weak Interface stance.

Skills-based Theory

Skills-based theory also supports the Strong Interface view. It is usually based on John Anderson’s (1983) ‘Adaptive Control of Thought’ model (a general learning theory, not a theory of SLA), which makes the distinction described above between declarative knowledge – conscious knowledge of facts – and procedural knowledge – unconscious knowledge of how an activity is done. When applied to instructed second language learning, the model suggests that learners should first be presented with information about the L2 (declarative knowledge) and should then practice using that information in controlled, and then more loosely controlled, ways, so that what they have consciously learned about the language is converted into unconscious knowledge of how to use the L2 (procedural knowledge). The learner moves from controlled to automatic processing and, through intensive, linguistically focused rehearsal, achieves increasingly faster access to, and more fluent control over, the L2 (see DeKeyser, 2007, for example).

The fact that nearly everybody successfully learns at least one language as a child without starting from declarative knowledge, and that millions of people learn additional languages without studying them (migrant workers, for example), challenges the claim that language learning needs to begin with the imparting of declarative knowledge. Furthermore, the phenomenon of L1 transfer doesn’t fit well with a skills-based approach, and neither do putative critical periods for language learning. But the main reason for rejecting such an approach is that it contradicts SLA research findings on interlanguage development.

Selinker (1972) introduced the construct of interlanguages to explain learners’ transitional versions of the L2. Studies show that interlanguages exhibit common patterns and features, and that learners pass through well-attested developmental sequences on their way to different end-state proficiency levels. Examples of such sequences are found in morpheme studies; the four-stage sequence for ESL negation; the six-stage sequence for English relative clauses; and the sequence of question formation in German (see Hong and Tarone, 2016, for a review). Regardless of the order or manner in which target-language structures are presented in coursebooks, learners analyse input and create their own interim grammars, slowly mastering the L2 in roughly the same manner and order. Interlanguage (IL) development of individual structures has very rarely been found to be linear. Accuracy in a given grammatical domain typically progresses in a zigzag fashion, with backsliding, occasional U-shaped behavior, over-suppliance and under-suppliance of target forms, flooding and bleeding of a grammatical domain (Huebner 1983), and considerable synchronic variation, volatility (Long 2003a), and diachronic variation. So the assumption that learners can move from zero knowledge to mastery of formal parts of the L2 one at a time, and then move on to the next item on a list, is a fantasy. Explicit instruction in a particular structure can produce measurable learning. However, the studies that have shown this have usually devoted far more extensive periods of time to intensive practice of the targeted feature than is available in a typical course. Also, the few studies that have followed students who received such instruction over time (e.g., Lightbown 1983) have found that, once the pedagogic focus shifts to new linguistic targets, learners revert to an earlier stage on the normal path to acquisition of the structure they had supposedly mastered in isolation and “ahead of schedule.”

Note that interlanguage development refers not just to grammar; pronunciation, vocabulary, formulaic chunks, collocations and sentence patterns are all part of the development process. To take just one example, U-shaped learning curves can be observed in learning the lexicon. Learners have to master the idiosyncratic nature of words, not just their canonical meaning. When learners encounter a word in a correct context, the word is not simply added to a static cognitive pile of vocabulary items. Instead, they experiment with the word, sometimes using it incorrectly, thus establishing where it works and where it doesn’t. Only by passing through this period of incorrectness, in which the word is used in a variety of ways, can they climb back up the U-shaped curve.

Interlanguage development takes place in line with what Corder (1967) referred to as the internal “learner syllabus”. Students don’t learn different bits of the L2 when and how a teacher might decide to deal with them, but only when they are developmentally ready to do so. As Pienemann demonstrates (e.g., Pienemann, 1987), learnability (i.e., what learners can process at any one time) determines teachability (i.e., what can be taught at any one time).

Emergentism

Emergentism is an umbrella term referring to a range of usage-based theories which are fast becoming the new paradigm for psycholinguistic research. The return to this more “empiricist” view involves a discussion of the philosophy of science which I won’t go into here, although I discuss it at length in my book on Theory Construction and SLA (Jordan, 2004). It’s complicated! Anyway, “connectionist” and associative learning views are based on the premise that language emerges from communicative use, and that the process of L2 learning does not require resorting to any putative “black box” in the mind to explain it. A leading spokesman for emergentism is Nick Ellis (e.g., 1998, 2002, 2019), who argues that language processing is “intimately tuned to input frequency”. This leads him to develop a usage-based theory which holds that “acquisition of language is exemplar based”.
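
To make the idea of frequency-tuned, exemplar-based learning a little more concrete, here is a toy sketch of my own – emphatically not Ellis’s model, and far cruder than any real connectionist implementation – in which the association strength between co-occurring words is incremented every time they appear together in the input:

```python
# Toy illustration (not Ellis's model): learning as the gradual strengthening
# of associations between co-occurring elements of the input.
from collections import defaultdict
from itertools import combinations

# A tiny corpus of "utterances" standing in for the input a learner is exposed to.
corpus = [
    "i want some tea",
    "i want some coffee",
    "do you want some tea",
    "i like strong coffee",
]

# Association strengths between word pairs, built up incrementally.
weights = defaultdict(float)

for utterance in corpus:
    words = utterance.split()
    for pair in combinations(words, 2):
        weights[frozenset(pair)] += 1.0  # each co-occurrence strengthens the link

def association(w1, w2):
    """Return the learned association strength between two words."""
    return weights[frozenset((w1, w2))]

# After exposure, frequent collocates are more strongly associated than rare ones:
print(association("want", "some"))   # 3.0 - co-occur in three utterances
print(association("strong", "tea"))  # 0.0 - never co-occur in this input
```

The toy shows only what “gradual strengthening of associations” and “probabilistic knowledge” amount to in computational terms; genuine connectionist models use distributed representations and error-driven learning rather than raw co-occurrence counts.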

The power law of practice is taken by Ellis as the underpinning for his frequency-based account. Ellis argues that “a huge collection of memories of previously experienced utterances”, rather than knowledge of abstract rules, is what underlies the fluent use of language. In short, emergentists take language learning to be “the gradual strengthening of associations between co-occurring elements of the language”, and they see fluent language performance as “the exploitation of this probabilistic knowledge” (Ellis, 2002: 173).
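
For readers who haven’t met it, the power law of practice is usually given in something like the following textbook form (my gloss, not Ellis’s own notation):

$$
RT_n = a + b\,n^{-c}
$$

where $RT_n$ is the time (or error rate) on the $n$th practice trial or encounter, $a$ is the asymptote, $b$ is the difference between initial and asymptotic performance, and $c$ is the learning rate. Performance improves steeply at first and then ever more slowly, which is why sheer frequency of exposure carries so much weight in usage-based accounts.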

Ellis is committed to a Saussurean view, which sees “the linguistic sign” as a set of mappings between phonological forms and communicative intentions. He claims that

simple associative learning mechanisms operating in and across the human systems for perception, motor-action and cognition, as they are exposed to language data as part of a communicatively-rich human social environment by an organism eager to exploit the functionality of language are what drives the emergence of complex language representations.

My personal view, following Gregg (2003), is that combining observed frequency effects with the power law of practice, and thus explaining acquisition order by appealing to frequency in the input, doesn’t go very far in explaining the acquisition process itself. What role do frequency effects have? How do they interact with other aspects of the SLA process? In other words, we need to know how frequency effects fit into a theory of SLA, because frequency and the power law of practice don’t in themselves provide a sufficient theoretical framework, and neither does connectionism. As Gregg points out, “connectionism itself is not a theory; it is a method, and one that in principle is neutral as to the kind of theory to which it is applied” (Gregg, 2003: 55). Emergentism stands or falls on connectionist models, and so far the results are disappointing. A theory that will explain the process by which nature and nurture, genes and the environment, interact without recourse to innate knowledge remains “around the corner”, as Ellis admits.

So where do we put emergentist theories when it comes to the interface riddle? Without doubt, they belong in the Very Weak Interface camp: they argue that language learning is an essentially implicit process, and that the role of explicit learning is a minor one. For example, Nick Ellis uses a weak version of Schmidt’s Noticing Hypothesis to argue that, in instructed SLA, drawing students’ attention to certain non-salient or infrequent parts of the input can “reset the dial” (the dial set by their L1s) and thus enable further implicit learning. I note that Mike Long agreed with this view and contributed to it in his later work. Mike’s untimely passing earlier this year prevented us from resolving our differences.

Carroll: Autonomous Induction

Finally, we come to those who challenge the basis of the input -> processing -> output model, and in particular the Noticing Hypothesis. Truscott and Sharwood Smith (2004) propose their MOGUL framework, and Carroll (2001) offers her Autonomous Induction theory. Both rely heavily on Jackendoff’s (1992) Representational Modularity theory (see my post on Jackendoff’s place in Carroll’s theory). A few words on Carroll’s work.

Carroll challenges the basis of Krashen’s and subsequent scholars’ theories. She sees input as physical stimuli, and intake as a subset of those stimuli.

The view that input is comprehended speech is mistaken. Comprehending speech happens as a consequence of a successful parse of the speech signal. Before one can successfully parse the L2, one must learn its grammatical properties. Krashen got it backwards! (Carroll, 2001, p. 78).

So, says Carroll, language learning requires the transformation of environmental stimuli into mental representations, and it’s these mental representations which must be the starting point for language learning. In order to understand speech, for example, properties of the acoustic signal have to be converted to intake; in other words, the auditory stimulus has to be converted into a mental representation.

“Intake from the speech signal is not input to learning mechanisms, rather it is input to speech parsers. … Parsers encode the signal in various representational formats” (Carroll, 2001, p. 10).

Carroll gives a detailed examination of what happens to environmental stimuli by appeal to Jackendoff’s theory, which I discuss in a post on his contribution to Carroll’s theory. I also discuss Carroll’s theory in a number of posts (use the Search box). Suffice it to say here that Carroll’s and Truscott & Sharwood Smith’s theories agree, in their different ways, that L2 learning is predominantly a matter of implicit learning, and that both belong in the Very Weak Interface camp.

Conclusion

Krashen and Schmidt can’t both be right: at least one of them is wrong. The same goes for Krashen and Nick Ellis, and for N. Ellis and Carroll. And on and on. However, when it comes to deciding on the interface enigma, only Schmidt and skills-based theory take a Strong Interface view. What we can conclude is that, whether they appeal to some version of innate knowledge at work in language learning or rely on simpler, more general learning mechanisms working on input from the environment, SLA scholars agree that learning an L2 depends mostly on implicit learning. The implication is that ELT based on a synthetic syllabus, which gives prime place to teaching explicit knowledge about the language, leads to inefficacious classroom practices.

References         

Carroll, S. (2001). Input and Evidence. Amsterdam: Benjamins.

Chaudron, C. (1985). Intake: On Models and Methods for Discovering Learners’ Processing of Input. Studies in Second Language Acquisition, 7, 1, 1-14.

Corder, P. (1967). The significance of learners’ errors. International Review of Applied Linguistics, 5, 161-169.

DeKeyser, R. (2007). Practice in a Second Language: Perspectives from Applied Linguistics and Cognitive Psychology (Cambridge Applied Linguistics). Cambridge: Cambridge University Press.

Ellis, N. (1998). Emergentism, connectionism and language learning. Language Learning, 48, 4, 631-664.

Ellis, N. C. (2002). Frequency effects in language processing: A review with implications for theories of implicit and explicit language acquisition. Studies in Second Language Acquisition, 24, 2, 143-188.

Ellis, N. C. (2019). Essentials of a theory of language cognition. Modern Language Journal, 103.

Gregg, K. R. (1984). Krashen’s Monitor and Occam’s Razor. Applied Linguistics, 5, 2, 79-100.

Gregg, K. R. (1993). Taking explanation seriously. Applied Linguistics, 14, 3.

Jackendoff, R. S. (1992). Languages of the Mind. Cambridge, MA: MIT Press.

Krashen, S. (1981). Second Language Acquisition and Second Language Learning. Oxford: Pergamon Press.

Krashen, S. (1982). Principles and Practice in Second Language Acquisition. Oxford: Pergamon Press.

Krashen, S. (1985). The Input Hypothesis: Issues and Implications. New York: Longman.

McLaughlin, B. (1987). Theories of second language learning. London: Edward Arnold.

O’Grady, W. (2005)  How Children Learn Language. Cambridge, UK: Cambridge University Press.

Schmidt, R. (1990). The role of consciousness in second language learning. Applied Linguistics, 11, 2, 129-58.

Schmidt, R. (2001). Attention. In P. Robinson (Ed.), Cognition and second language instruction (pp. 3-32). Cambridge: Cambridge University Press.

Schmidt, R. (2010). Attention, awareness, and individual differences in language learning. In W. M. Chan et al. (Eds.), Proceedings of CLaSIC 2010, Singapore, December 2-4 (pp. 721-737). Singapore: National University of Singapore, Centre for Language Studies.

Slobin, D. I. (Ed.). (1985). The crosslinguistic study of language acquisition. Vol. 1: The data; Vol. 2: Theoretical issues. Lawrence Erlbaum Associates.

Truscott, J. (1998). Noticing in second language acquisition: A critical review. Second Language Research, 14, 2, 103-135.

Truscott, J. (2015) Consciousness and SLA. Multilingual Matters.

Truscott , J. , & Sharwood Smith , M.( 2004 ). Acquisition by processing: A modular approach to language development. Bilingualism: Language and Cognition, 7, 1 – 20 .