Teacher Trainers and Educators in ELT

This blog is dedicated to improving the quality of Second Language Teacher Education (SLTE)

The Teacher Trainers and Educators 

The most influential ELT teacher trainers and educators are those who publish “How to teach” books and articles, have on-line blogs and a big presence on social media, give presentations at ELT conferences, and travel around the world giving workshops and teacher training and development courses. Many of the best known and highest paid teacher educators are also the authors of coursebooks. Apart from the top “influencers”, there are tens of thousands of teacher trainers worldwide who deliver pre-service courses such as the CELTA, the Trinity Cert TESOL, or an MA in TESOL, and thousands more working with practicing teachers on courses such as the DELTA and MA programmes. Special Interest Groups in TESOL and IATEFL also have considerable influence.

What’s the problem? 

Most current SLTE pays too little attention to the question “What are we doing?”, and the follow-up question “Is what we’re doing effective?”. The assumption that students will learn what they’re taught is left unchallenged, and those delivering SLTE concentrate either on coping with the trials and tribulations of being a language teacher (keeping fresh, avoiding burn-out, growing professionally and personally) or on improving classroom practice. As to the latter, they look at new ways to present grammar structures and vocabulary, better ways to check comprehension of what’s been presented, more imaginative ways to use the whiteboard to summarise it, more engaging activities to practice it, and the use of technology to enhance it all, or do it online.  A good example of this is Adrian Underhill and Jim Scrivener’s “Demand High” project, which leaves unquestioned the well-established framework for ELT and concentrates on doing the same things better. In all this, those responsible for SLTE simply assume that current ELT practice efficiently facilitates language learning.  But does it? Does the present model of ELT actually deliver the goods, and is making small, incremental changes to it the best way to bring about improvements? To put it another way, is current ELT practice efficacious, and is current SLTE leading to significant improvement? Are teachers making the most effective use of their time? Are they maximising their students’ chances of reaching their goals?

As Bill VanPatten argued in his plenary at the BAAL 2018 conference, language teaching can only be effective if it comes from an understanding of how people learn languages. In 1967, Pit Corder was the first to suggest that the only way to make progress in language teaching is to start from knowledge about how people actually learn languages. Then, in 1972, Larry Selinker suggested that instruction on the formal properties of language has a negligible impact (if any) on real development in the learner. Next, in 1983, Mike Long again raised the issue of whether instruction on the formal properties of language makes a difference to acquisition. Since these important publications, hundreds of empirical studies have been published on everything from the effects of instruction to the effects of error correction and feedback. This research has in turn resulted in meta-analyses and overviews that can be used to measure the impact of instruction on SLA. All the research indicates that the current, deeply entrenched approach to ELT, where most classroom time is dedicated to explicit instruction, vastly over-estimates the efficacy of such instruction.

So in order to answer the question “Is what we’re doing effective?”, we need to periodically re-visit questions about how people learn languages. Most teachers are aware that we learn our first language(s) unconsciously and that explicit learning about the language plays a minor role, but they don’t know much about how people learn an L2. In particular, few teachers know that the consensus among SLA scholars is that implicit learning through using the target language for relevant, communicative purposes is far more important than explicit instruction about the language. Here are just four examples from the literature:

1. Doughty (2003) concludes her chapter on instructed SLA by saying:

In sum, the findings of a pervasive implicit mode of learning, and the limited role of explicit learning in improving performance in complex control tasks, point to a default mode for SLA that is fundamentally implicit, and to the need to avoid declarative knowledge when designing L2 pedagogical procedures.

2. Nick Ellis (2005) says:

the bulk of language acquisition is implicit learning from usage. Most knowledge is tacit knowledge; most learning is implicit; the vast majority of our cognitive processing is unconscious.

3. Whong, Gil and Marsden’s (2014) review of a wide body of studies in SLA concludes:

“Implicit learning is more basic and more important than explicit learning, and superior. Access to implicit knowledge is automatic and fast, and is what underlies listening comprehension, spontaneous speech, and fluency. It is the result of deeper processing and is more durable as a result, and it obviates the need for explicit knowledge, freeing up attentional resources for a speaker to focus on message content”.

4. Han and Nassaji (2018) review 35 years of instructed SLA research and, citing the latest meta-analysis, say:

On the relative effectiveness of explicit vs. implicit instruction, Kang et al. reported no significant difference in short-term effects but a significant difference in longer-term effects with implicit instruction outperforming explicit instruction.

Despite lots of other disagreements among themselves, the vast majority of SLA scholars agree on this crucial matter. The evidence from research into instructed SLA gives massive support to the claim that concentrating on activities which help implicit knowledge (by developing the learners’ ability to make meaning in the L2, through exposure to comprehensible input, participation in discourse, and implicit or explicit feedback) leads to far greater gains in interlanguage development than concentrating on the presentation and practice of pre-selected bits and pieces of language.

One of the reasons why so many teachers are unaware of the crucial importance of implicit learning is that so few of those responsible for SLTE talk about it. Teacher trainers and educators don’t tell pre-service or practicing teachers about the research findings on interlanguage development, or that language learning is not a matter of assimilating knowledge bit by bit, or that the characteristics of working memory constrain rote learning, or that by varying different factors in tasks we can significantly affect the outcomes. And there’s a great deal more we know about language learning that those responsible for SLTE don’t pass on to teachers, even though it has important implications for everything in ELT: from syllabus design to the use of the whiteboard, from methodological principles to the use of IT, from materials design to assessment.

We know that in the not-so-distant past, generations of school children learnt foreign languages for 7 or 8 years, and the vast majority of them left school without the ability to maintain an elementary conversational exchange in the L2. Things have improved only to the extent that teachers have been informed about, and encouraged to critically evaluate, what we know about language learning, and have constantly experimented with different ways of engaging their students in communicative activities. To the extent that teachers continue to spend most of the time talking to their students about the language, those improvements have been minimal. So why is all this knowledge not properly disseminated?

Most teacher trainers and educators, including Penny Ur (see below), say that, whatever its faults, coursebook-driven ELT is practical, and that alternatives such as TBLT are not. Ur actually goes as far as to say that there’s no research evidence to support the view that TBLT is a viable alternative to coursebooks. Such an assertion is contradicted by the evidence. In a recent statistical meta-analysis by Bryfonski & McKay (2017) of 52 evaluations of program-level implementations of TBLT in real classroom settings, “results revealed an overall positive and strong effect (d = 0.93) for TBLT implementation on a variety of learning outcomes” in a variety of settings, including parts of the Middle-East and East Asia, where many have flatly stated that TBLT could never work for “cultural” reasons, and “three-hours-a-week” primary and secondary foreign language settings, where the same opinion is widely voiced. So there are alternatives to the coursebook approach, but teacher trainers too often dismiss them out of hand, or simply ignore them.

How many SLTE courses today include a sizeable component devoted to the subject of language learning, where different theories are properly discussed so as to reveal the methodological principles that inform teaching practice? Or, more bluntly: how many such courses give serious attention to examining the complex nature of language learning, which is likely to lead teachers to seriously question the efficacy of basing teaching on the presentation and practice of a succession of bits of language? Current SLTE doesn’t encourage teachers to take a critical view of what they’re doing, or to base their teaching on what we know about how people learn an L2. Too many teacher trainers and educators base their approach to ELT on personal experience, and on the prevalent “received wisdom” about what and how to teach. For thirty years now, ELT orthodoxy has required teachers to use a coursebook to guide students through a “General English” course which implements a grammar-based, synthetic syllabus through a PPP methodology. During these courses, a great deal of time is taken up by the teacher talking about the language, and much of the rest of the time is devoted to activities which are supposed to develop “the 4 skills”, often in isolation. There is good reason to think that this is a hopelessly inefficient way to teach English as an L2, and yet, it goes virtually unchallenged.

Complacency

The published work of most of the influential teacher educators demonstrates a poor grasp of what’s involved in language learning, and little appetite to discuss it. Penny Ur is a good example. In her books on how to teach English as an L2, Ur spends very little time discussing the question of how people learn an L2, or encouraging teachers to critically evaluate the theoretical assumptions which underpin her practical teaching tips. The latest edition of Ur’s widely recommended A Course in Language Teaching includes a new sub-section where precisely half a page is devoted to theories of SLA. For the rest of the 300 pages, Ur expects readers to take her word for it when she says, as if she knew, that the findings of applied linguistics research have very limited relevance to teachers’ jobs. Nowhere in any of her books, articles or presentations does Ur attempt to seriously describe and evaluate evidence and arguments from academics whose work challenges her approach, and nowhere does she encourage teachers to do so. How can we expect teachers to be well-informed, critically acute professionals in the world of education if their training is restricted to instruction in classroom skills, and their on-going professional development gives them no opportunities to consider theories of language, theories of language learning, and theories of teaching and education? Teaching English as an L2 is more art than science; there’s no “best way”, no “magic bullet”, no “one size fits all”. But while there’s still so much more to discover, we now know enough about the psychological process of language learning to know that some types of teaching are very unlikely to help, and that other types are more likely to do so. Teacher educators have a duty to know about this stuff and to discuss it with their trainees.

Scholarly Criticism? Where?  

Reading the published work of leading teacher educators in ELT is a depressing affair; few texts used for the purpose of teacher education in school or adult education demonstrate such poor scholarship as that found in Harmer’s The Practice of English Language Teaching, Ur’s A Course in Language Teaching, or Dellar and Walkley’s Teaching Lexically, for example. Why are these books so widely recommended? Where is the critical evaluation of them? Why does nobody complain about the poor argumentation and the lack of attention to research findings which affect ELT? Alas, these books typify the generally “practical” nature of SLTE and its reluctance to engage in any kind of critical reflection on theory and practice. Go through the recommended reading for most SLTE courses and you’ll find few texts informed by scholarly criticism. Look at the content of SLTE courses and you’ll be hard pushed to find one which includes a component devoted to a critical evaluation of research findings on language learning and ELT classroom practice.

There is a general “craft” culture in ELT which rather frowns on scholarship and seeks to promote the view that teachers have little to learn from academics. Those who deliver SLTE are, in my opinion, partly responsible for this culture. While it’s unreasonable to expect all teachers to be well informed about research findings regarding language learning, syllabus design, assessment, and so on, it is surely entirely reasonable to expect teacher trainers and educators to be so. I suggest that teacher educators have a duty to lead discussions, informed by relevant scholarly texts, which question common sense assumptions about the English language, how people learn languages, how languages are taught, and the aims of education. Furthermore, they should do far more to encourage their trainees to constantly challenge received opinion and orthodox ELT practices. This, surely, is the best way to help teachers enjoy their jobs, be more effective, and identify the weaknesses of current ELT practice.

My intention in this blog is to point out the weaknesses I see in the works of some influential ELT teacher trainers and educators, and invite them to respond. They may, of course, respond anywhere they like, in any way they like, but the easier it is for all of us to read what they say and join in the conversation, the better. I hope this will raise awareness of the huge problem currently facing ELT: it is in the hands of those who have more interest in the commercialisation and commodification of education than in improving the real efficacy of ELT. Teacher trainers and educators do little to halt this slide, or to defend the core principles of liberal education which Long so succinctly discusses in Chapter 4 of his book SLA and Task-Based Language Teaching.

The Questions

I invite teacher trainers and educators to answer the following questions:

1 What is your view of the English language? How do you transmit this view to teachers?

2 How do you think people learn an L2? How do you explain language learning to teachers?

3 What types of syllabus do you discuss with teachers? Which type do you recommend to them?

4 What materials do you recommend?

5 What methodological principles do you discuss with teachers? Which do you recommend to them?

References

Bryfonski, L. and McKay, T.H. (2017) TBLT implementation and evaluation: A meta-analysis. Language Teaching Research.

Dellar, H. and Walkley, A. (2016) Teaching Lexically. Delta.

Doughty, C. (2003) Instructed SLA. In Doughty, C. and Long, M. (eds) Handbook of SLA, pp. 256–310. New York, Blackwell.

Long, M. (2015) Second Language Acquisition and Task-Based Language Teaching. Oxford, Wiley.

Ur, P. A Course in Language Teaching. Cambridge, CUP.

Whong, M., Gil, K.H. and Marsden, H. (2014) Beyond paradigm: The ‘what’ and the ‘how’ of classroom research. Second Language Research, 30(4), 551–568.

Han, Z. and Nassaji, H. (2018) Introduction: A snapshot of thirty-five years of instructed second language acquisition. Language Teaching Research, in press.

On Flores (2020) From academic language to language architecture: Challenging raciolinguistic ideologies in research and practice

Flores (2020) uses the term ‘academic language’ 35 times in the course of his article, and yet never manages to explain what it refers to. He claims that scholars (e.g. Cummins, 2000 and Schleppegrell, 2004) see academic language as “a list of empirical linguistic practices that are dichotomous with non-academic language”. Nowhere does Flores clearly state what an “empirical linguistic practice” refers to, and nowhere does he delineate a list of these putative practices. Meanwhile, Flores attributes “less precise” definitions to educators. For them, academic language “includes content-specific vocabulary and complex sentence structures”, while non-academic language is “less specialized and less complex”. Thus, Flores offers no definition of the way he himself is using the term ‘academic language’.

Seemingly unperturbed by this failure to define the key term in his paper, Flores sails on, using the clumsy rudder of “framing” to guide him. Flores asserts that scholars and educators use a “dichotomous framing” of academic and home languages, such that

academic language warrants a complete differentiation from the rest of language that is framed as non-academic.

Flores proceeds to claim that academic language is not, in fact, a list of empirical linguistic practices (as if anyone had ever succinctly argued that it was), but rather “a raciolinguistic ideology that frames the home language practices of racialized communities as inherently deficient” and “typically reifies deficit perspectives of racialized students”.

Academic Language versus Language Architecture

As an alternative to ‘academic language’, Flores outlines the perspective of “language architecture”, which

frames racialized students as already understanding the relationship between language choice and meaning through the knowledge that they have gained through socialization into the cultural and linguistic practices of their communities.

To illustrate this perspective in action, a lesson plan built around a “translingual mentor text” is offered, to serve “as an exemplar” for teachers. The text “incorporates Spanish into a text that is primarily written in English that students could use to construct their own stories”. The goal is for students “to make connections between the language architecture that they engage in on a daily basis and the translingual rhetorical strategies utilized in the book in order to construct their own texts (Newman, 2012)”.

Having described how a teacher implements part of the lesson plan (or “unit plan”, as he calls it), Flores comments

To be fair, proponents of the concept of academic language would likely support this unit plan.

But there’s a “key difference” – language architecture doesn’t try to build bridges; instead, it assumes that “the language architecture that Latinx children from bilingual communities engage in on a daily basis is legitimate on its own terms and is already aligned to the CCSS”.

Discussion

Flores’ 2020 article is based on a strawman version of Cummins’ term ‘academic language’ (see, for example, Cummins and Yee-Fun, 2007), which Cummins uses to argue his case for additive bilingualism. As noted in my earlier post Multilingualism, Translanguaging and Baloney, Cummins denies García and Flores’ assertion that his construct of additive bilingualism necessarily entails distinct language systems. In his 2020 paper, Flores wrongly imputes to Cummins the “dichotomous distinction” between “academic language” and “home languages”, where academic language is defined as “a list of empirical linguistic practices that are dichotomous with non-academic language”. Note that Flores also suggests that Cummins’ work perpetuates white supremacy, and that, by extension, all those (scholars and teachers alike) who see additive approaches to bilingualism as legitimate ways of attacking problems encountered by bilingual students are guilty of perpetuating white supremacy. It often seems, particularly from the rantings of some of Flores’ supporters on Twitter, that only tirelessly vigilant promotion of translanguaging (whatever that might entail) is enough to exempt anyone “white” from the accusation of racism.

Throughout his work, Flores implies that practically all language teachers in English-speaking countries (and perhaps further afield) treat “the home language practices of racialized communities” as “inherently deficient”. They are thus complicit in perpetuating white supremacy. Cummins has repeatedly denied Flores’ accusations against him, and I dare say that teachers would similarly regard Flores’ accusations as wrong and unfair, if not offensive. Flores’ article raises the following questions:

  1. Is it fair for Flores to accuse “white” teachers of perpetuating white supremacy by behaving as “white listening/reading subjects” who “frame racialized speakers as deficient”? Is that really what they do?
  2. Are teachers’ extremely varied, nuanced and ongoing efforts to use rather than proscribe their students’ L1s through code-switching, translation and other means best seen as perpetuating white supremacy?
  3. Do teachers’ attempts to “modify” the “language practices” of their students provide convincing evidence of their complicity in perpetuating white supremacy?
  4. What exactly are the differences in terms of pedagogical practice between Flores’ example of a teacher using a predominantly English text containing Spanish words and the suggestions made by Cummins (2017)?
  5. What exactly is the “new listening/reading subject position” that Flores wants teachers to adopt? How does it become “central” to their work?
  6. What changes should they ask their bosses to make in the syllabuses, materials and assessment procedures they work with?
  7. And what are the implications for the rest of us, the majority, who work in countries where English is not the L1? Does Flores even recognise that we are in a context where many of his assumptions don’t apply?

Preaching to the choir

The fact that Flores gives even a rough sketch of translanguaging in action in his 2020 article is in itself worthy of note – anyone who has trudged through the jargon-clogged, obscurantist texts that translanguaging scholars grind out will know that such practical examples are hard to come by. It prompts the question “Who do Flores, and other leading protagonists such as García, Rosa, Li Wei, and Valdés, think they’re talking to?”. My suggestion is that they’re “preaching to the choir” – talking, that is, to a relatively small number of people who share their relativist epistemology, their socio-cultural sociolinguistic stance, and the same muddled, poorly-articulated political views. Just BTW, I have yet to see a good outline of the political views of translanguaging scholars by ANY of them. The case of Li Wei is particularly stark. How does the author of Translanguaging as a Practical Theory of Language reconcile the views supported in that article (e.g., “there’s no such thing as Language”) with his job as the dean and director of a famous institute of education with a reputable applied linguistics department which sells all sorts of courses where languages are studied as if they actually existed? The answer, I suppose, is that few of those who might take offence at Li Wei’s opinions have even the foggiest idea of what he’s talking about.

Conclusion

I think it’s fair to say that translanguaging is, and will remain, irrelevant to all but the most academically inclined among the millions of teachers involved in ELT, because its protagonists, pace the titles of some of their papers, show little interest in practical matters. None of the important things that teachers concern themselves with – the syllabuses, materials, testing and pedagogic procedures of ELT – is addressed in a way that most of them would understand or find useful.

Philip Kerr, in his recent post Multilingualism, linguanomics and lingualism, uses Deborah Cameron’s (2013) description of discourses of ‘verbal hygiene’ to describe the work of the translanguaging protagonists. Cameron says that these ‘verbal hygiene’ texts are

linked to other preoccupations which are not primarily linguistic, but rather social, political and moral. The logic behind verbal hygiene depends on a tacit, common-sense analogy between the order of language and the larger social order; the rules or norms of language stand in for the rules governing social or moral conduct, and putting language to rights becomes a symbolic way of putting the world to rights (Cameron, 2013: 61).

He adds:

Their professional worlds of the ‘multilingual turn’ in bilingual and immersion education in mostly English-speaking countries hardly intersect at all with my own professional world of EFL teaching in central Europe, where rejection of lingualism is not really an option.

If teachers are to be persuaded to reject lingualism, they’ll need better, clearer arguments than those offered by Flores and the gang.

References

Cummins, J. (2017). Teaching Minoritized Students: Are Additive Approaches Legitimate? Harvard Educational Review, 87, 3, 404-425.

Cummins J., Yee-Fun E.M. (2007) Academic Language. In: Cummins J., Davison C. (eds) International Handbook of English Language Teaching. Springer International Handbooks of Education, vol 15. Springer, Boston, MA.

Flores, N. (2020) From academic language to language architecture: Challenging raciolinguistic ideologies in research and practice. Theory Into Practice, 59(1), 22–31.

Flores, N., & Rosa, J. (2015). Undoing Appropriateness: Raciolinguistic Ideologies and Language Diversity in Education. Harvard Educational Review, 85, 2, 149–171.

Rosa, J., & Flores, N. (2017). Unsettling race and language: Toward a raciolinguistic perspective. Language in Society, 46, 5, 621-647.

Coming Soon

I.V. Dim & F. Offandback (2022) Of Baps and Nannies and Texts that go off overnight. Journal of Pre-School Languaging Studies, 1(1), 1–111.

Abstract

In this article, we offer an existentially-motivated, pluri-dimensional contribution to the on-going interrogation of reactionary expressions of whiteness, framed by colonial practices aimed at the perpetuation of white supremacy through the overdetermination of racialized otherness and deficit languaging policies which seek to misrepresent, muffle, gag, sideline and otherwise distort holistic ethnographic encounters with socially-constructed micro and macro narratives by marginalized communities which function inter alia to decentralize and destabilize whiteness and the misogynistic, reactionary, expressions of harmful views obstructing the free expression of emerging as yet unheard voicings of counter-hegemonic knowledges and lifeways. We adopt a “from the inside out” perspective, trialed and recommended by progressive tailors everywhere, which permits and encourages the framing of a challenge to two specific obstacles to the optimum development of the language and overall underdetermined educational praxis of toddlers attending pre-school educational and wellness centers in Hudson Yards, New York, and Hampstead, London. Using mixed and embedded ethnographic qualitatively authenticated and triangulated methodological procedures, we challenge the utility and ethicality of the use of standardized overdetermined academic language practices to implement the synchronic distribution of macadamia butter baps by plurilingual nannies, many of whom engage with the children in code-switching and other reactionary linguistic practices associated with the discredited practices associated with additive bilingualism.

Decolonialized educational praxis must center non-hegemonic modes of “otherwise thinking” which promote, encourage and legitimize the translanguaging instinct, consonant with multiple semiotic and socio-cultural adjustments which act as multi-sensory conduits guiding children towards a transformation of the present, anticipating reinscribing our human, historical commonality in the act of translanguaging and leading to the metamorphosis of language into a multilingual, multisemiotic, multisensory, and multimodal resource for sense- and meaning-making. Data include 3-D imaged representational olfactory enhanced modellings of the macadamia butter baps and multi-modal rich transcripts and 6th level avatar re-enactments of the nannies’ quasi-spontaneous interventions.

Quick Version of the Review of Li Wei’s (2018) Theory of Language

Li Wei (2018) seeks “to develop Translanguaging as a theory of language”.

Key Principles:

1 The process of theorization involves a perpetual cycle of practice-theory-practice.

2 The criterion for assessing rival theories of the same phenomena is “descriptive adequacy”. The key measures of descriptive adequacy are “richness and depth”.

3 “Accuracy” cannot serve as a criterion for theory assessment because no one description of an actual practice is necessarily more accurate than another.

4 Descriptions involve the observer including “all that has been observed, not just selective segments of the data”.

5 A theory should provide a principled choice between competing interpretations that inform and enhance future practice, and the principles are related to the consequentialities of alternative interpretations.

Section 3 is headed “The Practice”.

Li Wei gives samples of conversations between multilingual speakers. The analysis of the transcripts is perfunctory and provides little support for the assertion that the speakers are not “mixing languages”, but rather using “New Chinglish” (Li 2016a), which includes

ordinary English utterances being re-appropriated with entirely different meanings for communication between Chinese users of English as well as creations of words and expressions that adhere broadly to the morphological rules of English but with Chinese twists and meanings.

His examples are intended to challenge the “myth of a pure form of a language” and to argue that talking about people having different languages must be replaced by an understanding of a more complex interweaving of languages and language varieties, where boundaries between languages and concepts such as native, foreign, indigenous, minority languages are “constantly reassessed and challenged”.

Section 4 is on Translanguaging

Li Wei leans on Becker’s (1991) notion of Languaging, which suggests that there is no such thing as Language, but rather only “continual languaging, an activity of human beings in the world” (p. 34), and on ‘ecological psychology’, which challenges ‘the code view’ of language, and sees language as ‘a multi-scalar organization of processes that enables the bodily and the situated to interact with situation-transcending cultural-historical dynamics and practices’ (Thibault 2017: 78). Language learning should be viewed not as acquiring language, but rather as a process where novices “adapt their bodies and brains to the languaging activity that surrounds them”. Li Wei concludes: “For me, language learning is a process of embodied participation and resemiotization.”

Li Wei makes two further arguments:

1) Multilinguals do not think unilingually in a politically named linguistic entity, even when they are in a ‘monolingual mode’ and producing one namable language only for a specific stretch of speech or text.

2) Human beings think beyond language and thinking requires the use of a variety of cognitive, semiotic, and modal resources of which language in its conventional sense of speech and writing is only one.

The first point refers to Fodor’s (1975) seminal work The Language of Thought. Li Wei offers no summary of Fodor’s “Language of Thought” hypothesis and no discussion of it, so the reader might not know that this language of thought is usually referred to as “Mentalese”, and that Fodor describes it in very technical terms precisely so as to distinguish it from named languages.

Li Wei states: “there seems to be a confusion between the hypothesis that thinking takes place in a Language of Thought (Fodor 1975) — in other words, thought possesses a language-like or compositional structure — and that we think in the named language we speak. The latter seems more intuitive and commonsensical”. Yes, it does, but why exactly this is a problem (which it is!), and how Fodor’s Language of Thought hypothesis solves it (which many say it doesn’t), is not clearly explained.

As for the second argument, this concerns “the question of what is going on when bilingual and multilingual language users are engaged in multilingual conversations”. Li Wei finds it hard to imagine that they shift their frame of mind so frequently in one conversational episode let alone one utterance. He claims that we do not think in a specific, named language separately, and cites Fodor (1983) to resolve the problem. Li Wei misinterprets Fodor’s view of the modularity of mind. Pace Li Wei, Fodor does not claim that the human mind consists of a series of modules which are “encapsulated with distinctive information and for distinct functions”, and that “Language” is one of these modules. Gregg points out (see comment in unabridged version) that Fodor vigorously opposed the view that the mind is made up of modules; he spent a good deal of time arguing against that idea (see e.g. his “The Mind Doesn’t Work That Way”), the so-called Massive Modularity hypothesis. For Fodor, the mind contains modules, which is very different from the view Li Wei quite wrongly ascribes to him.

Li Wei goes on to say that Fodor’s hypothesis “has somehow been understood to mean” that “the language and other human cognitive processes are anatomically and/or functionally distinct”. Again, Fodor said no such thing. Li Wei cites no researcher who “somehow came to understand” Fodor’s argument about the modular mind in this erroneous way, and he does not acknowledge that Fodor made no such claim. He simply asserts that in research design, “the so-called linguistic and non-linguistic cognitive processes” have been assessed separately, then triumphantly dismantles this obviously erroneous assertion and claims it as evidence for the usefulness of his theory.

Section 5: Translanguaging Space and Translanguaging Instinct

This section contains inspirational sketches which add nothing to the theory of language.

Translanguaging Space

Li Wei suggests that the act of Translanguaging creates a social space for the language user “by bringing together different dimensions of their personal history, experience, and environment; their attitude, belief, and ideology; their cognitive and physical capacity, into one coordinated and meaningful performance” (Li 2011a: 1223). This Translanguaging Space has transformative power because “it is forever evolving and combines and generates new identities, values and practices; … by underscoring learners’ abilities to push and break boundaries between named language and between language varieties, and to flout norms of behaviour including linguistic behaviour, and criticality” (Li 2011a, b; Li and Zhu 2013).

As an example of the practical implications of Translanguaging Space, Li Wei cites García and Li’s (2014) vision “where teachers and students can go between and beyond socially constructed language and educational systems, structures and practices to engage diverse multiple meaning-making systems and subjectivities, to generate new configurations of language and education practices, and to challenge and transform old understandings and structures”.

Translanguaging Instinct

Li Wei’s construct of a Translanguaging Instinct (Li 2016b) draws on arguments for an ‘Interactional Instinct’, a biologically based drive for infants and children to attach, bond, and affiliate with conspecifics in an attempt to become like them (Lee et al. 2009; Joaquin and Schumann 2013).

“This natural drive provides neural structures that entrain children acquiring their languages to the faces, voices, and body movements of caregivers. It also determines the relative success of older adolescents and adults in learning additional languages later in life due to the variability of individual aptitude and motivation as well as environmental conditions”.

Li Wei extends this idea in what he calls a Translanguaging Instinct (Li 2016b) “to emphasize the salience of mediated interaction in everyday life in the 21st century, the multisensory and multimodal process of language learning and language use”. The Translanguaging Instinct drives humans to go beyond narrowly defined linguistic cues and transcend culturally defined language boundaries to achieve effective communication. Li Wei suggests that, pace the Minimalist programme (sic!), a “Principle of Abundance” is in operation in human communication: human beings draw on many different sensory, modal, cognitive, and semiotic resources to interpret meaning intentions, and they read these multiple cues in a coordinated manner rather than singularly.

Li Wei’s discussion of the implications of the idea of the Translanguaging Instinct consists of uncontroversial statements about language learning which have nothing relevant to add to the theory.

Discussion

So what is the Translanguaging theory of language? Despite endorsing the view that there is no such thing as language, and that the divides between the linguistic, the paralinguistic, and the extralinguistic dimensions of human communication are nonsensical, the theory amounts to the claim that language is a multilingual, multisemiotic, multisensory, and multimodal resource for sense- and meaning-making.

The appendages about Translanguaging Space and a Translanguaging Instinct have little to do with a theory of language. The first is a blown-up recommendation for promoting language learning outside the classroom, and the second is a claim about language learning itself, to the effect that an innate instinct drives humans to go beyond narrowly defined linguistic cues and transcend culturally defined language boundaries to achieve effective communication. Stripped of its academic obscurantism and the wholly unsatisfactory discussion of Fodor’s Language of Thought and his work on the modularity of mind, both bits of fluff strike me as being as inoffensive as they are unoriginal.

Theories

What is a theory? I’ve dealt with this in Jordan (2004) and also in many posts. A theory is generally regarded as being an attempt to explain phenomena. Researchers working on a theory use observational data to support and test it.

Li Wei adopts the following strategy:

1. Skip the tiresome step of offering a coherent definition of the key theoretical construct and content yourself with the repeated vague assertion that language is “a resource for sense- and meaning-making”,

2. Rely on the accepted way of talking about parts of language by those you accuse of reducing language to a code,

3. Focus on attacking the political naming of languages, re-hashing obviously erroneous views about L1s, L2s, etc. and developing the view that language is a multilingual, multisemiotic, multisensory, and multimodal resource.

He thus abandons any serious attempt at theory construction, resorting instead to a string of assertions dressed up in academic clothes and calling it a “theory of practice”. Even then, Li Wei doesn’t actually say what he takes a theory of practice to be. He equates theory construction with “knowledge construction”, without saying what he means by “knowledge”. Popper (1972) adopts a realist epistemology and explains what he means by “objective knowledge”. In contrast, Li Wei adopts a relativist epistemology, where objective knowledge is jettisoned and “descriptive adequacy” replaces it, to be measured by “richness and depth”, which are nowhere defined.

How do we measure the richness and depth of competing “descriptions”? Is Li Wei seriously suggesting that different subjective accounts of the observations of language practice by different observers are best assessed by undefined notions of richness and depth?

The poverty of Li Wei’s criteria for assessing a “practical theory” is compounded by his absurd claim that researchers who act as observers must describe “all that has been observed, not just selective segments of the data”. “All that has been observed”? Really?   

Finally, the good bits. I applaud Li Wei’s attempt, bad as I judge it to be, to bridge the gap between psycholinguistic and sociolinguistic work on SLA. And, as I’ve already said in my post Multilingualism, Translanguaging and Theories of SLA, there are things we can agree on. ELT practice should recognise that teaching is informed by the monolingual fallacy, the native speaker fallacy and the subtractive fallacy (Phillipson, 2018). The ways in which English is privileged in education systems are a disgrace, and policies that strengthen linguistic diversity are needed to counteract linguistic imperialism. Translanguaging is to be supported inasmuch as it affirms bilinguals’ fluent languaging practices and aims to legitimise hybrid language uses. ELT must generate translanguaging spaces where practices which explore the full range of users’ repertoires in creative and transformative ways are encouraged.

References

Cook, V. J. (1993). Linguistics and Second Language Acquisition. Macmillan.

Ellis, N. (2002). Frequency effects in language processing: A review with implications for theories of implicit and explicit language acquisition. Studies in Second Language Acquisition, 24, 2, 143-188.

Gregg, K. R. (1993). Taking explanation seriously; or, let a couple of flowers bloom. Applied Linguistics, 14, 3, 276-294.

Gregg, K. R. (2004). Explanatory adequacy and theories of second language acquisition. Applied Linguistics, 25, 4, 538-542.

Jordan, G. (2004). Theory Construction in SLA. Benjamins.

Li Wei (2018). Translanguaging as a practical theory of language. Applied Linguistics, 39, 1, 9-30.

Phillipson, R. (2018). Linguistic Imperialism. Downloadable from https://www.researchgate.net/publication/31837620_Linguistic_Imperialism_

Popper, K. R. (1972). Objective Knowledge. Oxford University Press.

Schmidt, R., & Frota, S. (1986). Developing basic conversational ability in a second language: A case study of an adult learner. In R. Day (Ed.), Talking to learn: Conversation in second language acquisition (pp. 237-369). Rowley, MA: Newbury House.

See Li Wei (2018) for the other references.

Li Wei (2018) Translanguaging as a Practical Theory of Language

In his 2018 article, Li Wei seeks “to develop Translanguaging as a theory of language”. Along the way, he highlights the contributions that Translanguaging makes to debates about the “Language and Thought” and the “Modularity of Mind” hypotheses and tries to bridge “the artificial and ideological divides between the so-called sociocultural and the cognitive approaches to Translanguaging practices.”

Section 2

After the Introduction, Section 2 outlines the principles which guide his “practical theory of language for Applied Linguistics”. They’re based on Mao’s interpretation of Confucius and Marx’s dialectical materialism (sic). Here are the main points, with short comments:

1 The process of theorization involves a perpetual cycle of practice-theory-practice.

Amen to that.

2 The criterion for assessing rival theories of the same phenomena is “descriptive adequacy”.  The key measures of descriptive adequacy are “richness and depth“.

No definitions of the constructs “richness” or “depth” are offered, no indication is given of how they might be operationalized, and no explanation of this assertion is given.

3 “Accuracy” cannot serve as a criterion for theory assessment: “no one description of an actual practice is necessarily more accurate than another because description is the observer–analyst’s subjective understanding and interpretation of the practice or phenomenon that they are observing“.

No definition is given of the term “accuracy” and no discussion is offered of how theoretical constructs used in practical theory (such as “languaging”, “resemiotization” and “body dynamics”) can be operationalized.

4 Descriptions involve the observer including “all that has been observed, not just selective segments of the data”.

No explanation of how an observer can describe “all that has been observed” is offered. 

5 The main objective of a practical theory is not to offer predictions or solutions but interpretations that can be used to observe, interpret, and understand other practices and phenomena.

No justification for this bizarre assertion is offered.  

6 Questions are formulated on the basis of the description and as part of the observer–analyst’s interpretation process. Since interpretation is experiential and understanding is dialogic, these questions are therefore ideologically and experientially sensitive.

No explanation of what this means is offered.

7 A theory should provide a principled choice between competing interpretations that inform and enhance future practice, and the principles are related to the consequentialities of alternative interpretations.

No explanation of exactly what “principles” are involved is offered, and no indicators for measuring “consequentialities” are mentioned.

8 An important assessment of the value of a practical theory is the extent to which it can ask new and different questions on both the practice under investigation and other existing theories about the practice.

Yes indeed.

Section 3 is headed “The Practice”.

 Li Wei explains that he’s primarily concerned with the language practices of multilingual language users, and goes on to give samples of conversations between multilingual speakers. The analysis of the transcripts is perfunctory and provides little support for the assertion that the speakers are not “mixing languages”, but rather using “New Chinglish” (Li 2016a), which includes

ordinary English utterances being re-appropriated with entirely different meanings for communication between Chinese users of English as well as creations of words and expressions that adhere broadly to the morphological rules of English but with Chinese twists and meanings.

His examples are intended to challenge the “myth of a pure form of a language” and to argue that talking about people having different languages must be replaced by an understanding of a more complex interweaving of languages and language varieties, where boundaries between languages and concepts such as native, foreign, indigenous, minority languages are “constantly reassessed and challenged”.

Section 4 is on Translanguaging

Li Wei starts from Becker’s (1991) notion of Languaging, which suggests that there is no such thing as Language, but rather only “continual languaging, an activity of human beings in the world” (p. 34). Language should not be regarded ‘as an accomplished fact, but as in the process of being made’ (p. 242). Li Wei also refers to work from ‘ecological psychology’, which sees languaging as ‘an assemblage of diverse material, biological, semiotic and cognitive properties and capacities which languaging agents orchestrate in real-time and across a diversity of timescales’ (Thibault 2017: 82). Such work challenges ‘the code view’ of language, urges us to ‘grant languaging a primacy over what is languaged’, and to see language as ‘a multi-scalar organization of processes that enables the bodily and the situated to interact with situation-transcending cultural-historical dynamics and practices’ (Thibault 2017: 78). The divides between the linguistic, the paralinguistic, and the extralinguistic dimensions of human communication are thus “nonsensical”. So language learning should be viewed not as acquiring language, but rather as a process where novices ‘adapt their bodies and brains to the languaging activity that surrounds them’, and in doing so, ‘participate in cultural worlds and learn that they can get things done with others in accordance with the culturally promoted norms and values’ (Thibault 2017: 76). Li Wei concludes: “For me, language learning is a process of embodied participation and resemiotization (see also McDermott and Roth 1978; McDermott et al. 1978; Dore and McDermott 1982; and Gallagher and Zahavi 2012)”.

Next, Li Wei explains that he added the Trans prefix to Languaging in order to not only have a term that captures multilingual language users’ fluid and dynamic practices, but also to put forward two further arguments:

1) Multilinguals do not think unilingually in a politically named linguistic entity, even when they are in a ‘monolingual mode’ and producing one namable language only for a specific stretch of speech or text.

2) Human beings think beyond language and thinking requires the use of a variety of cognitive, semiotic, and modal resources of which language in its conventional sense of speech and writing is only one.

The first point refers to Fodor’s (1975) seminal work The Language of Thought. Li Wei offers no summary of Fodor’s “Language of Thought” hypothesis and no discussion of it. So the reader might not know that this language of thought is usually referred to as “Mentalese”, and that very technical but animated discussions about whether or not Mentalese exists, and if it does, how it works, have been going on for the last 40+ years among philosophers, cognitive scientists and linguists. Without any proper introduction, Li Wei simply states: “there seems to be a confusion between the hypothesis that thinking takes place in a Language of Thought (Fodor 1975) — in other words, thought possesses a language-like or compositional structure — and that we think in the named language we speak. The latter seems more intuitive and commonsensical”. In my opinion, he doesn’t make it clear why the latter view causes a problem, why, that is, “it cannot address the question of how bilingual and multilingual language users think without referencing notions of the L1, ‘native’ or ‘dominant’ language”, and he doesn’t clearly explain how Fodor’s Language of Thought hypothesis solves the problem. All he says is:

“If we followed the argument that we think in the language we speak, then we think in our own idiolect, not a named language. But the language-of-thought must be independent of these idiolects, and that is the point of Fodor’s theory. We do not think in Arabic, Chinese, English, Russian, or Spanish; we think beyond the artificial boundaries of named languages in the language-of-thought”.

I fail to see how this cursory discussion does anything to support the claim that Translanguaging Theory makes any worthwhile contribution to the debate that has followed Fodor’s Language of thought hypothesis.

As for the second argument, this concerns “the question of what is going on when bilingual and multilingual language users are engaged in multilingual conversations”. Li Wei finds it hard to imagine that they shift their frame of mind so frequently in one conversational episode let alone one utterance. He claims that we do not think in a specific, named language separately, and cites Fodor (1983) to resolve the problem. Li Wei reports Fodor’s Modularity of Mind hypothesis as claiming that the human mind consists of a series of modules which are “encapsulated with distinctive information and for distinct functions”. Language is one of these modules. As Gregg has pointed out to me (see the comment below) “Fodor did not think that the mind is made up of modules; he spent a good deal of time arguing against that idea (see e.g. his “The Mind Doesn’t Work That Way”), the so-called Massive Modularity hypothesis. For Fodor, the mind contains modules; big difference” (my emphases). Worse, Li Wei says that Fodor’s hypothesis “has somehow been understood to mean” something that, in fact, Fodor did not say or imply, namely that “the language and other human cognitive processes are anatomically and/or functionally distinct”. Li Wei does not cite any researcher who somehow came to understand Fodor’s argument about modular mind in that way, but simply asserts that in research design, “the so-called linguistic and non-linguistic cognitive processes” have been assessed separately. He goes on to triumphantly dismantle this obviously erroneous assertion and to claim it as evidence for the usefulness of his theory.

Section 5: Translanguaging Space and Translanguaging Instinct

“The act of Translanguaging creates a social space for the language user by bringing together different dimensions of their personal history, experience, and environment; their attitude, belief, and ideology; their cognitive and physical capacity, into one coordinated and meaningful performance” (Li 2011a: 1223). This Translanguaging Space has transformative power because “it is forever evolving and combines and generates new identities, values and practices”. It underscores multilinguals’ creativity, “their abilities to push and break boundaries between named language and between language varieties, and to flout norms of behaviour including linguistic behaviour, and criticality — the ability to use evidence to question, problematize, and articulate views” (Li 2011a, b; Li and Zhu 2013).

A Translanguaging Space shares elements of the vision of Thirdspace articulated by Soja (1996) as “a space of extraordinary openness, a place of critical exchange where the geographical imagination can be expanded to encompass a multiplicity of perspectives that have heretofore been considered by the epistemological referees to be incompatible and uncombinable”. Soja proposes that it is possible to generate new knowledge and discourses in a Thirdspace. A Translanguaging Space acts as a Thirdspace which does not merely encompass a mixture or hybridity of first and second languages; instead it invigorates languaging with new possibilities from ‘a site of creativity and power’, as bell hooks (1990: 152) says. “Going beyond language refers to transforming the present, to intervening by reinscribing our human, historical commonality in the act of Translanguaging” (Li Wei, 2018, p. 24).

As an example of the practical implications of Translanguaging Space, Li Wei cites García and Li’s (2014) vision “where teachers and students can go between and beyond socially constructed language and educational systems, structures and practices to engage diverse multiple meaning-making systems and subjectivities, to generate new configurations of language and education practices, and to challenge and transform old understandings and structures”. Stirring stuff.

Li Wei’s construct of a Translanguaging Instinct (Li 2016b) draws on the arguments for an ‘Interactional Instinct’, a biologically based drive for infants and children to attach, bond, and affiliate with conspecifics in an attempt to become like them (Lee et al. 2009; Joaquin and Schumann 2013).

“This natural drive provides neural structures that entrain children acquiring their languages to the faces, voices, and body movements of caregivers. It also determines the relative success of older adolescents and adults in learning additional languages later in life due to the variability of individual aptitude and motivation as well as environmental conditions”.

Li Wei extends this idea in what he calls a Translanguaging Instinct (Li 2016b) “to emphasize the salience of mediated interaction in everyday life in the 21st century, the multisensory and multimodal process of language learning and language use”. The Translanguaging Instinct drives humans to go beyond narrowly defined linguistic cues and transcend culturally defined language boundaries to achieve effective communication. Li Wei suggests that, pace the Minimalist programme (sic!), a “Principle of Abundance” is in operation in human communication: human beings draw on many different sensory, modal, cognitive, and semiotic resources to interpret meaning intentions, and they read these multiple cues in a coordinated manner rather than singularly.

In the meantime, the Translanguaging Instinct highlights the gaps between meaning, what is connected to forms of the language and other signs, and message, what is actually inferred by hearers and readers, and leaves open spaces for all the other cognitive and semiotic systems that interact with linguistic semiosis to come into play (Li Wei, 2018, p. 26).

Li Wei’s discussion of the implications of the idea of the Translanguaging Instinct might have been written by an MA student of psycholinguistics. Below is a summary, mostly consisting of quotes.  

Human beings “rely on different resources differentially during their lives. In first language acquisition, infants naturally draw meaning from a combination of sound, image, and action, and the sound–meaning mapping in word learning crucially involves image and action. The resources needed for literacy acquisition are called upon later”.

“In bilingual first language acquisition, the child additionally learns to associate the target word with a specific context or addressee as well as contexts and addressees where either language is acceptable, giving rise to the possibility of code- switching”.

“In second language acquisition in adolescence and adulthood, some resources become less available, for example resources required for tonal discrimination, while others can be enhanced by experience and become more salient in language learning and use, for example resources required for analysing and comparing syntactic structures and pragmatic functions of specific expressions. As people become more involved in complex communicative tasks and demanding environments, the natural tendency to combine multiple resources drives them to look for more cues and exploit different resources. They will also learn to use different resources for different purposes, resulting in functional differentiation of different linguistic resources (e.g. accent, writing) and between linguistic and other cognitive and semiotic resources. Crucially, the innate capacity to exploit multiple resources will not be diminished over time; in fact it is enhanced with experience. Critical analytic skills are developed in terms of understanding the relationship between the parts (specific sets of skills, such as counting; drawing; singing) and the whole (multi-competence (Cook 1992; Cook and Li 2016) and the capacity for coordination between the skills subsets) to functionally differentiate the different resources required for different tasks“.

“One consequence of the Translanguaging perspective on bilingualism and multilingualism research is making the comparison between L1 and L2 acquisition purely in terms of attainment insignificant. Instead, questions should be asked as to what resources are needed, available, and being exploited for specific learning task throughout the lifespan and life course? Why are some resources not available at certain times? What do language users do when some resources become difficult to access? How do language users combine the available resources differentially for specific tasks? In seeking answers to these questions, the multisensory, multimodal, and multilingual nature of human learning and interaction is at the centre of the Translanguaging Instinct idea” (Li Wei, 2018, pp. 24-25).

There’s hardly anything I disagree with in all this, apart from the dubious, forced connection made between all this elementary stuff and the “Translanguaging perspective”.

Discussion

So what is the Translanguaging theory of language? Despite endorsing the view that there is no such thing as language, and that the divides between the linguistic, the paralinguistic, and the extralinguistic dimensions of human communication are nonsensical, the theory amounts to the claim that language is a multilingual, multisemiotic, multisensory, and multimodal resource for sense- and meaning-making.

The appendages about Translanguaging Space and a Translanguaging Instinct have little to do with a theory of language. The first is a blown-up recommendation for promoting language learning outside the classroom, and the second is a claim about language learning itself, to the effect that an innate instinct drives humans to go beyond narrowly defined linguistic cues and transcend culturally defined language boundaries to achieve effective communication. Stripped of its academic obscurantism and the wholly unsatisfactory discussion of Fodor’s Language of Thought and his work on the modularity of mind, both bits of fluff strike me as being as inoffensive as they are unoriginal.

Theories

What is a theory? I’ve dealt with this in Jordan (2004) and also in many posts. A theory is generally regarded as being an attempt to explain phenomena. Researchers working on a theory use observational data to support and test it. Furthermore, it’s generally recognised that, pace Li Wei, we can’t just observe the world: all observation is “theory-laden”; as Popper (1972) puts it, there’s no way we can talk about something sensed and not interpreted. Even in everyday life we don’t – can’t – just “observe”, and those committed to a scientific approach to language learning recognize that researchers observe guided by a problem they want to solve: research is fundamentally concerned with problem-solving, and it benefits from a clear focus in a well-defined domain. Here’s an example of how this applies to theories of language:  

Chomskian theory claims that, strictly speaking, the mind does not know languages but grammars; ‘the notion “language” itself is derivative and relatively unimportant’ (Chomsky, 1980, p. 126).  “The English Language” or “the French Language” means language as a social phenomenon – a collection of utterances.  What the individual mind knows is not a language in this sense, but a grammar with the parameters set to particular values. Language is another epiphenomenon: the psychological reality is the grammar that a speaker knows, not a language (Cook, 1994: 480).

And here’s Gregg (1996)

… “language” does not refer to a natural kind, and hence does not constitute an object for scientific investigation. The scientific study of language or language acquisition requires the narrowing down of the domain of investigation, a carving of nature at its joints, as Plato put it. From such a perspective, modularity makes eminent sense (Gregg, 1996, p. 1).

Both Chomsky and Gregg see the need to narrow the domain of any chosen investigation in order to study it more carefully. So they want to go beyond the common-sense view of language as a way of expressing one’s thoughts and feelings (not, most agree, following Fodor, to be confused with thinking itself) and of communicating with others, to a careful description of its core parts and then to an explanation of how we learn them. Now you might disagree, in several ways. You might reject Chomsky’s theory and prefer, for example, Nick Ellis’ usage-based theory (see, for example, Ellis, 2002), which embraces the idea of language as a socially constructed epiphenomenon, and claims that it’s learned through social engagement where all sorts of inputs from the environment are processed in the mind by very general learning mechanisms, such as the power law of practice. But Ellis recognises the need to provide some description of what’s learned, and I defy most readers to make sense of Ellis’ ongoing efforts to describe a “construction grammar”. Or you might take a more bottom-up research stance and decide to just feel your way: observe some particular behaviour, turn over and develop ideas, and move slowly up to a generalization. But even then, you need SOME idea of what you’re looking for. Gregg (1993) gives a typically eloquent discussion of the futility of attempts to base research on “observation”.

Or you might, like Li Wei, adopt the following strategy:

1. Skip the tiresome step of offering a coherent definition of the key theoretical construct and content yourself with the repeated vague assertion that language is “a resource for sense- and meaning-making”;

2. Rely on the accepted way of talking about parts of language used by those you accuse of reducing language to a code;

3. Focus on attacking the political naming of languages, re-hashing obviously erroneous views about L1s, L2s, etc., and developing the view that language is a multilingual, multisemiotic, multisensory, and multimodal resource.

 If so, you abandon any serious attempt at theory construction, resort to a string of assertions dressed up in academic clothes and call it a “theory of practice”. Even then, Li Wei doesn’t actually say what he takes a theory of practice to be. He equates theory construction with “knowledge construction”, without saying what he means by “knowledge”. Popper (1972) adopts a realist epistemology and explains what he means by “objective knowledge” (accepting that all observation is theory-laden). In contrast, we have to infer what Li Wei means by knowledge through the reason he gives for dismissing “accuracy” as a criterion for theory assessment, viz., as already quoted above, “no one description of an actual practice is necessarily more accurate than another because description is the observer–analyst’s subjective understanding and interpretation of the practice or phenomenon that they are observing”. This amounts to a relativist epistemology where objective knowledge is jettisoned and “descriptive adequacy” replaces it, to be measured by “richness and depth”, which are nowhere defined.

How do we measure the richness and depth of competing “descriptions”? For example, we have (1) Li Wei’s descriptions of conversational exchanges among his research participants, and (2) Schmidt and Frota’s (1986) description of an adult learner of Portuguese. The two descriptions of the learners’ utterances serve different purposes – they don’t amount to competing arguments – but how do we assess the descriptions and the analyses? How about: “I prefer (2) because the description of the weather outside was richer”? Are these two “descriptions” not better assessed by criteria such as their coherence and their success in supporting the hypothesis that informs their observations? Schmidt and Frota are addressing a problem about what separates input from intake (the hypothesis being that “noticing” is required), while Li Wei is addressing the problem of how we interpret code-switching, and his hypothesis is that it’s not a matter of calling on separately stored knowledge about two rigidly different named languages. Is Li Wei seriously suggesting that the different subjective accounts of language practice produced by different observers are best assessed by undefined notions of richness and depth?

The poverty of Li Wei’s criteria for assessing a “practical theory” is compounded by his absurd claim that researchers who act as observers must describe “all that has been observed, not just selective segments of the data”. “All that has been observed”? Really?   

But wait a minute! There’s another criterion! “A theory should provide a principled choice between competing interpretations that inform and enhance future practice, and the principles are related to the consequentialities of alternative interpretations”. As noted, we’re not told what the “principles” are, and no indicators for measuring “consequentialities” are mentioned. Still, it’s more promising than the other criteria. And, of course, it’s taken from a well-respected criterion used by scientists anchored in a realist epistemology: ceteris paribus, the more a theory leads to the practical solution of problems, the better it is.

Finally, the good bits. I applaud Li Wei’s attempt, bad as I judge it to be, to bridge the gap between psycholinguistic and sociolinguistic work on SLA. And, as I’ve already said in my post Multilingualism, Translanguaging and Theories of SLA, there are things we can agree on. ELT practice should recognise that teaching is informed by the monolingual fallacy, the native speaker fallacy and the subtractive fallacy (Phillipson, 2018). The ways in which English is privileged in education systems are a disgrace, and policies that strengthen linguistic diversity are needed to counteract linguistic imperialism. Translanguaging is to be supported inasmuch as it affirms bilinguals’ fluent languaging practices and aims to legitimise hybrid language uses. ELT must generate translanguaging spaces where practices which explore the full range of users’ repertoires in creative and transformative ways are encouraged.

References

Cook, V. J. (1993). Linguistics and Second Language Acquisition. Macmillan.

Ellis, N. (2002). Frequency effects in language processing: A review with implications for theories of implicit and explicit language acquisition. Studies in Second Language Acquisition, 24(2), 143-188.

Gregg, K. R. (1993). Taking explanation seriously; or, let a couple of flowers bloom. Applied Linguistics, 14(3), 276-294.

Gregg, K. R. (2004). Explanatory adequacy and theories of second language acquisition. Applied Linguistics, 25(4), 538-542.

Jordan, G. (2004). Theory Construction in SLA. Benjamins.

Li Wei (2018). Translanguaging as a practical theory of language. Applied Linguistics, 39(1), 9-30.

Phillipson, R. (2018). Linguistic Imperialism. Downloadable from https://www.researchgate.net/publication/31837620_Linguistic_Imperialism_

Popper, K. R. (1972). Objective Knowledge. Oxford University Press.

Schmidt, R., & Frota, S. (1986). Developing basic conversational ability in a second language: A case study of an adult learner. In R. Day (Ed.), Talking to Learn: Conversation in Second Language Acquisition (pp. 237-369). Rowley, MA: Newbury House.

See Li Wei (2018) for the other references.

Li Wei on Translanguaging

As translanguaging continues to attract attention, here’s a quick review of a recent contribution to the field by Prof. Li Wei. (Note that I’ve done two recent posts on translanguaging: Multilingualism, Translanguaging and Theories of SLA; and Multilingualism, Translanguaging and Baloney. The first one gives a quick description of the construct.)

Li Wei’s (2021) article “Translanguaging as a political stance: implications for English language education” makes several claims, the most undisputed being that the naming of languages is a political act. Yes it is; and so is language teaching, and indeed all teaching – see, for example, off the top of my head, Piaget, Vygotsky, A. S. Neill, Dewey, Steiner, Marx, Freire, Illich, Gramsci, Goodman, Crookes, Long, … add your own favourites.

So what does Li Wei offer here as the best “political stance” for ELT? He offers translanguaging, which he’s already discussed in a series of published works (see, for example, Li Wei, 2018 and García et al., 2021). Why is it the best? Because it sees language as a fluid, embodied social construct, whatever that means. What does it offer in terms of new, innovative, practical implications for English language education? Nothing. Absolutely nothing.

The main points of Li Wei’s 2021 ELTJ article are these:  

1. “Named languages are political constructs and historico-ideological products of the nation-state boundaries”.

Comment: They most certainly are.

2. “Named languages have no neuropsychological correspondence…. human beings have a natural instinct to go beyond narrowly defined linguistic resources in meaning- and sense-making, as well as an ability, acquired through socialization and social participation, to manipulate the symbolic values of the named languages such as identity positioning” (Li 2018).

Comment: Typical academic jargon makes up a claim vague enough to have little force. The author’s highly disputable claims elsewhere have more force, for example, his (2018) assertion that the divides between the linguistic, the paralinguistic, and the extralinguistic dimensions of human communication are “nonsensical”.    

3. We should shift from “a fixation on language as an abstractable coded system” to attention to the language user.

Comment: Amen to that, except that Li Wei elsewhere defines language and comments on constructs such as negative transfer, errors and much else besides in ways that I find preposterous.

4. ELT should embrace the “active use of multiple languages and other meaning-making resources in a dynamic and integrated way”. Furthermore, “the languages the learners already have should and can play a very positive role in learning additional languages”.

Comment: Only the most reactionary among us could disagree! The problem is that scant attention is given to how this might affect teaching practice. Nothing in this article, supposedly aimed at teachers, says anything useful to teachers. Despite its title, the article ends with a short section on “English medium education: practical challenges” where aspirational, academically expressed bullshit is all that’s on offer.

How do teachers actually change their practice? Apart from the exhortations Li Wei makes for teachers to see different languages as less fully-separated than they might suppose; to see students’ previously learned languages as assets; to  resist any urge to correct errors too quickly; and to generally encourage a multilingualist environment (all of it useful advice), he says nothing about the implications, I mean the real classroom day-to-day implications, of all this theoretical posturing. In terms of the syllabuses, materials, testing and pedagogic procedures of ELT, how is the theory of translingualism to be put into practice? Don’t expect answers from Prof. Li Wei.   

On Twitter, I asked Li Wei, who had tweeted to advertise his ELTJ article, what this bit in his article meant:

“To regard certain ways of expressing one’s thought as errors and attribute them to negative transfers from the L1 is to create a strawman for raciolinguistic ideologies”.

He didn’t answer the question.

I then asked:

A Spanish L1 student of mine writes “Freud’s Patient X dreamed with his visit to the Altes Museum”.  If I attribute this error to negative transfer from the L1, how does it create a strawman for raciolinguistic ideologies?

He replied:

Raciolinguistic ideology would ‘expect’ L2 users to produce ‘deviations’ from ‘standard’ language when such ‘deviations’ are in fact ideolects which by definition are personal and sensitive to individual’s socialisation trajectory including language learning trajectory.

He suggests that I’m trapped in a raciolinguistic ideology, where idiolects get misinterpreted as “deviations” because of an allegiance to a racism-drenched standard English.

In Spanish, they say

“Soñaba con … ” (I dreamed* with …),

 while in English we say

“I dreamed about …” or “I dreamed of  …”.  

(*”dreamt” is often used)

I think “I dreamed with you last night” is lovely. But it’s an error – it’s “marked”, as we say. “I dreamed with a visit to the Altes Museum” could be confusing to the reader / listener. How should teachers respond to such errors? They might well decide to let it go, but they might decide to do a recast, or to talk about the difference. What does Li Wei suggest teachers do? Well, they certainly shouldn’t pounce on it and make the student feel bad – we don’t need his theory to tell us that. Maybe his theory suggests that teachers should celebrate this particular error, talk about it, discuss other examples. What about other errors, such as “I have twenty years” (I’m twenty) or “He goed to the library” (He went to the bookshop) and millions more? I asked Li Wei on Twitter how he would advise teachers to deal with the “dreamed with …” error and he didn’t reply. I suggest that his reluctance stemmed from the fact that he’s trapped in his own daft “theory”, which doesn’t want to recognise the “errors” that students of English as an L2 make. The theory doesn’t like the use of the word “errors”, and it doesn’t like the construct of negative transfer, either. Yet errors play a key role in interlanguage development, and negative transfer has been observed millions of times by researchers and teachers: it’s a fact which can’t – or at the very least shouldn’t – be proscribed because it offends the dictates of a half-baked theory.

Here’s a text that I’ve invented, which might have come from an overseas student doing an MA in Applied Linguistics.

Second language is foreign language or additional language and is learned in addition to first language. There is multiple uses of L2 for example tourism, business, study and other purposes (Jones and Smith, 2018). Acquire fluency in second language learning (SLL) can prove difficulty because of considerations of age, culture clash and other environmental factors. One example prevailing theory is critical period hypothesis (DeKeyser, 2000) which says young children have imprtant advantages over adults to acquire a L2.

How would a teacher versed in the theory of Translanguaging deal with this text? How would it differ from the treatment given by a teacher who is unaware of this theory? My point is simple: Translanguaging theory is yet to give any significant guidance to teachers’ practice. Why? Because its proponents are focused on theoretical concerns, particularly the promotion of a relativist epistemology and a peculiar way of observing phenomena through a socio-cultural lens.

Philip Kerr, who I’m sure would be anxious to insist that my views and his don’t coincide, comments in his recent post on translanguaging about the poverty of its practical results.

  • Jason Anderson’s (2021) Ideas for translanguaging offers “nothing that you might not have found twenty or more years ago (e.g. in Duff, 1989; or Deller & Rinvolucri, 2002)”;
  • Rabbidge’s (2019) book, Translanguaging in EFL Contexts, differs little from earlier works which suggest incorporating the L1;
  • The Translanguaging Classroom by García and colleagues (2017) offers “little if anything new by way of practical ideas”.

Those teachers who manage to make sense of Li Wei’s ELTJ article will be left without any idea about what its practical consequences are.   

Finally, a political comment of my own. There’s some brief stuff in the article about ELT as a commercially driven, capitalist industry, all of which has been far more carefully and interestingly discussed elsewhere. I wonder if Prof. Li Wei will ever give his full attention to the coursebook-driven world of ELT, to the producers of the CEFR, the high stakes exams like the IELTS, or the Second Language Teacher Education racket. There’s nothing new in Li Wei’s 2021 article. It’s a carefully confected, warmed-over, one-more-article-under-the-belt job. In my next post I’ll examine the more substantial 2018 article, “Translanguaging as a Practical Theory of Language”, where his discussion of multilingual students’ dialogues first appeared.

References can be found in Li Wei (2021), which is free to download – click the link at the start of this post.

The Enigma of the Interface

Trying to understand the process of SLA, some scholars have concentrated on the psychological aspects of learning. What goes on in the mind? How does a learner go from not knowing to knowing an L2? In this post, I discuss attempts to explain the psychological process of learning an L2 and the enigma of the interface between conscious and unconscious learning. Bill VanPatten said in his plenary at the BAAL 2018 conference that language teaching can only be effective if it comes from an understanding of how people learn languages. In my opinion, the question of the interface between implicit and explicit learning is vital to such an understanding; it has a direct bearing on the syllabuses, materials and assessment tools used in ELT programmes. If, as I’ll suggest, learning an L2 is predominantly an unconscious process which happens mostly when learners are focused on meaning, then current ELT syllabuses, materials and tests, which reject this conclusion, are bound to hinder rather than help efficacious teaching.  

Declarative and Procedural knowledge

One way of attempting to explain L2 learning from a psychological perspective is to make a distinction between declarative and procedural knowledge. The argument goes as follows. Unlike the knowledge required to know about geography or human anatomy, for example, knowing how to use an L2 for communication relies on unconscious procedural knowledge: knowledge of how to use the language for communicative purposes. Conscious declarative knowledge about the language plays a very minor role. For all a learner’s declarative knowledge, if they lack the (procedural) knowledge needed to use the language in real-time communicative events, they can do little with their declarative knowledge, except pass exams which test it. (This typical item from an English exam – “John — to Paris yesterday. A) goes; B) went; C) has gone” – tests declarative knowledge in much the same way as this item from a geography exam: “Paris is the capital of A) Belgium; B) France; C) Spain”.) Stories abound of primary and secondary school students in English-speaking countries who were taught French for many years, passed successive exams in French based on declarative knowledge about the language, but failed to put their knowledge to effective use when they visited Paris. Their declarative knowledge – their knowledge about French – didn’t help them much when it came to using French to get things done in France. They lacked procedural knowledge.

This is, of course, a theory, a tentative explanation of (among other things) the curious phenomenon of people who know a lot about an L2 but can’t put it to practical use, and it uses the theoretical constructs of declarative and procedural knowledge to provide the explanation. But, of course, these theoretical constructs need fleshing out. Which brings us to Krashen’s Monitor Model. Krashen argues that we learn an L2 in much the same way as we learn our L1s as infants, and he relies on Chomsky’s Universal Grammar (UG) theory to explain early language learning. Chomsky’s UG theory is difficult for non-specialists to understand, but in the simplest terms, it claims that all languages share universal features, and that they vary in terms of certain parameters. Grammaticality judgement experiments involving very large numbers of children over a period of more than forty years demonstrate that children know a great deal more about the languages they use than can be explained by looking at the language they have been exposed to (the Poverty of the Stimulus (PoS) argument). Chomsky argues that the best explanation for children’s extraordinary ability to use language by the time they’re 12 years old, and for the profound, intricate knowledge that underlies that ability, is that humans are hard-wired for language learning: it’s part of human nature. Innate knowledge about the general structure of language allows young children to “bootstrap” language encountered in the environment. They respond to input not as tabulae rasae, but rather as humans prepared for the job: input triggers the setting of parameters on the innate knowledge they already have of the deep structure of languages. I remain unconvinced by attempts in the last sixty years – chaos theories, connectionist models, emergentist theories – to answer the PoS argument, or to provide a better explanation than Chomsky’s, but let’s see.

Back to Krashen’s Monitor Model. Adults learn an L2 in much the same way as children learn languages, by exposure to “comprehensible input”, language that they are exposed to which they can broadly understand, even if there are unknown elements in it. They “acquire” language unconsciously, and what they learn consciously is metalinguistic knowledge – knowledge about the language – which is of extremely limited use. It’s perfectly possible to do without this metalinguistic knowledge, as the millions of people who settle in foreign countries and learn the new language without it attest. Here’s the model:

Given its insistence on the very limited role played by conscious learning, Krashen’s theory is an example of the “No Interface” view – there is a clear difference between implicit (unconscious) learning and explicit (conscious) learning. These types of learning go on in different parts of the mind; they hardly affect each other; and implicit learning is what matters. Krashen’s theory was heavily criticised, notably by Gregg (1984) and McLaughlin (1987), who both highlighted the weak constructs in the model, which lead to circularity. Furthermore, most scholars considered that, whatever its merits, the model was too dismissive of the role explicit learning plays in L2 learning.

Next comes VanPatten’s Input Processing (IP) theory, which attempts to explain how learners turn input into intake by parsing input during the act of comprehension, while their primary attention is on meaning. VanPatten’s model consists of a set of principles (see my blog post on Processing Input for a list of them) which interact in working memory, taking into account the fact that working memory has very limited processing capacity. Content lexical items are searched out first, since words are the principal source of referential meaning. When content lexical items and a grammatical form both encode the same meaning and both are present in an utterance, learners attend to the lexical item, not the grammatical form. Perhaps the most important construct in the IP model is “communicative value”: the more communicative value a form has, the more likely it is to get processed and made available in the intake data for acquisition; it’s thus the forms with little or no communicative value which are least likely to get processed and which, without help, may never get acquired. In this theory, the processing is mostly unconscious, but explicit attention to some aspects of the L2 is seen as helpful, so, IMO, the IP theory belongs in the Very Weak Interface camp.

William O’Grady proposes a “general nativist” theory of first and second language acquisition, which describes a modular acquisition device that does not include Universal Grammar. O’Grady says his work falls under the emergentist rubric, but since he sees the acquisition device as a modular part of the mind, he’s a long way from the real empiricists in the emergentist camp. Interestingly, O’Grady accepts that there are sensitive periods involved in language learning, and that the problems adults face in L2 acquisition can be explained by the fact that adults have only partial access to the (non-UG) L1 acquisition device. O’Grady describes a different kind of processor, doing more general things, but it’s still a language processor, and it’s still working not just on segments of voice streams and words, but on syntax; thus O’Grady conforms to the view that language is an abstract system governed by rules of syntax. Given his “partial access” view, I think O’Grady also belongs in the Very Weak Interface camp.

Swain’s (1985) famous study of French immersion programmes led to her claim that comprehensible input alone can allow learners to reach high levels of comprehension, but that their proficiency and accuracy in production will lag behind, even after years of exposure. Further studies gave more support to this view, and to the opinion that comprehensible input is a necessary but not sufficient condition for proficiency in an L2. Swain’s argument is that we must give more attention to output.

And now we come to Schmidt’s view that in order to learn an L2, we need to “notice” formal features of the input. This enormously influential – and enormously misinterpreted – hypothesis lies at the heart of what the heading of this post calls “the enigma of the interface”.

First, we have to appreciate that Schmidt uses the word “noticing” in a very special, technical way – it’s a theoretical construct not to be confused with the dictionary definition. “Noticing” has subsequently been used, citing Schmidt, in ways that Schmidt roundly rejected. See my post on Schmidt for a brief outline of how he came to form the construct and note Truscott’s (2015) remarks:

Perhaps more disturbing are efforts to use noticing as a theoretical foundation for grammar instruction in general, without concern for whether any given grammar point is or is not a legitimate object of noticing for Schmidt (e.g. R. Ellis, 1993, 1994, 1995; Long & Robinson, 1998; Nassaji & Fotos, 2004). A genuine application of Schmidt’s concept of noticing would have to take this issue seriously. Given the mismatch between the typically broad aims of grammar instruction and the relatively narrow scope of Schmidt’s noticing, it is perhaps not surprising that pedagogical approaches tend to be applications of noticing in name only.

 When others use the term ‘noticing’ and cite Schmidt as its source, they are claiming – whether the claim is explicit or not – that their use rests on this work, when in fact it typically does not. The result is a widespread belief that research and pedagogy in the area of L2 learning are now being guided by a firmly established concept, rooted in extensive review and analysis of research and theory in psychology. But this appearance is an illusion. The reality, whatever one thinks of Schmidt’s noticing, is that most of the relevant work is guided by nothing more than a loose, intuitive notion that consciousness is somehow important.

To the issue, then. Schmidt asks: is it possible to learn formal aspects of a second language that are not consciously noticed? His answer, at least in the (1990) original version of the hypothesis, is “No”. Schmidt points to disagreement over the definition of “intake”. While Krashen seems to equate intake with comprehensible input, Corder distinguishes between what is available for going in and what actually goes in; but neither Krashen nor Corder explains what part of input functions as intake for the learning of form. Schmidt also notes the distinction Slobin (1985) and Chaudron (1985) make between preliminary intake (the processes used to convert input into stored data that can later be used to construct language) and final intake (the processes used to organise stored data into linguistic systems). Schmidt proposes that all this confusion is resolved by defining intake as:

that part of the input which the learner notices … whether the learner notices a form in linguistic input because he or she was deliberately attending to form, or purely inadvertently. If noticed, it becomes intake (Schmidt, 1990: 139).

Thus:

subliminal language learning is impossible, and … noticing is the necessary and sufficient condition for converting input into intake (Schmidt, 1990: 130).

I hesitantly give Rod Ellis’ model of Schmidt’s view (hesitantly because it’s been challenged by many):

This is, obviously, a Strong Interface view, the complete opposite of Krashen’s: conscious knowledge of the grammar of the L2 is a necessary and sufficient condition for L2 learning.

I’ve written various posts on Schmidt’s work (see, particularly, the series of three posts on “Encounters with Noticing”), so here I’ll give a very quick summary of the objections to it.

Schmidt’s original hypothesis claimed that input can’t get processed without being “noticed”, and therefore all L2 learning is conscious. This claim is either trivially true, by adopting circular definitions of ‘conscious’ and ‘learning’, or obviously false. The claim that L2 learning is a process that starts with input going through a necessary stage in short-term memory where “language features” are “noticed” is untenable. It amounts to the claim that all language features in the L2 shuffle through short-term memory and if unnoticed have to re-present themselves. But how can language features present themselves, even once? As Gregg said in a comment on this blog:

Noticing is a perceptual act; you can’t perceive what is not in the senses, so far as I know. Connections, relations, categories, meanings, essences, rules, principles, laws, etc. are not in the senses.

Schmidt can’t expect us to accept that our knowledge of language is the result of “noticing” things from the environment that are presented to the senses because, quite simply, aspects of a language’s grammar, for example, are not there to be “noticed”. Furthermore, Ellis’s figure suggests that the three constructs on the top row – “noticing”, “comparing” and “integrating” – are what turn input into output and explain IL development. But where is the noticing supposed to take place according to the figure? And what is “short/medium-term memory”?

In his 2010 paper, Schmidt confirms the concessions made in 2001, which amount to saying that ‘noticing’ is not needed for all L2 learning, but that the more you notice the more you learn. He also confirms that noticing does not refer to “noticing the gap”. However, the hypothesis remains unsatisfactory, for the following reasons:

1. The Noticing Hypothesis, even in its weaker version, doesn’t clearly describe the construct of “noticing”. There is no way to discern what is and isn’t “noticed”.

2. The empirical support claimed for the Noticing Hypothesis is not as strong as Schmidt (2010) claims.

3. A theory of SLA based on noticing a succession of forms faces the impassable obstacle that, as Schmidt (2010) seemed to finally admit, you can’t “notice” rules, or principles of grammar.

Gass: An Integrated Theory of SLA

Gass (1997), influenced by Schmidt, offers a more complete picture of what happens to input. She says it goes through stages of apperceived input, comprehended input, intake, integration, and output, thus subdividing Krashen’s comprehensible input into three stages: apperceived input, comprehended input, and intake. I don’t quite get “apperceived input”; Gass says it’s the result of attention, in a similar sense to Tomlin and Villa’s (1994) notion of orientation, and Schmidt says it’s the same as his noticing, which doesn’t help me much. In any case, once the intake has been worked on in working memory, Gass stresses the importance of negotiated interaction during input processing and eventual acquisition. I find the Gass model a rather unsatisfactory compilation of bits, but it suggests that L2 learning is predominantly a process of implicit learning, and so it takes a Weak Interface stance.

Skills-based Theory

Skills-based theory also supports the Strong Interface view. It is usually based on John Anderson’s (1983) “Adaptive Control of Thought” model (a general learning theory, not a theory of SLA), which makes the distinction described above between declarative knowledge – conscious knowledge of facts – and procedural knowledge – unconscious knowledge of how an activity is done. When applied to instructed second language learning, the model suggests that learners should first be presented with information about the L2 (declarative knowledge) and then practise using that information in various controlled, and then more loosely controlled, ways, so that what they have consciously learned about the language is converted into unconscious knowledge of how to use the L2 (procedural knowledge). The learner moves from controlled to automatic processing and, through intensive linguistically focused rehearsal, achieves increasingly faster access to, and more fluent control over, the L2 (see DeKeyser, 2007, for example).

The fact that nearly everybody successfully learns at least one language as a child without starting with declarative knowledge, and that millions of people learn additional languages without studying them (migrant workers, for example), challenges the claim that learning a language needs to begin with the imparting of declarative knowledge. Furthermore, the phenomenon of L1 transfer doesn’t fit well with a skill based approach, and neither do putative critical periods for language learning. But the main reason for rejecting such an approach is that it contradicts SLA research findings related to interlanguage development.

Selinker (1972) introduced the construct of interlanguages to explain learners’ transitional versions of the L2. Studies show that interlanguages exhibit common patterns and features, and that learners pass through well-attested developmental sequences on their way to different end-state proficiency levels. Examples of such sequences are found in the morpheme studies, the four-stage sequence for ESL negation, the six-stage sequence for English relative clauses, and the sequence of question formation in German (see Hong and Tarone, 2016, for a review). Regardless of the order or manner in which target-language structures are presented in coursebooks, learners analyse input and create their own interim grammars, slowly mastering the L2 in roughly the same manner and order. Interlanguage (IL) development of individual structures has very rarely been found to be linear. Accuracy in a given grammatical domain typically progresses in a zigzag fashion, with backsliding, occasional U-shaped behavior, over-suppliance and under-suppliance of target forms, flooding and bleeding of a grammatical domain (Huebner, 1983), and considerable synchronic variation, volatility (Long, 2003a), and diachronic variation. So the assumption that learners can move from zero knowledge to mastery of formal parts of the L2 one at a time, moving on to the next item on a list, is a fantasy. Explicit instruction in a particular structure can produce measurable learning. However, the studies that have shown this have usually devoted far more time to intensive practice of the targeted feature than is available in a typical course. Also, the few studies that have followed students who receive such instruction over time (e.g., Lightbown, 1983) have found that once the pedagogic focus shifts to new linguistic targets, learners revert to an earlier stage on the normal path to acquisition of the structure they had supposedly mastered in isolation and “ahead of schedule”.

Note that interlanguage development refers not just to grammar; pronunciation, vocabulary, formulaic chunks, collocations and sentence patterns are all part of the development process. To take just one example, U-shaped learning curves can be observed in learning the lexicon. Learners have to master the idiosyncratic nature of words, not just their canonical meaning. When learners encounter a word in a correct context, the word is not simply added to a static cognitive pile of vocabulary items. Instead, they experiment with the word, sometimes using it incorrectly, thus establishing where it works and where it doesn’t. Only by passing through a period of incorrectness, in which the word is used in a variety of ways, can they climb back up the U-shaped curve.

Interlanguage development takes place in line with what Corder (1967) referred to as the internal “learner syllabus”. Students don’t learn different bits of the L2 when and how a teacher might decide to deal with them, but only when they are developmentally ready to do so. As Pienemann demonstrates (e.g., Pienemann, 1987), learnability (i.e., what learners can process at any one time) determines teachability (i.e., what can be taught at any one time).

Emergentism

Emergentism is an umbrella term referring to a range of usage-based theories which are fast becoming the new paradigm for psycholinguistic research. The return to this more “empiricist” view involves a discussion of the philosophy of science which I won’t go into here, although I discuss it at length in my book on Theory Construction and SLA (Jordan, 2004). It’s complicated! Anyway, “connectionist” and associative learning views are based on the premise that language emerges from communicative use, and that the process of L2 learning does not require resorting to any putative “black box” in the mind to explain it. A leading spokesman for emergentism is Nick Ellis (e.g., 1998, 2002, 2019), who argues that language processing is “intimately tuned to input frequency”. This leads him to develop a usage-based theory which holds that “acquisition of language is exemplar based”.

The power law of practice is taken by Ellis as the underpinning for his frequency-based account. Ellis argues that “a huge collection of memories of previously experienced utterances”, rather than knowledge of abstract rules, is what underlies the fluent use of language. In short, emergentists take language learning to be “the gradual strengthening of associations between co-occurring elements of the language”, and they see fluent language performance as “the exploitation of this probabilistic knowledge” (Ellis, 2002: 173).
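For readers unfamiliar with it, the power law of practice (a finding from cognitive psychology, not specific to SLA) holds that performance improves as a power function of the amount of practice. A standard textbook formulation (the symbols here are generic labels, not Ellis’s own notation) is:

```latex
% Power law of practice:
%   T(N)  = time taken (or error rate) on the Nth practice trial
%   A     = asymptotic (best attainable) performance
%   B     = amount of improvement available at the first trial
%   b > 0 = learning rate
T(N) = A + B \cdot N^{-b}
```

Improvement is steep at first and ever slower thereafter; plotted on log-log axes the curve appears approximately as a straight line, which is how the effect is usually identified in practice data.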

Ellis is committed to a Saussurean view, which sees “the linguistic sign” as a set of mappings between phonological forms and communicative intentions. He claims that

simple associative learning mechanisms operating in and across the human systems for perception, motor-action and cognition, as they are exposed to language data as part of a communicatively-rich human social environment by an organism eager to exploit the functionality of language are what drives the emergence of complex language representations.

My personal view, following Gregg (2003), is that combining observed frequency effects with the power law of practice, and thus explaining acquisition order by appealing to frequency in the input, doesn’t go very far in explaining the acquisition process itself. What role do frequency effects have? How do they interact with other aspects of the SLA process? In other words, we need to know how frequency effects fit into a theory of SLA, because frequency and the power law of practice in themselves don’t provide a sufficient theoretical framework, and neither does connectionism. As Gregg points out, “connectionism itself is not a theory; it is a method, and one that in principle is neutral as to the kind of theory to which it is applied” (Gregg, 2003: 55). Emergentism stands or falls on connectionist models, and so far the results are disappointing. A theory that will explain the process by which nature and nurture, genes and the environment, interact without recourse to innate knowledge remains “around the corner”, as Ellis admits.

So where do we put emergentist theories when it comes to the interface riddle? Without doubt, they belong in the Very Weak Interface camp. They argue that language learning is an essentially implicit process, and that the role of explicit learning is a minor one. For example, Nick Ellis uses a weak version of Schmidt’s Noticing hypothesis to argue that in instructed SLA, drawing students’ attention to certain non-salient or infrequent parts of the input can “reset the dial” (the dial set by their L1s) and thus enable further implicit learning. I note that Mike Long agreed with this view, and contributed to it in later work. His untimely passing earlier this year prevented us from resolving our differences.

Carroll: Autonomous Induction

Finally we come to those who challenge the basis of the input -> processing -> output model, and in particular the Noticing hypothesis. Truscott and Sharwood Smith (2004) propose a MOGUL framework, and Carroll (2001) offers her Autonomous Induction theory. Both rely heavily on Jackendoff’s (1992) Representational Modularity theory (see my post on Jackendoff’s place in Carroll’s theory). A few words on Carroll’s work.

Carroll challenges the basis of Krashen’s and subsequent scholars’ theories. She sees input as physical stimuli, and intake as a subset of these stimuli.

The view that input is comprehended speech is mistaken. Comprehending speech happens as a consequence of a successful parse of the speech signal. Before one can successfully parse the L2, one must learn its grammatical properties. Krashen got it backwards! (Carroll, 2001, p. 78).

So, says Carroll, language learning requires the transformation of environmental stimuli into mental representations, and it’s these mental representations which must be the starting point for language learning. In order to understand speech, for example, properties of the acoustic signal have to be converted to intake; in other words, the auditory stimulus has to be converted into a mental representation.

“Intake from the speech signal is not input to learning mechanisms, rather it is input to speech parsers. … Parsers encode the signal in various representational formats” (Carroll, 2001, p. 10).

Carroll gives a detailed examination of what happens to environmental stimuli by appeal to Jackendoff’s theory, which I discuss in a post on his contribution to Carroll’s theory. I also discuss Carroll’s theory in a number of posts (use the Search box). Suffice it to say here that Carroll’s and Truscott & Sharwood Smith’s theories agree, in their different ways, that L2 learning is predominantly a matter of implicit learning, and that they belong in the Very Weak Interface camp.

Conclusion

Krashen and Schmidt can’t both be right; at least one of them is wrong. The same goes for Krashen and Nick Ellis, and for N. Ellis and Carroll. And so on. However, when it comes to deciding on the interface enigma, only Schmidt and skills-based theory take a Strong Interface view. What we can conclude is that, whether they base themselves on some version of innate knowledge at work in language learning, or rely on simpler, more general learning mechanisms working on input from the environment, SLA scholars agree that learning an L2 depends mostly on implicit learning. The implication is that ELT based on following a synthetic syllabus, and thus giving prime place to teaching explicit knowledge about the language, leads to inefficacious classroom practices.

References         

Carroll, S. (2001). Input and Evidence. Amsterdam: Benjamins.

Chaudron, C. (1985). Intake: On Models and Methods for Discovering Learners’ Processing of Input. Studies in Second Language Acquisition, 7, 1, 1-14.

Corder, S. P. (1967). The significance of learners’ errors. International Review of Applied Linguistics, 5, 161-169.

DeKeyser, R. (2007). Practice in a Second Language: Perspectives from Applied Linguistics and Cognitive Psychology (Cambridge Applied Linguistics). Cambridge: Cambridge University Press.

Ellis, N. (1998). Emergentism, Connectionism and Language Learning. Language Learning 48:4,  pp. 631–664.

Ellis, N. C. (2002). Frequency effects in language processing: A review with implications for theories of implicit and explicit language acquisition. Studies in Second Language Acquisition, 24, 2, 143-188.

Ellis, N. C. (2019). Essentials of a theory of language cognition. Modern Language Journal, 103.

Gass, S. M. (1997). Input, Interaction, and the Second Language Learner. Mahwah, NJ: Lawrence Erlbaum.

Gregg, K. R. (1984). Krashen’s Monitor and Occam’s Razor. Applied Linguistics, 5, 2, 79-100.

Gregg, K. R. (1993). Taking explanation seriously. Applied Linguistics, 14, 3.

Gregg, K. R. (2003). The state of emergentism in second language acquisition. Second Language Research, 19, 2, 95-128.

Jackendoff, R. S. (1992). Languages of the Mind. Cambridge, MA: MIT Press.

Krashen, S. (1981). Second Language Acquisition and Second Language Learning. Oxford: Pergamon.

Krashen, S. (1982). Principles and Practice in Second Language Acquisition. Oxford: Pergamon.

Krashen, S. (1985). The Input Hypothesis: Issues and Implications. New York: Longman.

McLaughlin, B. (1987). Theories of second language learning. London: Edward Arnold.

O’Grady, W. (2005)  How Children Learn Language. Cambridge, UK: Cambridge University Press.

Schmidt, R. (1990). The role of consciousness in second language learning. Applied Linguistics, 11, 2, 129-58.

Schmidt, R. (2001) Attention. In P. Robinson (Ed.), Cognition and second language instruction (pp.3-32). Cambridge University Press.

Schmidt, R. (2010) Attention, awareness, and individual differences in language learning. In W. M. Chan, et. al  Proceedings of CLaSIC 2010, Singapore, December 2-4 (pp. 721-737). Singapore: National University of Singapore, Centre for Language Studies.

Selinker, L. (1972). Interlanguage. International Review of Applied Linguistics, 10, 209-231.

Slobin, D. I. (ed.). (1985). The Crosslinguistic Study of Language Acquisition, Vol. 1: The Data; Vol. 2: Theoretical Issues. Hillsdale, NJ: Lawrence Erlbaum.

Truscott, J. (1998). Noticing in second language acquisition: A critical review. Second Language Research, 14, 2, 103-135.

Truscott, J. (2015) Consciousness and SLA. Multilingual Matters.

Truscott, J., & Sharwood Smith, M. (2004). Acquisition by processing: A modular approach to language development. Bilingualism: Language and Cognition, 7, 1-20.

Changing Tack

Anderson’s new blog post The myth of a theory-practice gap in education makes the following argument:

1. When researchers and academics talk about a theory-practice gap in education, including language teaching, what they are usually referring to is a gap between their beliefs concerning how teachers should teach, and how teachers actually do teach.

2. Research on teacher cognition has established beyond reasonable doubt (e.g., Borg, 2006; Woods, 1996) that all teachers also have theories, either explicit, espousable ones, or the implicit “theories in use” that govern our actions (Argyris & Schön, 1974).

3. Thus, the notion of a theory-practice gap is a myth. There is no theory-practice gap, just a gap between the beliefs of practitioners in two very different communities of practice: academics and teachers.

This all looks very obvious, but what’s the point of it? Well, the point seems to be to reassure teachers that they shouldn’t worry if academics’ theories challenge their teaching practices. If this is the point, then I suggest that it’s wrong and, in any case, Anderson’s remarks do nothing to address the ongoing issue of what teachers can learn from SLA research. Anderson says that researchers and academics are usually referring to a gap between their beliefs concerning how teachers should teach, and how teachers actually do teach, but that’s not actually true; researchers say little about any such gap. To “prove” that the notion of a theory-practice gap is a myth by making it about teaching practice and then pointing to teachers’ own theories is an empty bit of rhetoric which fails to address what is, in fact, the very important gap between what academics know about language learning and what teachers know. Current ELT practice is dominated by the use of coursebooks which implement a synthetic syllabus where the L2 is treated as an object of study, and where the focus is on learning about the language. Such practices contradict the wide consensus among academics that learning an L2 is predominantly a matter of implicit learning; learning by doing, learning by engaging in meaningful use of the language. The clear implication is that current coursebook-driven ELT is inefficacious and that analytical syllabuses, such as those used in TBLT, Dogme and certain types of CLIL are more efficacious.

This is the issue that Anderson ignores. I have argued in many posts on this blog that ELT has become commodified because of the enormous commercial interests involved. These commercial interests, I suggest, explain why the gap between what most teachers know about the way people learn an L2 and what academics know is so wide. Teacher educators (who are often coursebook writers) have vested interests in coursebooks, teacher education programmes like CELTA and high stakes exams like IELTS. They are naturally biased against research findings which challenge the foundations of their approach to ELT, and they use the disagreements among academics to minimise the importance of research findings. Yet, despite all their disagreements, academics studying SLA agree on the fundamental principles which underlie L2 learning. While academics would not dare to tell teachers how they should implement a syllabus or make ongoing pedagogic decisions during a lesson, the core findings of SLA research which explain the very special process of learning an L2 can – and IMO should – be taken into account when designing courses, materials, and tests in ELT.

Anderson has previously published a number of articles which cite SLA research in attempts to defend current ELT practice. One such article is Anderson (2016), “Why practice makes perfect sense: the past, present and potential future of the PPP paradigm in language teacher education”. I replied to it with a post, “Does PPP really make perfect sense?”. Anderson here changes tack in his ongoing role as defender of the faith. His new message to teachers is: never mind about the academics’ theories, use your own, and just keep reciting this empty syllogism: academics have theories about education, teachers have theories about education, thus the notion of a theory-practice gap is a myth.

Multilingualism, Translanguaging and Baloney

Introduction

In my post on Multilingualism and Translanguaging I made an effort to put a positive spin on work being done in the areas of multilingualism, translanguaging, raciolinguistics and Disability Critical Race Theory. I emphasised that this work is unified in its radical objective of challenging current language learning and language teaching practices and replacing them with “new configurations, which allow for the emergence of discourses and voices that are currently deliberately repressed by the status quo”. My argument was that while there are good reasons to support the general thrust of arguments in favor of multilingualism, there is no need for enthusiastic followers of this work to accuse those pursuing scientific research in psycholinguistics of imposing a “positivist paradigm” on applied linguistics. I suggested that the two domains of sociolinguistics and psycholinguistics need not lock horns in a culture war, just so long as those in the sociolinguistic camp don’t disappear down a relativist, socio-cultural hole where appeals to logic and empirical evidence are entirely abandoned.

Here, I take a closer, more critical look at the work of García, Flores and Rosa. My criticism is motivated by an attempt to persuade my chum Kevin Gregg that I haven’t completely lost the plot, and to persuade other readers that García, Flores and Rosa indulge in theoretical speculations, inspired by an extreme relativist epistemology, which result in little more than obscurantist baloney that gets nobody anywhere.

My Position

Let me begin by summarizing what I, from my critical rationalist perspective, acknowledge:

  • English acts as a lingua franca and as a powerful tool to protect and promote the interests of a capitalist class.
  • In the global ELT industry, teaching is informed by the monolingual fallacy, the native speaker fallacy and the subtractive fallacy (Phillipson, 2018).   
  • The ways in which English is privileged in education systems needs critical scrutiny, and policies that strengthen linguistic diversity are needed to counteract linguistic imperialism.
  • Translanguaging affirms bilinguals’ fluent languaging practices and aims to legitimise hybrid language uses.
  • ELT must generate translanguaging spaces where practices which explore the full range of users’ repertoires in creative and transformative ways are encouraged.
  • Translanguaging classroom practices can disrupt subtractive approaches to language education and deficit language policies.
  • Racism permeates ELT. It results in expecting language-minoritized students to model their linguistic practices on inappropriate white speaker norms.

Additive Bilingualism

To expose the baloney which I think characterises the work of García, Flores and Rosa, I use Cummins’ (2017) article, where he defends the construct of additive bilingualism and additive approaches to language education against criticisms made by them. I make comments that Cummins will certainly disagree with.

Cummins starts with Baker and Prys Jones’ (1998) definitions:

Additive Bilingualism: A situation where a second language is learnt by an individual or group without detracting from the maintenance and development of the first language. A situation where a second language adds to, rather than replaces the first language. (p. 698)

Subtractive Bilingualism: A situation in which a second language is learnt at the expense of the first language, and gradually replaces the first language (e.g. inmigrants (sic) to a country or minority language pupils in submersion education). (p. 706)

Cummins points out that although the additive/subtractive distinction was originally formulated as a psycholinguistic construct, it “rapidly evolved to intersect with issues of education equity and societal power relations”. His later definition is that additive bilingualism seeks to “help students add a second language (L2) while continuing to develop academic skills in their home language (L1)”.

Attacks on Cummins’ view begin with García’s (2009) claim that additive bilingualism positions bilingualism as the sum of two monolingualisms. She advocates replacing this mistaken view with the construct of translanguaging, which can act both as a description of the dynamic integrated linguistic practices of bilingual and multilingual students and as a pedagogical approach. Flores and Rosa (2015) build on García’s criticism by arguing that additive approaches interpret the linguistic practices of bilinguals through a monolingual framework where “discourses of linguistic appropriateness fueled by raciolinguistic ideologies” remain unchallenged. As a result, an additive approach “marginalizes” the “fluid linguistic practices” of bilingual students, Standard English learners, and heritage language learners (the three “communities” that Flores and Rosa (2015) deal with).

Cummins replies that García’s and Flores & Rosa’s conceptualizations of additive bilingualism “load the construct with extraneous conceptual baggage that is not intrinsic to its basic meaning”. Additive bilingualism, he argues, does not imply that bilingualism is the sum of two monolingualisms, and additive approaches to language education are as radical – and more coherent – than those of García and Flores & Rosa.

… the construct of additive bilingualism has evolved from its psycholinguistic roots to reference a set of education practices and initiatives that challenge the operation of coercive power structures. These power structures have historically excluded minoritized students’ L1 from schooling with the goal of replacing it with the L2. Extensive research carried out within the context of the additive bilingualism construct has demonstrated that minoritized students’ L1 can be promoted through bilingual education programs at no cost to students’ academic development in the L2 (Cummins, 2017, p. 408).

So what is the “extraneous conceptual baggage” that has been loaded onto Cummins’ construct of additive bilingualism? It is, precisely,

the assumption that the construct of additive bilingualism necessarily entails distinct language systems rather than functioning as an integrated system (Cummins, 2017, p. 411).

To be clear, the extraneous conceptual baggage resides in García’s rejection of the assumption that the categories “first language” and “second language” are meaningful. García insists that first language and second language are not actually distinct language systems, thus denying that Mandarin Chinese, Hindi, Spanish and English, for example, are distinct language systems. Given this preposterous claim, it should not come as a surprise that García and Wei (2014) take the extra step into Humpty Dumpty land by claiming that the construct of language/languages itself is illegitimate. According to them, the translanguaging construct demands that bilingual students’ language practices must not be separated into home language and school language. It follows that the concept of transfer must be shed in favor of “a conceptualization of integration of language practices in the person of the learner” (García & Wei, 2014, p. 80).

So …..

One consequence of the assertion that languages don’t exist is that it is meaningless to talk about transfer from one language to another. It also means, as Cummins points out, that a child who says “My home language is English, but my school language is French” is making an ideologically-charged false utterance and that any source which refers to and provides information about the 7,106 languages and dialects that humanity has generated is similarly epistemologically wrong-headed. Cummins comments:

“The claim that languages exist as social constructions but have no legitimacy “in reality” raises the issue of what “reality” and “social construction” are. Also unclear is the meaning of the claim that languages don’t exist as linguistic entities but do exist in the social world” (Cummins, 2017, p. 414).

Those of us who, unlike Cummins, question the merits of adopting an exclusively socio-cultural view of language and language learning will go further than that. The claim that “languages exist as social constructions” itself needs clarifying. What kind of claim is it? Does it base itself on the work of Vygotsky, Lantolf, or what? Unless it’s properly explained, it’s so much blather. But still, we can agree with Cummins that the claim that languages themselves have no legitimacy “in reality” raises the issue of what the hell they’re talking about. What “reality” are they referring to? Surely, according to any decent relativist epistemology, no stable “reality” exists. And what does seeing language as a “social construction” mean in the context of a “framework” where languages don’t exist?

Rather than dismissing such crap as obvious nonsense, Cummins bends over backwards to reconcile himself with his critics. He himself adopts a socio-cultural approach where a relativist epistemology is often in evidence, but where appeals to empirical evidence are nevertheless made. His article ends with an attempt to reconcile differences with García, Flores and Rosa by proposing a synthesis of perspectives that replaces the term “additive bilingualism” with “active bilingualism”. Good luck with that, Jim. It seems obvious to me that this overture will be roundly rejected. García, Flores and Rosa are committed to an agenda that is characterised by its aggressive rejection of any compromise. A relativist epistemology is coupled with hopelessly undefined constructs and a disdain for empirical evidence that makes it impervious to criticism and encourages the worst kind of cult following.

There’s more

In a reply to Cummins (2017) Flores says that Cummins and he are using different theories of language.

“For Cummins, language is a set of disembodied features that exist as separate from the people who use them and can be objectively documented by researchers. I reject this premise and instead believe that language is embodied in ways such that the social status of the speaker can impact how their language practices are taken up by the listener, which could include researchers”.

Cummins’ view of language is not fairly described by Flores, who nowhere gives his own view. To say that “language is embodied in ways such that bla bla bla” is not to explain any theory of language. Flores talks a lot about “academic language” but refuses to say what his theory of language is.

In a recent tweet, Flores says:

The tendency is to start conversations about language in the classroom around how to differentiate for ELs. This is the wrong starting point. You can’t differentiate effectively if you don’t have a understanding of what language is and how it is embedded in relations of power.

I replied

Please can you tell me where I can find your answer to the question “What is language?”     

And answer came there none. Does Flores agree with García’s nonsensical statement that languages exist as social constructions but have no legitimacy “in reality”?

Other tweets from Flores illustrate his stance. For example:

If we are going to psychologize racial oppression it seems to me that the white kid who believes the white doll is more beautiful than the Black doll has more psychological damage than the Black kid who believes the same thing. Their beliefs will also do more damage in the world.

For the record, I don’t think psychologizing racial oppression is generally a good idea. I’m just saying if we are going to do it, at least do it accurately. (emoji of man shrugging).

I leave those versed in critical discourse analysis to interpret these remarks.

There remains the question of how the works of García, Flores & Rosa affect teaching practice. What do they offer in terms of how to improve language teaching? Cummins (2017) argues that Flores and Rosa do not address the instructional implications of their theoretical claims about raciolinguistic ideologies.

The relevance of this construct for teaching minoritized students will emerge in much more powerful ways when the authors address the apparent contradictions I identify and specify alternative instructional approaches to expand students’ academic registers in place of the approaches advocated by Delpit (1988, 2006) and Olsen (2010), which they reject. Additionally, it would be helpful to address the apparent inconsistency between the endorsement of additive approaches in the work of Bartlett and García (2011) and the rejection of additive approaches in the Flores and Rosa analysis (Cummins, 2017, p. 420).

Amen to that. Cummins’ article has the title “Teaching Minoritized Students: Are Additive Approaches Legitimate?”. His prime concern is to improve the teaching of minoritised students and he’s worked hard to provide answers to that question. I see very little of practical use in the assorted works of García, Flores and Rosa. Pues, I think it’s all mucho noise and pocas nueces.

References  

Bartlett, L., & García, O. (2011). Additive schooling in subtractive times: Bilingual education and Dominican immigrant youth in the Heights. Nashville, TN: Vanderbilt University Press.

Cummins, J. (2017). Teaching Minoritized Students: Are Additive Approaches Legitimate? Harvard Educational Review, 87, 3, 404-425.

Flores, N., & Rosa, J. (2015). Undoing Appropriateness: Raciolinguistic Ideologies and Language Diversity in Education. Harvard Educational Review, 85, 2, 149–171.

García, O. (2009). Bilingual education in the 21st century: A global perspective. Malden, MA: Wiley-Blackwell.

García, O., & Wei, L. (2014). Translanguaging: Language, Bilingualism, and Education. New York: Palgrave Macmillan.

Migliarini, V., & Stinson, C. (2021). Disability Critical Race Theory solidarity approach to transform pedagogy and classroom culture in TESOL. TESOL Quarterly, 55, 3, 708-718.

Phillipson, R. (2018) Linguistic Imperialism. Downloaded 28 Oct. 2021 from https://www.researchgate.net/publication/31837620_Linguistic_Imperialism_

Rosa, J., & Flores, N. (2017). Unsettling race and language: Toward a raciolinguistic perspective. Language in Society, 46, 5, 621-647.

A Sketch of a Process-influenced, Task-based Syllabus for English as an L2

Note: I’ve updated this to go with the Synthetic and Analytic Syllabuses Part 2 post, where I explain the motivation for Breen’s Process syllabus and describe what it entails.

Process Syllabuses

Breen’s (1987b) Process syllabus is based on this rationale:

  • Authentic communication between learners involves the genuine need to share meaning and to negotiate about things that actually matter and require action on a learner’s part.
  • Meta-communication and shared decision-making are necessary conditions of language learning in any classroom.
  • The Process Syllabus empowers learners and stresses the vital role of the teacher.

Rationale for a Process Task-Based Syllabus

I tentatively propose a Process Task-based syllabus based on this rationale:

  • Problem-solving tasks generate learner interaction, real communication, the negotiation of meaning, rich comprehensible input and output.
  • There is a focus on the processes of learner participation in discourse.
  • Tasks are sequenced on the basis of addressing learner problems as they arise, thereby overcoming the sequencing limitations of conventional syllabus design criteria.
  • Learners work on all parts of the syllabus, including input and materials design.

Methodological Principles

  • Promote “Learning by Doing” (Real-world tasks)
  • Provide rich input (Realistic target language use)
  • Focus on Form (not FoFs)
  • Provide Negative Feedback (Recasts +)
  • Involve learners in decision-making
  • Respect Interlanguage Development.

Needs Analysis consists of:

  • Pre-course questionnaires
  • Interviews
  • Planning sessions during the course

Tasks

In all tasks

  • Meaning is primary
  • Focus is on outcomes
  • Students feel the relevance of the task to their English language needs.

We distinguish between Macro and Micro Tasks

Macro tasks: Solve a well-defined problem. My suggestion is that a macro task, involving 6 to 15 hours of class time, forms the framework for the micro tasks, and that it is framed as a problem. Examples: 

  1. How can the banking sector in Country X be re-organised, post-2008, so as to avoid a repetition of the 2008 collapse and provide individuals with cheap, efficient, reliable services?
  2. How do sophisticated new statistical software packages affect house, car, life and other insurance in Country X?
  3. How can parents deal with teenagers’ use of mobile phones in Country X?
  4. The Reinvention of Danone: planning for continued growth.
  5. How can we ensure the continuation of Newspaper X in Country X?
  6. What is the best model for primary & secondary education: Finland, Poland, or UK?
  7. How can the problems of water scarcity in Country X be tackled?
  8. How can traffic problems in City X be tackled?
  9. How can racial discrimination in Industry X or Sector X in Country X be tackled?
  10. How can tourism in Location X be promoted?

Micro tasks: Each Macro Task is broken down into a series of Micro Tasks. Here is a suggested, though not definitive, sequence.

  • Understand the problem
  • Suggest Tentative Solutions
  • Gather information
  • Analyse and Assess information
  • Test Tentative Solutions
  • Propose Solution
  • Discuss Solution
  • Make decision
  • Report

Implementation

The teacher takes charge of the first section of the course.

At the end of Section 1, there is a Feedback / Planning session. Students fill in two short questionnaires and then put together a plan for Section 2. They tell the teacher what topic or topics they want to work on, how they feel about help with grammar, vocabulary, etc., and how they want to work. The teacher uses their feedback and their plan to design Section 2 of the course.

The teacher then leads students through Section 2. At the end of the section, there is a new Feedback / Planning session. Having learned from the first two sections how to use the planning session to their best advantage, the students do a better job of planning Section 3; the teacher puts the plan together, and so it goes on.

A 100-hour course will consist of 8 to 10 sections.

An Example: General Business Course for adults

  • Type of Student: Adult
  • Number of Students: 12
  • Level: Mid-Intermediate (CEFR: B2).
  • Course Duration: 100 hours; 6 hours a week.
  • Main Objectives: Improve English for business purposes. Priority: oral communication.

Section 1 (Hours 1 to 12)

Before the course starts, students fill in a needs analysis (NA) questionnaire and are interviewed.

The teacher designs and leads the first section of the course by leading work on a suitable Macro Task, chosen on the basis of data gathered from the NA and interviews. To carry out the Micro Tasks, materials from a Materials Bank are used: worksheets, web-based material, videos, oral and written texts, articles, newspaper reports, etc.

While carrying out the tasks, the teacher

  • attends to grammar through negative feedback and focus on form,
  • attends to vocabulary building and lexical chunks in vocab. sessions,
  • includes some written work in class,
  • sets homework of various types, 
  • establishes a website for the class,
  • provides a mixture of group work, pair work, whole class work,
  • generally tries to give students a taste of what’s possible.

First Feedback / Planning Session

Tool 1: Feedback Sheet

1 = very bad   10 = excellent

General feeling about course so far:  1  2  3  4  5  6  7  8  9  10

My participation:  1  2  3  4  5  6  7  8  9  10

My progress:  1  2  3  4  5  6  7  8  9  10

Teacher:  1  2  3  4  5  6  7  8  9  10

Activities:  1  2  3  4  5  6  7  8  9  10

Use of time:  1  2  3  4  5  6  7  8  9  10

Level of difficulty:  1  2  3  4  5  6  7  8  9  10

Best parts of the classes:

Worst parts of the classes:

Too much / too little time was spent on:

General Comments:

…………………………………………………………………………………………………….
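Once the class has filled in Tool 1, the teacher needs some way of pulling the ratings together before the planning discussion. The sketch below tallies class averages for each rating category; it is only an illustration, and the category names (`course_so_far`, `my_progress`, `activities`) are hypothetical labels standing in for the rows on the Feedback Sheet, which uses the sheet's 1-10 scale (1 = very bad, 10 = excellent).

```python
# A minimal sketch of tallying Feedback Sheet ratings across a class.
# Category names are hypothetical stand-ins for the rows on the sheet.

def class_averages(sheets):
    """Average each rating category over all students' feedback sheets."""
    totals = {}
    for sheet in sheets:
        for category, score in sheet.items():
            totals.setdefault(category, []).append(score)
    return {category: sum(scores) / len(scores)
            for category, scores in totals.items()}

# Two students' (abbreviated) sheets:
sheets = [
    {"course_so_far": 7, "my_progress": 6, "activities": 8},
    {"course_so_far": 9, "my_progress": 7, "activities": 6},
]
averages = class_averages(sheets)
# averages: {"course_so_far": 8.0, "my_progress": 6.5, "activities": 7.0}
```

However the tally is done, the point is the same: the teacher walks into the whole-class meeting with a summary of where opinion converges and diverges, rather than a pile of individual sheets.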

Tool 2: Planning Sheet

What should be the Topic/s for the next Section of the course?

Class Work: What proportion of the time should we work

  • individually,
  • in pairs,
  • in small groups,
  • as a whole group?

Activities: Name the activities you want to do. Be as detailed as possible.

  • Listening (video / audio / etc.)
  • Reading (texts, internet searches, etc.)
  • Writing (e-mails, reports, etc.)
  • Speaking (discussions, meetings, stories, presentations, etc.)
  • Grammar work
  • Vocabulary work
  • Pronunciation work

What other recommendations do you have?

Prepare a report giving your recommendations for Section 2 of the course.

……………………………………………………………………………………………………………….

Procedure:

Students are given the Feedback and Planning questionnaires which they fill in individually.

They then get into groups of 4 to discuss their answers, reach consensus on the plan for Section 2, and prepare a report to give to the whole class. During this first planning session, it’s important for the teacher to encourage everybody to make specific, focused suggestions. My experience using this type of syllabus is that students will tend to say “We liked Section 1 well enough, you carry on, just let’s have a bit more of this and a bit less of that.” You have to insist that they give more input than this.

The whole class meets to hear reports from each group. The teacher records the meeting, listens to the feedback comments, and makes no attempt to defend himself/herself against any criticism. The groups then report on their plans for Section 2, after which the class discusses the reports together and reaches final decisions. The teacher promises to present the plan for Section 2 in the next class.

At the next class, the teacher presents the plan for Section 2. Exactly how much this reflects the students’ plan will depend on the context, but in any case it’s the teacher’s job to explain the plan, and to make sure it sufficiently reflects the students’ suggestions. Then Section 2 begins.

Section 2

In Section 2, the teacher leads students through one or more Macro Tasks, chosen in line with the students’ decisions on topic. The Micro Tasks involve activities covering the four skills, again chosen to reflect the students’ comments on where they want to concentrate. The materials for these activities come from a Materials Bank, which clearly plays a very important part in the delivery of this type of course. In the SLB cooperative, we’re currently working on materials coded according to topic, skill, level, grammar point, vocabulary area, and grouping (individual / pair / whole class work), so that members who want to run a syllabus of this type can quickly assemble the Micro Tasks which make up a Macro Task.
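To give a concrete idea of what a coded Materials Bank makes possible, here is a minimal sketch of filtering materials by their codes. The field names (`topic`, `skill`, `level`, `grouping`) and the sample entries are assumptions for illustration only; the SLB cooperative's actual coding scheme may differ.

```python
# A minimal sketch of a coded Materials Bank; field names and entries
# are hypothetical, not the SLB cooperative's actual scheme.

materials_bank = [
    {"title": "Water scarcity report", "topic": "environment",
     "skill": "reading", "level": "B2", "grouping": "pairs"},
    {"title": "Bank collapse video", "topic": "finance",
     "skill": "listening", "level": "B2", "grouping": "whole class"},
    {"title": "Insurance stats worksheet", "topic": "finance",
     "skill": "writing", "level": "B1", "grouping": "individual"},
]

def find_materials(bank, **criteria):
    """Return all materials matching every given field/value pair."""
    return [m for m in bank
            if all(m.get(field) == value for field, value in criteria.items())]

# Assemble B2-level finance materials for a Macro Task on the banking sector:
selected = find_materials(materials_bank, topic="finance", level="B2")
```

The design choice that matters here is simply that every item carries the same set of codes, so a Macro Task's Micro Tasks can be assembled by querying on any combination of them, rather than by hunting through folders.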

At the end of Section 2, the 2nd Feedback/Planning session is held. Students fill in the same questionnaires and go through the same group and whole class discussion phases. They do it better this time; and they’ll do it better the 3rd time.

Variations in Process TBLT

The basic idea is that the syllabus is divided into sections, and each section is built around one or more macro problem-solving tasks. Each section is preceded by a planning session. The variable elements are:

  • Number of Sections
  • Content of Planning Session (how much students decide)
  • Materials: (from Materials Bank to Coursebook)
  • Nature of tasks
  • Nature of vocab. and grammar work

Summary

The Process TBLT syllabus sketched here is open to lots of criticism. The NA is open to all the criticisms Long makes of it, and so are the tasks themselves. On the other hand, students engage in the messy work of learning and directly affect decisions; it’s adaptable; it avoids the false assumptions made by synthetic syllabuses; it’s learner-centred; and it’s likely to be more rewarding for all concerned than coursebook-driven ELT.

References

Breen, M. (1987a). Contemporary Paradigms in Syllabus Design. Part 1. Language Teaching, 20, 2, 81-92.

Breen, M. (1987b). Contemporary Paradigms in Syllabus Design. Part 2. Language Teaching, 20, 3, 157-174.

Breen, M., & Littlejohn, A. (Eds.). (2000). Classroom Decision-Making: Negotiation and Process Syllabuses in Practice. Cambridge: Cambridge University Press.

Synthetic and Analytic Syllabuses, Part 2

Introduction

In Part 1, I gave a brief summary of the main differences between synthetic and analytic syllabuses, emphasizing how little time synthetic syllabuses give to the development of implicit (procedural) knowledge of the L2. I argued that the synthetic syllabuses used in coursebooks contradict robust findings from SLA research by wrongly assuming that the explicit teaching of declarative knowledge (knowledge about the L2) will lead to the procedural knowledge learners need in order to successfully use the L2 for communicative purposes. I further argued that the analytic syllabus used in Long’s TBLT is likely to be more efficacious, because it conforms to SLA research findings which support the view that SLA is essentially a matter of learning by doing. If communicative competence is the goal, implicit learning is far more important than explicit learning, and it follows that classroom time should be devoted to the accomplishment of communicative tasks, during which, attention can be given to formal aspects of the L2.

In this Part 2, I look in more detail at the two types of syllabus, in order to clear up confusion caused by equating synthetic / analytic syllabuses with product / process syllabuses and Type A / Type B syllabuses.

Synthetic vs. Analytic Syllabuses

Long & Crookes (1992) updated Wilkins’ (1976) original distinction between these two types, and in doing so, moved Wilkins’ own notional-functional syllabus from the analytic category, where Wilkins had put it, into the synthetic category.

Synthetic syllabuses cut the L2 up into linguistic “items” which are treated one at a time in a step-by-step sequence. Items include words, collocations, grammar rules, sentence patterns and pronunciation norms; examples of synthetic syllabuses are grammatical, lexical and notional/functional. Coursebook series such as Headway, Outcomes and English File implement synthetic syllabuses. Items of the L2 are first selected for inclusion in the syllabus, and then sequenced for treatment. Once the content and sequencing decisions have been made, the items are contextualized, presented, explained and then practiced. Thus, learning an L2 consists of gradually accumulating knowledge of its parts. The syllabuses are called synthetic because they expect the learner to “re-synthesize” the language that has been broken down into a large number of small items for practical teaching reasons.

Analytic syllabuses work in reverse. They start with the learner and learning processes. Students are exposed to samples of the L2, which, while they may have been modified, are not controlled for structure or lexis in the way a synthetic syllabus demands. The learners’ job is to use the samples in communicative tasks in such a way that they analyze the input, and thereby induce rules of grammar and use for themselves. There is no overt or covert linguistic syllabus. More attention is paid to message and pedagogy than to formal aspects of the L2. The idea is that, much in the way children learn their L1, adults can best learn an L2 incidentally, through using it. Analytic syllabuses are implemented using spoken and written activities and texts, modified for L2 learners, chosen for their content, interest value, and comprehensibility. In the classroom, the focus is on students using the language in communicative tasks, as opposed to treating the L2 as an object of study. Grammar and vocabulary presentations, drills, and strictly controlled oral practice are seldom used. TBLT, Dogme, some immersion programmes, and some CLIL courses use analytic syllabuses.

Product vs Process Syllabuses

Breen (1987) classified syllabuses into two basic types: “propositional” and “Process”.  (Note that the first type is more usually referred to, although not by Breen, as a “product” syllabus.) According to Breen, the propositional syllabus represented the “dominant paradigm” in 1987. Two examples of propositional syllabuses given by Breen are the “formal” (grammar) syllabus and the “functional” syllabus; both see the syllabus as a propositional plan giving a detailed description of what is to be taught. Once the content of the syllabus has been determined, the teacher works through the syllabus using appropriate materials and (variations of) a “Present -> Practice -> Produce” (PPP) methodology.

In contrast, a Process syllabus is concerned with “how something is done”. It is interested in two “How?” questions. First, how is communication in the L2 done? In other words, how is the L2 used so that correctness, appropriacy, and meaningfulness are simultaneously achieved during communication within a certain range of events and situations? The syllabus is derived from an analysis of language use in a variety of events and situations, and maps out the procedural knowledge which enables a language user to communicate within them.

The second “How?” question is what really distinguishes a Process syllabus from the rest. It asks: how should learners participate in the experience of language learning? Breen says: “Just as tasks are socially situated in real communication in everyday life, the Process syllabus recognises that communication and learning in classrooms are also socially situated in the classroom group. In a sense, the Process syllabus addresses three interdependent processes: communication, learning, and the group process of a classroom community” (Breen, 1987b, p. 161). As a result of this all-important concern with the group process, Breen rejects the task-based syllabus because although it addresses the first “How?” question adequately, it does not sufficiently address the second one. Breen’s Process syllabus is “primarily a syllabus which addresses the decisions which have to be made and the working procedures which have to be undertaken for language learning in a group. It assumes, therefore, that the third process – how things may be done in the classroom situation – will be the means through which communicating and learning can be achieved” (Breen, 1987b, p. 166).

The Process syllabus completely breaks the mould (the established paradigm) of syllabus design, and thus it can’t be described in terms of the five questions which Breen applies to other types of syllabus, including task-based. Breen replaces the five questions with three concerns: (a) what the Process syllabus provides; (b) the relationship between the Process syllabus and the content or subject matter to be learned and (c) the rationale of the Process syllabus. In terms of what it provides, the Process syllabus consists of a plan relating to the major decisions which teacher and learners need to make during classroom language learning, and a bank of classroom activities made up of sets of tasks. The plan is presented in terms of questions which teacher and learners together discuss and agree upon. Questions refer to three main aspects of classroom work: participation, procedure and subject-matter. “Teacher and learners are involved in a cycle of decision-making through which their own preferred ways of working, their own on-going content syllabus, and their choices of appropriate activities and tasks are realised in the classroom” (Breen, 1987b, p. 167).

Below is Breen’s summary:

In a separate post, I’ve updated a sketch of a “Process task-based syllabus” that I did some years ago, just to give an idea of what a Process approach might lead to in TBLT. Mike Long, not surprisingly, did not think much of it, and I agree with his criticisms. I won’t go through those criticisms here; suffice it to say that there are very important differences between Breen’s Process syllabus and Long’s (2015) task-based syllabus. While Long’s syllabus is clearly an analytic syllabus, it is equally clearly not a Process syllabus. Long thinks it’s a mistake to give students such a pivotal role in the design of the syllabus; his task-based syllabus relies on its special type of needs analysis, and on the protagonism of the students in carrying out the tasks, to ensure that the course is learner-centred, relevant and efficacious. In Long’s opinion, syllabus design should be carried out by experts who rely on the data collected from domain experts, language scholars, teachers, and the students themselves, in order to make pedagogic tasks, which together comprise the syllabus that all lessons in a course are built around.

Type A and Type B syllabuses (R.V. White, 1988)

Type A syllabuses are “interventionist” – the content of the syllabus is decided by preselecting the language to be taught, dividing it up into small pieces or items, determining learning objectives, and assessing success and failure in terms of achievement or mastery. These syllabuses are thus, says White, “external to the learner”, “other-directed” and “determined by authority”.

Type B syllabuses, on the other hand, are “noninterventionist”. No preselection or arrangement of items to be taught is made and objectives are determined by a process of negotiation between teacher and learners as the course evolves. They are thus, says White, internal to the learner, they emphasise the process of learning rather than the subject matter, and assessment is carried out “in relationship to learners’ criteria for success”.

Here’s a summary:

Like Breen’s Process syllabus, a Type B syllabus evolves as it goes along, and thus it is fundamentally different to Long’s task-based syllabus, where the pedagogic tasks which make up the syllabus have already been decided, albeit as a result of a detailed needs analysis.

Discussion

Long & Crookes, Breen and R.V. White used their dichotomous categories of syllabus types in order to argue for change. In the name of efficacy, they all wanted to move away from the dominant syllabus type, which they described in their own ways as synthetic, propositional (product) and Type A syllabuses, respectively. These syllabuses share the same fatal flaw: they reduce second language learning to one more subject in the curriculum, making the L2 an object of study, like the human body in biology or the globe in geography. As an object of study, the L2 is chopped up into bits to facilitate the sequential teaching of the items which make up its parts, on the false assumption that declarative knowledge can be transformed into procedural knowledge through a certain type of practice. All these syllabuses adhere to Carroll’s (1966) view that L2 learning starts with explicit knowledge: through presentation / contextualization / explanation / comprehension checks / etc., the students have the items of language under consideration explained to them.

“Once the student has a proper degree of cognitive control over the structure of a language, facility will develop automatically with the use of the language in meaningful situations” (p.66).

This contrasts with the opposite, modern view, eloquently expressed by Hatch (1978):

“Language learning evolves out of learning how to carry on conversations. One learns how to do conversation, one learns how to interact verbally, and out of this interaction syntactic structures are developed” (p. 404).

Hatch’s view informs analytic, process and Type B syllabuses, all of which are based on the assumption that there is a weak interface between declarative and procedural knowledge and that procedural knowledge is best developed by engaging in relevant tasks where the focus is on meaning, but where the teacher provides scaffolding and feedback in order to improve the rate and quality of interlanguage development.

For all their agreement, it’s important to highlight the differences in the three accounts, especially the difference between Long & Crookes’ synthetic / analytic distinction, and the others. Breen and R.V. White are particularly concerned with the collaboration between teacher and students, as a result of which they both see “evolution” and “negotiation” as core elements of a syllabus. The syllabus is not pre-written, it evolves as procedures and content are continuously negotiated during the course. Long & Crookes (1992), on the other hand, use the synthetic / analytic distinction to argue for a certain type of task-based syllabus (one based on identifying target tasks and then designing pedagogic tasks), which, they claim, is more efficacious than Breen’s process syllabus, for various reasons.

The differences in the three dichotomies may help to sort out Chinaski’s doubts. I don’t know her real name, but “Chinaski” is the handle she uses on Twitter, where she has made quite a few comments about Long’s (2015) TBLT. Here’s an example:  

“I understand synthetic syllabuses to necessarily entail a product (Type A) approach to syllabus design in which the product covered by each unit of work, be it grammatical, discoursal, functional, situational, etc., is synthesised by learners and assimilated into their burgeoning language system. I would imagine Skehan’s process Type B approach to be further along the analytic continuum in that tasks are the medium and not the object of learning. Surely “building block” tasks and “exit” tasks are the hallmarks of a synthetic approach and reflect a concern for an accountability in ELT in line with neoliberal thinking”.

Synthetic syllabuses don’t “necessarily” entail covering a “product” in each unit of work – they “cover” bits of language. More precisely, each unit in a coursebook that implements a synthetic syllabus focuses on a few specific, discrete linguistic entities – structures, lexis, and notions and functions. Long’s task-based syllabus doesn’t use linguistic entities, but rather tasks, as its “course currency” or “unit of analysis”. His syllabus consists of a sequence of pedagogic tasks; these are derived from target tasks which are identified by a needs analysis. Examples of target tasks (described in Long, 2015) are flight attendants serving food & drinks; migrant workers in California dealing with a police stop while driving to work, and writing an article in English for a Catalan newspaper. The aim of the pedagogic tasks is to build students’ interlanguages so that they can eventually perform the identified target tasks. The pedagogic tasks use “elaborated” and “modified”, as opposed to “simplified” oral and written texts to provide input, and the “exit” pedagogic task is often a simulation of an identified target task. These pedagogic tasks might well be seen as building blocks, but that doesn’t make them “products”; they are not parts of a propositional syllabus as Breen uses the term, nor are they well-described as parts of a Type A syllabus. Using elaborated and modified texts to perform a relatively simple version of a target task is not the same as using simplified texts to teach a particular formal aspect of the L2, like the comparative of adjectives, for example.

Nevertheless, while to describe Skehan’s approach as “process Type B” is wide of the mark, and while I don’t think the synthetic / analytic distinction is best seen as a continuum, I think Chinaski points to something interesting when she says that Skehan’s approach to TBLT is “further along the analytic continuum in that tasks are the medium and not the object of learning”. Long’s syllabus lays out a course aimed at giving students the ability to complete certain target tasks. In Long’s words, the syllabus aims “to help students to develop their language abilities gradually to meet the demands of increasingly complex tasks, linguistic problems being treated reactively, as they arise” (Long, 2015, p. 222). Skehan, meanwhile, as I think Chinaski suggests, sees tasks as the best framework for teaching an L2, rejects Long’s reliance on identifying target tasks, and is not particularly concerned with the detailed design of pedagogic tasks.

In Ellis, Skehan, Li, Shintani & Lambert’s (2019) TBLT, Theory and Practice, Skehan follows Rod Ellis in arguing that an “operational” syllabus is better than an “illuminative” one (these are Prabhu’s terms). An operational syllabus is said to have the relatively modest aim of providing course content from which teachers can make their own lesson plans and reach appropriate goals for their learners in their local contexts. It specifies only what will be taught, not how it will be taught. “The content is fixed, but how the teacher uses the content is flexible” (Ellis et al., 2019, p. 212). On the other hand, an “illuminative” syllabus is much more thorough and makes a considerable effort to ensure that “what is taught and what is learned are carefully aligned”. The examples given of illuminative syllabuses are “those used in workplace training where employees are briefly trained in how to perform key tasks (e.g., cleaning bathrooms at an airport, preparing a hotel room)”. Ellis and Skehan argue that an illuminative syllabus is undesirable, because it’s too prescriptive and it thus limits teachers’ and learners’ freedom to make the many intuitive decisions and adjustments which “optimize learners’ mastery of syllabus content”.

I personally find Ellis et al’s discussion of operational and illuminative syllabuses wholly unconvincing and very lacking in substance; but while it certainly needs a response, here is not the place. I mention it only because I think it’s interesting that Skehan seems to sign up to Ellis’ view of TBLT, and because it has some relevance to Chinaski’s doubts. My aim has been the modest one of clarifying how three different accounts of syllabus design have used three similar, but not identical terms to classify syllabuses into two opposing types.

References

Breen, M. (1987a). Contemporary Paradigms in Syllabus Design. Part 1. Language Teaching, 20,2, 81-92.

Breen, M. (1987b). Contemporary Paradigms in Syllabus Design. Part 2. Language Teaching, 20, 3, 157-174.

Carroll, J. B. (1966). The contribution of psychological theory and educational research to the teaching of foreign languages’ in A. Valdman (ed.). Trends in Language Teaching. McGraw-Hill, 93–106.

Ellis, R., Skehan, P., Li, S., Shintani, N., and Lambert, C. (2019). Task-based language teaching: Theory and practice. Cambridge University Press.

Hatch, E. (ed.) (1978). Second Language Acquisition: A Book of Readings. Newbury House.

Long, M. (2015). Second Language Acquisition and Task-Based Language Teaching. Wiley.

Long, M. H., & Crookes, G. (1992). Three approaches to task-based syllabus design. TESOL Quarterly, 26, 1, 27-56.

White, R.V. (1988). The ELT Curriculum: Design, Innovation, and Management. Basil Blackwell.

Wilkins, D. A. (1976). Notional Syllabuses. Oxford University Press.