Teacher Trainers in ELT

This blog is dedicated to improving the quality of teacher training and development in ELT.

 

The Teacher Trainers 

The most influential ELT teacher trainers are those who publish “How to teach” books and articles, have on-line blogs and a big presence on social media, give presentations at ELT conferences, and travel around the world giving workshops and teacher training & development courses. Among them are: Jeremy Harmer, Penny Ur, Nicky Hockly, Adrian Underhill, Hugh Dellar, Sandy Millin, David Deubelbeiss, Jim Scrivener, Willy Cardoso, Peter Medgyes, Mario Saraceni, Dat Bao, Tom Farrell, Tamas Kiss, Richard Watson-Todd, David Hill, Brian Tomlinson, Rod Bolitho, Adi Rajan, Chris Farrell, Marisa Constantinides, Vicki Hollett, Scott Thornbury, and Lizzie Pinard. I appreciate that this is a rather “British” list, and I’d be interested to hear suggestions about who else should be included. Apart from these individuals, the Teacher Development Special Interest Groups (TD SIGs) in TESOL and IATEFL also have some influence.

What’s the problem? 

Most current teacher trainers and TD groups pay too little attention to the question “What are we doing?”, and the follow-up question “Is what we’re doing effective?”. The assumption that students will learn what they’re taught is left unchallenged, and trainers concentrate either on coping with the trials and tribulations of being a language teacher (keeping fresh, avoiding burn-out, growing professionally and personally) or on improving classroom practice. As to the latter, they look at new ways to present grammar structures and vocabulary, better ways to check comprehension of what’s been presented, more imaginative ways to use the whiteboard to summarise it, and more engaging activities to practise it. A good example of this is Adrian Underhill and Jim Scrivener’s “Demand High” project, which leaves unquestioned the well-established framework for ELT and concentrates on doing the same things better. In all this, those responsible for teacher development simply assume that current ELT practice efficiently facilitates language learning. But does it? Does the present model of ELT actually deliver the goods, and is making small, incremental changes to it the best way to bring about improvements? To put it another way, is current ELT practice efficacious, and is current TD leading to significant improvement? Are teachers making the most effective use of their time? Are they maximising their students’ chances of reaching their goals?

As Bill VanPatten argued in his plenary at the BAAL 2018 conference, language teaching can only be effective if it comes from an understanding of how people learn languages. In 1967, Pit Corder was the first to suggest that the only way to make progress in language teaching is to start from knowledge about how people actually learn languages. Then, in 1972, Larry Selinker suggested that instruction on formal properties of language has a negligible impact (if any) on real development in the learner. Next, in 1983, Mike Long again raised the issue of whether instruction on formal properties of language made a difference in acquisition. Since these important publications, hundreds of empirical studies have been published on everything from the effects of instruction to the effects of error correction and feedback. This research has in turn resulted in meta-analyses and overviews that can be used to measure the impact of instruction on SLA. All the research indicates that the current, deeply entrenched approach to ELT, where most classroom time is dedicated to explicit instruction, vastly over-estimates the efficacy of such instruction.

So, in order to answer the question “Is what we’re doing effective?”, we need periodically to re-visit questions about how people learn languages. Most teachers are aware that we learn our first language(s) unconsciously and that explicit learning about the language plays a minor role, but they don’t know much about how people learn an L2. In particular, few teachers know that the consensus among SLA scholars is that implicit learning through using the target language for relevant, communicative purposes is far more important than explicit instruction about the language. Here are just four examples from the literature:

1. Doughty (2003) concludes her chapter on instructed SLA by saying:

In sum, the findings of a pervasive implicit mode of learning, and the limited role of explicit learning in improving performance in complex control tasks, point to a default mode for SLA that is fundamentally implicit, and to the need to avoid declarative knowledge when designing L2 pedagogical procedures.

2. Nick Ellis (2005) says:

the bulk of language acquisition is implicit learning from usage. Most knowledge is tacit knowledge; most learning is implicit; the vast majority of our cognitive processing is unconscious.

3. Whong, Gil and Marsden’s (2014) review of a wide body of studies in SLA concludes:

“Implicit learning is more basic and more important  than explicit learning, and superior.  Access to implicit knowledge is automatic and fast, and is what underlies listening comprehension, spontaneous  speech, and fluency. It is the result of deeper processing and is more durable as a result, and it obviates the need for explicit knowledge, freeing up attentional resources for a speaker to focus on message content”.

4. ZhaoHong and Nassaji (2018) review 35 years of instructed SLA research and, citing the latest meta-analysis, say:

On the relative effectiveness of explicit vs. implicit instruction, Kang et al. reported no significant difference in short-term effects but a significant difference in longer-term effects with implicit instruction outperforming explicit instruction.

Despite lots of other disagreements among themselves, the vast majority of SLA scholars agree on this crucial matter. The evidence from research into instructed SLA gives massive support to the claim that concentrating on activities which build implicit knowledge (developing the learners’ ability to make meaning in the L2 through exposure to comprehensible input, participation in discourse, and implicit or explicit feedback) leads to far greater gains in interlanguage development than concentrating on the presentation and practice of pre-selected bits and pieces of language.

One of the reasons why so many teachers are unaware of the crucial importance of implicit learning is that so few teacher trainers talk about it. Teacher trainers don’t tell their trainees about the research findings on interlanguage development; or that language learning is not a matter of assimilating knowledge bit by bit; or that the characteristics of working memory constrain rote learning; or that by varying different factors in tasks we can significantly affect the outcomes. And there’s a great deal more we know about language learning that teacher trainers don’t pass on to trainees, even though it has important implications for everything in ELT: from syllabus design to the use of the whiteboard, from methodological principles to the use of IT, from materials design to assessment.

We know that in the not-so-distant past, generations of school children learnt foreign languages for seven or eight years, and the vast majority of them left school without the ability to maintain an elementary conversational exchange in the L2. Things have improved only to the extent that teachers have been informed about, and encouraged to critically evaluate, what we know about language learning, and have constantly experimented with different ways of engaging their students in communicative activities. To the extent that teachers continue to spend most of the time talking to their students about the language, those improvements have been minimal. So why do so many teacher trainers ignore all this? Why is all this knowledge not properly disseminated?

Most teacher trainers, including Penny Ur (see below), say that, whatever its faults, coursebook-driven ELT is practical, and that alternatives such as TBLT are not. Ur actually goes as far as to say that there’s no research evidence to support the view that TBLT is a viable alternative to coursebooks. Such an assertion is contradicted by the evidence. In a recent statistical meta-analysis of 52 evaluations of program-level implementations of TBLT in real classroom settings, Bryfonski & McKay (2017) report that “results revealed an overall positive and strong effect (d = 0.93) for TBLT implementation on a variety of learning outcomes”. The settings included parts of the Middle East and East Asia, where many have flatly stated that TBLT could never work for “cultural” reasons, and “three-hours-a-week” primary and secondary foreign language settings, where the same opinion is widely voiced. So there are alternatives to the coursebook approach, but teacher trainers too often dismiss them out of hand, or simply ignore them.
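For readers who want to interpret that figure: the d reported by Bryfonski & McKay is Cohen’s d, a standardised mean difference. As a rough sketch of what the statistic measures (my gloss of the standard formula, not anything taken from their paper, and the group labels are illustrative only):

```latex
% Cohen's d: difference between group means, scaled by the pooled standard deviation
d = \frac{\bar{X}_{\text{TBLT}} - \bar{X}_{\text{comparison}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}
```

By Cohen’s conventional benchmarks (roughly 0.2 = small, 0.5 = medium, 0.8 = large), an overall effect of 0.93 counts as a large one.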

How many TD courses today include a sizeable component devoted to the subject of language learning, where different theories are properly discussed so as to reveal the methodological principles that inform teaching practice?  Or, more bluntly: how many TD courses give serious attention to examining the complex nature of language learning, which is likely to lead teachers to seriously question the efficacy of basing teaching on the presentation and practice of a succession of bits of language? Today’s TD efforts don’t encourage teachers to take a critical view of what they’re doing, or to base their teaching on what we know about how people learn an L2. Too many teacher trainers base their approach to ELT on personal experience, and on the prevalent “received wisdom” about what and how to teach. For thirty years now, ELT orthodoxy has required teachers to use a coursebook to guide students through a “General English” course which implements a grammar-based, synthetic syllabus through a PPP methodology. During these courses, a great deal of time is taken up by the teacher talking about the language, and much of the rest of the time is devoted to activities which are supposed to develop “the 4 skills”, often in isolation. There is good reason to think that this is a hopelessly inefficient way to teach English as an L2, and yet, it goes virtually unchallenged.

Complacency

The published work of most of the influential teacher trainers demonstrates a poor grasp of what’s involved in language learning, and little appetite to discuss it. Penny Ur is a good example. In her books on how to teach English as an L2, Ur spends very little time discussing the question of how people learn an L2, or encouraging teachers to critically evaluate the theoretical assumptions which underpin her practical teaching tips. The latest edition of Ur’s widely recommended A Course in Language Teaching includes a new sub-section where precisely half a page is devoted to theories of SLA. For the rest of the 300 pages, Ur expects readers to take her word for it when she says, as if she knew, that the findings of applied linguistics research have very limited relevance to teachers’ jobs. Nowhere in any of her books, articles or presentations does Ur attempt to seriously describe and evaluate evidence and arguments from academics whose work challenges her approach, and nowhere does she encourage teachers to do so. How can we expect teachers to be well-informed, critically acute professionals in the world of education if their training is restricted to instruction in classroom skills, and their on-going professional development gives them no opportunities to consider theories of language, theories of language learning, and theories of teaching and education? Teaching English as an L2 is more art than science; there’s no “best way”, no “magic bullet”, no “one size fits all”. But while there’s still so much more to discover, we now know enough about the psychological process of language learning to know that some types of teaching are very unlikely to help, and that other types are more likely to do so. Teacher trainers have a duty to know about this stuff and to discuss it with their trainees.

Scholarly Criticism? Where?  

Reading the published work of leading ELT trainers is a depressing affair; few texts used for the purpose of training teachers to work in school or adult education demonstrate such poor scholarship as that found in Harmer’s The Practice of English Language Teaching, Ur’s A Course in Language Teaching, or Dellar and Walkley’s Teaching Lexically, for example. Why are these books so widely recommended? Where is the critical evaluation of them? Why does nobody complain about the poor argumentation and the lack of attention to research findings which affect ELT? Alas, these books typify the general “practical” nature of TD programmes in ELT, and their reluctance to engage in any kind of critical reflection on theory and practice. Go through the recommended reading for most TD courses and you’ll find few texts informed by scholarly criticism. Look at the content of TD courses and you’ll be hard pushed to find a course which includes a component devoted to a critical evaluation of research findings on language learning and ELT classroom practice.

There is a general “craft” culture in ELT which rather frowns on scholarship and seeks to promote the view that teachers have little to learn from academics. Teacher trainers are, in my opinion, partly responsible for this culture. While it’s unreasonable to expect all teachers to be well informed about research findings regarding language learning, syllabus design, assessment, and so on, it is surely entirely reasonable to expect the top teacher trainers to be so. I suggest that teacher trainers have a duty to lead discussions, informed by relevant scholarly texts, which question common sense assumptions about the English language, how people learn languages, how languages are taught, and the aims of education. Furthermore, they should do far more to encourage their trainees to constantly challenge received opinion and orthodox ELT practices. This, surely, is the best way to help teachers enjoy their jobs, be more effective, and identify the weaknesses of current ELT practice.

My intention in this blog is to point out the weaknesses I see in the works of some influential ELT teacher trainers and invite them to respond. They may, of course, respond anywhere they like, in any way they like, but the easier it is for all of us to read what they say and join in the conversation, the better. I hope this will raise awareness of the huge problem currently facing ELT: it is in the hands of those who have more interest in the commercialisation and commodification of education than in improving the real efficacy of ELT. Teacher trainers do little to halt this slide, or to defend the core principles of liberal education which Long so succinctly discusses in Chapter 4 of his book SLA and Task-Based Language Teaching.

The Questions

I invite teacher trainers to answer the following questions:

 

  1. What is your view of the English language? How do you transmit this view to teachers?
  2. How do you think people learn an L2? How do you explain language learning to teachers?
  3. What types of syllabus do you discuss with teachers? Which type do you recommend to them?
  4. What materials do you recommend?
  5. What methodological principles do you discuss with teachers? Which do you recommend to them?

 

References

Bryfonski, L., & McKay, T. H. (2017). TBLT implementation and evaluation: A meta-analysis. Language Teaching Research.

Dellar, H. and Walkley, A. (2016) Teaching Lexically. Delta Publishing.

Doughty, C. (2003) Instructed SLA. In Doughty, C. & Long, M. (Eds.) Handbook of SLA, pp. 256-310. New York, Blackwell.

Long, M. (2015) Second Language Acquisition and Task-Based Language Teaching. Oxford, Wiley.

Ur, P. A Course in Language Teaching. Cambridge, CUP.

Whong, M., Gil, K.H. and Marsden, H., (2014). Beyond paradigm: The ‘what’ and the ‘how’ of classroom research. Second Language Research, 30(4), pp.551-568.

ZhaoHong, H. and Nassaji, H. (2018) Introduction: A snapshot of thirty-five years of instructed second language acquisition. Language Teaching Research, in press.

Atkinson’s “Beyond the Brain” Essay

It is widely assumed that the cognitivist era is over in Second Language Acquisition (SLA) studies.

So begins the abstract of Dwight Atkinson’s (2019) article Beyond the Brain: Intercorporeality and Co-Operative Action for SLA Studies. Atkinson’s argument is that the cognitivist view of SLA is wrong, and that an examination of two books can help “point toward a noncognitive future for the field”. The first, by Meyer, Streeck, and Jordan (2017), “explores the consequences of being a body in a world of other such bodies, versus the cognitivist vision of disembodied mind/brain”. The second, by Goodwin (2018), “develops and empirically illustrates a theory of social action wherein heterogeneous, multimodal cultural tools and practices including language combine, accumulate, and transform in moment-to-moment use”. Both books “view human existence and action as fundamentally ‘ecosocial’—embodied, affective, and adaptive to human and nonhuman environments”.

This quick post is limited to a single argument: Atkinson misrepresents the cognitivist view adopted by scholars in SLA and fails to appreciate their motivation. In the field of SLA, psycholinguistic research treats learning an L2 as an individual process going on in the mind of the learner. Limiting the domain in this way is what makes scientific research possible. No researcher in this limited domain denies that other research – into the social domain of L2 learning, for example – has merit, or that novelists, artists and teachers might have a brighter light to shine on the still incomplete understanding we have of SLA. What the researchers believe, however, is that their scientific approach gets the best results.

Atkinson (2019, p. 725) starts by defining cognitive science.

The foundational assumption of cognitive science, according to Dawson (2013), is that “cognition is information processing” (p. 4).

He goes on:

More recently, three major developments—connectionism, cognitive neuroscience, and so-called 4E cognition—have enriched and complexified cognitive science. They have done so, I suggest here, without threatening its cognitivist core.

He then looks at the cognitivist “tradition” in SLA. He says:

When SLA studies ties itself directly to cognitive science or cognitive psychology (e.g., Ellis, 2019; Long, 2010; Suzuki, Nakata, & DeKeyser, 2019), when input, output, processing, and competence comprise default terminology in the field, or when a hard line is drawn between cognitive and social, cognitivist traditions endure.

But what justifies this tradition?  He asks:

(a) Biologically speaking, do our minds/brains not exist primarily to keep our bodies alive in dynamic environments—that is, for adaptive eco-logical action?

(b) Are human environments not furthermore pervasively social—that is, does our embodied adaptive action not depend  crucially  on social action and cooperation with others? and

(c) Is such social action/cooperation ultimately not what language and language learning are for?

And he concludes:

If these questions can be answered affirmatively, then cognition must be reconceived within dynamic ecosocial relations and action rather than as the ultimate source and outcome of human behavior, including language learning.

Expressed more rhetorically, to exorcise the cognitivist “ghost in the machine” (Ryle, 2009, p. 5) in SLA studies, should we not start putting “mind, body, and world back together again” (Clark, 1997, subtitle)?     

Atkinson thinks all three questions can indeed be answered affirmatively and, using the two books mentioned above, he offers “intercorporeality and co-operative action” as “theoretical alternatives to cognitivism”. He then suggests how the cognitivist ‘ghost in the machine’ which still blights SLA studies can be replaced by a view informed by “embodiment, affect, multimodality, adaptivity, and ecosocial action”.

Here’s a rough summary of the conclusions Atkinson reaches:

  • Affect-as-meaning deserves serious consideration when theorizing language learning and teaching.
  • Computers/ information processors are poor models for the active, ‘hyperprosocial’ (Marean, 2015), fundamentally affect-driven organisms that human learners are.
  • Learning is a product of affect, affiliation, identity, and shared meaning-making at least as much as it is of input frequency and/or conscious behavior. Thus, affect-loaded expressions—taboo words being paradigm examples—can be acquired on one or a few exposures, while plural-marking morphemes, article systems, and other formal grammatical machinery may never be.

 

All these conclusions can easily be accepted. They do almost nothing to challenge the work done by those working on a cognitivist theory of SLA – not even the second one. And if we look at the three questions Atkinson asks – “Do our minds/brains not exist primarily to keep our bodies alive in dynamic environments—that is, for adaptive eco-logical action?”, etc. – we can safely answer affirmatively without the slightest qualm.

Atkinson’s essay talks about cognitivism as a “tradition”, because his work is soaked in sociology. His appeal to the history of philosophy, for example, is socially informed and fails to appreciate the substance of the thought of those he cites. For example, nobody, but nobody, today, in the field of philosophy – and more particularly in the philosophy of science – gives any credence to a mind-body dualism: Atkinson’s use of other people’s use of Plato and Descartes is outdated nonsense. How long do we have to put up with those who challenge a scientific approach to SLA using this Ladybird summary of western philosophy, for pity’s sake?

Atkinson fails to address the use that a certain group of SLA scholars makes of cognitive science. The group, as he rightly says, includes those who fundamentally disagree about an explanation of SLA. Nativists of various hues, interactionists, and emergentists, for example, can’t all be right, and they might well all be wrong, but their theories, which are all based on a psycholinguistic, cognitivist approach, need to be criticised properly, rather than dismissed because they don’t take the right stance towards the environment. Reconceiving cognition within dynamic ecosocial relations and action might produce interesting results, and good luck to those who want to try it, but it’s wrong to suggest that those involved in developing a cognitivist theory of SLA see cognition as “the ultimate source and outcome of human behavior, including language learning”. They don’t.

As Gregg (2006) says, cognitive science sees the mind as the object of empirical scientific inquiry. Cognitive scientists ‘carve nature at its joints’ in order to categorize the domain in terms of natural categories. Cognition is located within the individual mind, and cognitive science looks for natural categories, setting aside individual differences that might accidentally differentiate members of the same category. That’s what cognitive science does, and its results are impressive and on-going.

References

Atkinson, D. (2019) Beyond the Brain: Intercorporeality and Co-Operative Action for SLA Studies. Modern Language Journal, 103(4), 724-738.

Gregg, K. R. (2006) Taking a social turn for the worse: The language socialization paradigm for second language acquisition. Second Language Research, 22(4), 413-442.

 


TEFL Equity Advocates and the Marek Kiczkowiak Academy

Following accounts by anonymous members of the Marxist TEFL Group and by Kiczkowiak himself of what I said about him, here’s my side of the story.

In November 2017, I published this short post on my old blog:

…………………………………………………………………………………………………………………………….

TEFL Equity Advocates: a conflict of interests

Every day on Twitter there are inspirational advertisements for the TEFL Equity Advocates.

They invite everybody to join in the fight against discrimination against NNESTs.

When you go to the TEFL Equity Advocates website and click on the Webinars and TEFL Equity Academy options on the home page, you see promotional stuff about training courses that Kiczkowiak runs or supervises.

The just cause of stamping out discrimination against NNESTs should, in my opinion, be rigidly separated from Kiczkowiak’s attempts to sell his own stuff.

………………………………………………………………

Forty-two comments were made.

Extracts from the Comments section

Kiczkowiak defended his actions, and I replied:

I think it’s wrong for you to use the TEFL Equity Advocates brand to promote your own teacher training courses because the former is a just cause and the latter is a private money making venture. There is the potential for you to take advantage of people’s interest in the just cause in such a way that your private training courses benefit. Note I say “the potential”. I’m not accusing you of anything, except perhaps poor judgement.  

I limit myself to saying that pages promoting your private teacher training courses appear on the Tefl Equity Advocates website and that you use the TEFL Equity Advocates logo to promote your courses on Twitter and Facebook.

In reply to Anthony Gaughan’s question about Nick Bilbrough’s Hands Up project I said:

If Nick uses sponsorship to finance his work for the charity, that’s fine, as long as there’s a proper contract that anybody donating money can see, and as long as he’s not lining his own pockets with money donated.

If Marek were to set up a charity, the same would apply. But he’s using the TEFL Equity Advocates brand to promote his own commercial, teacher training courses and he’s free to do what he likes with the money he makes from those courses. Whether or not he uses some of that money to finance his cause is not the point: it’s still unfair practice, in my opinion. He should call his academy the Marek Kiczkowiak Academy or something, and he should advertise his courses separately.

And in reply to Kiczkowiak’s request for me to explain what I meant by “conflict of interests”, I said:

In business, the issue of a conflict of interests is a common one. It arises when a person who has a public position in government or as head of a charity or non-profit making organisation also has personal interests which might benefit from his or her official actions or influence.

In this case, your position as founder of TEFL Equity Advocacy clashes with your personal position as a teacher trainer who advertises his courses on line. There is the chance that the goodwill created by the non-profit making TEFL Equity Advocacy activities will benefit your personal commercial interests. The TEFL Equity Academy, which can be accessed directly from the TEFL Equity Advocacy website, is a private business venture. Even calling it The TEFL Equity Academy seems to me to be wrongly taking advantage of a strong brand name of a non-profit making organisation.

How much would a teacher wanting to promote courses similar to the ones you offer have to pay to generate the amount of publicity and goodwill that you get from being associated with TEFL Equity Advocacy?

Russell Crew-Gee made this comment:

“Since then, it has grown to now feature a regularly updated blog, a job board, and most recently on-line teacher training courses on TEFL Equity Academy. Similarly to the rest of TEFL Equity Advocates activities, the aim of the courses is to further raise awareness of the profound ‘native speaker’ bias in ELT, and to give teachers the tools to overcoming it.”

The above quote is taken from the home page as displayed on my phone. As one can clearly see, the Academy is mentioned, as a link, on the opening page. Hence Geoff’s premise that the Academy is being promoted directly by the Equity Advocacy website is undeniable, a Factual Reality.

Near the end of the comments section, Kiczkowiak finally said “I do agree that clearing things up a bit is a good idea. Transparency is definitely the key”. Changes then appeared on his website, but I invite readers to visit it and judge for themselves whether or not Kiczkowiak is using the logo and brand of TEFL Equity Advocates to promote his own commercially-run “academy”.

The Marxist TEFL Group

The Marxist TEFL Group’s aim is to persuade EFL teachers to fight for the overthrow of capitalism and the establishment of a communist world order. They pursue their mission by writing illiterate posts on a blog, where garbled bits of Marxist dogma are mixed with personal attacks on those they identify as enemies of the working class.

Bad Writing 

Here are a few examples of the texts:

Trade unions have been treated as a residual element of society based on industrial relations of the past, to be tolerated but heavilly  constrained to ensure the dominant order functions.

We do not want to disappear too far into Marxist exegesis, ….

We wish to examine the limitations which such forms offer teachers looking to find alternative ways of exercising their trade.

the framework of liberation has been replaced by a co-option to hierachical, irrational, life-threatening and savagely cruel regime of accumulation. 

If the west didn’t need to be bombed and persecuted into accepting greater equality why would other countries and communities.

Paradoxically, the money they sent home and stories of another would also encouraged growth and stimulated language learning

This is not to say that key individuals (with their particular idiosyncrasies do not  shape history, rather that history (social relations) calls them forth and they play their particular part in their own distinctive style, and that these particular nuances can themselves help to propel history in certain directions and at certain speeds.

The writing is bad not just at the sentence level; entire sections of posts collapse into incoherence, while most posts lack the cohesion needed to make the texts easy to follow. And when, despite the awful prose, the argument is understandable, it often turns out to be wrong. Below I give just one example.

Bad Scholarship 

In a discussion of the differences between Proudhon and Marx, we read:

It is wholly understandable that Proudhon would develop a different view of the remedies towards inequality when looking at the problem from a different reality to Marx. 

What was the “different reality” from which Proudhon looked at the problem of “remedies towards inequalities”? It turns out to be the difference between the workshops that the two thinkers observed, and the way they analysed them. While Proudhon, according to the group, focused on the power which “private property (the ownership of the workshop)” gave its owners “to command the workers and live off their labour”, Marx looked at the growth of “the new factory system in Manchester” and “the changes this new system brought about, particularly a growing division between mental and labour labour, the increased division in labour of particular tasks, and the growth in supervision of those tasks”. The conclusion drawn is that

It is important therefore to distinguish between private property as a means to exploit workers and private property as the driver of this this exploitation.

First, the conclusion doesn’t follow, and second, the premises are false. “Property” in both Proudhon’s and Marx’s work is a construct referring to unpaid labour, not to the ownership of workshops. Furthermore, Proudhon identified surplus value production long before Marx, arguing that the worker is hired by the capitalist, who then appropriates their product in return for a less than equivalent amount of wages. A few years later, in 1844, Marx stated the same thing: property results from the capitalist’s appropriation of the unpaid part of the labour of the workers. Marx wrote:

Proudhon was the first to draw attention to the fact that the sum of the wages of the individual workers, even if each individual labour be paid for completely, does not pay for the collective power objectified in its product, that therefore the worker is not paid as a part of the collective labour power. (Marx & Engels, 1844; Chapter 4, Section 4.)

The accounts offered in the pages of this blog of social, economic and political history and of the works of Marx, Engels, Proudhon and others not only fail to meet elementary standards of writing, they also fail to meet elementary standards of scholarship or analysis. Any proper Marxist scholar would be embarrassed to read this stuff.

Bad Arguments 

Too often, the group base their criticism on personal attacks. For example, in a comment on their Strategy Paper Three, I remarked on their bad writing, corrected what they said I’d said about Marek Kiczkowiak and defended Scott Thornbury. The group’s (anonymous!) reply consists mostly of silly insults, starting with the suggestion that I’d got hold of the wrong end of the stick by assuming that the Strategy Paper was all about me. Next, they say:

We hear from the podcast that you believe too many people (I guess you mean working class) go to university and it should be restricted to people only as intelligent as yourself.

I said no such thing.  A bit later, they say:

 We quite understand teachers have to feed and clothe themselves …. That’s why your continued hypocrisy towards Marek Kiczkowiak is so unpleasant. Marek Kiczowiak promotes a co-authored book about teaching English as a Lingua Franca and training courses around this approach ….. and you accuse him of exploiting others.

This is a reference to a single post I wrote on my blog in 2017 pointing to a conflict of interests on Marek’s website. Marek defended his actions, made adjustments to his website and I’ve said nothing about it since. I’ve never accused him of exploiting anybody. Finally, they say:

You have worked … 25 years for an elite private business school (of course rich fee paying students are welcome to go to university but lumpens not) in Barcelona from where you flogged MA TESOL courses with the then manager. Where is your campaigning for equality? It appears you have dedicated your teaching life to promoting inequality. 

Equally childish and inaccurate are the hopelessly-written posts attacking Scott Thornbury (the distinction between “Thornbury the person” and “Thornbury the phenomenon” is particularly daft), but let’s move to the more serious issue of the posts on John Haycraft.

John Haycraft 

The group dedicates four posts to John Haycraft, co-founder of the International House chain of English language schools. While the group members have every right to be as critical as they like of Haycraft’s work, they should be ashamed of themselves for the personal attacks, insulting sneers, and pompous moral outrage which characterise these posts.

This is from Part 1:

For anyone in any doubt, Haycraft’s class position is perfectly demonstrated by the fact that, after his father’s death, he and his family were able to swan around Europe for 15 years on their father’s military pension. We can imagine few widows of British soldiers killed in Iraq or Afghanistan being afforded the same priviledge today.

After his father’s death, John Haycraft, aged 3, and his brother were brought up by their mother, on what all the obituaries refer to as “a small army pension”. The writer of the post makes no attempt to find out how much money the family lived on; all that matters is to “perfectly demonstrate” Haycraft’s “class position”. Additionally, the comment about widows of British soldiers killed in Iraq and Afghanistan is tasteless and offensive.

Here are two more extracts:

John Haycraft was part of a born-to-rule elite. While he didn’t enjoy the overt state backing of his father sent out to rule Indians through military might, or quite the imperial prestige of his Uncle Sir Thomas Haycraft, he did indeed fly the flag for the “Anglo-sphere”.

It is no happy accident that Haycraft’s brother was an important publisher and part of the art elite in the UK ……. While their position within the ruling elite is not as illustrious as their ancestors, they still had access to a wide range of elite contacts which helped them to becone the “self-made” men I am sure they both believed they were.

Again, no attempt is made to give an honest description of Haycraft’s circumstances. His life bears little resemblance to his father’s or his uncle’s, or his brother’s, but never mind, the important thing is to establish his membership of a born-to-rule elite. (1)

This sets the tone for the four posts on Haycraft, which are littered with personal jibes, mostly related to his class background, and loose accusations which are not properly researched, or fairly stated. When a link to the first of these posts was put on Twitter, a teacher who read it responded:

“Wow! What an ugly read!” 

The teacher added in a second comment:

Ugly in content, tone, intention, writing, formatting and even editing /proofreading.

Amen to that.

Note

1 The group have only good things to say about George Orwell (Eric Blair), who was also born in India, twenty years before John Haycraft. Orwell’s father worked in the Opium Department of the Indian Civil Service. His great-grandfather, Charles Blair, was a wealthy country gentleman in Dorset who married Lady Mary Fane, daughter of the Earl of Westmorland, and had income as an absentee landlord of plantations in Jamaica. The Marxist Group seem in less of a hurry, when citing Orwell, to perfectly demonstrate his class position.

Reference 

Marx, K. & Engels, F. (1844) The Holy Family. Downloadable from https://marxists.catbull.com/archive/marx/works/download/Marx_The_Holy_Family.pdf

Adrian Holliday: Who’s a Racist?

There’s something about the tension between academics involved in the separate fields of sociolinguistics and psycholinguistics that smacks of C.P. Snow’s stuff about the two cultures. The difference is that the current academic discourse of sociolinguists who adopt a postmodernist approach is often overtly moralistic. As an example, we have Adrian Holliday, who accuses his opponents of using terms like native speaker to prop up neoliberal ideology, and, most seriously of all, perhaps, of being racists. Let’s see how Holliday does this.

In the field of intercultural communication studies, Holliday explains that postmodernists favour a relativist epistemology based on constructivism. A relativist epistemology means that there’s no such thing as an observable external reality: everything we think we know is relative to our social and historical context. Constructivism involves talking about the things under investigation in terms of socially relative ‘constructions’ – see Guba & Lincoln (1994). (Note that Holliday, in his published articles, often claims that Kuhn’s work underpins his approach. This claim misrepresents Kuhn and ignores Kuhn’s repeated statements that under no circumstances should The Structure of Scientific Revolutions be seen as giving support to relativist views.)

Moving on, qualitative research and ethnography are the way to understand intercultural communication, but they must be freed from all positivist “solidity”. Breaking down barriers between the researcher and what’s being studied is a good thing because “researchers are implicated in subjectively co-constructing meaning with the people they research”. Cultures are seen as ‘socially constructed’, ‘imagined’ and ‘liquid’. In opposition to the postpositivist approach, Holliday wants to assert that “people are all the same”, just “brought up in different circumstances”, and he advocates the use of “inclusive narratives of cultural resonance” to explore how “people brought up in different circumstances” communicate with each other.

Meanwhile, the bad academics, the positivists (and now the postpositivists too, who get the brunt of Holliday’s attack), adopt a realist epistemology, which makes the outlandish assumption that a stable world exists exterior to us, and is objectively describable in terms of measurable, empirical observations. When postpositivists talk about intercultural communication, they use “grand narratives” which “fix” things in “The Centre”, and this “blocks” your ability to be in a liquid “third space”. The postpositivists use methods which attempt to sample, triangulate and objectively represent “large cultures” by means of “presumed researcher-neutral interviews and observations”. Such an approach allows for “a technicalized commodification of methods that satisfies the current needs of the neoliberal university”, but sacrifices “the epistemology of the intersubjective ethnographic project for the sake of an illusory methodological certainty”. Most importantly, perhaps, Holliday says that the postpositivist paradigm leads to “false cultural profiling” and to racism: “the essentialist Othering of large cultural groups” is racist.

In a presentation at the 2nd International Symposium on Language Learning and Global Competence (2019), Holliday explains the key term ‘essentialism’. I invite you to watch the video, starting at minute 16, when he says:

Essentialism is to do with ‘Us-Them’ discourse. ‘They’ are essentially different to ‘Us’ because of their culture…… There’s no way we can be the same. There’s something absolutely separate about us and them. …….Culture then becomes a euphemism for race. It’s essentially racist to imagine a group here and a group there who are essentially different to each other. That is the root of racism…….. Any group who is put over there and defined as being different to you, that is the basis of racism.

The question is, of course: to whom is Holliday attributing this essentialist, racist line of thought? At times, he seems to think that just about everybody – even he himself in unguarded moments – thinks like this. At minute 18 in his talk he says:

 Grand narratives define us and them by fixity and division. We are different to those people, we have to be in order to survive. .. You brand your nation as being different to those people, in a superior or inferior way.

He then gives the example of a Chinese student of his who told him he’d turned down a fantastic job in Mexico. “Why?” asked Holliday. “Because Mexico is not as good as Britain” came the reply. Holliday continues:

He had a hierarchy in his head. I challenge everybody in this room …. We all position ourselves …There’s no such thing as talking about culture in a neutral way; we just cannot. Everybody is positioning themselves in a hierarchy.

We then get another example. Holliday says:

I remember a long time ago I was in Istanbul interviewing colleagues. We were sitting right on the Bosphorus. And everything that was East was inferior, and everything that was West was superior. This came out, very very clear; very very clear. Even inside Turkey. Yuh.       

It’s worth watching Holliday as he says all this. It’s like he can hardly believe how terrible it all is himself, but it’s his duty to bring the full horror of it all to our attention. And well, yes, if it’s as bad as he says it is, then maybe we all need to go into therapy with savants like him. Maybe that’s the only way to avoid falling into the inevitable trap of talking about culture in a way that reveals you to be a racist.

But what if you snap out of Holliday’s world for a moment and allow yourself to entertain positivist thoughts about rational argumentation? What if you go even further into the badlands of positivism and ask for evidence? How does Holliday know all this stuff about what people are thinking? And who actually makes this “essentialist statement” that ‘They’ are essentially different to ‘Us’? Let’s suppose that we ask a group of applied linguistics academics, all born and raised in France, all adopting a realist epistemology, and all doing quantitative research, to watch a film about life in Beijing where the locals are shown working and playing and doing the everyday things they do. Holliday tells us that, as positivists, they will all think, consciously or not, “Those people are essentially different to us. There’s no way we can be the same. There’s something absolutely separate about us and them”. If we ask them if they had such thoughts and they vigorously deny it, then what? Should we believe them, or should we accept Holliday’s assertion that “really” that’s what they thought because as positivists they couldn’t help themselves? In other words, do we accept Holliday’s argument that they were absolutely bound to have those racist thoughts because it follows inexorably from his definition of essentialism and his equation: positivism = essentialism?

Likewise, when Holliday says we simply cannot talk about culture without positioning ourselves in a hierarchy, what is the status of that assertion? Is it supported by any evidence, or is it the result of an empty circular argument; some sort of inevitable, inescapable conclusion which follows from the premise that positivism = essentialism? To take some more examples:

  • How does Holliday know when people are imagining “a group here and a group there who are essentially different to each other”? Does he have some special antenna, like a built-in lie detector, or is it a necessary consequence of talking about “large cultures”?
  • How does he know when people put a group “over there” and define them as being different to us? Where is “over there”?
  • How does he know that his Chinese student had a hierarchy in his head?
  • How can he report with such certainty that when he was sitting there with his Turkish colleagues, all of them without exception thought, precisely, that everything to the East of Istanbul was inferior, and everything to the West was superior?

I’m struck by the conviction with which Holliday delivers his assertions. Watch him in the video as he waits for his audience to take in the full purport of what he’s saying. He seems to think that he sees everything much more clearly than normal people; in his case, thanks to looking through his polished, specially-ground  postmodernist lens.

Not only does his postmodernist approach allow Holliday to interpret our behaviour so sagaciously, it also helps him to understand that “the technicalized commodification of methods are implicit in the neoliberal agenda of the university sector”, and that, more generally, “grand narratives of nation, language, and culture are ideologically constructed. They come from politics”. He doesn’t feel the need to explain, for example, precisely how all grand narratives about language come from politics, and that’s because if we find a narrative that doesn’t obviously come from politics (all languages share a deep grammar, for example), then it’s not a grand narrative. Making things so by definition and then adding a few very personal, very meaningful anecdotes suffices to drive home necessary truths. Holliday stands on a soap box and preaches. He’s a crusader, out to save us all, and his postmodern armour protects him from silly demands to explain more carefully in plain English what he’s talking about, or to provide reliable evidence for his assertions.

This crusading conviction is nowhere more clearly demonstrated than when Holliday gets on his Native Speakerism hobby horse. There’s no such thing as a native speaker, and that’s that. All talk of native speakers, Holliday insists, is “the voice of the ideology of native speakerism”, where “culturally superior native speakers teach the non-native-speaker-cultural-others how to think and be civilised”. While people continue to “build their careers on comparing native speakers with non-native speakers without realising the politics behind it”, Holliday proudly declares that he bravely refuses to review any journal article using the acronyms NS and NNS.

As Widdowson (1995) made clear in his discussion of critical discourse analysis, Fairclough is “committed” to revealing the impositions of power and ideological influence. Holliday is likewise committed. Having shown to his own satisfaction that positivists (including people like Mike Long who are dedicated anarchists) adopt a neoliberal ideology, he sees his job as sniffing out neoliberal ideological content – including racism – in their work, exposing it, and recommending an alternative approach based on denying the possibility of objective knowledge. Holliday, the academic, is committed not just to a qualitative, ethnographic methodology, not just to a constructivist epistemology; he’s also committed to stamping out racism, any mention of native speakers, and lots more injustices besides. However laudable such an agenda might be, it limits the scope of Holliday’s descriptions to a single, preferred interpretation. Holliday sees the world from a very particular point of view. Ironically, it is not just partial, it is also ideologically committed, and thus it’s as prejudiced as any other. So, to borrow from Widdowson again, what Holliday offers is interpretation, not analysis.

It’s precisely the attempt to remain objective which characterises the agenda of those academics working in the field of applied linguistics whom Holliday so roundly dismisses. Regardless of their political opinions (and many will share Holliday’s views on the commodification of education, the exploitation of NNS teachers, and so on), these academics see quantitative data, triangulation, replication studies, etc., coupled with standards of clear, coherent, cohesive texts, as important ways to ensure that the phenomena under investigation are described and explained in such a way that the academic community can scrutinise, critique and improve those descriptions and explanations. I accept that the field of intercultural communication is one where qualitative methods and ethnographic studies might be particularly suitable. I accept that “nation” is particularly laden with ideological stuff which needs very careful handling. But I question the jargon-ridden prose Holliday writes; the circular arguments he makes; and his suggestion that any academic comparing British to Chinese culture, for example, is in great danger of giving expression to racism. Such sweeping generalisations, built on incoherent constructions like ‘essentialism’, need calling out.

References

Holliday, A. and MacDonald, M. (2019) Researching the Intercultural: Intersubjectivity and the Problem with Postpositivism. Applied Linguistics. Free to download here:  https://academic.oup.com/applij/advance-article/doi/10.1093/applin/amz006/5370651

Guba, E. G., & Lincoln, Y. S. (1994) Competing paradigms in qualitative research. In N. K. Denzin & Y.S. Lincoln (Eds.), Handbook of qualitative research (pp. 105-117). Thousand Oaks, CA: Sage.  Free download here https://eclass.uoa.gr/modules/document/file.php/PPP356/Guba%20%26%20Lincoln%201994.pdf

Widdowson, H. (1995) Discourse analysis: a critical view. Language and Literature, 3, pp. 157-172.

Positivist and Constructivist Paradigms

Guba and Lincoln’s work, including their (1994) Competing Paradigms in Qualitative Research, is now considered by many to be part of a necessary background for any discussion of (educational) research. I’ve been astonished by how many people who did MAs in TESOL and/or Applied Linguistics in the nineties and onwards were taught to regard Guba and Lincoln’s work as if it were part of the canon of the philosophy of science, rather than stuff which nobody in that field takes seriously, and which very few scientists have even heard of.  Below is another attempt to set the record straight.

Research Paradigms

Following Guba and Lincoln, Taylor and Medina (2013) explain that a “research paradigm” comprises:

  • a view of the nature of reality (i.e., ontology) – whether it is external or internal to the knower;
  • a related view of the type of knowledge that can be generated and standards for justifying it (i.e., epistemology);
  • and a disciplined approach to generating that knowledge (i.e., methodology). 

However, scholars of scientific method and the philosophy of science – Kuhn, Popper, Lakatos, Feyerabend, and Laudan, for example – don’t discuss “Research Paradigms” in this way, because they all take a realist ontology and epistemology for granted. That is, they all assume that an external world exists independently of our perceptions of it; that it is possible to study different phenomena in this world through observation and reflection, to make meaningful statements about them, and to improve our knowledge of them. Furthermore, they all agree that scientific method requires hypotheses to be tested by means of empirical observation, logic and rational argument.

So what’s all this talk of “Research Paradigms” about? According to Taylor and Medina, the most “traditional” paradigm is positivism:

Positivism is a research paradigm that is very well known and well established in universities worldwide. This ‘scientific’ research paradigm strives to investigate, confirm and predict law-like patterns of behaviour, and is commonly used in graduate research to test theories or hypotheses.

Positivism 

In fact, positivism refers to a particular form of empiricism, and is a philosophical view primarily concerned with the issue of reliable knowledge. Comte invented the term around 1830; Mach headed the second wave of positivism fifty years later, seeking to root out the “contradictory” religious elements in Comte’s work; and finally, the Vienna Circle in the 1920s (Schlick, Carnap and Gödel were key members; Russell, Whitehead and Wittgenstein were interested parties) developed a programme labelled “Logical Positivism”, which consisted of cleaning up language so as to get rid of paradoxes, and then limiting science to strictly empirical statements. Their efforts lasted less than a decade, and by the time the Second World War started, the movement had broken up in complete disarray.

It’s my own invention 

When Guba & Lincoln – and now millions of others, it seems – use the term “positivist”, they’re using a definition which has nothing to do with the positivist movements of Comte, Mach, and Carnap, but is rather a politically-motivated caricature of “the scientist”. And the “positivist paradigm” refers to a set of beliefs which constructivists like Lincoln and Guba want to attribute to the views of scientists in general. Positivism “strives to investigate, confirm and predict law-like patterns of behaviour”. Positivists work “in natural science, physical science and, to some extent, in the social sciences, especially where very large sample sizes are involved”. Positivism stresses “the objectivity of the research process”. It “mostly involves quantitative methodology, utilizing experimental methods”.

As opposed to positivism, we have various other paradigms, including post-positivism, the interpretive paradigm, and the critical paradigm. But the real alternative to the positivist paradigm is the postmodernist paradigm, or the constructivist paradigm, as Lincoln and Guba prefer to call it.

The Strong Programme

We can trace Lincoln and Guba’s constructivism back to the 1970s, when some of those working in the area of the sociology of science, taking inspiration from the “Strong Programme” developed by Barnes (1974) and Bloor (1976), changed their aim from the established one of analysing the social context in which scientists work to the far more radical, indeed audacious, one of explaining the content of scientific theories themselves. According to Barnes, Bloor and their followers, the content of scientific theories is socially determined, and there is no place whatsoever for the philosophy of science and all the epistemological problems that go with it. Since science is a social construction, it is the business of sociology to explain the social, political and ethical factors that determine why different theories are accepted or rejected.

An example of this approach in action is sociologist Ferguson’s explanation of the paradigm shift in physics which followed Einstein’s publication of his work on relativity.

The inner collapse of the bourgeois ego signalled an end to the fixity and systematic structure of the bourgeois cosmos. One privileged point of observation was replaced by a complex interaction of viewpoints.

The new relativistic viewpoint was not itself a product of scientific “advances”, but was part, rather, of a general cultural and social transformation which expressed itself in a variety of modern movements.  It was no longer conceivable that nature could be reconstructed as a logical whole.  The incompleteness, indeterminacy, and arbitrariness of the subject now reappeared in the natural world.  Nature, that is, like personal existence, makes itself known only in fragmented images.  (Ferguson, cited in Gross and Levitt, 1998: 46)

Here, Ferguson, in all apparent seriousness, suggests that Einstein’s relativity theory is to be understood not in terms of the development of a progressively more powerful theory of physics which offers an improved explanation of the phenomena in question, but rather in terms of the evolution of “bourgeois consciousness”.

Postmodernism 

The basic argument of postmodernists is that if you believe something, then it is “real”, and thus scientific knowledge is not powerful because it is true; it is true because it is powerful. The question should not be “What is true?”, but rather “How did this version of what is believed to be true come to dominate in these particular social and historical circumstances?”  Truth and knowledge are culturally specific. If we accept this argument, then we have come to the end of the modern project, and we are in a “post-modern” world.

Here are a few snippets from postmodernist texts (see Gross and Levitt, 1998, for references):

  • Everything has already happened….nothing new can occur. There is no real world (Baudrillard, 1992: 64).
  • Foucault’s study of power and its shifting patterns is a fundamental concept of postmodernism. Foucault is considered a post-modern theorist because his work upsets the conventional understanding of history as a chronology of inevitable facts and replaces it with underlayers of suppressed and unconscious knowledge in and throughout history. (Appignanesi, 1995: 45).
  • sceptical post modernists look for substitutes for method because they argue we can never really know anything (Rosenau 1993: 117).
  • Postmodern interpretation is introspective and anti-objectivist which is a form of individualized understanding. It is more a vision than data observation (Rosenau 1993: 119).
  • There is no final meaning for any particular sign, no notion of unitary sense of text, no interpretation can be regarded as superior to any other (Latour 1988: 182).

Constructivism 

Lincoln and Guba’s (1985) “constructivist paradigm” adopts an ontology and epistemology which is idealist (“what is real is a construction in the minds of individuals”), pluralist and relativist:

There are multiple, often conflicting, constructions and all (at least potentially) are meaningful.  The question of which or whether constructions are true is sociohistorically relative (Lincoln and Guba, 1985: 85).

The observer cannot be neatly disentangled from the observed in the activity of inquiring into constructions.  Constructions in turn are resident in the minds of individuals:

They do not exist outside of the persons who created and hold them; they are not part of some “objective” world that exists apart from their constructors (Lincoln and Guba, 1985: 143).

Thus constructivism is based on the principle of interaction.

The results of an enquiry are always shaped by the interaction of inquirer and inquired into which renders the distinction between ontology and epistemology obsolete: what can be known and the individual who comes to know it are fused into a coherent whole (Guba: 1990: 19).

Trying to explain how one might decide between rival constructions, Lincoln says:

Although all constructions must be considered meaningful, some are rightly labelled “malconstruction” because they are incomplete, simplistic, uninformed, internally inconsistent, or derived by an inadequate methodology.  The judgement of whether a given construction is malformed can only be made with reference to the paradigm out of which the construction operates; in other words, criteria or standards are framework-specific, so, for instance, a religious construction can only be judged adequate or inadequate utilizing the particular theological paradigm from which it is derived (Lincoln, 1990: 144).

Discussion 

There is in constructivism, as in postmodernism, an obvious attempt to throw off the blinkers of modernist rationality in order to grasp a more complex, subjective reality. Constructivists and postmodernists feel that the modern project has failed, and I have some sympathy for that view. There is a great deal of injustice in the world, and there are good grounds for thinking that a ruling minority who benefit from the way economic activity is organised are responsible for manipulating information in general, and research programmes in particular, in extremely sophisticated ways, so as to bolster and increase their power and control. To the extent that postmodernists and constructivists feel that science and its discourse are riddled with a repressive ideology, and to the extent that they feel it necessary to develop their own language and discourse to combat that ideology, they are making a political statement, as they are when they say that “Theory conceals, distorts, and obfuscates, it is alienated, disparated, dissonant, it means to exclude, order, and control rival powers” (Culler, 1982: 67). They have every right to express such views, and it is surely a good idea to encourage people to scrutinise texts and try to uncover their “hidden agendas”. Likewise, the constructivist educational programme can be welcomed as an attempt to follow the tradition of humanistic liberal education.

The constructivists obviously have a point when they say (not that they said it first) that science is a social construct. Science is certainly a social institution, and scientists’ goals, their criteria, their decisions and achievements are historically and socially influenced. And all the terms that scientists use, like “test”, “hypothesis”, “findings”, etc., are invented and given meaning through social interaction. Of course. But – and here is the crux – this does not make the results of social interaction (in this case, a scientific theory) an arbitrary consequence of it. Popper, in reply to criticisms of his naïve falsificationist position, defends the idea of objective knowledge by arguing that it is precisely through the process of mutual criticism incorporated into the institution of science that the individual shortcomings of its members are largely cancelled out.

As Bunge (1996) points out, “The only genuine social constructions are the exceedingly uncommon scientific forgeries committed by a team” (Bunge, 1996: 104). Bunge gives the example of the Piltdown man, “discovered” by two pranksters in 1912, authenticated by many experts, and unmasked as a fake in 1950. “According to the existence criterion of constructivism-relativism we should admit that the Piltdown man did exist – at least between 1912 and 1950 – just because the scientific community believed in it” (Bunge, 1996: 105).

The heart of the relativists’ confusion is the deliberate conflation of two separate issues: claims about the existence or non-existence of particular things, facts and events, and claims about how one arrives at beliefs and opinions. Whether or not the Piltdown man is a million years old is a question of fact.  What the scientific community thought about the skull it examined in 1912 is also a question of fact.  When we ask what led that community to believe in the hoax, we are looking for an explanation of a social phenomenon, and that is a separate issue.  Just because for forty years the Piltdown man was supposed to be a million years old does not make him so, however interesting the fact that so many people believed it might be.

When Guba and Lincoln say “There are multiple, often conflicting, constructions and all (at least potentially) are meaningful. The question of which or whether constructions are true is socio-historically relative”, this is a perfectly acceptable comment, as far as it goes. If Guba and Lincoln argue that the observer cannot be neatly disentangled from the observed in the activity of inquiry, then again the point can be well taken. But when they insist that constructions exist exclusively in the minds of individuals, that “they do not exist outside of the persons who created and hold them; they are not part of some “objective” world that exists apart from their constructors”, and that “what can be known and the individual who comes to know it are fused into a coherent whole”, then they have disappeared into a Humpty Dumpty world where anything can mean whatever anybody wants it to mean.

A radically relativist epistemology rules out the possibility of data collection, of empirical tests, and of any rational criterion for judging between rival explanations, and I believe those doing research and building theories should have no truck with it. Solipsism and science – like solipsism and anything else, of course – do not go well together. If postmodernists reject any understanding of time because “the modern understanding of time controls and measures individuals”, if they argue that no theory is more correct than any other, if they believe that “everything has already happened”, that “there is no real world”, and that “we can never really know anything”, then I think they should continue their “game”, as they call it, in their own way, and let those of us who prefer to work with more rationalist assumptions get on with scientific research.

References

(Citations from Taylor & Medina, and Guba & Lincoln can be found in their articles which you can download from the links above.)

Barnes, B. (1974) Scientific Knowledge and Sociological Theory. London: Routledge and Kegan Paul.

Barnes, B. and Bloor, D. (1982) Relativism, Rationalism and the Sociology of Knowledge. In Hollis, M. and Lukes, S. (eds.), Rationality and Relativism, 21-47. Oxford: Basil Blackwell.

Bloor, D. (1976) Knowledge and Social Imagery. London: Routledge and Kegan Paul.

Bunge, M. (1996) In Praise of Intolerance to Charlatanism in Academia. In Gross, P. R., Levitt, N., and Lewis, M. W. (eds.), The Flight from Science and Reason. Annals of the New York Academy of Sciences, Vol. 777, 96-116.

Culler, J. (1982) On Deconstruction: Theory and Criticism after Structuralism. Ithaca: Cornell University Press.

Gross, P. and Levitt, N. (1998) Higher Superstition. Baltimore: Johns Hopkins University Press.

Latour, B. and Woolgar, S. (1979) Laboratory Life: The Social Construction of Scientific Facts. London: Sage.

Lincoln, Y. S. and Guba, E. G. (1985) Naturalistic Inquiry. Beverly Hills, CA: Sage.

 

The value of form-focused instruction

Currently, the most popular way of teaching General English courses is to use a coursebook. General English coursebooks provide for the presentation and subsequent practice of a pre-selected list of “items” of English, including grammar, vocabulary and aspects of pronunciation. The underlying assumption is that the best way to help people learn English as an L2 is to explicitly teach these items and then practice them. This assumption is falsified by reliable evidence from SLA research.

Those bent on defending coursebook-driven ELT either ignore the evidence, or they counter by pointing to research which suggests that explicit teaching is effective. There are two problems with this counter-argument:

  1. It misrepresents research evidence by claiming that the evidence supports the way in which coursebooks deliver the explicit instruction.
  2. It cherry-picks the evidence, ignoring the growing number of recent studies which seriously challenge the reliability of conclusions drawn by earlier studies, particularly the well-known Norris and Ortega (2000) study.

Misrepresenting Evidence

A good example of misrepresentation is Jason Anderson’s article defending PPP, which I discussed here. Anderson says:

while research studies conducted between the 1970s and the 1990s cast significant doubt on the validity of more explicit, Focus on Forms-type instruction such as PPP, more recent evidence paints a significantly different picture.

But, of course,  recent research doesn’t do anything to validate the kind of focus on forms (FoFs) instruction prescribed by PPP, and no study conducted in the last 20 years provides any evidence to challenge the established view among SLA scholars, neatly summed up by Ortega (2009):

Instruction cannot affect the route of interlanguage development in any significant way.

Anderson bases his arguments on the following non-sequitur:

There is evidence to support explicit instruction, therefore there is evidence to support the “PPP paradigm”.

But, while there is certainly evidence to support explicit instruction, this evidence can’t be used to support the use of PPP in classroom-based ELT. Explicit instruction takes many forms, and PPP involves one very specific type of it – the presentation and subsequent controlled practice of a linear sequence of items of language. Anderson appeals to evidence for the effectiveness of a variety of types of explicit instruction to support the argument that PPP is efficacious across a wide range of ELT contexts. In doing so, he commits a schoolboy error in logic.

Cherry-Picking Evidence 

Moving to the second matter, the research most frequently cited to defend the kind of explicit grammar teaching done by teachers using coursebooks is the Norris and Ortega (2000) meta-analysis of the effects of L2 instruction, which found that explicit grammar instruction (FoFs) was more effective than the more discreet focus on form (FoF) approach recommended by Long, delivered through procedures like recasts.

However, Norris and Ortega themselves acknowledged, and others like Doughty (2003) reiterated, that the majority of the instruments used to measure acquisition were biased towards explicit knowledge. As they explained, if the goal of the more discreet FoF approach is for learners to develop communicative competence, then it is important to test communicative competence to determine the effects of the treatment. Explicit tests of grammar do not provide good measures of implicit and proceduralised L2 knowledge. Furthermore, the post-tests used in the studies included in the meta-analysis were not only grammar tests, they were grammar tests administered shortly after the instruction, giving no indication of its lasting effects.

This week, Steve Smith has been tweeting to remind people of a blog post he wrote on “The latest research on teaching grammar”, which summarises a chapter of a book published in 2017. In other words, Smith’s report is two years out of date, hardly warranting the claim to report “the latest” research. I should add that Smith’s comments show a depressing lack of critical acumen, coupled with an ignorance of the function of theories. Having outlined the different views of SLA scholars on the interface between declarative and procedural knowledge, Smith invites teachers to suppose that

all of these hypotheses have merits and that teaching which takes into account all three may have its merits.  

But the non-interface and strong-interface hypotheses are contradictory – they belong to theories which offer opposed explanations of the same phenomenon, so at least one of them must be false.

New Evidence

Newer meta-analyses have used much better criteria for selecting and evaluating studies. The result is that the conclusions of previous meta-analyses have been seriously challenged, and, in some cases, flatly contradicted. Below are excerpts from Mike Long’s notes summarising the most recent meta-analyses.

Sok, S., Kang, E. Y., & Han, Z-H. (2018). Thirty-five years of ISLA on form-focused instruction: A methodological synthesis. Language Teaching Research 23, 4, 403-427.

  • 88 studies (1980-2015)
  • Explicit: Instruction involved (i) rule explanation or (ii) learners being asked to attend to particular forms and reach a linguistic generalization of their own.
  • Implicit: Neither (i) nor (ii) involved
  • FonF: Form and meaning integrated.
  • FonFs: Learners’ attention directed to target features, with no attempt to integrate form and meaning.
  • FoM: No attempt to direct learners’ attention to target features.

Note: Implicit and FoM are both defined negatively, by the absence of something.

Crucial to the results of studies of form-focused instruction is the length of treatment. Most studies use very short treatments, which unfairly disadvantages Implicit, FonF and FoM conditions, as all three require more time and input. On p. 16, we learn that 21% of the studies were done in one day, 74% over two weeks or less, and 50% of sessions lasted one hour or less.

  • 65% of studies took place in a FL context, 25% in a SL context.
  • Proficiency ranges in studies: 36% Low, 34% Mid, 10% High. Short treatments with Low proficiency students favor Explicit and FonFs.
  • 46% lab, 54% classroom, 54% university students
  • No pure FoM studies, they say [but see DeVos et al, 2018, meta-analysis!]

In contrast to the Norris and Ortega (2000) study, Sok et al. (2018) found that Implicit instruction was more efficacious than Explicit instruction, and that FonF was more efficacious than FonFs.

The shift in the instructional focus of studies from Norris & Ortega (2000) to Sok et al. (2018) shows how more and more researchers (but not yet pedagogues or textbook writers) have recognised the limitations of explicit instruction and woken up to the importance of, and need for, incidental learning and implicit knowledge.
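To make the coding scheme and the grouped comparisons above more concrete, here is a minimal sketch in Python of how coded studies might be represented and compared. The study records and effect sizes are invented for illustration, and the simple unweighted averages are not the procedure used by Sok et al.; the sketch only shows the shape of the calculation.

```python
# Purely illustrative: hypothetical study records coded with the categories above
# (instruction type and instructional focus), each with an invented effect size g.
from collections import defaultdict
from statistics import mean

studies = [
    {"id": "S01", "instruction": "Explicit", "focus": "FonFs", "g": 1.10},
    {"id": "S02", "instruction": "Explicit", "focus": "FonF",  "g": 0.85},
    {"id": "S03", "instruction": "Implicit", "focus": "FonF",  "g": 1.20},
    {"id": "S04", "instruction": "Implicit", "focus": "FoM",   "g": 0.95},
]

def mean_effect_by(category):
    """Group the coded studies by one category and return the unweighted mean g per group."""
    groups = defaultdict(list)
    for study in studies:
        groups[study[category]].append(study["g"])
    return {label: mean(values) for label, values in groups.items()}

print(mean_effect_by("instruction"))  # mean g for Explicit vs. Implicit instruction
print(mean_effect_by("focus"))        # mean g for FonF vs. FonFs vs. FoM
```

A real meta-analysis would, among other things, weight each study by the precision of its estimate rather than taking a simple mean; the sketch after the Kang et al. summary below shows what the effect-size side of that calculation looks like.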

Kang, E. Y., Sok, S., & Han, Z-H. (2018). Thirty-five years of ISLA on form-focused instruction: A meta-analysis. Language Teaching Research 23, 4, 428-453.

54 studies (1980 – 2015), including 15 from Norris & Ortega (2000), and 39 new (2000 – 2015).

 Implicit instruction (g = 1.76) appeared to have a significantly longer lasting impact on learning … than explicit instruction (g = 0.77). This finding, consistent with Goo et al. (2015), was a major reversal of that of Norris and Ortega (2000).

Large effect size for instruction (g = 1.06), and also on delayed post-tests (g = .93).

75% in FL, and 25% in SL setting.

Instruction over an average of 11 days, average of two sessions and 48 minutes per session.

55% adults, 19% adolescents, 13% young learners. Average of 29 Ss (subjects) per treatment group.

32% beginners, 44% intermediates, 9% advanced learners.

Explicit (g = 1.1) = Implicit (g = 1.38) on immediate post-tests (i.e., no significant difference).

Implicit (g = 1.76) > Explicit (g = .77) on delayed post-tests (!) (p < .05) [This is the usual pattern: Implicit learning is more durable]

Using immediate post-test scores as the DV, results for moderator variables were as follows (the sketch below this list shows how effect sizes like these are computed and pooled):

  • Oral assessment measures (g = 1.03) or both oral and written measures (g = 1.02) yielded a significantly larger mean effect than studies utilizing written measures only (g = 0.73)
  • L2 proficiency was a significant moderator. Instruction had a greater effect on novice learners (g = 1.45) than intermediate (g = 0.70) and advanced learners (g = 0.88).
  • FL v. SL educational setting was not a factor. 
  • Educational context — elementary, secondary, university, language institutes (student age) — was not a significant factor
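For readers unfamiliar with the statistics quoted in these notes: g is Hedges’ g, a standardised difference between the mean scores of a treatment group and a comparison group, with a correction for small samples; on Cohen’s widely used benchmarks, values around 0.2 are small, 0.5 medium, and 0.8 or above large. Here is a minimal sketch, with invented numbers, of how g is computed for a single study and how several studies’ g values can be pooled using inverse-variance weights. This is a simplified fixed-effect pooling for illustration only, not the actual procedure used by Kang et al. or Sok et al.

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardised mean difference (Hedges' g) with the small-sample correction."""
    # Pooled standard deviation across treatment and comparison groups
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled            # Cohen's d
    correction = 1 - 3 / (4 * (n_t + n_c) - 9)   # Hedges' correction factor
    return d * correction

def g_variance(g, n_t, n_c):
    """Approximate sampling variance of g, used for inverse-variance weighting."""
    return (n_t + n_c) / (n_t * n_c) + g**2 / (2 * (n_t + n_c))

def pooled_effect(effects):
    """Simple fixed-effect pooling of (g, n_t, n_c) tuples: a weighted average of g values."""
    weights = [1 / g_variance(g, n_t, n_c) for g, n_t, n_c in effects]
    gs = [g for g, _, _ in effects]
    return sum(w * g for w, g in zip(weights, gs)) / sum(weights)

# Invented example: one study's post-test scores, then pooling three invented studies.
g1 = hedges_g(mean_t=72.0, mean_c=65.0, sd_t=10.0, sd_c=11.0, n_t=30, n_c=28)
print(round(g1, 2))  # roughly 0.66 with these invented numbers
print(round(pooled_effect([(g1, 30, 28), (1.20, 25, 25), (0.45, 40, 40)]), 2))
```

The immediate vs. delayed post-test contrasts in the notes above are, in essence, separate pooled estimates of this kind, one for tests given straight after instruction and one for tests given later.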

Conclusion 

There is general agreement among academics researching instructed L2 learning that explicit instruction can play a significant part in facilitating and accelerating the learning process. But it’s becoming increasingly clear that the type of explicit instruction which typifies coursebook-driven, PPP-based ELT is not efficacious. More and more research evidence supports the view that teachers should concentrate on scaffolding implicit learning, using explicit instruction in line with Long’s FoF model.

Stop Flying

We’re stumbling towards environmental catastrophe. One way we can help prevent this catastrophe is to appreciate the harm flying does and to commit to flying as little as possible.

In a recent post, Sandy Millin gives a list of some of the things she does to try to reduce her impact on the environment. They include some good suggestions, but they ignore the issue of flying. “I’m very aware that I fly far too much”, she says, but she says nothing more about it. It’s an issue that surely needs addressing.

I suggest that

  1. Teacher trainers / developers make every effort to avoid flying. Video-conferencing is the obvious alternative. It means changing the way courses are delivered, but it can be done.
  2. Conference organisers stop flying in plenary speakers to grandstand their events. Again, video-conferencing is the obvious answer.
  3. More local, smaller conferences should replace the huge, international events. Yes, there’s a downside, but this is an emergency.

So I urge everybody to make a commitment not to fly to any conference ever again, and to boycott any local teacher development event where some ‘expert’ is flown in from thousands of miles away to lead the event.

A commitment to reduce flying to a minimum in the ELT world would have enormous, beneficial results. Not only would it help the environment, it would also help to stimulate local initiative, and to promote local organisations and local talent.

There are tremendous opportunities as well as uncomfortable costs involved in taking drastic action to reverse the effects of climate change now. As an anarchist, I think we’d gain enormously from scaling down, focusing on our local community, organising more widely through networks, deconstructing the state. Wooops! That last bit will maybe put people off, but this is, of course, a question of politics, and I’m happy to discuss the politics involved.

We’re on the cusp. We either ignore the threat, or we act. Action involves lots of things, including all the things that Sandy Millin lists. But right at the top of the list is to change the way we think about flying.