Positivist and Constructivist Paradigms

Guba and Lincoln’s work, including their (1994) Competing Paradigms in Qualitative Research, is now considered by many to be a necessary part of the background to any discussion of (educational) research. I’ve been astonished by how many people who did MAs in TESOL and/or Applied Linguistics from the nineties onwards were taught to regard Guba and Lincoln’s work as if it were part of the canon of the philosophy of science, rather than stuff which nobody in that field takes seriously, and which very few scientists have even heard of. Below is another attempt to set the record straight.

Research Paradigms

Following Guba and Lincoln, Taylor and Medina (2013) explain that a “research paradigm” comprises

  • a view of the nature of reality (i.e., ontology) – whether it is external or internal to the knower;
  • a related view of the type of knowledge that can be generated and standards for justifying it (i.e., epistemology);
  • and a disciplined approach to generating that knowledge (i.e., methodology). 

However, scholars of scientific method and the philosophy of science – Kuhn, Popper, Lakatos, Feyerabend and Laudan, for example – don’t discuss “Research Paradigms” in this way, because they all take a realist ontology and epistemology for granted. That is, they all assume that an external world exists independently of our perceptions of it; that it is possible to study different phenomena in this world through observation and reflection, to make meaningful statements about them, and to improve our knowledge of them. Furthermore, they all agree that scientific method requires hypotheses to be tested by means of empirical observation, logic and rational argument.

So what’s all this talk of “Research Paradigms” about? According to Taylor and Medina, the most “traditional” paradigm is positivism:

Positivism is a research paradigm that is very well known and well established in universities worldwide. This ‘scientific’ research paradigm strives to investigate, confirm and predict law-like patterns of behaviour, and is commonly used in graduate research to test theories or hypotheses.

Positivism 

In fact, positivism refers to a particular form of empiricism, and is a philosophical view primarily concerned with the issue of reliable knowledge. Comte invented the term around 1830; Mach headed the second wave of positivism fifty years later, seeking to root out the “contradictory” religious elements in Comte’s work; and finally, the Vienna Circle in the 1920s (Schlick and Carnap were key members, Gödel attended its meetings, and Russell, Whitehead and Wittgenstein were interested parties) developed a programme labelled “Logical Positivism”, which consisted of cleaning up language so as to get rid of paradoxes, and then limiting science to strictly empirical statements. Their efforts lasted less than a decade, and by the time the Second World War started, the movement had broken up in complete disarray.

It’s my own invention 

When Guba & Lincoln – and now millions of others, it seems – use the term “positivist”, they’re using a definition which has nothing to do with the positivist movements of Comte, Mach, and Carnap, but is rather a politically-motivated caricature of “the scientist”. And the “positivist paradigm” refers to a set of beliefs, etc., which constructivists like Lincoln and Guba want to attribute to the views of scientists in general. Positivism “strives to investigate, confirm and predict law-like patterns of behaviour”. Positivists work “in natural science, physical science and, to some extent, in the social sciences, especially where very large sample sizes are involved”. Positivism stresses “the objectivity of the research process”. It “mostly involves quantitative methodology, utilizing experimental methods”.

As opposed to positivism, we have various other paradigms, including post-positivism, the interpretive paradigm, and the critical paradigm. But the real alternative to the positivist paradigm is the postmodernist paradigm, or the constructivist paradigm as Lincoln and Guba prefer to call it.

The Strong Programme

We can trace Lincoln and Guba’s constructivism back to the 1970s, when some of those working in the area of the sociology of science, taking inspiration from the “Strong Programme” developed by Barnes (1974) and Bloor (1976), changed their aim from the established one of analysing the social context in which scientists work to the far more radical, indeed audacious, one of explaining the content of scientific theories themselves. According to Barnes, Bloor and their followers, the content of scientific theories is socially determined, and there is no place whatsoever for the philosophy of science and all the epistemological problems that go with it.  Since science is a social construction, it is the business of sociology to explain the social, political and ethical factors that determine why different theories are accepted or rejected.

An example of this approach in action is sociologist Ferguson’s explanation of the paradigm shift in physics which followed Einstein’s publication of his work on relativity.

The inner collapse of the bourgeois ego signalled an end to the fixity and systematic structure of the bourgeois cosmos. One privileged point of observation was replaced by a complex interaction of viewpoints.

The new relativistic viewpoint was not itself a product of scientific “advances”, but was part, rather, of a general cultural and social transformation which expressed itself in a variety of modern movements.  It was no longer conceivable that nature could be reconstructed as a logical whole.  The incompleteness, indeterminacy, and arbitrariness of the subject now reappeared in the natural world.  Nature, that is, like personal existence, makes itself known only in fragmented images.  (Ferguson, cited in Gross and Levitt, 1998: 46)

Here, Ferguson, in all apparent seriousness, suggests that Einstein’s relativity theory is to be understood not in terms of the development of a progressively more powerful theory of physics which offers an improved explanation of the phenomena in question, but rather in terms of the evolution of “bourgeois consciousness”.

Postmodernism 

The basic argument of postmodernists is that if you believe something, then it is “real”, and thus scientific knowledge is not powerful because it is true; it is true because it is powerful. The question should not be “What is true?”, but rather “How did this version of what is believed to be true come to dominate in these particular social and historical circumstances?”  Truth and knowledge are culturally specific. If we accept this argument, then we have come to the end of the modern project, and we are in a “post-modern” world.

Here are a few snippets from postmodernist texts (see Gross and Levitt, 1998, for references):

  • Everything has already happened….nothing new can occur. There is no real world (Baudrillard, 1992: 64).
  • Foucault’s study of power and its shifting patterns is a fundamental concept of postmodernism. Foucault is considered a post-modern theorist because his work upsets the conventional understanding of history as a chronology of inevitable facts and replaces it with underlayers of suppressed and unconscious knowledge in and throughout history. (Appignanesi, 1995: 45).
  • sceptical post modernists look for substitutes for method because they argue we can never really know anything (Rosenau 1993: 117).
  • Postmodern interpretation is introspective and anti-objectivist which is a form of individualized understanding. It is more a vision than data observation (Rosenau 1993: 119).
  • There is no final meaning for any particular sign, no notion of unitary sense of text, no interpretation can be regarded as superior to any other (Latour 1988: 182).

Constructivism 

Lincoln and Guba’s (1985) “constructivist paradigm” adopts an ontology & epistemology which is idealist (“what is real is a construction in the minds of individuals”), pluralist and relativist:

There are multiple, often conflicting, constructions and all (at least potentially) are meaningful.  The question of which or whether constructions are true is sociohistorically relative (Lincoln and Guba, 1985: 85).

The observer cannot be neatly disentangled from the observed in the activity of inquiring into constructions.  Constructions in turn are resident in the minds of individuals:

They do not exist outside of the persons who created and hold them; they are not part of some “objective” world that exists apart from their constructors (Lincoln and Guba, 1985: 143).

Thus constructivism is based on the principle of interaction.

The results of an enquiry are always shaped by the interaction of inquirer and inquired into which renders the distinction between ontology and epistemology obsolete: what can be known and the individual who comes to know it are fused into a coherent whole (Guba, 1990: 19).

Trying to explain how one might decide between rival constructions, Lincoln says:

Although all constructions must be considered meaningful, some are rightly labelled “malconstruction” because they are incomplete, simplistic, uninformed, internally inconsistent, or derived by an inadequate methodology.  The judgement of whether a given construction is malformed can only be made with reference to the paradigm out of which the construction operates; in other words, criteria or standards are framework-specific, so, for instance, a religious construction can only be judged adequate or inadequate utilizing the particular theological paradigm from which it is derived (Lincoln, 1990: 144).

Discussion 

There is in constructivism, as in postmodernism, an obvious attempt to throw off the blinkers of modernist rationality, in order to grasp a more complex, subjective reality.  They feel that the modern project has failed, and I have some sympathy for that view. There is a great deal of injustice in the world, and there are good grounds for thinking that a ruling minority who benefit from the way economic activity is organised are responsible for manipulating information in general, and research programmes in particular, in extremely sophisticated ways, so as to bolster and increase their power and control. To the extent that postmodernists and constructivists feel that science and its discourse are riddled with a repressive ideology, and to the extent that they feel it necessary to develop their own language and discourse to combat that ideology, they are making a political statement, as they are when they say that “Theory conceals, distorts, and obfuscates, it is alienated, disparated, dissonant, it means to exclude, order, and control rival powers” (Culler, 1982: 67).  They have every right to express such views, and it is surely a good idea to encourage people to scrutinise texts, to try to uncover their “hidden agendas”.  Likewise the constructivist educational programme can be welcomed as an attempt to follow the tradition of humanistic liberal education.

The constructivists obviously have a point when they say (not that they said it first) that science is a social construct. Science is certainly a social institution, and scientists’ goals, their criteria, their decisions and achievements are historically and socially influenced.  And all the terms that scientists use, like “test”, “hypothesis”, “findings”, etc., are invented and given meaning through social interaction.  Of course.  But – and here is the crux – this does not make the results of social interaction (in this case, a scientific theory) an arbitrary consequence of it.  Popper, in reply to criticisms of his naïve falsification position, defends the idea of objective knowledge by arguing that it is precisely through the process of mutual criticism incorporated into the institution of science that the individual shortcomings of its members are largely cancelled out.

As Bunge (1996) points out, “The only genuine social constructions are the exceedingly uncommon scientific forgeries committed by a team” (Bunge, 1996: 104). Bunge gives the example of the Piltdown man that was “discovered” by two pranksters in 1912, authenticated by many experts, and unmasked as a fake decades later.  “According to the existence criterion of constructivism-relativism we should admit that the Piltdown man did exist – at least between 1912 and 1950 – just because the scientific community believed in it” (Bunge, 1996: 105).

The heart of the relativists’ confusion is the deliberate conflation of two separate issues: claims about the existence or non-existence of particular things, facts and events, and claims about how one arrives at beliefs and opinions. Whether or not the Piltdown man is a million years old is a question of fact.  What the scientific community thought about the skull it examined in 1912 is also a question of fact.  When we ask what led that community to believe in the hoax, we are looking for an explanation of a social phenomenon, and that is a separate issue.  Just because for forty years the Piltdown man was supposed to be a million years old does not make him so, however interesting the fact that so many people believed it might be.

Guba and Lincoln say “There are multiple, often conflicting, constructions and all (at least potentially) are meaningful. The question of which or whether constructions are true is socio-historically relative.” This is a perfectly acceptable comment, as far as it goes.  If Guba and Lincoln argue that the observer cannot be neatly disentangled from the observed in the activity of inquiry, then again the point can be well taken.  But when they insist that constructions are exclusively in the minds of individuals, that “they do not exist outside of the persons who created and hold them; they are not part of some “objective” world that exists apart from their constructors”, and that “what can be known and the individual who comes to know it are fused into a coherent whole”, then they have disappeared into a Humpty Dumpty world where anything can mean whatever anybody wants it to mean.

A radically relativist epistemology rules out the possibility of data collection, of empirical tests, of any rational criterion for judging between rival explanations, and I believe those doing research and building theories should have no truck with it. Solipsism and science – like solipsism and anything else, of course – do not go well together. If postmodernists reject any understanding of time because “the modern understanding of time controls and measures individuals”, if they argue that no theory is more correct than any other, if they believe that “everything has already happened”, that “there is no real world”, that “we can never really know anything”, then I think they should continue their “game”, as they call it, in their own way, and let those of us who prefer to work with more rationalist assumptions get on with scientific research.

References

(Citations from Taylor and Medina, and from Guba and Lincoln, can be found in their respective articles.)

Barnes, B. (1974) Scientific Knowledge and Sociological Theory.  London: Routledge and Kegan Paul.

Barnes, B. and Bloor, D. (1982) Relativism, Rationalism, and the Sociology of  Science. In Hollis, M. and Lukes, S.  Rationality and Relativism,  21-47.  Oxford: Basil Blackwell.

Bloor, D. (1976) Knowledge and Social Imagery.  London: Routledge and Kegan Paul.

Bunge, M. (1996) In Praise of Intolerance to Charlatanism in Academia. In Gross, P., Levitt, N. and Lewis, M. (eds.) The Flight From Science and Reason. Annals of the New York Academy of Sciences, Vol. 777, 96-116.

Culler, J. (1982) On Deconstruction: Theory and Criticism after Structuralism.  Ithaca: Cornell University Press.

Gross, P. and Levitt, N. (1998) Higher Superstition. Baltimore: Johns Hopkins University Press.

Latour, B. and Woolgar, S. (1979) Laboratory Life: The Social Construction of Scientific Facts.  London: Sage.

Lincoln, Y. S. and Guba, E. G. (1985) Naturalistic Inquiry. Beverly Hills, CA: Sage.

 

The value of form-focused instruction

Currently, the most popular way of teaching courses of General English is to use a coursebook. General English coursebooks provide for the presentation and subsequent practice of a pre-selected list of “items” of English, including grammar, vocabulary and aspects of pronunciation. The underlying assumption is that the best way to help people learn English as an L2 is to explicitly teach these items and then practice them. This assumption is falsified by reliable evidence from SLA research.

Those bent on defending coursebook-driven ELT either ignore the evidence, or they counter by pointing to research which suggests that explicit teaching is effective. There are two problems with such a counter argument:

  1. It misrepresents research evidence by claiming that the evidence supports the way in which coursebooks deliver the explicit instruction.
  2. It cherry-picks the evidence, ignoring the increasing amount of evidence from recent studies which seriously challenges the reliability of conclusions drawn by previous studies, particularly the well-known Norris and Ortega (2000) study.

Misrepresenting Evidence

A good example of misrepresentation is Jason Anderson’s article defending PPP, which I discussed here. Anderson says:

while research studies conducted between the 1970s and the 1990s cast significant doubt on the validity of more explicit, Focus on Forms-type instruction such as PPP, more recent evidence paints a significantly different picture.

But, of course,  recent research doesn’t do anything to validate the kind of focus on forms (FoFs) instruction prescribed by PPP, and no study conducted in the last 20 years provides any evidence to challenge the established view among SLA scholars, neatly summed up by Ortega (2009):

Instruction cannot affect the route of interlanguage development in any significant way.

Anderson bases his arguments on the following non-sequitur:

There is evidence to support explicit instruction, therefore there is evidence to support the “PPP paradigm”.

But, while there is certainly evidence to support explicit instruction, this evidence can’t be used to support the use of PPP in classroom-based ELT. Explicit instruction takes many forms, and PPP involves one very specific type of it – the presentation and subsequent controlled practice of a linear sequence of items of language. Anderson appeals to evidence for the effectiveness of a variety of types of explicit instruction to support the argument that PPP is efficacious across a wide range of ELT contexts. In doing so, he commits a schoolboy error in logic.
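
To make the slip explicit (the formalisation here is mine, not Anderson’s), the inference has this shape:

    \exists x\,[\text{ExplicitInstruction}(x) \wedge \text{Effective}(x)] \;\not\Rightarrow\; \text{Effective}(\text{PPP})

The premise quantifies over a whole class of treatments; the conclusion picks out one particular member of that class. Evidence that some form of explicit instruction works tells us nothing, by itself, about whether this particular form works – for that, you’d need evidence about PPP itself.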

Cherry-Picking Evidence 

Moving to the second matter, the research which is most frequently cited to defend the kind of explicit grammar teaching done by teachers using coursebooks is the Norris and Ortega (2000) meta-analysis of the effects of L2 instruction, which found that explicit grammar instruction (FoFs) was more effective than Long’s recommended, more discreet focus on form (FoF) approach, delivered through procedures like recasts.

However, Norris and Ortega themselves acknowledged, and others like Doughty (2003) reiterated, that the majority of the instruments used to measure acquisition were biased towards explicit knowledge. As they explained, if the goal of the discreet FoF approach is for learners to develop communicative competence, then it is important to test communicative competence to determine the effects of the treatment. Consequently, explicit tests of grammar don’t provide the best measures of implicit and proceduralised L2 knowledge. Furthermore, the post-tests used in the studies in the meta-analysis were not only grammar tests, they were grammar tests done shortly after the instruction, giving no indication of the lasting effects of this instruction.

This week, Steve Smith has been tweeting to remind people of a blog post he wrote on “The latest research on teaching grammar”, which gives a summary of a chapter in a book written in 2017. In other words, Smith’s report is two years out of date, thus hardly warranting the claim to report “the latest” research. I should add that Smith’s comments show a depressing lack of critical acumen, coupled with an ignorance of the function of theories. Having outlined the different views of SLA scholars on the interface between declarative and procedural knowledge, Smith invites teachers to suppose that

all of these hypotheses have merits and that teaching which takes into account all three may have its merits.  

But the non-interface and strong-interface hypotheses are contradictory – they belong to theories which provide opposed explanations of the same phenomenon, and at least one of them must therefore be false.

New Evidence

Newer meta-analyses have used much better criteria for selecting and evaluating studies. The result is that the conclusions of previous meta-analyses have been seriously challenged, and, in some cases, flatly contradicted. Below are excerpts from Mike Long’s notes summarising the most recent meta-analyses.

Sok, S., Kang, E. Y., & Han, Z-H. (2018). Thirty-five years of ISLA on form-focused instruction: A methodological synthesis. Language Teaching Research 23, 4, 403-427.

  • 88 studies (1980-2015)
  • Explicit: Instruction involved (i) rule explanation or (ii) learners being asked to attend to particular forms and reach a linguistic generalization of their own.
  • Implicit: Neither (i) nor (ii) involved
  • FonF: Form and meaning integrated.
  • FonFs: Learners’ attention directed to target features, with no attempt to integrate form and meaning.
  • FoM: No attempt to direct learners’ attention to target features.

Note: Implicit and FoM are both defined negatively, by the absence of something.

Crucial to the results of studies of form-focused instruction is the length of the treatments. Most studies use very short treatments, which unfairly weakens Implicit, FonF and FoM, as all three require more time and input. On p. 16, we learn that 21% of the studies were done in one day, 74% over two weeks or less, and that 50% of sessions lasted one hour or less.

  • 65% of studies took place in a FL context, 25% in a SL context.
  • Proficiency ranges in studies: 36% Low, 34% Mid, 10% High. Short treatments with Low proficiency students favour Explicit and FonFs.
  • 46% lab, 54% classroom, 54% university students
  • No pure FoM studies, they say [but see DeVos et al, 2018, meta-analysis!]

In contrast to the Norris and Ortega (2000) study,  Sok et al (2018) found that Implicit instruction was more efficacious than explicit instruction, and that FonF was more efficacious than FonFs.

The shift in the instructional focus of studies from Norris & Ortega (2000) to Sok et al (2018) shows how more and more researchers (but not yet pedagogues or textbook writers) have recognised the limitations of explicit instruction and woken up to the importance of, and need for, incidental learning and implicit knowledge.

Kang, E. Y., Sok, S., & Han, Z-H. (2018). Thirty-five years of ISLA on form-focused instruction: A meta-analysis. Language Teaching Research 23, 4, 428-453.

54 studies (1980 – 2015), including 15 from Norris & Ortega (2000), and 39 new (2000 – 2015).

 Implicit instruction (g = 1.76) appeared to have a significantly longer lasting impact on learning … than explicit instruction (g = 0.77). This finding, consistent with Goo et al. (2015), was a major reversal of that of Norris and Ortega (2000).

Large effect size for instruction (g = 1.06), and also on delayed post-tests (g = .93).

75% in FL, and 25% in SL setting.

Instruction over an average of 11 days, average of two sessions and 48 minutes per session.

55% adults, 19% adolescents, 13% young learners. Average of 29 SS per treatment group.

32% beginners, 44% intermediates, 9% advanced learners.

Explicit (g = 1.1) = Implicit (g = 1.38) on immediate post-tests.

Implicit (g = 1.76) > Explicit (g = .77) on delayed post-tests (!) (p < .05) [This is the usual pattern: Implicit learning is more durable]
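
(A note on the statistic, for readers unfamiliar with it: g here is Hedges’ g, a standardised mean difference. In its simplest between-groups form,

    g = J \cdot \frac{\bar{X}_{T} - \bar{X}_{C}}{SD_{pooled}}, \qquad SD_{pooled} = \sqrt{\frac{(n_T - 1)s_T^2 + (n_C - 1)s_C^2}{n_T + n_C - 2}}, \qquad J = 1 - \frac{3}{4(n_T + n_C - 2) - 1}

where T and C are the treatment and comparison groups and J corrects for small-sample bias; meta-analysts use variants of this formula for pre-post designs. On Cohen’s rough benchmarks – 0.2 small, 0.5 medium, 0.8 large – both 1.76 and 0.77 count as large effects; what matters here is the gap between them on the delayed post-tests.)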

Using immediate post-test scores as the DV, results for moderator variables were:

  • Oral assessment measures (g = 1.03) or both oral and written measures (g = 1.02) yielded a significantly larger mean effect than studies utilizing written measures only (g = 0.73)
  • L2 proficiency was a significant moderator. Instruction had a greater effect on novice learners (g = 1.45) than intermediate (g = 0.70) and advanced learners (g = 0.88).
  • FL v. SL educational setting was not a factor. 
  • Educational context — elementary, secondary, university, language institutes (student age) — was not a significant factor

Conclusion 

There is general agreement among academics researching instructed L2 learning that explicit instruction can play a significant part in facilitating and accelerating the learning process. But it’s becoming increasingly clear that the type of explicit instruction which typifies a PPP approach to ELT, delivered through coursebook-driven ELT, is not efficacious. More and more research evidence supports the view that teachers should concentrate on scaffolding implicit learning, using explicit instruction in line with Long’s FoF model.

Stop Flying

We’re stumbling towards environmental catastrophe. One way we can help prevent this catastrophe is to appreciate the harm flying does and to commit to flying as little as possible.

In a recent post, Sandy Millin gives a list of some of the things she does to try to reduce her impact on the environment. They include some good suggestions, but they ignore the issue of flying. “I’m very aware that I fly far too much,” she says, but she says nothing more about it. It’s an issue that surely needs addressing.

I suggest that

  1. Teacher trainer / developers make every effort to avoid flying. Video-conferencing is the obvious alternative. It means changing the way the courses are delivered, but it can be done.
  2. Conference organisers stop flying in plenary speakers to grandstand their events. Again, video-conferencing is the obvious answer.
  3. More local, smaller conferences should replace the huge, international events. Yes, there’s a downside, but this is an emergency.

So I urge everybody to make a commitment not to fly to any conference ever again, and to boycott any local teacher development event where some ‘expert’ is flown in from thousands of miles away to lead the event.

A commitment to reduce flying to a minimum in the ELT world would have enormous, beneficial results. Not only would it help the environment, it would also help to stimulate local initiative, and to promote local organisations and local talent.

There are tremendous opportunities as well as uncomfortable costs involved in taking drastic action to reverse the effects of climate change now. As an anarchist, I think we’d gain enormously from scaling down, focusing on our local community, organising more widely through networks, deconstructing the state. Wooops! That last bit will maybe put people off, but this is, of course, a question of politics, and I’m happy to discuss the politics involved.

We’re on the cusp. We either ignore the threat, or we act. Action involves lots of things, including all the things that Sandy Millin lists. But right at the top of the list is to change the way we think about flying.

 

SLB: Task-Based Language Teaching Course No. 2

What is it?

It’s an on-line course about Mike Long’s version of TBLT, consisting of twelve two-week sessions. In the course, we

  • explain the theory behind it;
  • describe and critique Long’s TBLT;
  • develop lighter versions for adoption in more restricted circumstances;
  • trace the steps of designing a TBLT syllabus;
  • show you how to implement and evaluate TBLT in the classroom.

When is it? 

It starts on November 7th and finishes on April 9th 2020.

What are the components of the Sessions?

  • Carefully selected background reading
  • A video presentation from the session tutor
  • Interactive exercises to explore key concepts
  • A forum discussion with your tutor and fellow course participants
  • A 1-hour group videoconference with your tutor
  • An assessed task (e.g. short essay, presentation, task analysis etc.)

Who are the tutors?

Neil McMillan and I do most of the tutoring, but there will also be tutorials by Roger Gilabert, Mike Long and Glenn Fulcher.

How much work is involved?

About 5 hours a week.

Why should I do the course?

1. To change. Progress involves change, and depends on a better, deeper understanding of the situation where change is needed.

2. To improve your teaching. Evidence shows that using a General English coursebook is not an efficacious way of helping students to achieve communicative competence: teachers spend too much time talking about the language and students spend too little time talking in the language. TBLT is based on helping students to use the L2 for their communicative needs, by involving them in relevant, meaningful tasks, scaffolding their learning and giving them the help they need, when they need it. This course will explain TBLT and show you how to adapt it to your particular situation.

3. To improve your CV. You’ll have greater range as a teacher. If you’re involved in, or want to be involved in, teacher training / development, course design, materials design, or assessment, this course will help you advance.

Why is there so much resistance to real change?

Because, by definition, change threatens the status quo. In ELT, the way things are suits those who run the show – it’s convenient and marketable. Language is elusive, ambiguous, volatile; and language learning is a complex, dynamic, non-linear process.  In order to be packaged and sold, language is cut up into hundreds of neat and tidy items, which Scott Thornbury calls ‘McNuggets’, and language learning is reduced to a linear process of accumulating these items. Students buy courses of English, where they learn about and practice a certain batch of items organised in a coursebook. Their proficiency is assessed according to their knowledge of these items. The knowledge learned is referred to in “can do” statements, which are used to plot students’ progress along the CEFR line from level A1 to level C2.  The levels are reified, i.e., treated as if they were real (which they are not), and as if they reflected communicative competence (which they do not). But it looks OK, if you don’t look too closely, and there are very powerful commercial interests promoting it.

What is TBLT?

There are different versions of TBLT, including “task-supported” and “hybrid” versions. They all emphasise the importance of students working through communicative activities rather than the pages of a coursebook, but we think the best is Mike Long’s version, which identifies “target tasks” – tasks which the students will actually have to carry out in their professional or private lives – and breaks them down into a series of ‘pedagogic tasks’ which form the syllabus. In the course, we consider how to identify target tasks, how to break these down into pedagogic tasks, how to find suitable materials, and how to bring all this together using the most appropriate pedagogic procedures.

 The course sounds very demanding.

We’ve extended the length of the course, so now you’ll be expected to dedicate between 4 and 6 hours a week to it. The reading is non-technical, the video presentations are clear, participation in on-line discussions is very relaxed, and the written component is practical and short.

Is there a Preview?  

Yes. Click on this link to see Session 1 

 

…. and more information? 

Click here: TBLT November 2019


I’m done

I’ve decided to make no further posts on this blog and to stop tweeting.

I have a special regard for elephants, the result of three recent visits to Mfuwe Lodge in Zambia, where I saw them up so close that at times I could have reached up and touched them. One evening, as the sun set, turning a washed-out blue sky red, then orange, I watched a herd of 100 elephants pass by, making their way towards a destination known perfectly by their matriarchal boss, along the vast African landscape that makes Texan vistas look tiny. They made a magnificent, majestic, awesome sight, gracefully moving, flowing along, trunks swaying, little ones shooed on, the biggest ones stopping now and then, all that steady, purposeful mightiness strolling, rolling so silently on, a collective tribute to beauty and to the power of life. Another time I looked into the eyes of a very big elephant who came close to the Land Rover I was in. He could have flicked the car up into the air as if it were a matchstick. I saw his eyelashes blink, I saw his cloudy brown eyes looking at me. An elephant, with a history of 80 million years of existence on this planet, was looking at me, a member of a species with a history of maybe 50,000 years. I thought: thanks to the likes of me, you, you utterly wonderful, beautiful thing, you’re toast.

Unless drastic action is taken, elephants will be extinct by 2030, by which time the tipping point in climate change will also have been passed and havoc will ensue. Talk of anything else seems trivial. But maybe, just maybe, technology will rescue us. I hope so, and if it does, it will be because reason gets the better of bullshit. For that to happen, there has to be a revolution which overthrows the rule of those who currently hold power. The tiny minority who currently fuck the world up in order to maintain their power are easily identifiable and their power can be stopped tomorrow by mass action against them. The action must be local, everywhere, and free of leaders. Start now – organise locally in cooperatives and let Kropotkin give a start to your thinking about the best way to organise.

As for the world of ELT, I’m old and I’ve better things to do with the time that remains than blog and tweet. The game isn’t worth the candle.

Broken Telephone

The latest spate of conferences has seen a lot of hand-waving in favour of “evidence-based” ELT. Among leading figures there’s a growing consensus that, like motherhood and being kind to animals, evidence-based teaching is a “good thing”. No need to get excited, though, because,

  1. All the evidence that challenges current ELT practice is still routinely ignored.
  2. Even the evidence that’s being aired is quickly adulterated.

Carol Lethaby’s talk at the recent Pavilion ELT Live! conference is a case in point. In this post I want to suggest that it’s the second link in an ongoing chain in the “Broken Telephone” game, where scholarly respect for the careful presentation and use of evidence is slowly but surely abandoned.

Original Text 

The story starts in 2013, with the publication of a paper by Dunlosky et al. This paper reports on work done by cognitive and educational psychologists, and its aim is to evaluate findings of studies on 10 learning techniques. It’s important to note that this is not about L2 language learning (SLA) research, or instructed SLA research; it’s a meta-analysis of research into the explicit learning strategies used by students of content-based subjects across the curriculum.

The paper divided the 10 techniques into 3 categories, as follows:

Techniques with “low utility” (little or no evidence supporting their efficacy)

  • highlighting and underlining content as you read;
  • writing a summary of a to-be-learned text;
  • visualising what you are reading in order to learn it;
  • using mnemonic devices such as the keyword technique to learn vocabulary;
  • re-reading a text to learn its contents.

Techniques with “high utility”

At the other extreme, distributed practice and practice testing are well known to be efficacious in certain types of explicit learning, and this is confirmed by the authors.
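
To make “distributed practice” concrete: it means spacing encounters with the same material over time, typically at expanding intervals, rather than massing them into one session. Here is a minimal sketch of such a schedule in Python – purely illustrative, since Dunlosky et al confirm the benefit of spacing as such but prescribe no particular interval function; the doubling rule and the function name are my own invention.

    from datetime import date, timedelta

    def expanding_schedule(first_study, first_gap_days=1, reviews=5):
        # Return review dates whose gaps double after each review
        # (e.g. 1, 2, 4, 8, 16 days). Illustrative only: the research
        # supports spacing as such, not this specific interval function.
        dates = []
        gap = first_gap_days
        current = first_study
        for _ in range(reviews):
            current += timedelta(days=gap)
            dates.append(current)
            gap *= 2
        return dates

    # An item first studied on 1 March 2019 would be revisited on:
    for d in expanding_schedule(date(2019, 3, 1)):
        print(d.isoformat())  # 2019-03-02, 2019-03-04, 2019-03-08, ...

Practice testing fits naturally on top of a schedule like this: each review date is an occasion to retrieve the item from memory rather than simply re-read it.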

Techniques with “moderate utility”

  • elaborative interrogation,
  • self-explanation,
  • interleaved practice.

These got “moderate utility” assessments because the evidence is far too limited to warrant any firm claims for them. As the authors say,

these techniques have not been adequately evaluated in educational contexts.

adding that

the benefits of interleaving have only just begun to be systematically explored.

The conclusion is that

the ultimate effectiveness of these techniques is currently unknown.

First Round  

Fast forward to the 2018 IATEFL conference, where Patricia Harries and Carol Lethaby gave a talk called

Use your brain! Exploiting evidence-based teaching strategies in ELT.

A summary of the talk, by the authors, is included in the Conference Selection publication, edited by Tania Patterson. Having reminded everybody that there’s no evidence to support claims about learning styles (neuro-linguistic programming – NLP) or about putative left and right hemisphere learners, Harries and Lethaby summarised the findings of Dunlosky et al’s 2013 paper. The important thing for my argument is how they presented the findings on the 3 strategies which got a “moderate utility” evaluation.

Elaborative interrogation: Learners explain things to themselves as they learn, for example, saying why a particular fact is true. The technique is thought to enhance learning by helping to integrate new and existing knowledge. The Dunlosky et al (2013) paper stresses that there’s little evidence so far, and that more research is required.

Self-explanation: This is another questioning strategy. Here, students use questions to monitor and explain features of their own learning to themselves. Again, Dunlosky et al’s paper stresses that further research is needed before we can say there’s a good evidence base.

Interleaved practice: Interleaved practice involves mixing up different practice activities in a single session, and has been found to be effective in problem solving and categorisation. Yet again, more research is needed before any firm conclusions about its efficacy can be drawn.

Implications for ELT

When they went on to discuss the implications of these findings for ELT, Harries and Lethaby said that there were a number of ways that these techniques could be adapted. And here, I think, is where the limitations of the evidence start to get lost. In this, the eagerly awaited climax of the talk, the presenters slip away from “What if..?” to “This is what we can do now”. Particularly enthusiastic is their endorsement of the use of “prior knowledge” (see van Kesteren et al, 2014) to design pre-tasks in receptive skills development, to plan spiral syllabuses, and to exploit L1 and L2 previous knowledge in vocab learning. This, despite the fact that research into the acquisition of new knowledge which might be integrated into pre-existing conceptual schemas has so far led to no firm findings. The presenters also talk up incorporating elaborative interrogation into teaching grammatical rules and structures, and using self-explanation to ask learners about how they found their answers in language and skills tasks. They conclude:

By spacing the practice of language items and structures using increasingly greater time intervals and mixing up language practice and skills development, distributed and interleaved practice can help integrate elements, build automaticity and aid memorisation.

All of these claims can give the impression that they’re supported by strong and robust research evidence, whereas, in fact, they’re not. What Harries and Lethaby should have said was: “If we ever get convincing evidence that these techniques work in L2 learning, then we could use them in the following ways”.

To be fair, the presenters concluded:

Evidence-based strategies exist and language teachers need to be aware of them. Language  teachers and learners already use many of these and it is beneficial to know that research supports this. There is, however, a danger in a whole-scale adoption of findings from research on content-based subject areas often done in inauthentic teaching situations.

Furthermore, there’s a video of J. Harmer interviewing Lethaby at the conference. When asked to give an example of how evidence from neuroscience can help more efficacious ELT practice, Lethaby replied

“There seems to be a place in the brain where new and old information connect. We can, perhaps, in the future, when we know more about how this is done, use it to help learners assimilate new information. But we have to be very cautious, we have to be careful. This is still only potential”. 

In brief, this strikes me as an interesting talk, which points to the future potential of three techniques and reaffirms the evidence for the efficacy of spaced practice and practice testing. We should note that the talk extrapolates from studies of content-based school courses where the focus is on the teaching of explicit knowledge.  In the case of ELT, those of us who question a PPP, coursebook-driven approach consider such a focus to be fundamentally mistaken. Most importantly, what worries me is the discussion at the end, particularly the last bit I quoted, because it could easily be misinterpreted as saying that research evidence to date gives strong support to the claim that elaborative interrogation, self-explanation, and interleaved practice are efficacious tools for classroom use.

Second Round  

We come now to Lethaby’s presentation at the recent 2019 conference. Photos of slides from the talk, and tweets during and after it, suggest that Lethaby more or less repeated her 2018 IATEFL talk. But did she repeat all the caveats, I wonder? Did she make it abundantly clear that conjectures based on “a place in the brain where new and old information connect” are not evidence for the efficaciousness of anything, and that there’s still no good reason to believe that interleaving and elaborative interrogation are good L2 learning strategies? Or did she, as in 2018, rather over-egg the pudding? What were the takeaways from Lethaby’s talk, I wonder, and did she do enough to prevent people from misinterpreting what she was saying?

Third Round 

Well, she certainly didn’t do enough to prevent Hugh Dellar from getting it all wrong. Soon after the start of the talk, Dellar tweeted

@clethaby laying into some neuromyths that still blight our profession: left brain right brain nonsense, we only use 10% of our brains, students learn better when presented with info via preferred learning style etc. Great stuff.

Then:

Really great talk by @clethaby unpicking the truths and lies behind neuroscientific research in ELT. It’s a real cry for deeper engagement with the actual research and for a healthy degree of scepticism, as well as plea for real evidence-based teaching.

So Dellar has already forgotten what I’m sure Lethaby told him at the start of the talk, namely that she wasn’t reporting on neuroscientific research in ELT at all. Still, it’s good to see such a sincere endorsement of cries for deeper engagement with research.

And then, we read this:

 @clethaby suggests that far more effective is spaced practice and interleaving, practice testing and eleborative interrogation – explaining to someone else how/why you got the answer to something (esp. in a guided discovery manner). It helps embed new in old.

I assume that, apart from the last sentence, this is a garbled attempt to paraphrase what Lethaby actually said, and it resembles what she said in her 2018 talk, as quoted above. Still, it shows what happens when you discuss implications of unconfirmed findings without constantly repeating that the findings are unconfirmed.

And what does that last sentence – “It helps embed new in old” – mean? It sounds as if it ought to mean something and as if Dellar thinks he knows exactly what it means. Perhaps it means “Using these four techniques helps embed new knowledge in old knowledge”. But what does “embedding new knowledge in old knowledge” mean with regard to learning an L2? This is, I presume, whether Dellar knows it or not, a reference to the van Kesteren et al (2014) article, but it’s hopelessly mangled. How is one supposed to tell when “new knowledge” gets “embedded” in “old knowledge”? What happens to the new and old knowledge? Do the two types of knowledge move from the different places they were to that place in the brain where new and old knowledge connect? How do they connect, or rather “embed”? Does the new knowledge sort of snuggle in with the old knowledge, or does it all become some different kind of knowledge? How does this process differ from learning something new? And then, since we’re interested in evidence, how do we test whether, in any particular case where a student explains to someone else how/why they got the answer to something, the “new knowledge” is “embedded” in the “old knowledge”?

We started with a 2013 meta-analysis of studies on learning techniques aimed at explicit learning in content-based courses like biology and educational psychology. We end, six years later, with someone tweeting a paraphrase of what they thought they heard someone else say about neuroscientific research in ELT. The hearer jumbles together four different techniques, two of which we know little about, and confidently asserts that teachers can use them to “help embed new in old”, whatever that means.

Fourth Round  

The next step is likely to be that the tireless globetrotter will continue on his travels, now able to assure gullible young teachers all over the world that his views on ELT are backed by “actual research” and that they represent the highest form of “real evidence-based teaching”.

Finally

The tweets that followed the conference suggest that Carol Lethaby should redouble her efforts to avoid being misunderstood, and that all of us should redouble our efforts to carefully scrutinise what the so-called ELT experts tell us.

References

Dunlosky, J., K. A. Rawson, E. J. Marsh, M. J. Nathan and D. T. Willingham (2013) Improving students’ learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14/1: 4−58.

Lethaby also mentioned these sources in her 2018 IATEFL presentation:

Roediger, H. L. and Pyc, M. A. (2012) Inexpensive techniques to improve education: applying cognitive psychology to enhance educational practice. Journal of Applied Research in Memory and Cognition, 1/4: 242−248.

van Kesteren, M. T. R., M. Rijpkema, D. J. Ruiter, R. G. M. Morris and G. Fernandez. (2014) Building on prior knowledge: schema-dependent encoding processes related to academic performance. Journal of Cognitive Neuroscience 26/10: 2250−2261.

SLA research findings which should affect ELT

In response to a tweet from David Cullen, here’s a summary of SLA research that I think needs to be taken more seriously by the ELT community.

From time to time one sees well known “experts” on ELT refer to SLA research. The standard message is that researchers know nothing about real-world classroom practice, and that most of their findings are either irrelevant or unreliable. A few trinkets from SLA research are trotted out by the ELT experts as evidence of their scholarship, and they include these:

  • Using the L1 is OK.
  • Teaching lexical sets is not OK.
  • Guessing from context is not a reliable way of “accessing meaning”.
  • Spaced repetition is a must.
  • Getting in the flow really helps learning.

Such accounts of the research are, I think, cynically frivolous, so, within the confines of a blog post, let’s take a slightly more serious look.

The empirical study of how people learn a second language started in the 1960s, and I’ve written a series of posts on the history of SLA research elsewhere on this blog. Here, I’ll concentrate on theories which look at the psychological process of learning, and in particular on three areas:

  1. Interlanguage: how the learner’s mental idea of the target language develops.
  2. The roles of explicit and implicit knowledge and learning.
  3. Maturational constraints on adult SLA, including sensitive periods.

1. Interlanguages

Studies of interlanguage development were given a framework by Selinker’s (1972) paper, which argues that L2 learners develop their own autonomous mental grammar (IL grammar) with its own internal organising principles. Subsequent work on the acquisition of English morphemes, studies of the developmental stages of negation, word order and questions in English, and Pienemann’s studies of the learning of German as an L2, where all learners adhered to a five-stage developmental sequence (see Ortega, 2009, for a review), put together an increasingly clear picture of interlanguage development.

By the mid 1980s it was clear that learning an L2 is a process whereby learners slowly develop their own autonomous mental grammar with its own internal organising principles. Thirty years on, after hundreds more studies, it is well established that acquisition of grammatical structures, and also of pronunciation features and many lexical features such as collocation, is typically gradual, incremental and slow. Development of the L2 exhibits plateaus, occasional movement away from, not toward, the L2, and U-shaped or zigzag trajectories rather than smooth, linear contours. No matter what the order or manner in which target-language structures are presented to them by teachers, learners analyze the input and come up with their own interim grammars, the product broadly conforming to developmental sequences observed in naturalistic settings. They master the structures in roughly the same manner and order whether learning in classrooms, on the street, or both. To be perfectly clear: the acquisition sequences displayed in IL development have been shown to be impervious to explicit instruction, and the conclusion is that students don’t learn when and how a teacher decrees that they should, but only when they are developmentally ready to do so.

2. The roles of explicit and implicit knowledge and learning

Two types of knowledge are said to be involved in SLA, and the main difference between them is conscious awareness. Explicit L2 knowledge is knowledge which learners are aware of and which they can retrieve consciously from memory. It’s knowledge about language. In contrast, implicit L2 knowledge is knowledge of how to use language and it’s unconscious – learners don’t know that they know it, and they usually can’t verbalize it. (Note: the terms Declarative and Procedural knowledge are often used. While there are subtle differences, here I take them to mean the same as explicit and implicit knowledge.)

Now here’s the important thing: in terms of cognitive processing, learners need to use attentional resources to retrieve explicit knowledge from memory, which makes using explicit knowledge effortful and slow: the time taken to access explicit knowledge is such that it doesn’t allow for quick and uninterrupted language production. In contrast, learners can access implicit knowledge quickly and unconsciously, allowing it to be used for unplanned language production.

While research into explicit and implicit L2 knowledge started out looking almost exclusively at grammar, we now have lots of evidence to show that these constructs also relate to other areas of language such as vocabulary, pronunciation, and pragmatics. For example, while lots of vocabulary learning involves learning items by explicit form-meaning mapping (“table” is “mesa” in Spanish), there are very important aspects of vocabulary knowledge, like collocations and lexical chunks, that involve implicit knowledge.

Three Interface Hypotheses

It’s now generally accepted that these two types of language knowledge are learned in different ways, stored separately and retrieved differently, but disagreement among SLA scholars continues about whether there is any interface between the two. The question is: Can learners turn the explicit knowledge they get in classrooms into implicit knowledge? Or can implicit knowledge be gained only implicitly, while explicit knowledge remains explicit?  Those who hold the “No Interface” position argue that there’s a complete inability for explicit knowledge to become implicit. Others take the “Weak Interface” position which assumes that there is a relationship between the two types of knowledge and that they work together during L2 production. Still others take the “Strong Interface” position, based on the assumption that explicit knowledge can and does become implicit.

The main theoretical support for the No Interface position is Krashen’s Monitor theory, which has few adherents these days. The Strong Interface case gets its theoretical expression from Skill Acquisition Theory, which describes the process of declarative knowledge becoming proceduralised and is most notably championed by DeKeyser. This general learning theory clashes completely with evidence from L1 acquisition and with the interlanguage findings discussed above. The Weak Interface position seems right to me and to most SLA scholars.

Implicit Knowledge is the driver of IL development 

Whatever their differences, there is very general consensus among scholars that SLA has a great deal in common with L1 learning, and that implicit learning is the “default” mechanism of SLA. Whong, Gil & Marsden (2014) point out that implicit knowledge is in fact ‘better’ than explicit knowledge; it is automatic and fast – the basic components of fluency – and more lasting, because it’s the result of the deeper entrenchment which comes from repeated activation. The more that any speech act draws on implicit knowledge, the less we need to rely on explicit knowledge. Doughty (2003) concludes:

In sum, the findings of a pervasive implicit mode of learning, and the limited role of explicit learning …, point to a default mode for SLA that is fundamentally implicit, and to the need to avoid declarative knowledge when designing L2 pedagogical procedures.  

One more twist. Research shows that, although there are doubts about its usefulness, explicit knowledge about the language of the kind teachers habitually offer in class is easy to learn. Conversely, the implicit language knowledge that is obtained from engagement in real communicative activities is relatively hard to learn – evidence from Canadian immersion courses, for example, shows that when learners are focused exclusively on meaning, there are parts of the target language that they just don’t learn, even after hundreds of hours of practice. This leads Loewen (2015) to say:

The fact that explicit knowledge is relatively easy to learn but difficult to use for spontaneous L2 production, and that, conversely, implicit knowledge is relatively difficult to learn but important for L2 production is, I feel, one of the most important issues in ISLA and L2 pedagogy. It is essential for L2 learners and teachers to be aware of the different types of knowledge and the roles that they play in L2 acquisition and production. The implication is that the teaching of explicit language rules will not, by itself, result in students who are able to communicate easily or well in the language”.      

3. Maturational constraints on adult SLA, including sensitive periods

Our limited ability to learn a second language implicitly (in stark contrast to the way we learn our L1 as young children) brings us to the third area of SLA research I want to look at. Long (2007), in an extensive review of the literature, concludes that there are indeed critical periods for SLA, or “sensitive periods”, as they’re called. For most people, the sensitive period for native-like phonology closes between age 4 and 6; for the lexicon between 6 and 10; and for morphology and syntax by the mid teens. This is a controversial area, and I’ll be happy to answer any questions you might ask, but there’s general consensus that adults are partially “disabled” language learners who can’t learn in the same way children do. Which is where explicit learning comes in – the right kind of explicit teaching can help adult students learn bits of the language that they are unlikely to learn implicitly. Long calls these bits “fragile” features of the L2 – features that are of low perceptual saliency (because they’re infrequent, irregular, semantically empty, communicatively redundant, or involve complex form-meaning mappings), and he says these are likely to be learned late or never without explicit instruction.

Implications

It seems that the best way teachers can help students to learn an L2 is by concentrating on giving them scaffolded opportunities to use the L2 in the performance of relevant communicative tasks. During the performance of these tasks, where enhanced and genuine texts, both written and spoken, provide the rich input required, teachers should give students help with aspects of the language that they’re having problems with, by brief switches to what Long calls “focus on form” – from quick explicit explanations to corrective feedback in the form of recasts.

A relatively inefficacious way of organising ELT courses is to use a General English coursebook. Here, the English language is broken down into constituent parts which are then presented and practiced sequentially following an intuitive easy-to-difficult criterion. The teacher’s main concern is with presenting and practicing bits of grammar, pronunciation and lexis by reading and listening to short texts, doing form-focused exercises, talking about bits of the language, and writing summaries on the whiteboard. The mistake is to suppose that whatever students learn from what they read in the book, hear from the teacher, say in response to prompts, read again on the whiteboard, and maybe even read again in their own notes will lead to communicative competence. It won’t.

The mis-use of SLA findings

Numerous prominent members of the ELT establishment use either carefully selected or misinterpreted research findings in SLA to support their views. Here are 3 quick examples.

Catherine Walter’s (2012) article has been quoted by many. She claims that, while grammar teaching has been under attack for years,

evidence trumps argument, and the evidence is now in. Rigorously conducted meta-analyses of a wide range of studies have shown that, within a generally communicative approach, explicit teaching of grammar rules leads to better learning and to unconscious knowledge, and this knowledge lasts over time. Teaching grammar explicitly is more effective than not teaching it, or than teaching it implicitly; that is now clear. … Taking each class as it comes is not an option. A grammar syllabus is needed.                 

No meta-analysis in the history of SLA research supports these assertions. Not one, ever.

Likewise, Jason Anderson (2016), in his article defending PPP, says

while research studies conducted between the 1970s and the 1990s cast significant doubt on the validity of more explicit, Focus on Forms-type instruction such as PPP, more recent evidence paints a significantly different picture.

His argument is preposterous and I’ve discussed it here.

The meta-analysis which is most frequently cited to defend the kind of explicit grammar teaching done by teachers using coursebooks is the Norris and Ortega (2000) meta-analysis of the effects of L2 instruction, which found that explicit grammar instruction (Focus on FormS) was more effective than Long’s recommended, more discreet focus on form (FoF) approach, delivered through procedures like recasts. (Penny Ur is still using this article to “prove” that recasts are ineffective.) However, Norris and Ortega themselves acknowledged, and others like Doughty (2003) reiterated, that the majority of the instruments used to measure acquisition were biased towards explicit knowledge. As they explained, if the goal of the discreet FoF approach is for learners to develop communicative competence, then it is important to test communicative competence to determine the effects of the treatment. Consequently, explicit tests of grammar don’t provide the best measures of implicit and proceduralised L2 knowledge. Furthermore, the post-tests used in the studies in the meta-analysis were not only grammar tests, they were grammar tests done shortly after the instruction, giving no indication of the lasting effects of this instruction.

Newer, post-2015 meta-analyses have used much better criteria for selecting and evaluating studies. The meta-analysis carried out by Kang, et al (2018) concluded:

 implicit instruction (g = 1.76) appeared to have a significantly longer lasting impact on learning … than explicit instruction (g = 0.77). This finding, consistent with Goo et al. (2015), was a major reversal of that of Norris and Ortega (2000).

As for Ur’s typically sweeping assertion that there is “no evidence” to support claims of TBLT’s efficaciousness, a meta-analysis by Bryfonski & McKay (2017) reported:

Findings based on a sample of 52 studies revealed an overall positive and strong effect (d = 0.93) for TBLT implementation on a variety of learning outcomes. … Additionally, synthesizing across both quantitative and qualitative data, results also showed positive stakeholder perceptions towards TBLT programs.

Enough, already! I hope this review of some important areas of SLA research and my comments will generate discussion.

References 

Bryfonski, L. and McKay, T. H. (2017) TBLT implementation and evaluation: A meta-analysis. Language Teaching Research.

Corder, S. P. (1967) The significance of learners’ errors. International Review of Applied Linguistics 5, 161-169.

Doughty, C. J. (2003) Instructed SLA: Constraints, compensation, and enhancement. In Doughty, C. J. and Long, M. H. (eds.) The Handbook of Second Language Acquisition, 256-310. Malden, MA: Blackwell.

Kang, E. Y., Sok, S. and Han, Z-H. (2018) Thirty-five years of ISLA research on form-focused instruction: A meta-analysis. Language Teaching Research 23, 4, 428-453.

Krashen, S. (1977) The monitor model of adult second language performance. In Burt, M., Dulay, H. and Finocchiaro, M. (eds.) Viewpoints on English as a Second Language, 152-161. New York: Regents.

Loewen, S. (2015) Introduction to Instructed Second Language Acquisition. New York: Routledge.

Long, M. (2007) Problems in SLA. Mahwah, NJ: Lawrence Erlbaum.

Ortega, L. (2009) Sequences and processes in language learning. In Long, M. and Doughty, C. (eds.) Handbook of Language Teaching. Oxford: Wiley-Blackwell.

Seliger, H. (1979) On the nature and function of language rules in language teaching. TESOL Quarterly 13, 359-369.

Selinker, L. (1972) Interlanguage. International Review of Applied Linguistics 10, 209-231.

Whong, M., Gil, K-H. and Marsden, H. (2014) Beyond paradigm: The ‘what’ and the ‘how’ of classroom research. Second Language Research 30, 4, 551-568.

Wilkins, D. A. (1976) Notional Syllabuses. Oxford: Oxford University Press.