Positivist and Constructivist Paradigms

Guba and Lincoln’s work, including their (1994) Competing Paradigms in Qualitative Research, is now considered by many to be part of a necessary background for any discussion of (educational) research. I’ve been astonished by how many people who did MAs in TESOL and/or Applied Linguistics in the nineties and onwards were taught to regard Guba and Lincoln’s work as if it were part of the canon of the philosophy of science, rather than stuff which nobody in that field takes seriously, and which very few scientists have even heard of.  Below is another attempt to set the record straight.

Research Paradigms

Following Guba and Lincoln, Taylor and Medina (2013) explain that a “research paradigm” comprises

  • a view of the nature of reality (i.e., ontology) – whether it is external or internal to the knower;
  • a related view of the type of knowledge that can be generated and standards for justifying it (i.e., epistemology);
  • and a disciplined approach to generating that knowledge (i.e., methodology). 

However, scholars of scientific method and the philosophy of science, including Kuhn, Popper, Lakatos, Feyerabend, and Laudan, for example, don’t discuss “Research Paradigms” in this way, because they all take a realist ontology and epistemology for granted. That is, they all assume that an external world exists independently of our perceptions of it; that it is possible to study different phenomena in this world through observation and reflection, to make meaningful statements about them, and to improve our knowledge of them. Furthermore, they all agree that scientific method requires hypotheses to be tested by means of empirical observation, logic and rational argument.

So what’s all this talk of “Research Paradigms” about? According to Taylor and Medina, the most “traditional” paradigm is positivism:

Positivism is a research paradigm that is very well known and well established in universities worldwide. This ‘scientific’ research paradigm strives to investigate, confirm and predict law-like patterns of behaviour, and is commonly used in graduate research to test theories or hypotheses.


In fact, positivism refers to a particular form of empiricism, and is a philosophical view primarily concerned with the issue of reliable knowledge. Comte invented the term around 1830; Mach headed the second wave of positivism fifty years later, seeking to root out the “contradictory” religious elements in Comte’s work; and finally, the Vienna Circle in the 1920s (Schlick, Carnap and Gödel were key members; Russell, Whitehead and Wittgenstein were interested parties) developed a programme labelled “Logical Positivism”, which consisted of cleaning up language so as to get rid of paradoxes, and then limiting science to strictly empirical statements. Their efforts lasted less than a decade, and by the time the Second World War started, the movement had broken up in complete disarray.

It’s my own invention 

When Guba & Lincoln – and now millions of others, it seems – use the term “positivist”, they’re using a definition which has nothing to do with the positivist movements of Comte, Mach, and Carnap, but is rather a politically-motivated caricature of “the scientist”. And the “positivist paradigm” refers to a set of beliefs, etc., which constructivists like Lincoln and Guba want to attribute to the views of scientists in general. Positivism “strives to investigate, confirm and predict law-like patterns of behaviour”. Positivists work “in natural science, physical science and, to some extent, in the social sciences, especially where very large sample sizes are involved”. Positivism stresses “the objectivity of the research process”. It “mostly involves quantitative methodology, utilizing experimental methods”.

As opposed to positivism, we have various other paradigms, including post-positivism, the interpretive paradigm, and the critical paradigm. But the real alternative to the positivist paradigm is the postmodernist paradigm, or the constructivist paradigm as Lincoln and Guba prefer to call it.

The Strong Programme

We can trace Lincoln and Guba’s constructivism back to the 1970s, when some of those working in the area of the sociology of science, taking inspiration from the “Strong Programme” developed by Barnes (1974) and Bloor (1976), changed their aim from the established one of analysing the social context in which scientists work to the far more radical, indeed audacious, one of explaining the content of scientific theories themselves. According to Barnes, Bloor and their followers, the content of scientific theories is socially determined, and there is no place whatsoever for the philosophy of science and all the epistemological problems that go with it.  Since science is a social construction, it is the business of sociology to explain the social, political and ethical factors that determine why different theories are accepted or rejected.

An example of this approach in action is sociologist Ferguson’s explanation of the paradigm shift in physics which followed Einstein’s publication of his work on relativity.

The inner collapse of the bourgeois ego signalled an end to the fixity and systematic structure of the bourgeois cosmos. One privileged point of observation was replaced by a complex interaction of viewpoints.

The new relativistic viewpoint was not itself a product of scientific “advances”, but was part, rather, of a general cultural and social transformation which expressed itself in a variety of modern movements.  It was no longer conceivable that nature could be reconstructed as a logical whole.  The incompleteness, indeterminacy, and arbitrariness of the subject now reappeared in the natural world.  Nature, that is, like personal existence, makes itself known only in fragmented images.  (Ferguson, cited in Gross and Levitt, 1998: 46)

Here, Ferguson, in all apparent seriousness, suggests that Einstein’s relativity theory is to be understood not in terms of the development of a progressively more powerful theory of physics which offers an improved explanation of the phenomena in question, but rather in terms of the evolution of “bourgeois consciousness”.


The basic argument of postmodernists is that if you believe something, then it is “real”, and thus scientific knowledge is not powerful because it is true; it is true because it is powerful. The question should not be “What is true?”, but rather “How did this version of what is believed to be true come to dominate in these particular social and historical circumstances?”  Truth and knowledge are culturally specific. If we accept this argument, then we have come to the end of the modern project, and we are in a “post-modern” world.

Here are a few snippets from postmodernist texts (see Gross and Levitt, 1998, for references):

  • Everything has already happened….nothing new can occur. There is no real world (Baudrillard, 1992: 64).
  • Foucault’s study of power and its shifting patterns is a fundamental concept of postmodernism. Foucault is considered a post-modern theorist because his work upsets the conventional understanding of history as a chronology of inevitable facts and replaces it with underlayers of suppressed and unconscious knowledge in and throughout history. (Appignanesi, 1995: 45).
  • sceptical post modernists look for substitutes for method because they argue we can never really know anything (Rosenau 1993: 117).
  • Postmodern interpretation is introspective and anti-objectivist which is a form of individualized understanding. It is more a vision than data observation (Rosenau 1993: 119).
  • There is no final meaning for any particular sign, no notion of unitary sense of text, no interpretation can be regarded as superior to any other (Latour 1988: 182).


Lincoln and Guba’s (1985) “constructivist paradigm” adopts an ontology & epistemology which is idealist (“what is real is a construction in the minds of individuals”), pluralist and relativist:

There are multiple, often conflicting, constructions and all (at least potentially) are meaningful.  The question of which or whether constructions are true is sociohistorically relative (Lincoln and Guba, 1985: 85).

The observer cannot be neatly disentangled from the observed in the activity of inquiring into constructions.  Constructions in turn are resident in the minds of individuals:

They do not exist outside of the persons who created and hold them; they are not part of some “objective” world that exists apart from their constructors (Lincoln and Guba, 1985: 143).

Thus constructivism is based on the principle of interaction.

The results of an enquiry are always shaped by the interaction of inquirer and inquired into which renders the distinction between ontology and epistemology obsolete: what can be known and the individual who comes to know it are fused into a coherent whole (Guba: 1990: 19).

Trying to explain how one might decide between rival constructions, Lincoln says:

Although all constructions must be considered meaningful, some are rightly labelled “malconstruction” because they are incomplete, simplistic, uninformed, internally inconsistent, or derived by an inadequate methodology.  The judgement of whether a given construction is malformed can only be made with reference to the paradigm out of which the construction operates; in other words, criteria or standards are framework-specific, so, for instance, a religious construction can only be judged adequate or inadequate utilizing the particular theological paradigm from which it is derived (Lincoln, 1990: 144).


There is in constructivism, as in postmodernism, an obvious attempt to throw off the blinkers of modernist rationality, in order to grasp a more complex, subjective reality.  They feel that the modern project has failed, and I have some sympathy for that view. There is a great deal of injustice in the world, and there are good grounds for thinking that a ruling minority who benefit from the way economic activity is organised are responsible for manipulating information in general, and research programmes in particular, in extremely sophisticated ways, so as to bolster and increase their power and control. To the extent that postmodernists and constructivists feel that science and its discourse are riddled with a repressive ideology, and to the extent that they feel it necessary to develop their own language and discourse to combat that ideology, they are making a political statement, as they are when they say that “Theory conceals, distorts, and obfuscates, it is alienated, disparated, dissonant, it means to exclude, order, and control rival powers” (Culler, 1982: 67).  They have every right to express such views, and it is surely a good idea to encourage people to scrutinise texts, to try to uncover their “hidden agendas”.  Likewise the constructivist educational programme can be welcomed as an attempt to follow the tradition of humanistic liberal education.

The constructivists obviously have a point when they say (not that they said it first) that science is a social construct. Science is certainly a social institution, and scientists’ goals, their criteria, their decisions and achievements are historically and socially influenced.  And all the terms that scientists use, like “test”, “hypothesis”, “findings”, etc., are invented and given meaning through social interaction.  Of course.  But – and here is the crux – this does not make the results of social interaction (in this case, a scientific theory) an arbitrary consequence of it.  Popper, in reply to criticisms of his naïve falsification position, defends the idea of objective knowledge by arguing that it is precisely through the process of mutual criticism incorporated into the institution of science that the individual shortcomings of its members are largely cancelled out.

As Bunge (1996) points out, “The only genuine social constructions are the exceedingly uncommon scientific forgeries committed by a team” (Bunge, 1996: 104). Bunge gives the example of the Piltdown man that was “discovered” by two pranksters in 1912, authenticated by many experts, and unmasked as a fake in 1950.  “According to the existence criterion of constructivism-relativism we should admit that the Piltdown man did exist – at least between 1912 and 1950 – just because the scientific community believed in it” (Bunge, 1996: 105).

The heart of the relativists’ confusion is the deliberate conflation of two separate issues: claims about the existence or non-existence of particular things, facts and events, and claims about how one arrives at beliefs and opinions. Whether or not the Piltdown man is a million years old is a question of fact.  What the scientific community thought about the skull it examined in 1912 is also a question of fact.  When we ask what led that community to believe in the hoax, we are looking for an explanation of a social phenomenon, and that is a separate issue.  Just because for forty years the Piltdown man was supposed to be a million years old does not make him so, however interesting the fact that so many people believed it might be.

When Guba and Lincoln say “There are multiple, often conflicting, constructions and all (at least potentially) are meaningful. The question of which or whether constructions are true is socio-historically relative”, this is a perfectly acceptable comment, as far as it goes.  If Guba and Lincoln argue that the observer cannot be neatly disentangled from the observed in the activity of inquiry, then again the point can be well taken.  But when they insist that constructions are exclusively in the minds of individuals, that “they do not exist outside of the persons who created and hold them; they are not part of some “objective” world that exists apart from their constructors”, and that “what can be known and the individual who comes to know it are fused into a coherent whole”, then they have disappeared into a Humpty Dumpty world where anything can mean whatever anybody wants it to mean.

A radically relativist epistemology rules out the possibility of data collection, of empirical tests, of any rational criterion for judging between rival explanations, and I believe those doing research and building theories should have no truck with it. Solipsism and science – like solipsism and anything else, of course – do not go well together. If the postmodernist paradigm rejects any understanding of time because “the modern understanding of time controls and measures individuals”, if its proponents argue that no theory is more correct than any other, if they believe that “everything has already happened”, that “there is no real world”, that “we can never really know anything”, then I think they should continue their “game”, as they call it, in their own way, and let those of us who prefer to work with more rationalist assumptions get on with scientific research.


(Citations from Taylor & Medina, and Guba & Lincoln can be found in their articles which you can download from the links above.)

Barnes, B. (1974) Scientific Knowledge and Sociological Theory.  London: Routledge and Kegan Paul.

Barnes, B. and Bloor, D. (1982) Relativism, Rationalism, and the Sociology of  Science. In Hollis, M. and Lukes, S.  Rationality and Relativism,  21-47.  Oxford: Basil Blackwell.

Bloor, D. (1976) Knowledge and Social Imagery.  London: Routledge and Kegan Paul.

Bunge, M. (1996) In Praise of Intolerance to Charlatanism in Academia. In Gross, P., Levitt, N., and Lewis, M. (eds) The Flight From Science and Reason. Annals of the New York Academy of Sciences, Vol. 775, 96-116.

Culler, J. (1982) On Deconstruction: Theory and Criticism after Structuralism.  Ithaca: Cornell University Press.

Gross, P. and Levitt, N. (1998) Higher Superstition. Baltimore: Johns Hopkins University Press.

Lincoln, Y. S. and Guba, E.G. (1985) Naturalistic Enquiry. Beverly Hills, CA: Sage.

Latour, B. and Woolgar, S. (1979) Laboratory Life: The Social Construction of Scientific Facts.  London: Sage.


The value of form-focused instruction

Currently, the most popular way of teaching courses of General English is to use a coursebook. General English coursebooks provide for the presentation and subsequent practice of a pre-selected list of “items” of English, including grammar, vocabulary and aspects of pronunciation. The underlying assumption is that the best way to help people learn English as an L2 is to explicitly teach the items and then practice them. This assumption is falsified by reliable evidence from SLA research.

Those bent on defending coursebook-driven ELT either ignore the evidence, or they counter by pointing to research which suggests that explicit teaching is effective. There are two problems with such a counter argument:

  1. It misrepresents research evidence by claiming that the evidence supports the way in which coursebooks deliver the explicit instruction.
  2. It cherry-picks the evidence, ignoring increasing amounts of evidence from recent studies which seriously challenge the reliability of conclusions drawn by previous studies, particularly the well-known Norris and Ortega (2000) study.

Misrepresenting Evidence

A good example of misrepresentation is Jason Anderson’s article defending PPP, which I discussed here. Anderson says:

while research studies conducted between the 1970s and the 1990s cast significant doubt on the validity of more explicit, Focus on Forms-type instruction such as PPP, more recent evidence paints a significantly different picture.

But, of course, recent research doesn’t do anything to validate the kind of focus on forms (FoFs) instruction prescribed by PPP, and no study conducted in the last 20 years provides any evidence to challenge the established view among SLA scholars, neatly summed up by Ortega (2009):

Instruction cannot affect the route of interlanguage development in any significant way.

Anderson bases his arguments on the following non-sequitur:

There is evidence to support explicit instruction, therefore there is evidence to support the “PPP paradigm”.

But, while there is certainly evidence to support explicit instruction, this evidence can’t be used to support the use of PPP in classroom-based ELT. Explicit instruction takes many forms, and PPP involves one very specific type of it – the presentation and subsequent controlled practice of a linear sequence of items of language. Anderson appeals to evidence for the effectiveness of a variety of types of explicit instruction to support the argument that PPP is efficacious across a wide range of ELT contexts. In doing so, he commits a schoolboy error in logic.

Cherry-Picking Evidence 

Moving to the second matter, the research which is most frequently cited to defend the kind of explicit grammar teaching done by teachers using coursebooks is the Norris and Ortega (2000) meta-analysis on the effects of L2 instruction, which found that explicit grammar instruction (FoFs) was more effective than Long’s recommended, more discrete focus on form (FoF) approach through procedures like recasts.

However, Norris and Ortega themselves acknowledged, while others like Doughty (2003) reiterated, that the majority of the instruments used to measure acquisition were biased towards explicit knowledge. As they explained, if the goal of the discrete FoF is for learners to develop communicative competence, then it is important to test communicative competence to determine the effects of the treatment. Consequently, explicit tests of grammar don’t provide the best measures of implicit and proceduralized L2 knowledge. Furthermore, the post-tests done in the studies used in the meta-analysis were not only grammar tests, they were grammar tests done shortly after the instruction, giving no indication of the lasting effects of this instruction.

This week, Steve Smith has been tweeting to remind people of a blog post he wrote on “The latest research on teaching grammar”, which gives a summary of a chapter in a book written in 2017. In other words, Smith’s report is two years out of date, thus hardly warranting the claim to report “the latest” research. I should add that Smith’s comments show a depressing lack of critical acumen, coupled with an ignorance of the function of theories. Having outlined the different views of SLA scholars on the interface between declarative and procedural knowledge, Smith invites teachers to suppose that

all of these hypotheses have merits and that teaching which takes into account all three may have its merits.  

But the non-interface and strong-interface hypotheses are contradictory – they belong to theories which provide opposed explanations of a given phenomenon and one of them is therefore false.

New Evidence

Newer meta-analyses have used much better criteria for selecting and evaluating studies. The result is that the conclusions of previous meta-analyses have been seriously challenged, and, in some cases, flatly contradicted. Below are excerpts from Mike Long’s notes summarising the most recent meta-analyses.

Sok, S., Kang, E. Y., & Han, Z-H. (2018). Thirty-five years of ISLA on form-focused instruction: A methodological synthesis. Language Teaching Research 23, 4, 403-427.

  • 88 studies (1980-2015)
  • Explicit: Instruction involved (i) rule explanation or (ii) learners being asked to attend to particular forms and reach a linguistic generalization of their own.
  • Implicit: Neither (i) nor (ii) involved
  • FonF: Form and meaning integrated.
  • FonFs: Learners’ attention directed to target features, with no attempt to integrate form and meaning.
  • FoM: No attempt to direct learners’ attention to target features.

Note: Implicit and FoM are both defined negatively, by the absence of something.

Crucial to the results on studies of form-focused instruction is the length of treatments. Most studies have very short lengths of treatment, which weakens Implicit, FonF and FoM unfairly, as all three require more time and input. On p. 16, we learn that 21% of the studies were done in one day, 74% over two weeks or less, and 50% of sessions lasted one hour or less.

  • 65% of studies took place in a FL context, 25% in a SL context.
  • Proficiency ranges in studies: 36% Low, 34% Mid, 10% High. Short treatments with Low proficiency students favors Explicit and FonFs.
  • 46% lab, 54% classroom, 54% university students
  • No pure FoM studies, they say [but see DeVos et al, 2018, meta-analysis!]

In contrast to the Norris and Ortega (2000) study, Sok et al (2018) found that Implicit instruction was more efficacious than explicit instruction, and that FonF was more efficacious than FonFs.

The shift in the instructional focus of studies from Norris & Ortega (2000) to Sok et al (2018) shows how more and more researchers (but not yet pedagogues or textbook writers) have recognized the limitations of explicit instruction and woken up to the importance of, and need for, incidental learning and implicit knowledge.

Kang, E. Y., Sok, S., & Han, Z-H. (2018). Thirty-five years of ISLA on form-focused instruction: A meta-analysis. Language Teaching Research 23, 4, 428-453.

54 studies (1980 – 2015), including 15 from Norris & Ortega (2000), and 39 new (2000 – 2015).

 Implicit instruction (g = 1.76) appeared to have a significantly longer lasting impact on learning … than explicit instruction (g = 0.77). This finding, consistent with Goo et al. (2015), was a major reversal of that of Norris and Ortega (2000).

Large effect size for instruction (g = 1.06), and also on delayed post-tests (g = .93).

75% in FL, and 25% in SL setting.

Instruction over an average of 11 days, average of two sessions and 48 minutes per session.

55% adults, 19% adolescents, 13% young learners. Average of 29 SS per treatment group.

32% beginners, 44% intermediates, 9% advanced learners.

Explicit (g = 1.1) = Implicit (g = 1.38) on immediate post-tests.

Implicit (g = 1.76) > Explicit (g = .77) on delayed post-tests (!) (p < .05) [This is the usual pattern: Implicit learning is more durable]

Using immediate post-test scores as the DV, results for moderator variables were:

  • Oral assessment measures (g = 1.03) or both oral and written measures (g = 1.02) yielded a significantly larger mean effect than studies utilizing written measures only (g = 0.73)
  • L2 proficiency was a significant moderator. Instruction had a greater effect on novice learners (g = 1.45) than intermediate (g = 0.70) and advanced learners (g = 0.88).
  • FL v. SL educational setting was not a factor. 
  • Educational context — elementary, secondary, university, language institutes (student age) — was not a significant factor


There is general agreement among academics researching instructed L2 learning that explicit instruction can play a significant part in facilitating and accelerating the learning process. But it’s becoming increasingly clear that the type of explicit instruction which typifies a PPP approach to ELT, delivered through coursebook-driven ELT, is not efficacious. More and more research evidence supports the view that teachers should concentrate on scaffolding implicit learning, using explicit instruction in line with Long’s FoF model.

Stop Flying

We’re stumbling towards environmental catastrophe. One way we can help prevent this catastrophe is to appreciate the harm flying does and to commit to flying as little as possible.

In a recent post, Sandy Millin gives a list of some of the things she does to try to reduce her impact on the environment. They include some good suggestions, but they ignore the issue of flying. “I’m very aware that I fly far too much” she says, but she says nothing more about it. It’s an issue that surely needs addressing.

I suggest that

  1. Teacher trainer / developers make every effort to avoid flying. Video-conferencing is the obvious alternative. It means changing the way the courses are delivered, but it can be done.
  2. Conference organisers stop flying in plenary speakers to grandstand their events. Again, video-conferencing is the obvious answer.
  3. More local, smaller conferences should replace the huge, international events. Yes, there’s a downside, but this is an emergency.

So I urge everybody to make a commitment not to fly to any conference ever again, and to boycott any local teacher development event where some ‘expert’ is flown in from thousands of miles away to lead the event.

A commitment to reduce flying to a minimum in the ELT world would have enormous, beneficial results. Not only would it help the environment, it would also help to stimulate local initiative, and to promote local organisations and local talent.

There are tremendous opportunities as well as uncomfortable costs involved in taking drastic action to reverse the effects of climate change now. As an anarchist, I think we’d gain enormously from scaling down, focusing on our local community, organising more widely through networks, deconstructing the state. Wooops! That last bit will maybe put people off, but this is, of course, a question of politics, and I’m happy to discuss the politics involved.

We’re on the cusp. We either ignore the threat, or we act. Action involves lots of things, including all the things that Sandy Millin lists. But right at the top of the list is to change the way we think about flying.


SLB: Task-Based Language Teaching Course No. 2

What is it?

It’s an on-line course about Mike Long’s version of TBLT, consisting of twelve two-week sessions. In the course, we

  • explain the theory behind it;
  • describe and critique Long’s TBLT;
  • develop lighter versions for adoption in more restricted circumstances;
  • trace the steps of designing a TBLT syllabus;
  • show you how to implement and evaluate TBLT in the classroom.

When is it? 

It starts on November 7th and finishes on April 9th 2020.

What are the components of the Sessions?

  • Carefully selected background reading
  • A video presentation from the session tutor
  • Interactive exercises to explore key concepts
  • A forum discussion with your tutor and fellow course participants
  • A 1-hour group videoconference with your tutor
  • An assessed task (e.g. short essay, presentation, task analysis etc.)

Who are the tutors?

Neil McMillan and I do most of the tutoring, but there will also be tutorials by Roger Gilabert, Mike Long and Glenn Fulcher.

How much work is involved?

About 5 hours a week.

Why should I do the course?

1. To change. Progress involves change, and depends on a better, deeper understanding of the situation where change is needed.

2. To improve your teaching. Evidence shows that using a General English coursebook is not an efficacious way of helping students to achieve communicative competence: teachers spend too much time talking about the language and students spend too little time talking in the language. TBLT is based on helping students to use the L2 for their communicative needs, by involving them in relevant, meaningful tasks, scaffolding their learning and giving them the help they need, when they need it. This course will explain TBLT and show you how to adapt it to your particular situation.

3. To improve your CV. You’ll have greater range as a teacher. If you’re involved in, or want to be involved in, teacher training / development, course design, materials design, or assessment, this course will help you advance.

Why is there so much resistance to real change?

Because by definition, change threatens the status quo. In ELT, the way things are suits those who run the show – it’s convenient and marketable. Language is elusive, ambiguous, volatile; and language learning is a complex, dynamic, non-linear process.  In order to be packaged and sold, language is cut up into hundreds of neat and tidy items, which Scott Thornbury calls ‘McNuggets’, and language learning is reduced to a linear process of accumulating these items. Students buy courses of English, where they learn about and practice a certain batch of items organised in a coursebook. Their proficiency is assessed according to their knowledge of these items. The knowledge learned is referred to in “can do” statements, which are used to plot students’ progress along the CEFR line from level A1 to level C2.  The levels are reified, i.e., treated as if they were real (which they are not), and as if they reflected communicative competence (which they do not). But it looks OK, if you don’t look too closely, and there are very powerful commercial interests promoting it.

What is TBLT?

There are different versions of TBLT, including “task-supported” and “hybrid” versions. They all emphasise the importance of students working through communicative activities rather than the pages of a coursebook, but we think the best is Mike Long’s version, which identifies “target tasks” – tasks which the students will actually have to carry out in their professional or private lives – and breaks them down into a series of ‘pedagogic tasks’ which form the syllabus. In the course, we consider how to identify target tasks, how to break these down into pedagogic tasks, how to find suitable materials, and how to bring all this together using the most appropriate pedagogic procedures.

The course sounds very demanding.

We’ve extended the length of the course, so now you’ll be expected to dedicate between 4 and 6 hours a week to it. The reading is non-technical, the video presentations are clear, participation in on-line discussions is very relaxed, and the written component is practical and short.

Is there a Preview?  

Yes. Click on this link to see Session 1 


…. and more information? 

Click here: TBLT November 2019