A Note on Popper’s Demarcation line between science and non-science

This is in response to Mura Nava’s request for “a short-winded view of the sci / non-sci” distinction, made after I’d suggested that Raphael Berthele’s article was long-winded and confused.

Popper says falsifiability is the hallmark of a scientific theory, and allows us to draw a demarcation line between science and non-science: if a theory doesn’t make predictions that can be falsified, it’s not scientific. By this criterion, astronomy is scientific and astrology is not: although astrologers can point to millions of examples of true predictions, they deny that false predictions constitute a challenge to their theory.

Popper insists that in scientific investigation we start with problems, not with empirical observations, and that we then leap to a solution of the problem we have identified – in any way we like. This second stage is crucial to an understanding of Popper’s epistemology: when we are at the stage of coming up with explanations, with theories or hypotheses, then, in a very real sense, anything goes. Inspiration can come from lowering yourself into a bath of water, being hit on the head by an apple, or imbibing narcotics. It’s at the next stage of the theory-building process that empirical observation comes in, and, according to Popper, its role is not to provide data that confirm the theory, but rather to find data that test it.


Empirical observations should be carried out in attempts to falsify the theory: we should search high and low for a non-white swan, for an example of the sun rising in the West, etc. The implication is that the theory has to be formulated in such a way that empirical tests can be carried out: there must be, at least in principle, some empirical observation that could clash with the explanations and predictions that the theory offers. If the theory survives repeated attempts to falsify it, then we can hold on to it tentatively, even though we’ll never know for certain that it’s true. The bolder the theory (i.e. the more it exposes itself to testing, the more wide-ranging its consequences, the riskier it is), the better. If the theory doesn’t stand up to the tests, if it’s falsified, then we need to re-define the problem, come up with an improved solution, a better theory, and then test it again to see if it stands up to empirical tests more successfully. These successive cycles are an indication of the growth of knowledge (Popper, 1974).

Popper (1974: 105-106) gives the following diagram to explain his view:

P1 -> TT -> EE -> P2

P = problem    TT = tentative theory    EE = Error Elimination through empirical experiments which test the theory

We begin with a problem (P1), which we should articulate as well as possible. We then propose a tentative theory (TT) that attempts to solve the problem. We can arrive at this theory in any way we choose, but we must formulate it in such a way that it leaves itself open to empirical tests. The empirical tests and experiments (EE) that we devise for the theory have the aim of trying to falsify it. These experiments usually generate further problems (P2) because they contradict other experimental findings, or they clash with the theory’s predictions, or they cause us to widen our questions. The new problems give rise to a new tentative theory and the need for more empirical testing.
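Popper’s schema can be read as an iterative loop. Here is a minimal sketch in Python – all the names, data and candidate theories are mine, purely illustrative, not anything Popper proposed – showing how each refutation becomes the next problem, and how a theory is only ever retained tentatively:

```python
# Illustrative sketch of P1 -> TT -> EE -> P2 as a loop.
# All names and data are hypothetical examples, not Popper's own.

data = [2, 4, 6, 8, 9]  # toy observations the theories try to cover

candidate_theories = [
    ("all even", lambda x: x % 2 == 0),
    ("all single-digit", lambda x: x < 10),
]

def propose(problem):
    # TT: conjecture in any way we like -- here, simply try the next candidate.
    return candidate_theories.pop(0)

def attempt_refutation(theory):
    # EE: search hard for an observation that clashes with the theory.
    name, claim = theory
    for x in data:
        if not claim(x):
            return x  # a counterexample: this becomes the new problem P2
    return None       # survived the tests -- so far

def grow_knowledge(problem, rounds=5):
    for _ in range(rounds):
        theory = propose(problem)                 # TT
        new_problem = attempt_refutation(theory)  # EE
        if new_problem is None:
            return theory                         # tentatively retained, never proven
        problem = new_problem                     # P2 feeds the next cycle
    return None

surviving = grow_knowledge("describe the data")
print(surviving[0])  # -> all single-digit ("all even" is refuted by the 9)
```

The point of the loop is that `attempt_refutation` never returns “proved”: the best outcome is survival, and a surviving theory is simply handed back for further testing.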

Popper thus gives empirical experiments and observation a completely different role to the one given to them by the empiricists and positivists: their job now is to test a theory, not to prove it, and since this is a deductive approach, it escapes Hume’s famous problem of induction. Popper takes advantage of the asymmetry between verification and falsification: while no number of empirical observations can ever prove a theory is true, just one such observation can prove that it is false. All you need is to find one black swan and the theory “All swans are white” is disproved.
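The asymmetry can be shown in a few lines of code – again a purely illustrative sketch, with names of my own choosing: no finite run of confirming observations verifies the universal claim, but a single counterexample refutes it.

```python
# Illustrative sketch of the verification/falsification asymmetry.
# Function and variable names are hypothetical, chosen for this example.

def falsifying_instance(theory, observations):
    """Return the first observation that contradicts the theory, or None."""
    for obs in observations:
        if not theory(obs):
            return obs   # a single clash refutes the universal claim
    return None          # survived so far -- the theory is still only tentative

all_swans_are_white = lambda swan: swan == "white"

print(falsifying_instance(all_swans_are_white, ["white"] * 1000))
# -> None: a thousand confirmations still don't prove the theory
print(falsifying_instance(all_swans_are_white, ["white", "black"]))
# -> black: one counterexample is enough to disprove it
```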

Popper claims that his research methodology, and the epistemology that informs it, are rationalist because they insist that we arrive at knowledge of the world – and, in particular, that we build scientific theories to explain aspects of our world – by using our minds creatively. However, Popper also argues that we must test these a priori ideas empirically. Hence, the two central premises of Popper’s argument are:

  • Knowledge advances by trying to solve problems in a rational way.
  • The role of empirical data is to test theories.


It’s worth noting here that “positivism” is not a synonym for a scientific method which relies on logic and empirical tests. It was a philosophical movement culminating in the writings of the Vienna Circle, and is, in my opinion, a good example of philosophers stubbornly marching up a blind alley. It is a fundamentally mistaken project, as Popper has, I believe, shown, and as Wittgenstein himself recognised in his later work (see Wittgenstein, 1953). Those critics of mainstream SLA research who label their opponents “positivists”, or who argue against “positivist science”, are either ignorant of the history of positivism or are making a straw man case which no present-day researcher in the field of SLA adopting a rationalist position need defend.

A clear distinction must also be made between empiricism as a philosophical position (a position taken by Skinner, for example, and currently flirted with by emergentists) and empirical evidence – observations that can be measured and checked. Popper rejects the strict epistemological claims of empiricism, and claims instead that scientific theories must have some empirical content and be open to empirical tests.


Popper, K. R. (1959) The Logic of Scientific Discovery.  London: Hutchinson.

Popper, K. R. (1963) Conjectures and Refutations.  London: Hutchinson.

Popper, K. R. (1972) Objective Knowledge.  Oxford: Oxford University Press.

Popper, K. R. (1974) Replies to Critics. In P. A. Schilpp (ed.), The Philosophy of Karl Popper. La Salle, Ill.: Open Court.

Wittgenstein, L. (1953) Philosophical Investigations. Translated by G. E. M. Anscombe. Oxford: Basil Blackwell.


The Encounter and the end of ELT

Rob fuck-and-shit Sheppard’s version on Twitter of what I said to Scott at the recent InnovateELT conference needs correcting. Here’s what really happened:

Me: Scott!

Scott:  David! How good to see you. Loved your talk; I wish I could have been there.

Me:  You’re too kind. I’m Geoff, actually.

Scott:  Deaf? That explains everything!

Seriously though, Scott’s talk succinctly hit the nail on the head: simultaneous machine translation software radically affects the need to learn English.

In today’s world, English is a lingua franca, and that’s why close to 2,000,000,000 people are learning it. A series of devices is being designed that makes it possible for speakers of different languages to communicate with each other in real time without the need to use English. Such is the rate of progress in this area of software and hardware development that in 10 years’ time, tourists, lawyers, doctors, bankers, etc., will simply not need to communicate with each other in English any longer. Much (not all, but a significant amount) of the communication that today is done in English by non-English speakers will be done by people speaking their own language and having it translated by machines. English as a lingua franca might remain, but the need for most people to learn English in the way they do today will vanish.

So: functional needs are disappearing. But social needs remain. For the caring professions, human communication remains beyond the reach of machines. And gaming too – if you like that stuff. The implications are profound for all of us. I won’t go on. Here’s Scott. What I find touching about this video is Scott’s diffidence: the usual confident demeanour gives way to a hesitant, almost reluctant style. It’s like he really doesn’t want to deliver his awful message. But he’s right, and we need to deal with the truth of what he’s saying right now.

What’s the Outcome of a Roadmap?

Selling coursebooks is a multi-billion-dollar business where profit, not educational excellence, is the driving criterion.

Pearson’s latest series of English coursebooks is called “Roadmap”, and Pearson promotes its latest star product in just the same way as if it were selling toothpaste: it makes the same sort of absurd claims for its new, eight-level general English course for adults as Colgate makes for its latest re-branded toothpaste.

Just as Colgate’s new improved toothpaste has no demonstrated new improved beneficial effect on oral hygiene, so the Roadmap series has no new improved beneficial effect on L2 learning. There’s nothing new or improved to be found, and no reason whatsoever to think that Roadmap is any better than any other coursebook.

Pearson spends millions of dollars on carefully crafted promotional bullshit aimed at maximising sales and profit, with scant regard for the truth or for educational values. The criticisms made of coursebook-driven ELT apply just as forcefully to this series as to previous ones. The only difference is the way it’s been produced.

Everybody involved in making this series is demeaned. Once the project – Roadmap – is given the green light, some high-up executive in Pearson is put in charge and a “team” is formed. All the work is cut up and dished out to people – most of them working on zero-hours contracts so as to minimise costs. Everybody works within the strict, suffocating confines of Pearson’s over-arching GSE framework. Everything they write is strictly scrutinised and revised to make sure that all texts and exercises use words and structures in line with the finely granulated, innumerable steps all students must take to ensure “real progress” in Pearson’s frightening world. It’s a badly paid, miserable, life-sucking nightmare. Still, somebody’s got to do it, right?

In its huge publicity campaign, Pearson is paying for authors to fly to different countries to promote the series. The authors – dubious stars in the ELT firmament who know next to nothing about language learning – must agree to do whatever their paymasters dictate. They’ll soon appear in promotional videos, standing in front of iconic landmarks like the sublime clock in Prague, ironically clutching a hopeless Roadmap and mouthing pointless platitudes which, in the vision of some coked-up marketing guru, prove that Roadmap is the latest must-have coursebook – regardless of the fact that it consists of an overpriced collection of dud materials leading students up a dystopian garden path to nowhere.

Pearson claims that the Roadmap series is unique because “Every class is different, every learner is unique”. This is, quite simply, bullshit: an empty load of marketing nonsense. Nothing substantial justifies this vaunted claim: the same old crap is delivered in the same old way, the only difference being that skills development is marginally separated. The Roadmap series is the result of a manager delivering a corporate vision: lock-step progression in pseudo-scientifically measured steps towards a reified, mistaken, commodified version of language proficiency. Look at the sample units and weep. Roadmap is Pearson’s version of Colgate’s “everything’s-new-but-in-fact-nothing’s-changed” toothpaste – the latest attempt to package and sell a useless product to a gullible public who, were they better informed, would reject it as the preposterous crap that it is.

Two of the eight accredited authors of the Roadmap series are also the co-authors of the Outcomes series. Strangely, Andrew Walkley, one of the versatile duo, has recently been explaining how the new edition of Outcomes Beginner is a “different kind of Beginner-level book”. It’s not just based on Lewis’s lexical approach, with all the criticisms of grammar-based coursebooks that this implies; it also uses a “spiral syllabus” which, Walkley confidently claims, recycles material far more efficiently than the rest. Such are the conflicting claims made for the two coursebook series that one wonders how the same authors can put their names to both of them. Should a coursebook be made up of 10 units containing “three core lessons featuring grammar, vocabulary and pronunciation”, or should it, as Walkley suggests, reject “having one block followed by another block” in favour of teaching “a little about form A, followed by a little bit on form B (perhaps whilst also re-using what we learned about form A), followed by a little on form C (perhaps recycling something of A and/or B), before we return to study something more about form A, etc.”? It reminds me of Groucho Marx’s quip: “Those are my principles, and if you don’t like them, I have others”.