Why does Direct Instruction evoke such rancour?
Written by Kerry Hempenstall
Saturday, 12 October 2013, 14:50

Kerry Hempenstall, Ph.D.

RMIT University. B.Sc., Dip.Ed., Dip.Soc.Studies, Dip.Ed.Psych., Ph.D., MAPsS.


 

 

In a previous post, I listed those reports, syntheses, reviews, and meta-analyses that have offered support to Direct Instruction as a genuine evidence-based approach to instruction.

In this post, I want to consider what Direct Instruction is, and what criticisms have impeded the model from achieving the strong acceptance in education that it deserves. As an avid reader of research in education for many years, I’ve been regularly bemused to read studies employing a wide range of recently developed programs, some of which were clearly influenced by DI. Rarely is DI itself evaluated or even discussed by independent researchers.

 

Research supports explicit instruction

How has DI been viewed by educators? Obviously, those still enamoured with Whole Language, or those whose pre-service training was conducted by WL protagonists, are likely to be critical of explicit instruction generally. DI, being perhaps the prime example of explicit instruction and having had a long history, may have been a lightning rod for those who do not consider explicit instruction appropriate. In my education readings, and in my experience in offering electives to teachers-in-training, it is frequently evident that many critics have little understanding of DI. They just know (or have been told) that they don’t like it!

This criticism of DI as an exemplar of explicit instruction is yet another example of the disconnect between research and practice in education, given the acknowledged evidence-based superiority of explicit models over other approaches (Alfieri, Brooks, Aldrich, & Tenenbaum, 2010).

“Research almost universally supports explicit instructional practices (Archer & Hughes, 2011; Kirschner, Sweller, & Clark, 2006; Klahr & Nigam, 2004; Marchand-Martella, Slocum, & Martella, 2004). Explicit instructional approaches are considered more effective and efficient as compared to discovery-based approaches (Alfieri, Brooks, Aldrich, & Tenenbaum, 2010; Ryder, Tunmer, & Greaney, 2008), particularly when students are naïve or struggling learners” (Marchand-Martella, Martella, Modderman, Petersen, & Pan, 2013, p.166).

Many teachers ignore the advantages for their students of explicit instruction in, say, reading. Yet the strategies replacing it, such as the three-cueing system (see http://www.adihome.org/adi-blog/entry/the-three-cueing-system-in-reading-will-it-ever-go-away), are not simply ineffective or neutral in effect; they create unnecessary obstacles to student success.

The apparently unruly nature of the orthography, the existence of many words that do not follow straightforward one-to-one mapping of letter onto phoneme, may undermine the resolve of teachers to teach reading as if it were an exercise in alphabetic decoding. And teachers may not have such a resolve in the first place. We know that some do not because they have been trained to avoid explicit instruction in the alphabetic principle (Goodman, 1986; Shankweiler & Fowler, 2004). This in turn has been in part based on the conviction that reading cannot be done this way anyway, precisely because of the existence of irregular words like the, once, one, was, were, there … . So, we may have the beginnings of a perfect storm – children ill equipped to discover, all by themselves, the alphabetic nature of English writing, the same children well equipped, all by themselves, to discover its morphemic nature, and a teacher who advertently or inadvertently fosters the morphemic hypothesis and obscures the phonemic one, leading to children trapped in an initially successful strategy but one that will eventually leave them floundering (Byrne, Freebody, & Gates, 1992) (Byrne, 2011, p. 182).

One issue raised by those who espouse a constructivist approach is that higher-order processing should be the priority, and (it is claimed) it does not arise from explicit teaching.

The problem with that argument is that learning generally doesn’t work that way. As cognitive scientists like Daniel Willingham have shown, it’s all but impossible to have higher-order thinking without strongly established skills and lots of knowledge of facts. Cognitive leaps, intuition, inspiration--the stuff of vision--are facilitated by expending the smallest amount of processing capacity on lower-order aspects of a problem and reapplying it at higher levels. You leap over the more basic work by being able to do it without thinking much about it, not by ignoring it. This synergy between the rote and the creative is more commonly accepted in many nations in Asia. ‘Americans have developed a fine dichotomy between rote and critical thinking; one is good, the other is bad,’ write the authors of one study of Japanese schools. But they find that many types of higher-order thinking are in fact founded on and require rote learning. Creativity often comes about because the mind has been set free in new and heretofore encumbered situations (Lemov, Woolway, & Yezzi, 2012, p. 37-38).

One contributing factor may be related to the failure of teacher education to provide prospective teachers with the knowledge needed to appreciate research, and in particular, literacy research (Clark, Jones, Reutzel, & Andreasen, 2013; Greenberg, McKee, & Walsh, 2013; Leader-Janssen, & Rankin-Erickson, 2013).

“The results of these studies suggest that when teachers lack an understanding of research-based principles that allow effective adaptation, interventions may be prematurely discarded and practitioners may conclude that research has little relevance to their practice (Gersten, Vaughn, Deshler, & Schiller, 1997)” (Slocum, Spencer, & Detrich, 2012, p.172).

There has been a burgeoning interest in brain-based learning (where else might learning occur, you ask?), and in the many programs purporting to address underlying neural structures. The evidence for these is generally slim to non-existent. So, it is interesting to read that the approaches with the strongest outcome data are the same programs associated with neural changes consequent upon direct, explicit teaching:

“Focus, then, must be two-fold. First is the focus on ensuring appropriate environmental and nutritional conditions that stimulate dendritic growth in infancy and early childhood. But second must be emphasis on improving the strength of particular neural circuits, not simply on the overall growth of dendrites. Most interestingly, instructional activities such as memorization, mastery learning, and repetition-based activities appear to best strengthen and solidify the formation and maintenance of these circuits (Garrett, 2009; Freeberg, 2006). Data strongly support the use of precision teaching, mastery learning approaches, and programs such as DISTAR or direct instruction (Kirschner, Sweller, & Clark, 2006; Mills, Cole, Jenkins, & Dale, 2002; Ryder, Burton, & Silberg, 2006; Swanson & Sachse-Lee, 2000)” (Alferink & Farmer-Dougan, 2010, p. 46).

More on the various criticisms later.

 

So, what is DI?

It is one of the most thoroughly researched educational models (Magliaro, Lockee, & Burton, 2005; Weir, 1990). There is ample evidence of its effectiveness for a wide range of student learning problems. It differs from Whole Language in its assumptions about the teaching process, about learner characteristics, and about the means of syllabus construction; in fact, it could be described as the antithesis of Whole Language.

“Although their [Whole Language] theories lack any academically acceptable research base they continue to dominate educational policy. Direct Instruction models are ignored notwithstanding the huge body of research that indicates that direct instruction is vastly superior if basic skills and knowledge are the goal” (Weir, 1990, p.30).

The Direct Instruction model lauded in Follow Through had its beginnings in the early 1960s through the work of Carl Bereiter and Siegfried Engelmann. The subsequent involvement of Wes Becker and Doug Carnine, among others, led to the publication of a number of teaching programs in 1969. The programs share a common teaching style readily observable to any classroom visitor. The instruction takes place in small groups, with a teacher directing activities with the aid of a script, and students are actively involved in responding to a fast-paced lesson during which they receive constant feedback. Programs are designed according to what, not whom, is to be taught. Thus, all children work through the same sequence of tasks directed by a teacher using the same teaching strategies. Individual differences are accommodated through different entry points, reinforcement, amounts of practice, and correction strategies (Gregory, 1983).

 

Characteristics of the Direct Instruction Model

There are a number of important characteristics of Direct Instruction programs (Becker, 1977). It is assumed that all children can learn and be taught, thus failure to learn is viewed as failure to teach effectively (Engelmann, 1980). Children whose progress is restricted must be taught to learn faster through a focus on features of teaching designed to improve efficiency of instruction. These features derive from the design of instruction, and from process variables such as how the curriculum is implemented. Curriculum is designed with the goal of "faultless instruction" (Engelmann, 1980), that is, sequences or routines for which there is only one logical interpretation. The designer's brief is to avoid ambiguity in instruction - the focus is on logical-analysis principles. These principles allow the organisation of concepts according to their structure and the communication of them to the learner through the presentation of positive and negative examples.

Engelmann (1980) highlighted four design principles:

(i) Where possible, teach a general case: those skills which, when mastered, can be applied across a range of problems for which specific solutions have not been taught, for example, decoding regular words. These generalisations may be taught inductively, by examples only, or deductively, by providing a rule and a range of examples to define the rule's boundaries.

(ii) Teach the essentials. The essentials are determined by an analysis of the skills necessary to achieve the desired objective. There is an underlying assertion that, for reading, it is possible to achieve skilled reading by task analysis and the teaching of subskills within a cumulative framework. Advocates of a "Whole Language" perspective would disagree with the possibility or desirability of teaching in this manner.

(iii) Keep errors to a minimum. Direct Instruction designers consider errors counter-productive and time-wasting. For remedial learners a high success rate is useful in building and maintaining motivation lost through a history of failure. This low error rate is achieved by the use of the instructional design principles elucidated in Theory of Instruction (Engelmann & Carnine, 1982) and by ensuring students have the pre-skills needed to commence any program (via a placement test).

(iv) Adequate practice. Direct Instruction programs include the requirement for mastery learning (usually above 90% mastery). Students continue to focus on a given task until that criterion is reached. The objective of this strategy is the achievement of retention without the requirement that all students complete the identical regimen. The practice schedule commences with massed practice, shifting to a spaced schedule. The amount of practice decreases as the relevant skill is incorporated into more complex skills. Advocates of Direct Instruction argue that this feature of instruction is particularly important for low-achieving students and is too often allowed scant regard (Engelmann, 1980). Although this emphasis on practice may be unfashionable, there is considerable supporting research, and a number of effective schools are increasingly endorsing its importance (Rist, 1992; Thompson, Ransdell, & Rousseau, 2005). "The strategies that have fallen out of style, such as memorising, reciting and drilling, are what we need to do. They're simple - but fundamental - things that make complex thinking possible" (Rist, p. 19).

 

Roots of the Direct Instruction Model

It is these principles of instructional design that set Direct Instruction apart from traditional and modern behavioural approaches to teaching. However, the model does share a number of features with other behavioural approaches (e.g., reinforcement, stimulus control, prompting, shaping, extinction, fading), and with the effective teaching movement (mastery learning, teacher presentation skills, academic engaged time, and correction procedures). These latter features have been researched thoroughly over the past 40 years, and have generally been accepted as comprising “direct instruction” (Gersten, Woodward, & Darch, 1986).

Rosenshine (1979) used the expression to describe a set of instructional variables relating teacher behaviour and classroom organisation to high levels of academic performance for primary school students. High levels of achievement were related to the amount of content covered and mastered. Hence the pacing of a lesson can be controlled to enhance learning. Academic engaged time refers to the percentage of the allotted time for a subject during which students are actively engaged. A range of studies (Rosenshine & Berliner, 1978) has highlighted the reduction in engagement that occurs when students work alone as opposed to working with a teacher in a small group or as a whole class. The choral responding typical of DI programs is one way of ensuring high student engagement. The author once counted 300 responses in 10 minutes of teacher-directed decoding activity in a Year 7 reading group (Hempenstall, 1990).

A strong focus on the academic was found to be characteristic of effective teachers. Non-academic activities, while perhaps enjoyable or directed at other educational goals, were consistently negatively correlated with achievement. Yet, in Rosenshine's (1980) review of studies it was clear that an academic focus rather than an affective emphasis produced classrooms with high student self-esteem and a warm atmosphere. Less structured programs and teachers with an affective focus had students with lower self-esteem. Teacher-centred rather than student-centred classrooms had higher achievement levels. Analogously, teachers who were strong leaders and did not base their teaching around student choice of activities were more successful. Solomon and Kendall (1976), cited in Rosenshine (1980), indicated that permissiveness, spontaneity and lack of classroom control were " … negatively related, not only to achievement gain, but also to positive growth in creativity, inquiry, writing ability, and self-esteem for the students in those classrooms” (p. 18).

The instructional procedure called demonstration-practice-feedback (sometimes model-lead-test) has strong research support (Rosenshine, 1980). This deceptively simple strategy combines three elements of teaching strongly related to achievement in one general model. It comprises an invariant sequence in which a short demonstration of the skill or material is followed by guided practice during which feedback is provided to the student (and further demonstration if necessary). The second phase usually involves response to teacher questions about the material previously presented. It would appear that the overlearning this phase induces is particularly valuable. The third phase, that of independent practice, is evaluated by the teacher.

Medley's (1982) review indicated the efficacy, for low-SES students, of a controlled practice strategy involving low cognitive level questions, a high success rate (above 80%), and infrequent criticism. Thus, the popularity among teachers of the high cognitive level questions implicit in discovery learning models is difficult to justify empirically. These high level questions require students to manipulate concepts without having been shown how to do so. Research on discovery approaches has indicated a negative relationship with student achievement. Winne's (1979) review of 19 experimental studies on higher order questions made this point very strongly, as does Yates (1988).

To summarise the findings of research into teacher variables with a positive impact on student learning, Rosenshine and Berliner (1978) provide a definition for direct instruction, a concept providing part of the theoretical basis for Direct Instruction.

Direct instruction pertains to a set of teaching behaviours focused on academic matters where goals are clear to students; time allocated for instruction is sufficient and continuous; content coverage is extensive; student performance is monitored; questions are at a low cognitive level and produce many correct responses; and feedback to students is immediate and academically oriented. In direct instruction, the teacher controls the instructional goals, chooses material appropriate for the student's ability level, and paces the instructional episode (p. 7).

Direct Instruction has developed into a comprehensive system of instruction covering many skill areas: reading, mathematics, language, spelling, microcomputing, writing, reasoning, and a variety of other school subjects including chemistry, critical reading, social studies, and history. Thus, the approach that initially restricted its emphasis to basic skills has expanded into higher-order skills (Kinder & Carnine, 1991), has a long research base, and continues to have unfulfilled promise as part of a solution to the problems of illiteracy in our community.

 

Evaluation of the Direct Instruction Model

A very large national evaluation of different approaches to teaching basic skills was entitled Project Follow Through. This evaluation showed that the Direct Instruction approach was particularly effective. For a discussion of Follow Through, see Adams (1996), Becker and Gersten (1982), Engelmann, Becker, Carnine, and Gersten (1988), Grossen (1995), and Watkins (1996).

In addition to the Follow Through data, there were numerous evaluations of Direct Instruction programs from the early days, but, as with much educational research, relatively few studies met the criteria for acceptability that are demanded today. Fabre (1984) compiled an annotated bibliography of almost 200 studies completed prior to 1984. For the most part, research findings were impressive, given the caveat of limited research design quality. Notable positive reviews of outcome research were provided by Gersten (1985), Gregory (1983), Kinder and Carnine (1991), Lockery and Maggs (1982), and White (1988). See later for contrary views.

Although Direct Instruction was originally designed to assist disadvantaged students, its emphasis on task characteristics and effective teaching principles transcends learner characteristics, offering value across a range of learners. Willingham and Daniel (2012) made a similar point in noting that “Research shows that instruction geared to common learning characteristics can be more effective than instruction focused on individual differences” (p.16).

Lockery and Maggs (1982) reviewed research indicating success with average children, those with mild, moderate or severe skill deficits, those in resource rooms, withdrawal classes and special classes in regular schools, disadvantaged students (including indigenous and those whose first language is not English), students in special facilities with varying degrees of intellectual disability, and physical disabilities.

Gersten (1985) in his review of studies involving students with a range of disabilities concluded that Direct Instruction tended to produce higher academic gains than traditional approaches. He also suggested that the mastery criterion (in excess of 90%) may be particularly important for special education students, and called for more formative evaluation where only one instructional variable is manipulated, and also, for more instructional dimensions research to highlight those variables alone or in company that are associated with academic gains. Gersten referred to the Leinhardt, Zigmond, and Cooley (1981) study with 105 learning disabled students. The authors noted that three teaching behaviours were strongly associated with student progress in reading - the use of reinforcers, academic focus, and a teacher instruction variable involving demonstration, practice and feedback. Each of these is critical to the definition of direct instruction (Rosenshine, 1979) and supports the assertion that there are teacher behaviours that transcend student characteristics. This study was the first to demonstrate that specific direct instruction principles have value for learning disabled students.

White's (1988) meta-analysis of studies involving learning disabled, intellectually disabled, and reading disabled students restricted its focus to those studies employing equivalent experimental and comparison groups. White reported an effect size of 0.84 standard deviation units for DI over comparison treatments. This is markedly above the 0.25-0.33 standard for educational significance of an educational treatment effect (Stebbins, St. Pierre, Proper, Anderson, & Cerva, 1977). White concluded that " ... instruction grounded in Direct Instruction theory (Engelmann & Carnine, 1982) is efficacious for both mildly and moderately/severely handicapped learners, and in all skill areas on which research has been conducted" (p. 372).
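To make the metric concrete: an effect size of this kind is the standardised mean difference between the treatment and comparison groups, the textbook Cohen's d (a standard definition, offered here as an illustration; it is not a formula quoted from White's paper):

$$d \;=\; \frac{\bar{X}_{\mathrm{DI}} - \bar{X}_{\mathrm{comparison}}}{SD_{\mathrm{pooled}}}$$

On that definition, White's figure of 0.84 means that the average DI student scored 0.84 of a standard deviation above the comparison-group mean, well over twice the 0.25-0.33 threshold for educational significance quoted above.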

Further support for the explicit approach came from Kavale (1990). His summary of research into direct instruction and effective teaching concludes that they are five to ten times more effective for learning disabled students than are practices aimed at altering unobservable learning processes such as perception. Binder and Watkins (1990) described Direct Instruction (along with Precision Teaching) as the approaches best supported by research to address the problems of teaching found in the English-speaking world.

So, evaluations of DI extend back in time. For other, more recent evaluations, see http://www.adihome.org/adi-blog/entry/reviews-supporting-direct-instruction-program-effectiveness.

Note in particular this recent summary:

One of the common criticisms is that Direct Instruction works with very low-level or specific skills, and with lower ability and the youngest students. These are not the findings from the meta-analyses. The effects of Direct Instruction are similar for regular (d=0.99), and special education and lower ability students (d=0.86), higher for reading (d=0.89) than for mathematics (d=0.50), similar for the more low-level word attack (d=0.64) and also for high-level comprehension (d=0.54), and similar for elementary and high school students. The messages of these meta-analyses on Direct Instruction underline the power of stating the learning intentions and success criteria, and then engaging students in moving towards these (summarised from Hattie, 2009, pp. 206-207).

 

So what are these criticisms of Direct Instruction?

Despite the long history of empirical support for Direct Instruction, unsurprisingly there have also been criticisms. Surely, no other approach has so polarised educators as has DI. The criticisms have been based on a number of different grounds. Some are fanciful, some shallow, some purely emotional, and many result from ideologically based beliefs regarding learning.

 

(a) Conspiracy theories

- DI is an IBM/McGraw-Hill conspiracy to oppress the masses/profiteer (Kohn, 2002; Nicholls, 1980).

- DI is a Christian right wing conspiracy (Berliner, 1996).

- “Critics contend that tack [accepting DI] threatens to mandate the rote teaching style favored by religious conservatives and back-to-basics zealots” (Learn, 1998).

- It is designed to fuel a “global workforce training agenda” (Iserbyt, 1999, p.150). It’s to indoctrinate students into submitting to a life in the unskilled workforce (Shannon, 2007).

- It’s really about indoctrination: “When I returned to the United States I realized that America’s transition from a sovereign constitutional republic to a socialist democracy would not come about through warfare (bullets and tanks) but through the implementation and installation of the ‘system’ in all areas of government—federal, state and local. The brainwashing for acceptance of the ‘system’s’ control would take place in the school—through indoctrination and the use of behavior modification, which comes under so many labels: the most recent labels being Outcome-Based Education, Skinnerian Mastery Learning or Direct Instruction” (Iserbyt, 1999, p. XV).

- There is a conspiracy among researchers, publishers, and policy makers (Goodman, 2002; Zemelman, Daniels, & Bizar, 1999). "The research evidence is being distorted and purposefully misrepresented in ideologically consistent ways, in politically consistent ways, in reliably profitable ways" (Allington, 2002).

 

(b) It has negative side effects

- “DI is a teaching method that bypasses the brain and causes an unnatural reflex that is controlled and programmed. This manipulation causes some students to become so stressed that they actually become ill and/or develop nervous tics” (Hayes, 1999).

- “It really does damage kids: socially, morally, as well as intellectually. It’s just too narrow and constrictive,” said Rheta DeVries, professor of curriculum and instruction, Regent’s Center for Early Development and Education, University of Northern Iowa.

- “Tullis makes the claim that ‘early exposure to academics’ has the potential ‘to psychologically damage developing brains,’ and can lead to physical health problems, including (but presumably not limited to) ‘depression, anxiety disorders--even cardiovascular disease and diabetes’” (Engelmann, 2012, p.1).

- DI produces more felony arrests, more time in special education for emotional impairment, lower rates of school completion, and fewer graduates living with their spouses (HighScope; http://www.highscope.org/file/Research/high_scope_curriculum/preschool_validity.pdf).

- DI students can’t think: “When they have to think for themselves, they’re waiting to be told how.” Psychologist Rebecca Marcon, www.titlei.com/samples/direct.htm

- DI damages students, causing delinquency (Schweinhart, Weikart, & Larner, 1986). Further, its "side effects may be lethal" (Boomer, 1988, p. 12). “It (direct instruction) is a scripted pedagogy for producing compliant, conformist, competitive students and adults.”

- “It's extremely authoritarian,” observes Larry Schweinhart of the highly regarded High Scope/Perry Research Project in Ypsilanti, Mich., and can lead children to “dependency on adults and resentment” (Duffrin, 1996, p.4) (cited in Coles, 1998).

- “Direct Instruction has become today's federally-sanctioned child abuse for poor children” (Horn, 2007).

 

(c) Its view of the reading process is wrong (Gollash, 1980).

- DI focusses on phonics, which is a bad approach (Meyer, 2003).

- It focusses on sight words: “Directed Instruction, although it gives lip service (pardon the pun) to phonics, seems to weight the instruction unavoidably in the direction of sight reading merely by virtue of group oral recitation from the text” (Fritzer & Herbst, 1999, p. 46).

- It emphasises phonics at the expense of comprehension (Jordan, Green, & Tuyay, 2005).

- It’s rote learning only, and doesn’t lead to conceptual understanding and problem solving (Ewing, 2011).

- “It's rote, it's memorization, it's not good solid practice,” says Karen Smith, associate director of the National Council of Teachers of English. “It goes against everything we think” (Duffrin, 1996, p.1).

- DI produces “A Nation of Rote Readers” (Coles, 2001).

 

(d) It is incompatible with other more important principles:

- Normalisation (Penney, 1988).

- The wholistic nature of reading (Goodman, 1986; Giffen, 1980)

- A naturalistic educational paradigm (Heshusius, 1991).

- Flexible reciprocal child-teacher interaction (Ashman & Elkins, 1990).

- Teacher professionalism and creativity - the scripts deskill teachers (Denise, 2008; McFaul, 1983), producing “the proletarianization of teacher work” (Giroux, 1985, p. 376).

- Constructivism, which asserts that students create their own knowledge rather than simply absorbing information presented by others, such as teachers. DI is antithetical to the constructivist attitude that there are multiple representations of reality, none of which is automatically nor necessarily superior or inferior to the others. Duffy (2009) reflects how “the direct-instruction researchers have focused on research in which variables are manipulated in tightly controlled experiments … [whereas] the constructivist approach is to study rich learning environments, examining the variables in the context of those environments” (pp. 354-355). You thus can’t compare the two approaches: it would be like comparing apples with oranges. “Each relies on intellectual biases that would leave the other at a disadvantage were we to compare results” (Jonassen, 2009, p. 29). Confrey (1990) puts the constructivist position as being incompatible with Direct Instruction: “We can have no direct or unmediated knowledge of any external or objective reality. We construct our understanding through our experiences, and the character of our experience is influenced profoundly by our cognitive lenses” (p.108).

 

(e) The success of DI is illusory:

- It is based on tests that do not measure real reading (Cambourne, 1979; Kohn, 1999).

- The apparent research support is not persuasive because empirical research can’t answer questions of the superiority of methods (Weaver, 1988). “We don’t have an approach, we have a philosophy” (Horsch, Erikson Institute, cited in Duffrin, 1996, p.6).

- It can’t work because it’s wrong: “ … in education, a priori beliefs about the way children ought to learn or about the relative value of different kinds of knowledge seem to have tremendous force in shaping judgments about effectiveness” (Traub, 2002).

- Project Follow Through did not prove DI was effective (Kohn, 1999).

 

(f) Other approaches are more effective, for example,

- Whole Language (Weaver, 1991),

- Discovery learning (Bay, Staver, Bryan, & Hale, 1992);

- Or other approaches are merely as effective as DI (Kuder, 1990; O’Connor et al., 1993).

 

(g) It may be inappropriate for certain sub groups.

- Those in special education (Heshusius, 1991; Kuder, 1991; Penney, 1988).

- Those with certain learning styles, for example, those with an internal locus of control (McFaul, 1983; Peterson, 1979).

- Learning disabled students: “The failure of Direct Instruction to teach learning disabled children to read seems to be related to bad instructional design” (Allington, 2003).

- Those of high ability (Peterson, 1979).

- It’s not appropriate for indigenous students (Ewing, 2011; Sarra, 2011).

 

(h) Its use is best restricted to basic skill development (Peterson, 1979).

 

(i) Its use is only for the poor and at-risk (Eppley, 2011)

 

(j) It is best used in conjunction with other approaches (Delpit, 1988; Gettinger, 1993; Harper, Mallette, Maheady, & Brennan, 1993; Spiegel, 1992; Stevens, Slavin, & Farnish, 1991).

 

(k) Students might not find it acceptable (Reetz & Hoover, 1992).

- It destroys motivation by having students practise too much. “Heavy doses of practice with exercises that seem pointless to children further deaden interest and thinking” (Baroody & Ginsburg, 1990, p.58).

 

(l) Relationships, not instruction, are what evoke learning (Sarra, 2011; Smith, 2003).

 

(m) A lack of basic humanity.

- Aspects of the programs, such as prescribed curriculum materials and instructions, are viewed as dehumanizing because they are centred in teaching materials rather than in people (Goodman, 1998).

- Engelmann's programs are “oppressive and inhumane” (Ken Goodman in Learn, 1998).

- “Siegfried Engelmann’s DISTAR (Reading Mastery) and ECRI are both based on the very sick philosophical world view that considers man nothing but an animal” (Iserbyt, 1999, p.212).

- DI renders learners passive (Johnson, 2004). “Indeed, it is often regarded as offensive to students, assuming they can only learn from a script; and offensive to educators, assuming they can only teach from a script; and both scripts are written by some old guy in the US” (Sarra, 2011).

 

(n) It’s simply old-fashioned teaching:

“ … the heart of Direct Instruction is group chanting (while following a text) in response to the teacher's scripted hand signals, analogous to the old "blab schools" of the 19th century, in which students recited in groups to memorize and feed back material” (Fritzer, & Herbst, 1999, p.45). … “lock step focus on drill and rote learning” (Fogarty & Schwab, 2012).

 

(o) It’s just Skinnerian behaviourism.

“… instructional approaches now being imposed are something that most in the audience wouldn’t want their own children to suffer. These approaches have, he said, more to do with teaching rats than humans. He urged his audience to reclaim good instruction with attention to the lessons of social constructionism instead of treating students with a behaviorist approach in which, as B.F. Skinner proved, even pigeons can be taught to play ping-pong … DI is a steroidal scripted behaviorist methodology very popular with urban school policymakers and the Reading First thugs who make their curricular choices for them in Title I schools. No middle class suburban parent would ever permit this kind of cognitive decapitation of their children” (Horn, 2007).

- “Engelmann’s DI claims to be scientific as it rests upon the outmoded behaviourism of B.F. Skinner, an approach buried by Chomsky in his review of verbal behaviour way back in 1967” (Sarra, 2011, p.1).

 

(p) It only looks good because it’s old.

“One of the problems is that to have proven programs, you have to have old programs,” adds Richard L. Allington, the chairman of the reading department at the State University of New York at Albany. “Most of these Direct Instruction programs have been around 25 or 26 years, which is why there's more 'research' on them.” If Direct Instruction looks good, Mr. Allington and others say, it may be because there is a dearth of effectiveness data on anything else (Viadero, 1999).

 

(q) It ignores higher order thinking, and, further, stifles it (Doyle, Sanford, & Emmer, 1983).

- Teaching is didactic, so students don’t learn how to have discourse among themselves (Ewing, 2011).

 

(r) Zig shouldn’t be taken seriously:

- “an obscure educationist named Engelmann” (Rundle, 2009, p.1).

- “written by some old guy in the USA” (Sarra, 2011).

 

(s) The effects may be short lived:

The Coalition for Evidence-Based Policy (2012) does not include Direct Instruction among its list of evidence-based approaches because of a perceived lack of long-term effect studies.

The following is from a personal communication (July 11, 2012) from a Coalition spokesperson:

We have reviewed the evidence supporting Direct Instruction and our overall thought is that, while a number of studies have found promising short-term effects of the model, more rigorous evaluations with longer-term follow-ups are needed to determine whether it produces sustained effects on important academic and behavioral outcomes. The reason we look for evidence of sustained effects is to rule out the possibility that any observed short-term effects quickly fade away, a phenomenon which is unfortunately quite common in education. There have been a handful of such long-term studies of Direct Instruction, but they’ve tended to suffer from key limitations that make it difficult to draw firm conclusions about its sustained effectiveness (e.g., because studies had very small sample sizes or Direct Instruction was combined with other interventions when evaluated) (para 1, 2).

Jean Stockard has performed the huge task of compiling a DI research database to enable those interested to study the research themselves and make decisions about program evidence. Find it here: http://nifdi.org/docs/doc_download/205-di-bibliography-1412. See also Cristy Coughlin’s Research on the effectiveness of Direct Instruction programs: An updated meta-analysis at http://nifdi.org/di-research-database?controller=publications&task=show&id=142.

 

(t) A lack of methodological soundness in the research

The What Works Clearinghouse rejects most of the Direct Instruction studies as not meeting its criteria for methodological soundness, and ignores those older than 20 years or so. There has been much criticism of the Clearinghouse over the last five years (Briggs, 2008; Carter & Wheldall, 2008; Engelmann, 2008; Greene, 2010; McArthur, 2008; Reynolds, Wheldall, & Madelaine, 2009; Slavin, 2008; Stockard, 2008, 2010, 2013; Stockard & Wood, 2012, 2013). This criticism has included the criteria used and the inconsistent application of those criteria. For a detailed analysis as applied to WWC determinations about DI programs, see Jean Stockard’s analysis at http://www.nifdi.org/documents-library/doc_download/270-2013-1-examining-the-what-works-clearinghouse-and-its-reviews-of-direct-instruction-programs

Most of the criticisms described above have been ably dealt with by Adams (2004), Adams and Engelmann (1996), Adams and Slocum (2004), Barnes (1985), Carnine (1992, 1994), Ellis and Fouts (1997), Engelmann (2002), Kozloff (2009), and Tarver (1995, 1998).

Of the literature critical of the DI model, much is based on philosophical issues concerning reality and power; on theoretical issues such as the nature of the learning process, the role of teaching, or issues of measurement. Of the few studies in which alternative approaches have proved equivalent or superior, issues of treatment fidelity have arisen. It is rarely made clear whether the model described is the Direct Instruction model or a direct instruction clone of unknown rigour. Nor is it usually specified whether the teachers of any Direct Instruction program have been provided with the training required to ensure the programs are presented according to the presentation protocols.

A surprising feature of much of the criticism is the degree of venom present. In many of the papers, a great antipathy underpins the criticism. There is little pretence of objectivity, and the language is often emotional. One can’t help but wonder what it is about this model that evokes such ire. DI is a “harsh, inflexible, and depersonalizing approach”; Jalongo would “like to see a stake driven in the heart of DISTAR” (Jalongo, 1999, p. 139).

The prevailing subtext seems to be that the writer doesn’t approve of the system because it contradicts the philosophy/beliefs of the writer. It must be wrongheaded because constructivism is right, and this system doesn’t fit with constructivism. In logic, this error is called begging the question. "It's rote, it's memorization, it's not good solid practice," says Karen Smith, associate director of the National Council of Teachers of English. "It goes against everything we think” (Duffrin, 1996, p.4).

Perhaps the most egregious aspect of the criticisms is that relatively few dispute the effectiveness of the approach. It appears that, for most, the outcomes are not in dispute, but the process is not one with which many teachers feel comfortable. Thus the dismissal of DI appears to place teacher comfort before student success.

 

Scripts and human error

Recently, I was watching a program called Life, Death And Mistakes, which focussed upon human error in various performance fields, and what steps different occupations are beginning to adopt to reduce the impact of these human factors on performance. It gave me pause to think about instructional scripts that are such an important part of DI programs.

I was struck by the manner in which other professions, far from being offended by protocols and checklists, have learned to rely on (and benefit from) them. The program showed examples of airplane pilots in cockpit emergencies, surgical teams in patient crisis situations, fire personnel in dangerous settings, post-operative transfer medicos, and Formula One pit crews – all making use of these strategies to reduce human error, save lives (both their own and those of their charges), and increase their efficiency. There was no suggestion from the various individuals that their creativity was stifled, or that their work became demeaning. In fact, job satisfaction was elevated and stress more easily managed when they knew that they didn’t have to “wing it”. Interestingly, protocols and checklists are commonly employed in those schools and districts that have adopted Response to Intervention as their framework for preventing and ameliorating student failure.

Perhaps this is a signpost to a future in education too. Like all professionals, teachers are fallible, and thus prone to human error. The issue is how to reduce this error across organisations generally, including those involved in education, not solely in managing emergency situations. Variability in instruction was once seen as a quality to be promoted. I recall that in Victoria, in the early stage of the Whole Language domination of curriculum, there were Innovation Grants available to teachers who could devise a plan that involved them doing something different. As a visiting psychologist in the school system, I saw some of those innovations in practice. They were mostly an embarrassment to education, as pet theories reigned supreme without any requirement for the evaluation of subsequent student performance. I had applied for a grant to use Corrective Reading in a local high school, as it had never been used in the region at that time. However, the application was rejected because the WL panel considered DI to be discredited, and inappropriate for students.

A comment in the Life, Death And Mistakes TV program stood out for me, and my paraphrase of it is: “It’s the idea that you standardise everything you can, and only in those circumstances that are unforeseeable do you need to improvise. That surety is what makes our work better.” Consider the contrast between that model and the (once?) popular whole language edict that you teach “in the moment”, responding continuously in an ad hoc (but invariably brilliant) manner to each student’s needs.

 

EBP in medicine and psychology

During the 1990s, while evidence-based practice (EBP) in medicine was being discussed, the American Psychological Association (Chambless & Ollendick, 2001) introduced the term empirically supported treatments as a means of highlighting differential psychotherapy effectiveness. Prior to that time, many psychologists saw themselves as practising a craft in which competence arises through a combination of personal qualities, intuition, and experience. The result was extreme variability of effectiveness among practitioners, a problem also evident in education. The proposal was to devise a means of rating therapies for various psychological problems, and for practitioners to use these ratings as a guide to practice. The criteria for a treatment to be considered well established required efficacy to be established through two controlled clinical outcomes studies or a large series of controlled single case design studies. It also insisted on treatment manuals to ensure treatment fidelity, and the provision of clearly specified client characteristics for the study in question. A second level involved criteria for probably efficacious treatments. These criteria required fewer studies, and/or a lesser standard of rigour. The third category comprised experimental treatments – those without sufficient evidence to achieve probably efficacious status.

There are obvious similarities between these therapy requirements and the criteria for acceptability for studies demanded by evaluation bodies such as the What Works Clearinghouse.

There was significant resistance displayed by practitioners towards the adoption of EBP in the fields of medicine and psychology. However, as the principles have been espoused in these professions since the early nineties, a new generation of practitioners has been exposed to EBP in training and has generally accepted it as the normal standard for practice. This has occurred among most young practitioners because their training has emphasised the centrality of evidence in competent practice. The notion of manualised treatments is one with which they feel very comfortable. That is not to say that the principles of EBP are always adhered to in practice. The older brigade as a group are less accepting of change: “We often know something doesn't work, but out there are thousands and thousands of doctors who have been taught certain procedures and that's all they do … changing of clinician beliefs and behaviour, even in the face of credible evidence, remains highly challenging” (Medew, 2012, p.5).

There is evidence that many teachers feel that they have to “wing it” in their approaches to the teaching of reading (Cunningham, Perry, Stanovich, & Stanovich, 2004; Leader-Janssen & Rankin-Erickson, 2013; Spear-Swerling, Brucker, & Alfano, 2005), because neither their pre-service nor their in-service training has equipped them adequately for the task. In education, the equivalent of a manualised treatment is scripted instruction, and it has been derided by many in the education profession, as outlined earlier. As we’ve seen, this is in stark contrast to other fields in which the benefits have become apparent, and have outweighed the understandable resistance to changing one’s practice.

 

Changed attitudes

It is of interest that many DI teachers lose their initial discomfort once they perceive the effectiveness of the approach. According to numerous surveys, their attitudes were changed by the experience of their own and their students’ success (Bessellieu, Kozloff, & Rice, 2001; Cossairt, Jacobs, & Shade, 1990; Gersten, Carnine, & Cronin, 1986; Gervase, 2005; Hands, 1993; Proctor, 1989).

For example:

Gersten et al. (1986) evaluated perceptions of teachers and paraprofessionals with regard to a Direct Instruction program. Teachers were interviewed toward the end of the first and second year of implementation. Initially, teachers were concerned with the high degree of structure leaving little room for fun activities and felt that scripted lessons were overly mechanical. At least half of the teachers believed that their teaching philosophy conflicted with that of Direct Instruction. By mid-year, Gersten et al. found that teachers and paraprofessionals generally came to accept the program. By the end of the first year, attitudes had improved along with student achievement. Gersten et al. found that by the end of the second year of implementation, all but one teacher agreed with the main objectives of Direct Instruction as a program for educationally disadvantaged students (Gervase, 2005, pp. 26-27).

 

What’s the future for systematic reviews like WWC?

The issue of systematic reviews like WWC considering only gold-standard research has created a new problem for education, and especially for evidence-based education. So few studies currently meet the criteria that external validity has become a complicating issue, even when studies are gold standard.

The stumbling block, that only large-scale, methodologically sophisticated studies are considered worthwhile, somehow needs to be resolved. There are some alternatives: a single study involving a small number of schools or classes may not be conclusive in itself, but many such studies, preferably done by many researchers in a variety of locations, can add some confidence that a program's effects are valid (Slavin, 2003). If one obtains similar positive benefits from an intervention across different settings and personnel, there is added reason to prioritise the intervention for a large gold-standard study. There is a huge body of data out there that is no longer considered fit for human consumption. It seems a waste that there are not currently analysis methods deemed capable of making use of these studies.
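To illustrate how many small studies can jointly add such confidence, the standard fixed-effect meta-analytic estimate (a textbook formula, offered here as an illustration rather than a method proposed by Slavin or the reviews discussed in this post) pools the study-level effect sizes $d_i$ using inverse-variance weights:

$$\bar{d} \;=\; \frac{\sum_i w_i\, d_i}{\sum_i w_i}, \qquad w_i \;=\; \frac{1}{\widehat{\mathrm{Var}}(d_i)}$$

Since the standard error of the pooled estimate is $\sqrt{1/\sum_i w_i}$, each additional small study tightens it; this is why replication across researchers and locations carries evidential weight that no single small study can.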

It is important that issues of validity and reliability of the systematic reviews are continuously examined, and this process has been gathering momentum. The criticisms have been several: of the criteria for what constitutes acceptable research; of slowness in producing evaluations; of inconsistency in applying standards for what constitutes acceptable research; of the inclusion of studies that have not been peer reviewed; and of a failure to attend to fidelity of implementation issues in the WWC analyses. This latter criticism can be subsumed under a broader criticism of ignoring external validity or generalisation in the reviews.

The focus of syntheses must be on what has worked, that is, programs for which there is evidence of an aggregate effect that is internally valid. I would argue that such evidence, although certainly important, is necessary but not sufficient for those stakeholders enacting educational policies. What the superintendent of a school district wants to know is not so much what has worked but what will work. To be relevant, a good synthesis should give policy makers explicit guidance about program effectiveness that can be tailored to specific educational contexts: When and where will a given program work? For whom will it work? Under what conditions will it work the best? For causal inferences to be truly valid, both causal estimation and generalization should at the very least be given equal weight (Briggs, 2008, p.20).

The point here is that whilst RCTs may provide the best bulwark against threats to internal validity, the acceptance of small-scale and brief RCTs creates a strong threat to external validity. Thus, the large-scale reviews have their own issues to deal with before they can be unquestioningly accepted as the royal road to truth. Further, it may also be quite some time before gold-standard research reaches the critical mass needed to make decisions about practice easier.

It has also been queried whether educational research can ever have randomised control trials as the norm, however desirable that may appear to be. One issue is that the high cost of such research is not matched by the available funding. For example, the US Department of Education spends about $80 million annually on educational research, whereas the US Department of Health and Human Services provides about $33 billion for health research. In Australia, whilst the budgets for the provision of health and education services are roughly similar, the funding for health research is about 16 times that for educational research (Australian Bureau of Statistics, 2010). Another issue concerns the limitations on methodological purity in educational research. Students in schools cannot be routinely randomly selected for intervention as can occur in other settings, such as individual therapies in medicine and psychology. Thus, RCTs are arguably unlikely to ever form the greater part of educational research.

Perhaps as a response to this dilemma, attention is now being paid to single-case research as a possible valid and workable adjunct to RCTs in attempting to document what works in education settings. “The addition of statistical models for analysis of single-case research, especially measurement of effect size, offers significant potential for increasing the use of single-case research in documentation of empirically-supported treatments (Parker et al., 2007; Van den Noortgate & Onghena, 2003)” (Horner, Swaminathan, Sugai, & Smolkowski, 2012, p.271). In recognition of this potential, the WWC released a document on single-case designs, providing initial WWC standards for assessing such single-case studies (Kratochwill et al., 2010). In a generally supportive response, Wolery (2013) offered a number of suggestions for improvement on this initial attempt.

Interestingly, early in 2013, the WWC agreed to reconsider their policies and procedures, and requested that interested groups/individuals make submissions. No announcement has yet been forthcoming.

So, where does that leave us? At least two perspectives that have been put forward are worthy of follow-up. O’Keefe et al. (2012) recognise that the current system needs improvement, but have a sense of optimism:

Empirically supported treatment is still a relatively new innovation in education and the methods for conducting effective reviews to identify ESTs are in their infancy. Based on our experience in comparing these review systems we believe that review methods can and should continue to develop. There is still a great need to build review methods that can take advantage of a larger range of studies yet can provide recommendations in which practitioners can place a high degree of confidence (p.362).

In contrast, Greene (2010) views the whole review enterprise with a more jaundiced eye, and urges consumers to rely upon their own resources.

We have no alternative to sorting through the evidence and trying to figure these things out ourselves. We may rely upon the expertise of others in helping us sort out competing claims, but we should always do so with caution, since those experts may be mistaken or even deceptive (Greene, 2010, para 15).

Given what has transpired thus far, it would seem premature for teachers, schools, and education policy makers to base their decision-making entirely on the results of systematic reviews such as those from WWC. The question “Are there any immediate shortcuts to discerning the gold from the dross?” appears to remain unresolved for the immediate future. This is unfortunate, as there does appear to be increasing acknowledgement that evidence-based practice is beginning to produce a new direction for education.

So DI continues, and with greater organisational support than in the early years: the National Institute for Direct Instruction (NIFDI, http://www.nifdi.org/), and the Association for Direct Instruction (ADI, http://www.adihome.org/) in particular. It has grown from a small series of programs for basic skill development to a wide-ranging series of programs that include higher-order skills, for example, literary analysis, logic, chemistry, critical reading, geometry, and social studies. Use has been made of technology through computer-assisted instruction, low-cost networking, and videodisc courseware, and the model has been employed for languages other than English.

There seems little doubt that DI will continue to be a viable and productive model, although a question mark remains over the extent of its adoption by the school system. The major hurdle continues to be its lack of attractiveness for many educators, and the resultant absence of adoption into classrooms. More than thirty years ago, Maggs and White (1982) wrote despairingly, "Few professions are more steeped in mythology and less open to empirical findings than are teachers" (p. 131). In the same decade, Ruddell and Sperling (1988) expressed a general concern at the gulf between literacy research findings and teachers' practice. They called for research aimed at discovering why empirically proven practices are "thwarted, undermined, or ignored in the classroom" (p. 319). It is easy to find the same sentiments expressed today. For more on evidence-based practice, see http://www.adihome.org/adi-blog/entry/first-blog

The concern about the gulf between research and practice has been expressed for a long time. Rogers (1983, cited in Ruddell & Sperling, 1988) asserted that there is often a period of 25 to 35 years between a research discovery and its serious implementation. Surely that time is arriving any day now?

 

 

References

Adams, G.L. (1996). Project Follow Through: In-depth and beyond. In Adams, G., & Engelmann, S. (Eds.). Research on Direct Instruction. Seattle, WA: Educational Achievement Systems. Retrieved from http://pages.uoregon.edu/adiep/ft/adams.htm

Adams, G.L. (2004). The need for an independent review of the study conducted by Dr. Randall Ryder. Education News, March 04. Retrieved from http://www.educationnews.org/need-for-an-independent-review.htm

Adams, G.L., & Engelmann, S. (1996). Research on Direct Instruction: 20 years beyond DISTAR. Seattle, WA: Educational Achievement Systems.

Adams, G.L., & Slocum, T.A. (2004). A critical review of Randall Ryder’s report of Direct Instruction Reading in two Wisconsin school districts. Journal of Direct Instruction, 4(2), 111–127.

Alferink, L.A., & Farmer-Dougan, V. (2010). Brain-(not) based education: Dangers of misunderstanding and misapplication of neuroscience research. Exceptionality, 18(1), 42-52.

Alfieri, L., Brooks, P. J., Aldrich, N. J., & Tenenbaum, H. R. (2010). Does discovery-based instruction enhance learning? Journal of Educational Psychology, 103(1), 1-18.

Allington, R.L. (2002). Big brother and the national reading curriculum: How ideology trumped evidence. Portsmouth, NH: Heinemann.

Allington, R.L. (2003). In L. Poynor & P. Wolfe (Eds.), Marketing fear in America's public schools: The real war on literacy (p. 58).

Ashman, A., & Elkins, J. (1990). Educating children with special needs. New York: Prentice Hall.

Australian Bureau of Statistics (2010). ABS Research and Experimental Development, All Sector Summary, Australia, 2008-09. Retrieved from http://www.abs.gov.au/AUSSTATS/abs@.nsf/DetailsPage/8112.02008-09?OpenDocument

Barnes, D. (1985). Why not direct instruction? The Australian Educational & Developmental Psychologist, 1, 59-62.

Baroody, A.J., & Ginsburg, H.P. (1990). Children's mathematical learning: A cognitive view. Journal for Research in Mathematics Education Monograph, Vol. 4, Constructivist views on the teaching and learning of mathematics, 51-64, 195-210.

Bay, M., Staver, J.R., Bryan, T., & Hale, J.B. (1992). Science instruction for the mildly handicapped: Direct instruction versus discovery teaching. Journal of Research in Science Teaching, 29, 555-570.

Becker, W.C. (1977). Teaching reading and language to the disadvantaged: What we have learned from field research. Harvard Educational Review, 47, 518-543.

Becker, W. C., & Gersten, R. (1982). A follow-up to Follow Through: The later effects of the direct instruction model on children in fifth and sixth grades. American Educational Research Journal, 19, 75-92.

Berliner, D.C. (1996). Educational psychology meets the Christian right: Differing views of children, schooling, teaching, and learning. Retrieved from http://courses.ed.asu.edu/berliner/readings/differingh.htm

Bessellieu, F.B., Kozloff, M.A., & Rice, J.S. (2001, Spring). Teachers’ perceptions of direct instruction teaching. Direct Instruction News, 14-18. Retrieved from http://people.uncw.edu/kozloffm/teacherperceptdi.html

Binder, C., & Watkins, C.L. (1990). Precision teaching and Direct Instruction: Measurably superior instructional technology in schools. Performance Improvement Quarterly, 3, 74-96.

Boomer, G. (1988). Standards & literacy. Two hundred years on the road to literacy: Where to from here? Directions: Literacy. Supplement to Education Victoria, June. Melbourne, Australia: State Board of Education.

Briggs, D.C. (2008). Synthesizing causal inferences. Educational Researcher, 37(1), 15-22.

Brown, A.L., & Campione, J.C. (1990). Interactive learning environments and the teaching of science and mathematics. In M. Gardner et al. (Eds.), Toward a scientific practice of science education. Hillsdale, NJ: Erlbaum.

Byrne, B. (2011). Evaluating the role of phonological factors in early literacy development. In S. Brady, D. Braze, & C.A. Fowler (Eds.), Explaining individual differences in reading: Theory and evidence. New York: Psychology Press.

Cambourne, B. (1979). How important is theory to the reading teacher? Australian Journal of Reading, 2, 78-90.

Carnine, D. (1992). Introduction. In D. Carnine and E. J. Kameenui (Eds.), Higher order thinking: Designing curriculum for mainstreamed students. Austin, TX: Pro-Ed.

Carnine, D. (1994). Introduction to the mini-series: Educational tools for diverse learners. School Psychology Review, 23(3), 341-350.

Carter, M., & Wheldall, K. (2008). Why can't a teacher be more like a scientist? Science, pseudoscience and the art of teaching. Australasian Journal of Special Education, 32(1), 5-21.

Chambless, D.L., & Ollendick, T.H. (2001). Empirically supported psychological interventions: Controversies and evidence. Annual Review of Psychology, 52, 685-716.

Clark, S.K., Jones, C.D., Reutzel, R., & Andreasen, L. (2013). An examination of the influences of a teacher preparation program on beginning teachers' reading instruction. Literacy Research and Instruction, 52(2), 87-105.

Coles, G. (1998, Dec. 2). No end to the reading wars. Education Week. Retrieved from http://www.edweek.org/ew/vol-18/14coles.h18

Coles, G. (2001). Bush's 'scientific' vision: A nation of rote readers. Newsday, June 23. Retrieved from http://www.newsday.com/culture-watch-bush-s-scientific-vision-a-nation-of-rote-readers-1.801819

Confrey, J. (1990). What constructivism implies for teaching. Journal for Research in Mathematics Education Monograph, Vol. 4, Constructivist views on the teaching and learning of mathematics, 107-122, 195-210.

Cossairt, A., Jacobs, J., & Shade, R. (1990). Incorporating direct instruction skills throughout the undergraduate teacher training process: A training and research direction for the future. Teacher Education and Special Education, 13, 167-171.

Coughlin, C. (2011). Research on the effectiveness of Direct Instruction programs: An updated meta-analysis. Paper Presented at the Annual Meetings of the Association for Behavior Analysis International, May, 2011. Retrieved from http://nifdi.org/di-research-database?controller=publications&task=show&id=142

Cunningham, A.E., Perry, K.E., Stanovich, K.E., & Stanovich, P.J. (2004). Disciplinary knowledge of K–3 teachers and their knowledge calibration in the domain of early literacy. Annals of Dyslexia, 54(1), 139-167.

Delpit, L. D. (1988). The silenced dialogue: Power and pedagogy in educating other people's children. Harvard Educational Review, 58, 280-298.

Denise, G. (2008). Scripted curriculum: Scourge or salvation? Educational Leadership, 65, 80.

Magliaro, S., Lockee, B., & Burton, J. (2005). Direct instruction revisited: A key model for instructional technology. Educational Technology Research & Development, 53(4), 41-55. Retrieved from http://projects.ict.usc.edu/itw/materials/Lockee/DI%20Revisited.pdf

Doyle, W., Sanford, J., & Emmer, E. (1983). Managing academic tasks in junior high school: Background, design and methodology (Report No. 6185). Austin: University of Texas, Research and Development Center for Teacher Education.

Duffrin, E. (1996, Sep 1). Direct Instruction making waves. Catalyst, 8(1), 1-7. Retrieved from http://www.catalyst-chicago.org/issues/1996/09/direct-instruction

Duffy, T. M. (2009). Building lines of communication and a research agenda. In S. Tobias & T. M. Duffy (Eds.), Constructivist Instruction: Success or failure? (pp. 351-367). New York: Routledge.

Ellis, A.K., & Fouts, J.T. (1997). Research on educational innovations. Larchmont, NY: Eye on Education, Inc.

Engelmann, O. (2012). Letter to the editor of Scientific American Mind. The premature death of preschool. The National Institute for Direct Instruction. Retrieved from http://www.nifdi.org/documents-library/doc_download/243-the-death-of-preschool-letter-to-editor

Engelmann, S., & Carnine, D. (1982). Theory of instruction. New York: Irvington.

Engelmann, S. (1980). Toward the design of faultless instruction: The theoretical basis of concept analysis. Educational Technology, Feb. 80, 28-36.

Engelmann, S. (2002). A response to Allington. Education News, Mar 2. Retrieved from http://www.educationnews.org/articles/allington-leveled-serious-allegations-against-direct-instruction.html

Engelmann, S. (2008). Machinations of What Works Clearinghouse. Retrieved from http://zigsite.com/PDFs/MachinationsWWC(V4).pdf.

Engelmann, S., Becker, W.C., Carnine, D., & Gersten, R. (1988). The Direct Instruction Follow Through model: Design and outcomes. Education and Treatment of Children, 11(4), 303-317.

Eppley, K. (2011). Reading Mastery as pedagogy of erasure. Journal of Research in Rural Education, 26(13). Retrieved from http://jrre.psu.edu/articles/26-13.pdf

Ewing, B. (2011). Direct Instruction in mathematics: Issues for schools with high indigenous enrolments: A literature review. Australian Journal of Teacher Education, 36(5), 64-91.

Fabre, T. (1984). The application of direct instruction in special education. An annotated bibliography. Unpublished manuscript, University of Oregon.

Fogarty, W., & Schwab, R.G. (2012). Indigenous education: Experiential learning and learning through country. Working Paper No. 80/2012. Centre for Aboriginal Economic Policy Research, The Australian National University.

Fritzer, P., & Herbst, P. (1999). A cautionary tale: Directed instruction reading. Contemporary Education, 70(2), 45-47.

Garrison, J., & MacMillan, C. (1994). Process-product research on teaching: Ten years later. Educational Theory, 44(4). Retrieved November 2009, from http://www.ed.uiuc.edu/EPS/Educational-Theory/Contents/44_4_Garrison.asp

Gersten, R.M. (1985). Direct instruction with special education students: A review of evaluation research. Journal of Special Education, 19(1), 41-58.

Gersten, R., Carnine, D., & Cronin D. (1986). A multifaceted study of change in seven inner-city schools. Elementary School Journal, 86, 257-276.

Gersten, R.M., Woodward, J., & Darch, C. (1986). Direct Instruction: A research based approach to curriculum design and teaching. Exceptional Children, 53(1), 17-31.

Gervase, S. (2005). Reading Mastery: A descriptive study of teachers' attitudes and perceptions towards Direct Instruction (Electronic thesis or dissertation). Retrieved from http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1120001760

Gettinger, M. (1993). Effects of invented spelling and direct instruction on spelling performance of second-grade boys. Journal of Applied Behavior Analysis, 26, 281-291.

Giffen, D. (1980). A letter-a-reply - an invitation to respond. Reading Around, 8, 84-88.

Giroux, H. (1985). Teachers as intellectuals. Social Education, 49(15), 376-379.

Gollash, F. (1980). A review of Distar Reading I from a psycholinguistic viewpoint. Reading Education, 5, 78-86.

Goodman, K. (2002). When the fail-proof reading programs fail, blow up the colleges of education. Retrieved from http://tlc.ousd.k12.ca.us/~acody/goodman.html

Goodman, K. S. (1986). What's whole in whole language. Richmond Hill, Ontario: Scholastic.

Goodman, K. S. (1998). In defense of good teaching: What teachers need to know about the reading wars. Urbana, IL: National Council of Teachers of English.

Greenberg, J., McKee, A., & Walsh, K. (2013). NCTQ Teacher Prep Review. National Council on Teacher Quality. Retrieved from http://www.nctq.org/dmsStage/Teacher_Prep_Review_2013_Report

Greene, J.P. (2010). What doesn’t work clearinghouse. Education Next. Retrieved from http://educationnext.org/what-doesnt-work-clearinghouse/

Gregory, R.P. (1983). Direct Instruction, disadvantaged and handicapped children: A review of the literature and some practical implications. Parts 1 & 2. Remedial Education, 18(3), 108-114, 130-136.

Grossen, B. (Winter 1995-6). Overview: The story behind Project Follow Through. Effective School Practices, 15. Retrieved from http://darkwing.uoregon.edu/~adiep/ft/grossen.htm

Hands, B.P. (1993). Measurement of teacher attitude to direct instruction. Perth: Edith Cowan University. Retrieved from http://trove.nla.gov.au/work/153169822

Harper, G. F., Mallette, B., Maheady, L., & Brennan, G. (1993). Classwide student tutoring teams and Direct Instruction as a combined instructional program to teach generalizable strategies for mathematics word problems. Education & Treatment of Children, 16, 115-134.

Hattie, J.A.C. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. London and New York: Routledge.

Hayes, T.J. (1999). Traditional education vs. Direct Instruction. What's the difference between these teaching methods? Education Reporter, 156. Retrieved from http://www.eagleforum.org/educate/1999/jan99/focus.html

Hempenstall, K. (1990). Direct Instruction: Experiences of an educational psychologist. Paper presented at the 13th National Conference of the Australian Behaviour Modification Association. Melbourne, Australia.

Heshusius, L. (1991). Curriculum based assessment and direct instruction: Critical reflections on fundamental assumptions. Exceptional Children, 57, 315-328.

Horn, J. (2007). Direct Instruction: Learning for rats, applied to children. Schools Matter, July 29. Retrieved from http://www.schoolsmatter.info/2007/07/direct-instruction-learning-for-rats.html

Horner, R.H., Swaminathan, K., Sugai, G., & Smolkowski, K. (2012). Considerations for the systematic analysis and use of single-case research. Education and Treatment of Children, 35(2), 269-290.

Iserbyt, C.T. (1999). The deliberate dumbing down of America. Ravenna, Ohio: Conscience Press. Retrieved from www.deliberatedumbingdown.com/MomsPDFs/DDDoA.sml.pdf

Jalongo, M.R. (1999). Editorial: On behalf of children. Early Childhood Education Journal, 26(3), 139–141.

Johnson, G.M. (2004). Constructivist remediation: Correction in context. International Journal of Special Education, 19(1), 72-88.

Jonassen, D. (2009). Reconciling a human cognitive architecture. In S. Tobias & T. M. Duffy (Eds.), Constructivist instruction: Success or failure? (pp. 13-33). New York: Routledge.

Jordan, N.L., Green, J., & Tuyay, S. (2005). Basal readers and reading as socialization: What are children learning? Language Arts, 82(3), 204-213.

Kavale, K.A. (1990). Variances & verities in learning disability interventions. In T. Scruggs & B. Wong (Eds.), Intervention research in learning disabilities (pp.3-33). New York: Springer Verlag.

Kinder, D., & Carnine, D. (1991). Direct Instruction: What it is and what it is becoming. Journal of Behavioral Education, 1(2), 193-213.

Kohn, A. (1999). Early childhood education: The case against Direct Instruction of academic skills. In Alfie Kohn (Ed.), The schools our children deserve. Boston: Houghton Mifflin. Retrieved from http://www.alfiekohn.org/teaching/ece.htm

Kohn, A. (2002, October). The 500-pound gorilla. Phi Delta Kappan, 84(2), 112-119.

Kratochwill, T.R., Hitchcock, J., Horner, R.H., Levin, J.R., Odom, S.L., Rindskopf, D.M., & Shadish, W.R. (2010). Single-case designs technical documentation. Retrieved from http://ies.ed.gov/ncee/wwc/pdf/wwc_scd.pdf

Kuder, S. J. (1990). Effectiveness of the DISTAR reading program for children with learning disabilities. Journal of Learning Disabilities, 23(1), 69-71.

Kuder, S. J. (1991). Language abilities and progress in a Direct Instruction reading program for students with learning disabilities. Journal of Learning Disabilities, 24, 124-127.

Leader-Janssen, E.M., & Rankin-Erickson, J.L. (2013). Preservice teachers' content knowledge and self-efficacy for teaching reading. Literacy Research and Instruction, 52(3), 204-229.

Learn, S. (1998). Educators put reading to the test. The Oregonian. Retrieved from http://www.shearonforschools.com/direct_instruction.htm

Lemov, D., Woolway, E., & Yezzi, K. (2012). Practice perfect: 42 rules for getting better at getting better. San Francisco: Jossey-Bass.

Lockery, L., & Maggs, A. (1982). Direct instruction research in Australia: A ten year analysis. Educational Psychology, 2, 263-288.

Maggs, A., & White, R. (1982). The educational psychologist: Facing a new era. Psychology in the Schools, 19, 129-134.

Marchand-Martella, N.E., Martella, R.C., Modderman, S.L., Petersen, H.M., & Pan, S. (2013). Key areas of effective adolescent literacy programs. Education and Treatment of Children, 36(1), 161-184.

McArthur, G. (2008). Does What Works Clearinghouse work? A brief review of Fast ForWord®. Australasian Journal of Special Education, 32(1), 101-107.

McFaul, S. A. (1983, April). An examination of Direct Instruction. Educational Leadership, 67-69.

Medew, J. (2012). Push for tougher line on surgery. The Age, October 4, p.5.

Medley, D. M. (1982). Teacher effectiveness. In H. E. Mitzel (Ed.), Encyclopaedia of Educational Research (5th ed., Vol. 4). (pp. 1894-1903). New York: The Free Press.

Meyer, R. (2003). Captives of the script: Killing us softly with phonics. Rethinking Schools Online, 17(4). Retrieved from http://www.rethinkingschools.org/special_reports/bushplan/capt174.shtml

Nicholls, C. (1980). Kentucky Fried reading scheme. Reading Education, 6, 18-22.

O’Keeffe, B.V., Slocum, T.A., Burlingame, C., Snyder, K., & Bundock, K. (2012). Comparing results of systematic reviews: Parallel reviews of research on repeated reading. Education & Treatment of Children, 35(2), 333-366.

O'Connor, R. E., Jenkins, J. R., Cole, K. N., & Mills, P. E. (1993). Two approaches to reading instruction with children with disabilities: Does program design make a difference? Exceptional Children, 59, 312-323.

Penney, R. K. (1988, March). Compatability [sic] of early intervention programmes with normalization. Bulletin of the Australian Psychological Society, 28-29.

Peterson, P. L. (1979, Oct.). Direct instruction: Effective for what and for whom? Educational Leadership, 46-48.

Proctor, T.J. (1989). Attitudes toward direct instruction. Teacher Education and Special Education, 12, 40-45.

Reetz, L.J., & Hoover, J.H. (1992). The acceptability and utility of five reading approaches as judged by middle school LD students. Learning Disabilities Research & Practice, 7, 11-15.

Reynolds, M., Wheldall, K., & Madelaine, A. (2009). The devil is in the detail regarding the efficacy of Reading Recovery: A rejoinder to Schwartz, Hobsbaum, Briggs, and Scull. International Journal of Disability, Development and Education, 56, 17-35.

Rist, M. C. (1992). Learning by heart. The Executive Educator, November, 12-19.

Rosenshine, B.V. (1979). Content, time and direct instruction. In P.L. Peterson & H.J. Walberg (Eds.), Research on teaching: Concepts, findings and implications (pp. 28-57). Berkeley, CA: McCutchan.

Rosenshine, B. V. (1980). Direct instruction for skill mastery. Paper presented to the School of Education, University of Milwaukee, Wisconsin.

Rosenshine, B. V., & Berliner, D. C. (1978). Academic engaged time. British Journal of Teacher Education, 4, 3-16.

Ruddell, R.B., & Sperling, M. (1988). Factors influencing the use of literacy research by the classroom teacher: Research review and new directions. In J.E. Readence (Ed.), Dialogues in literacy research (pp. 319-329). Chicago: National Reading Conference Inc.

Rundle, G. (2009, Oct 29). Review: Noel Pearson’s Radical Hope. Retrieved from http://www.crikey.com.au/2009/10/23/rundle-review-noel-pearsons-radical-hope/

Sarra, C. (2011). Not the only way to teach Indigenous students. Retrieved from http://chrissarra.wordpress.com/2011/05/26/not-the-only-way-to-teach-indigenous-students/

Schweinhart, L.J., Weikart, D.P., & Larner, M.B. (1986). Consequences of three preschool curriculum models through age 15. Early Childhood Research Quarterly, 1(1), 15-45.

Shannon, P. (2007). Reading against democracy: The broken promises of reading instruction. Portsmouth, NH: Heinemann.

Slavin, R. E. (2003). A reader's guide to scientifically based research. Educational Leadership, 60(5), 12-16. Retrieved from http://www.ascd.org/publications/ed_lead/200302/slavin.html

Slavin, R.E. (2008). Evidence-based reform in education: Which evidence counts? Educational Researcher, 37(1), 47-50.

Slocum, T.A., Spencer, T.D., & Detrich, R. (2012). Best available evidence: Three complementary approaches. Education and Treatment of Children, 35(2), 153-181.

Smith, F. (2003). Unspeakable acts, unnatural practices: Flaws and fallacies in "scientific" reading instruction. Portsmouth, NH: Heinemann.

Spear-Swerling, L., Brucker, P.O., & Alfano, M.P. (2005). Teachers' literacy-related knowledge and self-perceptions in relation to preparation and experience. Annals of Dyslexia, 55(2), 266-296.

Spiegel, D. L. (1992). Blending whole language and systematic direct instruction. The Reading Teacher, 46 (1), 38-44.

Stebbins, L., St. Pierre, R.G., Proper, E.C., Anderson, R.B., & Cerva, T.R. (1977). Education as experimentation: A planned variation model: Vol. IV. Cambridge, MA: Abt Associates.

Stevens, R. J., Slavin, R. E., & Farnish, A. M. (1991). The effects of co-operative learning and direct instruction in reading comprehension strategies on main idea identification. Journal of Educational Psychology, 83, 8-16.

Stockard, J. (2008). The What Works Clearinghouse Beginning Reading reports and rating of Reading Mastery: An evaluation and comment. Technical Report 2008-04. National Institute for Direct Instruction. Retrieved from http://www.nifdi.org/research/what-works-clearinghouse

Stockard, J. (2010). An analysis of the fidelity implementation policies of the What Works Clearinghouse. Current Issues in Education, 13(4). Retrieved from http://cie.asu.edu/

Stockard, J. (Spring, 2013). Examining the What Works Clearinghouse and its reviews of Direct Instruction programs. Technical Report 2013-1, National Institute for Direct Instruction. Retrieved from http://www.nifdi.org/documents-library/doc_download/270-2013-1-examining-the-what-works-clearinghouse-and-its-reviews-of-direct-instruction-programs

Stockard, J., & Wood, T.W. (2012). Reading Mastery and learning disabled students: A comment on the What Works Clearinghouse review. National Institute for Direct Instruction. Retrieved from http://www.nifdi.org/documents-library/doc_download/245-response-to-wwc-rm-and-ld-july-2012

Stockard, J., & Wood, T.W. (2013). The WWC review process: An analysis of errors in two recent reports. Technical Report 2013-4. Office of Research and Evaluation, National Institute for Direct Instruction. Retrieved from http://www.nifdi.org/documents-library/doc_download/283-technical-report-2013-4-wwc

Taber, K.S. (2010, July 6). Constructivism and Direct Instruction as competing instructional paradigms: An essay review of Tobias and Duffy's Constructivist Instruction: Success or Failure? Education Review, 13(8). Retrieved from http://www.edrev.info/essays/v13n8index.html

Tarver, S.G. (1995). Direct Instruction. In W. Stainback & S. Stainback (Eds.), Controversial issues confronting special education: Divergent perspectives (2nd ed.). Boston: Allyn & Bacon Publishing Company.

Tarver, S.G. (1998). Myths and truths about Direct Instruction (DI). In Phonics and beyond: Literacy in the 21st century, conference proceedings book from the Sixteenth Annual Dyslexia Conference of the Wisconsin Branch of the International Dyslexia Society. Reprinted in Effective School Practices, 17(1), 18-22.

The Coalition for Evidence-Based Policy (2012). Interventions for children age 0-6. Retrieved from http://toptierevidence.org/programs-reviewed/interventions-for-children-age-0-6

Thompson, S., Ransdell, M., & Rousseau, C. (2005). Effective teachers in urban school settings: Linking teacher disposition and student performance on standardized tests. Journal of Authentic Learning, 2(1), 22-36.

Traub, J. (2002). No Child Left Behind: Does it work? New York Times, Nov 10. Retrieved from http://www.nytimes.com/2002/11/10/education/no-child-left-behind-does-it-work.html?pagewanted=all&src=pm

Viadero, D. (1999). A direct challenge. Education Week, 18(27), 41-43. Retrieved from http://www.zigsite.com/DirectChallenge.htm

Watkins, C. L. (1996). Follow through: Why didn’t we? Effective School Practices, 15(1), 5.

Weaver, C. (1988). Reading: Progress and practice. Portsmouth, NH: Heinemann.

Weaver, C. (1991). Weighing the claims about "Phonics First". The Education Digest, April, 19-22.

Weir, R. (1990). Philosophy, cultural beliefs and literacy. Interchange, 21(4), 24-33.

White, W. A. T. (1988). A meta-analysis of the effects of direct instruction in special education. Education & Treatment of Children, 11, 364-374.

Willingham, D.T., & Daniel, D. (2012). Teaching to what students have in common. Educational Leadership, 69(5), 16-21. Retrieved from http://www.ascd.org/publications/educational-leadership/feb12/vol69/num05/Teaching-to-What-Students-Have-in-Common.aspx

Winne, P.H. (1979). Experiments relating teachers' use of higher cognitive questions to student achievement. Review of Educational Research, 49, 13-50.

Wolery, M. (2013). A commentary: Single-case design technical document of the What Works Clearinghouse. Remedial and Special Education, 34(1), 39-43.

Zemelman, S., Daniels, H., & Bizar, M. (1999, March). Sixty years of reading research -- But who's listening? Phi Delta Kappan. Retrieved from http://www.pdkintl.org/kappan/kzem9903.htm

 
 