Siegfried Engelmann: "If the Children Aren't Learning, We're Not Teaching."
Explicit Instruction - Direct Instruction
Written by Form@PEx
Sunday, April 13, 2014, 09:17

"If the students have not learned,
it is because we have not taught them."

An Interview with Siegfried Engelmann

01.06.2001

One of the most vigorous continuing debates in elementary education is over which teaching method produces the best results.

Is it teacher-directed learning, where the teacher conveys knowledge to his or her students? Or is it student-directed learning, where the teacher encourages students to construct meaning from their own individual learning experiences?

Although a considerable body of research shows student-directed learning is ineffective, the debate rages on because many educators--and especially teachers of educators--choose to ignore the research.

Siegfried Engelmann has been one of the key participants in this debate over the years, and a major contributor to its resolution. He first became interested in how children acquire knowledge when he was research director for an advertising agency trying to understand more about the learning process.

Pursuing this interest, Engelmann quit the advertising business in 1964 and became senior educational specialist at the Institute for Research on Exceptional Children at the University of Illinois at Urbana-Champaign. There, his research into the effectiveness of different teaching methods in the education of underprivileged children led him to develop the Direct Instruction method of teaching.

The Direct Instruction method involves teaching from a tightly scripted curriculum delivered via direct instruction to the class; i.e., giving children small pieces of information and immediately asking them questions based on that information. While Direct Instruction is teacher-directed instruction, it does not encompass all the possible varieties of teacher-directed instruction, including the common situation where a teacher delivers a content-rich curriculum to students but decides exactly "what" will be taught.

Engelmann's research in the 1960s into the effectiveness of different teaching methods was subsequently confirmed by the massive federal Follow Through project in the 1970s and 1980s. In 1999, the American Institutes for Research looked at 24 education reform programs and concluded Direct Instruction was one of only two that had solid research vouching for its effectiveness. But despite all the research findings, Direct Instruction is used at only 150 of the nation's more than 114,000 schools.

After developing the Direct Instruction method, Engelmann became a professor of special education at the University of Oregon, in Eugene, Oregon, where he established the National Institute for Direct Instruction. He recently spoke with School Reform News Managing Editor George Clowes.

 

What approach did you first take to understanding the mechanics of the learning process?

ENGELMANN: I studied philosophy when I was in college, and I was much influenced by the British analytical approach that required very careful parceling out of what caused what, and also what kind of conclusions you could draw from what kind of premises. That had a big impact on how I viewed this process initially, particularly the notion that we are responsible for whatever children learn. We can't just take credit for what they did learn; we have to take credit for what they didn't learn, or mis-learned, also.

We assumed that children were logical, reasonable beings in terms of how they responded to our teaching, and that their behavior was the ultimate judge of the effectiveness of whatever went into our teaching. If the way we taught didn't induce the desired learning, we hadn't taught it. But if children learned stuff that was wrong, we were responsible for that, too, and it meant we had to revise what we were doing and try it out again. That's the formula we used from the beginning.

Just because you covered the material doesn't mean the children learned the material. That tells about what you did. It doesn't tell about what you taught. If you want to know what you taught, you have to look at what the children learned.


Which means you have to test the children.

ENGELMANN: It means you would not wait to test the children. You would design the instruction so that you were testing them all the time. You would design the instruction so that you received feedback on what they were learning at a very high rate. You would present instructions so that the children's responses carried implications for what they were learning. And you would design the instruction to be efficient, so that you're not working with just one child.

All of this means that, for young children, you would use procedures involving oral responses where the children can respond together, and you get information about what they're learning from their responses. That's the test.

For very simple responses, the paradigm that we use is: Model, Lead, and Test. You first show them what the task is and how they're supposed to respond to it. Then you test to see if they can respond properly. It all happens very quickly.

It's something like, "My turn: What am I doing? Standing up. Your turn: What am I doing?" It's a model and then a test. But if they can't produce the response, then you do a model, a lead, and then the test. For example, "My turn: What am I doing? Standing up. Your turn: What am I doing? 'Standing up.' Say it with me: 'Standing up.' Once more: 'Standing up.' Your turn: What am I doing?" So "your turn" is the test.
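Read as a procedure, the exchange above is a small decision loop: model the response, test the group, and insert a lead step only when the test fails. The sketch below is a hypothetical illustration of that loop; the function name, the prompt strings, and the get_group_response callback are invented for the example and are not part of any published Direct Instruction script.

```python
# A minimal, hypothetical sketch of the Model-Lead-Test loop described above.
# The prompts and the get_group_response callback are placeholders invented
# for illustration.

def model_lead_test(task_prompt, correct_answer, get_group_response):
    """Run one Model-Lead-Test cycle for a single oral task."""
    # Model: the teacher demonstrates the task and the expected response.
    print(f"My turn: {task_prompt} {correct_answer}.")

    # Test: the group answers together; their response is the data.
    if get_group_response(f"Your turn: {task_prompt}") == correct_answer:
        return True  # firm response -- move on

    # Lead: on an error, the teacher responds *with* the children ...
    print(f"Say it with me: {correct_answer}. Once more: {correct_answer}.")

    # ... and then tests again.
    return get_group_response(f"Your turn: {task_prompt}") == correct_answer
```

A caller would supply get_group_response as whatever captures the choral answer; a failed re-test would simply trigger another cycle later in the lesson.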


When did you decide to develop this into an instructional package for beginning learners?

ENGELMANN: Initially, we took programs people were using or were being talked about and evaluated them according to our criterion: If the children aren't learning, we're not teaching.

For the most part, the children we were working with were disadvantaged pre-schoolers. They represented a particular challenge because they didn't come in with very high levels of knowledge and they didn't learn things very well. Their performance on the programs that were available led to the conclusion these programs just didn't work--the language experience program, the sight-word approach--none of them worked. They were horrible.

The sight-word, or look-say, approach is particularly bad because there is no method for correcting mistakes. If a child reads a word incorrectly, what do you tell them with the sight-word approach? "Look at the unique shape of the word," or "Look at the beginning letter and ask yourself what that word could be." That's it. They're not taught that the word is a function of the arrangement of specific letters. It's like taking average people off the street and trying to teach them calculus by showing them different curves with different answers. "What's this one? .03. And this one? .05. Good." It's that stupid.

With sight-word, children develop all kinds of misconceptions about what reading really is. They think reading means looking at pictures and guessing what the words are, because that's what they've learned to do. The misconceptions are induced because the children are given highly predictable text for reading practice, which then reinforces guessing on the basis of context. But when they're given text that's not predictable, they can't make out what the words on the paper say because they really don't know how to read.

The only programs that showed any promise were the ones based on the Initial Teaching Alphabet, where you taught children to read using the phonetic pronunciation. You could teach disadvantaged kids to read that way, but then you had a terrible time transitioning them out because they were absolutely unprepared to deal with the high rate of irregular pronunciations among the most common words. The reading strategies they had developed with the phonetic alphabet weren't any help to them and a great deal of re-teaching was necessary.

But what they had learned was a function of what we had taught. We were responsible for so seriously mis-teaching these children that they could not easily transition and learn the irregular side of the reading game. So that meant we had to a) introduce some version of irregulars very early, so that children get the idea not everything is perfectly regular, and b) keep the sounding out, but treat it more as a sop for spelling the word. You don't want them to spell the word for initial reading. You want them to be able to sound out the word. But if you do it rigorously, they can easily understand that a particular sound means a particular letter.

The notion that you somehow recognize the word as a lump has been thoroughly discredited by research. When words are presented on a screen at the rate of about four or five hundred words a minute, experienced readers still can identify misspelled words. They can't do that without understanding the arrangement of letters in the word, and that each word is composed of a unique arrangement of letters. They're not looking at the shape of words.


When did you decide to publish your findings?

ENGELMANN: When we were working with the children, our objective was to teach them reading, math, and language. We wanted to make sure we taught them well, and so we made up sequences that compensated for what was lacking in other programs.

Pretty soon we had prototype versions of the reading program, the math program, and the language program. Our rule was that we would not submit anything for publication until we were sure that if the script was followed and presented as specified, it would work. We never submitted anything for publication that was not absolutely finished.

Also, the publisher was not allowed to edit any of our material. The publisher would say, "There's a better way to phrase it." No, there isn't! We've tried different ways. This way is efficient and it ties in with things we're going to do later on.

Another thing that happened was the federal government's Project Follow Through, which came out of President Johnson's War on Poverty and was aimed at evaluating programs that provided compensatory early education to disadvantaged children. We were one of 13 major sponsors, with the others representing the full spectrum of philosophies about instruction: developmental, Piagetian, the British open classroom, natural learning processes, and so on.

The results showed those other programs don't work in any subject. Direct Instruction beat them in all subjects. We beat them in language, in math, in science, in reading, and in spelling. And our students were the highest in self-image. And although Follow Through went only through third grade, additional follow-up showed an advantage through eighth grade and a statistically significant increase in college enrollment.

We also have some more direct information from places we worked with in Utah, where the Direct Instruction sequence goes through sixth grade. For example, when the children in Gunnison Elementary School entered junior high, they skipped seventh grade math and went directly into Algebra I, which was scheduled for eighth grade. At the end of the year, the children from our program were first, second, fourth, fifth, and sixth in performance in Algebra I.


So Project Follow Through confirmed what you had already found about the ineffectiveness of those other programs. Yet those programs still are being promoted in teacher colleges and they still are widely used, while Direct Instruction is not. Why?

ENGELMANN: The answer is really simple, but it's very difficult for most people to accept: Outcomes have never been a priority in public education, from its inception. That's the way the public education system is. The system is more concerned with the experience of the child: "Let the child explore," "Let the child be his or her self," "Don't interfere with the natural learning process," and so on.

The rhetoric is wonderful, but the test is: Does it work? Quite clearly, it doesn't. The ones who are victimized the most by this are children from poor families.

But anyone who does not view the child in this way is portrayed as some kind of redneck Republican with no real human concern.


What about Advantage Schools? I understand they're using your approach, too.

ENGELMANN: They're doing some pretty good things, but I think they're probably a little light on initial training. Part of that is because they're installing a school from scratch, and so you have to teach the teachers and the administrators a lot more than you would if you were just moving into an extant school. That's a tough job. It takes months to get the routines down.


Do you have any recommendations for state policy-makers who want to raise the quality of U.S. K-12 education?

ENGELMANN: My first recommendation would be to use only data-based material; that is, material that has a track record and can demonstrate it works. My second recommendation would be to evaluate test results skeptically. Don't rely on state tests and the like to give you an indication of what's really going on. To produce quality, you have to have quality control. That means having random samples, just as you would in a business.

You would go into a school and randomly test one out of five students in randomly chosen classrooms. In reading, you would give each student a passage to read and then ask them some questions about it. You could get the information you need out of a classroom very quickly--I'd guess no more than 10 minutes. If you sampled six classrooms, that would give you a pretty good idea of what is going on in that school. Then you would compare the performance of the students you had sampled with their achievement test scores and note any discrepancies.

In many cases, you will discover great discrepancies--where the children performed well on the test and yet when sampled they can't do math or they can't read. Schools can do all kinds of things to make their scores look better than they really are, so they need to be evaluated skeptically, preferably with this quality control approach.
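The audit Engelmann describes is, in effect, a small random-sampling routine: choose classrooms at random, test roughly one student in five with a short passage, and flag any school whose sampled performance diverges sharply from its reported scores. The sketch below illustrates that idea under stated assumptions; the data structures, the six-classroom sample, the one-in-five rate, the 15-point threshold, and the score_student callback are all invented for the example.

```python
# Illustrative sketch of the random-sample audit described above. The data
# structures, sampling rate, classroom count, discrepancy threshold, and
# score_student callback are assumptions made for the example.
import random

def audit_school(classrooms, reported_score, score_student,
                 n_classrooms=6, threshold=15.0):
    """Spot-check a school by directly testing a random sample of students.

    classrooms: dict mapping classroom id -> list of student ids
    reported_score: the school's published achievement-test result (0-100)
    score_student: callable returning a 0-100 score for one student's passage
    Returns (sampled_average, discrepancy_flagged).
    """
    rooms = random.sample(list(classrooms), min(n_classrooms, len(classrooms)))
    scores = []
    for room in rooms:
        students = classrooms[room]
        if not students:
            continue
        # Test roughly one out of five students in each chosen classroom.
        k = max(1, len(students) // 5)
        for student in random.sample(students, k):
            scores.append(score_student(student))

    sampled_average = sum(scores) / len(scores)
    # A large gap between the direct sample and the published score is the
    # discrepancy the audit is meant to surface.
    return sampled_average, abs(sampled_average - reported_score) > threshold
```

The comparison at the end is the quality-control step he argues for: trusting the direct sample over the published number whenever the two disagree.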

 

Interview conducted by George A. Clowes

 
 