Very positive effects (d = .69) of Reading Recovery: first results from huge trial
"Year One Results from the Multisite Randomized Evaluation of the i3 Scale-Up of Reading Recovery"
American Educational Research Journal: http://aer.sagepub.com/content/52/3/547.abstract?etoc
Here’s what Prof. Diane McGuinness has to say about Reading Recovery ‘research’:
Several years ago, a letter signed by 31 of the top researchers in the field of reading was sent to members of the U.S. Congress, urging Congress to suspend support for RR because independent research showed the method had no effect. It is extremely costly to implement in terms of teacher training, tutoring time, and materials. Not only this, but RR “research” is notorious for misrepresenting the data. In a recent publication by the Institute of Education, the same problems appear.

1. Nearly half of the children from the 145-strong “RR-tutoring group” were dropped from the study at post-testing, while the control group remained intact. (This is barely mentioned, and no attempt is made to address the problem it creates.)

2. The RR group received individual tutoring; the control group got none.

One could go on. The published paper bears the hallmarks of a bona fide “scientific” journal, until closer inspection reveals it is published by Reading Recovery. There is no chance of an impartial peer review process here.
http://www.publications.parliament.uk/p ... me1302.htm
I contacted Prof. James Chapman (IFERI committee member) for advice. He said that he was aware of the AERJ RR research and had discussed it with a fellow academic, Prof. Bill Tunmer.
James very kindly sent me the following four points regarding this latest RR study:

Our problems with the study can be summarized in four points.
1. First, consider the control group. Clearly, RR is better than doing nothing, and according to footnote 4 (pp. 30-31), about a fourth of the control group children in the AERJ study received no intervention. The rest appear to have received some small-group intervention instruction, but no systematic data on the amount or type of instruction are reported. Clearly, one-to-one instruction in general is better than no instruction or small-group instruction. A more important question is whether RR is superior to other forms of one-to-one intervention, in terms of both cost and outcomes (see this recent article by Lorraine Hammond, who also makes this point): https://theconversation.com/there-are-m ... very-39574
A recent meta-analysis by Slavin et al. (2011) showed that the magnitude of the effects of different reading interventions was positively related to the amount of explicit phonics instruction included in the intervention. Not surprisingly, RR, which includes limited explicit, systematic instruction in alphabetic coding skills, came out last. Related to this point is research indicating that RR is more effective for children from code-oriented classrooms than from whole language classrooms (Center et al., 2001). Slavin et al. also reported that the effect size for RR (.24, I think) was similar to that for volunteer and largely untrained tutors.
2. As indicated on page 9 of the AERJ article, only 52.4% of the treatment children completed RR, with 22.4% referred on. That is, more than one fifth of the struggling readers didn’t benefit much from RR, which is consistent with what others have reported. This percentage may be an underestimate, as another 20% of the children in the treatment sample didn’t complete RR for a variety of vaguely stated reasons. These findings are consistent with our analyses of annual RR monitoring data reported in New Zealand, indicating that RR is not effective for those struggling readers who need the most help (i.e., who are most at risk of reading failure). In a chapter on RR in a soon-to-be-released book, “Excellence and Equity in Literacy Education: The Case of New Zealand” (London: Palgrave Macmillan, June/July 2015), we argue that the effectiveness of RR interacts with where children are located on the developmental progression from prereader to skilled reader. Our research clearly indicates that RR is not effective for those children at the lower end of the developmental continuum, who need much more explicit instruction in phonemic awareness and alphabetic coding skills than is typically provided in the standard RR lesson. Louisa Moats underscored this point in an article recently reported in the Melbourne Age newspaper:
http://www.theage.com.au/victoria/readi ... m8m9e.html
3. Supporting the view expressed in the preceding paragraph is very strong evidence from research reported in New Zealand and Australia, and from the AERJ article itself, indicating that the lowest-performing readers (i.e., those most in need of extra support) are excluded from selection for RR. This issue is discussed at length in the AERJ article (see pp. 23-26, 28), where it is explicitly stated that many schools “preferred to reserve Reading Recovery slots for students they regarded as more likely to benefit from the intervention” (p. 25). This widely adopted practice strongly suggests that the effectiveness of RR interacts with the extent to which children at the lower end of the developmental progression of reading acquisition are excluded from entering the program. In view of this consideration, it is highly likely that the effect sizes reported in the evaluation of RR would have been significantly lower if the RR Standards and Guidelines (RRCNA, 2009) had been followed. These guidelines require schools to “use only the OS [i.e., scores from Clay’s Observational Survey] to select the lowest achieving first-grade students and to serve the lowest scorers first” (p. 24 of AERJ article). Given the rigidity of RR, the only way school systems can make it work is to exclude the lowest-performing children, the ones for whom more systematic instruction in phonological skills is needed to enable them to achieve progress. However, given the theoretical underpinnings of the program, such instruction will never be available unless the program is modified. But then they will argue, as they did, that such modified RR programs are no longer RR.
4. The AERJ article states that RR “enables students to catch up to their peers and sustain achievement at grade level into the future” (p. 3). However, there is simply no convincing evidence in support of this claim. In the RR chapter of our upcoming book, we discuss three recent large-scale studies carried out in New Zealand indicating very limited positive maintenance effects for children who had participated in RR two to four years earlier. Slavin et al. (2011, p. 19) discuss studies carried out in the UK and US that report similar findings. We argue that the main reason for the failure to sustain positive outcomes from RR (among the successful/discontinued students) is straightforward: the theoretical underpinnings of the RR approach to reading intervention (i.e., the constructivist, multiple-cues approach) were shown to be incorrect by the scientific community three decades ago.
James Chapman and Bill Tunmer have a book coming out soon; one chapter will present a full discussion of RR issues.