Research and dodgy (?) reporting: The devil is in the detail

Post by Debbie_Hepplewhite »

This thread is dedicated to highlighting the need to read research reports and conclusions with great care.

I'll launch the thread with an excellent post from the 'Horatiospeaks' blog:

A convergence of interests?

https://horatiospeaks.wordpress.com/201 ... interests/
... Consider, for example, the Education Endowment Foundation. The government has invested considerable funds via the EEF to run randomized controlled trials. One of the RCT evaluations recently released by the EEF was for a programme called Switch-On Reading. What is not apparent in the headline, but appears later in the report, is that the programme is in fact a repackaging of Reading Recovery, which is now being aimed at students at the transition between Key Stages 2 and 3.

Re: Research and dodgy (?) reporting: The devil is in the detail

Post by Debbie_Hepplewhite »

Professor Kevin Wheldall describes worries and anomalies about the 'What Works Clearinghouse' - essential reading, so please use the link to read the whole piece:

What's wrong with What Works?

http://www.kevinwheldall.com/2012/08/wh ... works.html
Ah, ‘what works’, that rings a bell. It is too early to tell whether this new centre will deliver on its promises, but what about the original ‘What Works Clearinghouse’ (WWC), the US-based repository of reports on educational program efficacy that originally promised so much?

As Daniel Willingham has pointed out:
“The U.S. Department of Education has, in the past, tried to bring some scientific rigor to teaching. The What Works Clearinghouse, created in 2002 by the DOE's Institute of Education Sciences, evaluates classroom curricula, programs and materials, but its standards of evidence are overly stringent, and teachers play no role in the vetting process.” (See http://tinyurl.com/bn8mvdt)

My colleagues and I have also been critical of WWC. And not just for being too stringent. Far from being too rigorous, the WWC boffins frequently make, to us, egregious mistakes; mistakes that, far too often for comfort, seem to support a particular approach to teaching and learning.

I first became a little wary of WWC when I found that our own truly experimental study on the efficacy of Reading Recovery (RR) had been omitted from their analyses underlying their report on RR. Too bad, you might think, that’s just sour grapes. But, according to Google Scholar, the article has been cited 160 times since publication in 1995 and was described by eminent American reading researchers Shanahan and Barr as one of the “more sophisticated studies”. Interestingly enough, it is frequently cited by proponents of RR (we did find it to be effective) as well as by its critics (but effective only for one in three children who received it). So why was it not included by WWC? It was considered for inclusion but was rejected on the following grounds:

“Incomparable groups: this study was a quasi-experimental design that used achievement pre-tests but it did not establish that the comparison group was comparable to the treatment group prior to the start of the intervention.”

You can read the details of why this is just plain wrong, as well as other criticisms of WWC, in Carter and Wheldall (2008) (http://tinyurl.com/c6jcknl). Suffice it to say that participants were randomly allocated to treatment groups and that we did establish that the control group (as well as the comparison group) was comparable to the (experimental) treatment group who received RR prior to the start of the intervention. This example also highlights another problem with WWC’s approach. Because they are supposedly so ‘rigorous’, they discard the vast majority of studies from the research literature on any given topic as not meeting their criteria for inclusion or ‘evidence standards’. In the case of RR, 78 studies were considered and all but five were excluded from further consideration. Our many other criticisms of what we regard as a seriously flawed WWC evaluation report on RR are detailed in Reynolds, Wheldall, and Madelaine (2009) (http://tinyurl.com/cuj8sqm).

Advocates of Direct Instruction (DI) seem to have been particularly ill-served by the methodological ‘rigour’ of WWC, for not only are most more recent studies of the efficacy of DI programs excluded because they do not meet the WWC evidence standards, but WWC also imposes a blanket ban on including any study (regardless of technical adequacy) published before 1985 - an interesting, if somewhat idiosyncratic, approach to science.