Part 3: Conceptualising the Pilot Programme in Education: Paying attention to the harmful possibilities in how we think about pilot programmes in school-based interventions such as All Aboard

Elsewhere we have discussed the internal split of aims. We discussed how the misalignment of aims is not something to be eliminated or papered over, but that this split reflects the fact that, and the ways in which, different stakeholders are positioned differently in relation to an intervention.

This discussion spoke about the intervention process in general. Now let us zoom in on the conceptualisation of the pilot programme. Once again, this is not meant to be a critique of particular intervention programmes or implementing organisations; it is a consideration of the potentially harmful effects of pilot programmes. Through reflection on this topic, the stakeholders involved might be more intentional and explicit about mitigating harmful unintended consequences in their project conceptualisation and design.

What is a pilot study?

The Institute of Education Sciences (IES) has a comprehensive document titled Learning Before Going to Scale: An Introduction to Conducting Pilot Studies, developed by the Regional Educational Laboratory Appalachia. This document defines implementation pilot studies as studies that test an intervention programme in a real-world context. Pilots generally utilise small samples and explore the “feasibility of implementing a new initiative and the likelihood of reaping its benefits at scale”. That is, pilot studies implement all elements of a programme design, usually with a relatively small sample, with an emphasis on learning about the programme and its implementation. Although discussing tech-based pilots, TechTarget puts it well: “A good pilot program provides a platform for the organization to test logistics, prove value and reveal deficiencies before spending a significant amount of time, energy or money on a large-scale project” [emphasis added]. What is worth paying particular attention to in this definition is the component of a pilot that is a proof of value. While testing logistics and adaptability to context provides the space for adjustments that improve the likelihood of success at scale, a pilot programme also aims to demonstrate that its programme design is capable of producing the desired effects. If an intervention design aims to improve learner performance, then its pilot programme aims to demonstrate that the mechanisms it puts in place to achieve this improvement are viable. This demonstration of viability will be the core of this reflection.

Neglect:

In a recent evaluation of pilot studies there was a recurring theme of neglect. To be clear, this neglect was not instituted by these interventions, but its appearance indicated the potential of pilot programmes to contribute to the issue or otherwise reproduce it. Looking at the purpose of an implementation pilot programme, it might be argued that, in trying to demonstrate the viability of a programme, focusing on subsets of the sample who present high levels of programme uptake is essential to exploring and demonstrating the mechanisms of the programme. Of course, the pilot, in learning about the implementation, may well consider questions such as uptake and adaptations that may expand its reach and effectiveness. However, even in its pilot phase a programme produces effects. Certain students are left by the wayside in the pilot phase for a range of reasons, often highly complex and exceeding the programme’s mechanisms. It is understood as unfeasible for a programme, and a pilot in particular, to deal with all barriers to inclusion and with each individual student’s assortment of challenges.

While a programme demonstrates its success in improving the performance of certain learners, those who face challenges that exceed the design are left further behind. It is not only that such students witness the gap increase between themselves and other students recognised as “struggling”, but also that they witness a programme designed to support students which, along with the rest of the education system up until that point, fails to centre them and recognise their issues. This threatens to problematise the learners who are excluded. Nobody explains to these students that the programme has failed to respond to their issues or that it was not feasible to deal with their problems; and even if somebody did, explaining this failure would be no real solution. This is the neglectful dimension of such a conceptualisation of a pilot study.

Let us think with an example. Imagine a programme that focusses on addressing backlogs, that is, skill gaps, in mathematics. A programme is designed to address issues stretching from content to pedagogy to learner motivation. The effectiveness of pull-out interventions, where targeted students are selected to receive the programme, together with large class sizes, makes selecting students for an afterschool class a feasible option. In the aim of not problematising students or creating a negative stigma for the students in the programme, it is not labelled a ‘backlog project’, but the students are aware of who is being targeted and have a sense of the criteria.

We can consider a more straightforward case where, in the interests of demonstrating the viability of the programme, teachers play a role in selecting students and are asked to consider the likelihood of students’ participation in their selection. Students identified as unmotivated to learn and/or disruptive, students who struggle with transport, and students with attendance issues are examples of students at risk of not being selected. This is straightforward in the sense that, in the interest of what we are referring to as a demonstration of viability, certain learners are identified as “problematic” precisely because the challenges they represent exceed the scope of the design. The programme might reflect on how to make the class more accessible, but these students remain identified as less likely to contribute to the proof of concept. In the case where these are not used as selection criteria, and such students are selected but then struggle or show low levels of engagement, the issue remains.

Let us imagine this programme is a great success: there is a high correlation between students’ participation in the programme and an improvement in school assessments. What is the impact of such an outcome on those students whose challenges exclude them from this success? This cohort has a high probability of including those with the largest backlogs and/or issues that exceed the classroom. As the programme “demonstrated its success”, they continued to fail. If we recognise that pedagogy is not limited to content, what are the unintended didactics for these students (and for the teachers, SMT and other school staff)? In one of the pilots addressing reading backlogs, a teacher noted that there are students who struggle to read at all, whose backlogs are so fundamental that the methods of the intervention are ineffective. This teacher felt that “it [the intervention] was meant for them”, giving us the opportunity to reflect on how these students then go missing. That is, if the interventions are in some way guided by the notion that students have been failed by the schooling system, what does it mean for those students who are failed again by these interventions? If we say “no student left behind” and then, with all the relevant qualifiers, leave some students behind, what does that mean? If we do not say “no student left behind”, what does that mean? To what extent are these issues apparent in the very conceptualisation of a pilot, and therefore in the model of interventions? And how much of this is connected to the continued failure of programmes to assist those who are struggling the most?

We have alluded to the fact that these questions are not limited to the pilot phase; however, the conceptualisation of the pilot gives some of them their shape. By their nature, most interventions that aim to provide changes that can be implemented at a systems level are not founded upon assessments of individual learners but attempt to identify more systemic barriers and alter them. Those students whose barriers are not affected by these systemic changes are thus less likely to see improvements. These interventions aim to help as many students as possible as efficiently as possible. But how does this appear to those students and their peers, educators, caretakers and so on? The pilot phase’s emphasis is on learning about the implementation of a programme design and demonstrating the viability of this programme. That is, this is a particular weighting of the aims in which both emphases threaten to neglect those students for whom the programme design is not viable. It is argued that this should be intentionally and explicitly accounted for. As an example, I recall teachers we interviewed in a pilot school who suggested that interventions provide diagnostic assessments for students, particularly those with challenges the intervention and the school were unable to address, so that the students and their caretakers could be better informed and better equipped to address these challenges. These teachers felt that some students’ challenges could be addressed neither by the pilot programme nor by the teachers themselves, yet they were considering how they could still contribute to ensuring these learners got the assistance they needed.

Reference list:

TechTarget Contributor. (2013, June 13). Pilot program (pilot study). CIO. https://www.techtarget.com/searchcio/definition/pilot-program-pilot-study

Regional Educational Laboratory Appalachia. (2021). Learning before going to scale: An introduction to conducting pilot studies. Institute of Education Sciences.
