Social Surveys Blog

The thoughts expressed on this page do not represent the positions of Social Surveys Africa and do not necessarily represent those of the staff. The page offers the reflections and engagements of the Social Surveys team members with the topics of our research. Sometimes transient (which is not the same as immaterial), sometimes firmly held (which is not the same as intransigent).
Not all analyses align with research questions
Not all reflections qualify as analysis
But surplus is not superfluous.
Welcome to the Social Surveys Blog

Part 1: The proliferation of intervention programmes and thinking for sustainable education intervention or En-core Issues

In response to the challenges for education in South Africa, a range of actors aim to drive change and provide interventions that can positively impact education. Many interventions take an approach which goes against the discourses centred around the failure of learners and instead focuses on the systems which should be in place to support learners, with a sensitivity to, and understanding of, the complexities that learners face. Similarly, this post aims to think about fidelity with regard to some of the constraints and pressures that exist in the system of education intervention. That is, to reflect on the components of the system of interventions and the system in which interventions occur. We are offering reflection, from our perspective as monitoring and evaluation (M&E) partners, on recent education projects. We have seen, up close, the hard work and care that such interventions require, and experienced the challenges and complexities.

Let us begin by briefly exploring the concept of implementation fidelity, which is key in M&E. Implementation fidelity refers to the degree to which an intervention or programme is delivered as intended. Monitoring implementation fidelity is the way in which the delivery of an intervention is held accountable to the research, aims, and theory of change by which it was designed.

By understanding and measuring whether an intervention has been implemented with fidelity, evaluators, researchers, and practitioners can better understand how and why an intervention works and the extent to which outcomes can be improved. Implementation fidelity contributes to the robustness of the evaluation of an intervention programme's impacts. The more expansive the measurement and assessment of fidelity, the more it can contribute to attributing outcomes to the programme, to understanding how and why the intervention works, and to identifying where it can be improved. Implementation fidelity analysis, alongside a record of implementation, also allows for the transportability of the programme. That is, if the programme is shown to produce results and the mechanisms of implementation are well documented, then there is a blueprint for the programme to be reproduced elsewhere. Moreover, implementation fidelity analysis allows for the deconstruction of the programme and an analysis on a piece-by-piece basis, identifying the components in which it was successful (“pockets of success”) and the components that produced challenges. This capacity for piece-by-piece evaluation also makes it easier to identify the effects of components of implementation on mediating variables, which may be valuable information. That is, as an analysis of implementation, and thus of how results are achieved (or not), the analysis may reveal other relationships and effects that are of interest and/or importance[1]. Implementation fidelity analysis is, in short, an assessment of the extent to which an intervention was delivered as per its design. This allows for stronger evidence for attributing results to the intervention, a greater understanding of how the results came about, and a more piece-by-piece understanding of those workings.
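
As a purely illustrative sketch of the piece-by-piece view described above (the component names, weights, and scores below are hypothetical, not drawn from any of the projects discussed), a component-level fidelity summary might take a form something like this:

```python
# A hypothetical, simplified sketch of a component-level fidelity summary.
# Component names, weights, and scores are invented for illustration only.

# Each component of the design is scored on how closely delivery matched the plan
# (1.0 = delivered exactly as designed, 0.0 = not delivered at all).
components = {
    "training sessions delivered": {"weight": 0.3, "score": 0.9},
    "materials distributed":       {"weight": 0.2, "score": 1.0},
    "classroom visits completed":  {"weight": 0.3, "score": 0.6},
    "feedback loops maintained":   {"weight": 0.2, "score": 0.4},
}

# The piece-by-piece view: which components were delivered as designed and which were not.
for name, detail in components.items():
    print(f"{name:<30} fidelity = {detail['score']:.2f}")

# A single weighted index can serve as a headline figure, but it is no substitute
# for the component-level picture above.
overall = sum(d["weight"] * d["score"] for d in components.values())
print(f"{'overall weighted fidelity index':<30} = {overall:.2f}")
```

The point of such a summary is not the headline number but the component-level detail, which is what allows “pockets of success” and points of breakdown to be identified and linked back to the design.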

The education intervention environment can see, to varying extents, significant gaps in fidelity. These gaps, particularly where they are more significant, often do not appear by accident but as the result of constraints and pressures. These programmes face limitations in resources, in finances and time in particular. Interventions' timeframes are not only subject to the implementer's evaluation of the time required to develop, prepare, and run the project but are often subject to external funding or research cycles. This is a clear area for potential problems. Project timeframes are only so malleable and are external to the particularity of the intervention. This is exacerbated in cases of external funding, where the project designer has to abide by the timeframe of funders. Simply put, the timeframes of interventions often precede their design and are then adjusted. This incentivises programme designs that can fit these timescales and programmes that require less time for preparation in general. Preparation here refers to the preparation for implementation after the proposal for the intervention has been approved. The paper Education in Africa: What are we learning?[2], written by Evans and Acosta, offers a meta-analysis of 145 empirical studies across Africa from 2014 onwards. This paper also identifies the timescale of studies as an area that limits learning in the field, though it focuses on the brief timeframe for the measurement of impacts. Evans and Acosta state that, “The vast majority of interventions measure outcomes within 12 months of the onset of the intervention, with little information on the longer run time path of impacts”. While it is noted that studies at a policy level show greater long-term impacts, the point remains that interventions, and their measurements, are structured by their timeframes in ways that are more restrictive than they need be.

There are also constraints on what resources implementers are able to provide. Financial constraints always threaten to distort the intention of the design. Keeping in mind the expansiveness and complexity of pedagogy, small changes made in accordance with budget can have profound effects on the process and outcomes. That is to say, the programme cannot always be reduced to the sum of its parts and is often rooted in theory and evidence of not only the effects of components but the details of their interrelation and calibration. In other words, adaptation to constraints and the pressure of expediency allow the intervention to move forward and deliver solutions to immediate problems but threaten to undermine its theoretical underpinnings. Insofar as the programme deviates from the interrelation and calibration of components based in pedagogical research, the aims of the programme should be re-evaluated. This is another point that overlaps with Evans and Acosta who, while noting that there is a shift to context-based design, highlight the importance of attending to the theoretical underpinnings of the study and connecting findings to theory. They quote Duflo, who states that “[o]ur models give us very little theoretical guidance on what (and how) details will matter”. This article aims to highlight that it is precisely the implications of the details and the theoretical underpinnings that threaten to be overlooked in the name of expediency.

With the constraints of resources and the particularities of the school, it is often those innovative and complex components of the design that go missing. Restricted development time, restricted funding coupled with the effects of alterations and substitutions in the pedagogical ecosystem of the design, and the challenges that schools face create a tendency towards simplified interventions. Again, this is not a critique of the choice to simplify as, often, this is a response to the more widespread and documented issues that schools face, issues that could well undermine programmes that did not attend to them. There is a constellation of variables involved and the impact can have great, and nuanced, variation. This article is not arguing that these interventions have no capacity to be impactful on schools and student performance. In fact, as has been said, this ‘simplification’ is the utilisation of documented responses to documented problems. What is being reflected upon is the question of programme fidelity and the tendency towards programmes that respond with similar strategies to a similar core range of challenges. These interventions are not designed as the rollout of these strategies but are part of the exploration of possibilities in education.

Schools also have their own systems and processes as well as constraints and challenges. These are not consistent across schools, and every school offers a unique context for programme implementation. Project design must adjust to a variety of constraints as well as consider the relationship with the school, including interpersonal and professional relationships. Intervention designs must integrate themselves into schools in a way that recognises the school's staff, their systems and their challenges, as well as the fact that when the intervention ends the staff will largely remain at the schools.

This piece is based on a relatively small sample of interventions but, again, aims to reflect on more structural components of fidelity gaps to offer a consideration of these challenges. A paper titled A review of South African primary school literacy interventions from 2005 to 2020, authored by Meiklejohn et al.[3], offers a meta-analysis of 24 papers about 21 literacy interventions in South Africa. Their search was restricted to peer-reviewed journals included in the Sabinet database, and the paper acknowledges the limitations of this search. The paper approaches the reflection on interventions from a different angle to this article and offers critical insights that are complementary to this discussion but, as a meta-analysis, it is also a useful consideration here.

 As they put it:

 It is noteworthy that many literacy interventions are driven by Non Governmental Organisation (NGOs) and are reliant on donor funding to be implemented. In many cases, donors are proponents of specific literacy approaches and their resources are used to implement interventions that fit within these approaches or agendas.

Similar to the discussion above, the role of funders and organisations, while key to capacitating interventions, is posited as a limiting factor on implementation designs, with the addition that funding organisations and other capacitating organisations are proponents of certain literacy approaches. While their paper does not explicitly speak to the tendency towards ‘basics’, there is some evidence of this in their findings. For example, they highlight the common features of the programmes and indicate that these are aligned with those advocated by the World Bank. As we have argued, the issue is not necessarily the repetition of elements but that, when faced with constraints, which are well documented, these elements tend, in our experience, to crowd out new elements and disrupt the design insofar as it aims to build on existing knowledge. In concluding, Meiklejohn et al. say:

It is thus hardly a surprise that this systematic review has indicated that the response to the literacy crisis in South Africa has generally been ad hoc, uncoordinated and somewhat NGO/donor-driven. The flip side of this is that, in spite of a few attempts to make an impact on a large scale,[…] the government has been unable and/or unwilling to deal effectively with pervasive literacy challenges. There is little evidence of large-scale, coordinated interventions implemented over sustained periods to make the required impact on national literacy levels. (p. 10)

This is largely in agreement with what has been discussed in this article. Their meta-review found the repetition of core elements, the impact of funding and NGOs, and an uncoordinated, ad hoc approach from stakeholders. This article aims to further identify the constraints on implementers and project designers. It is not being posited, by Meiklejohn et al., that a singular government-coordinated approach is the way, as this recommendation appears alongside the already mentioned critique that NGOs being proponents of specific literacy approaches, and of approaches advocated for by the World Bank, may result in issues. A review of South African primary school literacy interventions from 2005 to 2020, as previously stated, offers useful reflections, not least on the coordination of information and learnings from these interventions. Responding to the constraints highlighted in this article, this piece will now explore a few discussion-starting possibilities. These possibilities are geared towards expanding the boundaries and constraints, or reconfiguring them in such a way that allows for greater variety in programmes and avoids the traps promoted by the current structure. They are not posited as definitive solutions and, no doubt, appear in a simplified form that would have to be further articulated in practice.

How can we work towards a decrease in the repetition of fidelity failures and an increase in the variety of designs, allowing for more expansive interventions and thus a more expansive evidence base for interventions and intervention possibilities? Just as the intervention space moves towards greater attention to the context of implementation, there is space for further reflecting on the way this is connected to the context and structure in which interventions are developed and managed, and in so doing opening the potential space for a reinvigorated education research and intervention environment.

  • Longer timeframes and more financial support

There is the option of longer intervention cycles with greater funding. Knowing that funding is a complex issue, there is also the option of shifting the balance between resources and the number of projects. For example, instead of funding two interventions over two years, fund one over the same two years, with the first 6 months dedicated to development and preparation. Here there is greater time for development and preparation, and two years' worth of implementation funds are utilised on 18 months of implementation and 6 months of preparation after the acceptance of the proposal. The point here is the reconfiguration of the timeframe and funding. Without an increase in total funding this means fewer projects but more funding per project. This is a difficult decision and would require consideration of the extent of this shift, with the hope that findings from this approach are able to generate more large-scale and sustainable interventions. Again, beyond the repetition of core elements, the more expansive and nuanced approaches that might be facilitated by longer cycles offer greater opportunities for more robust approaches that reach beyond what are often identified as the core issues. Greater time for preparation, even without additional financial assistance, is also worthwhile. Or, more ambitiously, there is the option of considering structures that can accommodate timeframes that emerge from the design, and a greater emphasis on evaluating longer-term effects.
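
To make the arithmetic of this reconfiguration concrete, a small sketch with entirely hypothetical figures (the budget and project counts below are invented for illustration) might look as follows:

```python
# Hypothetical figures only: the same total funding and the same two-year window,
# configured either as two parallel projects or as one project with a dedicated
# preparation phase.

total_budget = 4_000_000  # invented total available over the two-year window
window_months = 24

# Configuration A: two projects, each implementing for the full window.
budget_per_project_a = total_budget / 2

# Configuration B: one project, 6 months of preparation followed by 18 months
# of implementation, drawing on the full budget.
preparation_months = 6
implementation_months = window_months - preparation_months
budget_per_project_b = total_budget / 1

print(f"A: 2 projects, {budget_per_project_a:,.0f} each over {window_months} months of implementation")
print(f"B: 1 project, {budget_per_project_b:,.0f} over {preparation_months} months of preparation "
      f"plus {implementation_months} months of implementation")
```

The trade-off is the one described above: fewer projects in the same window, but each with more funding and with preparation time explicitly budgeted for.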

  • Earlier school involvement

It is known that each school has unique factors and circumstances. The general model sees implementers develop an intervention, then select schools, then adjust the programme to the included schools. There is the opportunity to select the schools at an earlier stage in the development of the programme design, including the schools in the process and/or designing a programme with a more direct understanding of constraints, as opposed to the more ad hoc process we see. In some cases, it may be argued that this just displaces some of the gap: from one between the implementation and the design to one between theory and project design, insofar as it responds to the limitations at an earlier stage. But the displacement from implementation to design is key. Again, this is not to say that this will expose ‘bad design’; it is rather to call for reflection on the design stage and the choices made, to identify what is a failure of implementation, what is a failure of design and what theoretical elements might guide decision making. It may be argued that this authorises a greater level of programme differentiation. But it is also a more precise and intentional documentation of differentiation and the reasoning for it. This then allows for a clearer understanding of how much differentiation was a result of constraints, and thus an organised response, and how much was a failure in implementation.

  • Greater collaboration in general (pedagogy experts, M&E, pooling of resources)

Schools are worth singling out for an earlier role in the project, but there is opportunity for more and greater collaboration in general. These interventions are not a specific set of tasks but attempts to address challenges and to learn from those attempts. While different stakeholders have different roles in the collaboration process, the more attention is given to co-conceptualising the collaboration, the greater the opportunity to align objectives and the processes to achieve those objectives. The work to consider the various interests allows the opportunity to evaluate the design from multiple perspectives. And there is the opportunity to collaboratively utilise resources more effectively. From an M&E perspective this includes assisting in considering what the key components, levers of change and indicators in a project are, setting up systems to monitor those points, and providing feedback in a format that is easily usable for implementers. Bringing in M&E collaboration at an earlier stage allows for the utilisation and synchronisation of monitoring tools for internal processes and evaluation. Through the overlaying of various mappings of the design, a more complex picture is likely to emerge, from which each stakeholder can contextualise and reflect on their original mapping. This collaboration also allows for more adaptability. It was stated earlier that seemingly small changes in the implementation can have profound effects on the design and should invite a re-evaluation of aims. With greater collaboration with experts in pedagogy, for example, there is space for a constantly updated understanding of the pedagogical implications of the fidelity question.

Reference List:

Dusenbury, L., Brannigan, R., Falco, M., & Hansen, W. B. (2003). A review of research on fidelity of implementation: Implications for drug abuse prevention in school settings. Health Education Research, 18(2), 237–256. https://doi.org/10.1093/her/18.2.237

Evans, D. K., & Acosta, A. M. (2021). Education in Africa: What Are We Learning? Journal of African Economies, 30(1), 13–54. https://doi.org/10.1093/jae/ejaa009

Meiklejohn, C., Westaway, L., Westaway, A. F. H., & Long, K. A. (2021). A review of South African primary school literacy interventions from 2005 to 2020. South African Journal of Childhood Education, 11(1). https://doi.org/10.4102/sajce.v11i1.919


[1] Dusenbury et al., 2003

[2] Evans and Acosta, 2021

[3] Meiklejohn et al., 2021

Part 2: Building a response to education- Reflections on the approach to school-based education interventions or Kaleidoscopic Chameleon Eyes

Opportunity for reflection:

A concerted effort from a range of actors sees interventions and studies in the field of school-based education. This piece is a continuous and dialogical reflection on the general models by which the topic is approached, as well as some of the thinking that guides them. Continuous participation in the activity of reflection on this process means thinking about the guiding principles and components that might otherwise switch to autopilot. This is a willingness to interrupt, so as to continue to promote an environment that produces, supports, and sustains innovative approaches to education.

Tracing a model:

The landscape of education-based interventions uses a scientific approach: a problem is identified. Programmes that respond to this problem are identified and implemented. This process is monitored and evaluated. From this, the benefits and failures of these responses are assessed. This can then be built upon. Generally, a successful response might be returned to, and a failed response would have to be adapted before being utilised. Two key questions in the field are the generalisability of a response's success beyond the schools in which it was piloted or initiated, and how it can be scaled up.

For example, there have been findings that a mentoring system for teachers can produce an increase in learner performance. However, providing mentors to teachers in several schools to assess the suitability of this response with regard to impact on school results is different from the question of producing a national model of institutionalised, programmatic mentoring and the feasibility of doing so. That is, “does it work?” does not wholly coincide with “is it feasible as a large-scale response?”. The response of mentoring is recognised as a viable option. Thus, in other interventions this might be considered, with mentoring being utilised again and perhaps tweaked in an attempt to respond to the questions of scale or generalisability. That is, tested in new contexts and with slight adjustments, a clearer understanding of the conditions for success for mentoring might appear. For example, the mentor might have fewer face-to-face visits than in the first pilot, but this is supplemented with virtual mentoring. Such a model allows for building on previous findings.

As this process continues and produces results, with more interventions building on previous successes and failures, the bank of findings allows for more robust models. This is not necessarily linear; it is not that each intervention adds more successful impacts on education outcomes than its predecessors, but rather that the knowledge base (the articulation of levers of change, of how effects come to occur) becomes more nuanced and robust.

Balancing within the model and the internal split:

An intervention aiming to build on previous successes must consider the extent to which it differs from previous findings in its attempts to solve other problems or limitations, and its contribution to an understanding of the mechanism. Let us return to the example of mentoring. Once it has been established that mentoring works at a certain frequency, a follow-up intervention might be useful in finding whether the frequency can be decreased for the sake of feasibility. For example, if coaching was found to produce the intended effects at a frequency of one mentoring session per week, what happens if this frequency is reduced to a mentoring session every two weeks? If the aim was greater precision in the understanding of the effects of frequency, then isolating that variable would be ideal. Run the same project with a range of frequencies of mentoring support, all else equal, and we will develop a greater model of the role of frequency in the effects of mentorship, and potentially a better understanding of how it produces effects. But this means multiple projects run on the same variable, costing resources, including the time of the schools and learners.

This is a point at which an internal split becomes apparent. The internal split referenced here is in relation to the aims. If one aim is to directly impact learners, learner performance, the schools and so on, then another aim, or another component of the aim, is to add to the knowledge and evidence base. The internal position of this split is what we might want to reflect on. That is, as part of this model both aims exist and overlap, and the space between them can go missing.

In an attempt to provide an innovative programme that addresses a whole range of issues, there is the risk of straying from the evidence, and in this scientific model there is a constraint on programmes in that they cannot stray too far from the evidence. In a case, for example, where there is evidence for success in five separate strategies from previous pilots and there is no evidence of their combined outcomes, a pilot attempting two of these strategies might have more evidence-base support than a pilot now trying to utilise all five, let alone a pilot that might want to utilise all five and add some unsupported strategies. “Might” is added because the theoretical support for these mechanisms might also assist in supporting the combination of strategies, or highlight the likelihood of interference or a combined effect other than a kind of addition of their individual effects. Again, what is to be highlighted and reflected upon is the internal nature of these splits. Obviously this article is aiming to make those splits apparent, but they do not always appear in project designs as separate aims that need to be weighed up.
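
Returning to the mentoring-frequency example above, a purely illustrative sketch (the learner gains below are invented, not drawn from any actual pilot) of what isolating the frequency variable might look like:

```python
# A minimal, hypothetical sketch of isolating a single variable (mentoring
# frequency) across otherwise identical arms. All numbers are invented.
from statistics import mean

# Hypothetical assessment-score gains per learner, grouped by mentoring frequency.
gains_by_frequency = {
    "weekly": [8, 11, 7, 10, 9, 12],
    "fortnightly": [6, 9, 5, 8, 7, 7],
    "monthly": [4, 5, 6, 3, 5, 4],
}

# With everything else held equal, comparing mean gains across arms begins to
# sketch the role of frequency in the effects of mentorship.
for frequency, gains in gains_by_frequency.items():
    print(f"{frequency:>12}: mean gain = {mean(gains):.1f} (n = {len(gains)})")
```

Even a comparison this simple makes the cost visible: each additional arm is another project's worth of resources, including the time of schools and learners.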

What this means for interventions is a different can of worms, so let us explore one possibility as an example. Schools and teachers are also involved in the building of responses, in different ways and potentially with different aims, but within their own constraints: individual learners, changing curricula, external effects such as loadshedding, and so on. That is, schools and teachers have experience in the experimental (and knowledge-producing) model of learner responses. How might they consider the aims of any interventions in their school? To be clear, teachers and school management teams (SMTs) should not be reduced to one response, or one simplified stance. Where teachers and schools are appreciative of interventions and their offerings, this should not crowd out the possibility of other simultaneous responses, nuance, conflicting positions, or other more complex positions. If the teachers or SMT have some sense of the (necessary) multiplicity of a project's aims, whether this is articulable or not, how does this affect their relation to the project?

While there is overlap in many areas (improving learner performance, getting a better understanding of issues and solutions), one can consider the gap here. The teacher's relation to the knowledge base of a nationally scalable model of improvement is different to the government's or the implementer's, just as the school's relation to dealing with the constraints and evidence base for this scaling will be different from the implementer's. This threatens to be lost insofar as monitoring and evaluation is catered to the aims of implementers, funders, and government. The generalisability of intervention results and the scalability of intervention protocols is not the professional focus of teachers and SMTs in the way it might be for other stakeholders. So, for example, when a teacher's critiques and suggestions with regard to an intervention diverge from the aims of the intervention, one must consider their relation to this internal split. The teacher is expected to take up not only the aims of the intervention but the balancing of the aims' components. Consider a teacher who understands that the aims of a programme are never as simple as “here is the final solution to your problems” but is expected to implement it as such; their critique of the intervention is then measured in accordance with aims, and a weighting of aims, that are not their own. So when analysis finds that teachers do not understand the aims of the programme, something is being lost in relation to the workings of the models by which responses are built. Such misalignment is key.

There is another internal split that can be quickly touched on: the potential to learn about the process of learning. While the learning process is reflected upon, it often appears in the framing of a kind of skill development: M&E practitioners learn how to do the work of M&E better. But this is not framed as a contribution to the study of M&E as a discipline. As previously alluded to, this aim to learn about M&E is always a component of M&E insofar as it is always an adaptive process that responds to the specific programmes it is engaged with, but the findings from this process are not given authority as products of the process. A commitment to producing such study could accelerate the M&E capacitation of implementers of education projects and accelerate innovation in the methods used by M&E practitioners.

Limits of the evidence-based response:

This is thinking with the model of evidence-based solutions. I don't think the merits of such a system need to be expounded in much depth here. This model, as has been discussed, provides a knowledge base with the continuous aim of building on previous findings to produce more nuanced and robust solutions. There is a sense in which this model is constrained by its evidence base as well. Insofar as this model aims to respond to problems, it is constrained. Insofar as it intervenes in changing the outcomes of the current model, it is constrained. But this is not to say it has failed; this is the robustness of the model. However, excluded from such a model is the wholesale rearticulation of aims. That is, the aims of education are broadly set and defined, and the interventions are largely a shifting of outcomes and adaptation of aims. We do not have pilots that aim to test a new type of education system as much as we have pilots geared towards improving on the aims of the existing education system. From government's perspective, the evidence base tells the same story, and the evidence for alternative education is largely siphoned off from such improvements. Structural analysis of the education system as such tends to go missing through the proliferation of interventions. That may be an obvious point; we are here concerned with the M&E of interventions and not a structural analysis of education (and the position of education). But there is a way in which this model stands on already built foundations, bypassing the critiques of said foundations. In what ways can the intervention model speak more directly to structural analysis and pedagogical theory? The question is not how to collapse them, as they are differing fields, but how to strengthen their position as complementary projects, or at least as projects concerned with similar issues that can expand one another.

Part 3: Conceptualising the Pilot Programme in Education- Paying attention to the harmful possibilities in how we think about pilot programmes in School-based intervention or All Aboard.

Elsewhere we have discussed the internal split of aims. We discussed how the misalignment of aims is not something to be eliminated or papered over, but that this split reflected the fact that, and the ways in which, different stakeholders are positioned differently in relation to an intervention.

This discussion spoke about the intervention process in general. Now let us zoom in on the conceptualisation of the pilot programme. Once again, this is not meant to be a critique of particular intervention programmes or implementing organisations. This is a consideration of the potentially harmful effects of pilot programmes. Through reflection on this topic, the stakeholders involved might be more intentional and explicit about mitigating harmful unintended consequences in their project conceptualisation and design.

What is a pilot study?

The Institute of Education Sciences (IES) has a comprehensive document titled Learning Before Going to Scale: An Introduction to Conducting Pilot Studies, developed by the Regional Educational Laboratory Appalachia. This document defines implementation pilot studies as studies that test an intervention programme in a real-world context. Pilots generally utilise small samples and explore the “feasibility of implementing a new initiative and the likelihood of reaping its benefits at scale”. That is, pilot studies implement all elements in a programme design, usually with a relatively small sample, with an emphasis on learning about the programme and its implementation. Although discussing tech-based pilots, TechTarget puts it well in saying: “A good pilot program provides a platform for the organization to test logistics, prove value and reveal deficiencies before spending a significant amount of time, energy or money on a large-scale project” [emphasis added]. What is worth paying particular attention to in this definition is the component of a pilot that is a proof of value. While the logistics and context adaptability provide the space for adjustments that improve the likelihood of success at scale, a pilot programme also aims to demonstrate that its programme design is capable of producing the desired effects. If an intervention design aims to improve learner performance, then its pilot programme aims to demonstrate that the mechanisms it puts in place to achieve this improvement are viable. This demonstration of viability will be the core of this reflection.

Neglect:

In a recent evaluation of pilot studies there was a recurring theme of neglect. To be clear, this neglect was not a neglect instituted by these interventions, but its appearance indicated the potential of pilot programmes to contribute to the issue or otherwise reproduce it. Looking at the purpose of an implementation pilot programme, it might be argued that, in trying to demonstrate the viability of a programme, the focus on subsets of the sample who present high levels of programme uptake is essential to exploring and demonstrating the mechanisms of the programme. Of course, the pilot, in learning about the implementation, may well consider questions such as uptake and adaptations that may expand its reach and effectiveness. However, in its pilot phase it produces effects. Certain students are left by the wayside in the pilot phase for a range of reasons, often highly complex and exceeding the programme's mechanisms. It is understood as unfeasible for a programme, and a pilot in particular, to deal with all barriers to inclusion and with each individual student's assortment of challenges.

While a programme demonstrates its success in improving performance in certain learners, those who face challenges that exceed the design are left further behind. It is not only that such students witness the gap between them and other students recognised as “struggling” increase, but also that they witness a programme designed to support students that, along with the rest of the education system up until that point, fails to centre them and recognise their issues. This threatens to problematise the learners who are excluded. Nobody explains to these students that the programme has failed to respond to their issues or that it was not feasible to deal with their problems. This framing reflects the neglectful dimension of such a conceptualisation of a pilot study. That is, explaining this failure is no real solution.

Let us think with an example. Let us imagine a programme that focuses on addressing backlogs (that is, skill gaps) in mathematics. A programme is designed addressing issues stretching from content to pedagogy to learner motivation. The effectiveness of pull-out interventions (where targeted students are selected to receive the programme) and large class sizes make the selection of students for an afterschool class a feasible option. In the aim not to problematise students or create a negative stigma for the students in the programme, this is not labelled as a ‘backlog project’, but the students are aware of who is being targeted and have a sense of the criteria. We can consider a more straightforward case where, in the interests of demonstrating the viability of the programme, teachers play a role in selecting students and are asked to consider the likelihood of students' participation in their selection. Students who are identified as unmotivated to learn and/or disruptive, students who struggle with transport, and students with attendance issues are examples of students at risk of not being selected. This is straightforward in the sense that, in the interest of what we are referring to as a demonstration of viability, certain learners are identified as “problematic” precisely because the challenges they represent exceed the scope of the design. The programme might reflect on how to make the class more accessible, but these students remain identified as less likely to contribute to the proof of concept. In the case where this is not utilised as a selection criterion and such students are selected but then struggle or show low levels of engagement, the issue remains.

Let us imagine this programme is a great success: there is a high level of correlation between students' participation in the programme and an improvement in school assessments. What is the impact of such an outcome on those students whose challenges exclude them from this success? This cohort has a high probability of including those with the largest backlogs and/or issues that exceed the classroom. As the programme “demonstrated its success”, they continued to fail. If we are to recognise that pedagogy is not limited to content, what are the unintended didactics for these students (and for the teachers, SMT and other school staff)? In one of the pilots based on reading backlogs, a teacher noted that there are students who struggle to read at all, whose backlogs are so fundamental that the methods of the intervention are ineffective. This teacher felt that “it [the intervention] was meant for them”, giving the opportunity to reflect on how these students then go missing. That is, if the interventions are somewhere guided by the notion that students have been failed by the schooling system in some respect, what does it mean for those students who are failed again by these interventions? If we say, “no student left behind” and then, with all the relevant qualifiers, leave some students behind, what does that mean? If we do not say “no student left behind”, what does that mean? To what extent are these issues apparent in the very conceptualisation of a pilot and therefore in the model of interventions? And how much of this is connected to the continued failure of programmes to assist those who are struggling the most?

We have alluded to the fact that these questions are not limited to the pilot phase; however, the conceptualisation of the pilot guides some of them. By their nature, most interventions that aim to provide changes that can be implemented at a systems level are not founded upon the assessments of individual learners but attempt to identify more systemic barriers and alter them. Those students whose barriers are not affected by these systemic changes are thus less likely to see improvements. These interventions are aiming to help as many students as possible, as efficiently as possible. But how does this appear to those students and their peers, educators, caretakers and others? The pilot phase's emphasis is on learning about the implementation of a programme design and demonstrating the viability of this programme. That is, this is a particular weighting of the aims where both emphases threaten to neglect those students for whom the programme design is not viable. It is argued that this should be intentionally and explicitly accounted for. As an example, I recall teachers we interviewed in a pilot school who raised the idea of interventions providing diagnostic assessments for students, particularly those with challenges the intervention and the school were unable to address, so that they could be better informed, and the students and their caretakers could be better equipped to address their challenges. These teachers felt that some students' challenges could be addressed neither by the pilot programme nor by themselves, but they were considering how they could still contribute to ensuring these learners got the assistance they needed.

Reference list:

TechTarget Contributor (2013, June 13). pilot program (pilot study). CIO. https://www.techtarget.com/searchcio/definition/pilot-program-pilot-study

Regional Educational Laboratory Appalachia (2021). Learning Before Going to Scale: An Introduction to Conducting Pilot Studies.