Part 2: Building a response to education – Reflections on the approach to school-based education interventions, or Kaleidoscopic Chameleon Eyes


Opportunity for reflection:

A concerted effort from a range of actors produces interventions and studies in the field of school-based education. This piece is a continuous and dialogical reflection on the general models by which the field is approached, as well as some of the thinking that guides them. Continuous participation in this activity of reflection means thinking about the guiding principles and components that might otherwise switch to autopilot. It is a willingness to interrupt, so as to continue to promote an environment that produces, supports, and sustains innovative approaches to education.

 

Tracing a model:

The landscape of school-based education interventions follows a scientific approach: a problem is identified; programmes that respond to this problem are identified and implemented; this process is monitored and evaluated; and, from this, the benefits and failures of these responses are assessed and built upon. Generally, a successful response might be returned to, while a failed response would have to be adapted before being utilised again. Two key questions in the field are whether an intervention's success generalises beyond the schools in which it was piloted or initiated, and how it can be scaled up.

 

For example, there have been findings that a mentoring system for teachers can produce an increase in learner performance. However, providing mentors to teachers in several schools to assess the impact of this response on school results is different from producing a national model of institutionalised, programmatic mentoring and establishing the feasibility of doing so. That is, "does it work?" does not wholly coincide with "is it feasible as a large-scale response?". Once mentoring is recognised as a viable option, subsequent interventions might take it up again, tweaking it in an attempt to respond to the questions of scale or generalisability. Tested in new contexts and with slight adjustments, a clearer understanding of the conditions for mentoring's success might appear. For example, the mentor might make fewer face-to-face visits than in the first pilot, supplemented with virtual mentoring. Such a model allows for building on previous findings.

 

As this process continues and produces results, with more interventions building on previous successes and failures, the bank of findings allows for more robust models. This is not necessarily linear: it is not that each intervention adds more successful impacts on education outcomes than its predecessors, but rather that the knowledge base (the articulation of levers of change, of how effects come to occur) becomes more nuanced and robust.

 

Balancing within the model and the internal split:

An intervention aiming to build on previous successes must consider the extent to which it departs from previous findings in its attempts to solve other problems or limitations, and its contribution to an understanding of the mechanism. Let us return to the example of mentoring. Once it has been established that mentoring works at a certain frequency, a follow-up intervention might be useful in finding whether the frequency can be decreased for the sake of feasibility. For example, if coaching was found to produce the intended effects at a frequency of one mentoring session per week, what happens if this is reduced to a session every two weeks? If the aim were greater precision in the understanding of the effects of frequency, then isolating that variable would be ideal: run the same project with a range of frequencies of mentoring support, all else equal, and we develop a better model of the role of frequency in the effects of mentorship, and potentially a better understanding of how it produces effects. But this means multiple projects run on the same variable, costing resources, including the time of schools and learners.

This is a point at which an internal split becomes apparent. The internal split referenced here is in relation to the aims. If one aim is to directly impact learners, learner performance, the schools and so on, then another aim, or another component of the aim, is to add to the knowledge and evidence base. The internal position of this split is what we might want to reflect on. That is, as part of this model both aims exist and overlap, and the space between them can go missing.

In an attempt to provide an innovative programme that addresses a whole range of issues, there is the risk of straying from the evidence, and in this scientific model there is a constraint on programmes in that they cannot stray too far from the evidence. Where, for example, there is evidence of success for five separate strategies from previous pilots but no evidence of their combined outcomes, a pilot attempting two of these strategies might have more evidence-base support than a pilot trying to utilise all five, let alone a pilot that wants to utilise all five and add some unsupported strategies. "Might" is used because the theoretical support for these mechanisms might also assist in supporting the combination of strategies, or highlight the likelihood of interference or a combined effect other than a simple addition of their individual effects. Again, what is to be highlighted and reflected upon is the internal nature of these splits. Obviously this article is aiming to make those splits apparent, but they do not always appear in project designs as separate aims that need to be weighed up.

 

What this means for interventions is a different can of worms, so let us explore one possibility as an example. Schools and teachers are also involved in the building of responses, in different ways and potentially with different aims, but they work within constraints: individual learners, changing curricula, external effects such as load-shedding, and so on. That is, schools and teachers have their own experience of an experimental (knowledge-producing) model of responding to learners. How might they consider the aims of any intervention in their school? To be clear, teachers and SMTs should not be reduced to one response or one simplified stance. Where teachers and schools are appreciative of interventions and their offerings, this should not crowd out the possibility of other simultaneous responses, nuance, conflicting positions, or other more complex positions. If the teachers or the SMT have some sense of the (necessary) multiplicity of a project's aims, whether this is articulable or not, how does this affect their relation to the project?

 

While there is overlap in many areas (improving learner performance, gaining a better understanding of issues and solutions), one can consider the gap here. The teacher's relation to the knowledge base of a nationally scalable model of improvement is different from the government's or the implementers', just as the school's relation to the constraints and evidence base for this scaling will differ from the implementers'. This threatens to be lost insofar as monitoring and evaluation is catered to the aims of implementers, funders, and government. The generalisability of intervention results and the scalability of intervention protocols are not the professional focus of teachers and SMTs in the way they might be for other stakeholders. So, for example, when a teacher's critiques and suggestions regarding an intervention diverge from the aims of the intervention, one must consider the teacher's relation to this internal split. The teacher is expected to take up not only the aims of the intervention but the balancing of its component aims. Consider a teacher who understands that the aims of a programme are never as simple as "here is the final solution to your problems", but who is expected to implement it as such, and whose critique of the intervention is then measured against aims, and a weighting of aims, that are not their own. So when analysis finds that teachers do not understand the aims of the programme, something is being lost in relation to the workings of the models by which responses are built. Such misalignment is key.

 

There is another internal split that can be quickly touched on: the potential to learn about the process of learning. While learning processes are reflected upon, this often appears in the framing of a kind of skill development: M&E practitioners learn how to do the work of M&E better. But it is not framed as a contribution to the study of M&E as a discipline. As previously alluded to, the aim to learn about M&E is always a component of M&E, insofar as it is always an adaptive process that responds to the specific programmes it is engaged with, but the findings from this process are not given authority as products of the process. A commitment to producing such study could accelerate the M&E capacitation of implementers of education projects and accelerate innovation in the methods used by M&E practitioners.

 

Limits of the evidence-based response:

This has been thinking with the model of evidence-based solutions. I do not think the merits of such a system need to be expounded in much depth here. This model, as has been discussed, provides a knowledge base with the continuous aim of building on previous findings to produce more nuanced and robust solutions. There is a sense in which this model is also constrained by its evidence base. Insofar as it aims to respond to problems, it is constrained. Insofar as it intervenes to change the outcomes of the current system, it is constrained. But this is not to say it has failed; this constraint is part of the model's robustness. However, excluded from such a model is the wholesale rearticulation of aims. That is, the aims of education are broadly set and defined, and the interventions are largely a shifting of outcomes and an adaptation of aims. We do not have pilots that aim to test a new type of education system so much as pilots geared towards improving on the aims of the existing education system. From government's perspective, the evidence base tells the same story, and the evidence for alternative education is largely siphoned off from such improvements. Structural analysis of the education system as such tends to go missing amid the proliferation of interventions. That may be an obvious point: we are here concerned with the M&E of interventions and not with a structural analysis of education (and the position of education). But there is a way in which this model stands on already built foundations, bypassing the critiques of those foundations. In what ways can the intervention model speak more directly to structural analysis and pedagogical theory? The question is not how to collapse them, as they are differing fields, but how to strengthen their position as complementary projects, or at least as projects concerned with similar issues that can expand one another.
