The other day I agreed to read through an evaluation report written by a colleague following some training he had facilitated in Nigeria. The headline comment was the number of individuals who had completed the training and the capabilities they had consequently gained. When I asked him how he knew the Nigerians would incorporate their training into their work, he looked at me blankly. Browsing through other reports, I found a similar story: evaluation was focusing on outputs rather than results. Searching for the reason, I found that the initial training requests had been agreed on the basis of numbers trained. With no pressure to justify the effectiveness of the training, we hadn't bothered. Both sides were ostensibly happy: we could boast about how we were helping to develop the capacity of the Nigerian security sector, and they could publicise progress through their willingness to complete internationally recognised courses. Relegating monitoring and evaluation to an afterthought appears to typify the approach taken by many projects, and it has led to deep concern over the effectiveness of such efforts in helping to ensure projects meet their objectives (Anderson, Chigas and Woodrow, 2007).
The aim of monitoring and evaluation is to ascertain the relevance and achievement of objectives, impact and sustainability (Popovic, 2008). Rynn and Hiscock (2009) suggest that evaluation of projects in the security and justice sectors is done badly for many reasons. Some are challenges facing projects in general: staff find evaluation burdensome, incentives to invest in it are weak, it is poorly funded, and donor-driven targets distort priorities. Others are specific to security and justice projects. Both sectors are complex, so it can be hard to isolate and evaluate changes; programme objectives can be deliberately vague to allow space to develop; projects can have multiple strands and budgets with little cohesion between the various mandates; actors can have limited understanding of evaluation processes; and in fragile environments it can be difficult to gather evaluation evidence. The result is that monitoring and evaluation is frequently not done, and if done, not done well.
What Needs to be Done
What needs to be done is very clear. Yes, there are many challenges involved in monitoring and evaluating projects in post-conflict areas, but tools, guidelines and systems already exist for other contexts and require only a degree of adaptation (Organisation for Economic Co-operation and Development, 2011). Rather than leaving it a complex process that is poorly understood and therefore avoided, project managers need to ensure that individuals skilled in this area are employed and that all other workers understand the importance of carrying out monitoring and evaluation. The best way to make this happen is to plan for monitoring and evaluation right from the inception stage of a project.
As Vandemoortele (2015) states, there are two fundamental reasons why security and justice projects need to be adequately monitored and evaluated. The first is so we can learn from our failures; indeed, without adequate evaluation we may not even realise that we are failing. In Iraq following the 2003 intervention, several organisations ran projects to assist the state in managing its newly organised agricultural sector. These projects were seemingly successful in themselves, but a failure to monitor and evaluate their impact meant that much-needed help for farmers to grow and distribute their produce was overlooked, and consequently produce ended up rotting as people starved (Hassin and Isakhan, 2016). Funding is a finite resource; it is therefore essential that truly successful projects are identified so they can be scaled up or replicated, while unsuccessful projects are either restructured or closed down.
The second fundamental need for monitoring and evaluation is so we can highlight positive achievements (Vandemoortele, 2015). Documented evidence of success obtained through monitoring and evaluation can serve as a catalyst for attracting further funds and can help convince recipients of the credibility of the projects. It can also identify projects that are having similar effects in the same communities, and so help refine and deconflict objectives to ensure that resources across all projects have the maximum effect in the targeted communities.
Monitoring and evaluation needs to be an integral element of all security and justice projects in post-conflict areas, as it is the only way to determine whether projects are successful. To overlook monitoring and evaluation is to risk consigning valuable resources, time and effort to projects that do not work, and to forgo the valuable lessons of projects that do.
There are two main reasons why monitoring and evaluation are not done well. First, I believe they are poorly understood. Within my own organisation, the British Army, external evaluation cells across the training establishments were the first to be cut when it came to finding savings because their purpose and value were not understood. The same applies when it comes to the work we do abroad helping to improve the capacity of foreign armies. People are willing to release funds to send across training teams to conduct the training because there are tangible outputs – hands to be shaken, photos to be taken. It becomes extremely difficult to persuade the budget holders to follow up the training with evaluation because it is seen as taking funds away from further training.
This leads on to the second reason: an unwillingness to invest resources. As stated above, monitoring and especially evaluation in post-conflict environments can be challenging. Without a clear idea of how they could be done effectively, it is easier to do nothing. Additionally, more often than not, projects are competing for funds and are under pressure to demonstrate value for money. Conducting effective evaluation could provide this evidence in the longer term, but in the short term it requires resources while offering no tangible gain to the project. It can therefore seem expedient to concentrate all resources on achieving the maximum results in the short term in order to secure further funds.
Anderson, M., Chigas, D. and Woodrow, P. (2007) Encouraging Effective Evaluation of Conflict Prevention and Peacebuilding, Paris: OECD, http://www.oecd.org/dac/evaluation/dcdndep/39660852.pdf (accessed 26 March 2016).
Hassin, A. and Isakhan, B. (2016) ‘The Failures of Neo-Liberal State Building in Iraq: Assessing Australia’s Post-Conflict Reconstruction and Development Initiatives’ Australian Journal of Politics and History 62(1): 87-99.
Organisation for Economic Co-operation and Development, (2011) Handbook on Security System Reform, Paris: OECD.
Popovic, N. (2008) ‘Security Sector Reform Assessment, Monitoring and Evaluation and Gender’ in M. Bastick and K. Valasek (eds) Gender and Security Sector Reform Toolkit, Geneva: DCAF, OSCE/ODIHR.
Rynn, S. and Hiscock, D. (2009) Evaluating for Security and Justice, London: Saferworld.
Vandemoortele, A. (2015) Learning from Failure? British and European Approaches to Security and Justice Programming, http://www.ssrresourcecentre.org/2015/03/13/learning-from-failure-british-and-european-approaches-to-security-and-justice-programming/ (accessed 26 March 2016).