Critique of Rethinking Canadian Aid: Chapter 13 – Canada’s Fragile States Policy
I am doing a series of articles on the book “Rethinking Canadian Aid” (University of Ottawa Press, 2015), and now it’s time for “Chapter 13: Canada’s Fragile States Policy” by David Carment and Yiagadeesen Samy. I have to confess upfront that, as a public sector manager, I have a lot of trouble with this chapter. Overall, the authors argue that the whole area suffers from serious conceptual challenges, yet they then conclude that investments were squandered when the focus shifted from one area to another.
The authors set the stage this way: “The complexity of dealing with, and responding to, fragile situations is reflected in the way CIDA has generally allowed ‘a thousand flowers to bloom,’ to support partner organizations, academics, and NGOs that work on state fragility. Indeed, when it first appeared on the scene, as an idea in search of a policy, just around 9/11, the concept of state fragility brought with it a new and complex understanding of how donors and civil society interact and use analysis to support their policies. Given CIDA’s prior investments in conflict analysis, peacebuilding, public policy, and consultations with civil society, it could be assumed that the agency would have been prepared to address these challenges. Such was not the case, for a couple of reasons. First, if one examines the evolution of CIDA’s fragile states analysis and policy, we see that initially at least the organization relied on a number of initiatives that emphasized transparency, collaboration, and value-based analysis. This is because, at the time, CIDA turned to the academic, humanitarian, and NGO community to build analytical support for its policy developments. The truth is that a lot of the momentum and investments made during this period were either squandered or forgotten as various donors, including CIDA, scrambled to shift their emphasis from support to civil society (1994–2002) to state building (2003–14) with the onset of the Iraq war following 9/11 (Carment et al. 2010).”
So here’s my question: how can the analysis conclude that CIDA wasn’t “doing it right” when no one knows what the “right way” actually is (or was)? Given the conceptual issues, the newness of the area, and the complexity of the problem, there is no one right way. Hence a shift could be worse, better, or simply neutral. But that isn’t how the chapter is written. The authors concede that “State fragility as a concept is relatively abstract and mostly unclear in terms of cause and effect,” yet they still claim momentum was lost, investments were squandered, and that “bold, decisive, forward-looking action was in short supply.” All we really know is that multiple donors shifted emphasis, with no clear evidence either that the old approach wasn’t working or that the new one would work better. In the face of such uncertainty, basic strategic planning says to be cautious, stay diversified in approach, and not over-commit to “bold decisive action” that could be headed in the wrong direction.
The chapter continues: “These include several tools such as the ‘Conflict and Peace Analysis and Response Manual’ (FEWER 1999) and the Peace and Conflict Impact Assessment (PCIA), which were never fully operationalized and integrated into policy making.”
Before one concludes that these were failures, shouldn’t there first be some evidence that they were useful approaches that worked and thus should have been fully operationalized and integrated? That seems unlikely, given the ambiguity surrounding the programming area. Yet their “evidence” is that the “scandal-plagued PCIA initiative” did great work between 1997 and 2003. Having worked in the Department during that time, I have a very different perspective. While it may have produced some good work, and I don’t wish to denigrate it, it was always a sidekick to normal development programming.

Put more simply, and less normatively, the issue was this: development is considered complex and hard to unpack, yet years of work have allowed certain best practices to emerge, and donors do have some general directions to follow. Fragile states, and even peacebuilding, were chaos. (Peacebuilding suffers from the same nomenclature problem that saw WID (women in development) become GE (gender equality), and capacity building become capacity development: you can’t “build” peace, you can only help the participants develop their own.) All of the analysis boiled down to “it’s large, it’s chaotic, nothing works the way development should.” It was an intractable problem that people wanted to throw money at and which, not surprisingly, attracted little interest around the department. It was not, by and large, development; it was what you do when development isn’t possible. A step above humanitarian assistance, but generally only a small step, and hence of very little interest to most CIDA employees, who didn’t see it as “development”.
The authors go on: “On the other hand, no self-respecting policy analyst at CIDA publicly decried the lack of space for decisions formed on the basis of good empirical evidence. That fact and the unwillingness at the political level to do things based on evidence unless it is expedient have been disconcerting. Canada’s failure to heed the evidence may well come home to roost as the situation in the Middle East worsens.”
This is pretty much where the analysis disappears and empty rhetoric reigns supreme. The authors set up a premise that says “it’s chaos, nobody knew what to do conceptually, there was no framework,” and yet now not only has CIDA failed to take decisive action (whatever that might have been), but CIDA staff are also to blame for not pushing for evidence-based decision making, despite the complete absence of anything resembling evidence or data?
When the authors decide which claim is true, either that there is no framework for what works and what doesn’t, or that there is a framework they can articulate and measure performance against, they might have a coherent argument. Otherwise, this chapter could easily have been omitted, and probably should have been.