I am doing a series of articles on the book “Rethinking Canadian Aid” (University of Ottawa Press, 2015), and now it’s time for “Chapter 12: From “Children-in-Development” to Social Age Mainstreaming in Canada’s Development Policy and Programming? Practice, Prospects and Proposals” by Christina Clark-Kazak. In the interest of full disclosure, I knew Clark in a previous incarnation at CIDA, but as with my review of Swiss’ chapter, that probably won’t mean much in terms of my review of her material. I wanted to mention it upfront as I really like the theme of the chapter — mainstreaming “age” vs. “children-in-development”, the modern-day equivalent of old “women in development” programming.
Second, for biological or social reasons, people of different ages may experience poverty differently (Sumner 2010). For example, children under the age of five have specific nutritional needs that may not be adequately met in contexts of poverty. […] Third, development initiatives have differential impacts on people at different stages of the life course. For example, León and Younger (2007) and Himaz (2008) have assessed the impact of grants to poor households on children’s health in Ecuador and Sri Lanka, respectively. They conclude that increases in household income do not necessarily translate into better health outcomes for all members of a family.
I agree these are important points, and perhaps not vastly different ones (maybe two sides of the same coin), but they convince me less of the need for mainstreaming than of the need for a disaggregated approach for different age brackets. Interesting, but unconvincing.
Drawing on the importance of variations in social constructions of age across time and place, I have developed the concept of social age to complement the predominant focus on chronological age.
I am not convinced that this is a wholly “new” approach, maybe just one that is not well-discussed in international development, partly because international development regularly treats “international” as completely different from domestic policy. But health has always done “adjusted” ages based on biological factors such as premature birth or mental competence; social work regularly adjusts for the maturity of the individual, socio-cultural norms, etc. It might be “new” to international development, but it is not a new concept. And while there are ages in the Convention on the Rights of the Child, there is another element that is in the CRC and fleshed out in full in the Convention on the Rights of Persons with Disabilities: people (adults, children, persons with disabilities) should participate in decisions that affect them to the best of their capabilities, which would include age, mental capacity, etc. I don’t disagree that an analysis of age and generational issues could be useful, but is it a full separate analysis or is it just a part of “social analysis”? Would it need to be fully mainstreamed?
The analysis that follows of the Children and Youth Strategy offers little evidence of anything. Lamenting the lack of reference to elders or old people is hardly relevant; it wasn’t written as an “all ages” strategy, it was about children and youth issues. Equally, the scope of the document was not to be a fully comprehensive picture of everything that could ever be done for children and youth, nor of all their issues. Choices were made; some things remained in and some were outside the scope. So it is hardly surprising that it doesn’t contain a whole host of other possible references, even if most of it presents children as “human becomings”. I also disagree with the analysis that talking about the “safety and security” of children paints them as victims rather than rights holders. While that complaint may, in some instances, hold water for a gender analysis that talks about the security of adult women, this is about children, who are in most instances incapable of taking care of themselves. Put differently, they are not fully autonomous entities whose rights are being trampled if they need assistance in being protected. Not all of the gender concepts translate directly.
Indeed, there are many critiques of mainstreaming in development policy. Although intended to effect deep organizational and structural change (Hartsock 1981), mainstreaming may actually depoliticize radical agendas by incorporating “language” into technocratic planning and programming without changing the reality on the ground (Hankivsky 2005). Mainstreaming has also been critiqued for its “fuzzy” approach (Booth and Bennett 2002), resulting in mixed or counterproductive application in practice, or being “everywhere but nowhere” (Tiessen 2007). In some cases, gender is simply equated with “women” without adequate attention to “the wider context of power relations caused by societally defined … gender roles” (Groves 2005, 7; see also Clark-Kazak 2009a). Finally, the complexity of mainstreaming requires a long-term, multi-staged process (Donaghy 2004; Moser 2005).
I love this paragraph. I don’t totally agree with the criticisms of mainstreaming, but it would be a powerful place to start this chapter. Perhaps a paragraph to explain gender mainstreaming and its importance, then the limitations/criticisms, then a focus on how age mainstreaming could be done differently, perhaps explaining in a simplistic fashion (a table?) what WID gets you, what WID+ gets you, and what GE gets you. Followed by a similar table for CiD. One challenge I see is that most of the examples identify “issues”, but give very little concrete info on how an analysis of that issue will result in different (and better!) development outcomes rather than just words in a report.
Interesting ideas, but the case for mainstreaming remains undersold for me.
I am doing a series of articles on the book “Rethinking Canadian Aid” (University of Ottawa Press, 2015), and now it’s time for “Chapter 11: Gender Equality and the ‘Two CIDAs’” by Rebecca Tiessen. I need to confess up front that I probably won’t have as much to say about Tiessen’s chapter — while I agree with the “facts” she relies on, I’m not sure I share her interpretation of them as setbacks. Most of this is because I am not looking for rhetoric or inventing academic paradigms; I’m a professional government manager who has to interpret policies like these and figure out how to implement them.
Over the forty-seven years of its existence, CIDA progressed from a “women in development” (WID) approach to a gender equality approach to development programming. However, between 2009 and 2013, two key developments set back the progress CIDA had made in this area: (1) the partial, but significant, erasure of the term “gender equality” from official policies and government speeches when the Harper Conservatives shifted their language to “equality between women and men”; and (2) the introduction of the Muskoka Initiative on maternal health, signalling a further retreat from gender equality programming by targeting mothers as “victims” and beneficiaries of development services rather than active agents in the design and implementation of development programs. These initiatives heralded the return of a WID approach, at best, and the increasing prevalence of a charity model under which the government assigns money to solve development problems without the active participation and engagement of local communities in the design and implementation of their own development projects.
To account for the change, Tiessen talks about the difference between the public face and the operational face. I agree with her, wholeheartedly, because much of what was done was public positioning. As with my complaints about earlier chapters, relying on the “public” pronouncements is a bit risky since they are often about playing to particular audiences more than about actual change. Private sector development, country concentration, links between trade and development all showed large-scale announcements of new approaches, but as Swiss showed back in Chapter 2, the change was more rhetorical than real. CIDA has kept doing development for developmental reasons, regardless of what the press releases say.
The 1999 GE Policy acknowledged that women and men have different perspectives, needs, interests, roles, and resources, and that development programming must address these differences (DFATD 2013b). The hard work that went into developing the GE Policy can be attributed to a commitment to gender equality among mid-level CIDA staff members. Over time, critiques of CIDA’s 1999 policy and programs began to emerge. In particular, CIDA’s official evaluation of the implementation of its Gender Equality Policy (Bytown and CAC 2008) and the civil society response by the Informal CSO Working Group on Women’s Rights (WGWR 2009) identified a number of areas for improvement within CIDA’s gender equality efforts. CIDA was criticized for having a tendency to compartmentalize its gender equality processes; a propensity to spread the responsibility of gender mainstreaming too thinly across the organization; and a lack of clarity from CIDA to its stakeholders and development recipients regarding their gender equality documents and programs. CIDA staff members and critics alike attributed the shortcomings on the gender front to the lack of funding for GE programming. Indeed, the official evaluation of CIDA’s gender policy noted that the agency, while recognizably committed to long-term GE initiatives, had not committed resources to GE-designated programming initiatives that were commensurate with its stated GE policy objectives (Bytown and CAC 2008). It attributed the failure largely to senior management officials who had not made gender equality a priority and so allocated insufficient financial and human resources to GE in programs and projects (Bytown and CAC 2008).
There are a number of nuances in that extended excerpt that I think require a bit of tweaking. First of all, yes, there was hard work completed in developing the GE policy, and some of it was from mid-level staff. But there were also ADMs, DGs, and junior staff who played quite heavily on the file in terms of supporting it, demanding it, and pressing for guidance and a clear document to guide programming. When it was finished, it was lauded by the GE network — while a bunch of non-GE specialists read it and said, “huh?”. The policy had 9 great principles, but when it came to saying how to operationalize it as a mainstream principle, most of it reduced in operations to saying women had to be consulted on program/project design. And while Tiessen and the cited evaluation say not enough resources were spent on GE projects, the policy very clearly argued that GE was not a one-off programming category (like WID) that required separate and unique programming, but rather one that would be fully integrated into all programming. There’s also no mention of the impact of the Social Development Priorities of the early 2000s on coding.
Take, for example, a project targeted at basic education and girls in Africa. Let’s assume it is a $10M project — or rather, two almost identical projects in two neighbouring countries — aimed at involving women in educational programming, increasing participation, etc., ticking off 5 out of the 9 principles and goals of the GE policy. If the project was created in the late 90s, the GE specialist who ran one project would code it as a GE project; the education specialist running the identical project in the neighbouring country would code it as an education project. Some would get creative and do percentages that made no sense — how could a fully GE-integrated project be “40% GE and 60% education”? But the SDPs of the early 2000s forced a much more stringent coding process — if it was health and nutrition, education, child protection, or HIV/AIDS, CIDA learned to code it as 100% of those initiatives. Nothing that was GE-focused was left coded that way. So it is not surprising that a few years later, everyone was still coding things as not uniquely GE, and the evaluation found little separate funding. Which, according to the 1999 policy, wasn’t the right way to do programming anyway. It was a cross-cutting file, not a unique target category, partly out of the philosophical approach of avoiding old-school WID-type projects that had been deemed less effective.
I don’t disagree, personally, that management may have devoted too few resources or spread their HR too thinly, but that’s a pretty subjective line. There’s no mathematical formula that says there should be x FTEs allotted per area, and to be blunt, there were far more demands than there were resources available. Relying on the GE folks as the citation to say too few were devoted (as both Tiessen’s research and the evaluation did) is not surprising — equally, health evaluations say not enough resources are devoted to health; evaluators of evaluations say not enough money is spent on evaluation; auditors will tell you more money needs to be spent on audits. To come to that conclusion for the paper, I would need to see a lot more international comparative data, and in the end, it’s still managerial discretion. There’s no universal normative guide to determining how many FTEs are needed for a horizontal theme.
The SDPs also caused a number of sources of external funding to appear to “dry up”. But even as the NGOs and CSOs that were GE specialists no longer had GE-earmarked resources, the funds going to social development projects with strong GE components went up.
CIDA, like many international agencies and development organizations, was criticized for integrating gender equality — or gender mainstreaming — “everywhere but nowhere” (Tiessen 2007). Maneepong and Stiles (2007) note in their evaluation of CIDA project monitoring that some of the major challenges of integrating gender included ingrained institutional attitudes that regarded GE as unimportant. They found that senior managers failed to appreciate the importance of gender dimensions and achieved relatively little in advancing women’s equal participation in decision making and reducing gender inequities in access to — and control over — resources.
Again, I’m not sure that senior managers didn’t think it was important so much as they had 22 things they were supposed to consider, all competing for attention, and the support from the GE side was not as strong as it could have been. In some ways, the failure to garner more support in concrete terms is not unlike other parts of CIDA that were arguing for a “rights-based” approach to development. When operational managers said, “Great, show me how to do a project in that manner”, all they got was fluff. The GE material was much more advanced, and with better examples, than the human rights material, yet still the training offered by the GE folks would frequently leave people scratching their heads — full of rhetoric and hortatory policy language, with few examples of what to do on the ground to fully mainstream GE in their project other than “consult”. This was not true of the GE specialists in programs themselves, who could speak project management, but it was typical of all large-scale policy work in CIDA from 1996 to about 2005 (perhaps longer, but I can’t judge that as well).
In short, policy people spoke in policy terms and were very disconnected from actual project management. So policies would go to committees for approval and then dissemination, and would mean very little to the people managing projects — there were no steps to follow, models to choose, processes to implement, all the things that project managers look for when managing projects. The “vision” was completely disconnected from the “operations”, and while we are talking here about GE, it applied equally to human rights, much of the environmental work, most of the private sector and trade work, and to a lesser extent, health. (I note that health and nutrition work was disconnected to a lesser extent because development and health policy, by and large, is frequently linear from lines of evidence to policy goals, and back again to operations, thus making it easier to interpret without a policy-to-operations translator.)
Anything that had to do with gender equality sat on the minister’s desk for long periods of time without obtaining approval for funding (interview participant). Other studies have noted that removing the words “gender equality” from funding proposals was essential if NGOs wanted to improve their chances of having their projects approved (Caplan 2010; Carrier and Tiessen 2012; Plewes and Kerr 2010). The kinds of projects that CIDA funded under the Conservative government were those that demonstrated service delivery and outputs such as “health care services to so many women, getting so many girls to school … [because] that’s something you can count and it was what could be promoted and approved” (interview participant). Several interview participants noted that the Harper Conservatives did not support projects that might not have numbers attached, such as gender mainstreaming or capacity building, because they believed these approaches to development would not resonate with Canadian taxpayers.
That paragraph is pretty powerful stuff. It sounds completely damning, except for a few small omitted facts. For example, a lot of projects unrelated to GE were also left sitting on the Minister’s desk. The GE participants interviewed didn’t think about that because they only know their own area, but ask a few ADMs how many files were sitting unapproved and whether they were unique to GE, and GE probably wouldn’t even make the top 5 list of reasons. However, if you ask how many pending projects there were without clear demonstrable results identified, the list would be quite long. It was a fundamental shift in approach — and, sorry to burst CIDA’s bubble, not limited to CIDA. All departments got hit with the same questions: What are the results? When will we see them? How many are tied to current funding? Why are we doing core funding of organizations, where we get general results, rather than project funding, where we can see if they perform or not? With the light on those two facts — not just GE and not limited to CIDA — the complaint that it was all about GE is not quite so damning.
In sum, the Muskoka Initiative failed to penetrate the gendered societal norms that prevent women from accessing health services even when they are available and has limited potential for improving the quality of life for women who still have little or no say over reproductive rights and child spacing.
The overall analysis of the Muskoka Initiative is accurate on the facts, but seems a little off to me on policy, particularly in light of past G8 announcements on development. It didn’t do all the good things mentioned because it wasn’t supposed to — it was designed (and directed to be designed) as a high-profile G8 initiative that could and would produce demonstrable short-term results. If you treat the MI as a nutrition project, or a health+ project, rather than a GE project, then most of the criticism disappears. It doesn’t deal with reproductive health? Not within scope. It doesn’t deal with rights? Not within scope. One can argue that perhaps it was a missed opportunity, but arguing it was another giant shift is a bit light on evidence. That reading only works if you first assume that “the public face” is the only one that matters, and that being about women “makes it GE” — when the evidence of the rest of the chapter was that it wasn’t. So why would you use one initiative that wasn’t designed as GE as the vanguard evidence of a sea change in GE programming? I can understand it from the GE specialists (they all got excluded from what they thought was going to be a huge pot of money for them to spend), but I expected that an arm’s-length academic might have pulled a little more and obtained some better evidence.
I am doing a series of articles on the book “Rethinking Canadian Aid” (University of Ottawa Press, 2015), and now it’s time for “Chapter 10: The Management of Canadian Development Assistance: Ideology, Electoral Politics or Public Interest?” by François Audet and Olga Navarro-Flores. Remember as you read my comments that my interest is not in reviewing it from an academic perspective, but rather in whether it has any value-added from a managerial perspective. Where it doesn’t, my comments may seem a lot harsher than they are — it may be perfectly fine as an academic contribution, I just have little use for it as a manager.
We specifically consider whether [decisions by the Conservative government] were made on the basis of ideology, electoral politics, or the public interest. Our goal is to reflect on Canadian aid from a public administration perspective.
Colour me intrigued. Very few academics will pull themselves out of a policy paradigm and get their hands dirty with public administration, so I’m curious to see how the framework works out.
In the context of public administration, Thomas (1996, 96) characterized ODA management as Development Management, a union between management and international development that implies the inclusion of aid-related values, such as equity, political participation, and gender equality. Development managers in public administration thus distinguish themselves through specific attributes and skills connected to challenges and realities that are different from those found in other areas of the public sector. They must therefore reconcile notions of NPM efficiency with “Third World” ideals (Dar and Cooke 2008, 15).
And there went most of my interest. First and foremost, all the rhetoric about “new public management” is, in my opinion, well, rhetoric. It is more like a four-letter word starting with C and rhyming with RAP, but there isn’t much room here to destroy it in detail. I’ll have to make do with a few small critiques.
If one looks at the elements of NPM theory, you see arguments that the “new government approach” is distinguished from previous approaches by a set of key elements — decentralization, privatization, results orientation, management practices, and participation. On the extreme end, some argue that those elements never existed before; others say they are just so different that they represent a whole new paradigm.
But here’s the reality, and I’ll look mainly at program design. In the 1920s, “modern government” operated on the underlying theory that the only reason to do something in government was a basic market failure — the private sector couldn’t or shouldn’t do it. Ergo, nobody considered “private sector” policy instruments because the decision was already made. Fast-forward through 90 years of experience, and you can see the evolution: in some instances, a component can be done by the private sector. That doesn’t represent a whole new paradigm; it just means we have a broader understanding in government of possible instruments beyond tax, regulate, or spend. Equally, some things end up being privatized over time — not because of a new paradigm (although that can happen), but because the government comes to the conclusion that it doesn’t have to manage that part of some service area. That is just a recognition that the choice of instrument and the choice of who delivers it are really two separate choices to unpack, not a single embedded decision.
Similarly, results orientation and participation are not new concepts, but they do “look and feel” different. Some of that is simply computer-based. Back then, many program design practices, like consulting stakeholders, were also used, but not in the way we understand them now. Now we would ask the stakeholders what their needs are; back then, we didn’t have any real mechanism to do that well and roll it up into anything interesting, so it was more low-end qualitative research or estimation based on small in-house peer groups — more anecdotal extrapolation, because the data we have now wasn’t available. Over time, we have improved our ability to collect and analyze “evidence” in ways that didn’t exist in the 1920s. Even something as simple as tying policy design to census data wasn’t possible without a computer to tabulate and sort the data quickly and reliably, let alone the real-time applications we have now.
Academics wrap NPM in a private-sector flag and say, “Hey, look, the public sector looks like the private sector finally”, when in fact, much of what the private sector is doing has changed too. Back in the 1920s, there wasn’t a lot of market research being done. Companies took things to market and then waited to find out if they worked, just like governments did. Now they do deep penetration research on market niches, which government types call consultation with stakeholders.
So when someone says, “Let’s talk about public admin”, I’m in. If they then say, “And all these new things government does”, I don’t think it’s true. I don’t believe the construct called “new public management” really exists, except in a theoretical academic exercise, and even then, there isn’t a lot of rigour in what they claim is “new and different”. Which then means when someone says “let’s look at aid from an NPM perspective”, I’m doubtful it will reveal much.
If development managers make the decisions, they should be based on development considerations, which entails separating politics from the administrative functions.
Only an academic exercise could suggest that development in any form isn’t inherently small-p political. Distribution of resources? Power? If development doesn’t inherently affect those, it isn’t development — and those are exactly what all of politics is about. There is no way to separate them. And only the most ardent lover of mandarin stories would list donor managers as separate actors from the political realm that gives them their power in the first place. To do so totally misunderstands the entire basis for the link between “governance” and “government”. The only realm where the separation would be relevant is the NGO sector, not the public provision of aid. Equally ridiculous is taking announcements by the government, backed by the stated rationale at the time, as the sole reason why something is done. Things are almost never one thing or the other.
One of the key factors that emerges from this list of decisions has to do with the public administration’s management effectiveness. Influenced by the NPM perspectives, the budget cuts and merger of CIDA and DFAIT to downsize government, along with the fact that 13 percent of the ODA budget went unspent, seek to improve program management efficiency and reduce operating costs.
Sure, those are valid reasons. But they are not the only ones. Questions of merging CIDA and DFAIT did not suddenly crop up as a way to reduce costs. In fact, those pressures to merge have been around since they were separated in the first place. Audits, machinery of government (MoG) discussions, program reviews — all had the same question. Could you save money if you merged them? Sure, in some ways. But does it make good policy sense? Up until recently, the answer was “no”, and they weren’t merged. NPM be damned. Other departments have been split, and merged, and resplit and remerged in the same time frame. Same TBS people, same MoG pressures, but the policy direction didn’t show a benefit. It wasn’t NPM that made the decision; it didn’t even drive the decision. In fact, it was irrelevant. Well, actually, that’s not entirely true. It did affect one thing — timing. Not surprisingly, during times of great upheaval and large-scale reviews, many things get done that could not be done as one-off items. With a confluence of events — budget reviews across government, an ideological desire for downsizing, an ongoing pendulum swing towards efficiency over effectiveness, and dissatisfaction with a lack of demonstrable development results — perhaps the Government saw an opportunity to address multiple pressure points all at once.
It is instructive to examine Harper government decisions and one case in particular of political intervention in aid program management, in which Bev Oda, then Minister of International Cooperation, overturned a CIDA administrative funding decision regarding KAIROS, an NGO working in Palestine, by inserting the word “not” (see Fitzpatrick 2011).
I might expect such an example from a pop journalist, or in a tabloid’s coverage of the political process, but not from a respected academic publication and the sources it cites. Do the authors have any idea how a Minister’s office actually works? Not the theory of public administration, but the day-to-day reality. I’m going to let you in on a secret here: Ministers almost never say no to memos. They never reject the recommendations of their public officials.
That doesn’t mean what you think it means. I don’t mean they say “yes” to everything that the public servant recommends, I mean that the public servant doesn’t send a memo to the Minister that will result in a “no”. This is a fundamental flaw in the theory of public admin. It suggests that mandarins offer blind, impartial advice to the minister. Impartial, yes, but never blind.
If there is a project or policy proposal that the Minister isn’t keen on, and it lands on their desk in such a form, a meeting will happen. And the Minister will ask questions, mostly giving guidance: “You are recommending X, but I don’t think you gave adequate weight to problem Y with the proposal.” (That’s the clean version, anyway.)
Which likely will result in the public servant going back to their desk, rewriting the memo, resubmitting it, and now saying, “Overall, this part is quite solid…” (matching what they said before) “…but there are other considerations that are deficient” (the Minister’s criticisms and concerns). Sometimes they will even go so far as to recommend against it (which is fair, since it doesn’t represent the Minister’s latest policy direction), other times they will present 2-3 options, one of which is not to proceed and let the Minister choose. But the Minister isn’t saying “no” in either case, they are saying “yes” to a well-written recommendation that is in line with their direction. This is so pervasive that memos will sit unsigned for extended periods of time rather than the Minister outright saying “no”. It happens in many departments if not all. Decision memos that the Minister does not want to agree to, or more pointedly, are simply unconvincing to them, remain in decision limbo.
So it is almost unheard of for a Minister to say “no” (or “not”) on a memo. It almost never happens. And it NEVER happens by them just adding the word “not” to the memo. But let’s look at this case, because that appears to be the way it happened. People assumed, when they heard the Minister signed the memo and added the word “not”, that these two events happened at the same time and by the same hand. Then, later, when they learned it was supposedly approved and only afterwards disapproved, they assumed that was also by the same hand. But was it?
There is a dirty little not-so-secret that lurks in the halls of power in Canada, the United Kingdom, Australia, New Zealand, and quite a few other Parliamentary democracies (it’s not limited to them, but it seems oddly more prevalent there). Ministers don’t always sign every memo they approve. They don’t. This is particularly true in organizations that have a lot of memos going through in a given year (CIDA averages about 800-1000 per year), where memos are often programmatic rather than detailed policy proposals (about 600-800 of the CIDA ones) and thus more administrative/technical, or where the Minister is travelling often (hello, the I is for International!). Almost all of the Ministries have a technical solution if the Minister can’t keep up with the signing workload — they have a mechanical arm. The Minister still approves the projects, often by putting them in piles of “approved” and “not approved”, and then the Chief of Staff will take them and have them signed by the mechanical arm.
Did that happen in this case? I have no idea. I don’t work in their office, and don’t know any of the players. Some Ministers are very comfortable using the mechanical arm, as long as the only person who can use it is their Chief of Staff. Others refuse its use at all and sign everything. In this case, I see three possibilities:
The Minister signed the memo and wrote “NOT” on it directly, rather than returning it to be rewritten the way any other memo would be;
The Minister signed the memo, and then changed their mind later and hence wrote NOT on it as it had already been put into the system as approved;
Either the Minister approved or someone thought the Minister approved, the memo was put under the arm, and then later when it was discovered already “signed”, the word “not” was added.
Personally, I have no trouble believing that the memo was accidentally signed in person or on the arm, and then the word NOT added later (either through option 2 or 3 above), but I have a lot of trouble believing the entire system was changed for one single memo (option 1) and that this represented a significant instance of political interference.
The short version is it doesn’t matter how the memo was signed or not; what matters is that all approvals of that size are decided by Ministers. If they had rejected it outright, nobody would even raise it as significant, and if the manner was merely a correction of an accident, it’s not any more significant. Put differently, dozens of decisions are made every year about development assistance with input from PCO or PMO, dating back to the 1940s, ranging from the choice of recipients to announceables on foreign visits to even which countries are selected for foreign visits. That doesn’t make this one example a special example of “political interference”, more likely just one of administrative irregularity.
Third, some NGOs’ refusal to toe the Conservative line has had other repercussions, as well. Some Canadian organizations have lost government funding for their projects relating to sexual and reproductive health, notably those providing access to safe and legal abortion for women who have been sexually assaulted or are HIV-positive. This decision can be attributed to the Conservatives’ ideological position on abortion. Organizations such as Doctors of the World Canada, which has much expertise in maternal health and fighting HIV, can no longer rely on funding from the Canadian government for those issues.
I love this example because the purity of it is so clear. The Government has an ideological bent against certain types of reproductive health programming; as a result, organizations that do that type of work have received less funding. Gasp! Cue the NGO harps! How dare they!
Except, I hate to be a little snot about this, but isn’t that the whole point of democracy? Two, three, four parties running on the basis of what they think should be done about a certain problem; citizens voting for the candidate/party/approach they think will do the things they want; the party with the most votes gets to form the government; and then they do those things the way they think they should be done. While the election wasn’t run on the basis of a single issue like reproductive health in developing countries, is there any evidence the election was rigged? Did an armed force take over the Parliament and impose its will on the electorate? Is there anything to suggest that the party in power is illegitimate in how it gained power, some evidence that would now dethrone it from making decisions? I haven’t seen any such evidence. I may not agree on a personal level with the party platform, but there’s nothing illegitimate about it. So, shouldn’t we expect them to govern according to their party platform, ideology, etc? And when they do, why do so many people suggest that they can do so only as long as they fund others (who don’t agree with them) too?
But here’s the real kicker. Arguing that it is less efficient or effective developmentally is almost a non-starter, because to do so, you have to define what the benchmark criteria are going to be. If you set it up so that the primary conditions are that religion is important to people, that the approach chosen must respect the sanctity of life and their religious beliefs, and that the best way to reach those goals is programming in line with those values rather than actively opposed to them, then any other type of programming would automatically be less effective and efficient, as it couldn’t possibly meet those objectives. And who sets the internal objectives for Canadian aid? The Government in power. There is no moral absolutism that can be pointed to that will say one method is “better” than the other, because you are already comparing apples and oranges. They aren’t trying to accomplish the same thing; it’s more like the guns and butter argument in economics.
Remember guns and butter? One party wants to build guns to protect the populace, another wants to make butter to generate trade with other countries. If the party that wants to make butter wins the election, why would you expect them to support gun manufacturers? You wouldn’t. In fact, if you voted for the butter group, wouldn’t you expect them to NOT support the gun crowd?
Yet in development or even environmental policy, we claim this is somehow unfair. That the government is only funding those who “agree with them” (i.e. so they can deliver on their platform) rather than funding those who “disagree with them” (i.e. who wouldn’t then help them achieve their objectives). I see the arguments, including about “promoting debate”. Really? That’s the best argument you can make? There was a debate. It was called an election. The other side lost. They won. You should probably expect them to fund the groups that are going to do what they want to be done. Cuz that’s how democracy works.
Note the government isn’t saying those other groups can’t exist. They’re not funding terrorist groups to attack them, or sending in the military to crush them. They’re just saying, “Just because you believe something doesn’t mean you are entitled to government funding.” And in this case, it isn’t a grey zone. The government purely disagrees. And as the governing party, they not only have the right to do so, they’re supposed to do so. That’s how elections and decision-making work.
However, unlike the claims of simply replacing a Liberal bias with a Conservative bias, or of reflecting economic self-interest (which Swiss already disproved), the switch is merely to match the “current ruling party”.
However, downsizing creates new public service problems because the layoff of experts impairs institutional memory and reduces the government’s ability to make informed decisions (Tait 1997).
I’m going to go out on a limb and suggest that neither of the two authors nor the one they cite has ever worked for a large organization. Because here is the nature of organizations with indeterminate staff or tenure and the simple passage of time — they grow. They will. Everybody wants you to do more, more, more, and that pressure to expand (distinct from empire-building, which can also happen) drives growth. It’s also extremely hard to correct without a sustained effort. Private sector organizations do it through layoffs. Public sector organizations do it a bit differently.
For both Program Review and the latest Deficit Reduction Action Plan, pretty detailed options were put in place to manage the downsizing over time. Do you know who the main exit group is? Those who were within five years of retirement. The ones who could afford to go early because they got a slightly better retirement package than they would have had without the program. It doesn’t change that they were going to retire anyway (with the concurrent loss of institutional memory); it just affects the timing. Most of them volunteered to depart.
Do other people leave? Sure, it’s not a perfect alignment where you cut only those near retirement, which in and of itself would be problematic. But if the old-timers volunteer to leave, it’s an easy way to handle attrition without having to fire people who want to stay. For the rest, though, here’s the reality. Very few people actually left the government who (a) were a good fit and (b) wanted to stay. In the Program Review era, the government actually goofed — they made the buyout so generous, more people jumped than they were expecting. In DRAP, the timing of implementation made it possible for many people to find other jobs in government. I don’t mean it wasn’t painful — the resulting jobs were not always dream jobs — but most people who wanted to stay managed to find other positions. In a government that frequently sees a 4-8% attrition rate, lots of positions opened up, and the vacancies left behind were deemed surplus. Of the ones who were “forced” out, some had productivity problems and nobody else wanted them, some had skill sets the government wasn’t looking for, some were poor fits for government, and some were awesome and just didn’t find a new position. Everybody on the “out” list thinks they were in that last group, but it was a very small percentage.
How do I know? Check the priority list that resulted. Managers in some cases were dying for replacements after the first year. Attrition hit some areas harder than others, and some managers needed to hire. And when they went through the priority list, the managers chose NOBODY. They chose to have NOBODY rather than some of the people on the list. Why? Because they checked them out. And found references citing poor performance; found skills too narrow for the new area; found people who were poor fits for the new jobs. Lots of them.
Based on what I have seen from some of the priority lists, I would say about 10% are decent options who will find something — it will just take time; 30-40% are good in certain types of jobs, if they can find a position with the right skill needs, but it will take even more time to find the right match; 20-30% are poor performers that the government used the reviews to dump; and another 20-40% were probably just poor fits for government in the first place. Working in government is hard for certain types who don’t want to be constrained by the legislative rules and red tape that derive from being paid out of the taxpayer’s purse. HR people are a good example — some private sector people attempt HR in government, discover how many rules legislation and the Charter impose on hiring, and go crazy fast. There’s even a small sub-set of that 20-40% who took the job in the first place thinking it was easy money and that “nobody in government does anything.” Then they started and found out it was more work than they were expecting.
So, does downsizing hurt informed decision-making? Certainly, there are challenges to corporate memory, but it’s the same challenge as people retiring at any time, and oddly enough, we don’t chain them to their desks to prevent that. But one of the key aspects of government is that no one is irreplaceable. Do you lose some info? Yes, but not as a challenge to “informed decision-making”. What you MAY lose, and this is debatable, is efficiency. A senior officer, who has seen 200 TBS submissions and knows 8 different formulations of the same instrument choice, might be more efficient than a newer officer who has only seen 100 and knows 4 formulations. However, the trade-off is that the newer officer may also be more innovative than the near-to-retirement officer, who runs the risk of only doing things the way they’ve always been done before. As for the non-volunteers, most who are worth retaining are not lost — they may be temporarily, and painfully, displaced, but they eventually find homes and ways to continue to contribute their knowledge and expertise.
So, to recap, from a managerial perspective:
NPM adds nothing but academic fluff to the argument;
Development, governance and politics can never be separated, nor should they be in analysis;
Governments are elected to do things they said they would do, not fund the opposite;
Single anecdotes, badly understood, rarely are good examples of anything; and,
Downsizing doesn’t mean decisions are less informed.
Of all the chapters so far, this is the one that I felt most bothered by…there were some nuggets in there that could be really interesting to analyze, this chapter just wasn’t it.
I am doing a series of articles on the book “Rethinking Canadian Aid” (University of Ottawa Press, 2015), and now it’s time for “Chapter 9: Why Aid? Canadian Perception of the Usefulness of Canadian Aid in an Era of Economic Uncertainty” by Dominic H. Silvio.
There is growing evidence in many countries that the state of the economy can have a powerful impact on public attitudes towards anything international, but particularly development assistance. Bad economic news can have negative effects on public attitudes towards aid; and positive news, positive effects (Smillie 2003; Zealand and Howes 2012). Since the beginning of the Canadian aid program in the early 1950s, it has received considerable public support (Lavergne 1989; Smillie 1998a, 1998b, 2003). However, the perception of the usefulness of aid has not been endorsed by Canadians without reservation.
This is a huge, under-analyzed area, and yet it is under-analyzed for a very good reason. As noted in earlier chapters, citizens’ understanding of what “development” actually is, or what constitutes aid, is really quite shallow in many countries, particularly Canada (or as Silvio notes, “Research shows that the public knows virtually nothing about foreign aid and that what they think they know is actually incorrect”). So, studying how much support a citizen has for a given area also brings into question whether their opinion is even significant. If a person has a detailed understanding of an issue, their support or lack of support is likely quite stable; if they have only a superficial understanding, their support levels are more variable and prone to shift with the wind. If you then analyze the variability, is it statistically significant? Or is it just a measure of the force and direction of the wind?
The initial framework by Silvio is solid, suggesting multiple possible measures for public support and attitudes:
Public opinion polling;
Elections and electoral platforms;
Donations to NGOs;
Volunteering with NGOs;
Engagement in public debate;
Consumer behaviour (e.g. fair trade).
Items 2-6 mostly nuance whether the support is “solid” — whether there is a true commitment to the “cause” or it is just “polling-deep” (i.e. if you ask me to choose between butter and guns, I’ll choose butter, but that’s just a polling question, easy to answer, without the consequences of not investing in guns to protect the citizenry). Unfortunately, data on items 2-6 are almost non-existent and inherently unreliable, as they are just as “self-reported” as the polling data.
Based on this data, it is clear that the public is more likely than not to be comfortable with the level of aid currently being provided, and since 2009, has become more so. For instance, in 2010, over half (54.4 percent) considered the level of spending to be about right — meaning Canada should give the same amount of ODA (up 8 points from 2009).
As with the general lack of understanding of aid, these numbers are almost meaningless to policy managers, but for a different reason — if you ask someone whether the government should spend more, wouldn’t it be more telling if they actually knew how much we spend on ANYTHING in government — health care, social assistance, aid — and the relative proportions of each, before asking them if it should go up, down or stay the same? In part, you would also need to know their general views of government (should it be smaller, larger or the same size) and their normal frame for budgeting (nominal values of aid budgets, the relative share of the government budget, percentage of GDP, or share of global spending on development) before interpreting views of a specific sectoral expenditure. Without knowing those other contextual beliefs (and I know that level of detail is unrealistic for most polls), the winds shift outcomes for multiple reasons and you can’t tell why. Support might be “a mile wide and an inch deep”, but old research used to ask about budget levels, and the result was predictable — if you asked first whether aid should go up or down, and then asked respondents to estimate how much of every $1 the government spends goes to aid, the estimates would land somewhere in the 4-5 cent range, i.e. on average, they estimated 5% of the federal government budget was foreign aid. When they learned it was a fraction of a penny, support for increasing aid went up — but not to the 4-5 cent range, even among those who had just said current spending was appropriate while believing it was 5%.
First, public opinion means very little to governmental policy formulation where ODA is concerned, perhaps not surprisingly in light of Canadian public opinion, which consistently gives the government a high performance rating on the aid file, while assigning it the lowest priority among both domestic and foreign policy issues. Second, public opinion, while widely supportive of issues such as development assistance, is not, in reality, very strong.
I think the conclusions are appropriate and reflect the weakness of polling data generally, and aid polls in particular. However, one alternate conclusion or at least one area that isn’t explored and perhaps could be in future, is whether public opinion might not affect ODA levels but might have more influence on the type of aid (humanitarian vs. development, given that humanitarian assistance is easier to sell and understand) or if it might influence recipient choice (given that we have large diaspora communities in Canada).
A tough area but a good analysis of what’s available.
I am doing a series of articles on the book “Rethinking Canadian Aid” (University of Ottawa Press, 2015), and now it’s time for “Chapter 8: Preventing, Substituting or Complementing the Use of Force?” by Justin Massie and Stéphane Roussel.
When did military operations and development assistance policies become integrated foreign policy tools? For what politico-strategic purposes? Despite significant literature on human security, failed and failing states, peacebuilding, humanitarian wars, and even foreign aid as an instrument of foreign policy, the relationship between official development assistance (ODA) and the use of military force as converging tools of statecraft remains under-analyzed.
When I saw the title of the section, I was afraid that the analysis might end up being better suited to an NGO rant than quality academic analysis. Many NGOs wrongly assume that everything a military does has to be about force. Many would add to that by misquoting Henry Kissinger and further characterizing the military as evil, hence all their actions are evil. It’s popular and emotionally compelling, sure, but it’s also facile and false. True academic analysis would look at the actions, note whether they add to or detract from development, and ignore who is doing them. Whether the NGOs admit it or not, there are two pretty compelling aspects to military engagement, particularly when it looks like blue helmets doing peacekeeping: first, the basic reality that development can’t happen if two sides in a conflict are shooting at each other; and second, if a rebel force is shooting women who had the audacity to learn to read, do you want soldiers telling them to stop or development aid workers asking them nicely?
Unfortunately, the approach assumes far too much in depicting the relationship between aid and the military as boiling down to just three things:
aid as a means to prevent future military action and/or violence escalation;
aid as a preferred alternative to the use of force in the attainment of states’ national objectives; and
aid as a complement to military action for similar political objectives.
Really? Those are the alternatives? If you have to force-fit into those three categories, I’m sure they’ll find exactly what they’re assuming. However, how about a different framework:
Aid only to a crisis location, with mainly development goals;
Aid and military assistance together to a crisis location, with multiple goals; and,
Military assistance only.
And then look at each of the three to see if development and/or military goals were achieved.
Back in 2004-05, the OECD Development Assistance Committee dealt with this type of issue in some detail, and without the academic rhetoric or the normative undertones. Basically, the question was very simple — are there any actions done by the military that meet the definition of development assistance that would allow the costs incurred to be counted as ODA? It wasn’t a quiet debate, with lots of countries having very different views about multiple issues. But let’s be clear — the debate wasn’t about whether it needed to be done, or the military’s role in doing it, or even the prerequisite links to effective development assistance, the debate was whether it should or could be counted towards ODA levels. After that, the question moves to “how to make it work together” to make it more effective.
Peacebuilding, in other words, is conceived of as an integrated and coherent agenda involving mutually reinforcing development- and security-related policies. From this perspective, for example, antiterrorist policies and development assistance are inextricably linked.
Actually, that isn’t what it says. It says there are links between the goals of security and the goals of development — that development can help build peace, and peace is the underpinning of development. It doesn’t mean that every element is inextricably linked, nor that it is a fully integrated or coherent agenda. In some cases, it could be as simple as saying, “If you’re delivering humanitarian assistance in a hot zone, maybe the military force can provide cover.” It’s about seeing what links are there and making sure they work together or are at least neutral to each other (policy coherence only in the sense of administrative non-competitiveness), rather than creating links that aren’t there. Relying on the assumption that Canada has pursued self-interested policies — contrary to Swiss demolishing such theories in Chapter 6 (Critique of Rethinking Canadian Aid – Chapter 6 – Mimicry and Motives) — the analysis is pretty linear but pretty interesting for the Marshall Plan, Colombo Plan, and the new “Commonwealth”. It’s a pretty far reach to say foreign aid was focused on conflict prevention, but if you start with only three buckets, any sieve will do, I suppose. Equally, it’s hard to conceive of human rights programming as a “substitute” for military action, not least because effective, sustainable human rights programming requires a zone that is beyond the need for ongoing military action.
Despite the rhetoric, there are three areas worthy of special attention where the links were more forced than real. Axworthy drove the human security agenda and, tied to it, landmine awareness. Both were pretty active areas. Equally, post-9/11, more attention was given to the links between security (mainly the domain of the military) and governance issues (mainly the domain of development). In all three instances, there were links, but none of them were mainstream military or aid policies — they were limited to small portions of the budget, not a complete revamp of either policy. Too bad the analysis amounts to no more than repeating policy statements used to sell aid to the masses, picking and choosing one or two key phrases while ignoring 300 others about development that made no reference to security.