
Understanding Practitioners’ Perceptions in PD Evaluation

Dec 19, 2019

by Alexander Buhmann and Erich Sommerfeldt

Demands for greater accountability and “value-for-money” are reshaping public diplomacy practice. While the need for evaluation and evidence-based decision-making has grown, recent commentary suggests that the state of the practice is grim: approaches are incoherent, there is little agreement over relevant goals and measures, and a “reporting culture” prevails rather than an “evaluation culture.” To overcome this, offices such as the Research and Evaluation Unit (REU) consult with and assist local PD staff, and recent Department of State reports have recommended structural and organizational changes to advance evaluation practice. Yet to understand the day-to-day context in which any evaluation approach or policy will be interpreted, it is vital to explore practitioners’ existing evaluation-related attitudes and intentions.

We conducted in-depth interviews with 25 public diplomacy practitioners in the U.S. Department of State, exploring their evaluation-related attitudes, their perceptions of emergent standards and norms around evaluation, and their beliefs about their own evaluation abilities. While the State Department maintains that “much of the success of research and impact evaluation depends on leadership that consistently signals that data-driven strategies and tactics are important,” our research sheds light on the practitioner perspectives that determine whether such “consistent signaling” about the importance of data-driven PD will resonate (or not). Based on our findings, we derive six key focus areas for the further development of an “evaluation culture” in public diplomacy:

The Role of Anecdotal and Ad Hoc Evaluations

To a certain extent, evaluation in public diplomacy is mandated: practitioners have no choice but to evaluate programs. But, our data showed, the intention to engage in evaluation is far from an either/or proposition. Practitioners do not always follow the rules or feel normative pressure to comply. They may be required to evaluate, but the sophistication of their evaluations and their conformity to the rules vary considerably with their own attitudes. Further, the perceived goal for which evaluation is undertaken makes all the difference: a superficial focus on accountability drives a culture of mere “reporting,” and of showcasing mostly positive results to stakeholders in Washington. Practitioners tended to hold negative attitudes towards this “accountability approach” because it favors ad hoc evaluation and simplistic output indicators.



At the same time, however, practitioners acknowledged that accountability can also be positive, since it may help them demonstrate to others within the State Department what they achieve. Looking more closely at attitudes towards ad hoc evaluations, our data revealed that such evaluations are in fact appreciated when they come from highly trusted senior practitioners. As recent policy advice has emphasized the value of providing “more contextual data to determine impact,” one way to harness these valued ad hoc evaluations in the future could be to keep them out of core objective measurements but still use them to deliver insights on the contextual factors “around” such core results. This is not to say that contextual data should not be quantitative. Given the state of the practice, however, it can make sense to concentrate quantification first on core objective indicators while focusing the valued insights of “veteran practitioners” on the context of those results, a step for which concrete qualitative tools still need to be established. In other words, anecdotal evaluations should become a structured complement to a data-driven approach rather than simply being “replaced” and “quantified.”

Beware of a “Lose-Lose Attitude” Towards Evaluation

Further, our research unearthed an important paradox that highlights the difficulties of implementing a more “straightforward” evaluation culture in public diplomacy. Within the grant and funding structure of the State Department, practitioners attached a “dual risk” to evaluations: if results are bad, they fear program cuts and negative consequences for their careers; if results are good, they fear a “takeover” of their programs by other actors. Any structured attempt to further a culture of research and evaluation in U.S. public diplomacy would have to take such “lose-lose attitudes” into account and help manage them.

Going Beyond the 5 Percent Rule

Further, our data show that the aspirational funding rule (a minimum of 5 percent of the grant budget should be spent on evaluation), even if it were applied more widely and followed more strictly, may reinforce a culture in which evaluation is seen as a “farce,” because many practitioners regularly work with grants that are too small to break even at the 5 percent level. A solution could be to partially decouple evaluation resources from grant size (e.g., through a parallel funding scheme for evaluation). This would not only allow smaller programs to deliver insights for the department, but could also bolster the “culture of evaluation” by signaling clear commitment to evaluation and placing support for it on a wider footing. It should be acknowledged that the evaluation efforts by the Office of the Undersecretary for Public Diplomacy and the Research and Evaluation Unit to assist specific programs are independently funded, and in some cases the budget for such evaluations is greater than the project itself. This illustrates the commitment of specific offices to enhancing evaluation, but such efforts remain the exception and have so far shaped neither practitioner attitudes towards evaluation writ large nor the perception that assistance with evaluation is readily available.
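
To make the scale problem concrete, consider a purely illustrative calculation (the grant amount below is a hypothetical figure for the sake of example, not one drawn from our interviews). Under the 5 percent rule, a $15,000 grant would set aside

\[
0.05 \times \$15{,}000 = \$750
\]

for evaluation, an amount that in most settings would struggle to cover even a basic survey or a single focus group, which helps explain why a flat percentage can read as a “farce” to practitioners managing small grants.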

Leveraging the Role of Local Staff

Our data further suggest that, at the field level, expectations for Foreign Service officers to perform evaluation can be very low. Extant norms seem to recognize both the workload involved and the high turnover at posts, with people “coming and going” every two or three years. As several interviewees also suggested, this reality should shift attention towards locally employed staff. The question then becomes how to harness the continuity and regional expertise of locally employed staff to feed into mid- and long-term evaluations.

Level of Skills and the Role of Learning-Focused Evaluation

Further, our research largely confirms previous commentary that time, money and lack of skills are the most prominent perceived barriers to evaluation in public diplomacy. A more nuanced look, however, revealed that the magnitude of these barriers plays out quite differently for different types of evaluation. Generally speaking, practitioners believed they had the knowledge and skills to “report” the output indicators that a culture of accountability demands; indeed, they often have no choice but to do so. As noted earlier, this behavior is, rather paradoxically, viewed by practitioners as both negative and positive. The barriers appeared much more influential, however, when discussing learning-focused evaluation and the use of more sophisticated methods and indicators that go beyond reporting outputs. If the State Department is to move beyond a culture of reporting to one of learning, a noticeable shift in the perceived purpose of evaluation must occur. When norms about evaluation shift to “we need to learn” rather than “we need to report results,” there may be potential for accompanying attitudes toward evaluation to shift as well.

A Learning Approach to Evaluation Should Start Locally

Finally, our data showed that, while the learning purposes of evaluation were associated with some clearly negative attitudes, positive attitudes towards learning emerged specifically when insights were seen to serve the local level and to remain under the control of the individual unit or local staff at post. This suggests that a first step towards more learning-based evaluation may be a “nested approach to insights” in which tools are provided at the post level and remain mostly under local control. In other words, our results suggest that any top-down approach to insights and learning will meet significant opposition unless it is first anchored and facilitated at the post level.

Note from the CPD Blog Manager: Read Buhmann and Sommerfeldt’s previous piece on goal-setting in public diplomacy, “Focused But Unclear: A Look at Goal Plurality in PD Practice.”
