USAGM’s Global Reach: More than Meets the Eye
For America’s publicly-funded global media operations, 2018 was a banner year. We not only re-branded from the Broadcasting Board of Governors (BBG) to the U.S. Agency for Global Media (USAGM),[1] we registered a record weekly global audience of 345 million adults. To put this figure in perspective, in the same year, the BBC reached 376 million adults. Accounting for the 33 million of those audience members who hail from the United States, where USAGM is not legally permitted to broadcast, America’s and Britain’s global media operations are basically neck and neck, give or take a couple of million people.[2]
The current size of our global reach is not the result of a change in methodology, but rather the hard work of our storytellers, our fact-checkers, our local partners, and the army of professionals who support our journalists operating in over 100 countries. USAGM’s reach is strong because we are doing our best work to date, including producing award-winning content, breaking stories like the inhumane internment of China’s Uighurs, and supporting best-in-class internet freedom platforms to ensure citizens seeking information aren’t slowed by authoritarian governments. The Agency continues to modernize to ensure its effectiveness as a mission-driven, forward-leaning media enterprise.
When I talk about USAGM’s reach and impact—which I do a lot—I commonly hear the following question: How do you know 345 million adults accessed news and information produced by one of our five publicly-funded news networks, which include the Voice of America (VOA), Radio Free Europe/Radio Liberty (RFE/RL), Radio Free Asia (RFA), Middle East Broadcasting Networks Inc. (MBN), and Radio/TV Martí?
The answer is simple: through extensive and rigorous research. Our 2018 reach figure reflects inputs from 102 surveys conducted over the past five years in the major markets where we operate. If we could afford to conduct a nationally representative survey[3] in each market each year, we would, but short of that we use the latest data and do our best to implement new surveys as efficiently and judiciously as possible. Trust me, we’re not making this up as we go along. This approach, broadly speaking, is comparable to how other publicly-funded operations calculate their global audience, including the BBC.
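One simplified way to picture this kind of aggregation—the details of USAGM's actual model aren't public, so the market names, figures, and function below are purely hypothetical—is to take, for each market, the most recent nationally representative survey on file and sum the weekly audiences it implies:

```python
from dataclasses import dataclass

@dataclass
class Survey:
    market: str
    year: int                # year the field work was completed
    weekly_reach_pct: float  # share of adults reached weekly (0-1)
    adult_population: int    # adults in the market at survey time

# Hypothetical inputs; real markets, shares, and populations differ.
surveys = [
    Survey("Market A", 2016, 0.12, 40_000_000),
    Survey("Market A", 2018, 0.15, 41_000_000),  # newer survey supersedes 2016
    Survey("Market B", 2017, 0.08, 90_000_000),
]

def global_weekly_reach(surveys):
    """Keep only the latest survey per market, then sum the implied audiences."""
    latest = {}
    for s in surveys:
        if s.market not in latest or s.year > latest[s.market].year:
            latest[s.market] = s
    return sum(round(s.weekly_reach_pct * s.adult_population)
               for s in latest.values())

print(global_weekly_reach(surveys))  # 41M * 0.15 + 90M * 0.08 = 13,350,000
```

The key design point this sketch captures is that stale surveys are superseded rather than double-counted: each market contributes exactly one (most recent) estimate to the global figure.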
Yet, weekly reach is just part of the story. Over the past several years, we’ve implemented a comprehensive Impact Model to measure success in informing, engaging and connecting people around the world in support of freedom and democracy. The model considers a range of quantitative and qualitative data—collected through surveys, digital analytics, qualitative research and case studies. In addition to weekly reach, the Impact Model tracks the credibility and uniqueness of our networks’ coverage; audience participation rates and how often consumers share our content with others; and whether our content drives meaningful behaviors and cultural change. We track a range of variables—over 40 indicators—which allow the model to capture impact in each of the unique environments in which we operate.
The primary purpose of our research, of course, is to inform our journalists on what’s working and where. We also conduct studies to improve our understanding of particular markets, focus groups to gauge the resonance of our content, and longitudinal panels to assess effectiveness over time, along with other methods that allow us to better understand our audience's behavior on digital platforms. We also report our top-line findings to Congress, the Office of Management and Budget (OMB), and the American public in order to ensure accountability and transparency. On a weekly basis, I brief interlocutors from across the interagency on our research efforts and commonly hear two things: (1) USAGM’s research is among the best in this space and serves as a model for others; and (2) These programs provide a tremendous return on investment for the American taxpayer.
Having led the U.S. Advisory Commission on Public Diplomacy, which helps oversee research efforts across the American public diplomacy apparatus (for example, see this report), I can say with confidence that USAGM’s research is a leader in this sector and that its networks meaningfully promote the benefits of a free press all around the world. We employ several layers of quality control to ensure every detail—from the wording and translation of surveys to data entry and analysis—surpasses industry standards. These checks include several audits of data to provide quality assurance.
How do I know these checks are working? Because a majority of our surveys get flagged for additional questions or field work at one or more of the checks. If we see irregularities, we redo the field work to ensure the data was collected accurately and authentically reflects the population we’re aiming to better understand. In extreme cases, when we can’t reconcile the data collected with our quality assurance process and when additional field work isn’t possible, we refuse to add the findings to our central repository. Nowhere in my professional life—from academia to the commission—have I seen such rigor and care to protect the legitimacy and integrity of the data collection process.
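The decision logic described above can be reduced to a small sketch. The audits themselves aren't specified in the text, so the two boolean inputs here simply stand in for whatever checks a survey passes or fails:

```python
from enum import Enum, auto

class Verdict(Enum):
    ACCEPT = auto()           # data enters the central repository
    REDO_FIELD_WORK = auto()  # irregularities found, collection re-run
    EXCLUDE = auto()          # irreconcilable and no re-run possible

def review_survey(passed_all_checks: bool, can_redo_field_work: bool) -> Verdict:
    """Triage a survey per the process described: clean data is accepted,
    flagged data triggers new field work when possible, and data that
    cannot be reconciled or re-collected is kept out of the repository."""
    if passed_all_checks:
        return Verdict.ACCEPT
    if can_redo_field_work:
        return Verdict.REDO_FIELD_WORK
    return Verdict.EXCLUDE

print(review_survey(passed_all_checks=False, can_redo_field_work=False))
# Verdict.EXCLUDE
```

The notable property is the default: when quality cannot be established, the data is dropped rather than reported, which biases the published reach figure downward, not upward.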
Dr. Kim Andrew Elliott recently raised some excellent questions about our research, and as someone who feels most at home on a university campus, I welcome the dialogue. Dr. Elliott’s queries regarding false positives, survey design and methodology, and the need for greater openness and collaboration with academics are most welcome. Several of the conclusions he draws, however, are unsubstantiated.
Regarding the question of false positives—situations where a survey respondent incorrectly states that they accessed USAGM content—we use a number of tools in the quality control process to flag any suspect data and are intentionally conservative in how we calculate audience estimates. We are confident that we’re doing as much as anyone in the industry to check for and remove false positives from our datasets. And, as it turns out, concerns over false positives and data falsification are generally overstated, at least according to this thorough review conducted by the Pew Research Center. On the specific question of whether asking about a network’s branded program—as opposed to simply asking about the network’s brand—increases the likelihood of a false positive, I’m afraid Dr. Elliott has it exactly backwards.
Reviewing decades of survey data, we found that asking about media brands alone carries its own risks of false positives. While the ability of consumers to correctly recall the brand of the news they’ve engaged with has dropped precipitously in many markets, accurate recall rates for specific shows and journalists have held steady or improved. We’ve been asking about specific branded shows in certain markets since at least 2003, and do so when research indicates such an approach will result in the most accurate response.[4] We report the unduplicated sum of those who have used our brand or a branded program, not both.
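"Unduplicated sum" here means a respondent who recalls both the network's brand and one of its branded programs is counted once, not twice. A minimal sketch, with a purely hypothetical sample of four respondents:

```python
def unduplicated_reach(responses):
    """Count each respondent at most once if they recalled either the
    network's brand or one of its branded programs (a logical OR)."""
    return sum(1 for r in responses
               if r["recalled_brand"] or r["recalled_program"])

# Hypothetical respondent records; a real survey has many more fields.
responses = [
    {"recalled_brand": True,  "recalled_program": True},   # counted once, not twice
    {"recalled_brand": True,  "recalled_program": False},
    {"recalled_brand": False, "recalled_program": True},
    {"recalled_brand": False, "recalled_program": False},  # not counted
]

print(unduplicated_reach(responses))  # 3
```

Summing the two questions independently would give 4 here (2 brand + 2 program); the union gives 3, which is why the two question types cannot inflate the headline figure by double-counting the same person.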
We’re equally concerned about the likelihood of false negatives, whereby a survey respondent incorrectly responds that they have not accessed USAGM content when in reality they have. In a number of important markets—including in China, Iran, Russia, and Cuba—accessing USAGM content is either illegal or deeply discouraged by local authorities. In these and other sensitive countries, we know that fear of breaking local laws or norms limits the number of respondents who will answer the survey honestly. Our research shows that asking about the program and/or host name in these markets is less politically toxic than simply asking about our brands, therefore resulting in more accurate findings.[5]
In all likelihood, we are actually underreporting our global audience. No survey methodology can eliminate all effects of a repressive environment, which leads to underreporting of the use of content from foreign broadcasters, especially those associated with the United States. We cannot, for example, quantify or count our audiences in closed-off societies like North Korea or members of diaspora communities in countries we don’t survey. Yet, due to hundreds of defector and diaspora interviews, we know that there are considerable and loyal audiences for USAGM’s programming among these communities.
Moving forward, trends in media consumption will require us to iterate—transparently and thoughtfully—on how we measure audiences. Online content, especially on social platforms, circulates in a variety of ways, typically in clips and segments that are often removed from the traditional linear model of program distribution. As news consumption shifts more and more to digital platforms, we will need to adapt our approach, finding ways to integrate survey and digital data in meaningful ways that still preserve our ability to focus on the individual user.
Iterating will require lots of help. We’re undertaking a number of projects to make data more accessible both inside and outside the Agency, including consolidating data from various sources, building custom dashboards, and exploring opportunities for broader data sharing across the U.S. government and with key stakeholders in the academic and private sectors. We recently launched the Research and Analytics Working Group, which convenes experts from across the government and the research sector on a monthly basis to discuss shared challenges and best practices. We are dedicated to expanding the conversation outside of traditional circles and contacts to ensure we’re considering and integrating the most rigorous and applicable new tools and methods into our global research and analysis portfolio. If you are interested in exploring potential synergies between your work and that of USAGM, please don’t hesitate to get in touch. I’m at spowers@usagm.gov.
A number of colleagues contributed to this article, including: Theresa Beatty (USAGM’s Office of Policy and Research), Leah Ermarth (VOA’s Research Director), Betsy Henderson (RFA’s Director of Research, Training and Evaluation), Rami Khater (USAGM’s Chief Technology Officer and Director of Research), Scott Michael (VOA’s Office of Research), Kate Neeper (USAGM’s Office of Policy and Research), Paul Tibbitts (RFE/RL’s Director of Market Insight & Evaluation) and Diana Turecek (MBN’s Research Director).
[1] Incidentally, in 2011 I called for the Agency to change its name in this article for USC’s Public Diplomacy Magazine.
[2] For the most part, USAGM’s networks and the BBC do not compete. Instead, we aim to complement each other’s strengths. Different voices are more credible in different markets, and thus we each invest in language services where our research indicates we may have a comparative advantage over other media providers.
[3] Via stratified random sampling.
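Stratified random sampling, the design named in this footnote, allocates draws to each population stratum (e.g. urban vs. rural) in proportion to its share of the population. The strata, sizes, and helper below are illustrative, not USAGM's actual design:

```python
import random

def stratified_sample(population_by_stratum, n, seed=0):
    """Draw n respondents, sampling within each stratum in proportion
    to that stratum's share of the total population."""
    rng = random.Random(seed)
    total = sum(len(people) for people in population_by_stratum.values())
    sample = []
    for stratum, people in population_by_stratum.items():
        k = round(n * len(people) / total)  # proportional allocation
        sample.extend(rng.sample(people, k))
    return sample

# Hypothetical 80/20 urban/rural population of 1,000 adults.
strata = {
    "urban": [f"u{i}" for i in range(800)],
    "rural": [f"r{i}" for i in range(200)],
}
picked = stratified_sample(strata, n=50)
# 40 urban + 10 rural respondents, matching the 80/20 population shares
```

Proportional allocation like this keeps the sample nationally representative even when some strata would otherwise be hit-or-miss under simple random sampling.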
[4] On the specific example provided—VOA Persian’s “Early News” and “Late News” programs—starting in 2015 we added a number of details to the survey to minimize potential confusion. These included identifying the shows as produced by “broadcasters in different countries” and viewable “on satellite channels, recordings or Internet.” Program names also contained additional information on schedule, on-air talent, or content in order to minimize the chance that respondents could confuse VOA’s programs with domestic TV programs. The most recent USAGM Iran phone survey, conducted July–October 2017, followed the same methodology.
[5] Concern for false negatives drove, in part, the research design of our most recent China survey, completed in December 2017. RFA and VOA’s brands are censored in China, and our content circulates primarily via social platforms and through intermediaries. Thus, asking about specific programs was the most effective way to access accurate recall from the survey respondents. Importantly, we adopted a conservative approach, only reporting when respondents recalled engaging with VOA and RFA branded content.