The AI Revolution
This feature was included in CPD's 2022–2023 Annual Report.
Emilio Ferrara is a Professor of Communication and Computer Science at the USC Information Sciences Institute and the USC Annenberg School for Communication and Journalism.
Will people get savvier at spotting bot- and AI-generated content as they read and view more of it?
When people first became aware of bots, we observed slow improvement in spotting and disengaging from bot- and AI-generated content. But as AI technology rapidly advanced, the content became much harder to detect. Now when you ask people, “Was this made by a bot?” or “Was this made by ChatGPT?”, accurate detection is essentially a coin toss. And I don’t think that’s going to revert. The technology is advancing much too fast for detection tools to keep up. We are very near a point where detection by the general public will become impossible—a point where you’ll look at a text or image or email and be unable to determine whether it was created by AI or not.
Are there sufficient safeguards built into AI technologies like ChatGPT to prevent the creation of hate speech, conspiracy theories, and other illicit content?
ChatGPT has digested virtually all human knowledge. From Mein Kampf to the Ted Kaczynski manifesto, it’s all in there. But if you ask it to create a white supremacist manifesto, or a speech Hitler would deliver, it’s not going to do it. Why not? Because its engineers have installed guardrails. ChatGPT has a large team working on safeguards, with a vested interest in keeping illicit content off the platform. That said, there are by now hundreds, if not thousands, of clones of ChatGPT, and the people and organizations behind them do not always have the expertise, or the manpower, to install safeguards. Or they might have a different level of ethics. And that’s what keeps me up at night when it comes to AI. It’s not “Skynet” from The Terminator, or evil AI robots conquering the world. It’s malign actors getting this powerful technology to work for their own harmful and destructive purposes.
With the technology evolving so rapidly, how can public diplomacy practitioners keep up?
Experiment with these tools. Find a use case. It doesn’t have to be public diplomacy-related. What I do is sit with my kids at nighttime once a week, pull up ChatGPT, and say “let’s write a story.” They enter a creative prompt and we read the story together. You don’t need to understand how AI is built (leave that to the nerds like me), but it’s important to figure out what it can do. Because we still don’t know exactly what it can do, or exactly how it works. We essentially built a machine, gave it a massive amount of human knowledge, and now we’re slowly discovering its capabilities. We didn’t train it to be a mathematician or musician, and yet it’s solving problems even our smartest mathematicians couldn’t solve, and it’s writing beautiful music. By learning AI’s capabilities, we can find productive, beneficial uses—like writing children’s stories—and we can be on the lookout for harmful uses.