James R. Ostrowski at The New Atlantis:
Any machine learning researcher will admit that there is a critical disconnect between what’s possible in the lab and what’s happening in the field. Take deepfakes. When the technology was first developed, public discourse was saturated with proclamations that it would slacken society’s grip on reality. A 2019 New York Times op-ed, indicative of the general sentiment of the time, was titled “Deepfakes Are Coming. We Can No Longer Believe What We See.” That same week, Politico sounded the alarm with the article “‘Nightmarish’: Lawmakers brace for swarm of 2020 deepfakes.” A Forbes article asked us to imagine a deepfake video of President Trump announcing a nuclear weapons launch against North Korea. These stories, like others in the genre, gloss over questions of practicality.
Chroniclers of disinformation often assume that because a tactic is hypothetically available to an attacker, the attacker is using it. But state-backed actors assigned to carry out influence operations face budgetary and time constraints like everyone else, and must maximize the influence they get for every dollar spent. Tim Hwang, a research fellow at the Center for Security and Emerging Technology, explains in a 2020 report that “propagandists are pragmatists.”
More here.