One day in early June 2018, Sara-Jayne Terp, a British data scientist, flew from her home in Oregon to Tampa, Florida, to take part in an exercise that the US military was hosting. On the anniversary of D-Day, the US Special Operations Command was gathering a bunch of experts and soldiers for a thought experiment: If the Normandy invasion were to happen today, what would it look like? The 1944 operation was successful in large part because the Allies had spent almost a year planting fake information, convincing the Germans they were building up troops in places they weren’t, broadcasting sham radio transmissions, even staging dummy tanks at key locations. Now, given today’s tools, how would you deceive the enemy?
Terp spent the day in Florida brainstorming how to fool a modern foe, though she has never seen the results. “I think they instantly classified the report,” she says. But she wound up at dinner with Pablo Breuer—the Navy commander who had invited her—and Marc Rogers, a cybersecurity expert. They started talking about modern deception and, in particular, a new danger: campaigns that use ordinary people to spread false information through social media. The 2016 election had shown that foreign countries had playbooks for this kind of operation. But in the US, there wasn’t much of a response—or defense.
“We got tired of admiring the problem,” Breuer says. “Everybody was looking at it. Nobody was doing anything.”
They discussed creating their own playbook for tracking and stopping misinformation. If someone launched a campaign, they wanted to know how it worked. If people worldwide started reciting the same strange theory, they wanted a sense of who was behind it. As hackers, they were used to taking things apart to see how they worked—using artifacts lurking in code to trace malware back to a Russian crime syndicate, say, or reverse engineering a denial-of-service attack to find a way to defend against it. Misinformation, they realized, could be treated the same way: as a cybersecurity problem.
The trio left Tampa convinced there had to be a way of analyzing misinformation campaigns so researchers could understand how they worked and counter them. Not long after, Terp helped pull together an international group of security experts, academics, journalists, and government researchers to work on what she called “misinfosec.”
Terp knew, of course, there’s one key difference between malware and influence campaigns. A virus propagates through the vulnerable endpoints and nodes of a computer network. But with misinformation, those nodes aren’t machines, they’re humans. “Beliefs can be hacked,” Terp says. If you want to guard against an attack, she thought, you have to identify the weaknesses in the network. In this case, that network was the people of the United States.
So when Breuer invited Terp back to Tampa to hash out their idea six months later, she decided not to fly. On the last day of 2018, she packed up her red Hyundai for a few weeks on the road. She stopped by a New Year’s Eve party in Portland to say goodbye to friends. A storm was coming, so she left well before midnight to make it over the mountains east of the city, skidding through the pass as highway workers closed the roads behind her.
Thus began an odyssey that started with a 3,000-mile drive to Tampa but didn’t stop there. Terp spent almost nine months on the road—roving from Indianapolis to San Francisco to Atlanta to Seattle—developing a playbook for tackling misinformation and promoting it to colleagues in 47 states. Along the way, she also kept her eye out for vulnerabilities in America’s human network.