The fact that in 2023 we’re rolling out mandated minimum cybersecurity practices in critical infrastructure for the first time—we’re one of the last countries to do that.
Building in the red-teaming, the testing, the human-in-the-loop before those models are deployed is a core lesson learned from cybersecurity that we want to apply in the AI space.
In the AI executive order, regulators were tasked to determine where their existing regulations—let’s say for safety—already account for the risks around AI, and where the deltas are. Those first risk assessments have come in, and we’re going to use them both to inform the Hill’s work and to think about how we roll them into the same minimum cybersecurity practices that regulators are rolling out, which we just talked about.
Where are you starting to see threat actors actually use AI in attacks on the US? Are there places where you’re seeing this technology already being deployed by threat actors?
We mentioned voice cloning and deepfakes. We can say we’re seeing some criminal actors—or some countries—experimenting. You saw FraudGPT, which ostensibly advances criminal use cases. That’s about all we can release right now.
You have been more engaged recently on autonomous vehicles. What’s drawn your interest there?
There’s a whole host of risks that we have to look at: the data that’s collected, and patching—should we have checks to ensure bulk software patches are safe before they go out to millions of cars? The administration is working on an effort that will probably include both requests for input and an assessment of the need for new standards. Then, very likely in the near term, we’re looking to come up with a plan to test those standards, ideally in partnership with our European allies. This is something we both care about, and it’s another example of “Let’s get ahead of it.”
You already see with AVs large amounts of data being collected. We’ve seen a few states, for example, that have given approval for Chinese car models to drive around and collect data. We’re taking a look at that and thinking, “Hold on a second—before we allow this kind of data collection, which can potentially happen around military bases and other sensitive sites, we want to look at it more carefully.” We’re interested both in what data is being collected and what we’re comfortable having collected, as well as in what new standards are needed to ensure that American cars and foreign-made cars are built safely. Cars used to be hardware; they’ve shifted to including a great deal of software, and we need to reboot how we think about their security and long-term safety.
You’ve also been working a lot on spectrum—you had a big gathering about 6G standards last year. Where do you see that work going, and what are the next steps?
First, I would say there’s a domestic part and an international part. It comes from a foundational belief that wireless telecommunications is core to our economic growth—it’s the robotics in a smart manufacturing factory, and it’s the smart tractors John Deere was showing when I just went to CES, which use connectivity to adjust irrigation based on the weather. On the CES floor, a John Deere representative noted that integrating AI in agriculture requires changes to US spectrum policies. I said, “I don’t understand—America’s broadband plan deploys to rural sites.” He said, “Yeah, you’re deploying to the farm, but there are acres and acres of fields that have no connectivity. How are we going to do this stuff?” I hadn’t expected to get pinged on spectrum there, on the floor, talking about tractors. But it shows how it’s core to what we want to do—the huge promise of drones monitoring electricity infrastructure after storms and determining which lines are down, making maintenance far more efficient—all of that needs connectivity.