Can legislation combat the surge of non-consensual deepfake porn? | The Excerpt

Deepfake videos are spreading rapidly, fueled by sophisticated open-source AI models. MIT researchers reveal that most of these videos are non-consensual porn of celebrities like Taylor Swift. But now even high school and middle school students, predominantly girls, are being targeted. UCLA professor John Villasenor joins The Excerpt to parse the legislative and technological efforts to curb this surge of illicit content. We discuss the challenges of regulating AI-generated images and the importance of international cooperation, and offer practical advice for parents to protect their children from cyber sexual violence.

Dana Taylor:

Hello, and welcome to The Excerpt. I’m Dana Taylor. Today is Wednesday, October 30th, 2024, and this is a special episode of The Excerpt.

Deepfake videos are nothing new. What is new is their pervasiveness. Sophisticated open-source AI models are now available everywhere and to everyone. The vast majority of these videos, according to researchers at MIT, are non-consensual porn. Big celebrities like Taylor Swift are popular targets, but now so are high school and middle school students, almost all of them female.

Unfortunately, there’s no way to put the AI-generated deepfake genie back in the bottle. Is there any way to fight this tidal wave of illicit and, in many places, illegal content? Here to help me unpack this complex and quickly evolving story, I’m now joined by John Villasenor, professor of electrical engineering, law, public policy and management at UCLA. John, thanks for joining me on The Excerpt.

John Villasenor:

Thank you very much for having me.

Dana Taylor:

Let’s dive right in, starting with how various government entities are attempting to fight the proliferation of AI-created non-consensual porn. Governor Gavin Newsom of California, where the vast majority of AI-focused companies operate, just signed 18 laws to help regulate the use of AI, with a particular focus on AI-generated child sexual abuse images. In August, San Francisco’s City Attorney filed a lawsuit targeting 16 separate websites that let users create nonconsensual deepfake porn. There’s a whole lot of legislation coming out to try to address the issue. Is it enough, and will it work?

John Villasenor:

It’s still early days. Legislatively, there are a lot of things that people are talking about doing but haven’t actually done. And then there are a few questions: there’s the technology question, will it work from that standpoint? And then there’s the legal question.

I guess I’ll start with the technology question. It is true that, as you said, the technology has made it easier to create deepfake videos, and they can be used for innocuous purposes, like making a documentary about Abraham Lincoln and rendering him very realistically. And they can also be used for horrifying purposes, like some of the ones you mentioned. One of the challenges is that the people who create these can be hard to find, and even if the content is illegal, actually getting it addressed can be really difficult.

And then on the legislative side, of course, these bills have a very important goal, which is to reduce this very problematic use of the technology. But one challenge is that they may be subject to court challenges, not because people are opposed to the goal of addressing this particular use of these technologies, but because, and I haven’t studied all the details of all the laws, there’s a risk in technology regulation that you write a law that does address the problem you’re trying to address, but it also causes collateral damage in other areas and can therefore be open to some legal challenges.

I do know, for example, that the anti-deepfake law relating to political information is already facing a legal challenge.

Dana Taylor:

International cooperation to rein in bad actors in the AI deepfake space is clearly an important aspect of this fight. Are the crime-fighting, technology-focused alliances already in place going to be effective here, or might we need new infrastructure and new agreements?

John Villasenor:

Well, there’s plenty of infrastructure in place for international crime fighting; there’s a decades-long history of that, or longer. I think the challenge is the technology itself, right? Somebody can post something on the internet, and it may not be at all obvious where it was created or where the person is.

So, for example, you could have a person in one country who makes a video, posts it on a server located in another country, while the person depicted in the video is in yet a third country. You could have three countries involved, and it can be difficult to figure out who’s behind these things. And then it can be a bit of a game of whack-a-mole, right? If it gets taken down from one server, somebody can put it up on a different server in a different country, for example.

That can be very hard to chase down, especially at the volume you’re likely to see. You might be able to chase down one of these videos, but if there are hundreds or thousands, all the alliances in the world aren’t necessarily going to be enough to do that at the speed you’d want.

So I think the longer-term solution would have to be automated technologies, hopefully run by the people who operate the servers where these are hosted. Any reputable social media company, for example, would not want this kind of content on its site. So it’s within their control to develop technologies that can detect and automatically filter some of this stuff out, and I think that would go a long way toward mitigating it.

Dana Taylor:

This podcast has a big millennial audience, many of whom might have young children and are rightfully worried about deepfake porn. John, how can parents protect their children from cyber sexual violence, or can they?

John Villasenor:

There’s no perfect measure, but I certainly think it’s good for everybody, and particularly young people these days, to know how to use the internet responsibly and to be careful about the kinds of images they share. It goes without saying that you don’t want anyone sharing explicit images on the internet, but it’s also worth being aware of images that may not cross the line into being explicit yet are close enough that they wouldn’t be hard to modify.

But I think the broader change, and maybe this is naive, is more education about the harms of this kind of content. Some bad actors are never going to stop being bad actors, but there’s some fraction of people who, with education, would perhaps be less likely to create or disseminate these sorts of videos. Again, that’s not a perfect solution, but it could be part of a solution. Education on the one hand, awareness on the other, and then, thirdly, the companies themselves having a better suite of automated tools to detect these things. I think those three things together can really make progress, although it’s not going to be perfect.

Dana Taylor:

WIRED magazine recently published a piece on how deepfake detection tools, including those using AI, are failing in many cases as AI-generated videos get more and more sophisticated. Is this an infinitely repeating game of whack-a-mole, as you said?

John Villasenor:

Yeah, it’s a great point. You’re correct, it’s sort of an arms race, and the defense is always a few steps behind the offense, right? In other words, you make a detection tool that, let’s say, is good at detecting today’s deepfakes, but then tomorrow somebody has a new deepfake creation technology that’s even better and can fool the current detection technology. So you update your detection technology so it can detect the new deepfake technology, but then the deepfake technology evolves again.

So with these detection technologies, you’re always going to be a couple of steps behind. That’s not to say they aren’t worth investing in, because if you can detect 85 or 90%, that’s a lot better than detecting zero, right? So it’s still a good idea to have these detection technologies out there. It’s also important to be realistic and understand that they’re never going to be perfect. They’re always going to be a little bit behind.

And there’s another risk on the other side, the trade-off between false negatives and false positives. Some of these detection technologies can inadvertently flag content that isn’t actually a deepfake and identify it as one, and that’s something you obviously want to avoid as well. I’m thinking really more in the political context and things like that: you don’t want an actual video of a real speech by a politician being flagged as a deepfake if it isn’t. That’s another kind of trade-off that people making detection technology have to be very mindful of.

Dana Taylor:

As you said, deepfake videos are not just about child sexual abuse and revenge porn; they’ve also infiltrated the political world. And as they say, it’s impossible to unsee a video that impacts how voters view a candidate. Are there any tools that might help here?

John Villasenor:

Well, it’s the same type of tools. A deepfake detection tool is also going to be able to detect a deepfake of a politician, so those same tools are useful. The challenge in the political context is that a deepfake can do its damage very quickly.

Let’s suppose somebody puts out a deepfake of a politician saying something they never really said. It might take days before the system kicks into gear, identifies it as a deepfake, and gets it removed. But if by that time 500,000 people have seen it, maybe only 50,000 of them will later read that it was actually a deepfake. You still end up with 450,000 people who saw it, never heard that it was a deepfake, and maybe believed it was real. That’s one of the challenges with deepfakes in the political context.

Dana Taylor:

John, you’ve covered a lot of different aspects of AI, including issues related to law and public policy. Where do you think the conversation about how to rein in deepfake porn is going?

John Villasenor:

I think the conversation is more mature and farther along now than it was even a year ago. Unfortunately, that’s because it’s happened a lot more in the last year or two, so there’s a lot more awareness about it. And one consequence of that awareness is that legislators, policymakers, parents, and young people are, I think, much more aware that this phenomenon is out there than they were a year or so ago. I would like to think that will generate some good results in terms of better detection technologies, better awareness among policymakers, and, I hope, a dramatic reduction in the amount of this content that gets put out there. But I’ve learned with technology not to predict the future, because it’s very hard to predict where technologies are going to go. So I don’t know.

Dana Taylor:

Thanks so much for being on The Excerpt, John.

John Villasenor:

Thank you.

Dana Taylor:

Thanks to our senior producers, Shannon Rae Green and Kaely Monahan, for their production assistance.