By Dusty Weis

Lead Balloon Ep. 41 - Deepfakes: How Communicators Must Prepare Now for this Reputation Threat

In recent weeks, deepfake disinformation attacks have exploded, and experts say a new era of strategic communication tactics must arise.


It is now only a matter of time until someone attacks your reputation with a deepfake, according to the experts.


So-called deepfake technology, which can synthesize audio and video of things that never happened, has arrived en masse.

And, while these tools for generating potential disinformation were previously only available to trained experts and big institutions, recent advances in artificial intelligence technology mean that ANYONE can create fake videos... nearly instantly, with little to no training, for FREE.


Accordingly, experts like Dr. Hany Farid from UC Berkeley say deepfakes are suddenly being used to wage disinformation campaigns every day.


So in this episode, Dr. Farid cites some examples of how deepfake technology is being used to attack important people and institutions, and lays out strategies that strategic communicators can use to try and protect their clients and employers.


We talk to Francesca Panetta and Halsey Burgund, the Emmy-winning film directors who used a viral deepfake of President Richard Nixon to try to warn society about the growing threat, and learn some shocking facts about the technology.


And we meet Noelle Martin, a lawyer, researcher and activist from Australia whose reputation has been targeted with deepfake pornography. Noelle tells us about her efforts to create legal recourse for the non-consenting victims of deepfake porn and her battle to reclaim her reputation.


Because deepfake technology no longer poses a reputation threat "sometime in the next few years."


It poses a threat RIGHT NOW.


Subscribe to the Podcamp Media e-newsletter for more updates on the world of strategic communication.



Transcript:


Dusty Weis:

Last year as Russian bombs fell on Ukraine and the world watched in shock as Ukrainian fighters mounted a brave resistance to the invasion, a video surfaced online of Ukrainian President Volodymyr Zelenskyy appearing to urge his troops to lay down arms.


It was a forgery, a so-called deepfake, a video completely synthesized by computers using artificial intelligence deep learning algorithms. For Francesca Panetta, director of the University of the Arts London Storytelling Institute, seeing that AI forgery deployed as a weapon of disinformation marked the realization of a threat that she and her creative partner have been trying to warn society about for years.


Francesca Panetta:

By destabilizing our landscape of truth, it means that you can plausibly deny everything. That is what we are more scared about than the deepfakes themselves.


Dusty Weis:

Because even in just the last several weeks, deepfake technology has taken massive steps forward in terms of its power and its accessibility. For strategic communicators in public relations and marketing roles, we are going to have to start responding to this threat this year, according to Dr. Hany Farid at UC Berkeley.


Dr. Hany Farid:

Inside of and outside of your company, you've got to control reputation. People are going to try to damage your reputation. Here's the thing, everybody today is vulnerable.


Dusty Weis:

In this episode, a primer on the brave new post-truth world into which we are all being dragged as strategic communicators and as human beings. We explore the institutional threats posed by deepfake tech, case studies from the front lines of Eastern Europe, and how to start planning for your first deepfake attack, because they are coming. And we talk to one young woman who was dragged into the deepfake fight by the most despicable sort of assault on her reputation.


Noelle Martin:

It absolutely destroys your life. It's a life sentence. It's a permanent misrepresentation that's publicly accessible for everyone forever in perpetuity.


Dusty Weis:

I'm Dusty Weis from PodCamp Media. This is Lead Balloon, a podcast about compelling tales from the world of PR, marketing and branding, told by the well-meaning communications professionals who live them.


Thanks for tuning in. Make sure you're subscribed in Apple Podcasts for these monthly tales about communication, either in a historic context or sometimes about how the business is changing as we speak.


Sometimes when I do this show, the end result surprises me. I know I've alluded to it in the past, but before I got into PR and marketing, I was a news reporter. When I dig into a story, I feel compelled to follow it wherever it takes me. This episode is by far the most shocking example of this phenomenon. I had not planned to do a whole episode on deepfakes until I talked to Fran Panetta and Halsey Burgund for last month's episode about the greatest speech never given.


Quick recap: Richard Nixon's speechwriter had to write remarks to be delivered in the event that the Apollo 11 moon landing ended in disaster. It is an incredible piece of writing that was lost to history for 30 years, was rediscovered, and is now internet famous. In 2019, Fran and Halsey partnered with MIT to create a short film and installation centered around a deepfake version of Richard Nixon delivering this speech that he never actually delivered. The project won an Emmy and went viral on the internet, but Fran and Halsey say the whole point of the thing was to warn people that deepfake technology posed a rising threat to society.


Francesca Panetta:

We wanted to create an art installation and video that showed the public exactly what was possible using the most sophisticated technology at the time.


Halsey Burgund:

We call it a complete deepfake because we manipulated both the audio and the video. The video is actually a segment from Nixon's resignation speech; we painstakingly searched many, many of his speeches to find the one that had the right essence and tonality and just sort of somberness to it that he might have were he to deliver this very somber speech. That's called the target video. That was what was manipulated visually by an artificial intelligence model to have his lips, and everything associated with the lips, move to voice different words: the words of the speech that we wanted him to deliver instead of the words that were actually delivered by him during his resignation speech.


The visual part was done by one AI model that could really just change the lips, leave everything else the same so that it would be as authentic as possible. All of his head motions, all of his looking up at the camera, looking down, all that kind of stuff remained the same, just the lips were changed. Then we had to produce a synthetic version of the audio as well to make a voice that sounds like Nixon come out of the lips that were synced to that voice. To do that, we needed a different artificial intelligence model and we needed to train that one painstakingly on clips of Nixon. We gathered lots of Nixon speeches and sliced those into short clips. Then we had an actor voice those same clips.


Then we would send the pair of data into the artificial intelligence model so it would know the actor sounds like saying this thing, and here's what Nixon sounds like saying the same thing. Then it can learn how to make the translation from the actor to Nixon's voice.


We did that for hundreds and hundreds of clips. Then, after the model trained itself, we were able to have the actor say the speech that we wanted said, i.e., the In Event of Moon Disaster speech, and the model would output the same speech with the same performative qualities that our actor gave, but in Nixon's voice.
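For readers who want a concrete picture of the paired-data training Halsey describes, here is a minimal, hypothetical sketch in Python/PyTorch. It is not the In Event of Moon Disaster team's actual pipeline; the model architecture, the spectrogram shapes, and the random tensors standing in for the aligned actor/Nixon clips are all assumptions, meant only to illustrate the idea of learning a mapping from one voice to another from paired examples.

```python
# Hypothetical sketch of paired voice-conversion training: the model sees
# (actor clip, target-speaker clip) pairs of the same words and learns a
# mapping from one voice to the other. Random tensors stand in for real
# spectrograms here; actual systems use far more elaborate architectures.
import torch
import torch.nn as nn

class VoiceConverter(nn.Module):
    """Maps source-speaker spectrogram frames to target-speaker frames."""
    def __init__(self, n_mels: int = 80):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_mels, 256, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(256, 256, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(256, n_mels, kernel_size=5, padding=2),
        )

    def forward(self, source_spec: torch.Tensor) -> torch.Tensor:
        return self.net(source_spec)

# Placeholder "dataset": aligned (actor, target) spectrogram pairs.
pairs = [(torch.randn(1, 80, 200), torch.randn(1, 80, 200)) for _ in range(8)]

model = VoiceConverter()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for epoch in range(3):
    for actor_spec, target_spec in pairs:
        prediction = model(actor_spec)           # the actor's delivery...
        loss = loss_fn(prediction, target_spec)  # ...pushed toward the target voice
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# At inference time, the actor records the new speech and the trained model
# re-voices it in the target speaker's timbre.
```

The key point of the sketch is the pairing itself: because the model always sees the same words in both voices, it only has to learn the voice-to-voice translation, not the content.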


Francesca Panetta:

When we first made this project, we took it to the International Documentary Festival in Amsterdam. We built a 1960s living room setting and played this video on an old TV set. It was really interesting watching audience members come by and see this video. To be honest, some of them couldn't even believe that it was a fake. We had to be really careful about the messaging around the project, both as an installation and then when we later put it up online as a website, so that our messaging was very direct and said: this is a deepfake, and we're doing this to try and show you what's possible.


Dusty Weis:

But fast forward four years to present day, and the deepfake tools that Fran and Halsey used in their Emmy-winning project are suddenly easily accessible to the masses. We've all heard the news stories about the disruptions caused by ChatGPT in recent months, but what this flood of new open source AI tools means from a deepfake perspective, Halsey says, is that what took him, Fran, and a team of experts months to accomplish with special software in 2019 can now be done by someone with no training, for free, in minutes, on the right website.


Halsey Burgund:

It's really, really incredible. It is vastly easier now than it was back in 2019 when we were doing this work, and it's going to be very easy in not too many years, probably not too many months, for people to create anything. Say I want to see a video of Dusty Weis saying that he believes the earth is flat and that he's going on an expedition to fall off the edge. It's kind of a crazy new world out there. You really don't want to prevent totally legitimate uses of this technology as well. There are people in the world who want to get messages out and want to be heard, who cannot, for their own safety, let it be known that it is them saying something, or it is them doing something. Those are wonderful use cases, which, if we're legislating or if we're using technology, the nuance has to be built in. That's really, really difficult. It's a thorny problem. It's not just "ban deepfakes, ban synthetic media." That's probably not possible and also not prudent.

Dusty Weis:

If we now live in a world where anyone can log onto the internet and generate convincing audiovisual evidence of something that isn't true, where does that leave us as strategic communicators when we have the job of protecting the reputations of important people and institutions? Dr. Hany Farid is a professor at UC Berkeley with a joint appointment in computer science and the School of Information. Put that Venn diagram together and it makes him a leading authority on deepfakes and a frequent guest on CNN, NPR, all the biggies. He says it's time for us strategic communicators to get to work. Like right now. This is happening.


Dr. Hany Farid:

Deepfakes, it's first important to understand, are part of a continuum of being able to manipulate and create digital media. Think Stalin airbrushing people out of photos, to the modern age of Photoshop, where people put one person's head on another person's body. Fast forward a few decades, and now machine learning and artificial intelligence are being used to both manipulate and fully synthesize digital content. Let me give you some examples.


You can go to a website called ThisPersonDoesNotExist.com, and it will, true to its URL name, just show you an image of a person that doesn't exist. It was fully generated by a machine learning algorithm. You can go to another website and upload 60 seconds of you, Dusty, speaking, and then I can type whatever I want and it will synthesize audio of you saying whatever I want you to say. Go download another app and you will be able to then create a video of you saying exactly those things.


Image generation, video generation, audio generation: I can make people say and do things they never did. All of that is being powered by machine learning and artificial intelligence. The important part of that is that what used to be in the hands of the few people who had a lot of expertise in manipulating digital media, a Hollywood studio, state-sponsored actors, is now in the hands of the many. We've automated it. We've democratized access to very sophisticated technology that can have many entertaining and interesting applications and many nefarious applications.


Dusty Weis:

All of this could be used for creative and benign and fun things, or it can be used to destroy people and institutions.


Dr. Hany Farid:

That's exactly right. Let's talk about where we are seeing the harms. First of all, there are many great deepfakes. If you haven't seen it, go to YouTube and search for Nick Cage in the Sound of Music. It's fantastic.


Dusty Weis:

I'm a fan. I'm a fan.


Dr. Hany Farid:

It's Nick Cage twirling on the mountaintop singing. It's the most brilliant thing I've ever seen on the internet. Go over to TikTok and look for Tom Cruise Deepfake, and you will see what looks to be Tom Cruise. It is incredibly funny and clever.


Deepfake Tom Cruise:

Hey, listen up sports and TikTok fans. If you like what you're seeing, just wait till what's coming next.


Dr. Hany Farid:

But you are seeing deepfakes being used to commit large- to small-scale fraud. You are seeing deepfakes being used to push state-sponsored or institutional disinformation campaigns. We have the technology today to create a video of the CEO of a Fortune 500 company saying, seemingly in a private conversation, our profits are going to be down 20%. I leak that on Twitter, it goes viral in what, 30, 60 seconds? How much can I move a market?


Dusty Weis:

How much damage can you do to a stock price?


Dr. Hany Farid:

Right. Here's what's important to understand, since we're on this topic: it's not just that we now have technology in the hands of the many that can create and distort digital media, it's that we can also distribute it to billions of people around the world instantaneously through social media. The half-life of a social media post is measured in minutes, not hours or days or weeks, which means you can do a lot of damage. You and I both know that once something is on the internet, it never really comes down and you never set the record straight.


Dusty Weis:

Right. The internet is forever.


Dr. Hany Farid:

The internet is forever for better or worse. We are seeing real harms in this technology. I think there are opportunities, but I think we can't ignore the harms.


Dusty Weis:

Now, let's explore a very specific case study here. I'll preface this by saying about a year ago on this show, we spoke with a group of Ukrainian agency creatives about how they were waging what they called an information war against the Russians who invaded their homeland. What we learned, what we know about warfare and international relations in the 21st century is that every war is now going to be fought on the digital information front. How have we seen deepfakes in this new technology then used as a weapon in this type of warfare?


Dr. Hany Farid:

Yeah, that's a great question. I'm glad you brought this up because I think this has been one of the most troubling and interesting nefarious uses of deepfakes. First of all, I completely agree with you that there is no future conflict that will not have a cyber component, whether that's cyber attacks or disinformation campaigns. You have absolutely seen that in the Russian invasion of Ukraine.


What's so interesting about this case is that in the early days of the Russian invasion, President Zelenskyy warned us: "The Russians are going to create a deepfake of me, and it's going to say, 'We surrender. Put your arms down.' I guarantee you I will not do that." We can now call this pre-bunking. You get out ahead of it. What happened is, a couple months later, there was a video, in fact pretty crude but not terrible, of what seemed to be President Zelenskyy at a press conference saying, "We give up. Lay your arms down." That of course went viral on social media, and it showed up on national television.


Dusty Weis:

Now, let's examine that for a moment because I think that it's fascinating that you bring up this notion of pre-bunking here, and the fact that Zelenskyy and presumably his communications advisors thought ahead of time, "This is something that's going to happen. We need to be prepared for this. Let's get out ahead of it." What sort of a scenario would we be looking at had they not had that level of foresight?


Dr. Hany Farid:

It's a really good question. That's sort of an unknown. One of the interesting things here is we only know about the cases we discover. This thing went online, it showed up on television, and it probably had a minimal impact because it had been pre-bunked. It wasn't particularly good. Honestly, I think the Ukrainians probably knew better. But what about all the cases we don't discover?


The reality is we don't know the denominator in this equation. That's very tricky, but you could certainly imagine, in the fog of war, let's say that this was released at a more opportune moment during a heated battle and it was just enough to create uncertainty in an already very uncertain time. Could it have had an impact? Sure. Of course it could have.


By the way, you could play both sides of this. Somebody could release a video of President Biden saying, "I've launched nuclear weapons against North Korea." How long before the North Korean dictator just panics and hits the button? Do I think that's likely? No. But it's also not out of the question. That should really, really worry us.


Here's the other thing too, is that we are in the early days of this technology. This technology is only getting better. It's getting better at a very rapid clip. Every few months you see advances in the technology. What was a fairly crude deepfake of President Zelenskyy in the early days of the Russian invasion, at the next invasion, or during this current one as we keep going, who knows how sophisticated they can get?


Dusty Weis:

Well, let's stay in Ukraine then, because we saw another deployment of this technology that you brought to my attention that I think illustrates just that, how rapidly it's evolving. That was the deepfake of Kyiv Mayor Vitali Klitschko that was used in phone calls with the mayors of Berlin, Madrid and Vienna, and, as you said, we don't know the denominator, so we don't even know what other mayors they might have talked to here. But what happened?


Dr. Hany Farid:

This one is interesting because the previous version I was telling you about was an offline video. Somebody recorded it, posted it on social media and national television.


Dusty Weis:

It's not interactive. It's not someone that you can talk to or answer questions with.


Dr. Hany Farid:

That's right. Maybe we've grown somewhat suspicious of the images and the videos we see on YouTube and TikTok because we know they can be manipulated. But in this case, what happened is an imposter was on a Zoom call with, as you said, the mayors of Madrid and, separately, Berlin and Vienna, and had somewhere between a 15 and 20 minute call. It looked and sounded like they were talking to the mayor of Kyiv, but they weren't. It was an imposter.


What was amazing about that was live, real-time deepfake generation with reasonably sophisticated people, mayors of major cities in Western Europe, and they didn't know. It looks like it was probably a prank. Nobody really has figured out what was going on here. I don't think it did any real lasting harm except for the following. You better believe the next time the mayor of Berlin gets on a Zoom call, they're going to be suspicious of who they're talking to, and with good reason. Now, not only do I have to worry about everything I read online, everything I see online, now I have to worry when I'm talking with somebody live over a Zoom call: how do you know that person is real?


Dusty Weis:

Right. How do I know I'm talking to Dr. Hany Farid right now?


Dr. Hany Farid:

Yeah, maybe he's a lazy bum and this is one of his grad students who just, he couldn't be bothered to come to work today. It's going to sit with you a little bit, isn't it Dusty?


Dusty Weis:

Okay. Presuming then that this is the real Dr. Hany Farid: you've alluded to just how quickly this technology is evolving and how easy it is to access. But so much of the conversation around deepfakes to date has been this sort of "Wow, look at what someone has just used it for" sort of thing. How far away is the day, really, when it becomes commonplace and we just see deepfakes in the news every day?


Dr. Hany Farid:

If we were having this interview a month ago, I would've said, "You're absolutely right, Dusty. We're not seeing it every day." I don't think that's true anymore. Let me give you a couple of examples.


First of all, you are starting to see small-scale fraud where people are calling parents, grandparents, spouses, and saying, "I'm in trouble. I've been arrested. I need money," then handing the phone over to what they claim is a lawyer or police officer, and the whole thing is a scam. You're starting to see very sophisticated voice scams on the phones. But here's the thing: just in the last two weeks, there has not been a single day that has gone by where I've not seen a fake image, or a fake audio, or a fake video go viral on social media. Let me just give you a few examples.


After the rumor of former President Trump's arrest, images showed up on Twitter with millions of views, which looked pretty convincing, of him being escorted by the police. Just in the last week, every single day I've been contacted by a reporter with an audio recording purportedly of President Biden on a hot mic saying something inappropriate about Trump, about Silicon Valley Bank, about China. Go down the list: every single day, something that sounds like President Biden.


Just yesterday, there was a fake image of what looked to be Putin kneeling before Xi during the visit in Moscow. Another video of what looked to be Bill Gates claiming that Covid was a hoax and the vaccine was created so you can put tracking devices in people.


This is now happening on a daily basis. The reason is that the technology, the companies that are now allowing you to clone voices, synthesize images, create videos, are making their services publicly available, and freely available because there is a race to monetize generative AI. There is not a lot of thought being put into how these are being misused.


I think it's here and it is going to start accelerating from here. I think we are entering this inflection point. Look, the 2024 election is around the corner. I will be stunned if you do not see major examples of people creating fake content of candidates and of Biden and of Harris and so on and so forth.


Dusty Weis:

Dr. Farid, the next question that I have for you is what should we be doing as a society to limit the threat that deepfake technology poses? But the way that you lay it out on the table just like that, it sounds like there's not really anything that we can do. We're just going to be at the mercy of this thing until society adjusts its expectations for the facts that we consume.


Dr. Hany Farid:

There's two parts to this answer. One part is, what can we do? The other question is, what will we do? Let's talk about the can part first. On the can part, the companies, the OpenAIs of the world, the Midjourneys, the ElevenLabs, the companies that are creating the synthetic content, could put guardrails on their technology. They could say, "Look, you want to synthesize a voice, fine, but we're only going to let you synthesize your voice." Or, "If you are going to synthesize somebody else's voice, we're going to put some guardrails in place by watermarking the content, making sure we know who you are and what you are creating."


There's another aspect of what we could do, which is to also authenticate real content. One guardrail you want to put on all the synthetic media, this generative AI, is: how do you protect it and make sure we can identify it upstream? But the other is, for example, when somebody really does take a video of a president or a candidate saying something inappropriate, how do I trust it? There are technologies. Let me name one of them. It's called the C2PA, the Coalition for Content Provenance and Authenticity. For full disclosure, I'm on their steering committee. It is a not-for-profit, multi-stakeholder group, Adobe, Microsoft, Sony, BBC, hundreds of companies, that is building a protocol that would allow that. When you pick up your phone, it can determine who you are, where you are, when you are there, and cryptographically sign all of the information that you recorded, then place that onto an immutable ledger, so that downstream, if I record a video of police violence, human rights violations, the president saying something inappropriate, we have some guarantees that it's real.
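To make that sign-and-verify idea concrete, here is a minimal, hypothetical sketch in Python. It is not the C2PA specification or its actual data format; the function names, metadata fields, and the use of an Ed25519 key from the third-party cryptography library are illustrative assumptions. It only shows the underlying pattern Dr. Farid describes: hash the captured media, bind it to capture metadata, sign it at the source, and check it downstream.

```python
# Hypothetical illustration of the provenance idea behind C2PA-style signing.
# NOT the C2PA spec: it only sketches "hash + metadata, signed at capture,
# verified downstream."
import hashlib
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()  # stands in for a capture device's key

def sign_capture(media_bytes: bytes, metadata: dict) -> dict:
    """Create a signed provenance record for a recording."""
    record = {
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "metadata": metadata,  # e.g. who recorded it, when, where
    }
    payload = json.dumps(record, sort_keys=True).encode()
    return {"record": record, "signature": device_key.sign(payload).hex()}

def verify_capture(media_bytes: bytes, signed: dict, public_key) -> bool:
    """Check the media matches the record and the signature is genuine."""
    record = signed["record"]
    if hashlib.sha256(media_bytes).hexdigest() != record["media_sha256"]:
        return False  # media was altered after it was signed
    payload = json.dumps(record, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(signed["signature"]), payload)
        return True
    except InvalidSignature:
        return False

# Usage: sign at capture time, publish the record (e.g. to a ledger), verify later.
clip = b"...raw audio or video bytes..."
signed = sign_capture(clip, {"creator": "comms team", "captured_at": "2023-03-30T10:00Z"})
print(verify_capture(clip, signed, device_key.public_key()))                 # True
print(verify_capture(clip + b"tampered", signed, device_key.public_key()))  # False
```

In a real provenance system the signed record would also be anchored somewhere tamper-evident, which is the "immutable ledger" step Dr. Farid mentions; the sketch above only covers the signing and checking.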


Dusty Weis:

It's verifiable.


Dr. Hany Farid:

Verifiable. Those things can be done. Now what will be done? That's the more interesting question.


What will be done? Probably nothing. The reason is that there are billions of dollars flowing into the generative AI space. Companies are tripping over themselves to be first to dominate in this field. We are making the same mistakes as the last 20 years. That is we are moving fast and breaking things.


Meanwhile, our government is nowhere on regulation. They don't even know what AI is. They're still trying to fix the problems from the last 20 years and we are lost. I'm worried that we are moving so fast here without precautions.


Here's the thing, is that sometimes we develop technology and there are unintended consequences. There are bugs in the system. This is not unintended. This is a feature. You can look at this technology today and say, this is exactly how it's going to be used.


Dusty Weis:

This is what it was designed for.


Dr. Hany Farid:

This is what it's designed for. The fact that these companies, even the good ones, are moving so fast without the right guardrails, is reckless. I don't think it is hyperbolic to say that these are potentially existential threats to society. If you can't trust the outcome of an election because people are creating fake images and video of voter fraud, where are we as a society?


I want to emphasize one more thing here too, is that this is not a uniquely deepfake problem because if I did not have the delivery mechanism, social media, this problem would be more contained. You have to couple the ability to create content, distribute it to the world simultaneously, and then have the platform's own algorithms amplify this content because it's driving engagement. That's the ballgame. Produce, distribute, amplify, consume. Now we're in for a mess.


Dusty Weis:

Bringing this all back full circle then to our audience of who I like to call well-meaning strategic communicators, the people who are tasked with protecting important people and institutions, and in some cases, democracy itself from misinformation, as we strive to keep the public informed and protect the reputations of those people and institutions, what are we left with right now? What is the best path to chart for a strategic communicator in the years ahead?


Dr. Hany Farid:

Yeah, I think today, right now, the one technology that you could deploy is what I was mentioning earlier, which is the C2PA work: if you want to protect your CEO, what you do is you say, "Every time he or she speaks publicly, we are going to record them with a C2PA-compliant device." It's just software that runs on your phone, nothing fancy. Every single public statement will be recorded, cryptographically signed and put on a centralized ledger where you can determine what they said. If you don't see it there, you should be suspicious of it. That's a way of protecting high-profile users.


There's other problems you have to worry about. People are using deepfakes to interview for jobs. People are attacking financial institutions with phone calls purportedly from high-value customers saying, "I'm locked out of my account. Can you please reset my password?" You've got to figure out how to control the validity of information coming inside of and outside of your company. You've got to control reputation. People are going to try to damage your reputation because they don't like you, or your CEO said something they don't like, and they are going to create fake content of them.


Here's the thing: everybody today is vulnerable, because every single CEO at a Fortune 500 company, I guarantee you, has public images of them, public videos of them, and public audio of them. Once that is out on the internet, you now have a vulnerability.

Today, the only real way to protect against that, because it is a big internet and it moves very, very fast, is to authenticate the things that you can control and message out that if this does not have the C2PA compliance signature, the stamp of approval, the security, well then you should be suspicious of it.


By the way, I don't think that solves all your problems because the reality is there's a lot of people out there who say, "What do I care about cryptographic signatures? It's fake because I know what I know," because when we do enter a world where anything you read online, any audio recording, any video, any image, any live Zoom call can be fake, nothing has to be real anymore. You get to deny reality.


Francesca Panetta:

When we were building the project, we spoke to a fantastic scholar and lawyer, Danielle Citron, who talked about the liar's dividend. Her concern was that by destabilizing our landscape of truth, it means that you can plausibly deny everything. That is what we are more scared about than the deepfakes themselves.


Dusty Weis:

Fran Panetta and her co-director of the MIT Nixon Deepfake, Halsey Burgund, say they're currently working on a new installation aimed at raising awareness of the deepfake threat. This one will put willing participants into the video itself, which will be generated in seconds on site to demonstrate just how far the technology has evolved. They're currently seeking partners and financial support for the project, but Fran also said something that stunned me literally speechless. Let me just preface this clip by saying, I've been doing this for 20 years now. I can count on one hand the number of times that someone said something that surprised me to the point where I just had no words.


Francesca Panetta:

Also, we need to remember that it's everyday women who are the victims of most of the deepfakes out there. 94% or maybe even more of the deepfakes that are created are revenge porn videos of innocent women. It's quite easy to-


Dusty Weis:

94%?


Francesca Panetta:

Yeah. It's really easy to think that this is all about big names, celebrities, politicians, democracy. These are really big problems. But actually, it's a lot of innocent women, on a very, very large scale, over 80,000 deepfakes out there, who are the victims of this technology. I think it's really worth remembering how it is everyday people, and women, who are very vulnerable to this.


Dusty Weis:

That just makes my skin crawl and my blood boil. I hadn't even fathomed, and maybe it's because I'm not... That's disgusting. Wow.


Halsey Burgund:

Yeah. There's a marketplace for these on the dark web. It's not really that hard to do, and not expensive.


Dusty Weis:

Remember what I said about this story taking on a life of its own? I was caught completely off guard by the fact that this deepfake technology has been used almost exclusively to victimize women up to this point in history, that while we may worry about existential threats to society and reputational threats to important institutions, for thousands of women, this isn't hypothetical. They have been victimized in the most invasive way imaginable.


Noelle Martin:

There were dozens upon dozens of pornographic sites that had my images on them, my details on them, and doctored pornographic images of me on them.


Dusty Weis:

Coming up after the break, we meet Noelle Martin, a young lawyer from Australia who's using her firsthand experience as a victim of maliciously wielded deepfake technology to fight for global legal reform and justice. That's coming up in just a minute here on Lead Balloon.


This is Lead Balloon. I'm Dusty Weis. In the world of professional video gaming online, the streaming platform, Twitch, has made overnight sensations out of dozens of young women, and more broadly, many other gamers from all demographic categories who stream themselves playing games and have amassed, in some cases, tens of millions of followers, an undeniable celebrity status in that world.


Well, just about two months ago, a headline making controversy erupted in the Twitch community over deepfake pornography featuring the likenesses of non-consenting women streamers. Using now commonly available deepfake tech, some lowlifes on the internet had synthesized pornographic videos of a handful of these streamers appearing to take part in all manner of explicit acts.


In the fallout of these revelations, the women found themselves bombarded by direct messages, screenshots and abuse stemming from the deepfake porn. One prominent victim, known as QT Cinderella, angrily took to her feed in tears and put the entire internet on blast for a culture that seems specifically engineered to victimize women.


QT Cinderella:

That it should not be a part of my job to have to pay money to get this stuff taken down. It should not be part of my job to be harassed, to see pictures of me nude spread around. It should not be something that is found on the internet. The fact that it is, is exhausting.


Dusty Weis:

But this is just one high profile example. As we already discussed, well, this kind of thing is becoming increasingly common. What gets lost in the conversations about streamers, celebrities, institutions, and even democracy itself, is that the vast, vast majority of deepfake victims are actually just regular people, women specifically, and that deepfake porn of non-consenting women accounts for about 19 out of every 20 deepfakes on the internet.


Noelle Martin:

This is the thing about the issue of deepfakes. Since it was popularized around 2017, when the news broke about the Reddit user with the username "deepfakes" who had been creating fake pornographic videos of celebrity women, this tool, this technology, has been and has continued to be predominantly used as a weapon to abuse women: to create fabricated pornographic videos of women, sometimes to silence, to humiliate, to intimidate journalists, activists, ordinary women, celebrity women.


Dusty Weis:

Noelle Martin is a lawyer and activist from Perth, Australia, who about 10 years ago discovered that her face had been photoshopped into pornographic images and distributed across porn sites. Since making a stand and speaking out against this form of abuse, she's been subjected to increasingly obscene and increasingly complex forms of deepfake pornography, but she hasn't let that silence her. Not only has she been a part of efforts to provide legal recourse to the victims of deepfake porn, she's become an internationally recognized speaker on the subject.


Noelle Martin:

The numbers, the statistics, are shocking so far, but I think that actually doesn't tell the fuller picture, because there are times where women might not know the content is out there of them. There's a lot of that happening on top of what we know today.


Dusty Weis:

Now, you became engaged in this against your will, essentially. The story that you have to tell, what you have been subjected to, it's truly horrifying. How did you discover that you were a victim of deepfake pornography?


Noelle Martin:

Well, it started off around 10 years ago. I decided to Google myself when I was 18, and I saw that there were dozens upon dozens of pornographic sites that had my images on them, my details on them, and doctored pornographic images of me on them. Over time, that only escalated in nature and in gravity and how graphic they were. I ended up speaking up publicly and fighting for law reform here in Australia. That only, as you said, put a target on my back and has led the perpetrators to create deepfakes of me. They created one deepfake of me later on, in around 2018.


Dusty Weis:

As the technology evolved and became more easily accessible.


Noelle Martin:

Yes. They created a video that was verified to be a deepfake, depicting me having sexual intercourse. The title of the video had my full name in it. Then there was another video that they had created as well. That was technically what they would consider a cheap fake, a more crude version of a deepfake. It's a video falsely depicting me performing oral sex on someone. They escalated the abuse over time.


Dusty Weis:

I'm the first person to cite the internet as a real toxic waste pit of deliberately abusive sociopaths. But other than the fact that you just spoke out against this form of abuse, why were these people targeting you for this? This is... It's just hair-raisingly horrifying.


Noelle Martin:

Well, I think in the beginning when they started targeting me, obviously, I don't know, because I don't know who the perpetrators are. When it came to deepfakes and the videos later down the track, the motivations were knowing that I was speaking out and very public about this, that I think that they wanted to taunt me and to intimidate me to stop what I was doing in some way.


Dusty Weis:

They saw you taking power against your abusers and were essentially trying to take the power back in their own crude way.


Noelle Martin:

Yes. That's their way of trying to be like, "We don't care. We have no regard for you or the laws. We're just going to continue to do what we are doing because we can get away with it."


It absolutely destroys your life. It's a life sentence. What people might not understand, because I think there's this misconception about issues that happen online, is that because it's online, it doesn't affect you the way things would in the real world. Some might say, "Oh, it's not really you, so why are you so upset?" But the thing about this issue is that it's permanent, effectively, for me. The way that they've misappropriated my name and my image and my likeness and my dignity and my autonomy and agency, it's a permanent misrepresentation that's publicly accessible for everyone, forever in perpetuity. It is extremely damaging to go through. But the sad thing is, because this has happened to me over years and just escalated over time, it's almost become normalized.


The deepfakes, I felt more angry at the audacity for them to do that rather than feeling all the emotional pain because I had already gone through that for so long.


Dusty Weis:

In the face of that emotional turmoil, Noelle says the decision to speak up and fight back was one that she struggled over.


Noelle Martin:

It's definitely been a tough battle. It wasn't something that I wanted to do or thought I would do at all. Even speaking out, I know people say that's something that they might not have done in those circumstances, but it definitely wasn't the first thing that I wanted to do. It took a long time to reach that point because there was literally nothing else available. There were no specific laws, there was no justice, there was no recourse. The things on the internet were just proliferating and amplifying. There was nothing that I could do except to try and reclaim my name that was being taken away from me and fight for justice because I wasn't the only one it was happening to. People didn't seem to talk about it in the media, or just in general, but I was heavily involved in the laws changing across Australia.


In certain states, new laws were introduced making it a criminal offense to distribute, to record, and to threaten to distribute or record intimate images or videos without consent. That was really great in terms of setting the standard for society and the community, being like: this is not acceptable, this is punishable, and we don't tolerate this abuse.

Now I've gone on to try and speak about this publicly and globally. I've spoken to countries all over the world in news media. I'm trying to essentially urge countries to criminalize this, to act upon this, to be more aware of this and the harms that it causes people. I've spoken to the FBI and Homeland Security. It's good that people at the highest levels are at least focused on looking at this issue.


Dusty Weis:

You've certainly become a leader in this space, and I'm glad for that, but over this same timeframe, we've also seen some other very high profile women victimized by deepfake porn: Hollywood actresses, Scarlett Johansson, Taylor Swift, Aubrey Plaza. How does that impact the battle against this kind of abuse?


Noelle Martin:

It does ultimately put a lot of media attention on the issue, especially when you have high profile cases, celebrity cases of this. There's responses by those people to the issue. It makes people more aware of what's happening. But I think what that has done in some ways is make the broader public think that this issue is so far-fetched that it wouldn't happen to everyday people, that this is something that might only happen to celebrities or to people in the public eye. But that's not the case. This is something that is happening to everyday people and can theoretically happen to anyone, and once it's out there, it can potentially ruin your entire life. That might sound like an exaggeration to say, but that is literally the world that we are living in.


Dusty Weis:

Given the creeping and universal nature of the threat here, what do we need to do as a society to protect people, and especially people like you from this sort of abuse?


Noelle Martin:

Well, that's a really big question, and it's something that survivors, academics and policy makers are really working on. There have been recent summits, actually, in the US, of activists and people in this space from all over the world who've come together to try and chart a global path forward, because we need to act on this.


You've got different layers to this. It's not going to be one solution. But ultimately, you need to have greater education about this, digital literacy. People need to be aware of what can happen and what the threats are. You need to have stronger laws, at the baseline criminal laws in every jurisdiction around the world. You also need to make sure that there are other avenues for justice civilly. You also potentially need to have regulators established that are going to help people take down the material, because what we're seeing is a lack of action from these big tech companies.


You also need to have a lot more accountability for the bigger hubs, the tech companies, and the porn sites as well that are helping enable and facilitate this abuse. There are a lot of different possible solutions, and we're all working on trying to implement them.


Dusty Weis:

You're right. That was a big question. Boy, if you didn't have a perfectly bulleted list of really big answers. But even as she advances the cause in one area of emerging technology, Noelle Martin has her eyes on the potential threats of another emerging tech, the so-called Metaverse of online virtual reality.


Noelle Martin:

Effectively, my research was looking at Meta's plans to build the Metaverse and how they're doing a lot of work effectively engineering human bodies into these avatars, 3D avatars of people. They're replicating human beings down to their pores, their hair strands, their eye gaze, their body movements, in order to create this computer-generated universe that feels real, where people can communicate and work and socialize and all that in the coming years. One of the, I would say, inevitable harms of this is how it's going to be used, misused and abused, and how women are going to be the targets of harms that I don't think we've seen before, especially in the misappropriation of people's identities and the abuse of women. If what we are dealing with today is the fabrication and the misappropriation of people and women in 2D, I really am concerned what that would look like and how that would manifest in 3D form.


Dusty Weis:

Does this new piece of technology just become a new venue for perpetuating the sort of abuse that you sustained? Yeah.


Noelle Martin:

Yeah.


Dusty Weis:

Can I ask you, you mentioned that you never found out who your abusers were or what their real motivations were. You've just been left to speculate, and so you find yourself the victim of this nameless, faceless mob on the internet. But if you had the opportunity today to speak directly to the people who created these images of you, what would you say?


Noelle Martin:

I've thought about this. There would be a lot of curse words in what I would say, but I just don't know if I would even waste my energy.


Dusty Weis:

That's a pretty big statement in itself. Yeah.


Noelle Martin:

Yeah. Because I guess it is what it is. I've had to almost make peace with what's happened for the sake of my sanity.


Dusty Weis:

Yeah, because how do you make people who habitually dehumanize other people recognize the value of a human life?


Noelle Martin:

Yep, that's exactly right.


Dusty Weis:

Noelle Martin, you're a lawyer, a researcher, and an activist from Perth, Australia, courageously leading the battle from the front lines against deepfake pornography that victimizes women in the real world. Thank you for your bravery, and thank you so much for talking to us here on Lead Balloon.


Noelle Martin:

Thank you so much for having me.


Dusty Weis:

Thank you as well to Dr. Hany Farid from UC Berkeley, and also Fran Panetta and Halsey Burgund, co-directors of the Emmy-winning In Event of Moon Disaster project featuring the Richard Nixon deepfake.


Francesca Panetta:

Bewaaaaare.


Halsey Burgund:

We're all about terrifying our audiences. Yes, we'll scare you straight.


Dusty Weis:

I have to note here that Fran, Halsey, Hany, and Noelle are all delightful, fun and funny people with great senses of humor. Normally we try to keep this show kind of light, but the dour tone of this episode, that's not on the guests. I like to think that it's not on me; it's strictly on the subject matter here. But if this episode didn't drive it home: this technology is an imminent threat for us here in the world of strategic communication. Take it seriously. Start paying attention. Listen to the experts, and above all, have a plan, because a year from now, we are all going to be working in a very different world now that this genie is out of the bottle.


I promise something a little more light and fun is coming up next month. Do follow Lead Balloon in Apple Podcasts, or whatever your favorite podcast app is, and check out PodCamp Media on Social. Lead Balloon is produced by PodCamp Media, where we provide branded podcast production services for businesses. Our podcast studios are located in the heart of beautiful downtown Milwaukee, Wisconsin, but we work with brands all over North America. Podcampmedia.com. Music for this episode by Falls, Midnight Noir, Memory Theory, and Empyrial Glow. Beatrice Lawrence was our researcher, and I was the producer, editor, and writer. Until the next time, folks, thanks for listening. I'm Dusty Weis.


