Deepfakes: Why podcasters are at risk and how to protect yourself

In this article, you’ll learn the risk podcasters face by creating a consistent audio collection of their voice, how they can attempt to protect themselves, emerging ways to monitor for deepfakes, and what to do if you ever find a deepfake of your voice on the internet.
October 4, 2023
Erin Ollila

Jordan Peele is a masterful filmmaker. So masterful that back in 2018, he left me questioning my ability to determine whether something was real or not with the deepfake that he made of former President Barack Obama.

At the time, I remember thinking, Thank goodness that no one would create a fake video of me. I couldn’t imagine a situation in which someone would have access to enough content of me to train a neural network on—nor could I imagine why they’d want to. 

And then, I started a podcast. 

Now, AI is so accessible in our day-to-day lives that the scenario that once seemed so impossible suddenly seems likely.

If you’re a podcaster like me who’s been making episodes for a while, there are now hundreds of hours of your voice available for streaming. Which also means the potential exists for someone to take your audio and use it to train an AI model of your voice. And once that model is trained, there’s no telling what they could do with it.

What are deepfakes?

To understand more about deepfakes, the risks they pose, and the protections available, I contacted Dr. Manjeet Rege, the Director of the Center for Applied Artificial Intelligence at the University of St. Thomas in St. Paul, Minnesota. 

“Deepfakes are synthetic media created by deep learning techniques to generate fake audio, images, or video that falsely depict a real person's likeness, expressions and voice,” says Dr. Rege.

“They’re made by training neural networks on large datasets of images, videos and audio recordings to produce convincing forgeries that can spread misinformation if used maliciously.”

And these synthetic media are only getting easier to make—and harder to spot. The Federal Trade Commission warns that “...a growing percentage of what we’re looking at is not authentic, and it’s getting more difficult to tell the difference.”

Because it’s getting so hard to separate what’s real from what’s fake, it’s becoming even more important to protect yourself as a content creator and advocate for tighter screening from your content platforms as a content consumer. You’ll learn more about how to do both of those things later on in the article.

A fan created a deepfake of podcaster Joe Rogan to make The Joe Rogan AI Experience

Are deepfakes something podcasters should worry about?

You might think you’re safe from audio deepfakes because you have a smaller podcast or because you’re not famous. (We assume.)

But that’s not necessarily true.

Even before deepfakes were possible, cyberbullies looked for any strategy to trash their rivals — think doctored chat screenshots, old photos presented out of context, or even just gossipy posts designed to spread malicious rumors. The posts about famous people spread far enough for you to hear about them, but that doesn’t mean less-famous people are never targeted. 

Put simply, if you’ve ever had an enemy, you’re at risk of deepfakes.

And creating a deepfake wouldn’t be too difficult if someone had access to a collection of your podcast episodes. If you’re hosting your own podcast, the chances are high that there are hours and hours of your voice available to be downloaded.

“The more audio data you have to train the model, the better it will potentially be at replicating that person's voice accurately,” says Dr. Rege. 

The industry is catching up

If you’re nervous about the effect audio deepfakes may have on the podcasting industry, know that your concerns are valid. The good news is that tools and services are being developed to help with deepfake detection.

For example, it’s becoming possible to embed an inaudible digital watermark in your recordings so you can tell when audio of your voice has been tampered with. Resemble AI offers digital watermarking as one of its features, and its tool Resemble Detect is trained to help identify deepfakes. But Detect only helps once you’ve already found potentially spoofed audio of your voice; it doesn’t search the internet for every impersonation that exists. 
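To give a feel for how an inaudible watermark can work, here’s a toy spread-spectrum sketch. This is a simplified illustration, not how Resemble AI actually implements its watermarking; the function names and the `STRENGTH` constant are made up for this example, and the mark is exaggerated so the math is easy to see. The idea: mix a quiet pseudo-random signal derived from a secret key into the audio, then detect it later by correlating against the same key.

```python
import numpy as np

STRENGTH = 0.01  # exaggerated for clarity; a real watermark is far quieter

def embed_watermark(audio, key):
    """Add a low-amplitude pseudo-random watermark derived from `key`."""
    rng = np.random.default_rng(key)
    return audio + STRENGTH * rng.standard_normal(len(audio))

def detect_watermark(audio, key):
    """Correlation is ~STRENGTH if the key's mark is present, ~0 if not."""
    rng = np.random.default_rng(key)
    mark = rng.standard_normal(len(audio))
    score = np.dot(audio, mark) / len(audio)
    return score > STRENGTH / 2

# Simulated one-second "recording" at 16 kHz
original = np.random.default_rng(0).standard_normal(16000) * 0.1
marked = embed_watermark(original, key=42)

print(detect_watermark(marked, key=42))    # True: watermark found
print(detect_watermark(original, key=42))  # False: clean audio
```

Because the watermark looks like faint random noise, a listener won’t hear it, but anyone holding the key can check for it, which is what makes tampering detectable.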

It’s also good to know that many audio platforms are taking this threat seriously and putting plans in place to protect themselves and creators who publish to their platforms. 

As an example, Deezer is beginning to use AI tools to try to detect and remove content that was created with generative AI. Spotify is working on this issue, too. 

“We've seen pretty much everything in the history of Spotify at this point with people trying to game our system," Spotify founder Daniel Ek said in a recent interview with the BBC. "We have a very large team that is working on exactly these types of issues."

At this time, AI-generated audio isn’t completely banned on Spotify, though the platform doesn’t allow its content to be used to train machine learning models. In other words, if someone wants to make a deepfake of a podcaster’s voice, they aren’t allowed to do it with audio sourced from Spotify.

What to do if you discover a deepfake

It’s important to know how to protect yourself if you’re ever the victim of an audio deepfake.

Dr. Rege says that podcasters have a few options if they do discover a deepfake. 

“The podcaster can issue formal takedown requests to hosting platforms, leverage deepfake monitoring services to accelerate takedowns, suspend associated fraud accounts misusing their identity, publish disclaimers that the content is fabricated, work with reputation management firms to optimize real content visibility, pursue legal action if needed against creators, and increase security measures on their official accounts and content going forward.”

Another way to protect yourself is to have an organized podcasting workflow that backs up your audio content for safe storage. You’ll want the original copies of your recordings on hand to verify that an audio deepfake is, in fact, synthetic media.
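One simple way to make those backups verifiable (a sketch with hypothetical helper names, not a Descript feature) is to record a SHA-256 fingerprint of each master file when you publish. Later, a matching hash proves a file is your untouched original, while a deepfake won’t match anything in the log.

```python
import hashlib
from pathlib import Path

def fingerprint(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def log_episode(manifest, audio_path):
    """Append 'digest  filename' to a manifest stored with your backups."""
    with open(manifest, "a") as m:
        m.write(f"{fingerprint(audio_path)}  {Path(audio_path).name}\n")
```

Running `log_episode("manifest.txt", "episode-042.wav")` after each export builds a dated paper trail you can hand to a platform or a lawyer alongside the original files.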

Dr. Rege says, “Having this authenticated media on hand will allow the podcaster to conclusively demonstrate to platforms, monitoring services, and even legally if necessary, that the deepfake content is fake by comparing it against their bona fide vocal examples.”

Descript’s Overdub landing page

Does Descript let you deepfake yourself?

I love that Descript has an AI voice feature that I can use if I catch a mistake during editing. That is literally an AI version of my own voice—technically, a deepfake. But it’s important to point out that replacing your own individual content with a synthetic version of you is completely different from someone else creating a fake piece of content without your permission. When you use Descript to overdub a small piece of audio in a larger body of content, you’re not trying to fool anyone; you’re just correcting an error.

And more importantly, Descript has tight security protocols in place, along with ethical principles that outline how an individual can use their AI voice. 

For example, the first time you use your AI voice, you have to record a verification statement—in your own voice—that matches the surrounding voice. And this verification statement is authenticated by an actual human on the Descript team. Plus, Descript doesn’t allow you to train anyone else’s voice, so you couldn’t Overdub a podcast guest or fix an error from someone you may be doing a podcast ad swap with, for example—much less create a deepfake of a celebrity.

As artificial intelligence becomes more ingrained in our lives, the best thing we can do as podcasters is to stay vigilant about the risks and benefits. It’s also important to use your voice, and the platform you’ve built, to speak up for better legal protections for podcasters and other content creators.

Erin Ollila
Erin Ollila is an SEO copywriter, lover of pretzel bread, and host of the Talk Copy to Me podcast. Learn more and connect: https://erinollila.com