Descript Ethics Statement

Generative media — the field of research that relates to "deep fakes" and other forms of synthesized audio and video — is advancing rapidly. In many use cases, the results are already indistinguishable from real media. This technology has exciting applications, such as Descript's Overdub feature, but it also holds the potential for misuse.

While Descript is among the first products available with generative media features, it won't be the last. As such, we are committed to modeling a responsible implementation of these technologies, unlocking the benefits of generative media while safeguarding against malicious use.

We believe you should own and control the use of your digital voice. Descript uses a process for training speech models that depends on real-time verbal feedback, ensuring that individuals can only create a text-to-speech model of their own voice. Once a voice model is created, the user owns their voice and has the sole authority to decide when and how it is used.

Descript's generative media features are currently in closed beta. As we look toward a broader release, we will remain in conversation with leading machine learning researchers, ethics professors, and the broader public about how to best develop and implement this technology.

FAQ

How does Descript ensure that users can only synthesize their own voices?

Over the last several years, we have created thousands of authenticated voices using a voice model training process that depends on real-time verbal feedback. During the closed beta of Descript's generative voice modeling features, we are obtaining manual approval from each speaker before making their Voice Double available for their use.

What is preventing other individuals/organizations from using similar technology with malicious intent?

Today, our technology is unique, but the foundational research is already widely available. Other generative media products will exist soon, and there's no reason to assume they will have the same constraints we've added to Descript.

What can be done to detect deep fakes?

It's unclear. While compelling research (example) is underway, the quality of generative media could improve at a rate that outpaces the technology designed to detect it. We cannot predict what the future holds for media, but we believe it will remain important for each of us to be critical consumers of everything we see, hear, and read.