Generative media, the field of research behind "deep fakes" and other forms of synthesized audio and video, has made incredible advances in recent years. At its most impressive, it's already able to produce audio, video, and writing that's indistinguishable from real, human-created media. The technology has exciting potential, with applications like Descript's AI Voices and Studio Sound making creative work more efficient and accessible. But it also carries the potential for misuse.
Descript was among the first products available with generative media features. As a leader in the field, we are committed to modeling a responsible implementation of these technologies, unlocking the benefits of generative media while safeguarding against malicious use.
We believe you should own and control the use of your digital voice. Descript's process for training speech models depends on verbal consent verification, ensuring that our customers can only create text-to-speech models that have been authorized by the voice's owner.
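Descript hasn't published the mechanics of this verification, but the general shape of such a check can be sketched in a few lines. The Python below is a minimal illustration under stated assumptions, not Descript's implementation: the function name `consent_verified`, the `0.85` similarity threshold, and the idea of comparing speaker embeddings (assumed to come from some upstream speaker-encoder model) are all hypothetical, introduced only for the example.

```python
import numpy as np

# Hypothetical consent phrase; the real statement and wording are unknown.
CONSENT_STATEMENT = (
    "I authorize Descript to create a text-to-speech model of my voice."
)


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def consent_verified(
    consent_embedding: np.ndarray,
    training_embedding: np.ndarray,
    transcript: str,
    threshold: float = 0.85,  # assumed threshold, chosen for illustration
) -> bool:
    """Authorize training only if (1) the consent recording's transcript
    contains the required statement and (2) the consent recording's speaker
    matches the speaker of the training audio."""
    says_statement = CONSENT_STATEMENT.lower() in transcript.lower()
    same_speaker = (
        cosine_similarity(consent_embedding, training_embedding) >= threshold
    )
    return says_statement and same_speaker


# Toy usage: in practice, embeddings would come from a speaker-encoder model
# and the transcript from speech recognition; random vectors stand in here.
rng = np.random.default_rng(0)
voice = rng.normal(size=256)
print(consent_verified(voice, voice + rng.normal(scale=0.01, size=256),
                       transcript=CONSENT_STATEMENT))
```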
As the applications of this technology continue to evolve, we will remain in conversation with leading machine learning researchers, ethics professors, our customers, and the broader public about the most responsible, beneficial ways to develop and implement this technology. And through our membership in the Content Authenticity Initiative, we will collaborate with other technology and media companies to create a set of industry standards to combat misinformation.
Many companies offer similar technologies to ours, and we hope that, if they haven’t already, those companies will soon join us in implementing similar constraints to deter misuse.
Whether synthetic media will remain detectable is unclear. Compelling detection research is underway, but the quality of generative media may improve at a rate that outpaces the technology designed to detect it. While we cannot predict what the future holds for media, we do believe it will remain important for each of us to be critical consumers of everything we see, hear, and read.