How I combine generative video tools with Descript to produce innovative special effects

In this article, I’m going to explain how I’ve been using generative video in combination with Descript to supercharge my creativity — and how you can do the same.
July 24, 2023
Brenton Zola

If you’re reading this, chances are you’ve heard about generative AI tools — a Descript survey found that nearly two-thirds of creators are already using them. Most of the focus of these conversations has been on generative text tools like ChatGPT or generative image tools like Midjourney or DALL-E 2.

But generative video tools haven’t gotten nearly as much attention, even though generative video can be a game-changer for creatives and marketers. It holds particular promise for creators who want to make unique cinematic videos or marketing animations but don’t have experience as animators, or the time to master tools like Adobe After Effects.

Count me among them. In this article, I’m going to explain how I’ve been using generative video in combination with Descript to supercharge my creativity — and how you can do the same.

The power of generative video

One generative video app I’ve been experimenting with is Runway. Runway has a number of nifty AI tools, from image expansion to face blurring, but their flagship tools are text-to-video and video style transfer, which they call video-to-video.

Runway’s text-to-video tool, Gen-2, generates 4-second clips in virtually any style based on your text prompts. Until recently, the free service was only available to members of a closed beta that operated on Runway's Discord, but Gen-2 is now open to the public.

To use Runway Gen-2, you'll need to create a free account and then navigate to the tool page. Gen-2 certainly isn’t perfect — it’s still in its early phases and will require a bit of patience if you want to make longer videos. Since you can only generate clips 4 seconds at a time, it’s best for shorter videos in a montage style. The video clips also have no sound and often have slow frame rates, so they can look more like animated GIFs at first. If you want to produce a video that plays at a normal speed, you’ll have to speed up the clips with an editing tool. 
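Since speeding clips up is just a time-scale change, the arithmetic behind that fix is easy to sketch. The Python below is a rough illustration with hypothetical fps numbers (Runway doesn't publish a guaranteed frame rate); the ffmpeg command in the comment is one common way to apply the change.

```python
# Rough sketch of the speed-up arithmetic. The fps numbers are
# hypothetical -- check the actual frame rate of your own clip.

def speedup_factor(source_fps: float, target_fps: float) -> float:
    """Multiplier needed so a clip rendered at source_fps plays like target_fps."""
    return target_fps / source_fps

def new_duration(duration_s: float, factor: float) -> float:
    """Clip length after being sped up by `factor`."""
    return duration_s / factor

# A 4-second clip that looks like ~8 fps, sped up to feel like 24 fps:
factor = speedup_factor(8, 24)      # 3.0
length = new_duration(4.0, factor)  # ~1.33 seconds

# One way to apply that 3x speed-up outside an editor is ffmpeg:
#   ffmpeg -i clip.mp4 -filter:v "setpts=PTS/3" fast.mp4
print(factor, round(length, 2))
```

The tradeoff is visible in the numbers: a smoother playback speed costs you clip length, which is another reason the montage style works well.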

Caveats aside, Gen-2 can produce some exciting results. Here’s a still from a text prompt I wrote, “magical glowing garden of fireflies.”

[Image: Gen-2 still generated from the prompt “magical glowing garden of fireflies”]


The fact that the tool could produce this clip in just a couple of minutes is pretty impressive. Gen-2 is really good at producing landscapes, natural scenes, abstracts, and other ephemera. What it’s still not consistently great at producing…is human beings. This always seems to be an early limitation of AI tools. Sometimes the results will be fantastic; other times they’ll be strange, like something out of the uncanny valley.

Despite Gen-2’s limitations, I’ve seen a number of innovative people stitch its clips together into longer movies. Generative AI expert Souki Mehdaoui made an entire AI film based on Mary Oliver’s poem “Dogfish.” The video is poignant, and there are many scenes where Runway’s generations are emotionally moving, especially set against the narration.
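If you want to try that stitching approach yourself, one common route is ffmpeg's concat demuxer, which joins clips listed in a small text file. Here's a minimal sketch that builds that list; the filenames are hypothetical placeholders for your own exported clips.

```python
# Sketch: preparing a list file for ffmpeg's concat demuxer, which
# stitches many short clips into one longer video.

def concat_list(clips: list[str]) -> str:
    """Build the contents of an ffmpeg concat list: one "file '...'" line per clip."""
    return "\n".join(f"file '{name}'" for name in clips) + "\n"

# Hypothetical clip names -- substitute your own Gen-2 exports.
clips = [f"scene_{i:02d}.mp4" for i in range(1, 4)]
print(concat_list(clips))

# Save the output as list.txt, then stitch with:
#   ffmpeg -f concat -safe 0 -i list.txt -c copy montage.mp4
```

The `-c copy` flag avoids re-encoding, which keeps the stitch fast, though it assumes all clips share the same codec and resolution (true if they all came from the same Gen-2 settings).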

One of the sillier examples of a Runway Gen-2 movie is the viral Pepperoni Hugspot pizza commercial, made by the creator Pizza Later. They created a commercial in the style of a nostalgic ’80s pizza ad, complete with music and a voiceover generated with other AI tools.

If you want to use Gen-2, choose the videos you create carefully, because there’s a limit to how much you can generate for free. A free account is capped at 60 seconds of video. With a paid account ($15/month or $144/year), you get 125 seconds of video per month and can pay extra for more. You also get additional features like higher-resolution videos.

I look forward to seeing how Runway continues to develop Gen-2. But in my opinion, where Runway still shines is its Gen-1 video-to-video tool, which modifies existing videos to give them a different style. Below is an example of Gen-1 using some of Runway’s stock samples: I took a video of a young woman and applied a style reference of a shadowy cyberpunk cityscape.

[Image: stock clip of the young woman]

[Image: style reference of the shadowy cyberpunk cityscape]

This was the result Gen-1 quickly produced: 

[Image: Gen-1 output applying the cyberpunk style to the clip]


If you want, you can regenerate the clip to change how strong the effect is and adjust other parameters to achieve the exact look that you want — maybe highlighting more of the face and less of the background. Gen-1 also gives you several previews of what the output will look like before generation, which is pretty handy. 

This kind of technology has incredible potential. For example, the team at the Corridor YouTube channel used technology similar to Runway to produce an anime film called “Rock, Paper, Scissors.” They filmed live-action scenes on a greenscreen and then generated anime effects from the footage to create an entire streaming-ready film. And they did it in a fraction of the time it would take with traditional methods. The results are pretty stunning.


This method isn’t necessarily new. As Disney animator Aaron Blaise points out in his reaction video, this is the AI version of rotoscoping, which is essentially tracing over real-life scenes that were previously filmed to give them an animated look. Rotoscoping was practiced for a long time in Hollywood and still pops up here and there. 

But with new AI technology, not only can it be done in a fraction of the time, but it can also be applied with nearly any creative style. And that’s what’s gotten me most excited. 

Runway + Descript

I’ve been combining Runway with Descript to generate creative video concepts. I’ll use Runway’s video-to-video and other AI Magic Tools, then export that creation to Descript’s editor. Runway does have its own built-in video editor, but I find Descript to be a more comprehensive editing tool. Descript makes it really easy to generate text effects, add waveforms, change aspect ratio, export for social formats, and collaborate — in essence, it gives me flexibility in my video workflow.
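As an aside, the aspect-ratio change that editors like Descript handle for you boils down to simple crop arithmetic. Here's an illustrative sketch (not Descript's actual implementation) of a centered crop that turns a 16:9 landscape frame into a 9:16 vertical one for social formats:

```python
# Sketch: center-crop arithmetic for converting between aspect ratios,
# e.g. 16:9 landscape -> 9:16 vertical for social clips.

def center_crop(width: int, height: int, target_w: int, target_h: int):
    """Return (crop_w, crop_h, x_offset, y_offset) for a centered crop
    matching the target aspect ratio, keeping as much frame as possible."""
    target_ratio = target_w / target_h
    if width / height > target_ratio:
        # Frame is wider than the target: keep full height, trim the sides.
        crop_h = height
        crop_w = round(height * target_ratio)
    else:
        # Frame is taller than the target: keep full width, trim top/bottom.
        crop_w = width
        crop_h = round(width / target_ratio)
    return crop_w, crop_h, (width - crop_w) // 2, (height - crop_h) // 2

# A 1920x1080 landscape frame cropped to 9:16 vertical:
print(center_crop(1920, 1080, 9, 16))  # (608, 1080, 656, 0)
```

In other words, going from landscape to vertical throws away roughly two-thirds of the frame's width, which is why it pays to keep your subject centered when you plan to repurpose footage for social.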

For one of my recent projects, where I explored the idea of multiple “selves,” I took footage I’d already filmed and used Runway's green screen effect to remove the background. (Editor’s note: Descript also has an AI-powered green screen feature.)

[Image: original footage]

[Image: footage with the background removed]

Then I used one of Runway’s digitization effects to create a digitized version of my “selves.”

[Image: digitized version of the “selves”]


Finally, I used Descript to stitch the whole video together with superimposed backgrounds, an intro card, outro, sound effects, transitions, and cuts. I was pleased with the result, which was a creative full-length video that would’ve been very difficult to achieve otherwise. 

So, now it’s your turn to go and experiment. For creatives who want to lean into generative video or for marketers who are looking to spice up their content and create something new, generative video could be a great tool. And its capabilities are only going to get better with time. 

As a note, I know there’s an important conversation happening about the ethics of AI and the labor of artists. As an artist myself, I think it’s best to be thoughtful about how you use the technology. For me personally, what appeals to me about generative video is that you’re making an entirely new creation that still requires the hand of the artist. When you apply a creative style to something you’ve already created (or licensed), you’re bringing a new vision to life that might not have been possible otherwise.

Don’t miss the boat on these new tools, and keep pushing your creative visions.

Brenton Zola
Brenton Zola is a first-generation writer, thinker, and multidisciplinary artist fascinated by what it means to be human.

