Bringing Midjourney Art to Life with Multi-Motion Brush from Runway

Step aside, text-to-image generators. In this article, we’ll feature Runway’s unique multi-motion brush and turn Midjourney images into short videos.

John Angelo Yap

Updated June 14, 2024

A robot filming a cat, generated with Midjourney

Reading Time: 5 minutes

It’s been a few years since AI was introduced to the general public, and it has pretty much shaped the internet since. Today, we don’t just see AI in blogs or in a random student’s assignment. It’s also been taking over the world of art, for better or worse.

We all know what Midjourney is at this point. After all, it’s the most popular AI image generator on the market. But only a handful know that a new technology is being developed alongside image generators: text-to-video AI models, one example of which is Runway.

One of Runway’s most famous features is multi-motion brush, which is what we’ll be talking about today. In this article, I’ll guide you through what it does, how to use it, and some examples I created using this feature.

What is Midjourney?

Since its inception, Midjourney has reigned supreme in the realm of AI image generation, and its creative prowess and attention to detail have garnered a devoted following. Its latest version is V6 (with Niji 6 for anime art generation), which brings not just better creativity but also new capabilities such as style and character reference.
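For the curious: style and character reference work by passing a reference image URL alongside your prompt via the --sref and --cref parameters, respectively. Something like the command below, where the URL is just a placeholder:

/imagine prompt: a knight resting in a neon-lit alley --cref https://example.com/my-character.png --v 6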

What is Runway?

Runway is a platform similar to Midjourney, but its main focus is on text-to-video AI generation. This platform allows users to create multimedia content, like videos and images, using prompts or image inputs. 

So, let’s put the two together. What happens if we try to bring Midjourney images to life using Runway?

How Can You Create Videos of Midjourney Art with Runway?

Let’s start with something easy — create an image using Midjourney. In the past, you could only do this on Discord using their bot with the “/imagine” command. But now, you can also generate images on their web application. For this article, let’s use Discord.
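For reference, a generation command in Discord looks something like this (the prompt here is an illustration, not the exact one I used):

/imagine prompt: a grainy 1940s black-and-white photograph of a crowded train station, cinematic lighting --ar 3:2 --v 6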

Out of the four variations, I like the first one the most, so that’s what we’ll use in Runway. Let’s see if we can fool future historians using this image.

To use Runway, you need to create an account first and then log in.

Once you’re through, select “Generate Videos” on the left navigation bar, then choose “Text/Image to Video” from the three options provided.

The next step is to upload the image you created with Midjourney. From here, you have an empty textbox where you can input a prompt describing what you want the final output to look like, but that’s not what we’re going to do. 

Instead, let’s press the paintbrush icon on the navigation bar, which opens the multi-motion brush tool.

The next thing you need to do is highlight the specific areas of your image that you want to move. For this, you get five distinct brushes, each of which marks a separate layer of your original image.

Once every layer is highlighted, select one of the brushes and wait for the motion brush pane to pop up. This pane contains four sliders (I’ll sketch how they combine right after the list):

  • Horizontal: x-axis movement, left to right.
  • Vertical: y-axis movement, up and down.
  • Proximity: z-axis movement, closer or farther.
  • Ambient: scattered noise that simulates the subtle motion of real video.
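To make that concrete, here’s a hypothetical setup for a seaside scene like the cat image (the values are my own illustration, not a recipe): brush the water and the sky as separate layers, give the sky layer a small positive Horizontal value so the clouds drift sideways, nudge the water layer’s Proximity so the waves seem to roll toward the camera, and add a touch of Ambient to both so nothing sits perfectly still. Expect some trial and error to find values that look natural.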

Here’s what the final output looks like once I’ve generated a video:

Other Examples

If you’re looking for more, here are other videos I’ve created using Midjourney and Runway’s motion brush feature.

Note: These are all created using Runway’s free trial version. More complex movement and precision control can be unlocked by subscribing to their paid tiers.

Overall Thoughts

Disappointed might be too strong a word, but I was definitely expecting more from Runway, especially since they were among the first on the market to introduce text-to-video and image-to-video AI generation.

Out of the seven samples above, there are three standouts: the cat, the empty field, and the woman at a music festival. They’re the ones that are most coherent, but there are still some issues. For instance, I don’t really understand why Runway chose to completely change the cat and the rock formation in the fourth video. 

With these three videos, you could really see Runway’s potential: the animation of the waves in the cat video, the camera movement in the festival video, and the swaying of the plants in the field video were all exceptional.

So, let’s discuss the bad apples.

  • The first video (the John Wick-esque film shot) isn’t too bad, but Runway was unable to coherently animate running, which made the character look like he was skipping in the latter parts.
  • The fifth video (an oil painting of the universe’s creation) shows that Runway struggles with maximalist images. It’s also obvious that the motion brush isn’t optimal here, as it couldn’t move the person’s face realistically using only the x-axis option.
  • The sixth video (the dancing robot) displays Runway’s inability to perceive objects as solid matter, with the robot morphing into something new with every movement.
  • The last video (a woman on a park bench) also shows that Runway doesn’t have a good grasp on animating faces.

The Bottom Line

With Sora just around the corner, you have to wonder how established AI video generators will perform against this new product from OpenAI. It’s simply not enough to be “good enough” anymore.

Here’s what I’ll say about Runway, though: it’s definitely promising, but it’s not as good as the outputs we’ve seen from Sora so far. The biggest hurdle they have to overcome is improving their model’s object coherence. From there, they also need to refine features like the multi-motion brush so it can understand more complex instructions.

Speaking of Sora, we recently wrote an article comparing Sora’s output against Runway’s. Give it a read if you have some time. Have fun!

Want to Learn Even More?

If you enjoyed this article, subscribe to our free newsletter where we share tips & tricks on how to use tech & AI to grow and optimize your business, career, and life.


Written by John Angelo Yap

Hi, I'm Angelo. I'm currently an undergraduate student studying Software Engineering. Now, you might be wondering, what is a computer science student doing writing for Gold Penguin? I took up studying computer science because it was practical and because I was good at it. But, if I had the chance, I'd be writing for a career. Building worlds and adjectivizing nouns for no other reason than that they sound good. And that's why I'm here.
