
Midjourney Just Became More Consistent With Style Reference

Midjourney just released a game-changing feature for better consistency: Style Reference. In this article, I'll explore what it actually is and how effective this feature is at capturing image styles.
Updated February 5, 2024
A robot painter, generated with Midjourney

No matter your choice of AI image generator, there's one problem plaguing the community with seemingly no fix in sight: consistency.

AI models don't have a concept of visual memory, which means you can't generate the same style or character twice. There will always be a difference, no matter how hard you try.

Well, looks like Midjourney figured out a way.

With style reference, you can copy another image's art style without ever knowing the prompt behind it. In this article, we'll talk about what exactly style reference is, how to use it, and some examples that show how it works.

What is a Style Reference?

Have you ever liked an image but couldn’t replicate the style and vibe of it on Midjourney? That’s what the style reference feature tries to fix. It extracts the style of the reference image (hence, style reference) so you can use it for your prompt without actually knowing how to phrase it. The best thing is that you can use any image, not just other Midjourney images.

How To Use Midjourney’s Style Reference Feature

To use this new feature, you need to use Midjourney on Discord, even if you have access to their alpha website. From there, type your prompt as you usually do but, at the end, add the --sref parameter. Following that, simply copy and paste a link to your style reference image (you can add as many as you want) and generate. Oh, and don't forget to add --v 6.0 too. Here's an example:

Midjourney V6 Style Reference
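Spelled out in text, a style reference prompt follows the same pattern as the examples throughout this article; the bracketed link is just a placeholder for the URL of your own reference image:

Prompt: a robot painter --v 6.0 --ar 16:9 --sref [Reference Image Link]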

You can also control the strength of stylization using the --sw parameter followed by a number. 0 is the lowest, 100 is default, and 1000 is the maximum value.

Reference Image

Prompt: a cat staring out a window --v 6.0 --ar 16:9 --sref [Reference Image Link] --sw 25

Prompt: a cat staring out a window --v 6.0 --ar 16:9 --sref [Reference Image Link] --sw 800

Notice how the first cat image (at --sw 25) didn't even apply the reference's anime art style; instead, it only picked up some of the more inconsequential parts of the reference image, like the plants, the wooden windowsill, and the subject's position.

On the other hand, setting the style weight to 800 not only followed the reference image's art style, but also overcompensated in some ways. This is most evident in the cat's "hair," which looks too similar to the girl's hair in the reference.

Style Reference vs. Image Prompts

If you’ve been using Midjourney for a while, you’d know that it has always been able to use images as prompts.

So, what’s the difference? Let’s break it down.

A Midjourney prompt has three major parts: the subject, the supporting details, and the parameters. Let's disregard the last one for now. What image prompts do is integrate the whole image into your prompt, subject and all. Take this reference image as an example:
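(To make that three-part breakdown concrete, here's a made-up prompt that isn't from this article: in "a lone astronaut on a red desert, watercolor, muted palette --v 6.0 --ar 16:9", the astronaut is the subject, the watercolor style and muted palette are the supporting details, and everything starting with the double dashes is a parameter.)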

Now, let's try using that reference image as an image prompt.

Prompt: abraham lincoln [Image Link] --v 6.0 --ar 16:9

It looks like someone took Honest Abe's picture and pasted it directly into the image, doesn't it?

On the other hand, style reference only takes the supporting details (styles) and not the subject. This makes it a lot more flexible. Here’s what the last prompt looks like with style reference instead of image prompts:

Prompt: abraham lincoln --v 6.0 --ar 16:9 --sref [Reference Image Link]

Notice how well it blends? It's much more natural with style reference. This is especially amazing for blending two images together if you need the subject from one image and the art style of the other.

Midjourney Style Reference Examples

So, enough discussion, let’s see Midjourney’s new feature in action.

With Other Midjourney Images

Using style reference with other Midjourney images is a game-changer for maintaining consistency. It's especially useful when, among the four image variations you get with each prompt, you find one whose style you really like but want to change the subject.

This also solves the issue of prompt gatekeeping in Midjourney communities. You don't have to ask for the creator's prompt anymore; you just need a copy of their image to use as a style reference.

Prompt: an ocean --v 6.0 --sref [Reference Image Link]

Prompt: logo design of a sneaker --v 6.0 --sref [Reference Image Link]

Prompt: a man looking at new york skyline --v 6.0 --sref [Reference Image Link]

Prompt: seamless pattern of autumn leaves --v 6.0 --sref [Reference Image Link]

Prompt: a whale --v 6.0 --sref [Reference Image Link]

Prompt: a tall lighthouse --v 6.0 --sref [Reference Image Link]

Prompt: poseidon --v 6.0 --sref [Reference Image Link]

Prompt: zendaya --v 6.0 --sref [Reference Image Link]

Using Non-AI Artwork

To me, this is the most practical use of style reference. If you find a niche photograph or artwork that you really like, you probably couldn't replicate its style perfectly with base Midjourney. Now, you can just use it as the style reference for your prompt.

Prompt: a dog --v 6.0 --sref [Reference Image Link]

Prompt: a girl tidying her bed --v 6.0 --sref [Reference Image Link]

Prompt: a well-dressed man in an empty road --v 6.0 --sref [Reference Image Link]

Wrapping Up

Midjourney has been on a roll since releasing V6 late last year. In the span of one month, they released a new version, rolled out an alpha web version, unveiled a new Niji model, cleaned up their web interface, and introduced V5 features to V6. Style reference is a great addition to their already robust functionality.

The way things are shaping up, it's hard to see Midjourney getting toppled without a significant improvement to other image generators such as DALL-E 3. It's also interesting that they're improving their model by listening to what their user base needs: text generation, better nuance, a web interface, and now consistency.

Truly, is there anything that Midjourney can't do?

Before you leave, check out our full review of Midjourney here, as well as our comparison articles with other image generation models. If you have access to Midjourney, give style referencing a try and let us know how it goes in the comments.

Written by John Angelo Yap
Hi, I'm Angelo. I'm currently an undergraduate student studying Software Engineering. Now, you might be wondering, what is a computer science student doing writing for Gold Penguin? I took up studying computer science because it was practical and because I was good at it. But, if I had the chance, I'd be writing for a career. Building worlds and adjectivizing nouns for no reason other than that they sound good. And that's why I'm here.