Multi-Character Scene with Midjourney’s Huge Character Consistency Update (--cref)

Mia Meow
17 Mar 2024 · 08:30

TLDR: Midjourney's highly anticipated character consistency feature is now available. The new --cref function lets users generate characters with consistent details from a character reference image. It is particularly effective at stabilizing facial features, though it may not perfectly replicate hair or outfit details. It works best with characters created by Midjourney and is not designed for real people or photos, which may be distorted. The video demonstrates how to use the --cref function, including how to adjust character weight and how to fine-tune details with the 'vary region' feature, and shares a workaround for placing multiple characters in the same scene. The results are suitable for creating AI influencers or fashion models, but the video focuses on animation-style illustrations, which are popular with the audience. It concludes with a teaser for more advanced character consistency hacks.

Takeaways

  • 🎉 Midjourney has released a new feature for character consistency, allowing users to generate characters with consistent details using a character reference image.
  • 🔍 The cref function is similar to a regular image prompt but focuses on character traits, although it won't replicate exact details like dimples or logos.
  • 💡 The feature is most effective with characters created by Midjourney and is not intended for real people or photos, which may be distorted.
  • 👧 In the speaker's tests, the results with real people were preferred: the cref feature stabilized facial features well, though it did not reliably reproduce hair and outfit details.
  • 🎨 The results are suitable for creating AI influencers or fashion models, but the focus of the video is on animation-style illustrations.
  • 📝 It's recommended to note down important character features to maintain consistency as images are generated.
  • 🌟 The first character, Lily, is described in detail to show how the cref function can be used to generate consistent character images.
  • 🔗 The image URL for the cref function can be obtained by dragging the image to the prompt box, right-clicking the image to copy the address, or opening the image in a browser.
  • ✅ The --cw parameter can be used to modify character references, with strengths ranging from 100 (all details) to 0 (just the face).
  • 🧐 Lowering the character weight makes the image adhere more to the text prompt and less to the reference character's hair and outfit.
  • 🖼️ The generated images include animals and butterflies from the original image, showing that the cref parameter improves consistency over a simple reference image.
  • 🛠️ The 'vary region' feature can be used to edit images for perfection, such as changing clothing details or the character's gaze.
  • 👥 To generate multiple characters in a scene, be more descriptive in the prompt and adjust the character reference for each character.
  • 💻 For further detail editing, one can use Photoshop's generative tool or similar software to refine the generated images.
  • 📈 The consistency of characters generated by AI is expected to improve over time as tools like Midjourney's cref function are perfected.

Q & A

  • What is the new feature introduced by Midjourney for character consistency?

    -Midjourney has introduced a new feature called 'cref' that allows users to generate characters with consistent details using a character reference image.

  • What are the limitations of the cref function according to Midjourney?

    -The precision of the cref function is limited: it will not copy exact details like dimples, freckles, or t-shirt logos. It works best with characters made with Midjourney and is not designed for real people or photos, which may get distorted.

  • Why might the creator prefer using real people results in their tests?

    -The creator prefers real people results because the cref feature is more useful in stabilizing facial features, which is important for animation or storybook standards, even though it may not perfectly replicate hair and outfit details.

  • For what type of content is the cref feature particularly useful?

    -The cref feature is particularly useful for creating AI influencers or AI models for fashion brands where facial consistency is more critical.

  • How does one use the cref function in Midjourney?

    -To use the cref function, one can type in a prompt followed by '--cref' and then insert the image URL of the character reference. The feature can be modified using the '--cw' parameter to adjust the character reference strength from 100 to 0.
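    As a sketch of the syntax on Discord (the prompt text and image URL below are illustrative placeholders, not taken from the video):

    ```
    /imagine prompt: Lily baking cookies in a cozy kitchen, storybook illustration --cref <character-image-url> --cw 100
    ```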

  • How can one obtain the image URL for using with the cref function?

    -The image URL can be obtained by dragging the image directly to the prompt box, right-clicking the image to get the image address, or opening the image in a browser and copying the link from there.

  • What is the purpose of adjusting the character weight ('--cw') in the cref function?

    -Adjusting the character weight allows users to control the extent to which the character details are used as a reference. A strength of 100 uses all character details, while a strength of 0 focuses only on the face.
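    A sketch of how the weight changes what is carried over (prompt text and URL are illustrative; the `#` lines are annotations, not part of the prompt):

    ```
    # --cw 100: face, hair, and outfit all follow the reference
    /imagine prompt: Lily decorating a Christmas tree --cref <character-image-url> --cw 100

    # --cw 0: only the face follows the reference; hair and outfit follow the text prompt
    /imagine prompt: Lily decorating a Christmas tree --cref <character-image-url> --cw 0
    ```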

  • How can one edit the generated images to better match the original character?

    -One can use the 'vary region' feature to select and edit specific areas of the generated image, such as changing clothing colors or details to more closely match the original character.

  • What is the process for generating multiple characters in the same scene?

    -To generate multiple characters in the same scene, one must be more descriptive in the text prompt, specifying the details of each character and their actions. Additionally, the character reference should be switched for each character being generated.
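    For example, a multi-character prompt might look like the following (the character details here are illustrative; per the video's workaround, you generate with one character's reference, then use 'vary region' with the other character's --cref URL to replace the second figure):

    ```
    /imagine prompt: two children in a sunny park, Lily reading a book on the left, a boy in blue overalls flying a kite on the right, storybook illustration --cref <lily-image-url> --cw 100
    ```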

  • How can one further refine the details of the generated images?

    -One can further refine the details by using the 'vary region' feature to edit specific areas such as clothing, eye gaze, or any other unsatisfactory details. Alternatively, Photoshop's generative tool can be used for more complex edits.

  • What is the creator's opinion on the consistency of the characters generated using Midjourney's cref function?

    -The creator acknowledges that while the consistency is not perfect, it is still better than using just a reference image. They also express optimism that the tools will improve and offer better consistency in the future.

  • What additional resource does the creator recommend for those serious about creating consistent characters using AI?

    -The creator recommends checking out a video for 'ultimate Character Consistency Hacks' for those who are serious about creating the most consistent characters using AI.

Outlines

00:00

🎨 Midjourney's Character Consistency Feature Overview

Midjourney has introduced a new feature that allows for character consistency using a character reference image. The feature prioritizes character traits over other elements. While it is not designed for real people or photos, it proves particularly effective at stabilizing facial features. The video demonstrates how to use this feature and provides a workaround for placing multiple characters in a single scene. The process involves using the --cref function in Midjourney and noting down key character features to maintain consistency. The video focuses on animation-style illustration, the preferred style of the audience. The character 'Lily' is used as an example, with detailed features described. The video also covers how to generate images on Discord and the Alpha website, and how to adjust the character reference strength using the --cw parameter.

05:01

🖌️ Refining Character Details with Midjourney's Tools

The video script outlines a method for refining character details to achieve a closer match to the original character design. It introduces the 'vary region' feature for editing specific areas of the generated image. The process involves upscaling the image, selecting the area to edit, and using the 'cref' link with a simple prompt description to achieve the desired result. The script also discusses adding a second character to the scene, emphasizing the need for a more descriptive prompt to ensure both characters are generated. The video demonstrates how to switch character references and fine-tune details using the 'vary region' feature. It also mentions the possibility of using Photoshop's generative tool for further detail editing. The video concludes with a series of images generated entirely in Midjourney, inviting viewers to share their thoughts on the consistency of the characters and promoting another video with ultimate character consistency hacks.

Mindmap

Multi-Character Scene Generation with Midjourney's Character Consistency Update

  • Overview of Midjourney's cref Feature
    - Introduction to Midjourney's cref function
    - Comparison with regular image prompts
    - Limitations of precision
    - Optimal use with Midjourney-created characters
    - Potential distortion of real people or photos
  • Personal Experience and Preferences
    - Preference for real people results
    - Utility in stabilizing facial features
    - Challenges with hair and outfit consistency
    - Suitability for AI influencers and fashion models
  • Relevance to Animation and Storytelling
    - Focus on animation-style illustration
    - Importance of character consistency in storytelling
  • Character Creation Process
    - Using a Midjourney-generated image as a reference
    - Noting down important character features
    - Creating a character: Lily
    - Describing Lily's appearance and outfit
  • Image Generation Techniques
    - Generating the same character in different actions
    - Using Discord for Midjourney access
    - Prompting techniques for character generation
    - Using --cref parameter with image URL
    - Adjusting character weight with --cw
  • Character Weight and Image Resemblance
    - Character weight impact on image resemblance
    - Adherence to text prompt over character details
    - Increasing 'Christmasy' theme with reduced character weight
  • Generation Results and Consistency
    - Capturing overall style and facial features
    - Inconsistency in clothing details
    - Inclusion of animals and butterflies
    - Better consistency with cref parameter
  • Refining and Upscaling Generated Images
    - Using vary region to edit image details
    - Changing suspenders color to match original
    - Upscaling and adding a second character
    - Descriptive prompts for multi-character scenes
  • Final Touches and Editing
    - Adding descriptive details for clothing and accessories
    - Using vary region for fine-tuning
    - Alternative editing with Photoshop's generative tool
  • Conclusion and Future Outlook
    - Generated images consistency assessment
    - Expectation of future improvements
    - Sharing of character consistency hacks

Keywords

💡Character Consistency

Character consistency refers to the uniformity and continuity of a character's traits, appearance, and behavior throughout various instances of a story or media presentation. In the context of the video, it is the ability of Midjourney's AI to generate characters with consistent details across different images, which is crucial for creating a coherent narrative or visual representation in animation or storybooks.

💡Midjourney

Midjourney is the name of the AI tool discussed in the video that specializes in generating images based on textual prompts. It is highlighted for its new feature that focuses on character consistency, which is particularly useful for creating a series of images with the same characters maintaining their unique attributes.

💡Character Reference Image

A character reference image is a specific image used to guide the AI in generating characters with similar traits. It serves as a visual template for the character's features, which the AI then tries to replicate in the generated images. In the video, the creator uses a character reference image from Midjourney to ensure that the generated characters have consistent details.

💡Cref Function

The cref function is a feature within Midjourney that allows for the generation of images with a focus on character traits. It is used to maintain consistency in the characters' details across different images. The video demonstrates how to use the cref function to generate characters that closely resemble a provided character reference image.

💡Text Prompt

A text prompt is a descriptive input provided to the AI to guide the generation of an image. It includes details about the scene, actions, and style desired in the final image. In the video, text prompts are used in conjunction with the cref function to create images of characters performing specific actions within a particular artistic style.

💡Discord Integration

Discord is the communication platform through which Midjourney's image-generation bot is accessed. The video describes how to generate images with Midjourney via Discord, which may be more accessible to users who have not yet gained access to the Midjourney Alpha website.

💡Image URL

An image URL is the web address of a specific image that can be used to access and reference that image online. In the context of the video, the image URL of a character reference is inserted after the text prompt in the Discord command to guide the AI in generating images with character consistency.

💡Character Weight

Character weight is a parameter in Midjourney's cref function that determines the influence of the character reference on the generated image. It ranges from 100, where all character details are used as a reference, to 0, where only the facial features are considered. The video illustrates how adjusting the character weight can lead to images that are more or less similar to the reference character.

💡Vary Region

Vary region is a feature within Midjourney that allows users to edit specific areas of a generated image to refine the details. The video demonstrates using vary region to change elements such as the color of a character's suspenders or to adjust clothing details to better match the original character reference.

💡Upscaling

Upscaling is the process of increasing the resolution or size of an image without losing quality. In the video, upscaling is used to enlarge the generated images for closer inspection and further editing. It is part of the process to achieve a higher quality final image that closely resembles the desired character design.

💡AI Influencers

AI influencers are virtual characters created by AI that can be used for various purposes, such as promoting brands or products on social media. The video mentions that the character consistency feature of Midjourney is particularly useful for creating AI influencers or AI models for fashion brands, where a consistent appearance is important for brand recognition.

Highlights

Midjourney introduces a new feature for character consistency, allowing the generation of characters with consistent details using a reference image.

The character reference image (cref) function focuses on character traits and is best used with characters created by Midjourney.

The precision of the cref technique is limited and does not replicate exact details like dimples or t-shirt logos.

The feature is not designed for real people or photos, which may be distorted like regular image prompts.

The presenter found the cref feature more useful for stabilizing facial features rather than hair and outfit details.

The results are suitable for creating AI influencers or fashion brand models.

The video focuses on animation-style illustration, which is the preferred style for most viewers.

The presenter recommends noting down important character features to maintain consistency throughout the image generation process.

An example character, Lily, is described with specific traits to demonstrate the cref function.

The generation process on Discord is outlined, including how to use the cref function with a text prompt and image URL.

The character weight (--cw) can be adjusted from 100 to 0 to modify the emphasis on character details.

Lower character weight results in images that adhere more to the text prompt and less to the reference character's hair and outfit.

Generated images include animals and butterflies from the original image, showing some level of detail consistency.

The 'vary region' feature can be used to edit images and achieve closer matches to the original character.

Adding a second character to the scene requires a more descriptive prompt and switching the character reference.

The presenter demonstrates how to fine-tune details using the vary region feature and a more specific text prompt.

Photoshop's generative tool can be used for further detail editing, avoiding additional generation passes in Midjourney.

The presenter shares generated images created entirely in Midjourney and invites feedback on character consistency.

An upcoming video will provide ultimate Character Consistency Hacks for those serious about creating consistent characters using AI.