Multi-Character Scene with Midjourney’s Huge Character Consistency Update (--cref)
TLDR
Midjourney's highly anticipated character consistency update is now available. The new --cref function lets users generate characters with consistent details from a character reference image. It is particularly effective at stabilizing facial features but may not perfectly replicate hair or outfit details. It works best with characters created in Midjourney and is not designed for real people or photos, which may be distorted. The video demonstrates how to use --cref, including how to adjust character weight and how to use the 'vary region' feature to fine-tune details. The creator also shares a workaround for placing multiple characters in the same scene. The results are suitable for creating AI influencers or fashion models, but the video focuses on animation-style illustrations, which are popular among the audience. It concludes with a teaser for more advanced character consistency hacks.
Takeaways
- 🎉 Midjourney has released a new feature for character consistency, allowing users to generate characters with consistent details using a character reference image.
- 🔍 The cref function is similar to a regular image prompt but focuses on character traits, although it won't replicate exact details like dimples or logos.
- 💡 The feature is most effective with characters created by Midjourney and is not intended for real people or photos, which may be distorted.
- 👧 In tests with real-people results, the speaker found the cref feature most useful for stabilizing facial features, though less reliable for hair and outfit details.
- 🎨 The results are suitable for creating AI influencers or fashion models, but the focus of the video is on animation-style illustrations.
- 📝 It's recommended to note down important character features to maintain consistency as images are generated.
- 🌟 The first character, Lily, is described in detail to show how the cref function can be used to generate consistent character images.
- 🔗 The image URL for the cref function can be obtained by dragging the image to the prompt box, right-clicking the image to copy the address, or opening the image in a browser.
- ✅ The --cw parameter can be used to modify character references, with strengths ranging from 100 (all details) to 0 (just the face).
- 🧐 Lowering the character weight makes the image adhere more to the text prompt and less to the reference character's hair and outfit.
- 🖼️ The generated images include animals and butterflies from the original image, showing that the cref parameter improves consistency over a simple reference image.
- 🛠️ The 'vary region' feature can be used to edit images for perfection, such as changing clothing details or the character's gaze.
- 👥 To generate multiple characters in a scene, be more descriptive in the prompt and adjust the character reference for each character.
- 💻 For further detail editing, one can use Photoshop's generative tool or similar software to refine the generated images.
- 📈 The consistency of characters generated by AI is expected to improve over time as tools like Midjourney's cref function are perfected.
Q & A
What is the new feature introduced by Midjourney for character consistency?
-Midjourney has introduced a new feature called 'cref' that allows users to generate characters with consistent details using a character reference image.
What are the limitations of the cref function according to Midjourney?
-The precision of the cref function is limited: it will not copy exact details like dimples, freckles, or t-shirt logos. It works best with characters made with Midjourney and is not designed for real people or photos, which may get distorted.
Why might the creator prefer using real people results in their tests?
-The creator prefers results featuring real people in their tests because the cref feature is most useful for stabilizing facial features, which matters for animation or storybook standards, even though it may not perfectly replicate hair and outfit details.
For what type of content is the cref feature particularly useful?
-The cref feature is particularly useful for creating AI influencers or AI models for fashion brands where facial consistency is more critical.
How does one use the cref function in Midjourney?
-To use the cref function, one can type in a prompt followed by '--cref' and then insert the image URL of the character reference. The feature can be modified using the '--cw' parameter to adjust the character reference strength from 100 to 0.
How can one obtain the image URL for using with the cref function?
-The image URL can be obtained by dragging the image directly to the prompt box, right-clicking the image to get the image address, or opening the image in a browser and copying the link from there.
What is the purpose of adjusting the character weight ('--cw') in the cref function?
-Adjusting the character weight allows users to control the extent to which the character details are used as a reference. A strength of 100 uses all character details, while a strength of 0 focuses only on the face.
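For anyone scripting their prompts, the structure described above can be sketched as a small helper. This is an illustrative assumption, not part of Midjourney itself: the function name, the example URL, and the range validation are all hypothetical, while the `--cref`/`--cw` syntax and the 0–100 range come from the source.

```python
def build_cref_prompt(text, ref_url, cw=None):
    """Assemble a Midjourney prompt with a character reference.

    --cref points at the reference image; --cw (character weight,
    0-100) controls how much of the character is reused: 100 keeps
    all character details, while 0 focuses only on the face.
    This helper is a sketch, not an official Midjourney API.
    """
    parts = [text, f"--cref {ref_url}"]
    if cw is not None:
        if not 0 <= cw <= 100:
            raise ValueError("--cw must be between 0 and 100")
        parts.append(f"--cw {cw}")
    return " ".join(parts)


# Example: reuse only the face of the reference character.
prompt = build_cref_prompt(
    "Lily reading under a tree, storybook illustration",
    "https://example.com/lily.png",  # hypothetical image URL
    cw=0,
)
```

Lowering `cw` toward 0 makes the result follow the text prompt more closely and the reference character's hair and outfit less, as noted above.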
How can one edit the generated images to better match the original character?
-One can use the 'vary region' feature to select and edit specific areas of the generated image, such as changing clothing colors or details to more closely match the original character.
What is the process for generating multiple characters in the same scene?
-To generate multiple characters in the same scene, one must be more descriptive in the text prompt, specifying the details of each character and their actions. Additionally, the character reference should be switched for each character being generated.
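The multi-character workaround above boils down to keeping one descriptive scene prompt and switching the `--cref` URL for whichever character is being generated. A minimal sketch, assuming hypothetical reference URLs (the helper and names are illustrative, not Midjourney features):

```python
def multi_character_prompts(scene, characters):
    """One prompt per character: the full scene description stays the
    same, but the --cref URL is switched to the character currently
    being generated (a sketch of the workaround described above)."""
    return [f"{scene} --cref {ref_url}" for _name, ref_url in characters]


characters = [
    ("Lily", "https://example.com/lily.png"),  # hypothetical URLs
    ("Tom", "https://example.com/tom.png"),
]
scene = ("Lily in a yellow dress waves at Tom in a blue jacket, "
         "animation style illustration")
prompts = multi_character_prompts(scene, characters)
```

Each prompt is then run (or applied via 'vary region') with the matching character's reference, so both characters stay consistent.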
How can one further refine the details of the generated images?
-One can further refine the details by using the 'vary region' feature to edit specific areas such as clothing, eye gaze, or any other unsatisfactory details. Alternatively, Photoshop's generative tool can be used for more complex edits.
What is the creator's opinion on the consistency of the characters generated using Midjourney's cref function?
-The creator acknowledges that while the consistency is not perfect, it is still better than using just a reference image. They also express optimism that the tools will improve and offer better consistency in the future.
What additional resource does the creator recommend for those serious about creating consistent characters using AI?
-The creator recommends checking out a video for 'ultimate Character Consistency Hacks' for those who are serious about creating the most consistent characters using AI.
Outlines
🎨 Midjourney's Character Consistency Feature Overview
Midjourney has introduced a new feature that enables character consistency using a character reference image. The feature prioritizes character traits over other elements. While it is not designed for real people or photos, it proves particularly effective at stabilizing facial features. The video demonstrates how to use this feature and provides a workaround for placing multiple characters in a single scene. The process involves using the --cref function in Midjourney and noting down key character features to maintain consistency. The video focuses on animation-style illustration, the audience's preferred style. The character 'Lily' is used as an example, with her features described in detail. The video also covers how to generate images on Discord and the Alpha website, and how to adjust the strength of character references with the --cw parameter.
🖌️ Refining Character Details with Midjourney's Tools
The video script outlines a method for refining character details to more closely match the original character design. It introduces the 'vary region' feature for editing specific areas of the generated image: upscale the image, select the area to edit, and use the --cref link with a simple prompt description to achieve the desired result. The script also covers adding a second character to the scene, emphasizing that a more descriptive prompt is needed to ensure both characters are generated. The video demonstrates how to switch character references and fine-tune details with the 'vary region' feature, and mentions that Photoshop's generative tool can be used for further detail editing. It concludes with a series of images generated entirely in Midjourney, inviting viewers to share their thoughts on the characters' consistency and promoting another video with ultimate character consistency hacks.
Keywords
💡Character Consistency
💡Midjourney
💡Character Reference Image
💡Cref Function
💡Text Prompt
💡Discord Integration
💡Image URL
💡Character Weight
💡Vary Region
💡Upscaling
💡AI Influencers
Highlights
Midjourney introduces a new feature for character consistency, allowing the generation of characters with consistent details using a reference image.
The character reference image (cref) function focuses on character traits and is best used with characters created by Midjourney.
The precision of the cref technique is limited and does not replicate exact details like dimples or t-shirt logos.
The feature is not designed for real people or photos, which may be distorted like regular image prompts.
The presenter found the cref feature more useful for stabilizing facial features rather than hair and outfit details.
The results are suitable for creating AI influencers or fashion brand models.
The video focuses on animation-style illustration, the preferred style of most viewers.
The presenter recommends noting down important character features to maintain consistency throughout the image generation process.
An example character, Lily, is described with specific traits to demonstrate the cref function.
The generation process on Discord is outlined, including how to use the cref function with a text prompt and image URL.
The character weight (--cw) can be adjusted from 100 to 0 to modify the emphasis on character details.
Lower character weight results in images that adhere more to the text prompt and less to the reference character's hair and outfit.
Generated images include animals and butterflies from the original image, showing some level of detail consistency.
The 'vary region' feature can be used to edit images and achieve closer matches to the original character.
Adding a second character to the scene requires a more descriptive prompt and switching the character reference.
The presenter demonstrates how to fine-tune details using the vary region feature and a more specific text prompt.
Photoshop's generative tool can be used for further detail editing without wasting time in the generation process.
The presenter shares generated images created entirely in Midjourney and invites feedback on character consistency.
An upcoming video will provide ultimate Character Consistency Hacks for those serious about creating consistent characters using AI.