Midjourney's Amazing New Feature PLUS: Stable Video 1.1 from Stability AI!
TLDR
The video discusses a Midjourney update focused on style consistency in AI-generated images. It introduces a new feature that combines image prompting with style tuning, allowing users to create art by referencing multiple images. The process is demonstrated on the Midjourney Alpha website, which is now accessible to certain users. The video also explores combining two different images as style references and controlling the influence of each image. It touches on the feature's limitations, such as the inability to create consistent characters yet, and concludes by highlighting the feature's potential and its integration with other commands. It also briefly covers the opening of Stable Video by Stability AI, showcasing its capabilities and current limitations.
Takeaways
- 🚀 A Midjourney update introduces a new feature for style consistency in image generation.
- 🎨 Style references are used as the first step in the new consistent styles algorithm, combining image prompting with style tuning.
- 🌐 The Midjourney Alpha website is accessible to users who have generated over 5,000 images, with access for those with 1,000 images coming soon.
- 🔗 Users can add the `--sref` parameter with an image URL to create a new style based on the referenced image.
- 🖼️ The new feature allows for the blending of multiple style references, influencing the generated image based on the elements of the reference images.
- 📈 The influence of each image URL can be controlled through weight values.
- 📄 Detailed information on the new features and commands is available as a free PDF on Gumroad.
- 🎥 Stability AI has launched its own platform for Stable Video Diffusion, currently in beta and free to use.
- 📹 Stable Video Diffusion offers the option to start with an image or a text prompt and includes various camera-motion features.
- 🌐 The Stable Video platform lets users vote on generations from other users, adding a community-driven aspect to the process.
- 🎉 The creative AI space is progressing rapidly, with exciting developments and features being introduced regularly.
Q & A
What is the main focus of the Midjourney update discussed in the transcript?
-The main focus of the Midjourney update is style consistency: a new feature that blends image prompting with style tuning to create a new style based on provided image URLs and prompts.
How does the new style consistency feature work?
-The new feature works by using one or more image URLs alongside a prompt to generate an image that reflects the style of the referenced images. This is done by adding the `--sref` parameter with the image URL in the prompt bar of the Midjourney Alpha website.
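As a hypothetical illustration of the syntax described above (the URL is a placeholder, not one from the video), a style-referenced prompt might look like:

```
a cozy cabin in a snowy forest --sref https://example.com/reference-painting.png
```

The prompt text describes the content, while the image behind the `--sref` URL supplies only the style.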
What is the current access status for the mid Journey Alpha website?
-Access to the Midjourney Alpha website has been opened to users who have generated more than 5,000 images, and users who have generated 1,000 images are expected to gain access soon.
How can users who don't have access to the Midjourney Alpha website use the new feature?
-Users without access to the Midjourney Alpha website can use the new feature through commands available on Discord.
What is the difference between style referencing and image referencing?
-Style referencing is similar to image referencing but goes a step further: it blends the style of the referenced image with the prompt, producing an image based not just on the content but also on the style of the provided image URL.
How can the influence of each image URL be controlled in the style consistency feature?
-The influence of each image URL can be controlled by adjusting the weight of the image reference: each URL after `--sref` can be followed by a double colon and a number, and the overall strength of the style reference ranges from 0 to 1000.
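A hypothetical weighted prompt (placeholder URLs; the `::` weight notation and the `--sw` style-weight parameter with its 0–1000 range follow Midjourney's documented syntax) might look like:

```
a city street at dusk --sref https://example.com/ref1.png::2 https://example.com/ref2.png::1 --sw 500
```

Here the first reference contributes twice as strongly as the second, and `--sw 500` sets the overall style strength.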
What is the current limitation of the style consistency feature?
-The current limitation of the style consistency feature is that it does not yet support consistent characters; the Midjourney team is working on a command for this, expected to be `--cref`.
What is the significance of the free PDF available on Gumroad mentioned in the transcript?
-The free PDF available on Gumroad contains all the information discussed in the transcript, providing a comprehensive guide to the new style consistency feature, its usage, and the commands involved.
What is the current status of Stable Video from Stability AI?
-Stable Video from Stability AI is currently in beta and available for early access. The platform is built on the open-source Stable Video Diffusion 1.1 model, the underlying technology behind platforms like Leonardo Motion and PixVerse.
What features are available for users who have early access to Stable Video?
-Those with early access to Stable Video can start with an image or a text prompt and choose camera motions such as locking the camera, shaking it, tilting down, orbiting, panning, and zooming in and out. The advanced tab also offers settings for the number of steps and the overall motion strength.
How does the camera motion feature in Stable Video work?
-The camera motion feature lets users apply various types of motion to the generated video, such as locking the camera, shaking it, tilting down, orbiting, panning, and zooming in and out. These motions can be adjusted for intensity and can produce impressive visual effects, such as rotational orbits and character features that stay consistent while zooming.
Outlines
🎨 Introducing Midjourney's Style Consistency Feature
The paragraph discusses a new Midjourney feature focused on style consistency. The feature works by using image URLs along with a prompt to create a new style similar to the reference image, via the `--sref` parameter followed by the referenced image. The script highlights current limitations, such as the inability to maintain consistent characters, and mentions the upcoming `--cref` command. It also offers tips on using the new feature, including controlling the influence of each image URL and combining multiple images as style references. The paragraph concludes with information on a free PDF guide available on Gumroad and a brief mention of Stability AI's platform for Stable Video Diffusion.
📹 Exploring Stable Video Diffusion with Stability AI
This paragraph covers Stability AI's platform for Stable Video Diffusion, which is currently in beta. It outlines the two primary options for creating videos: starting with an image or a text prompt. It discusses the available camera motions and settings, such as locking the camera, shaking it, tilting, panning, and zooming, along with the experimental features and the option for users to vote on their preferred generations from other users. The script shows examples of generated videos, highlighting impressive rotational and zooming effects, and concludes by encouraging viewers to sign up for the beta to explore the creative possibilities of Stable Video Diffusion.
🌟 The Future of Creative AI and Closing Remarks
The final paragraph reflects on the rapid advancements in the creative AI space and expresses excitement for future developments. The speaker, Tim, shares his anticipation for the progress that could be seen by the end of the year, or even within the next month. He summarizes his experience with the new features in Midjourney and Stability AI's platform, emphasizing their potential for creative exploration, and closes with a thank-you to viewers and a sign-off.
Keywords
💡Midjourney Update
💡Style Consistency
💡Image Prompting
💡Style References
💡Stable Video
💡AI and Creative Tools
💡Community Feed
💡Discord
💡Style Influence
💡Gumroad
💡Beta Phase
Highlights
Introduction of a Midjourney update focusing on style consistency in AI-generated images.
Exploration of a new feature that combines image prompting with style tuning to create a new style based on provided image URLs.
The new feature is accessible through the Midjourney Alpha website, currently open to users who have generated over 5,000 images.
Demonstration of the `--sref` parameter to reference an image and create a new image in a similar style.
Comparison of the new style reference feature to traditional image referencing, showcasing its unique capabilities.
Example of blending two different images as style references to create a new, unique image.
Explanation of how to control the influence of each image URL through weight values.
Discussion on the limitations of using three style references, emphasizing the need for thematic coherence.
Information on the availability of a free PDF guide on Gumroad, with a note that donations are appreciated.
Introduction to Stability AI's platform for Stable Video Diffusion, currently in beta.
Overview of the options available for creating videos from images, including camera motion and zooming.
Showcase of the impressive capabilities of Stable Video Diffusion, including rotational and orbit effects.
Demonstration of text-to-video capabilities with various aspect ratios and styles.
The current limitations of Stable Video, acknowledging the absence of certain features due to its beta status.
Encouragement for users to sign up for early access to Stable Video to explore its potential.
Reflection on the rapid progress in the creative AI space and anticipation for future developments.