Easily CLONE Any Art Style With A.I. (MidJourney, Runway ML, Stable Diffusion)

Casey Rickey
3 Jan 2023 · 08:38

TLDR: In this video, the presenter introduces three top methods for replicating any art style using AI: MidJourney, Runway ML, and Stable Diffusion. The video demonstrates the process of using each method to generate images of a zebra, lion, and cheetah in the presenter's abstract art style. The presenter advises viewers to use these techniques with caution, especially when replicating living artists' works, and to seek permission if intending to use the images for profit. The results from each method are showcased, with the presenter expressing a preference for the Runway ML images. The video concludes with an invitation for viewers to share their thoughts on which method worked best and to ask any questions they may have.

Takeaways

  • 🎨 **AI Art Replication**: The video discusses three methods to replicate any art style using AI, which can be used to generate images in the style of famous artists like Salvador Dali.
  • 🚀 **Top Three Methods**: The top three methods highlighted are MidJourney, Runway ML, and Stable Diffusion, each offering a unique approach to AI art style replication.
  • ⚠️ **Ethical Considerations**: It's important to use these AI techniques responsibly, especially when replicating the work of living artists. Always seek permission or use for experimental purposes only.
  • 🔗 **MidJourney Process**: To use MidJourney, join their Discord server, use the newcomers' rooms for experiments, and upload a photo of the style to emulate. Then use the /imagine command with a text prompt to generate images.
  • 📈 **Runway ML Custom Generator**: With Runway ML, you can train a model with 15-30 sample images of the desired style. After training, you can input prompts to generate art in that style.
  • 💰 **Costs Involved**: MidJourney offers a free trial, but for continued use, a subscription is required. Runway ML charges a fee for model training.
  • 📚 **Training Models**: Runway ML and Stable Diffusion are trained on a specific style by uploading relevant images; MidJourney instead uses an uploaded reference image directly in the prompt. New images are then generated from text prompts.
  • 🔧 **Stable Diffusion**: This method involves connecting to a Google Colab notebook, using a Hugging Face access token, and training a model with configurable training steps and text encoder steps.
  • 🖌️ **Creative Control**: Runway ML allows control over the number of image options generated, the size, resolution, and style of the outputs, as well as the influence of the prompt on the final image.
  • 📸 **Image Upload**: In Stable Diffusion, you upload an image to use as a base for style replication and can adjust the sampling steps and method for better results.
  • 📈 **Resolution and Quality**: The video mentions requesting high resolution (8K) in the prompt and appending the parameters --v 4 (model version 4) and --q 2 for higher-quality images in MidJourney.
  • 🤝 **Community Engagement**: The video encourages viewers to share their thoughts on which method worked best and to ask questions or request further details on the methods.

Q & A

  • What are the three methods mentioned in the transcript for replicating an art style using AI?

    -The three methods mentioned are using MidJourney, Runway ML, and Stable Diffusion.

  • What is the first step to use MidJourney for replicating an art style?

    -The first step is to go to midjourney.com, join their Discord server, and use their newcomers' rooms to experiment.

  • How does one use the MidJourney method to generate images?

    -Upload a photo of the style you want to emulate to Discord, copy its link, type '/imagine', paste the link, and then type a text prompt describing the photo and the style you want.
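
    For example, a complete command following this pattern might look like the line below. The link placeholder, wording, and parameters shown are illustrative, not quoted from the video:

    ```
    /imagine prompt: <pasted image link> an abstract painting of a zebra, bold shapes and vivid colors, 8k --v 4 --q 2
    ```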

  • What is the process for using Runway ML to replicate an art style?

    -Create an account on Runway, go to their AI magic tools, select the custom generator option, upload 15 to 30 sample images of the style you want to train, name your model, and pay to train the model. Once ready, type a prompt to generate images.

  • How much does it cost to train a model on Runway ML?

    -It costs ten dollars to train a model on Runway ML.

  • What is the Stable Diffusion method and how is it used?

    -Stable Diffusion is a method that involves connecting to a Google Colab notebook, creating an account on huggingface.co, and using a custom model to generate images in a specific style. You upload art images of the style you want to replicate, train the model, and then test it with an image and a prompt (see the code sketch after the last answer in this section).

  • What are the ethical considerations mentioned in the transcript when using these AI techniques?

    -The ethical considerations include applying these techniques to your own art style, seeking permission from living artists before replicating their work (particularly if the images will be used for profit), and otherwise using the technology for experimental purposes only.

  • What is the purpose of adding '--v 4' and '--q 2' at the end of the MidJourney prompt?

    -Adding '--v 4' tells MidJourney to use version 4 of its model, and '--q 2' produces a higher-quality image.

  • How many sample images does Runway recommend for training the model?

    -Runway recommends uploading 15 to 30 sample images of the style you want to train.

  • What is the role of the 'prompt weight' in Runway ML?

    -The 'prompt weight' tells Runway how much of your prompt to infuse into the output image.

  • How many steps are used for training the Stable Diffusion model per image uploaded?

    -A hundred steps per image uploaded are used for training the Stable Diffusion model.
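
    As a quick sanity check, that rule of thumb is a simple multiplication; the image count below is a hypothetical example:

    ```python
    # Rule of thumb from the video: ~100 training steps per uploaded image.
    num_style_images = 25   # hypothetical: within the 15-30 samples Runway also suggests
    steps_per_image = 100
    max_train_steps = num_style_images * steps_per_image
    print(max_train_steps)  # -> 2500 training steps for this run
    ```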

  • What are some factors that can affect the quality of the final image generated by Stable Diffusion?

    -Factors that can affect the quality include the number of sampling steps, the sampling method used (e.g., DDIM), the resolution set with the width and height sliders, and the CFG scale.
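
    Putting the answers above together, here is a minimal Python sketch of the testing step using Hugging Face's diffusers library. This is an assumption about what the Colab notebook does under the hood, not the video's exact code; the token, model path, filenames, and prompt are all placeholders:

    ```python
    # Minimal sketch of the Stable Diffusion testing step via Hugging Face's
    # `diffusers` library (an assumption: the video's Colab notebook may differ).
    import torch
    from PIL import Image
    from huggingface_hub import login
    from diffusers import StableDiffusionImg2ImgPipeline, DDIMScheduler

    login(token="hf_xxx")  # placeholder: the access token created on huggingface.co

    # Load the model fine-tuned on your own style images.
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "path/to/your-trained-model",  # placeholder for the training output
        torch_dtype=torch.float16,
    ).to("cuda")

    # The video reports that the DDIM sampling method works best.
    pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

    # The uploaded base image that the style will be applied to.
    init_image = Image.open("base_photo.png").convert("RGB")

    result = pipe(
        prompt="an abstract painting of a zebra in my style",
        image=init_image,
        num_inference_steps=50,  # sampling steps: more steps refine the output
        guidance_scale=7.5,      # CFG scale: raise it if the image lacks detail
        strength=0.75,           # how far the output may drift from the base image
    ).images[0]
    result.save("zebra_styled.png")
    ```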

Outlines

00:00

🎨 Exploring AI-Powered Art Style Replication

This paragraph introduces the viewer to the concept of using artificial intelligence to replicate any art style. The speaker shares their top three methods for achieving this: MidJourney, Runway ML, and Stable Diffusion. They emphasize the importance of respecting original artists' work and suggest using these techniques on one's own art style or for experimental purposes. The speaker plans to demonstrate the methods by generating images of a zebra, lion, and cheetah from models trained on their own abstract art style. Detailed instructions are provided for using MidJourney, including joining its Discord, uploading style photos, and crafting prompts for image generation. The paragraph concludes with the speaker's first set of results using MidJourney.

05:01

🚀 Methods for AI Art Style Generation: Runway ML and Stable Diffusion

The second paragraph covers the remaining two methods for replicating art styles with AI: Runway ML and Stable Diffusion. For Runway ML, the process involves creating an account, uploading sample images of the desired style, and paying a fee to train a model. The speaker provides a step-by-step guide for using Runway ML, including setting up the model, crafting prompts, and controlling the output options, and showcases the results with themed images of a zebra, lion, and cheetah. Moving on to Stable Diffusion, the speaker outlines the process of connecting to a Google Colab notebook, setting up an account on huggingface.co, and training a model on uploaded art images. The paragraph concludes with the speaker's Stable Diffusion results, again with themed images, along with the prompts used for inspiration. The speaker invites viewers to share their thoughts on which method worked best and offers to provide more in-depth information if needed.

Keywords

💡AI

Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. In the context of the video, AI is used to replicate and generate art styles, showcasing its powerful capabilities in the field of art and creativity.

💡Midjourney

Midjourney is one of the top three methods mentioned in the video for replicating art styles using AI. It is an image-generation service, accessed through Discord, that lets users generate images by uploading a style photo and describing the desired output, as demonstrated in the video with the creation of abstract paintings of animals.

💡Runway ML

Runway ML is another method highlighted in the video for achieving AI-generated art. It is a platform that lets users train a custom model on their own art style by uploading sample images. The trained model then generates new artwork in that style, as shown when the presenter created abstract paintings of a zebra, lion, and cheetah.

💡Stable Diffusion

Stable Diffusion is the third method discussed in the video for cloning art styles with AI. It involves connecting to a Google Colab notebook and using a custom model to generate images in a specific style. The presenter demonstrates this by training a model with a set of images and then using it to create stylized images of animals.

💡Discord

Discord is a communication platform used by Midjourney for users to interact and experiment with their AI image generation service. In the video, the presenter instructs viewers to join Midjourney's Discord server to use their AI tools, emphasizing the community aspect of the platform.

💡Abstract Art

Abstract art is a style of art that does not attempt to represent external reality but instead seeks to achieve its effect using shapes, forms, colors, and textures. The video's theme revolves around using AI to replicate this style, with the presenter using their own abstract art as a basis for the AI models.

💡High-Resolution

High-resolution refers to the clarity and detail of an image, typically measured in pixels. In the context of the video, the presenter specifies '8K' resolution when generating images, indicating a desire for high-quality outputs from the AI art generation process.

💡Google Colab

Google Colab is a cloud-hosted notebook environment that lets users write and execute Python code in the browser. It is used in the video to access and run the Stable Diffusion model for generating art in a specific style.

💡Huggingface.co

Hugging Face is a company that hosts machine learning models and datasets and provides tooling around them, originally focused on natural language processing (NLP). In the video, it is mentioned as the place to create an account and generate an access token, which is then used in the Google Colab notebook to access the Stable Diffusion model.

💡Sampling Steps

Sampling steps are the number of denoising iterations a diffusion model runs when generating an image. The presenter in the video uses a high number of sampling steps for better results when creating images with Stable Diffusion, indicating the importance of this parameter for detailed outputs.

💡CFG Scale

CFG (classifier-free guidance) scale is a parameter in image generation models like Stable Diffusion that controls how strongly the output follows the text prompt. The video suggests increasing the CFG scale if the output image lacks detail, showing how fine-tuning this parameter can affect the final result.

Highlights

Three top methods for replicating any art style using AI are presented: MidJourney, Runway ML, and Stable Diffusion.

A disclaimer is provided to use these techniques responsibly and respectfully towards original artists.

MidJourney offers a free trial to generate images and requires a Discord account.

To use MidJourney, upload a photo of the desired style, copy its link, and pass it to the /imagine command along with a text prompt.

Runway ML allows training a custom model with 15-30 sample images of the desired style.

Runway ML charges a fee to train the model and provides options to control the output.

Stable Diffusion requires connecting to a Google Colab notebook and using a Hugging Face access token for model training.

For Stable Diffusion, upload art images of the style to replicate and optionally provide captions and concept images.

Training steps and text encoder steps can be adjusted in Stable Diffusion for better results.

A high number of sampling steps and the DDIM sampling method work best for Stable Diffusion.

CFG scale can be increased in Stable Diffusion if the image lacks detail.

The author generated images of a zebra, lion, and cheetah using their own abstract art style for comparison.

Different prompts were used for each animal to demonstrate the versatility of the AI replication methods.

The results of the replication methods are showcased with the corresponding prompts used.

The author personally prefers the results from Runway ML.

Viewers are encouraged to share their thoughts on which method worked best in the comments.

The video invites questions and further discussion on the presented AI art replication methods.

A call to action is made to like and subscribe for more similar content.