AnimateDiff ControlNet Animation v1.0 [ComfyUI]
TLDR: This tutorial outlines a workflow for creating animations with AnimateDiff, ControlNet, and ComfyUI. It guides users through importing a reference video, downscaling it and exporting it as a JPEG sequence, and setting up the ComfyUI workspace with the necessary extensions. The process involves rendering ControlNet passes, organizing the images, and wiring up the various input and control nodes. The summary covers selecting an animation style, setting dimensions, and testing the animation with different models. It concludes with tips on rendering the final animation, fixing facial issues with Automatic1111, and upscaling the images for enhanced quality.
Takeaways
- 🎨 Use AnimateDiff and ComfyUI for a streamlined animation workflow.
- 📁 Download the JSON files from the description and drag them into the ComfyUI workspace.
- 📹 Import a dance video reference into After Effects and create a new composition at a downscaled resolution.
- 🖼️ Export the video as a JPEG image sequence for the initial ControlNet passes.
- 🌐 Use the Load Images From Directory node to import the images into ComfyUI for ControlNet creation.
- 🔍 Two ControlNet passes are needed, Soft Edge and Open Pose; save and organize them with appropriate filename prefixes.
- 🚦 Test the images by rendering a short sequence and fix any issues as needed (refer to part two of the tutorial).
- 🛠️ Set up the animation workflow with the input nodes (green), ControlNet units (purple), and other necessary settings.
- 🖼️ Load the ControlNet images into the purple nodes and select the desired animation style (realistic, anime, or cartoon).
- 🔄 Adjust the batch range and skip frames for rendering, taking your PC's capacity into account.
- 🎭 After rendering, fix facial issues using the Automatic1111 img2img tab and detailer extensions.
- 🔄 Sequence and render the final animation in After Effects, applying color corrections and adjustments as needed.
Q & A
What software is used to create the animation mentioned in the script?
-The animation is created with a combination of AnimateDiff and ComfyUI.
How can users access the JSON files needed for the animation?
-Users can download the JSON files from the description provided below the video.
What is the purpose of the ComfyUI extensions?
-The ComfyUI extensions are required for this workflow and should be installed before proceeding.
What is the first step in creating the animation?
-The first step is to drag and drop the reference video into After Effects and create a new composition with the video downsized to a smaller resolution.
What resolution should the video be exported as when creating a JPEG image sequence?
-The video should be exported as a JPEG image sequence at a resolution between 480p and 720p.
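As a rough illustration of the downscale step, here is a small sketch (the function name and defaults are assumptions, not part of the tutorial) that computes a target size in the 480p–720p range while preserving the source aspect ratio:

```python
def downscale_resolution(src_w, src_h, target_h=720):
    """Scale a source resolution down to target_h (e.g. 480-720),
    preserving the aspect ratio and rounding the width to an even
    number, which most image/video tools expect."""
    scale = target_h / src_h
    w = int(round(src_w * scale / 2) * 2)  # force an even width
    return w, target_h

# e.g. a 1080p reference video downscaled for the ControlNet passes
print(downscale_resolution(1920, 1080, 720))  # -> (1280, 720)
```

The same idea applies whether the export is done from After Effects or any other tool; the point is only to keep the width/height ratio of the reference video intact.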
How many ControlNet passes are needed in ComfyUI?
-Two ControlNet passes are needed: Soft Edge and Open Pose.
What is the purpose of the ControlNet passes?
-The ControlNet passes are applied to the animation, ensuring that the rendered images are in sequence and correctly organized.
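The organization step, sorting the rendered pass images into one folder per pass by their filename prefix, can be sketched as follows. The prefixes and folder names here are assumptions for illustration, not the exact ones from the tutorial:

```python
from pathlib import Path
import shutil

def organize_passes(source_dir, prefixes=("softedge", "openpose")):
    """Move rendered ControlNet pass images into one folder per pass,
    matching each file by its filename prefix (hypothetical names)."""
    source = Path(source_dir)
    for prefix in prefixes:
        dest = source / prefix
        dest.mkdir(exist_ok=True)
        for img in sorted(source.glob(f"{prefix}_*.png")):
            shutil.move(str(img), str(dest / img.name))
```

Saving each pass with a distinct prefix up front is what makes this kind of automatic sorting possible.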
What are the green nodes in the animation workflow?
-The green nodes in the animation workflow are input nodes, including the model loader node and the resolution nodes.
How can users adjust the batch range and skip frames for rendering?
-Users can adjust the batch range and skip frames based on how much their PC can handle and the number of images they have. For example, 100 images can be split into two batches of 50: render the first 50 frames, then skip 50 frames for the second batch.
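The batch arithmetic above can be sketched in a few lines; the dictionary keys below mirror the Skip Frames / Batch Range idea but are hypothetical names, not actual ComfyUI node fields:

```python
def plan_batches(total_frames, batch_size):
    """Split a frame sequence into (skip_frames, batch_range) pairs,
    so each batch picks up where the previous one stopped."""
    plans = []
    for start in range(0, total_frames, batch_size):
        plans.append({"skip_frames": start,
                      "batch_range": min(batch_size, total_frames - start)})
    return plans

# 100 images split into two batches of 50:
print(plan_batches(100, 50))
# -> [{'skip_frames': 0, 'batch_range': 50},
#     {'skip_frames': 50, 'batch_range': 50}]
```

A final batch smaller than `batch_size` is handled by the `min(...)`, so e.g. 120 frames at a batch size of 50 yields batches of 50, 50, and 20.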
What is the purpose of the RTX 3070 Ti laptop GPU mentioned in the script?
-The RTX 3070 Ti laptop GPU is used as an example: at the given resolution it can handle a maximum of 150 frames. This capacity will vary depending on the user's GPU and render resolution.
How are the final images improved after rendering?
-After rendering, the final images are improved in the Automatic1111 img2img tab. The frame with the most visible face is tested first, and the autodetect button is used to set the image dimensions. Negative embeddings and the ADetailer extension are used for better results, and the images are then upscaled with Topaz Gigapixel AI.
How can users share their creations made with this workflow?
-Users can share their works by forwarding them to the creator on Discord or mentioning it in the comments. The creator's Discord username is Jerry Davos.
Outlines
🎨 Animation Workflow Setup
This paragraph outlines the initial steps for setting up an animation workflow using ComfyUI and AnimateDiff. It involves downloading the JSON files, installing the necessary extensions, and preparing the workspace with a reference video. The process includes making a new composition in After Effects, downscaling the video, exporting it as a JPEG image sequence, and organizing the files for further use. The paragraph also addresses common rendering issues and points to part two of the tutorial for solutions.
🖼️ Control Net Creation and Testing
The second paragraph delves into the creation of the ControlNet passes, which are essential to the animation process. It explains the need for Soft Edge and Open Pose passes and the naming conventions used to keep them organized. It details capping the image count for testing, rendering all frames, and sorting the results into folders. It also introduces a ControlNet passes JSON file for ease of use and describes the input and control nodes of the animation workflow, emphasizing efficient render times.
🎭 Animation Style Selection and Rendering
This part focuses on selecting the desired animation style (realistic, anime, or cartoon) and setting up the model loader node accordingly. It discusses the importance of matching the dimensions of the original video and managing the input images based on the PC's processing capacity. The paragraph provides a strategy for rendering the animation in batches, adjusting the batch range and skip frames as necessary. It also touches on the use of prompts for the animation and prepares the user for potential face-rendering issues that will be addressed later.
🌟 Final Touches and Bug Fixes
The final paragraph discusses rendering the final animation, addressing potential face-rendering issues and offering solutions in the Automatic1111 img2img tab. It suggests using a detailer with higher denoising strength to fix disproportionate faces and upscaling the images for better quality. The paragraph concludes with the speaker's enthusiasm for seeing the user's creations and encourages users to share their work or reach out for support. It also mentions a follow-up video that will address common bugs and issues.
Keywords
💡AnimateDiff
💡ComfyUI
💡ControlNet
💡After Effects
💡JPEG Image Sequence
💡Soft Edge
💡Open Pose
💡KSampler Node
💡Batch Range
💡Negative Prompts
💡Autodetect
Highlights
The animation was created using AnimateDiff and ComfyUI, together with Automatic1111.
JSON files for the animation can be downloaded from the description below.
To use the downloaded files, drag and drop them into the ComfyUI workspace.
The ComfyUI extensions are required before using this workflow.
A dance video by Helen Ping is used for reference in the tutorial.
Create a new composition in After Effects and downscale the video to a resolution between 480p to 720p.
Export the video as a JPEG image sequence for initial control net passes.
Images are imported using the Load Images From Directory node in ComfyUI.
Two passes are needed for the reference video: Soft Edge and Open Pose.
Save the passes with filename prefixes for better organization.
Cap the images to 10 to test if they are rendering in sequence.
Render all frames if the test passes are successful.
Create two new folders for the soft Edge and open pose images.
A ControlNet passes JSON file is included for easy drag and drop into the workspace.
Select the model style (realistic, anime, or cartoon) and the appropriate SD model.
Set the dimensions to match the width and height ratio of the reference video.
Use the skip frames and batch range node to manage the rendering process.
ControlNet units apply the ControlNet passes without extra processing.
Load the ControlNet images into the purple nodes for the animation.
Test the animation with 10 frames in the batch range for initial feedback.
Use simple positive prompts and set negative prompts with negative embeddings.
Copy and paste the directories of the Soft Edge and Open Pose pass images into their respective nodes.
The final animation is ready to render after testing and fixing any issues.
If faces are not looking good initially, they can be fixed later in the process.
Render the animation in batches according to the GPU's VRAM capacity.
Use the Automatic1111 img2img tab to fix the faces with the most visible issues.
After rendering, sequence all batches in After Effects and apply color corrections before final rendering.
The possibilities for creating artworks with this workflow are endless.
For any bugs encountered, refer to the notes inside the main JSON file or watch part two of the tutorial.