
Wan VACE Controlnet Video
Use depth or pose map to control video generation
Usage 1:
1. Load video
2. Enable control function
3. Input prompt
4. Generate
Usage 2:
1. Load video
2. Load reference image
3. Enable control function
4. Input prompt
5. Generate
Features:
1. Pose control: converts the video into an OpenPose human pose map to guide generation; best suited to portraits
2. Depth control: converts the video into a depth map to guide generation; best suited to scenes
Parameters:
1. Random seed: changing the seed changes the generated result
2. Resolution limit: neither the width nor the height of the image/video will exceed this value; the other side scales proportionally
3. Skip the first few frames of the video: specified in frames of the source video. For example, if the source video is 30 frames per second, entering 30 skips the first second
4. Duration of the generated video: set according to the video model. For example, the official wan2.1 example generates 81 frames at 16 frames per second, about a 5-second video
5. Positive prompt: text that steers the generated image/video; write a description that matches the uploaded content
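The resolution limit and the frame parameters above reduce to simple arithmetic. A minimal sketch (the function names are illustrative, not part of the actual workflow):

```python
def fit_resolution(width: int, height: int, max_side: int) -> tuple[int, int]:
    """Scale (width, height) so neither side exceeds max_side,
    preserving the aspect ratio (never upscales)."""
    scale = min(1.0, max_side / max(width, height))
    return round(width * scale), round(height * scale)

def skip_frames(source_fps: int, seconds_to_skip: int) -> int:
    """Value to enter for 'skip the first few frames', counted
    in frames of the source video."""
    return source_fps * seconds_to_skip

def duration_seconds(frame_count: int, fps: int) -> float:
    """Playback length of the generated clip."""
    return frame_count / fps

# A 1920x1080 source limited to 832 px on the longest side:
print(fit_resolution(1920, 1080, 832))   # (832, 468)
# Skip the first second of a 30 fps source video:
print(skip_frames(30, 1))                # 30
# The wan2.1 example: 81 frames at 16 fps:
print(duration_seconds(81, 16))          # 5.0625, i.e. about 5 seconds
```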
Notes:
1. The **Load video** node's force_rate sets the video's frame rate; enter 16 for 16 frames per second
2. The **Load video** node's frame_load_cap and skip_first_frames affect only the preview;
the frames actually skipped and the number of frames generated are controlled by the green parameter buttons
3. Save location
Videos are saved in the "output/video" folder
4. Video acceleration
If your ComfyUI does not have Triton and sageattn installed, delete the
WanVideo Torch Compile Settings node from the wan2.1 video generation workflow
and change the WanVideo Model Loader's attention_mode to sdpa;
otherwise errors will occur.
If your VRAM is too low to generate, set blocks_to_swap in WanVideo BlockSwap to 30~40. Generation will be slower, but it will complete
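One way to check whether Triton and sageattn are importable before deciding to keep the Torch Compile node or fall back to sdpa. This is a standalone sketch, not part of the workflow, and it assumes the attention_mode option names match the module names:

```python
import importlib.util

def has_module(name: str) -> bool:
    """True if the module can be imported in this environment."""
    return importlib.util.find_spec(name) is not None

# If either dependency is missing, delete WanVideo Torch Compile Settings
# and set the WanVideo Model Loader's attention_mode to "sdpa".
if has_module("triton") and has_module("sageattn"):
    attention_mode = "sageattn"
else:
    attention_mode = "sdpa"

print(attention_mode)
```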
You can also join Ashuo's Knowledge Planet, ID: 17259412
Direct link: https://t.zsxq.com/TIfva
The community provides:
1. Exclusive advanced ComfyUI workflows
2. Solutions to various problems
3. Ashuo's integration package, which resolves environment conflicts, plus 100 original workflows
4. The latest international AI news
5. Community Q&A support