Wan VACE Controlnet Video
Invitation Code 【rh v1312】
https://www.runninghub.cn/?inviteCode=rh v1312
Use this code to get 1000 RH coins as a reward (enough to process over 20 videos or hundreds of images).

Use depth or pose maps to control video generation.

Usage 1:
1. Load the video.
2. Enable the control function.
3. Enter prompt.
4. Generate.

Usage 2:
1. Load the video.
2. Load the reference image.
3. Enable the control function.
4. Enter prompt.
5. Generate.

Features:
1. Pose control: converts the video into OpenPose human pose maps for control; suitable for portraits.
2. Depth control: converts the video into depth maps for control; suitable for scenes.

Parameters:
1. Random seed: changing the seed produces a different result from the same inputs.
2. Resolution limit: neither the width nor the height of the image/video will exceed this value; the longer side is capped and the other side scales proportionally.
3. Frames to skip at the start of the video: counted at the source video's original frame rate. For example, if the source video is 30 frames per second, setting this to 30 skips the first second.
4. Duration of the generated video: set according to the video model. For example, the official wan2.1 example generates 81 frames at 16 frames per second, about a 5-second video.
5. Positive prompt: guides the generated image/video; enter a reasonable description based on the uploaded content.
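The arithmetic behind the resolution, skip-frame, and duration parameters above can be sketched as follows. These helper functions are illustrative only (they are not part of ComfyUI or the workflow); only the numbers come from the examples above.

```python
# Illustrative sketches of the parameter arithmetic; function names are hypothetical.

def fit_to_limit(width, height, limit):
    """Scale (width, height) so neither side exceeds `limit`,
    preserving the aspect ratio (the resolution-limit behavior)."""
    scale = min(1.0, limit / max(width, height))
    return round(width * scale), round(height * scale)

def frames_to_skip(seconds, source_fps):
    """Frames to skip to drop `seconds` from the start of the source video,
    counted at the source's original frame rate."""
    return int(seconds * source_fps)

def clip_duration(num_frames, output_fps):
    """Length in seconds of a generated clip."""
    return num_frames / output_fps

# A 1920x1080 source with a resolution limit of 1024:
print(fit_to_limit(1920, 1080, 1024))  # (1024, 576)
# Skip the first second of a 30 fps source:
print(frames_to_skip(1, 30))           # 30
# The official wan2.1 example, 81 frames at 16 fps:
print(clip_duration(81, 16))           # 5.0625 (~5 seconds)
```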

Note:

1. The force_rate parameter in the **Load Video** node sets the video's frame rate. For example, enter 16 for 16 frames per second.

2. The frame_load_cap and skip_first_frames parameters in the **Load Video** node affect only the preview.
The actual skipped-frame and generated-frame counts are set by the green parameter buttons.

3. Save location:

Videos will be saved in the "output/videos" folder.

4. Video acceleration:

If triton and sageattn are not installed in your ComfyUI, delete the WanVideo Torch Compile Settings node from the wan2.1 video generation workflow
and change attention_mode in the WanVideo Model Loader node to sdpa.
Otherwise, the workflow will error out.

If your GPU memory is insufficient and generation fails, set blocks_to_swap in the WanVideo BlockSwap node to 30–40. Generation will be slower, but it will at least complete.


You can also join Ashuo's VIP group on WeChat: yumengashuo

The community provides:
1. Exclusive advanced workflows for ComfyUI.
2. Solutions to various problems.
3. Ashuo's integration package, which resolves environment conflicts, plus 100 original workflows.
4. The latest international AI news.
5. Community Q&A support.