
Wan VACE Erase, Add Objects
Use depth or pose maps to control video generation
Usage 1:
1. Load video
2. Enable control function
3. Enter prompts
4. Generate directly
Usage 2:
1. Load video
2. Enable reference image function
3. Load reference image
4. Enable control function
5. Enter prompts
6. Generate directly
Features:
1. Recognize as rectangular mask: Generates a rectangular (bounding-box) mask around the objects you specify for removal (see the sketch after this list)
2. Precise mask recognition: Generates a precise mask that follows the outline of the objects you specify for removal
3. Enable reference image: Generates the corresponding objects based on the reference image you upload
4. Recognize subject and cut out: Only works on images with a clear subject; it may fail for full scenes
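The difference between the rectangular and precise modes is only in how far the mask extends. A minimal sketch (not part of the workflow, assuming the object mask is already a binary NumPy array from whatever segmentation step you use):

```python
import numpy as np

def rectangular_mask(precise_mask: np.ndarray) -> np.ndarray:
    """Expand a precise (per-pixel) mask to the bounding rectangle that covers it."""
    ys, xs = np.nonzero(precise_mask)
    rect = np.zeros_like(precise_mask)
    if len(ys) == 0:              # nothing detected -> empty mask
        return rect
    rect[ys.min():ys.max() + 1, xs.min():xs.max() + 1] = 1
    return rect

# Example: a small blob becomes a filled rectangle covering its extent.
precise = np.zeros((6, 6), dtype=np.uint8)
precise[2:4, 1:3] = 1
print(rectangular_mask(precise))
```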
Parameters:
1. Random seed: Changing the random seed changes the generation result
2. Resolution limit: Neither the width nor the height of the image/video will exceed this value; the longer side is capped and the other side scales proportionally
3. Skip the first few frames of the video: Counted in frames of the original video. For example, if the original video is 30 frames per second, setting this to 30 skips the first second
4. Duration of the generated video: Set according to the video model. For example, the official wan2.1 example generates 81 frames at 16 frames per second, which is roughly a 5-second video (see the arithmetic sketch after this list)
5. Positive prompts: Prompts that control the generated image/video. Enter a reasonable description based on the uploaded content
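A minimal sketch of the arithmetic behind the resolution-limit, skip-frames, and duration parameters; the function names here are illustrative, not node names:

```python
def limit_resolution(width: int, height: int, max_side: int) -> tuple[int, int]:
    """Scale so neither side exceeds max_side, keeping the aspect ratio."""
    scale = min(1.0, max_side / max(width, height))
    return round(width * scale), round(height * scale)

def seconds_skipped(skip_frames: int, source_fps: float) -> float:
    """The skip value is counted in frames of the original video."""
    return skip_frames / source_fps

def output_duration(num_frames: int, output_fps: float) -> float:
    """Duration of the generated clip in seconds."""
    return num_frames / output_fps

print(limit_resolution(1920, 1080, 832))  # -> (832, 468)
print(seconds_skipped(30, 30))            # -> 1.0 second skipped
print(output_duration(81, 16))            # -> 5.0625 s, i.e. ~5 s (wan2.1 example)
```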
Notes:
1. The **Load Video** node's `force_rate` determines the video's frames per second; entering 16 means 16 frames per second
2. The **Load Video** node's `frame_load_cap` and `skip_first_frames` are for preview reference only; the frames actually skipped and generated are controlled by the green parameter buttons
3. Save location: The video is saved in the `output/video` folder
4. Video acceleration: If your ComfyUI does not have Triton and SageAttn installed, remove the WanVideo Torch Compile Settings node from the wan2.1 video-generation group and change the WanVideo Model Loader's `attention_mode` to `sdpa`; otherwise errors will occur
5. Low VRAM: If you run out of VRAM and generation fails, set `blocks_to_swap` in the WanVideo BlockSwap node to 30~40. Generation will be slower but will complete (see the illustrative settings sketch below)
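Illustrative summary only, not an actual API; these are node widgets you set in the ComfyUI graph, written out as plain data for reference:

```python
# Assumed values: blocks_to_swap anywhere in the 30~40 range is per the note above.
fallback_settings = {
    "WanVideo Torch Compile Settings": "remove/bypass if Triton and SageAttn are not installed",
    "WanVideo Model Loader": {"attention_mode": "sdpa"},  # use sdpa when SageAttn is unavailable
    "WanVideo BlockSwap": {"blocks_to_swap": 32},         # lower VRAM use, slower generation
}
print(fallback_settings)
```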
You can also join Ashuo's membership group. WeChat: yumengashuo
Community provides:
1. Exclusive advanced workflows for ComfyUI
2. Troubleshooting various issues
3. Ashuo's integration package that resolves environment conflicts, plus 100 original workflows
4. Latest AI news from overseas
5. Community Q&A support