
Wan2.1 VACE Video Transformation
This workflow mimics the motion of the input video while preserving the style of the reference image (including characters and background).
Usage:
1. Load Video1 and the reference image.
2. Input prompts.
3. Generate.
Functions:
1. Pose Control: use OpenPose skeletons to drive the video.
2. Depth Control: use depth maps to drive the video.
3. Automatic Prompts: use Florence-2 to generate a prompt from the image (reverse prompting). If this is not enabled, you must enter prompts manually.
Parameters:
1. Random Seed: changing the seed changes the generated result.
2. Resolution Limit: neither the width nor the height of images/videos will exceed this value; the other side scales proportionally.
3. Skip Initial Video Frames: counted in frames of the original video. For example, if the source video runs at 30 frames per second, setting this to 30 skips the first second.
4. Duration of Generated Video: set according to the video model. For example, the official wan2.1 example generates 81 frames at 16 frames per second, i.e. approximately a 5-second video.
5. Positive Prompts: describe the uploaded image accurately to guide the generated image/video.
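The frame arithmetic behind parameters 3 and 4 can be sketched as below. The numbers come from the examples in this note; the function names are illustrative, not actual node parameters:

```python
# Sketch of the frame/second arithmetic used by the parameters above.

def frames_to_skip(source_fps: int, seconds_to_skip: float) -> int:
    """Skip Initial Video Frames is counted in frames of the source video."""
    return round(source_fps * seconds_to_skip)

def output_duration(frame_count: int, output_fps: int) -> float:
    """Duration of the generated clip, in seconds."""
    return frame_count / output_fps

# A 30 fps source: skipping the first second means skipping 30 frames.
print(frames_to_skip(30, 1.0))   # 30

# The official wan2.1 example: 81 frames at 16 fps is about 5 seconds.
print(output_duration(81, 16))   # 5.0625
```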
Note:
1. The **Load Video** node's force_rate sets the video's frames per second; entering 16 means 16 fps.
2. The **Load Video** node's frame_load_cap and skip_first_frames affect only the preview. The actual skipped frames and generated frame count are controlled by the green parameter buttons.
3. Save Location
Videos will be saved to the "output/videos" folder.
4. Video Acceleration
If your ComfyUI does not have Triton and SageAttention installed, delete the WanVideo Torch Compile Settings node in the wan2.1 image-to-video section and change the WanVideo Model Loader's attention_mode to sdpa; otherwise, errors will occur.
If your VRAM is insufficient for generation, set blocks_to_swap to 30-40 in the WanVideo BlockSwap node. Generation will be slower, but it will complete.
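When force_rate (16) is lower than the source frame rate (say 30 fps), the loader has to drop frames. One common approach is nearest-timestamp selection; the sketch below is a simplified illustration of that idea, not the Load Video node's actual code:

```python
# Hypothetical sketch of frame selection when resampling a video's frame rate.

def resample_indices(src_fps: float, force_rate: float, n_out: int) -> list[int]:
    """Pick source-frame indices for an output clip at force_rate fps,
    choosing the source frame nearest each output timestamp."""
    return [round(i * src_fps / force_rate) for i in range(n_out)]

# A 30 fps source resampled to 16 fps keeps roughly every other frame.
print(resample_indices(30, 16, 8))  # [0, 2, 4, 6, 8, 9, 11, 13]
```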
Use this workflow together with the video guide: https://space.bilibili.com/1876480181?spm_id_from=333.337.0.0
VACE series: Freely replace any actor in movie scenes.
You can also join Ashuo's membership group. WeChat: yumengashuo
The community provides:
1. Exclusive advanced workflows for ComfyUI.
2. Solutions to various difficult issues.
3. Ashuo's integration package, which resolves environment conflicts, plus 100 original workflows.
4. The latest international AI news.
5. Community Q&A support.