
Wan2.1 VACE Video-to-Art
This workflow transfers the style of a reference image (including characters and background) onto the motion of a source video.
Usage:
1. Load video1 and reference image.
2. Enter prompts.
3. Generate directly.
Functions:
1. Pose control: Use openpose to control the video.
2. Depth control: Use depth maps to control the video.
3. Auto prompts: Use florence2 to generate prompts from the image automatically; if this is not enabled, prompts must be entered manually.
Parameters:
1. Random seed: Changing the random seed will alter the generated result.
2. Resolution limit: The maximum width and height of the image/video will not exceed the set value, and the other side will scale proportionally.
3. Skip the first few frames of the video: Counted in the source video's own frames. For example, if the source video is 30 frames per second, setting this to 30 skips the first second.
4. Duration of the generated video: Set in frames according to the video model. For example, the official wan2.1 example generates 81 frames at 16 frames per second, a roughly 5-second video.
5. Positive prompts: Words describing the generated images/videos; base them on the uploaded image and enter a reasonable description.
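The resolution-limit behavior above is plain proportional scaling; as a minimal sketch (the helper name is illustrative, not an actual ComfyUI node):

```python
def fit_within_limit(width, height, limit):
    """Scale (width, height) so neither side exceeds `limit`,
    preserving aspect ratio. Illustrative helper, not a ComfyUI node."""
    scale = limit / max(width, height)
    if scale >= 1:  # already within the limit, leave unchanged
        return width, height
    return round(width * scale), round(height * scale)

print(fit_within_limit(1920, 1080, 960))  # -> (960, 540)
```

With a limit of 960, a 1920x1080 source scales to 960x540: the longer side hits the limit and the shorter side follows proportionally.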
Notes:
1. The **Load Video** node's force_rate sets the video's frame rate; entering 16 means 16 frames per second.
2. The **Load Video** node's frame_load_cap and skip_first_frames are for preview reference only; the actual skipped and generated frame counts are set by the green parameter buttons.
3. Save location
Videos are saved in the "output/video" folder.
4. Video acceleration
If your ComfyUI does not have triton and sageattn installed, delete the WanVideo Torch Compile Settings node in wan2.1 video-to-image and change attention_mode in WanVideo Model Loader to sdpa; otherwise, errors will occur.
If your video memory (VRAM) is too low to generate, set blocks_to_swap in WanVideo BlockSwap to 30~40. Generation will be slower, but it will complete.
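The frame arithmetic used throughout these notes (skip count, clip duration) can be sketched with the example values from this document:

```python
def skipped_seconds(skip_frames, source_fps):
    """Seconds of source video skipped, e.g. skipping 30 frames
    of a 30 fps video skips the first second."""
    return skip_frames / source_fps

def clip_duration(num_frames, fps):
    """Duration of the generated clip, e.g. the official wan2.1
    example: 81 frames at 16 fps is roughly 5 seconds."""
    return num_frames / fps

print(skipped_seconds(30, 30))  # -> 1.0
print(clip_duration(81, 16))    # -> 5.0625 (the "5-second" wan2.1 example)
```

Note that 81 frames at 16 fps is 5.0625 s, which the wan2.1 documentation rounds to 5 seconds.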
For help, see my video tutorial: https://space.bilibili.com/1876480181?spm_id_from=333.337.0.0
The VACE series lets you freely replace any actor in movie scenes.
You can also join Ashuo’s Knowledge Planet ID: 17259412
Direct link: https://t.zsxq.com/TIfva
Community provides:
1. Exclusive advanced workflows for ComfyUI.
2. Solutions to various complex problems.
3. Ashuo integrated package to resolve environment conflicts with 100 original workflows.
4. Latest international AI news.
5. Community Q&A support.