Wan2.1 VACE video-to-image
Invitation code 【rh v1312】
https://www.runninghub.cn/?inviteCode=rh v1312
Use this code to get a reward of 1000 RH coins (roughly enough to process over 20 videos or hundreds of images).

This workflow preserves the style of the reference images (characters and backgrounds) and generates a video whose motion imitates the source video.


Usage:
1. Load video 1 and reference images
2. Enter prompts
3. Generate
Features:
1. Pose control: use OpenPose skeletons extracted from the source video to control the motion
2. Depth control: use depth maps extracted from the source video to control the motion
3. Automatic prompts: use Florence-2 to reverse-generate prompts from the reference image (see the sketch after this list). If not enabled, you will need to enter prompts manually.
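
For context, the automatic-prompt feature is an image-captioning step. The sketch below is a standalone illustration of the same idea using Florence-2 through the transformers remote-code API; the checkpoint name, task token, image path, and generation settings are assumptions, and the workflow's own node may do this differently.

```python
# Standalone sketch of image captioning with Florence-2 (an assumption about what
# the automatic-prompt step does; the workflow uses its own ComfyUI node).
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

MODEL_ID = "microsoft/Florence-2-large"          # assumed checkpoint
device = "cuda" if torch.cuda.is_available() else "cpu"

processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
    trust_remote_code=True,
).to(device)

image = Image.open("reference.png").convert("RGB")  # hypothetical reference image
task = "<MORE_DETAILED_CAPTION>"                    # Florence-2 captioning task token

inputs = processor(text=task, images=image, return_tensors="pt").to(device)
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"].to(model.dtype),
    max_new_tokens=256,
    num_beams=3,
)
caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(caption)  # use (or edit) this as the positive prompt when auto-prompting is off
```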

Parameters:
1. Random seed: changing the seed changes the generation result.
2. Resolution limit: neither the width nor the height of the images/video will exceed this value; the other dimension scales proportionally (see the sketch after this list).
3. Frames to skip at the start of the video: counted at the source video's original frame rate. For example, if the source video is 30 frames per second, setting this to 30 skips the first second.
4. Length of the generated video: set according to the video model. For example, the official Wan2.1 example generates 81 frames at 16 frames per second, which is about a 5-second video.
5. Positive prompt: the descriptive prompt that, together with the uploaded images, controls the generated images/video.
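
The arithmetic behind the resolution limit, the skipped frames, and the clip length is simple; the sketch below is standalone Python, not part of the workflow, and the concrete numbers (1920x1080 source, 832 limit, 30 fps source rate, 81 frames at 16 fps output) are illustrative assumptions.

```python
# Standalone arithmetic sketch; the concrete numbers are illustrative assumptions.

def fit_within_limit(width: int, height: int, limit: int) -> tuple[int, int]:
    """Scale (width, height) so neither side exceeds `limit`, keeping the aspect ratio."""
    scale = min(1.0, limit / max(width, height))
    return round(width * scale), round(height * scale)

# Resolution limit: a 1920x1080 source with an 832 limit becomes 832x468.
print(fit_within_limit(1920, 1080, 832))

# Skipped frames are counted at the source frame rate.
source_fps = 30      # assumed original frame rate
skip_frames = 30     # skips 30 / 30 = 1.0 s of the source
print(f"skipped: {skip_frames / source_fps:.2f} s")

# The generated length is counted at the model's output frame rate.
output_fps = 16      # Wan2.1 official example
gen_frames = 81
print(f"generated clip: {gen_frames / output_fps:.2f} s")  # about 5.06 s
```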

Notes:

1. The force_rate parameter in the **Load Video** node sets the frames per second of the loaded video; entering 16 means 16 frames per second. A rough sketch of the resampling follows.
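
As a simplification (the loader's actual selection logic may differ), a force_rate resample can be thought of as picking, for each output frame, the source frame nearest to its timestamp:

```python
# Simplified model of force_rate resampling: pick, for each output frame,
# the source frame nearest to its timestamp. The loader's real logic may differ.
def resample_indices(source_fps: float, force_rate: float, n_out: int) -> list[int]:
    return [round(i * source_fps / force_rate) for i in range(n_out)]

# Example: a 30 fps source forced to 16 fps keeps roughly every other frame.
print(resample_indices(source_fps=30, force_rate=16, n_out=10))
# -> [0, 2, 4, 6, 8, 9, 11, 13, 15, 17]
```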

2. The frame_load_cap and skip_first_frames parameters in the **Load Video** node are for preview reference only.
The frames actually skipped and the number of frames generated are determined by the green parameter buttons.

3. Save location

Videos will be saved in the “output/videos” folder.

4. Video acceleration

If your ComfyUI installation does not have Triton and SageAttention installed, delete the WanVideo Torch Compile Settings node from the Wan2.1 video-to-image workflow and change the attention_mode of the WanVideo Model Loader to sdpa; otherwise errors may occur. (A quick environment check is sketched below.)

If you have low VRAM and generation fails, set blocks_to_swap in WanVideo BlockSwap to 30-40. Generation will be slower, but it will complete.
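
If you are unsure whether Triton and SageAttention are installed in the Python environment that runs ComfyUI, a standalone check like the one below (assuming the usual import names triton and sageattention) tells you whether you need the sdpa fallback described above.

```python
# Standalone check: are triton and sageattention importable in the Python
# environment that runs ComfyUI? If either is missing, remove the
# WanVideo Torch Compile Settings node and set attention_mode to sdpa.
import importlib.util

for pkg in ("triton", "sageattention"):
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'installed' if found else 'NOT installed'}")
```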


Use this together with my video walkthrough: https://space.bilibili.com/1876480181?spm_id_from=333.337.0.0
The VACE series allows you to freely replace any actor in movie scenes.


You can also join Ashuo’s membership group. WeChat: yumengashuo

The community provides:
1. Exclusive advanced comfyui workflows
2. Solutions to various complex issues
3. Ashuo’s integration package to resolve environment conflicts with 100 original workflows
4. The latest international AI news
5. Prompt responses to community inquiries