Use my invite code ➡️ rh-v1182 when you register to get
Workflow Testing and Usage Instructions:
1. First, a big thank you to eddy for open-sourcing several LoRAs. Combined with KJ's Wan2.2 Animate workflow, they perform very well in both character consistency and video motion. Note that "lightx2v_elite_it2v_animate_face" already has lightx2v acceleration built in, so you don't need any other speed-up LoRAs. This LoRA also helps maintain the reference character's consistency, so I recommend a strength between 1.0 and 1.2. For "WAN22_MoCap_fullbodyCOPY ED": if you need high consistency with the reference-image character, use a strength of 0.35-0.5; if you want to lean more toward the character in the reference video, 0.7-1.0 works better.
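The strength recommendations above can be summarized as a small lookup. This is only a sketch of the values stated in these notes (the helper function is hypothetical; in the workflow you set these numbers directly on the LoRA loader nodes):

```python
def recommended_strength(lora: str, prefer: str = "reference_image") -> tuple[float, float]:
    """Return the suggested (min, max) strength range for a LoRA.

    prefer: "reference_image" to stay close to the reference image's character,
            "reference_video" to lean toward the character in the reference video.
    """
    if lora == "lightx2v_elite_it2v_animate_face":
        # Has lightx2v acceleration built in; also aids character consistency.
        return (1.0, 1.2)
    if lora == "WAN22_MoCap_fullbodyCOPY ED":
        if prefer == "reference_image":
            return (0.35, 0.5)  # high consistency with the reference image
        return (0.7, 1.0)       # lean toward the reference-video character
    raise ValueError(f"no recommendation for {lora!r}")
```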
2. Loading the Wan2.2 model together with SAMSegment requires a lot of VRAM, so I've enabled WanVideo Block Swap by default. In my testing, the entire workflow can run on a 24 GB GPU, but I recommend the 48 GB GPUs on RunningHub for a much smoother experience.
3. For different kinds of reference videos, I've preset two masking methods in the workflow; choose one of them. If your reference video has only one character, use the "Single-character usage" group. If it has multiple characters and you only want to mask certain areas, use the "Multi-role usage" group.
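The choice between the two preset groups can be sketched as a one-line decision (the group names come from the workflow; the function itself is hypothetical, since in practice you enable one group and bypass the other in ComfyUI):

```python
def mask_group(num_characters: int) -> str:
    """Pick which preset mask group to enable; leave the other bypassed."""
    if num_characters <= 1:
        return "Single-character usage"
    # Multiple characters where only certain areas should be masked:
    return "Multi-role usage"
```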
4. This workflow can generate longer-than-usual videos, but I don't recommend going beyond 30 seconds. In my tests of a 20-second video, character consistency starts to decay around the 10-second mark and the color tone shifts slightly, which I believe is caused by the influence of the reference video during the context-looping process. Keep generated videos to around 20 seconds and no longer than 30; past 30 seconds you risk unpredictable degradation or a serious loss of consistency.
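The duration guidance above amounts to two thresholds, which a quick hypothetical check function makes explicit (the limits are the ones observed in my testing, not hard limits of the model):

```python
RECOMMENDED_SECONDS = 20  # sweet spot observed in testing
HARD_LIMIT_SECONDS = 30   # past this, degradation becomes unpredictable

def check_duration(seconds: float) -> str:
    """Classify a target clip length against the tested limits."""
    if seconds <= RECOMMENDED_SECONDS:
        return "ok"
    if seconds <= HARD_LIMIT_SECONDS:
        return "risky: consistency may decay and the color tone may shift"
    return "not recommended: unpredictable degradation past 30 seconds"
```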
5. For more detailed instructions, please see the notes inside the workflow.