

Workflow name: Keyframe interpolation + AnimateDiff
[Workflow introduction]
First, upload a reference image, then use the Joy_caption_load node to infer a prompt from it. In this workflow I uploaded seven pose images, and FILM VFI performs frame interpolation between them to guide the motion of the video; here it produces a 180° rotation effect. Next, ControlNet constrains the character's form while IPAdapter applies a style transfer. Finally, the processed results feed into AnimateDiff, and the sampler is connected to generate the video.
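The keyframe-interpolation step above can be pictured as generating in-between frames for each pair of adjacent pose images. FILM VFI uses a learned, flow-based model; the naive linear crossfade below is only a sketch of the in-betweening idea, with the function name and frame counts chosen for illustration:

```python
import numpy as np

def interpolate_frames(key_a, key_b, n_mid):
    """Insert n_mid blended frames between two keyframes.

    FILM VFI predicts motion with a neural network; this simple
    linear crossfade only illustrates where the extra frames go.
    """
    frames = [key_a]
    for i in range(1, n_mid + 1):
        t = i / (n_mid + 1)  # blend weight: 0 at key_a, 1 at key_b
        frames.append((1 - t) * key_a + t * key_b)
    frames.append(key_b)
    return frames

# Two tiny 2x2 grayscale "keyframes" stand in for pose images.
a = np.zeros((2, 2))
b = np.ones((2, 2))
seq = interpolate_frames(a, b, n_mid=3)  # 5 frames total
```

With seven pose images and, say, three in-betweens per pair, the sequence grows to 6 × 3 + 7 = 25 frames, which is what gives the rotation its smooth motion.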
[Usage scenario]
Upload an image and use IPAdapter to apply a style transfer that brings it to life. The process preserves character consistency well, so the image keeps its original features while its style changes. For practitioners in the game and animation industries, this workflow can offer new inspiration and methods, improve efficiency, and make your work more compelling.
[Key nodes]
AnimateDiff, FILM VFI
[Model version]
SD1.5
Model name: 3dAnimationDiffusion v10.safetensors
[LoRA model]
None
[ControlNet application]
Preprocessor: HED soft-edge
Loaded model: scribble fp16.safetensors
Start percent: 0
End percent: 0.6
Strength: 0.5
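The start/end percentages limit ControlNet guidance to the early part of sampling: with end at 0.6, the scribble constraint shapes the composition and then releases the model to refine details on its own. A tiny sketch (the helper name is hypothetical, not a ComfyUI API) shows which sampler steps it covers:

```python
def controlnet_active_steps(total_steps, start_percent, end_percent):
    """Return the sampler step indices where ControlNet guidance applies."""
    start = int(round(total_steps * start_percent))
    end = int(round(total_steps * end_percent))
    return range(start, end)

# With 20 sampling steps, start 0 and end 0.6:
active = controlnet_active_steps(20, 0.0, 0.6)
# guidance applies on the first 12 steps; the rest run unconstrained
```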
[K sampler]
CFG: 8
Sampling method: dpmpp_2m
Scheduler: karras
Denoise: 1
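The karras scheduler spaces the sampler's noise levels per Karras et al., clustering steps near the low-noise end where fine detail forms. A minimal sketch of that schedule (the sigma_min/sigma_max values here are illustrative; rho = 7 is the common default):

```python
def karras_sigmas(n, sigma_min=0.1, sigma_max=10.0, rho=7.0):
    """Noise levels spaced per Karras et al.: denser near sigma_min."""
    ramp = [i / (n - 1) for i in range(n)]
    min_inv = sigma_min ** (1 / rho)
    max_inv = sigma_max ** (1 / rho)
    return [(max_inv + t * (min_inv - max_inv)) ** rho for t in ramp]

sigmas = karras_sigmas(20)
# starts at sigma_max, decreases monotonically down to sigma_min
```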