AnimateDiff LCM Workflow with IP-Adapter & Multi-Pass Upscaling
This workflow generates fast, high-quality AI animations by combining AnimateDiff with LCM (Latent Consistency Model) sampling for rapid inference. It features triple IP-Adapter conditioning for precise regional style control and a multi-pass upscaling pipeline for professional output quality.
Key Features:

Fast generation using LCM sampler (only 10 steps needed)
Triple IP-Adapter setup with RGB mask separation for regional style control
3-pass iterative refinement with progressive denoising (1.0 → 0.6 → 0.4)
4x-UltraSharp upscaling between passes for enhanced detail
RIFE frame interpolation for smooth 30fps output
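The three-pass refinement above can be sketched as a simple schedule. Only the denoise values (1.0 → 0.6 → 0.4) and the 440x640 starting resolution come from this workflow; the 1.5x size growth per pass (4x-UltraSharp upscale followed by a downscale) is an illustrative assumption, not the workflow's exact ratio:

```python
# Sketch of the 3-pass progressive refinement schedule.
# Assumption: each pass runs at 1.5x the previous resolution
# (4x-UltraSharp upscale, then downscale); the real workflow's
# intermediate sizes may differ.

def pass_schedule(width=440, height=640, denoises=(1.0, 0.6, 0.4), growth=1.5):
    """Return (pass_index, width, height, denoise) for each refinement pass."""
    schedule = []
    for i, denoise in enumerate(denoises):
        schedule.append((i + 1, int(width), int(height), denoise))
        # Between passes the image is upscaled, so the next KSampler
        # pass refines a larger frame with a lower denoise strength.
        width, height = width * growth, height * growth
    return schedule

for p, w, h, d in pass_schedule():
    print(f"pass {p}: {w}x{h}, denoise={d}")
```

The decreasing denoise values are what keep later passes adding detail rather than re-composing the animation: pass 1 generates from scratch (denoise 1.0), while passes 2 and 3 only partially re-noise the upscaled result.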

How It Works:

Load three reference images (TOP, MIDDLE, BOTTOM) for different regions of your animation
Create an RGB mask where Red=TOP, Green=MIDDLE, Blue=BOTTOM areas
The workflow applies each reference image to its corresponding masked region via IP-Adapter
Initial generation at 440x640 with 24 frames
Three KSampler passes progressively refine the output while upscaling
RIFE interpolation triples the frame count (24 → 72 frames) for fluid motion at the final 30fps
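The RGB mask step can be illustrated with a minimal pure-Python sketch. The channel-to-region mapping (Red=TOP, Green=MIDDLE, Blue=BOTTOM) is taken from the steps above; the 127 binarization threshold is an assumption, since the workflow's own mask nodes handle this conversion internally:

```python
def split_rgb_mask(mask_rgb, threshold=127):
    """Split an HxW grid of (R, G, B) tuples into three binary region masks.

    Red -> TOP, Green -> MIDDLE, Blue -> BOTTOM, matching the mapping
    described above. The 127 threshold is an assumption; the workflow's
    mask nodes may binarize differently.
    """
    channels = {"TOP": 0, "MIDDLE": 1, "BOTTOM": 2}
    return {
        name: [[1.0 if px[ch] > threshold else 0.0 for px in row]
               for row in mask_rgb]
        for name, ch in channels.items()
    }

# Tiny 3x1 example: a red, a green, and a blue pixel stacked vertically.
demo = [[(255, 0, 0)], [(0, 255, 0)], [(0, 0, 255)]]
regions = split_rgb_mask(demo)
print(regions["TOP"])  # [[1.0], [0.0], [0.0]]
```

Each binary mask is then attached to one IP-Adapter as its attention mask, so that reference image only influences its own region of the frame.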

Required Models:

Checkpoint: Photon-LCM (SD1.5 LCM) - https://civitai.com/models/306814/photon-lcm
Motion Model: AnimateDiff LCM - https://civitai.com/models/326698/animatediff-lcm-motion-model
IP-Adapter: ip-adapter-plus_sd15
CLIP Vision: CLIP-ViT-H-14
Upscaler: 4x-UltraSharp

Required Custom Nodes:

ComfyUI-AnimateDiff-Evolved
ComfyUI-VideoHelperSuite
ComfyUI_IPAdapter_plus
ComfyUI-KJNodes
ComfyUI_Fill-Nodes

Credits:
Recreated from scratch after watching the Machine Delusions AnimateDiff tutorial: https://www.youtube.com/watch?v=opvZ8hLjR5A