

Powered By RTX 4090
Epsilon
2025-04-08 Update
Workflow introduction
Inference is very fast. For more workflows, visit
https://github.com/mit-han-lab/ComfyUI-nunchaku/tree/main
Nunchaku is an efficient 4-bit neural network inference engine using SVDQuant quantization. For the quantization library, check out DeepCompressor.
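To give a feel for what "4-bit inference" means, here is a minimal sketch of plain absmax int4 weight quantization. This is an illustrative assumption, not the actual SVDQuant algorithm (SVDQuant additionally absorbs outliers into a low-rank branch before quantizing); all function names here are hypothetical.

```python
# Conceptual sketch only: plain symmetric absmax 4-bit quantization.
# SVDQuant itself is more sophisticated (low-rank outlier absorption);
# see the Nunchaku/DeepCompressor repositories for the real method.

def quantize_int4(weights):
    """Map floats to signed 4-bit integers in [-7, 7] via absmax scaling."""
    scale = max(abs(w) for w in weights) / 7.0 or 1.0  # avoid div-by-zero
    q = [max(-7, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int4(q, scale):
    """Recover approximate float weights from int4 values and the scale."""
    return [v * scale for v in q]

w = [0.31, -1.2, 0.05, 0.88]
q, s = quantize_int4(w)        # 4-bit codes plus one float scale
w_hat = dequantize_int4(q, s)  # lossy reconstruction of the weights
```

Storing 4-bit codes instead of 16-bit floats is where the memory savings (and, with int4 kernels, the speed) come from; the reconstruction error per weight is bounded by half the scale.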
Nodes Information
16 nodes in total:
CLIPTextEncode
SaveImage
VAEDecode
VAELoader
BasicGuider
BasicScheduler
EmptySD3LatentImage
FluxGuidance
KSamplerSelect
ModelSamplingFlux
NunchakuFluxDiTLoader
NunchakuFluxLoraLoader
NunchakuTextEncoderLoader
PrimitiveNode
RandomNoise
SamplerCustomAdvanced