ltx-2.3-22b-dev-Q8_0.gguf

This is a GGUF quantized version of LTX-2.3.
unsloth/LTX-2.3-GGUF uses Unsloth Dynamic 2.0 methodology for SOTA performance.
- Important layers are upcasted to higher precision.
- Uses tooling from ComfyUI-GGUF by city96.
There are two sets of GGUFs published: one for the dev model and one for the distilled model. The distilled model is optimized for few-step generation (think 4-8 steps). The dev model needs more steps, at least 20, but you get better outputs. The distilled variant is useful as a drafting model or a refining model.
In fact, the workflow published below uses the distilled LoRA on top of the dev model to refine the initial output.
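The two-pass idea above can be sketched in plain Python. This is a minimal illustrative stub, not the actual ComfyUI workflow: the `sample` function and model names are hypothetical stand-ins for a real diffusion sampling loop, chosen only to show how the dev pass and the short distilled refinement pass fit together.

```python
def sample(model, latent, steps):
    """Hypothetical stand-in for a diffusion sampling loop."""
    for _ in range(steps):
        latent = f"{model}({latent})"  # placeholder for one denoise step
    return latent

def run_workflow(prompt):
    # Pass 1: the dev model needs at least ~20 steps for quality output.
    latent = sample("ltx-dev", prompt, steps=20)
    # Pass 2: the distilled LoRA on top of dev refines in only a few steps.
    latent = sample("ltx-dev+distilled-lora", latent, steps=6)
    return latent
```

The point of the structure is the step budget: the expensive many-step pass runs once on the dev model, and the distilled LoRA pass only adds a handful of refinement steps on top.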