FP8-quantized weights for the FLUX.1-Kontext-dev diffusion model. Both E4M3FN and E5M2 formats are supported.
Model Overview
- Base Model: FLUX.1-Kontext-dev (diffusion model component)
- Quantization: Per-tensor dynamic quantization to FP8 (E4M3FN/E5M2); see the sketch after this list
- Size Reduction: ~40% smaller than the original weights
- Model Scope: This is only the diffusion_model component (not the full pipeline); it should be placed in ComfyUI's diffusion_models directory (a quick way to verify the file before installing it is shown below)
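
The model card does not include the quantization code, but the scheme it describes (per-tensor dynamic quantization to FP8) can be sketched as follows. This is a minimal illustration, assuming PyTorch 2.1+ (which provides the torch.float8_e4m3fn and torch.float8_e5m2 dtypes); the helper names quantize_fp8 and dequantize_fp8 are illustrative and not part of this repository.

```python
# Minimal sketch of per-tensor dynamic FP8 quantization (assumption: PyTorch >= 2.1).
import torch

def quantize_fp8(weight: torch.Tensor, dtype: torch.dtype = torch.float8_e4m3fn):
    """Scale a tensor into the FP8 representable range, then cast it."""
    # Max representable magnitude: 448 for E4M3FN, 57344 for E5M2.
    fp8_max = torch.finfo(dtype).max
    # "Per-tensor" means one scalar scale for the whole weight tensor,
    # derived dynamically from the tensor's own max absolute value.
    scale = weight.abs().max().clamp(min=1e-12) / fp8_max
    quantized = (weight / scale).clamp(-fp8_max, fp8_max).to(dtype)
    return quantized, scale

def dequantize_fp8(quantized: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Upcast to a compute dtype before multiplying, since FP8 math is limited.
    return quantized.to(torch.bfloat16) * scale

if __name__ == "__main__":
    w = torch.randn(4096, 4096)
    q, s = quantize_fp8(w)
    print(q.dtype, s.item())                       # torch.float8_e4m3fn, per-tensor scale
    print((dequantize_fp8(q, s) - w).abs().max())  # rough quantization error
```

E4M3FN trades range for precision (more mantissa bits), while E5M2 does the opposite, which is why weight checkpoints are commonly offered in both formats.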
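Before dropping the file into ComfyUI's diffusion_models directory, you can confirm it actually stores FP8 tensors with the safetensors library. The filename below is illustrative; use whichever .safetensors file you downloaded from this repository.

```python
# Quick dtype check on a downloaded checkpoint (assumption: `safetensors` is installed
# and the filename matches your local download).
from safetensors import safe_open

with safe_open("flux1-kontext-dev-fp8-e4m3fn.safetensors", framework="pt") as f:
    for name in list(f.keys())[:5]:
        print(name, f.get_tensor(name).dtype)  # expect torch.float8_e4m3fn or float8_e5m2
```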
This model is mirrored from an external source (origin: https://huggingface.co/6chan/flux1-kontext-dev-fp8 ). If the original author objects to this transfer, they can click "Appeal", and within 24 hours we will edit, delete, or transfer the model to the original author, in accordance with their request.