pixelwave_flux1_schnell_04_fp8



PixelWave FLUX.1 schnell 04

PixelWave FLUX.1 schnell version 04 is an aesthetic refinement of FLUX.1 schnell. The training images were handpicked to bias the model towards striking images with beautiful colors, textures, and lighting. Whether it's abstract charcoal sketches, high-saturation high-fashion portraits, or moody cinematic cyberpunk-inspired art, PixelWave FLUX.1 schnell gives you greater flexibility in prompting for the desired style.

  • Trained on the original Schnell model
  • Apache 2.0 License
  • No special nodes or code required
  • Supports FLUX LoRA
  • Recommended sampling: Euler, normal scheduler, 8 steps

You can use more steps to refine finer details, but there won't be significant changes after 8 steps.
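As a hedged sketch, the recommended settings map onto a Hugging Face diffusers call roughly as follows. The repo id and the fp8 checkpoint's pipeline compatibility are assumptions, not something this page confirms:

```python
# Hedged sketch: generating with the recommended 8-step settings via
# diffusers' FluxPipeline. The repo id below is illustrative.
SCHNELL_SETTINGS = {
    "num_inference_steps": 8,  # recommended: Euler, 8 steps
    "guidance_scale": 0.0,     # schnell-family models are distilled for CFG-free sampling
}

def generate(prompt: str):
    # Heavy imports are kept inside the function so the sketch can be read
    # without a GPU or the diffusers package installed.
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "mikeyandfriends/PixelWave_FLUX.1-schnell_04",  # assumption: pipeline-compatible layout
        torch_dtype=torch.bfloat16,
    )
    pipe.to("cuda")
    return pipe(prompt, **SCHNELL_SETTINGS).images[0]
```

`guidance_scale` is set to 0.0 here because schnell-family models are distilled to run without classifier-free guidance.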

Shout out to RunDiffusion

Special thanks to RunDiffusion (co-creator of Juggernaut!) for sponsoring the compute that made training this model possible! To help you try it out, they offer 250 short-term free daily credits (approximately 40 to 80 PW Schnell generations) on their platform.

For those needing API access to this model, we are collaborating with Runware.ai.

Thanks to their support, this model is out there!

Training

Training was completed using kohya_ss / sd-scripts. You can find my branch of Kohya here, which also includes changes to the sd-scripts submodule, so make sure to clone both.

Use the fine-tuning tab. I found the best results on my 4090 GPU using the PagedLion8bit optimizer. Other optimizers struggled to learn.

I froze the time_in, vector_in, and mod/modulation parameters. This stopped "de-distillation."
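The freezing rule above can be sketched as a simple name filter. The parameter names here are illustrative, not the actual FLUX state-dict keys:

```python
# Hedged sketch: deciding which FLUX parameters to freeze by name.
FROZEN_PREFIXES = ("time_in.", "vector_in.")

def should_freeze(name: str) -> bool:
    """Freeze time_in, vector_in, and any modulation ("mod") parameter."""
    if name.startswith(FROZEN_PREFIXES):
        return True
    # FLUX blocks carry modulation sub-modules whose names contain "mod".
    return any("mod" in part for part in name.split("."))

# Example over a tiny, made-up parameter list:
params = [
    "time_in.in_layer.weight",
    "vector_in.out_layer.bias",
    "double_blocks.0.img_mod.lin.weight",
    "double_blocks.0.img_attn.qkv.weight",
]
frozen = [p for p in params if should_freeze(p)]
```

In a PyTorch training loop this would typically be applied as `param.requires_grad = not should_freeze(name)` over `model.named_parameters()`.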

I avoided training any single block beyond index 15. You can set which blocks to train in the FLUX section.
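A minimal sketch of that block cutoff, assuming the common `single_blocks.{i}.` key naming:

```python
# Hedged sketch: only single-stream blocks with index <= 15 stay trainable.
MAX_SINGLE_BLOCK = 15

def trainable_single_block(name: str) -> bool:
    parts = name.split(".")
    if len(parts) < 2 or parts[0] != "single_blocks":
        return True  # not a single block: leave trainable
    return int(parts[1]) <= MAX_SINGLE_BLOCK

# trainable_single_block("single_blocks.3.linear1.weight")  -> True
# trainable_single_block("single_blocks.20.linear1.weight") -> False
```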

An LR of 5e-6 trains quickly, but you must stop after a few thousand steps, as it starts damaging blocks and slowing learning.

Then, you can use an earlier checkpoint to merge blocks, replace damaged blocks, and continue further training.
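The merge-and-repair step can be sketched over plain name-to-weight mappings; the block names here are illustrative:

```python
# Hedged sketch: repair a checkpoint by copying damaged blocks back
# from an earlier (healthy) checkpoint. State dicts are modeled as
# plain name -> weight mappings.

def replace_blocks(current: dict, earlier: dict, damaged_prefixes: tuple) -> dict:
    """Return a copy of `current` with damaged blocks taken from `earlier`."""
    repaired = dict(current)
    for name, value in earlier.items():
        if name.startswith(damaged_prefixes):
            repaired[name] = value
    return repaired

current = {
    "single_blocks.7.linear1.weight": "damaged",
    "single_blocks.2.linear1.weight": "ok",
}
earlier = {
    "single_blocks.7.linear1.weight": "healthy",
    "single_blocks.2.linear1.weight": "old",
}
fixed = replace_blocks(current, earlier, ("single_blocks.7.",))
# Block 7 is restored from the earlier checkpoint; block 2 keeps the current weights.
```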

Signs of damaged blocks: a paper-like texture on most images and loss of background detail.

This model was transferred from an external source (original address: https://hf-mirror.com/mikeyandfriends/PixelWave_FLUX.1-schnell_04). If the original author objects to this transfer, they can click
Appeal
and within 24 hours we will edit, delete, or transfer the model according to the original author's request.


Model Information

Active
Model Type:
Unet
Base Model:
F1 Base S
Trigger Words:
pixelwave_flux1_schnell_04_fp8
Resource Name:
models/unet/pixelwave_flux1_schnell_04_fp8.safetensors
MD5:
53c0a5c22151b249ab39c2f285b39723
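To verify a downloaded file against the listed MD5, a short checksum helper suffices:

```python
# Sketch: compute a file's MD5 in chunks (safe for multi-GB .safetensors files).
import hashlib

def md5_of(path: str, chunk: int = 1 << 20) -> str:
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

# expected: md5_of("models/unet/pixelwave_flux1_schnell_04_fp8.safetensors")
# should equal "53c0a5c22151b249ab39c2f285b39723"
```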
