Fan Bonus: Click your profile picture in the top-right corner, go to 'Invite Code,' and enter `rh-v1182` to get 1,000 RH Coins. Plus, you'll get another 100 coins for logging in daily!


Workflow Testing and Instructions:

1. I tested this process on RunningHub, and the results show that it can't train every single style of Lora: in my tests, for example, it couldn't replicate motion blur or line-art styles. If you want to try this locally, you can deploy the model and nodes using these links: DiffSynth-Studio/Z-Image-i2L and ComfyUI_RH_ZImageI2L.

2. The images in your training set must all share the same style. You need at least 4 images; my workflow defaults to 8. Based on how the Z-Image model works, I recommend using images of around 1 megapixel.
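If your source images are much larger than 1 megapixel, you'll need to scale them down before training. Here is a minimal sketch of the sizing math: it finds dimensions with an area close to 1 megapixel while keeping the aspect ratio. Rounding to multiples of 16 is an assumption on my part (many diffusion models expect it; check what Z-Image actually requires), and `target_size` is a hypothetical helper, not part of the workflow.

```python
import math

def target_size(width, height, target_pixels=1_048_576):
    """Scale (width, height) so the area is ~target_pixels, keeping the
    aspect ratio. Dimensions are snapped to multiples of 16 — an assumption,
    not a documented Z-Image requirement."""
    scale = math.sqrt(target_pixels / (width * height))
    w = max(16, round(width * scale / 16) * 16)
    h = max(16, round(height * scale / 16) * 16)
    return w, h

# e.g. a 4000x3000 photo becomes roughly 1184x880 (~1 megapixel)
print(target_size(4000, 3000))
```

You can then resize each image to the returned dimensions with any image tool (Pillow, ImageMagick, etc.) before uploading the training set.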

3. If the style isn't coming through clearly in your generated images, try adjusting the **Model Strength** in the **LoraLoader** node; values between 1.0 and 1.5 usually work best. If the image isn't following your prompt, try slightly increasing the **Clip Strength** in the LoraLoader until you get the balance you want.
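To build intuition for what those two sliders do: a LoRA stores low-rank deltas, and the loader merges them into the base weights as W' = W + strength * (up @ down). Model Strength scales the deltas applied to the diffusion model's layers (affecting style), while Clip Strength scales those applied to the text encoder (affecting prompt following). Below is a pure-Python conceptual sketch of that merge, not ComfyUI's actual tensor code:

```python
def apply_lora(weight, lora_down, lora_up, strength):
    """Merge a LoRA delta into a weight matrix: W' = W + strength * (up @ down).
    Conceptual sketch only — real loaders do this per layer on GPU tensors."""
    rows, cols = len(weight), len(weight[0])
    rank = len(lora_down)  # low rank r: up is (rows x r), down is (r x cols)
    out = [row[:] for row in weight]
    for i in range(rows):
        for j in range(cols):
            delta = sum(lora_up[i][r] * lora_down[r][j] for r in range(rank))
            out[i][j] += strength * delta
    return out

# strength=0 leaves the base weights untouched; larger strength pushes the
# output further toward the trained style.
```

So raising Model Strength simply makes the style deltas count for more in every affected layer, which is why an overly high value can overpower the prompt.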

4. Once you've finished training a style Lora, you can download it from the "Task List" on the right (see the screenshot for the exact location).