
Workflow name: [Style transfer, dress change, face change] Portrait photography
[Workflow introduction]
This workflow implements all-round portrait image processing, covering outfit change, background change, and face swap. To start, you upload three images: a body pose image, a clothing style image, and a face image. The IPAdapter model transfers the clothing style, while the ControlNet pose (skeleton) and depth models control the character's pose. joy_caption then reverse-captions the reference image into a text prompt, which is fed to the sampler to generate the first image. The ControlNet depth model is applied again in a second sampler pass to refine the result. Finally, the face is cropped, swapped, and redrawn, then composited back onto the image, so the specified person ends up wearing the specified style of clothing. This keeps every stage of the process controllable. Give it a try and experience the fun and surprises its powerful image processing can bring!
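If you would rather trigger this workflow from a script than from the ComfyUI web interface, the sketch below shows one possible way to queue it through ComfyUI's HTTP API. It is a minimal sketch, assuming a local server at 127.0.0.1:8188 and that the workflow has been exported with "Save (API Format)"; the file name portrait_workflow_api.json is a placeholder, not part of the original workflow.

# Minimal sketch: queue the exported workflow through ComfyUI's HTTP API.
# Assumptions: ComfyUI runs locally on port 8188 and the workflow was saved
# in API format as portrait_workflow_api.json (placeholder name).
import json
import urllib.request

with open("portrait_workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# The three input images (pose, clothing style, face) are assumed to have been
# copied into ComfyUI's input folder and referenced by their LoadImage nodes.
payload = json.dumps({"prompt": workflow}).encode("utf-8")
request = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))  # server replies with the queued prompt_id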
[Use scenario]
This workflow lets you upload images of a specific model, environment, and outfit, and then composites them so that the specified model wears the specified style of clothing in the specified environment. Careful image processing and compositing ensure that model, clothing, and environment blend seamlessly. It helps you realize the visual effect you have in mind and produces images that match your requirements.
[Key nodes]
IPAdapter, controlnet, SUPIR
[Model version]
SDXL
Model name: dreamshaperXL_v21TurboDPMSDE.safetensors
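For orientation, the checkpoint above would typically be loaded with ComfyUI's standard checkpoint loader. Below is a minimal sketch in API format written as a Python dict; the node ID is a placeholder.

# Sketch: loading the SDXL checkpoint named above via the standard loader node.
checkpoint_node = {
    "4": {  # node ID is a placeholder
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "dreamshaperXL_v21TurboDPMSDE.safetensors"},
    },
}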
[LoRA model]
None
[ControlNet application]
controlnet-union-sdxl loader
DWPose preprocessor
Start percent: 0
End percent: 0.5
Strength: 0.85
Depth Anything V2
Start percent: 0
End percent: 0.8
Strength: 0.85
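As a reading aid, here is how the two ControlNet branches above would typically look in ComfyUI's API (JSON) format, written as Python dicts. This is a hedged sketch: the node IDs, the [node_id, output_index] links, and the preprocessor node names are assumptions based on common node packs, not values taken from the original workflow file.

# Sketch of the two ControlNet branches in ComfyUI API format (Python dicts).
# Node IDs and upstream links are placeholders; preprocessor node names are
# assumed from the comfyui_controlnet_aux pack.
pose_branch = {
    "11": {  # pose guidance is active only for the first half of sampling
        "class_type": "ControlNetApplyAdvanced",
        "inputs": {
            "positive": ["6", 0],       # positive conditioning (placeholder link)
            "negative": ["7", 0],       # negative conditioning (placeholder link)
            "control_net": ["10", 0],   # controlnet-union-sdxl loader output
            "image": ["12", 0],         # DWPose preprocessor output
            "strength": 0.85,
            "start_percent": 0.0,
            "end_percent": 0.5,
        },
    },
}

depth_branch = {
    "21": {  # depth guidance stays active longer than the pose branch
        "class_type": "ControlNetApplyAdvanced",
        "inputs": {
            "positive": ["11", 0],      # chained after the pose branch (placeholder)
            "negative": ["11", 1],
            "control_net": ["10", 0],
            "image": ["22", 0],         # Depth Anything V2 preprocessor output
            "strength": 0.85,
            "start_percent": 0.0,
            "end_percent": 0.8,
        },
    },
}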
[K sampler]
CFG: 6.6
Sampling method: euler_ancestral
Scheduler: normal
Denoise: 1
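For reference, the same sampler settings expressed as a KSampler node in ComfyUI API format, again as a hedged Python sketch. The seed, step count, and upstream node links are placeholders, since they are not listed in this description.

# KSampler settings from this workflow as a ComfyUI API-format node (sketch).
# seed, steps, and the upstream links are placeholders not given above.
ksampler_node = {
    "3": {
        "class_type": "KSampler",
        "inputs": {
            "model": ["4", 0],          # checkpoint / IPAdapter-patched model
            "positive": ["21", 0],      # conditioning after both ControlNet branches
            "negative": ["21", 1],
            "latent_image": ["5", 0],   # empty latent or encoded input image
            "seed": 0,                  # placeholder
            "steps": 30,                # placeholder, not specified in this description
            "cfg": 6.6,
            "sampler_name": "euler_ancestral",
            "scheduler": "normal",
            "denoise": 1.0,
        },
    },
}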