

Author: AIGC101
Workflow source: https://openart.ai/workflows/aigc101/flux-style-transfer-better-version/lJw1nuyXNaGckheGnvqF
For the clip_vision input of the Load Flux IPAdapter node, I recommend this model:
https://huggingface.co/openai/clip-vit-large-patch14/resolve/main/model.safetensors?download=true
Download it and put it in the ... /models/clipvision folder.
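If you'd rather script the download, here is a minimal sketch using the huggingface_hub Python library; the ComfyUI install path is a placeholder assumption, so point it at your own folder:

from huggingface_hub import hf_hub_download

# Assumption: replace with the root of your own ComfyUI install.
comfyui_root = "/path/to/ComfyUI"

# Fetches model.safetensors from openai/clip-vit-large-patch14 (the link above)
# and saves it into the clipvision models folder.
hf_hub_download(
    repo_id="openai/clip-vit-large-patch14",
    filename="model.safetensors",
    local_dir=f"{comfyui_root}/models/clipvision",
)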
About ControlNet: Depth Anything V2 is recommended for depth, and HED is also recommended. Canny is not recommended, because it locks the lines in too rigidly; if you use it anyway, lower its weight.