Watercolor-painting-like images created with Stable Diffusion

DeepLearning.AI offers free 1-hour courses, so I tried some (the link is here; you can also learn about LLMs). Among them, “How Diffusion Models Work” was awesome for me (the Jupyter notebooks can be downloaded, too!). I learned the name “Stable Diffusion” for the first time in that course, immediately installed its web UI, and played with it a lot. I even bought a desktop computer with a GPU, an expense I had never considered before. My environment is WSL2 on Windows 11. I had a very hard time setting up TensorFlow with GPU support, but finally I was able to run both TensorFlow and PyTorch on the GPU!
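A minimal sanity check like the following can confirm that both frameworks actually see the GPU under WSL2 (a sketch, not the exact steps of my setup):

```python
import tensorflow as tf
import torch

# List the GPUs TensorFlow can use.
print("TensorFlow GPUs:", tf.config.list_physical_devices("GPU"))

# Check that PyTorch can reach CUDA and report the device name.
print("PyTorch CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("PyTorch device:", torch.cuda.get_device_name(0))
```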

Well, as I wrote in the title, I wanted to try Stable Diffusion to make a photo look like a watercolor painting. The result was good; the image below is the converted one. The output of Stable Diffusion varies greatly with its settings, such as the choice of checkpoint or LoRA add-ons. Here I used Counterfeit-V3.0 as the checkpoint, img2img for image generation, and simply “cosmos flowers, watercolor painting” as the prompt.
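For those who prefer scripts to the web UI, roughly the same img2img conversion can be sketched with the diffusers library. This is a minimal sketch under assumptions: the model id and file names are placeholders, and the strength and guidance values are guesses, not my exact web UI settings.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Load an SD checkpoint as an img2img pipeline.
# "gsdf/Counterfeit-V2.5" is a placeholder model id; substitute your checkpoint.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "gsdf/Counterfeit-V2.5",
    torch_dtype=torch.float16,
).to("cuda")

# The source photo to be repainted (placeholder file name).
init_image = Image.open("cosmos_photo.jpg").convert("RGB").resize((512, 512))

result = pipe(
    prompt="cosmos flowers, watercolor painting",
    image=init_image,
    strength=0.6,        # assumed value: how strongly to repaint the photo
    guidance_scale=7.5,  # assumed value: how closely to follow the prompt
).images[0]
result.save("cosmos_watercolor.png")
```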

Compared with the Neural Style Transfer (NST) result I got with TensorFlow before, the Stable Diffusion output may look more like a watercolor painting. The NST image is shown below.
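Something similar to that NST experiment can be reproduced compactly with the pre-trained fast style transfer model on TensorFlow Hub. This is a sketch under assumptions: the file names are placeholders, and this Hub model is a faster stand-in for the full optimization-based NST tutorial.

```python
import tensorflow as tf
import tensorflow_hub as hub

# Pre-trained arbitrary style transfer model from TensorFlow Hub.
hub_model = hub.load(
    "https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2"
)

def load_image(path, max_dim=512):
    """Read an image, convert to float32 in [0, 1], and cap the longer side."""
    img = tf.io.read_file(path)
    img = tf.image.decode_image(img, channels=3)
    img = tf.image.convert_image_dtype(img, tf.float32)
    shape = tf.cast(tf.shape(img)[:-1], tf.float32)
    scale = max_dim / tf.reduce_max(shape)
    new_shape = tf.cast(shape * scale, tf.int32)
    img = tf.image.resize(img, new_shape)
    return img[tf.newaxis, :]

content = load_image("cosmos_photo.jpg")      # placeholder file names
style = load_image("watercolor_sample.jpg")
stylized = hub_model(tf.constant(content), tf.constant(style))[0]
tf.keras.utils.save_img("cosmos_nst.png", stylized[0])
```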

Next, I took some photos of sunflowers and tried to give them a Van Gogh flavor with Stable Diffusion. So far it hasn’t worked very well; in this case, NST reproduces the Van Gogh taste better than Stable Diffusion.

Conclusion: to get images closer to what I want, Stable Diffusion should be an excellent option besides NST. In addition, Stable Diffusion has many interesting features, such as background removal and Canny edge detection. I would like to try more of them soon.
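As far as I understand, the Canny preprocessing exposed in the web UI (typically used with ControlNet) boils down to OpenCV’s Canny edge detector, which can be tried standalone (a sketch with assumed thresholds and a placeholder file name):

```python
import cv2

# Read the photo as grayscale and extract edges; the threshold pair
# (100, 200) is an assumed, commonly used default.
img = cv2.imread("cosmos_photo.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, threshold1=100, threshold2=200)
cv2.imwrite("cosmos_edges.png", edges)
```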