Maybe it is stupid and can't play around a little, like an artist would?
How about an extremely complex scene - look at a sunset over a snowy and misty forest through a frosty window.
Beautiful
I've been playing with SD quite a bit, and I have to say it's probably one of the best tools for mood boards. I'll explain why.
Due to the data dragnet style of training, it tends to generate imagery that is stereotypical. So it kinda forces you to be creative with prompts, and to think about what image you really want, and what constitutes that image.
"Person sitting on chair" quickly turns into: "middle aged man sitting on an old wooden chair, in a small room with wall paper, lit through an open window."
Then you can get an infinite amount of permutations of this. Just brilliant for mood board generation!
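To make that concrete, here is a tiny sketch (the attribute lists are my own illustrative examples, not anyone's fixed recipe) of how such prompt permutations can be generated programmatically:

```python
# Generate prompt permutations for a mood board by mixing attribute lists.
# All lists below are just example values, not a prescribed workflow.
from itertools import product

subjects = ["middle aged man", "young woman", "elderly gentleman"]
chairs = ["an old wooden chair", "a worn leather armchair", "a plastic garden chair"]
rooms = ["small room with wallpaper", "bare concrete room", "cluttered studio"]
lighting = ["lit through an open window", "lit by a single desk lamp"]

prompts = [
    f"{s} sitting on {c}, in a {r}, {l}"
    for s, c, r, l in product(subjects, chairs, rooms, lighting)
]

for p in prompts[:5]:
    print(p)
```

Feed each line into the generator and you have a whole mood board grid in minutes.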
Personally I don't think it's competition for the human neural network. Not because it's not good enough, but because people tend to enjoy filming. It's a tool that we can use, just like a camera is a tool.
Then again, when SD gets video, I wouldn't mind using it for b-roll. I don't really enjoy filming branches swaying all that much.
Then again, "middle aged man" turns out to be "white middle aged man with a belly".
It is not only about prompts.
I use an initial seed image, sometimes specially modified. On each iteration you select one of 9 variants, adjust how much the overall image is allowed to change, change the prompt, and also paint some areas to change more; there can be 10, 15, 20 such iterations.
It is an art tool.
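For anyone curious, here is a rough sketch of that loop using the open-source diffusers library; the hosted tool's exact API is different, and the model name, strength value, and manual selection step are my assumptions:

```python
# A sketch of the iterative img2img workflow described above (assumptions:
# public v1.4 checkpoint, strength of 0.45, picking variant 0 automatically
# where in reality you would choose and repaint by hand each round).
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

image = Image.open("seed.png").convert("RGB").resize((512, 512))  # placeholder seed image
prompt = "girl in a red dress standing in a sunflower field, golden hour"

for step in range(15):                      # typically 10-20 iterations
    variants = pipe(
        prompt=prompt,
        image=image,
        strength=0.45,                      # how much the image may change
        num_images_per_prompt=9,            # one of 9 variants is picked
    ).images
    image = variants[0]                     # in practice: choose visually,
    # then adjust the prompt, the strength, or paint over regions by hand
    # before the next iteration.
    image.save(f"iteration_{step:02d}.png")
```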
Note how I am able to make the dress transparent.
Or how I can add innuendo and make you feel that the girl is showing her private parts to the sunflower. It was intentional, and the source image was the same as for both of the other red dress girl images. Even her clothes are set not only by prompts, but iterated, painted over, and changed.
Another thing that appeared last week:
Made from this (many iterations in SD)
A few more of my experiments with art made using Stable Diffusion.
It used a specially processed initial photo.
Now, let's try to play with depth of focus and precise lighting, while keeping the exact flower specs we need.
@Vitaliy_Kiselev sorry, I didn't mean to say that the AI's have no value, or that they are not creative in their own right.
Actually, quite the opposite.
I see them as being an addition to photography, digital art, painting etc.
AI art is its own thing, and quite different from photography, et al. Firstly, as you have shown, it takes a completely different skill set, one of describing, keywords, seed imagery, etc.
I think rather than "we don't need photographers anymore", it is more like "we don't need stock anymore". Shutterstock & Adobe must be worried big time.
Regardless, it's its own thing, and AI for AI's sake is a valid art form. One that should be explored outside of "AI makes photographers redundant".
Have a look at this if you haven't already:
The big takeaway is that if it were perfect, like something shot on camera, it would lose the important painterly, glitchy aesthetic. Sometimes it's the imperfections that make it better.
It can be almost perfect, or it can be in some particular style.
This thing integrates different things, and you can mix them.
Your own mind functions in a similar way - it gets a small focused spot of the image plus the previous image, and makes an estimate based on your own knowledge.
Mathematically it looks like this
where the peaks are visually pleasing images (as it "thinks" of them), since people trained the NN to produce what they want (most probably using backpropagation learning).
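Roughly, in score-based terms (my own loose sketch of the idea, not SD's exact training objective), the network approximates the gradient of a log-probability landscape over images, and sampling repeatedly steps uphill with a little noise, which is why it settles near those peaks:

```latex
% Generic score-based formulation (assumed notation, not SD's exact loss):
% the network s_theta learns the gradient of the log image probability,
\[
  s_\theta(x) \;\approx\; \nabla_x \log p(x)
\]
% and a Langevin-style sampler climbs that landscape with a bit of noise,
% so it ends up near the peaks (the "visually pleasing" images):
\[
  x_{t+1} = x_t + \tfrac{\epsilon}{2}\, s_\theta(x_t) + \sqrt{\epsilon}\, z_t,
  \qquad z_t \sim \mathcal{N}(0, I)
\]
```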
Experiments with mosaic and materials
Metals and colored neodymium magnet balls
On lotus flowers
@Vitaliy Wow! How much RAM do you need for your cool experiments?
I use a hosted service.
But ideally you need 11GB or more of VRAM and a high-speed GPU with fast single-precision CUDA performance.
So, 3080, 3090, 3090 Ti.
Or new 4090.
Note that Stable Diffusion v1.5 is still not public for download and local use (and this is what is used here).
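If you want to try it locally on a smaller card anyway, here is a minimal sketch using the public v1.4 checkpoint (an assumption, since v1.5 cannot be downloaded yet), running in half precision with attention slicing to cut VRAM use:

```python
# Minimal local setup sketch for lower-VRAM GPUs (assumes the public
# v1.4 checkpoint; exact memory savings vary by card and resolution).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,       # halves the memory used by the weights
).to("cuda")
pipe.enable_attention_slicing()      # trades a bit of speed for less VRAM

image = pipe(
    "sunset over a snowy, misty forest seen through a frosty window"
).images[0]
image.save("test.png")
```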