
221 points lnyan | 1 comments | | HN request time: 0.274s | source
hexmiles ◴[] No.44397622[source]
While looking at the examples of editing the bear image, I noticed that the model seemed to change more things than it was strictly asked to.

As an example, when asked to change the background, it also completely changed the bear (it has the same shirt, but the fur and face are clearly different); and when it turned the bear into a balloon, it changed the background (removing the pavement) and lost the left seed in the watermelon.

Is it something that can be fixed with better prompting, or is it a limitation of the model/architecture?

replies(1): >>44399930 #
1. godelski ◴[] No.44399930[source]

  > Is it something that can be fixed with better prompting, or is it a limitation of the model/architecture?
Both. You can get better results through better prompting, but the root cause is a limitation of the architecture and training methods (which are coupled): the model regenerates the whole image rather than touching only the region the edit refers to.
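One common workaround outside the model itself is masked compositing: after the edit, re-apply the original pixels everywhere except the region you actually wanted changed, so unrequested drift (fur, face, background) is discarded. A minimal NumPy sketch of the idea — the images and mask here are toy placeholders, not anything from the model above:

```python
import numpy as np

def masked_composite(original, edited, mask):
    """Keep `edited` pixels where mask == 1; restore `original` elsewhere.

    original, edited: (H, W, C) float arrays; mask: (H, W) array of 0/1.
    """
    m = mask[..., None].astype(edited.dtype)  # broadcast mask over channels
    return edited * m + original * (1.0 - m)

# Toy 2x2 RGB images: the "edit" turned every pixel white,
# but only the top-left pixel was actually requested to change.
original = np.zeros((2, 2, 3))
edited = np.ones((2, 2, 3))
mask = np.array([[1, 0], [0, 0]])

result = masked_composite(original, edited, mask)
# result keeps the edit at [0, 0] and restores the original everywhere else
```

This only helps when the edit region can be localized (by a user-drawn mask or a segmentation step); it cannot fix cases where the requested change itself is entangled with the rest of the image.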