
20 points | praveeninpublic | 1 comment

While browsing YouTube, an AI-generated video appeared and I reflexively told my wife, “That’s AI—skip it.”

Yet I’m using AI-created illustrations for my graphic novel, fully aware of copyright and legal debates.

Both Copilot-style code assistants and art generators are trained on vast datasets, so why do we cheer one and vilify the other?

We lean on ChatGPT to rewrite blog posts and celebrate Copilot for “boosting productivity,” but AI art still raises eyebrows.

Is this a matter of domain familiarity, perceived craftsmanship, or simple cultural gatekeeping?

1. jemmyw | No.43807412
I paint as a hobby, and I think a lot of people put less value on digital art in general. It's almost as if "too perfect" doesn't land right for most people. I was in a local gallery last week and saw one image from afar, thinking "that's very detailed to be so cheap," then got close enough to see it was printed digital art and lost interest. The next painting over was a waterfall, and you could see the brush strokes, yet I found it infinitely more interesting.

However, AI image generation is immensely helpful when I want to do a painting. Before, I would find photos I liked and stitch them together, or try to imagine things. Now I can get an image much closer to the reference I have in mind.

With code, my feeling is that we have to write way too much of it right now to express what we want. I can write a small bit of text to the LLM and it will fill out 75%+ of the code across multiple files, which I then just need to shape. So much of it is structure that has to be repeated with variations. I don't have an answer, but it seems like something is missing from our tools, and LLMs are providing a bad imitation of what it should be.
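To make that concrete, here's a rough sketch of the kind of repetition I mean (the names are made up for illustration, not from any real codebase): two create handlers that share the same skeleton and differ only in the entity name and the field they validate.

    # Hypothetical sketch of "structure repeated with variations":
    # two handlers that are structurally identical, differing only in
    # the entity name and the required field they check.

    def create_user(payload: dict) -> dict:
        if "name" not in payload:
            raise ValueError("user requires a name")
        # ... persist, log, return; same steps in every handler
        return {"type": "user", **payload}

    def create_post(payload: dict) -> dict:
        if "title" not in payload:
            raise ValueError("post requires a title")
        # ... persist, log, return; same steps again
        return {"type": "post", **payload}

    print(create_user({"name": "alice"}))
    print(create_post({"title": "hello"}))

An LLM can fill in the next copy of that skeleton from a one-line prompt, which is roughly what "filling out 75% of the code" feels like in practice.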