Educating people about such a technical topic seems very difficult, especially since people get emotional about their work being used.
I know because I'm literally working on setting up Dreambooth to do what I'd otherwise have to pay an artist to do.
And not only is it replacing artists, it's using their own work to do so. None of these could exist without being trained on the original artwork.
Surely you can imagine why they're largely not happy?
We want data privacy, but we also like playing with any sort of leaked information. We like it when we can get music for free but clutch our pearls when Microsoft sells our code back to us. We talk a good game about free speech, but fail to understand that being shouted over, DDoSed, or harassed is a form of censorship. And whenever words are used that reference any of these concepts in ways we haven't considered - i.e. "marginalized voices", or "consent" - we circle the wagons.
The only consistent thing I can infer is that we don't like it when we get a taste of our own medicine.
In this case, technologists figured out how to exploit people's work without compensating them. A camera is possible without the artists it replaces. Generative modeling is not. It's fundamentally different.
If people figured out how to generate this kind of art without exploiting uncompensated unwilling artists' free labor, it would be a different story.
2. AI-unaided art is on the way to becoming a niche artisanal field. Kids of tomorrow will treat illustration as they do calligraphy, celluloid film, and butter churning.
3. Because of these trends, their userbase will stop growing and eventually dwindle.
DeviantArt sees the writing on the wall. This is a risky but probably necessary pivot, though it will accelerate the loss of their existing userbase.
It seems odd to complain that computers are using humans' artwork to inspire their own creations. Every human artist has done the exact same thing in their lifetime; it's unavoidable.
We're surrounded by people who don't understand what's happening. They seem to think some kind of art intelligence has been invented.
No, it's the aggregation and interpolation of vast amounts of existing art.
The same thing is happening with software, through Microsoft's Copilot:
https://bugfix-66.com/7a82559a13b39c7fa404320c14f47ce0c304fa...
I think people just don't understand what they're seeing. They have no idea what it is.
They think it's really "intelligence", dreaming and imagining and simulating and feeling and experimenting and...
It's none of these things. It's a sophisticated interpolation, not so different from linear interpolation:
a*x + (1-a)*y
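To make the comparison concrete, here is a toy sketch of that formula in plain Python (the function name and example values are mine, not anything from an actual model):

```python
def lerp(a, x, y):
    """Linear interpolation: blend x and y with weight a (0 <= a <= 1)."""
    return a * x + (1 - a) * y

# Blending two toy "pixel values" taken from two source works:
pixel_from_work_1 = 10.0
pixel_from_work_2 = 20.0
blended = lerp(0.75, pixel_from_work_1, pixel_from_work_2)
print(blended)  # 0.75*10 + 0.25*20 = 12.5
```

The claim is that generative models do something vastly more sophisticated than this, but of the same flavor: outputs are blends of training inputs.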
Maybe I agree 80 percent with this. I teach art, and our illustration stream will certainly have to re-think itself.
We are already seeing students of their own accord incorporate AI into their work. Mostly this is for ideation and development. But the best results come from the students who best know the formal language of art. This is not easy to come by and only very experienced artists and art directors speak this language effectively.
Agree.
But you also have to treat code the same way. We shouldn't be suing OpenAI and Microsoft over Copilot being trained on open source code. It's no different than models trained on art.
Besides, if Microsoft loses, they actually win. I expect they're one of the few companies with enough code to train the model on completely proprietary data. If they lose the case, they'll still be able to build the tool. The rest of us will be locked out of easy training data and won't be able to compete.
No, it wouldn't. It would still compete against artists. We'd have worse models in the beginning, and it would take time until someone licensed enough images to improve them, but the capability is there and we know about it; it's too late to stop.
By the way, Stable Diffusion has been fine-tuned with Midjourney image text pairs. So now we also have AI trained on AI images.
I think both humans and AI without training are stupid. Take a human alone, raised alone, without culture. He/she will be closer to animals than humans. It's the culture that is the locus of intelligence and we're borrowing intelligence from it just like the AIs.
Landscapes are another matter. Try finding any photo of a landscape that is half as sublime as the landscape paintings made by the Hudson River School. An effective painter can improve upon optical reality in a way that beggars belief. They do this with a clever mix of increasing contrast and affinity in a way that would be almost impossible for a photographer.
It's like a very complicated form of linear interpolation:
a*x + (1-a)*y
These systems do not "think". Today I spent all day mulling an idea, experimenting with variations, feeling frustrated or excited, imagining it, simulating it, making mistakes, following paths of reasoning, deducing facts, revisiting dead-ends with new insight, daydreaming, talking to my wife about it, etc. That's human thought. These models do not "think" like a human; they do not dream or imagine or feel. They run a feed-forward system of linear equations (matrix multiplications).
They INTERPOLATE HUMAN WORK.
They don't exist without training data (huge amounts of intellectual property) aggregated and interpolated in a monstrous perversion of "fair use":
https://bugfix-66.com/7a82559a13b39c7fa404320c14f47ce0c304fa...
Starve the machine. Without your work, it's got nothing.
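For readers unfamiliar with the phrase, the "feed-forward system of matrix multiplications" mentioned above can be sketched in a few lines. This is a toy two-layer network with random weights, nothing resembling a real trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy weights: a 4-dim input, an 8-dim hidden layer, a 3-dim output.
W1 = rng.standard_normal((4, 8))
W2 = rng.standard_normal((8, 3))

def forward(x):
    h = np.maximum(x @ W1, 0.0)  # matrix multiply, then ReLU nonlinearity
    return h @ W2                # second matrix multiply

y = forward(rng.standard_normal(4))
print(y.shape)  # (3,)
```

Real models stack many more such layers and far larger matrices, but the computation at inference time is this kind of fixed feed-forward pass.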
I don't find it odd to complain that publishing an artwork on DeviantArt has gone from "I intend humans to look at this" to "I (opt-out!) agree that a corporation may use this to generate new artwork for profit."
I would not complain if a painting of mine were exhibited in a museum and someone came in to look at it and draw something inspired by it.
I would complain if I handed over a painting of mine to that same museum to be exhibited, they scanned it in at high resolution, handed it over to a class of copy artists, who then produced artwork in order to compete with mine, before finally putting it up in a gallery.
Does that still seem odd?
The opting program is completely unnecessary; we shouldn't even be debating it being opt-in by default with opt-out as the choice. The opting program shouldn't have been done at all. AI models will be trained on everything visible, whether or not DeviantArt respects a flag that somebody else's Stable Diffusion model doesn't. It's poor taste to call it opt-in by default, knowing the AI can read it all extremely quickly, probably already has, and will never forget. They could have just handled the backlash without an opting feature.
There's loads of BSD code, and to share it, all that is required is a link for attribution.
It seems to me there is money to be made in getting model data co-linked with "saw first" references.
Then, for example, after a GitHub-style codebot writes the code for you, it can show a link to "where it learned to help you today!".
There is no technical reason this can't be done, only a business model reason.
That said, I find the comments in this thread strange. Discussion about how tech moves on, and looms and such.
That argument was lost when people could cut up songs and slap them together, or cut up 100 textbooks into one. This is settled by endless laws and case law. It isn't a new argument. Microsoft will lose.
We are talking about creative works being shuffled together and remixed as legal cover for theft. That's all it is. There is ample evidence that these algorithms merely regurgitate what goes in and cannot create something entirely new. Which is, of course, what you'd expect if you understand what is going on under the hood. But it is not what is being sold.
But again… aren't people the same way? No one exists in isolation. The Sir Isaac Newton quote comes to mind:
“If I have seen further, it is by standing on the shoulders of giants”
Edit: to be clear - these algorithms are specifically non-linear and a far cry from 'linear interpolation'. Yes, they involve matrix multiplication, but that does not make them interpolators, unless you want to water down the meaning of interpolation until it's so generic it loses its meaning. Having said all that - the sophistication of the algorithm is beside the point here, as long as what they are generating is substantially transformative (which >99% of the possible outputs are, legally speaking).
I feel like this post by an HN user is pertinent[1].
> Have you ever done any reading on the Luddites? They weren't the anti technology, anti progress social force people think they were.
> They were highly skilled laborers who knew how to operate complex looms. When auto looms came along, factory owners decided they didn't want highly trained, knowledgeable workers they wanted highly disposable workers. The Luddites were happy to operate the new looms, they just wanted to realize some of the profit from the savings in labor along with the factory owners. When the factory owners said no, the Luddites smashed the new looms.
The Luddites went from middle class business owners and craftsmen to utter destitution. Many of the Luddites were tried for machine breaking and were either executed by the state, or exiled to penal colonies. They risked literally everything, because everything was at stake.
I bring this up because people like to pretend the Luddites were some cult of ignorant technophobes, but the reality is that many of us are in the same situation the Luddites were in: highly skilled workers operating complex machinery, with comfortable middle-class lives, until the owners cut them out and their families starved in the streets.
Like a feed-forward chain of matrix multiplications, trained to predict its training data?
No, of course you weren't. That would be FUCKING RIDICULOUS.
Yes. They are angry that their labor was used to create something new and arguably more efficient, but they don't get appropriate compensation for it.
On the plus side, I can imagine this tech empowering artists to create more stuff they previously couldn't. I'm imagining a single person producing a whole animation, which was previously only accessible to companies and teams.
This is not in good faith, please read HN rules.
Rather than attack me (calling me foolish, swearing at me) why don’t you rebut my ideas and have a conversation if you actually have something to contribute.
I’ve read the papers, I’ve worked personally with these systems. I understand them just fine. Notice that I said earlier: “regardless of how simple they are”. I understand you are trying to water them down to be simple interpolation which they definitely are not but even if they were that simple it wouldn’t change the legal calculus here one bit. New art is being generated (far beyond any ‘transformative’ legal test precedent) and any new art that is substantively different from its inputs is legally protectable.
The "product" that chess players produce is not replaceable by ML systems. The game itself, the "fight" of two minds (or one mind against the machine, in the past) is the "product". Watching two chess AIs play against each other can't replace that.
For artists, the product is their output, the art itself. An approximation of that art can also be produced by a ML system now, making artists an unnecessary cost factor[1] for e.g. simple illustrations.
They are not comparable, IMO. Chess players are not replaced by ML systems, artists will be.
> it's unavoidable.
It really isn't. Of course it would be possible to just outlaw the use of things like "the pile", which includes gigabytes of random texts with unknown copyright status. The same goes for any training set that uses images scraped off the web, ignoring any copyright.
Yes, people would still do it, but it would have the same status that piracy has. You can't build a US multi-billion dollar company on piracy (for long), and you wouldn't be able to do so with ML systems that were trained on random stuff from the internet.
I don't think this, in such broad strokes, would be a good thing, to be clear. Such datasets are great for research! But I have a really hard time understanding this defeatism that there is "nothing we can do".
[1] from the perspective of some customers e.g. magazines or ad companies - I don't agree with this
Personally, I don’t think it is likely that copyright laws will change to protect against algorithmic usage (too much precedent in more general reuse cases and for what is considered transformative). Having said that I also don’t think this will be the death of artists by any stretch, some industries will need to change or evolve but it will be just another tool in an artists belt IMO.
It doesn't matter, and never did in the first place. All large models (including SD) are already trained on other models' output, since there's simply no possible way to have a high-quality tagged dataset of the size they need. Smaller models are used to classify the data for larger ones, then the process is repeated for even larger models, with whatever manual data you have. Humans only select the data sources, and otherwise curate the entire bootstrapping process. This kind of curated training actually produces better results.
These algorithms are specifically non-linear and a far cry from 'linear interpolation', unless you want to water down the meaning of interpolation until it's so generic it loses its meaning.
Having said all that - the sophistication of the algorithm is beside the point here, as long as what they are generating is substantially transformative (which >99% of the possible outputs are, legally speaking).
1) DA's ToS stipulates that works of art need to be opted in for inclusion in AI datasets; 2) scraping/downloading art will include a response header with "X-Robots-Tag: noimageai" to indicate that individuals building models should not use the downloaded image in a dataset.
As someone overall empathetic to the artist community (who have had to deal with NFTs, copycats, and a difficult industry), I have no illusions here: this will have a hard time standing up against fair-use law. That being said, if you're looking to build an AI-art model, this might be worth paying attention to regardless of whether you decide to comply.
[1] https://www.deviantart.com/team/journal/Tell-AI-Datasets-If-...
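For illustration, a well-behaved scraper honoring the header described above might check it like this. The header name and value come from the comment; the function name and the plain-dict interface are my own assumptions, not any official API:

```python
def allowed_for_dataset(headers):
    """Return False if the response headers opt this image out of AI
    datasets via an X-Robots-Tag containing "noimageai" (case-insensitive)."""
    for name, value in headers.items():
        if name.lower() == "x-robots-tag" and "noimageai" in value.lower():
            return False
    return True

print(allowed_for_dataset({"X-Robots-Tag": "noimageai"}))  # False
print(allowed_for_dataset({"Content-Type": "image/png"}))  # True
```

Of course, as the thread notes, nothing technically forces a scraper to call a check like this at all; the flag is purely advisory.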
There is a place for AI art generation and there is a place for artists. NFTs were interesting in how they overvalued otherwise mediocre art. These models are interesting in how they bring down the cost and experience needed for making derivative art.
To me, the creativity still lies in someone being able to produce something meaningful. Art is about being able to convey ideas in a way that's impossible to communicate in some other way. An artist is someone that makes art. In that sense everyone who has generated art is an artist. Oversaturating the world with derivative art will only make novel things stand out more.
It's very hard to share a nuanced take on this topic because this argument has become framed in such a binary way. With something like medicine, the value of a doctor's opinion is very clear to a layperson. But when it comes to art, the value of an artist's perspective is not clear at all. However, I think making parallels to music makes it clear for me. AI generated music will replace elevator music at best, but I don't think the public fears ai models will ever replace musicians. At most ai will complement the art creation process. The "soul" and novelty in art will always come from an idea another human wants to communicate.
Is it? Or does the idea of a photo presuppose the painting? Could a camera have been invented by someone not looking at the world through the lens of a very particular tradition of art?
I suspect their indignation is more to do with their work being consumed without their permission, and then turned into a tool that undermines their value. These tools wouldn't exist without the work of artists. I don't think it's fair to act like no injustice has been done.
Up until now the tools being used aren't usually defining the end product. I don't mean library code or song samples, etc. where licensing comes into play. The tools I'm referring to, like an IDE, or art program, or even a paintbrush, might enhance the process of creation but they don't define the product.
With ML the output of the tool becomes a concrete part of the end product. So, surely the copyright ownership for the part of the product generated via ML is not held by the person using the tool. Which means you would have to license the ML generated part of your product from whoever produced the tool.
I have no legal expertise but it definitely feels like there is a dangerous trap in using these tools without that question being answered.
I can highly recommend the book “But is it art?” By Cynthia Freeland to get a better perspective on this topic!
And AI generated art will replace a lot of 'decorative art': perhaps not art that hangs in galleries and provokes thought, but art that people buy because it looks nice on their wall, or as a screen saver, or a t-shirt. If that means there's less demand for humans producing this kind of artwork, there will be fewer people making it and fewer good training images.
In "high art" there's always been artists like Jeff Koons or Damien Hirst who direct other artists and technicians in the production of their artwork, or even apprentices painting large parts of a master renaissance painters work. With AI generated Art I can't help seeing a future where the brand becomes more important than the art - images created from the description/thoughts of Lady Gaga/Kanye West
Notably, in your example both are certainly legal; they vary only in the level of controversy around them, and on that level I would agree. Scaled-up processes do tend to attract more controversy.
And the "idea" of a frozen, materialized, two dimensional projection of what we see with our eyes, aka an image, transcends cultures and tools.
It does not depend on a particular tradition of art. You might argue that it depends on human culture, but that's a different thing.
Also, making a portrait photo does not need concrete instances of portrait paintings.
So I don't really follow your argument.
Also, yes, I'd say the invention of the camera could be motivated by reasons that have nothing to do with art at all (documenting the physical world). The lines get blurry depending on how you define "art".
But none of that implies that the invention of the camera depends on recycling prior art.
An image AI is incapable of depicting something that's entirely missing from its training data.
I wonder how much this applies to other fields. Corporate art and imagery for sure, but I can also see a whole bunch of the low-cost illustration and commission work getting completely gutted, leaving no worthwhile space for an entry-level human to build from.
This changes the design and availability of the software tools, the willingness for educational institutions to engage in these topics, and may even reduce the idea of a professional artist back to high art only (and we only need like 50 artists a year thanks).
That would effectively make the opt-out worthless.
And on the contrary, let's say you managed to push the state of the art - say you developed a more efficient fast Fourier transform - how would you go about charging money for that?
Isn't Google's search business built on "unethically sourced data"? They keep a pirated copy of every website they encounter and feed it to their algorithms.
Isn't human culture by definition built on "unethically sourced data" remixed by the human brain? Example: imagine you are creating a punk band. You will use all your background knowledge of what punk is, the bands you like, and maybe, if you're creative, an unexpected source of inspiration from outside the world of punk. How is that essentially different from how Stable Diffusion works?
Another good example is the "Who let the dogs out" song. There's an article / podcast at https://99percentinvisible.org/episode/whomst-among-us-let-t... where they try to find the origin. At some point even the creators don't really know where the source of the inspiration came from but some of the sources are geographically close which seems to point to a common source. Some of the variations seem quite different, some are pretty close.
Overall I think this is just computers replacing some human capabilities, like machines in factories. You lose most of the poetics in the artistry of a human doing something by hand and gain speed: doing x per second instead of y per month. If you need the symbolism and the poetics of art, you'll keep using a human. If you need to generate a thousand variations of an idea, you'll use Stable Diffusion.
A quote attributed to Mark Twain says “A photograph is a most important document, and there is nothing more damning to go down to posterity than a silly, foolish smile caught and fixed forever.“
For professional artists who do it for the money, yes, that's true.
For amateur artists, the product can be the process, the flow of creating art. Furthermore, I'd say a lot of art isn't about conveying an idea or whatever. You see something, you paint it, because you like it, and give it your own spin. Maybe the end result is good, maybe not. Often enough, the art becomes "valuable" because others give it some new context.
> Chess players are not replaced by ML systems, artists will be.
Traditional artists working with real materials won't be. They might even see new interest, because digital art will be flooded with spam.
Or a magazine does, what the art scene has been about forever: Hire artists because of their name or their background.
The job "digital artist" was created roughly 20-30 years ago and now is transformed to something else or might become obsolete. Bummer for digital artists, but not sure if this will destroy "artists" in general.
It's a problem of scale.
> Because if anything, I think AI-generated art is in the process of disproving this exact hypothesis
But it's not creating anything; it's regurgitating its training material (through a suitably fine blender) in the way that scores best. These models are nothing without the actual art they've appropriated.
However, I think the main argument is less about the artistic merit of AI generated art and more about the impact on artists' ability to be artists when one of their means of making a living is removed from them. Elevator music and office artwork were the source of income for many, allowing for the pursuit of more complex and long-term projects. It was art insomuch as it was a creative endeavor, but I'm not sure how many artists believed that such pieces were their true expression.
A lot of that is now quite easily replaceable by anyone with a bit of time, a few source sample images, some keyword manipulation, and a computer as "simple" as a MacBook. Music generation likely isn't far off, and I think Meta even demo'd some AI-Video generator.
Automation should serve the people, and I have no doubt that at some point in a nicer future it will be a boon where we can have "nice things" and a lot of expression that wasn't previously possible. In the interim, there is a slew of people whose means of living are heavily at risk. Patreon et al. aren't going to be enough to sustain every single artist, and like a lot of automation advances in the past, it will disenfranchise a rather large population. Besides that, not everyone can just draw furry porn for big commissions.
I think that this should concern a lot of tech persons also who imagine themselves protected from this as the human element of programming, technology design, etc, simply "cannot be replicated", but I think that projects like Co-Pilot are showing that there is a huge focus on replicating AI-Art in the same way, and similar to artists, a lot of programmers are having their code forced into the system to remove their agency, and with no compensation. The very act of producing something in order to sustain one's self is now also an act of self-destruction as the product feeds the AI more data.
I think the technology and the potential benefits of AI generated X is great and it's a step towards removing a lot of the petty grunt-work that is required to make the world work. The big question is how are we going to keep the lights on for people if there isn't a system in place to ensure that people can sustain themselves? Current social safety nets don't cut it, and socially there is still a huge opposition towards creating better safety nets.
I suspect that's why there is such concern over AI-Generated anything; the classic thinking and creative work that was a safe place from automation is now automated, and the world doesn't look ready yet to make the leap to societies that have automated away the need for menial tasks for everyone and provide everyone a pretty nice standard of living.
I mean, countless books are being written each year, and quality still needs to be curated (or trash promoted with lots of money).
Yes. Do two wrongs make a right? An accusation of hypocrisy is not an argument.
> Isn't by definition human culture built on "unethical sourced data" remixed by the human brain?
Do not compare AI to human brains. They do not work the same at all, but however similar they are (or might become), they are legally distinct, since copyright law is meant to encourage humans, not AI, to create works.
The problem is people at large companies creating these AI models, wanting the freedom to copy artists’ works when using it, but these large companies also want to keep copyright protection intact, for their regular business activities. They want to eat the cake and have it too. And they are arguing for essentially eliminating copyright for their specific purpose and convenience, when copyright has virtually never been loosened for the public’s convenience, even when the exceptions the public asks for are often minor and laudable. If these companies were to argue that copyright should be eliminated because of this new technology, I might not object. But now that they come and ask… no, they pretend to already have, a copyright exception for their specific use, I will happily turn around and use their own copyright maximalist arguments against them.
It isn't "inspiration". These machine models aren't actually intelligent. There's no expressive element here, with regard to the machine producing the art.
What it really is is just a new tool for producing art. The sculptor had his chisel, the painter had a paintbrush, the photographer had a camera, the graphic designer had Photoshop or whatever, and now you can make art by being skillful in coming up with a prompt. It still requires skill, skill with the tool, just like anything else.
The difference is that this new tool (probably) doesn't enable the creation of anything truly novel.
It would chill the whole ML space significantly for decades, IMO, as the only truly safe data would be synthetic or licensed. This can work for some applications (e.g. Microsoft used synthetic data for facial landmark recognition[1]), but it would kill DALL-E 2 et al.
But in any case, the difference is huge between writing a one-sentence prompt and choosing and arranging a motif, arranging the photo setting and lighting (unless it's a paparazzi snapshot), getting and paying the models, and so on. If you compare this to prompt-based AI, some complex prompts may be judged as creative work; others are too simple to count. Changing the color, contrast and sharpness of an existing image also doesn't necessarily count as original artwork (but see some of Andy Warhol's work for differing opinions, of course).
Humans and artists aren't machines that other humans created; they interpret or copy, they don't interpolate.
The thing about computers/computing is that being better at a task usually gives someone a commercial advantage; finding them and exchanging money for the implementation seems fairly straightforward...
Is it a licence (in practice) for the right to the brand name, or really to the look? Could Forza make a model that looks almost exactly like a Ferrari but name it Furrari?
You know you're comparing the best example of photography with the worst example of using an AI, right?
A photographer may carefully study their craft and set up their shots. They can also be someone who puts no thought at all into taking an almost random photo. Same as someone can carefully tune the parameters of a model and refine their prompt until they get something which meets their exacting specifications, or they can be someone who grunts into a text box and clicks generate.
Playing around with Stable Diffusion running locally, comparing my output to the things held up as AI art removes any doubt in my mind as to whether the creators are artists. They are.
AI will never cut off their ear or do something artistically bizarre. It's usually the man or woman behind the art that makes art art.
They even gave a linear equation in their example… again, not even close. If we can call what these algorithms do interpolation, we can call what humans do interpolation too - it makes the word meaningless.
Untrue - this legally falls under the million precedent cases that have come before it - if the derived work (be it by algorithm or by human brain) is substantially transformed it is perfectly legal.
Not sure I would agree with that. Granted, there may be a cultural component in the mix somewhere, but as someone who has painted many faces from observation, the fugitive nature of a smile presents almost insurmountable problems. Frans Hals (below) could do it because he painted insanely quickly.
https://www.art-prints-on-demand.com/a/hals-frans/thelaughin...
https://images.prismic.io/barnebys/a671f804-2e03-4541-afa0-9...
https://az333960.vo.msecnd.net/images-9/laughing-boy-frans-h...
The key issue is that a smile involves the eyes as much as the face. This cannot be faked without a frozen effect. Example:
https://images7.alphacoders.com/694/694598.jpg
As for photography, the long exposures of early photography made the capture of anything fugitive an impossibility. However, the moment snapshot photography was invented (Kodak's Box Brownie), smiles were being photographed all the time.
My current opinion is yes. See Fedsearch and the whole controversy around it recently. Some people don’t like their data being scraped or studied without their consent, even if you could technically visit it.
I enable noindex by default on my Mastodon instance.
I'm doing a personal experiment now where I don't use Google or any other crawler-based search engine. I heavily use links I get from other people, bookmarks, portals, "a webpage full of cool links", and browsing history.