
721 points by ralusek | 2 comments
ryandrake
I'm making some big assumptions about Adobe's product ideation process, but this seems like the "right" way to develop AI products: find a user need that can't easily be solved with traditional methods and algorithms, decide that AI is appropriate for it, and then build an AI system to solve it.

Rather than what many BigTech companies are currently doing: "Wall Street says we need to 'Use AI Somehow'. Let's invest in AI and Find Things To Do with AI. Later, we'll worry about somehow matching these things with user needs."

jthacker
This is certainly a great, immediately useful tool, but it's also a relatively small ROI: both the return and the investment are small. Big tech is aiming for a much bigger return on a clearly bigger investment. In the meantime, that's potentially going to look like a lot of useless stuff. Also, if it weren't for big tech and its big investments, these tools and models wouldn't exist at this level of sophistication for others to use in applications like this one.
HarHarVeryFunny
While the press lumps it all together as "AI", you have to differentiate LLMs (driven by big tech and big money) from the unrelated image/video generative models and approaches such as diffusion, NeRF, and Gaussian splatting, which have their roots in academia.
copperx
LLMs don't have their roots in academia?
withinboredom
Not anymore.
stavros
This makes no sense. A thing's roots don't change; either it started there or it didn't.
HarHarVeryFunny
It didn't.

At least, the Transformer didn't. The abstract idea of a language model goes way back within the field of linguistics, though: people were building simplistic N-gram models before ever using neural nets, then other types of neural net such as LSTMs and even CNNs, before Google invented the Transformer (primarily with the goal of fully exploiting the parallelism available from GPUs, which couldn't be done with a recurrent model like the LSTM, since each step depends on the output of the previous one).
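
To make the parallelism point concrete, here's a minimal NumPy sketch (toy dimensions and random weights, nothing taken from any real model) contrasting the shape of the two computations: the recurrent update has an unavoidable loop over time, while self-attention handles every position in a few matrix multiplies.

    import numpy as np

    T, d = 8, 4                       # toy sequence length and hidden size
    x = np.random.randn(T, d)         # one embedded token per row

    # Recurrent (LSTM-style) processing: h[t] depends on h[t-1],
    # so the T steps are forced to run one after another.
    W, U = np.random.randn(d, d), np.random.randn(d, d)
    h = np.zeros(d)
    for t in range(T):
        h = np.tanh(x[t] @ W + h @ U)   # must wait for the previous step

    # Self-attention: all positions attend to all others at once.
    # The whole sequence is a few big matmuls, which is exactly
    # what GPUs are good at.
    Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.T / np.sqrt(d)                   # (T, T) in one shot
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    out = weights @ V                               # (T, d), no time loop

Gates, causal masking, multiple heads, etc. are all omitted; the only point is that one computation is inherently sequential and the other isn't.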