317 points | laserduck | 1 comment
myflash13 No.42171710
Anything that requires deep "understanding" or novel invention is not a job for a statistical word regurgitator. I've yet to see a single example, in any field, of an LLM inventing something truly novel (as judged by the experts in that space). Where LLMs shine is in producing boilerplate -- though that is super useful. So far I have seen nothing resembling an original "thought" from an LLM (and I use AI at work every day).
replies(3): >>42172675 #>>42176079 #>>42177501 #
mycall No.42177501
There are several AI systems producing what looks like original "thought":

ESM3: https://www.evolutionaryscale.ai/blog/esm3-release

AlphaProof/AlphaGeometry2: https://deepmind.google/discover/blog/ai-solves-imo-problems...

MatPilot discovering new materials: https://arxiv.org/abs/2411.08063

Then of course there is NVIDIA Omniverse with its digital-twin learning.

https://blog.google/technology/ai/google-ai-big-scientific-b...

replies(1): >>42180712 #
myflash13 No.42180712
Taking a quick glance at all of these, they seem to be either aspirational or a "brute force" kind of search, which computers were good at long before AI. None of it looks like novel research to me: the parameters and methods are set by humans, and these systems search within a well-defined space.
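To make the "search within a well-defined space" point concrete, here is a toy sketch (not based on any of the systems linked above): a human fixes an objective function and a bounded grid of candidates, and the machine's only job is to enumerate and rank them. However clever the search strategy, nothing outside the human-defined space can ever be proposed.

```python
import itertools

def objective(x, y):
    """Toy fitness function standing in for an expensive evaluation
    (e.g. a simulation scoring a candidate design)."""
    return -(x - 3) ** 2 - (y + 1) ** 2

# The search space is fixed by a human up front: the system can only
# ever return points inside this grid, however it searches.
grid = itertools.product(range(-5, 6), range(-5, 6))

# Exhaustive ("brute force") enumeration of the space.
best = max(grid, key=lambda p: objective(*p))
print(best)  # (3, -1): the grid point maximizing the objective
```

Smarter methods (Bayesian optimization, RL, learned proposal models) replace the exhaustive loop with a guided one, but the space and the scoring rule are still specified by people.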