
321 points by laserduck | 1 comment
fsndz
They want to throw LLMs at everything, even where it makes no sense. The same is true of the whole AI agent craze: https://medium.com/thoughts-on-machine-learning/langchains-s...
marcosdumay
It feels like the entire world has gone crazy.

Even the one idea the article takes seriously is throwing unreliable LLMs at verification! If there's anywhere you can afford a tool that doesn't work most of the time, I guess verification is it.

ajuc
It's similar in regular programming: LLMs are better at writing test code than production code. Mostly because tests are simpler (checking a solution is easier than producing one, the intuition behind P vs NP), but I think also because it's less obvious when test code doesn't work.

Replace all the asserts with expected == expected and most people won't notice.
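A minimal sketch of what such a vacuous test looks like, here in Ruby's minitest (the class, method, and values are made up for illustration): the assertion compares the expected value to itself, so it passes no matter what the code under test does.

    require "minitest/autorun"

    class DiscountTest < Minitest::Test
      def test_discount_is_applied
        expected = 90
        # Vacuous: compares expected to itself, so this passes
        # regardless of what apply_discount actually returns.
        assert_equal expected, expected
        # What the test was presumably meant to check:
        # assert_equal expected, apply_discount(100, 10)
      end
    end

In a diff or code review, the broken line is one token away from the real one, which is why it slips past readers so easily.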

jeltz
> Replace all the asserts with expected == expected and most people won't notice.

Those tests were very common back when I worked in Ruby on Rails, when automatically generating test stubs was a popular practice. The stubs were often just converted into expected == expected tests so that they passed, then left that way.
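For reference, a sketch of the placeholder that Rails generators emitted in that era (it only runs inside a Rails app with a test_helper; UserTest is whatever model was scaffolded). The generated assertion passes unconditionally, and the pattern described above amounts to leaving it, or an expected == expected equivalent, in place:

    require "test_helper"

    class UserTest < ActiveSupport::TestCase
      # The generated placeholder: asserts a constant truth,
      # so it passes without exercising any application code.
      test "the truth" do
        assert true
      end
    end

The test suite stays green, the coverage count goes up, and nothing is actually verified.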