
625 points lukebennett | 3 comments
1. atomsatomsatoms (No.42139072)
At least they can generate haikus now
replies(1): >>42139466 #
2. Der_Einzige (No.42139466)
In general, no, they can't:

https://gwern.net/gpt-3#bpes

https://paperswithcode.com/paper/most-language-models-can-be...

The appearance of improvement in that capability is due to the vocabularies of modern LLMs increasing. Still only putting lipstick on a pig.
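The BPE argument can be sketched with a toy example. The subword split below is hypothetical, not any real model's vocabulary: the point is that a model consumes opaque token IDs rather than characters, so syllable boundaries are invisible to it, and a token boundary can even cut through a vowel cluster.

```python
import re

def estimate_syllables(word: str) -> int:
    """Rough English syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

# A model never sees "beautiful" letter by letter; it might instead see
# subword pieces like these (a hypothetical split, for illustration):
word = "beautiful"
hypothetical_tokens = ["bea", "ut", "iful"]

# The whole word has 3 vowel groups (eau-i-u), but the token boundary
# splits the "eau" cluster, so counting per-token gives a different answer.
print(estimate_syllables(word))                              # 3
print(sum(estimate_syllables(t) for t in hypothetical_tokens))  # 4
```

The vowel-group heuristic is itself crude, but it makes the structural point: any syllable count requires character-level access that the tokenized input doesn't directly provide.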

replies(1): >>42139732 #
3. falcor84 (No.42139732)
I don't see how results from 2 years ago have any bearing on whether the models we have now can generate haikus (which, in my experience, they absolutely can).

And if your "lipstick on a pig" argument is that even when they generate haikus, they aren't really writing haikus, then I'll link to this other gwern post, about how they'll never really be able to solve the Rubik's Cube: https://gwern.net/rubiks-cube