
695 points by crescit_eundo | 3 comments
swiftcoder No.42144784
I feel like the article neglects one obvious possibility: that OpenAI decided chess was a benchmark worth "winning", special-cased chess within gpt-3.5-turbo-instruct, and then neglected to carry that special case over to follow-up models since it wasn't generating sustained press coverage.
scott_w No.42145811
I suspect the same thing. Rather than LLMs “learning to play chess,” they “learnt” to recognise a chess game and hand over instructions to a chess engine. If that’s the case, I don’t feel impressed at all.
antifa No.42146415
TBH I think a good AI would have access to a Swiss army knife of tools and know how to use them. For a complicated math equation, for example, using a calculator is just smarter than doing it in your head.
PittleyDunkin No.42146582
We already have the chess "calculator", though. It's called Stockfish. I don't know why you'd ask a dictionary how to solve a math problem.
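The delegation pattern being debated here can be sketched in a few lines: a "chatty" front-end that recognises when a specialised engine clearly applies and routes the prompt there instead of answering directly. Everything below is illustrative — the handler names are placeholders, the FEN and arithmetic regexes are rough heuristics, and no real engine or model API is invoked.

```python
import re

# Rough heuristic: a chess position in FEN notation
# (8 ranks, side to move, castling rights, en passant, clocks).
FEN_RE = re.compile(
    r"^([pnbrqkPNBRQK1-8]+/){7}[pnbrqkPNBRQK1-8]+ [wb] "
    r"(K?Q?k?q?|-) (-|[a-h][36]) \d+ \d+$"
)

# Rough heuristic: pure arithmetic (digits and operators only).
MATH_RE = re.compile(r"^[\d\s+\-*/().]+$")

def dispatch(prompt: str) -> str:
    """Route a prompt to a specialised tool when one clearly applies."""
    text = prompt.strip()
    if FEN_RE.match(text):
        return "chess_engine"    # e.g. hand the position to Stockfish
    if MATH_RE.match(text):
        return "calculator"      # exact arithmetic, no mental math
    return "language_model"      # default: answer conversationally
```

In this sketch, `dispatch("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1")` routes to the chess engine, while ordinary prose falls through to the language model — the "AI with a Swiss army knife" idea in miniature.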
1. the_af No.42147106
A generalist AI with a "chatty" interface that delegates to specialized modules for specific problem-solving seems like a good system to me.

"It looks like you're writing a letter" ;)

2. datadrivenangel No.42147436
Let's clip this in the bud before it grows wings.
3. nuancebydefault No.42150584
It looks like you're having déjà vu