181 points by thunderbong | 1 comment | source
mycentstoo ◴[] No.45083181[source]
I believe choosing a well-known problem space in a well-known language certainly influenced a lot of the behavior. An AI’s usefulness is strongly correlated with its training data, and there has no doubt been a significant amount of data on both this problem space and Python.

I’d love to see how this compares when either the problem space or the language/ecosystem is different.

It was a great read regardless!

replies(5): >>45083320 #>>45085533 #>>45086752 #>>45087639 #>>45092126 #
Insanity ◴[] No.45083320[source]
100% this. I tried haskelling with LLMs, and the performance was worse than with Go.

Although, in fairness, that was a year ago, on GPT-3.5, IIRC.
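For concreteness, this is the kind of small, everyday Haskell task I mean (a sketch, not the actual code from that session):

    -- Illustrative only: a small, idiomatic Haskell task of the sort
    -- you might hand to an LLM, mixing monadic IO with pure helpers.
    import Data.Maybe (mapMaybe)
    import Text.Read (readMaybe)

    -- Pull every parseable integer out of whitespace-separated input,
    -- silently skipping tokens that don't parse.
    parseInts :: String -> [Int]
    parseInts = mapMaybe readMaybe . words

    main :: IO ()
    main = do
      input <- getContents            -- lazily read all of stdin
      print (sum (parseInts input))   -- sum whatever parsed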

replies(6): >>45083408 #>>45083590 #>>45083706 #>>45085045 #>>45085275 #>>45085640 #
1. danielbln ◴[] No.45083408[source]
Post-training in all frontier models has improved significantly wrt programming language support. Take Elixir, which LLMs could barely handle a year ago, but now support has gotten really good.