
181 points | thunderbong | 3 comments
mycentstoo
I believe choosing a well-known problem space in a well-known language certainly influenced a lot of the behavior. An AI's usefulness correlates strongly with its training data, and there has no doubt been a significant amount of data about both this problem space and Python.

I'd love to see how this compares when either the problem space or the language/ecosystem is different.

It was a great read regardless!

Insanity
100% this. I tried Haskelling with LLMs, and their performance was noticeably worse than with Go.

Although, in fairness, that was a year ago on GPT-3.5, IIRC.
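To give a concrete (purely hypothetical) example of the scale of task I mean — a word-frequency counter, not something from my actual sessions — older models tended to write imperative, Go-flavored Haskell where the idiomatic version is a short pipeline over a Map:

    -- Hypothetical illustration only: small, idiomatic Haskell of the
    -- kind older models often rendered in an imperative style instead.
    import qualified Data.Map.Strict as Map
    import Data.Char (isAlpha, toLower)

    -- Normalize to lowercase letters, split into words, and count
    -- occurrences with a strict Map.
    wordFreq :: String -> Map.Map String Int
    wordFreq =
        Map.fromListWith (+)
      . map (\w -> (w, 1))
      . words
      . map (\c -> if isAlpha c then toLower c else ' ')

    main :: IO ()
    main = print (wordFreq "the quick brown fox jumps over the lazy dog")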

r_lee
I'm not sure I'd say "100% this" if I were talking about GPT-3.5...
verelo
Yeah, 3.5 was good when it came out, but frankly anyone reviewing AI for coding without using Sonnet 4.1, GPT-5, or an equivalent model really isn't aware of what they've been missing.
Insanity
Yeah, that's a fair point. I had assumed it would remain relatively similar, given that the training data for languages like Haskell is smaller than for languages like Python and JavaScript.