
181 points | thunderbong | 1 comment
mycentstoo No.45083181
I believe choosing a well-known problem space in a well-known language certainly influenced a lot of the behavior. An AI's usefulness correlates strongly with its training data, and there has no doubt been a significant amount of data about both this problem space and Python.

I’d love to see how this compares when either the problem space or the language/ecosystem is different.

It was a great read regardless!

Insanity No.45083320
100% this. I tried Haskell with LLMs, and their performance was noticeably worse than with Go.

Although, in fairness, this was a year ago on GPT 3.5, IIRC.

diggan No.45083590
> Although in fairness this was a year ago on GPT 3.5 IIRC

GPT-3.5 was impressive at the time, but today's SOTA models (like GPT-5 Pro) are night and day by comparison, both in producing better code across a wider range of languages (I mostly write Rust and Clojure; it handles those fine now but was awful with 3.5) and, more importantly, in following the instructions in your user/system prompts. That makes it easier to get higher-quality code out of it, as long as you can put into words what "higher quality code" means for you.
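As a concrete illustration of "putting into words what higher quality code means": one common pattern is to encode those criteria in a system prompt, using the role-tagged message format that most chat-completion APIs accept. The specific criteria and the helper function below are hypothetical examples, not from this thread; a minimal sketch, with no API call made:

```python
# Sketch: stating explicit code-quality criteria in a system prompt.
# The criteria below are illustrative (Haskell-flavored, matching the
# thread's example); swap in whatever "higher quality" means to you.

QUALITY_CRITERIA = [
    "Prefer total functions; avoid partial functions like `head`.",
    "Give explicit type signatures on all top-level definitions.",
    "Return Either/Result-style values instead of throwing errors.",
]

def build_messages(task: str) -> list[dict]:
    """Wrap a coding task with a system prompt stating quality criteria."""
    system_prompt = (
        "You are a careful Haskell programmer.\n"
        "All code you produce must satisfy these criteria:\n"
        + "\n".join(f"- {c}" for c in QUALITY_CRITERIA)
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": task},
    ]

msgs = build_messages("Write a function that parses a config file.")
print(msgs[0]["role"])  # the quality criteria ride along as the system message
```

The resulting `msgs` list can be passed as the `messages` argument to any chat-completion-style client; the point is only that the quality bar is spelled out in text the model can follow.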