
467 points | mraniki | 1 comment
phkahler No.43534852
Here is a real coding problem that I might be willing to make a cash-prize contest for. We'd need to nail down some rules. I'd be shocked if any LLM can do this:

https://github.com/solvespace/solvespace/issues/1414

Make a GTK 4 version of Solvespace. We have a single C++ file for each platform - Windows, Mac, and Linux-GTK3. There is also a QT version on an unmerged branch for reference. The GTK3 file is under 2 KLOC. You do not need to create a new version; just rewrite the GTK3 Linux version to GTK4. You may either ask it to port what's there or create the new one from scratch.

If you want to do this for free to prove how great the AI is, please document the entire session. Heck, make a YouTube video of it. The final test is whether I accept the PR or not - and I WANT this ticket done.

I'm not going to hold my breath.

replies(15): >>43534866 #>>43534869 #>>43535026 #>>43535180 #>>43535208 #>>43535218 #>>43535261 #>>43535424 #>>43535811 #>>43535986 #>>43536115 #>>43536743 #>>43536797 #>>43536869 #>>43542998 #
jchw No.43536743
I suspect it probably won't work - not necessarily because an LLM architecture could never perform this type of work, but because LLMs work best when the training set contains an inordinate amount of sample data. I'm actually quite shocked at what they can do in TypeScript and JavaScript, but they're definitely a bit less "sharp" when it comes to stuff outside of that zone, in my experience.

The ridiculous amount of data required to get here hints, in my opinion, that something is wrong.

I'm not sure if we're totally on the same page, but I understand where you're coming from here. Everyone keeps talking about how transformational these models are, but when push comes to shove, the cynicism isn't out of fear or panic, it's disappointment over and over and over. Like, if we had an army of virtual programmers fixing serious problems for open source projects, I'd be more excited about the possibilities than worried about the fact that I just lost my job. Honest to God. But the thing is, if that really were happening, we'd see it. And it wouldn't have to be forced and exaggerated all the time; it would be plainly obvious, like the way AI art has absolutely flooded the Internet... except I don't give a damn if code is soulless as long as it's good, so it would possibly be more welcome. (The only issue is that it would most likely actually suck when that happens, and would rather just be functional enough to get away with, but I like to try to be optimistic once in a while.)

You really make me want to try this, though. Imagine if it worked!

Someone will probably beat me to it if it can be done, though.

replies(5): >>43537512 #>>43538902 #>>43539761 #>>43541786 #>>43552468 #
jay_kyburz No.43538902
So yesterday I wanted to convert a color palette I had in Lua, stored as three RGB ints per color, to JavaScript 0x000000 notation. I sighed, rolled my eyes, but before I started this incredibly boring, mindless task, I asked Gemini if it would just do it for me. It worked, and I was happy, and I moved on.

Something is happening; it's just not as exciting as some people make it sound.
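The conversion itself is mechanical, which is why it's such a low-risk LLM task. A sketch in JavaScript (the palette values here are made up for illustration; the original Lua data isn't in the thread):

```javascript
// Hypothetical palette: three RGB ints per color, as in the Lua original.
const palette = [
  [255, 128, 0],
  [0, 64, 255],
  [17, 17, 17],
];

// Pack one [r, g, b] triple into a single 0xRRGGBB number.
function rgbToHex([r, g, b]) {
  return (r << 16) | (g << 8) | b;
}

// Render as the 0x000000-style literals mentioned above.
const literals = palette.map(
  (c) => "0x" + rgbToHex(c).toString(16).padStart(6, "0")
);

console.log(literals.join(",\n")); // 0xff8000, 0x0040ff, 0x111111
```

The `padStart` matters: without it, colors with a small red component (like the second one) lose their leading zeros.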

replies(1): >>43538992 #
jchw No.43538992
Be a bit more careful with that particular use case. It usually works, but depending on circumstances, LLMs have a relatively high tendency to start making the wrong correlations and give you results that are not actually accurate. (Colorspace conversions make it more obvious, but I think even simpler problems can get screwed up.)

Of course, for that use case, you can _probably_ handle it with a bit of text processing in your tool of choice, without LLMs. (Or have LLMs write the text-processing pipeline instead.)
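A deterministic pipeline for the palette conversion above might look like this (the Lua snippet and names are hypothetical; this just matches `{r, g, b}` triples and rewrites each one, with no model in the loop to hallucinate values):

```javascript
// Hypothetical Lua palette source, pasted as a string for illustration.
const luaSource = `
palette = {
  {255, 128, 0},
  {0, 64, 255},
}
`;

// Find each inner {r, g, b} triple and emit a 0xRRGGBB literal for it.
// The outer table brace never matches because it isn't followed by a digit.
const jsLiterals = [...luaSource.matchAll(/\{\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*\}/g)]
  .map(([, r, g, b]) =>
    "0x" +
    ((Number(r) << 16) | (Number(g) << 8) | Number(b))
      .toString(16)
      .padStart(6, "0")
  );

console.log(jsLiterals); // [ '0xff8000', '0x0040ff' ]
```

Unlike the LLM route, this either works or fails loudly; it can't quietly swap two channel values halfway down a long palette.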