
358 points andrewstetsenko | 1 comment
sysmax No.44360302
AI can very efficiently apply common patterns to vast amounts of code, but it has no inherent "idea" of what it's doing.

Here's a fresh example that I stumbled upon just a few hours ago. I needed to refactor some code that first computes the size of a popup, and then separately, the top left corner.

For brevity, one part used an "if", while the other one had a "switch":

    if (orientation == Dock.Left || orientation == Dock.Right)
        size = /* horizontal placement */
    else
        size = /* vertical placement */

    var point = orientation switch
    {
        Dock.Left => ...,
        Dock.Right => ...,
        Dock.Top => ...,
        Dock.Bottom => ...,
    };
I wanted the LLM to refactor it to store the position rather than applying it immediately. It turned out it just could not handle two different constructs (an if vs. a switch) doing a similar thing. I tried several variations of prompts, but it leaned very strongly toward either two ifs or two switches, despite rather explicit instructions not to do so.
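
For concreteness, the end state I was after looks roughly like the sketch below; the helper name, parameters, and geometry math are illustrative placeholders, not the real code. The point is that the if/switch mix survives, and the placement is returned for storage instead of being applied on the spot:

    // Illustrative sketch only: same if/switch mix as before, but the
    // computed placement is returned so the caller can store it.
    static Rect ComputePlacement(Dock orientation, Size content, Rect anchor)
    {
        Size size;
        if (orientation == Dock.Left || orientation == Dock.Right)
            size = new Size(content.Width, anchor.Height);   // horizontal placement
        else
            size = new Size(anchor.Width, content.Height);   // vertical placement

        var point = orientation switch
        {
            Dock.Left => new Point(anchor.Left - size.Width, anchor.Top),
            Dock.Right => new Point(anchor.Right, anchor.Top),
            Dock.Top => new Point(anchor.Left, anchor.Top - size.Height),
            Dock.Bottom => new Point(anchor.Left, anchor.Bottom),
            _ => throw new ArgumentOutOfRangeException(nameof(orientation)),
        };

        return new Rect(point, size);   // stored by the caller, applied later
    }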

It sort of makes sense: once the model has "completed" an if, and then encounters the need for a similar thing, it will pick an "if" again, because, well, it is completing the previous tokens.

Harmless here, but in many slightly less trivial examples, it would just steamroll over nuance and produce code that appears good, but fails in weird ways.

That said, splitting tasks into smaller parts devoid of such ambiguities works really well. It is way easier to say "store size in m_StateStorage and apply on render" than to manually edit 5 different points in the code. Especially with stuff like Cerebras, which can chew through complex code at several kilobytes per second, expanding simple thoughts faster than you could physically type them.
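
A minimal sketch of that split, with hypothetical scaffolding: m_StateStorage is the only name taken from the comment above, while PopupState, UpdatePlacement, and ApplyBounds are made up for illustration (ComputePlacement is reused from the earlier sketch):

    // Compute-and-store is kept separate from render-time application.
    sealed class PopupState
    {
        public Rect Bounds;   // placement computed up front
    }

    PopupState m_StateStorage;

    void UpdatePlacement(Dock orientation, Size content, Rect anchor)
    {
        // Pure computation; the popup does not move here.
        m_StateStorage = new PopupState
        {
            Bounds = ComputePlacement(orientation, content, anchor)
        };
    }

    void OnRender()
    {
        // The single place that consumes the stored state.
        if (m_StateStorage != null)
            ApplyBounds(m_StateStorage.Bounds);   // hypothetical: moves/resizes the popup
    }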

replies(2): >>44360561 >>44360985
gametorch [dead post] No.44360561
[flagged]
npinsker No.44360703
Sweeping generalizations about how LLMs will always (someday) be able to do arbitrary X, Y, and Z don't really capture me either.
replies(1): >>44360737
gametorch [dead post] No.44360737
[flagged]
agentultra No.44360773
Until the day that thermodynamics kicks in.

Or the current strategies to scale across boards instead of chips get too expensive in terms of cost, capital, and externalities.

replies(1): >>44360798
gametorch No.44360798
I mean, fair enough, I probably don't know as much about hardware and physics as you do.
replies(1): >>44360933
agentultra No.44360933
Just pointing out that there are limits and there’s no reason to believe that models will improve indefinitely at the rates we’ve seen these last couple of years.
replies(1): >>44361007
soulofmischief No.44361007
There is reason to believe that humans will keep trying to push the limitations of computation and computer science, and that recent advancements will greatly accelerate our ability to research and develop new paradigms.

Look at how well Deepseek performed with the limited, outdated hardware available to its researchers. And look at what demoscene practitioners have accomplished on much older hardware. Even if physical breakthroughs ceased or slowed down considerably, there is still a ton left on the table in terms of software optimization and theory advancement.

And remember just how young computer science is as a field, compared to other human practices that have been around for hundreds or thousands of years. We have so much to figure out, and as knowledge begets more knowledge, we will continue to figure out more things at an increasing pace, even if it requires increasingly large amounts of energy and human capital to make a discovery.

I am confident that if it is at all possible to reach human-level intelligence, at least in specific categories of tasks, we're gonna figure it out. The only real question is whether access to energy and resources becomes a bigger problem in the future, given humanity's current, extraordinarily unsustainable path and the risk of nuclear conflict or sustained supply chain disruption.

replies(3): >>44362851 >>44362891 >>44366737