358 points andrewstetsenko | 37 comments
1. sysmax ◴[] No.44360302[source]
AI can very efficiently apply common patterns to vast amounts of code, but it has no inherent "idea" of what it's doing.

Here's a fresh example that I stumbled upon just a few hours ago. I needed to refactor some code that first computes the size of a popup, and then separately, the top left corner.

For brevity, one part used an "if", while the other one had a "switch":

    if (orientation == Dock.Left || orientation == Dock.Right)
        size = /* horizontal placement */
    else
        size = /* vertical placement */

    var point = orientation switch
    {
        Dock.Left => ...
        Dock.Right => ...
        Dock.Top => ...
        Dock.Bottom => ...
    };
I wanted the LLM to refactor it to store the position rather than applying it immediately. Turns out, it just could not handle two different constructs (if vs. switch) doing a similar thing. I tried several variations of prompts, but it leaned very strongly toward either two ifs or two switches, despite rather explicit instructions not to do so.

It sort of makes sense: once the model has "completed" an if, and then encounters the need for a similar thing, it will pick an "if" again, because, well, it is completing the previous tokens.

Harmless here, but in many slightly less trivial examples, it would just steamroll over nuance and produce code that appears good, but fails in weird ways.

That said, splitting tasks into smaller parts devoid of such ambiguities works really well. Way easier to say "store size in m_StateStorage and apply on render" than to manually edit 5 different points in the code. Especially with stuff like Cerebras, which can chew through complex code at several kilobytes per second, expanding simple thoughts faster than you could physically type them.
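
Roughly, the end state I was after looks something like the sketch below. To be clear, this is a self-contained illustration rather than the real code: the placement math and names like PopupPlacement are made up, but it shows the "compute and store now, apply on render" split while keeping the original if/switch mix.

    // Illustrative sketch only; the real types and placement math differ.
    enum Dock { Left, Right, Top, Bottom }
    record struct Size(double W, double H);
    record struct Point(double X, double Y);

    class PopupPlacement
    {
        Size m_Size;       // stored in state instead of applied immediately
        Point m_Position;

        public void Update(Dock orientation, Size owner, Size popup)
        {
            // The "if" still handles the size...
            if (orientation == Dock.Left || orientation == Dock.Right)
                m_Size = new Size(popup.W, owner.H);   /* horizontal placement */
            else
                m_Size = new Size(owner.W, popup.H);   /* vertical placement */

            // ...and the "switch" still handles the top-left corner,
            // but the result is stored rather than applied on the spot.
            m_Position = orientation switch
            {
                Dock.Left   => new Point(-popup.W, 0),
                Dock.Right  => new Point(owner.W, 0),
                Dock.Top    => new Point(0, -popup.H),
                Dock.Bottom => new Point(0, owner.H),
                _           => default
            };
        }

        // Called later, e.g. on render, to apply the stored placement.
        public (Point Position, Size Size) GetStoredPlacement() => (m_Position, m_Size);
    }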

replies(2): >>44360561 #>>44360985 #
2. npinsker ◴[] No.44360703[source]
Sweeping generalizations about how LLMs will always (someday) be able to do arbitrary X, Y, and Z don't really capture me either
replies(1): >>44360737 #
3. DataDaoDe ◴[] No.44360705[source]
The interesting questions come up when you pin down X, Y, and Z, and a time frame. For example, will LLMs be able to resolve the P vs. NP problem in two weeks, six months, five years, a century? And then you can explore why or why not.
4. guappa ◴[] No.44360768[source]
If you need a model per task, we're very far from AGI.
5. agentultra ◴[] No.44360773{4}[source]
Until the day that thermodynamics kicks in.

Or the current strategies to scale across boards instead of chips gets too expensive in terms of cost, capital, and externalities.

replies(1): >>44360798 #
6. gametorch ◴[] No.44360798{5}[source]
I mean fair enough, I probably don't know as much about hardware and physics as you
replies(1): >>44360933 #
7. sysmax ◴[] No.44360830[source]
I am working on a GUI for delegating coding tasks to LLMs, so I routinely experiment with a bunch of models doing all kinds of things. In this case, Claude Sonnet 3.7 handled it just fine, while Llama-3.3-70B just couldn't get it. But that is literally the simplest example that illustrates the problem.

When I tried giving top-notch LLMs harder tasks (scan an abstract syntax tree coming from a parser in a particular way, and generate nodes for particular things), they completely blew it. The output didn't even compile, never mind the logical errors and missed requirements. But once I broke the problem down into listing the relevant parsing contexts and generating one wrapper class at a time, it saved me a whole ton of work. It took me a day to accomplish what would normally take a week.
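
To give a sense of the granularity that ended up working: each prompt was scoped to a single parsing context and asked for one small wrapper class along the lines of the sketch below. The types here are illustrative stand-ins, not the actual parser output.

    // Illustrative stand-ins; the real parser-generated contexts are richer.
    class ParameterContext              // what the parser hands you
    {
        public string Name = "";
        public string TypeName = "";
    }

    class ParameterNode                 // the hand-reviewed wrapper, one prompt each
    {
        private readonly ParameterContext m_Context;

        public ParameterNode(ParameterContext context) => m_Context = context;

        public string Name => m_Context.Name;
        public string TypeName => m_Context.TypeName;

        public override string ToString() => $"{TypeName} {Name}";
    }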

Maybe they will figure it out eventually, maybe not. The point is, right now the technology has fundamental limitations, and you are better off knowing how to work around them, rather than blindly trusting the black box.

replies(1): >>44360860 #
8. gametorch ◴[] No.44360860{3}[source]
Yeah exactly.

I think it's a combination of

1) wrong level of granularity in prompting

2) lack of engineering experience

3) autistic rigidity regarding a single hallucination throwing the whole experience off

4) subconscious anxiety over the threat to their jerbs

5) unnecessary guilt over going against the tide; anything pro AI gets heavily downvoted on Reddit and is, at best, controversial as hell here

I, for one, have shipped like literally a product per day for the last month and it's amazing. Literally 2,000,000+ impressions, paying users, almost 100 sign ups across the various products. I am fucking flying. Hit the front page of Reddit and HN countless times in the last month.

Idk if I break down the prompts better or what. But this is production grade shit and I don't even remember the last time I wrote more than two consecutive lines of code.

replies(2): >>44360937 #>>44364148 #
9. agentultra ◴[] No.44360933{6}[source]
Just pointing out that there are limits and there’s no reason to believe that models will improve indefinitely at the rates we’ve seen these last couple of years.
replies(1): >>44361007 #
10. sysmax ◴[] No.44360937{4}[source]
If you are launching one product per day, you are using LLMs to convert unrefined ideas into proof-of-concept prototypes. That works really well, that's the kind of work that nobody should be doing by hand anymore.

Except, not all work is like that. Fast-forward to product version 2.34, where a particular customer needs a change that could break 5000 other customers because of non-trivial dependencies between different parts of the design, and you will either be having humans rewrite the entire thing or watching it collapse under its own weight.

But out of 100 products launched on the market, only 1 or 2 will ever reach that stage, and having 100 LLM prototypes followed by 2 thoughtful redesigns is way better than seeing 98 human-made products die.

11. soulofmischief ◴[] No.44360985[source]
> AI can very efficiently apply common patterns to vast amounts of code, but it has no inherent "idea" of what it's doing.

AI stands for Artificial Intelligence. There are no inherent limits around what AI can and can't do or comprehend. What you are specifically critiquing is the capability of today's popular models, specifically transformer models, and accompanying tooling. This is a rapidly evolving landscape, and your assertions might no longer be relevant in a month, much less a year or five years. In fact, your criticism might not even be relevant between current models. It's one thing to speak about idiosyncrasies between models, but any broad conclusions drawn outside of a comprehensive multi-model review with strict procedure and controls are to be taken with a massive grain of salt, and one should be careful to avoid authoritative language about capabilities.

It would be useful to be precise in what you are critiquing, so that the critique actually has merit and applicability. Even saying "LLM" is a misnomer, as modern transformer models are multi-modal and trained on much more than just textual language.

replies(3): >>44361085 #>>44363335 #>>44364768 #
12. soulofmischief ◴[] No.44361007{7}[source]
There is reason to believe that humans will keep trying to push the limitations of computation and computer science, and that recent advancements will greatly accelerate our ability to research and develop new paradigms.

Look at how well Deepseek performed with the limited, outdated hardware available to its researchers. And look at what demoscene practitioners have accomplished on much older hardware. Even if physical breakthroughs ceased or slowed down considerably, there is still a ton left on the table in terms of software optimization and theory advancement.

And remember just how young computer science is as a field, compared to other human practices that have been around for hundreds of thousands of years. We have so much to figure out, and as knowledge begets more knowledge, we will continue to figure out more things at an increasing pace, even if it requires increasingly large amounts of energy and human capital to make a discovery.

I am confident that if it is at all possible to reach human-level intelligence at least in specific categories of tasks, we're gonna figure it out. The only real question is whether access to energy and resources becomes a bigger problem in the future, given humanity's currently extraordinarily unsustainable path and the risk of nuclear conflict or sustained supply chain disruption.

replies(3): >>44362851 #>>44362891 #>>44366737 #
13. mattbee ◴[] No.44361085[source]
What a ridiculous response, to scold the GP for criticising today's AI because tomorrow's might be better. Sure, it might! But it ain't here yet buddy.

Lots of us are interested in technology that's actually available, and we can all read date stamps on comments.

replies(1): >>44361213 #
14. soulofmischief ◴[] No.44361213{3}[source]
You're projecting that I am scolding OP, but I'm not. My language was neutral and precise. I presented no judgment, but gave OP the tools to better clarify their argument and express valid, actionable criticism instead of wholesale criticizing "AI" in a manner so imprecise as to reduce the relevance and effectiveness of their argument.

> But it ain't here yet buddy . . . we can all read date stamps on comments.

That has no bearing on the general trajectory that we are currently on in computer science and informatics. Additionally, your language is patronizing and dismissive, trading substance for insult. This is generally frowned upon in this community.

You failed to actually address my comment, both by failing to recognize that it was mainly about using the correct terminology instead of criticizing an entire branch of research that extends far beyond transformers or LLMs, and by failing to establish why a rapidly evolving landscape does not mean that certain generalizations cannot yet be made, unless they are presented with several constraints and caveats, which includes not making temporally-invariant claims about capabilities.

I would ask that you reconsider your approach to discourse here, so that we can avoid this thread degenerating into an emotional argument.

replies(1): >>44361419 #
15. mattbee ◴[] No.44361419{4}[source]
The GP was very precise in the experience they shared, and I thought it was interesting.

They were obviously not trying to make a sweeping comment about the entire future of the field.

Are you using ChatGPT to write your loquacious replies?

replies(1): >>44361497 #
16. soulofmischief ◴[] No.44361497{5}[source]
> They were obviously not trying to make a sweeping comment about the entire future of the field

OP said “AI can very efficiently apply common patterns to vast amounts of code, but it has no inherent "idea" of what it's doing.”

I'm not going to patronize you by explaining why this is not "very precise", or why its lack of temporal caveats is an issue, as I've already done so in an earlier comment. If you're still confused, you should read the sentence a few times until you understand. OP did not even mention which specific model they tested, and did not provide any specific prompt example.

> Are you using ChatGPT to write your loquacious replies?

If you can't handle a few short paragraphs as a reply, or find it unworthy of your time, you are free to stop arguing. The Hacker News guidelines actually encourage substantive responses.

I also assume that in the future, accusing a user of using ChatGPT will be against site guidelines, so you may as well start phasing that out of your repertoire now.

Here are some highlights from the Hacker News guidelines regarding comments:

- Don't be snarky

- Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.

- Assume good faith

- Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken.

https://news.ycombinator.com/newsguidelines.html

replies(1): >>44362890 #
17. realusername ◴[] No.44362766[source]
Maybe it will improve, maybe not; I feel like we're at the same point as the first release of Cursor, back in 2023.
18. crackalamoo ◴[] No.44362851{8}[source]
I agree. And if human civilization survives, your concerns about energy and resources will be only short term on the scale of civilization, especially as we make models more efficient.

The human brain uses just 20 watts of power, so it seems to me that human-level intelligence should be possible in principle, even if we get there using much greater power and far less of the evolutionary refinement the brain has accumulated over billions of years.

19. anonymars ◴[] No.44362890{6}[source]
This is a lot of words, but does any of it contradict this:

> AI can very efficiently apply common patterns to vast amounts of code, but it has no inherent "idea" of what it's doing.

Are you saying that AI does have an inherent idea of what it's doing or is doing more than that? Today?

We're in an informal discussion forum. I don't think the bar we're looking for is some rigorous deductive proof. The above matches my experience as well. It's a handy, applied, interactive version of an Internet search.

If someone has a different experience that would be interesting. But this just seems like navel-gazing over semantics.

replies(1): >>44363491 #
20. soulofmischief ◴[] No.44362891{8}[source]
* hundreds or thousands, not of
21. koonsolo ◴[] No.44363335[source]
I learned neural networks around 2000, and they were old technology even then. The last real jump we saw was going from ChatGPT 3.5 to 4, and that is more than 2 years ago.

It seems you don't recollect how much time passed without any big revolutions in AI. Deep learning was a big jump. But when does the next jump come? Might be tomorrow, but looking at history, it might be in 2035.

According to what I see, the curve has already flattened and now only a new revolution could get us to the next big step.

replies(2): >>44366251 #>>44366294 #
22. soulofmischief ◴[] No.44363491{7}[source]
> Are you saying that AI does have an inherent idea of what it's doing or is doing more than that?

No. I stated that OP cannot make that kind of blanket, non-temporally constrained statement about artificial intelligence.

> We're in an informal discussion forum. I don't think the bar we're looking for is some rigorous deductive proof

We're in a technology-oriented discussion forum, the minimum bar to any claim should be that it is supported by evidence, otherwise it should be presented as what it is: opinion.

> this just seems like navel-gazing over semantics.

In my opinion, conversation is much easier when we can agree that words should mean something. Imprecise language matched with an authoritative tone can mislead an audience. This topic in particular is rife with imprecise and uninformed arguments, and so we should take more care to use our words correctly, not less.

Furthermore, my argument goes beyond semantics, as it also deals with the importance of constraints when making broad, unbacked claims.

23. nextlevelwizard ◴[] No.44364148{4}[source]
Can you provide links to these 30 products you have shipped?

I keep hearing how people are so god damn productive with LLMs, but whenever I try to use them they cannot reliably produce working code. Usually they produce something that looks correct at first but doesn't work, either at all or as intended.

Going over your list:

1. if the problem is that I need to be very specific with how I want LLM to fix the issue, like providing it the solution, why wouldn't I just make the change myself?

2. I don't even know how you can think that not vibe coding means you lack experience

3. Yes. If the model keeps trying to use non-existent language feature or completely made up functions/classes that is a problem and nothing to do with "autism"

4. This is what all AI maximalists want to think: that the only reason the average software developer isn't knee-deep in the AI swamp with them is that they are luddites who are just scared for their jobs. I personally am not as I have not seen LLMs actually being useful for anything but replacing google searches.

5. I don't know why you keep bringing up Reddit so much. I also don't quite get who is going against the tide here, are you going against the tide of the downvotes or am I for not using LLMs to "fucking fly"?

>But this is production grade shit

I truly hope it is, because...

>and I don't even remember the last time I wrote more than two consecutive lines of code.

Means if there is a catastrophic error, you probably can't fix it yourself.

replies(1): >>44366200 #
24. ThunderSizzle ◴[] No.44364768[source]
> AI stands for Artificial Intelligence. There are no inherent limits around what AI can and can't do or comprehend.

Artificial, as in artificial sand or artificial grass. Sure, it passes for sand or grass at first, but upon closer examination it becomes very apparent that it's not real. "Artificial" works much like "magic" here: it offers enough misdirection for people to think there might be intelligence, but on closer inspection it's found lacking.

It's still impressive that it can do that, going all the way back to gaming AIs, but it's also a veil that is lifted easily.

25. gametorch ◴[] No.44366200{5}[source]
> if the problem is that I need to be very specific with how I want LLM to fix the issue, like providing it the solution, why wouldn't I just make the change myself?

I type 105 wpm on a bad day. Try gpt-4.1. It types like 1000 wpm. If you can formally describe your problem in English and the number of characters in the English prompt is less than whatever code you write, gpt-4.1 will make you faster.

Obviously you have to account for gpt-4.1 being wrong sometimes. Even so, if you have to run two or three prompts to get it right, it still is going to be faster.
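
Back-of-the-envelope, with purely illustrative numbers: a 150-word prompt at 105 wpm is under 90 seconds of typing, while the 60-odd lines of code it replaces could easily take 15 minutes to write and check by hand. Even if the first two prompts miss and only the third one lands, you still come out ahead.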

> I don't even know how you can think that not vibe coding means you lack experience

If you lack experience, you're going to prompt the LLM to do the wrong thing, engineer yourself into a corner, and waste time. Or you won't catch the mistakes it makes. Only experience and "knowing more than the LLM" allows you to catch its mistakes and fix them. (Which is still faster than writing the code yourself, merely by way of it typing 1000 wpm.)

> If the model keeps trying to use non-existent language feature or completely made up functions/classes that is a problem and nothing to do with "autism"

You know that you can tell it those functions are made up and paste in the latest documentation, and then it will work, right? That knee-jerk response makes it sound like you have this rigidity problem yourself.

> I personally am not as I have not seen LLMs actually being useful for anything but replacing google searches.

Nothing really of substance here. Just because you don't know how to use this tool doesn't mean no one does.

This is the least convincing point for me, because I come along and say "Hey! This thing has let me ship far more working code than before!" and then your response is just "I don't know how to use it." I know that it's made me more productive. You can't say anything to deny that. Do you think I have some need to lie about this? Why would I feel the need to go on the internet and reap a bunch of downvotes while peddling some lie that doesn't stand to get me anything even if I convince people of it?

> I also don't quite get who is going against the tide here, are you going against the tide of the downvotes

Yeah, that's what I'm saying. People will actively shame and harass you for using LLMs. It's mind boggling that a tool, a technology, that works for me and has made me more productive, would be so vehemently criticized. That's why I listed these 5 reasons, the only reasons I have thought of yet.

> Means if there is a catastrophic error, you probably can't fix it yourself.

See my point about lacking experience. If you can't do the surgery yourself every once in a while, you're going to hate these tools.

Really, you've just made a bunch of claims about me that I know are false, so I'm left unconvinced.

I'm trying to have a charitable take. I don't find joy in arguing or leaving discussions with a bitter taste. I genuinely don't know why people are so mad at me claiming that a tool has helped me be more productive.

They all just don't believe me, ultimately. They all come up with some excuse as to why my personal anecdotes can be dismissed and ignored: "even though you have X, we should feel bad for you because Y!" But it's never anything of substance. Never anything that has convinced me.

Because at the end of the day, I'm shipping faster. My code works. My code has stood the test of time. Insults to my engineering ability I know are demonstrably false. I hope you can see the light one day. These are extraordinary tools that are only getting better, at least by a little bit, in the foreseeable future. Why deny?

replies(2): >>44366559 #>>44374038 #
26. scrivna ◴[] No.44366251{3}[source]
Agreed. The AI companies aren't able to improve the base models, so they're pivoting to add-ons like "agents", which seem to be only instructions layered atop the base models.
replies(1): >>44367255 #
27. koonsolo ◴[] No.44366294{3}[source]
Since I can't seem to add an edit to my post, here's a realization:

My 2035 prediction actually seems pretty optimistic. We went more than 20 years without any big AI revolution, so 2045 would be more realistic.

And it seems our current AI is also not going to get us there any faster.

28. rootnod3 ◴[] No.44366559{6}[source]
Would also love to see those daily shipped products. What I see on reddit is the same quiz done several times just for different categories and the pixel art generator. That does not look like shipping a product per day as you claim.
replies(1): >>44367328 #
29. agentultra ◴[] No.44366737{8}[source]
> And remember just how young computer science is as a field, compared to other human practices that have been around for hundreds of thousands of years.

How long do you think Homo sapiens have been on Earth and how long has civilization been here?

I’ve been programming since 89. I know what you can squeeze into 100k.

But you can only blast so much electricity into a dense array of transistors before it melts the whole thing and electrons jump rails. We hit that limit a while ago. We’ve done a lot of optimization of instruction caching, loading, and execution. We front loaded a ton of caching in front of the registers. We’ve designed chips specialized to perform linear algebra calculations and scaled them to their limits.

AI is built on scaling the number of chips across the board. Which has the effect of requiring massive amounts of power. And heat dissipation. That's why we're building out so many new data centres: each one requiring land, water, and new sources of electricity generation so that demand for other uses can still be met… those sources mostly being methane and coal plants.

Yes, we might find local optimizations in training to lower the capital cost and external costs… but they will be a drop in the bucket at the scale we’re building out this infrastructure. We’re basically brute forcing the scale up here.

And computer science might be older than you think. We just used to call it logic. It took some electrical engineering innovations to make the physical computers happen but we had the theoretical understanding of computation for quite some time before those appeared.

A young field, yes, and a long way to go… perhaps!

But let’s not believe that innovation is magic. There’s hard science and engineering here. Electrons can only travel so fast. Transistor density can only scale so much. Etc.

replies(1): >>44367234 #
30. soulofmischief ◴[] No.44367234{9}[source]
> How long do you think Homo sapiens have been on Earth and how long has civilization been here?

I already corrected my typo in a child comment.

> We’re basically brute forcing the scale up here

Currently, but even that will eventually hit thermodynamic and socioeconomic limits, just as single chips already have.

> And computer science might be older than you think. We just used to call it logic.

In my opinion, two landmark theory developments were type theory and the lambda calculus. Type theory was conceived to get around Russell's paradox and others, which formal logic could not do on its own.

As far as hardware goes, sure, we had mechanical calculators in the 17th century, Babbage's analytical engine in the 19th century, and Ada Lovelace's program, but it wasn't until the mid-20th century that computer science coalesced as its own distinct field. We didn't use to call computer science logic; it's a unification of physical advancements, logic, and several other domains.

> Electrons can only travel so fast.

And we have no reason to believe that current models are at all optimized on a software or theoretical level, especially since, as you say yourself, we are currently just focused on brute-forcing innovation as it's the more cost-effective solution for the time being.

But as I said, once theoretical refinement becomes more cost-effective, we can look at the relatively short history of computer science to see just how much can be done on older hardware with better theory:

>> Even if physical breakthroughs ceased or slowed down considerably, there is still a ton left on the table in terms of software optimization and theory advancement.

31. soulofmischief ◴[] No.44367255{4}[source]
Progress is progress. Just as raw base models need RL to be useful, an agentic layer allows us to put these probabilistic machines on rails.
32. gametorch ◴[] No.44367328{7}[source]
On my main, not gonna dox myself. Being pro AI is clearly a faux pas for personal branding.

Just a few days ago got flamed for only having 62 users on GameTorch. Now up to 91 and more paying subs. Entire thing written by LLMs and hasn't fallen over once. I'd rather be a builder than an armchair critic.

People would rather drag you down in the hole that they're in than climb out.

replies(1): >>44368385 #
33. rootnod3 ◴[] No.44368385{8}[source]
Not trying to drag down, genuinely interested due to the claim.
34. nextlevelwizard ◴[] No.44374038{6}[source]
This is going to be all over the place and possibly hard to follow, I am just going to respond in "real time" as I read your comment, if you think that is too lazy to warrant reading I completely understand. I hope you have a nice day.

WPM is not my limiting factor. Maybe the difference is that I am not working on trivial software, so a lot of thought goes into the work; typing is the least time-consuming part. Still, I don't see how your 105 wpm of highly descriptive and instructive English can be faster than just fixing the thing. Even if the LLM takes 1 ms to fix the issue once prompted, you have probably already spent more time debugging the issue and writing the prompt.

So your "you lack engineering experience" was actually "you don't know LLMs well"; maybe use the words you intend instead of turning them into actual insults.

I am not going to be pasting in any C++ spec into an LLM.

Yet when I checked your profile, all you have shipped is one sprite image generator website. I find all these claims so hard to believe. Everyone keeps telling me how they are making millions off of LLMs, but no one has the receipts to show. It just makes me feel like you have stock in OpenAI or something and are trying your hardest to pump it up.

I think the shaming and harassing is mostly between your ears; at least, I am not trying to shame or harass you for using LLMs. If anything, I want to have superpowers too. If LLMs really work for you, that is nice and you should keep doing it; I just have not seen the evidence you are talking about. I am willing to admit that it could very well be a skill issue, but I need more proof than "trust me" or "1000 wpm".

I don't think I have made any claims about you, although you have used loaded language like "autism" and "lack of engineering experience" and heavily implied that I am just too dumb to use the tools.

>I'm trying to have a charitable take.

C'mon, nothing about your comments has been charitable in any way. No one is mad at you personally. Do not take criticism of your tools as personal attacks. Maybe the tools will get good, but again, my problem with LLMs and the hype around them is that no one has been able to demonstrate them actually being as good as the hype suggests.

replies(1): >>44374108 #
35. gametorch ◴[] No.44374108{7}[source]
I appreciate the reply.

What is everyone working on that takes more than five minutes to think about?

For me, the work is insurmountable and infinite, while coming up with the solution is never too difficult. I'm not saying this to be cocky. I mean this:

In 99.9999999999% of the problems I encounter in software engineering, someone smarter than me has already written the battle tested solution that I should be using. Redis. Nginx. Postgres. etc. Or it's a paradigm like depth first search or breadth first search. Or just use a hash set. Sometimes it's a little crazier like Bloom filters but whatever.

Are you like constantly implementing new data structures and algorithms that only exist in research papers or in your head?

Once you've been engineering for 5 or 10 years, you've seen almost everything there is to see. Most of the solutions should be cached in your brains at that point. And the work just amounts to tedious, unimportant implementation details.

Maybe I'm forgetting that people still get bogged down in polymorphism and all that object oriented nonsense. If you just use flat structs, there's nothing too complicated that could possibly happen.

I worked in HFT, for what it's worth, and that should be considered very intense non-CRUD "true" engineering. That, I agree, LLMs might have a little more trouble with. But it's still nothing insane.

Software engineering is extremely formulaic. That's why it's so easy to statistically model it with LLMs.

replies(1): >>44374476 #
36. nextlevelwizard ◴[] No.44374476{8}[source]
I write embedded software in C++ for industrial applications. We have a lot of proprietary protocols and custom hardware. We have some initiatives to train LLMs on our protocols/products/documentation, but I have not been impressed with the results. Same goes for our end-to-end testing framework; I guess it isn't so popular, so the results vary a lot.

I have been doing this for 8 years, and while, yes, I have seen a lot, you can't just copy-paste solutions due to flash, memory, and performance constraints.

Again, maybe this is a skill issue and maybe I will be replaced with an LLM, but so far they seem more like cool toys. I have used LLMs to write AddOns for World of Warcraft, since my Lua knowledge is mostly from writing Wireshark plugins for our protocols, and for that it has been nice. But it is nothing someone who actually works with Lua or the WoW API couldn't produce just as fast or faster, because I have to describe what I want and then check whether the API the LLM used actually exists and works as the LLM assumed it would.

replies(1): >>44377173 #
37. gametorch ◴[] No.44377173{9}[source]
Again, I appreciate the reply. I think my view on LLMs is skewed towards the positive because I've only been building CRUD apps, command line tools, and games with them. I apologize if I came off as incendiary or offensive.