
221 points caspg | 7 comments
thefourthchime ◴[] No.42165457[source]
For years I've kept a list of apps / ideas / products I might build someday. I never made the time; with Cursor AI I have already built one and am working on another. It's enabling me to use frameworks I barely know, like React Native, Swift, etc.

The first prompt (with o1) will get you 60% of the way there, but after that the workflow changes. The prompts can settle into a local minimum, where Claude/GPT-4/etc. just can't do any better, at which point you need to climb back out and try a different approach.

I recommend git branches to keep track of this. Keep a good working copy in main, and anytime you want to add a feature, make a branch. If you get it almost there, make another branch in case it goes sideways. The biggest issue with developing like this is that you are not a coder anymore; you are a puppet master of a very smart and sometimes totally confused brain.
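
Roughly the flow being described, as a sketch (branch names and the commit message are made up for illustration):

    # keep main as the last known-good state
    git checkout main
    git checkout -b feature/export-csv      # one branch per AI-assisted feature
    # ...iterate with the model, committing whenever it gets meaningfully closer...
    git commit -am "wip: export flow mostly working"
    # before a risky prompt, branch again so a bad detour is cheap to abandon
    git checkout -b feature/export-csv-try2
    # if it goes sideways, drop back to the last good branch
    git checkout feature/export-csv
    # once a branch genuinely works, fold it into main
    git checkout main
    git merge feature/export-csv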

replies(5): >>42165545 #>>42165831 #>>42166210 #>>42169944 #>>42170110 #
lxgr ◴[] No.42165545[source]
> For years I've kept a list of apps / ideas / products I might build someday. I never made the time; with Cursor AI I have already built one and am working on another.

This is one fact that people seem to severely under-appreciate about LLMs.

They're significantly worse at coding in many respects than even a moderately skilled and motivated intern, but for my hobby projects, until now I've never had an intern who would so much as take a stab at the repetitive or just not very interesting subtasks, let alone stick with them over and over without getting tired of it.

replies(2): >>42165600 #>>42165998 #
Sakos ◴[] No.42165600[source]
It also reduces the knowledge needed. I don't particularly care about learning how to set up and configure a web extension from scratch. With an LLM, I can get 90% of that working in minutes, then focus on the parts that I am interested in. As somebody with ADHD, I found it was primarily all that supplementary, tangential knowledge that felt like an insurmountable mountain and made it impossible to actually try all the ideas I'd had over the years. I'm so much more productive now that I don't have to get into the weeds for every little thing, which could easily delay progress by hours or even days. I can pick and choose the parts I feel are important to me.
replies(1): >>42166112 #
imiric ◴[] No.42166112[source]
> It also reduces the knowledge needed. I don't particularly care about learning how to set up and configure a web extension from scratch. With an LLM, I can get 90% of that working in minutes, then focus on the parts that I am interested in.

Eh, I would argue that the apparent lower knowledge requirement is an illusion. These tools produce non-working code more often than not (OpenAI's flagship models are not even correct 50% of the time[1]), so you still have to read, understand and debug their output. If you've ever participated in a code review, you'll know that doing that takes much more effort than actually writing the code yourself.

Not only that, but relying on these tools keeps you from actually learning any of the technologies you're working with. If you ever need to troubleshoot or debug something, you'll be forced to use an AI tool for help again, and good luck if that's a critical production issue. If instead you take the time to read the documentation and understand how to use the technology, perhaps even with the _assistance_ of an AI tool, it might take more time and effort upfront, but it will pay off in the long run by making you more proficient and useful if and when you need to work on it again.

I seriously don't understand the value proposition of the tools in the current AI hype cycle. They are fun and useful to an extent, but are severely limited and downright unhelpful at building and maintaining an actual product.

[1]: https://openai.com/index/introducing-simpleqa/

replies(4): >>42166445 #>>42166468 #>>42166683 #>>42166825 #
Sakos ◴[] No.42166468[source]
All the projects I've been able to start and make progress on in the past year, versus the ten years before that, are proof enough for me that you're wrong in pretty much all of your arguments. My direct experience contradicts statements like "the lower knowledge requirement is an illusion" and "it takes much more effort to review code than to write it". I do code reviews all the time. I write code all the time. I've had AI help me with my projects, and I've reviewed and refactored that code. You're quite simply wrong. And I don't understand why you're so eager to argue that my direct experience is wrong, as if you're trying to gaslight me.

It's quite honestly mystifying to me.

It's simply not the case that we need to be experts in every single part of a software project. Not for personal projects and not for professional ones either. So it doesn't make any sense to me not to use AI if I've directly proven to myself that it can improve my productivity, my understanding and my knowledge.

> If you ever need to troubleshoot or debug something, you'll be forced to use an AI tool for help again

This is proof to me that you haven't used AI much, because AI has helped me understand things much more quickly and with much less friction than I've ever managed before. And I have often been able to solve things the AI had trouble with, even on topics where I have zero experience, through the interaction with it.

At some point, being able to make progress (and how that affects the learning process) trumps this perfect ideal of the programmer who figures out everything on their own through tedious, mind-numbingly long hours spent solving problems that are at best tangential to the ones they were actually trying to solve hours ago.

Frankly, I'm tired of not being able to do any of my personal projects because of all the issues I've mentioned before. And I'm tired of people like you saying I'm doing it wrong, DESPITE ME NOT BEING ABLE TO DO IT AT ALL BEFORE.

Honestly, fuck this.

replies(4): >>42166827 #>>42166967 #>>42166978 #>>42167346 #
1. senorrib ◴[] No.42166978[source]
It’s baffling to see all the ignorant answers to this thread, OP. My experience has been similar to yours, and I’ve been pushing complex software to production for the past 20 years.

Feels like a bunch of flat-earth arguments; they’d rather ignore the evidence (or not even try it themselves) to keep the illusion that you need to write it all yourself for it to be “high quality”.

replies(2): >>42167053 #>>42168551 #
2. imiric ◴[] No.42167053[source]
Or, hey, maybe we've just had different experiences, and are using these tools differently? I even concede that I may not be great at prompting, which could be the cause of my problems.

I'm not arguing that writing everything yourself leads to higher quality. I'm arguing that _in my experience_ a) it takes more time and effort to read, troubleshoot and fix code generated by these tools than it would take me to actually write it myself, and b) that taking the time to read the documentation and understand the technologies I'm working with would actually save me time and effort in the future.

You're free to disagree with all of this, but don't try to tell me my experience is somehow lesser than yours.

replies(2): >>42168635 #>>42168853 #
3. thefourthchime ◴[] No.42168551[source]
Thanks. My guess is that many of those complaining about the technology haven't honestly tried to embrace it.
replies(1): >>42168731 #
4. senorrib ◴[] No.42168635[source]
I wasn’t targeting this specifically at you or your individual experience. However, I have heard the same arguments you make ad nauseam, and they usually come from people who are either just too skeptical or don’t put in the effort required to use the tool.
5. rtsil ◴[] No.42168731[source]
Or denial/rejection is a natural defense reaction for people who feel threatened.
6. fragmede ◴[] No.42168853[source]
So link chats where you've run into the very real limitations these things have: what language you're using, what framework you're in, what library it hallucinated. I'm not interested in either of us shouting past each other; I genuinely want to understand how your experience, which is not at all lesser than mine, is so different. Am I ignoring flaws that you otherwise can't overlook? Are you expecting too much from it with too little input? Without details, all we can do is describe feelings at each other and get frustrated when the other person's experience is different. Might as well ask for your star sign while we're at it.
replies(1): >>42182452 #
7. imiric ◴[] No.42182452{3}[source]
I use OpenRouter, which saves chats in local storage, and my browser is configured to delete all history and data on exit. So, unfortunately, I can't link you to an exact session.

I gave more details about one instance of this behavior with Claude 3.5 Sonnet a few weeks ago here[1]. I was asking it to implement a specific feature using a popular Go CLI library. I could probably reproduce it, but honestly can't be bothered, nor do I wish to spend more of my API credits on this.

Besides, why should I have to prove anything in this discussion? We're arguing in good faith, and just as I assume your experience is based on positive interactions, you should assume mine is based on negative ones.

But I'll give you one last argument based on principles alone.

LLMs are trained on mountains of data from various online sources (web sites, blogs, documentation, GitHub, SO, etc.). This training takes many months and has a cutoff point sometime in the past. When you ask them to generate some code using a specific library, how can you be sure that the code is using the specific version of the library you're currently using? How can you be sure that the library is even in the training set and that the LLM won't just hallucinate it entirely?

Some LLMs allow you to add sufficient context to your prompts (with RAG, etc.) to increase the likelihood of generating working code, which can help, but still isn't foolproof, and not all services/tools allow this.
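
As a concrete sketch of what that context can look like: check the exact dependency version you're on and state it in the prompt, instead of letting the model guess. The library and version below are placeholders, not the one from my earlier comment.

    # confirm the version actually pinned in go.mod before asking for code against it
    go list -m github.com/spf13/cobra
    # example output: github.com/spf13/cobra v1.8.0
    #
    # then make the prompt explicit about it, e.g.:
    # "Using cobra v1.8.0, add a --json flag to the 'list' subcommand.
    #  If that version has no API for this, say so instead of inventing one."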

But more crucially, when you ask it to do something that the library doesn't support, the LLM will never tell you "this isn't possible" or "I don't know". It will instead proceed to hallucinate a solution because that's what it was trained to do.

And how are these state-of-the-art coding LLMs that pass all these coding challenges capable of producing errors like referencing an undefined variable? Surely these trivial bugs shouldn't be possible, no?

All of these issues were what caused me to waste more than an hour fighting with both Claude 3.5 Sonnet and GPT-4o. And keep in mind that this was a fairly small problem. This is why I can't imagine how building an entire app, using a framework and dozens of libraries, could possibly be more productive with these tools than without them. But clearly this isn't an opinion shared by most people here, so let's agree to disagree.

[1]: https://news.ycombinator.com/item?id=41987474