
858 points by cryptophreak | 10 comments
themanmaran ◴[] No.42935503[source]
I'm surprised that the article (and comments) haven't mentioned Cursor.

Agreed that copy pasting context in and out of ChatGPT isn't the fastest workflow. But Cursor has been a major speed up in the way I write code. And it's primarily through a chat interface, but with a few QOL hacks that make it way faster:

1. Output gets applied to your file in a git-diff style. So you can approve/deny changes.

2. It (kinda) has context of your codebase so you don't have to specify as much. Though it works best when you explicitly tag files ("Use the utils from @src/utils/currency.ts")

3. Directly inserting terminal logs or type errors into the chat is incredibly convenient. Just hover over the error and click "add to chat".

replies(8): >>42935579 #>>42935604 #>>42935621 #>>42935766 #>>42935845 #>>42937616 #>>42938713 #>>42939579 #
1. dartos ◴[] No.42935579[source]
I think the wildly different experiences we all seem to have with AI code tools speaks to the inconsistency of the tools and our own lack of understanding of what goes into programming.

I’ve only been slowed down with AI tools. I tried for a few months to really use them and they made the easy tasks hard and the hard tasks opaque.

But obviously some people find them helpful.

Makes me wonder if programming approaches differ wildly from developer to developer.

For me, if I have an automated tool writing code, it's because I don't want to think about that code at all.

But since LLMs don’t really act deterministically, I feel the need to double check their output.

That’s very painful for me. At that point I’d rather just write the code once, correctly.

replies(3): >>42935622 #>>42936378 #>>42936638 #
2. aprilthird2021 ◴[] No.42935622[source]
I think it's about what you're working on. It's great for greenfield projects, etc. Terrible for complex projects that plug into a lot of other complex projects (like most of the software those of us not at startups work on day to day)
replies(1): >>42935644 #
3. dartos ◴[] No.42935644[source]
It’s been a headache for my greenfield side projects and for my day to day work.

Leaning on these tools just isn't for me right now.

I like them most for one off scripts or very small bash glue.

4. sangnoir ◴[] No.42936378[source]
> But since LLMs don’t really act deterministically, I feel the need to double check their output.

I feel the same.

> That’s very painful for me. At that point I’d rather just write the code once, correctly.

I use AI tools augmentatively, and it's not painful for me, perhaps slightly inconvenient. But for boilerplate-heavy code like unit tests, or for easily verifiable refactors[1], adjusting AI-authored code on a per-commit basis is still faster than writing all the code myself.

[1] Like switching between unit-test frameworks.

5. kenjackson ◴[] No.42936638[source]
I use LLMs several times a day, and for me the key point is that verification is typically much faster than learning or writing. For example, I've never spent much time getting good at scripting. Sure, that's probably a gap I should close, but LLMs do a great job at it, and what I need to script is typically easy to verify. I don't have to spend time learning how to do things like "move the files of this extension to this folder, but rewrite them so that the name begins with a three-digit number based on the date when each was created, with the oldest starting with 001". Sometimes the output has a little bug, but one I can debug quickly.
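A task like the quoted one is the kind of thing an LLM would typically sketch in a few lines of shell. Here is a minimal hand-written sketch of that shape, assuming GNU userland; it uses modification time rather than true creation time (which isn't portable across filesystems), and the filenames and extension are illustrative:

```shell
# Sketch: move every .txt file into sorted/, prefixing each name with a
# three-digit index ordered by timestamp (oldest file gets 001).
mkdir -p sorted
i=1
# ls -tr lists oldest-modified first; note this breaks on filenames
# containing newlines.
ls -tr -- *.txt | while IFS= read -r f; do
  mv -- "$f" "sorted/$(printf '%03d' "$i")_$f"
  i=$((i + 1))
done
```

This is exactly the "easy to verify" case the comment describes: one glance at `ls sorted/` confirms whether the ordering and padding are right.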

Scripting assistance by itself is worth the price of admission.

The other thing I've found it good at is giving me an English description of code I didn't write. I'm sure it sometimes hallucinates, but never in a way so wrong that it's been apparent to me.

replies(2): >>42937921 #>>42938049 #
6. shaan7 ◴[] No.42937921[source]
I think you and the parent comment are onto something. I also feel like the parent, since I find it relatively difficult to read code that someone else wrote. My brain easily gets biased into thinking that the cases the code covers are the only possible ones. On the flip side, if I were writing the code myself, I'd be more likely to spot the corner cases. In other words, writing code helps me think; reading it just biases me. This makes reviewing an LLM's code extremely slow, at which point I'd rather just write it myself.

Very good for throwaway code though, for example a PoC which won't really be going to production (hopefully xD).

replies(1): >>42963488 #
7. skydhash ◴[] No.42938049[source]
Your script example is a good one, but the nice thing about scripting is when you learn its semantics, like the general pattern of find -> filter/transform -> select -> action. It's very easy to come up with a one-liner that can be trivially modified to adapt it to another context. More often than not, I find LLMs generate overly complicated scripts.
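The find -> filter -> action pattern being described might look like this one-liner (a sketch; the extension, age, and size thresholds are illustrative, and GNU find is assumed):

```shell
# Find .log files modified in the last week and larger than 1 MB,
# then gzip them. Swap the filter tests or the action to adapt it.
find . -name '*.log' -mtime -7 -size +1M -exec gzip {} +
```

Changing `-exec gzip {} +` to, say, `-delete` or `-exec mv {} archive/ \;` retargets the same skeleton to a different task, which is the "trivially modified" property the comment is pointing at.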
replies(1): >>42939137 #
8. lukeschlather ◴[] No.42939137{3}[source]
It's astounding how often I ask an LLM to generate something, do a little more research, come back ready to use the code it generated, and realize that, no, it's selected the wrong flags entirely.

Although most recently I caught it because I fed it into both gpt-4o and o1 and o1 had the correct flags. Then I asked 4o to expand the flags from the short form to the long form and explain them so I could double-check my reasoning as to why o1 was correct.
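The short-form-to-long-form expansion described is a real double-checking technique, since long options are self-documenting. As an illustration (GNU tar here, not the actual command from the conversation):

```shell
# Short form and its spelled-out equivalent: both create a gzipped
# archive of src/. Reading the long form makes each flag's meaning explicit.
tar -czf backup.tar.gz src/
tar --create --gzip --file=backup.tar.gz src/
```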

replies(1): >>42991393 #
9. dartos ◴[] No.42963488{3}[source]
Yes! It’s the same for me.

Maybe it's because I've been programming since I was young, or because I mainly learned by doing code-along books, but writing the code is where my thinking gets done.

I don’t usually plan, then write code. I write code, understand the problem space, then write better code.

I've known friends and coworkers who liked to plan out a change in pseudocode or some notes before getting into coding.

Maybe these different approaches benefit from AI differently.

10. dartos ◴[] No.42991393{4}[source]
At that point, wouldn’t the man pages be better than asking 4o?

It was already wrong once.