
858 points | cryptophreak | 1 comment
themanmaran No.42935503
I'm surprised that the article (and comments) haven't mentioned Cursor.

Agreed that copy-pasting context in and out of ChatGPT isn't the fastest workflow. But Cursor has been a major speed-up in the way I write code. It's still primarily a chat interface, but with a few QOL hacks that make it way faster:

1. Output gets applied to your file in a git-diff style. So you can approve/deny changes.

2. It (kinda) has context of your codebase, so you don't have to specify as much. Though it works best when you explicitly tag files ("Use the utils from @src/utils/currency.ts").

3. Directly inserting terminal logs or type errors into the chat interface is incredibly convenient. Just hover over the error and click "add to chat".

replies(8): >>42935579 >>42935604 >>42935621 >>42935766 >>42935845 >>42937616 >>42938713 >>42939579
dartos No.42935579
I think the wildly different experiences we all seem to have with AI code tools speaks to the inconsistency of the tools and our own lack of understanding of what goes into programming.

I’ve only been slowed down with AI tools. I tried for a few months to really use them and they made the easy tasks hard and the hard tasks opaque.

But obviously some people find them helpful.

Makes me wonder if programming approaches differ wildly from developer to developer.

For me, if I have an automated tool writing code, it's because I don't want to think about that code at all.

But since LLMs don't really act deterministically, I feel the need to double-check their output.

That’s very painful for me. At that point I’d rather just write the code once, correctly.

replies(3): >>42935622 >>42936378 >>42936638
kenjackson No.42936638
I use LLMs several times a day, and I think for me the issue is that verification is typically much faster than learning/writing. For example, I've never spent much time getting good at scripting. Sure, it's probably a gap I should close, but I feel like LLMs do a great job at it. And what I need to script is typically easy to verify; I don't need to spend time learning how to do things like "move the files with this extension to this folder, but rename them so that each name begins with a three-digit number based on the date it was created, with the oldest starting at 001" -- or stuff like that. Sometimes the result has a little bug, but one that I can debug quickly.
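
To give a sense of what I mean, the answer usually comes back as something like this (an untested sketch, not what the model actually gave me; it assumes bash with GNU coreutils, a .pdf extension, a ./sorted destination folder, and modification time as a stand-in for creation date):

    # Move *.pdf into ./sorted, prefixing each name with a three-digit
    # counter assigned in order of file age (oldest gets 001).
    mkdir -p sorted
    i=1
    ls -tr -- *.pdf | while IFS= read -r f; do
        printf -v n '%03d' "$i"        # zero-pad the counter to three digits
        mv -- "$f" "sorted/${n}_${f}"
        i=$((i + 1))
    done

That's exactly the kind of thing that's tedious for me to write from scratch but takes seconds to eyeball and test on a throwaway directory.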

Scripting assistance by itself is worth the price of admission.

The other thing I've found it good at is giving me an English description of code I didn't write... I'm sure it sometimes hallucinates, but never in a way that has been so wrong that it's been apparent to me.

replies(2): >>42937921 >>42938049
skydhash No.42938049
Your script example is a good one, but the nice thing about scripting is when you learn the semantics of it, like the general pattern of find -> filter/transform -> select -> action. It's very easy to come up with a one-liner that can be trivially modified to adapt it to another context. More often than not, I find LLMs generate overly complicated scripts.
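
For example, something in that shape (illustrative paths and ages; GNU find/xargs assumed):

    # find (select files) -> filter (-type/-name/-mtime) -> action (gzip)
    find ./logs -type f -name '*.log' -mtime +30 -print0 | xargs -0 gzip

Swap gzip for rm, or '*.log' for another glob, and it adapts to a new context without rethinking the whole thing.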
replies(1): >>42939137
lukeschlather No.42939137
It's astounding how often I ask an LLM to generate something, do a little more research, come back ready to use the code it generated, and realize: no, it's selected the wrong flags entirely.

Although most recently I caught it because I fed it into both gpt-4o and o1, and o1 had the correct flags. Then I asked 4o to expand the flags from the short form to the long form and explain them, so I could double-check my reasoning as to why o1 was correct.
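
To make that concrete (rsync here is just an illustration, not the command I was actually checking):

    # short form the model hands you
    rsync -avz src/ host:dst/
    # same command spelled out with long-form flags, much easier to
    # verify against the man page
    rsync --archive --verbose --compress src/ host:dst/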

replies(1): >>42991393
dartos No.42991393
At that point, wouldn’t the man pages be better than asking 4o?

It was already wrong once.