858 points by cryptophreak | 2 comments
themanmaran ◴[] No.42935503[source]
I'm surprised that the article (and comments) haven't mentioned Cursor.

Agreed that copy-pasting context in and out of ChatGPT isn't the fastest workflow. But Cursor has been a major speed-up in the way I write code. It's still primarily a chat interface, but with a few QOL hacks that make it way faster:

1. Output gets applied to your file as a git-style diff, so you can approve/deny changes.

2. It (kinda) has context on your codebase, so you don't have to specify as much. It works best when you explicitly tag files, though ("Use the utils from @src/utils/currency.ts").

3. Directly inserting terminal logs or type errors into the chat is incredibly convenient. Just hover over the error and click "add to chat".

replies(8): >>42935579 #>>42935604 #>>42935621 #>>42935766 #>>42935845 #>>42937616 #>>42938713 #>>42939579 #
dartos ◴[] No.42935579[source]
I think the wildly different experiences we all seem to have with AI code tools speak to the inconsistency of the tools and to our own lack of understanding of what goes into programming.

I’ve only been slowed down by AI tools. I tried for a few months to really use them, and they made the easy tasks hard and the hard tasks opaque.

But obviously some people find them helpful.

Makes me wonder if programming approaches differ wildly from developer to developer.

For me, if I have an automated tool writing code, it’s because I don’t want to think about that code at all.

But since LLMs don’t really act deterministically, I feel the need to double-check their output.

That’s very painful for me. At that point I’d rather just write the code once, correctly.

replies(3): >>42935622 #>>42936378 #>>42936638 #
kenjackson ◴[] No.42936638[source]
I use LLMs several times a day, and I think for me the issue is that verification is typically much faster than learning/writing. For example, I've never spent much time getting good at scripting. Sure, it's probably a gap I should close, but I feel like LLMs do a great job at it. And what I need to script is typically easy to verify; I don't need to spend time learning how to do things like "move the files with this extension to this folder, but rewrite them so that the name begins with a three-digit number based on the date each was created, with the oldest starting with 001" -- or stuff like that. Sometimes the result has a little bug, but one that I can debug quickly.
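
A minimal sketch of the kind of throwaway script being described might look like the following (in Python; the source folder, target folder, and extension are made-up placeholders, and "creation date" is approximated with st_ctime, which on Linux is really the inode-change time):

    #!/usr/bin/env python3
    # Illustrative one-off script: move every file with a given extension
    # into a target folder, prefixing each name with a three-digit index
    # assigned by creation date, oldest first (001, 002, ...).
    import shutil
    from pathlib import Path

    SRC = Path("~/Downloads").expanduser()        # assumed source folder
    DEST = Path("~/sorted-reports").expanduser()  # assumed target folder
    EXT = ".pdf"                                  # assumed extension

    DEST.mkdir(parents=True, exist_ok=True)

    # st_ctime stands in for "creation date"; on macOS/BSD, st_birthtime
    # would be the more accurate choice.
    files = sorted(SRC.glob(f"*{EXT}"), key=lambda p: p.stat().st_ctime)

    for i, path in enumerate(files, start=1):
        new_name = f"{i:03d}_{path.name}"         # oldest file becomes 001_...
        shutil.move(str(path), str(DEST / new_name))
        print(f"{path.name} -> {new_name}")

Exactly the sort of thing that is tedious to remember the incantations for, but trivial to eyeball once it runs.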

Scripting assistance by itself is worth the price of admission.

The other thing I've found it good at is giving me an English description of code I didn't write... I'm sure it sometimes hallucinates, but never in a way that has been so wrong that it's been apparent to me.

replies(2): >>42937921 #>>42938049 #
1. shaan7 ◴[] No.42937921[source]
I think you and the parent comment are onto something. I feel the same as the parent, since I find it relatively difficult to read code that someone else wrote. My brain easily gets biased into thinking that the cases the code covers are the only possible ones. On the flip side, if I were writing the code, I'd be more likely to spot the corner cases. In other words, writing code helps me think; reading it just biases me. That makes it extremely slow to review an LLM's code, to the point where I'd rather just write it myself.

Very good for throwaway code though, for example a PoC that won't actually go to production (hopefully xD).

replies(1): >>42963488 #
2. dartos ◴[] No.42963488[source]
Yes! It’s the same for me.

Maybe it’s because I’ve been programming since I was young, or because I mainly learned by working through code-along books, but writing the code is where my thinking gets done.

I don’t usually plan, then write code. I write code, understand the problem space, then write better code.

I’ve known friends and coworkers who liked to plan out a change in pseudocode or some notes before getting into coding.

Maybe these different approaches benefit from AI differently.