
627 points cratermoon | 13 comments
1. thombles ◴[] No.44462054[source]
Speaking as a grump who recently chilled out, put reservations on hold and gave Claude a crack... it turns out that the anti-AI crowd (which still includes me in many regards) gets a lot wrong about the experience of using it, as demonstrated in TFA. You don't get reams of wishy-washy code unless you ask for it. If you're an experienced developer who Knows What They Want then you can wield it like a scalpel. None of the output is a surprise because you discussed the context and requirements first. It just gets there (probably) faster than you might have typing out keywords yourself. If the goal is Whatever, then sure, you will get that faster.
replies(3): >>44462089 #>>44462657 #>>44465322 #
2. nottorp ◴[] No.44462089[source]
> If you're an experienced developer who Knows What They Want then you can wield it like a scalpel.

But that's not what the marketing says. The marketing says it will do your entire job for you.

In reality, it will save you some typing if you already know what to do.

On HN at least, where most people come from startup/hustle culture and are experts in something, they don't think long-term enough to see the consequences for non-experts.

replies(2): >>44462120 #>>44462860 #
3. thombles ◴[] No.44462120[source]
Well, I never set much store by marketing and I'm not planning to start. :) More seriously, though, it helps explain the apparent contradiction that it sounds scammy at a macro level yet many individuals report getting a lot of value out of it.
replies(1): >>44462636 #
4. nottorp ◴[] No.44462636{3}[source]
> many individuals report getting a lot of value

I'm not sure it's a lot of value. It probably is in the short term, but in the long run...

There have already been studies saying that you don't retain what an LLM does for you. Even if you are already an expert (a status you attained the traditional way), that cuts you off from all those tiny improvements that happen every day without your noticing.

5. hyper57 ◴[] No.44462657[source]
> You don't get reams of wishy-washy code unless you ask for it.

I've found that they get pretty wishy-washy when you correct them. As an example, yesterday I was working on porting a function from the open-source CUE4Parse project from C# to Python (for a hobby project), and the LLM (Gemini 2.5 Pro) suggested the following translation of a C# method:

    @staticmethod
    def from_name(name: str) -> 'FPackageId':
        # C# Encoding.Unicode is UTF-16 LE in Python
        lower_name_bytes = name.lower().encode('utf-16-le')
        hashed_id = CityHash64(lower_name_bytes)
        return FPackageId(hashed_id)

I noted that the original used a custom ToLower() implementation:

> This custom ToLower(), does that produce the same result as .lower() in Python?

Gemini answered with a lot of text and concluded: "You should use Python's standard lower() method for your port."

I pushed back with:

> Right, but for this to work (looking up an asset by its hash as contained in global.utoc), I probably have to match the behavior of Unreal Engine...

  Gemini: You are absolutely right. My apologies, your insight is spot on. For the purpose of matching hashes that are pre-calculated by Unreal Engine and stored in files like `global.utoc`, you **must** replicate the engine's behavior exactly. My previous advice to use Python's `.lower()` was incorrect in this specific context.

  You need to implement a function that behaves identically to the C# `ToLower` you highlighted. The engine uses this specific, non-Unicode-aware, ASCII-only lowercasing function before hashing the path to generate the `FPackageId`. Using Python's standard `.lower()` would produce a different byte sequence for any path containing non-English characters, leading to a hash mismatch.

In my experience, this is representative of the typical LLM interaction once one ventures into a niche topic like Unreal Engine modding.
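
For reference, here's roughly what that ASCII-only lowercasing could look like in Python. This is a sketch only: the ue_to_lower name is made up, and the exact mapping (only 'A'–'Z', everything else untouched) is assumed from Gemini's description rather than checked against the actual CUE4Parse source:

    from cityhash import CityHash64  # PyPI package "cityhash"

    def ue_to_lower(s: str) -> str:
        # Assumed ASCII-only lowercase: map 'A'-'Z' to 'a'-'z' and
        # leave every other character (including non-ASCII) untouched,
        # unlike str.lower(), which is Unicode-aware.
        return ''.join(
            chr(ord(c) + 32) if 'A' <= c <= 'Z' else c
            for c in s
        )

    @staticmethod
    def from_name(name: str) -> 'FPackageId':
        # Hash the UTF-16 LE bytes of the ASCII-lowercased name,
        # mirroring the snippet above but with the custom lowercase.
        lower_name_bytes = ue_to_lower(name).encode('utf-16-le')
        return FPackageId(CityHash64(lower_name_bytes))
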
replies(3): >>44462754 #>>44464502 #>>44467550 #
6. rcxdude ◴[] No.44462754[source]
It's pretty difficult to have a useful back-and-forth with an LLM, because they're heavily fine-tuned to be agreeable (and also they're not particularly smart, just knowledgeable, so their 'system 1' is a lot better than their 'system 2', to borrow the analogy from Thinking, Fast and Slow). Generally speaking, if they don't get a useful answer in one shot or with relatively simple, objective feedback, they're just going to flop around and agree with whatever you last suggested.
replies(1): >>44463091 #
7. adastra22 ◴[] No.44462860[source]
> In reality, it will save you some typing if you already know what to do.

This goes too far in the other direction. LLMs can do far more than merely save you typing. I have successfully used coding agents to implement code which, at the outset, I had no business writing, as it was far outside my domain expertise. By the end I'd gained enough understanding to be able to review the output and guide the LLM towards a correct solution, far faster than the weeks or months it would have taken to acquire enough background info to make an attempt at coding it myself.

replies(1): >>44464001 #
8. ◴[] No.44463091{3}[source]
9. nottorp ◴[] No.44464001{3}[source]
I'd love it if everyone who posts statements like this would also include a link to their professional experience, or at least state how many years they've been a developer.

I'm sure I can do what you describe as well. I've actually used LLMs to get current on some things I knew the (old) basics of, and they were indeed useful, as you say.

I'm also sure it wouldn't help your interns to grow to your level.

replies(1): >>44468135 #
10. prmph ◴[] No.44464502[source]
Exactly.

But, to make a comparison here with Claude Code: I was initially impressed with Gemini's ability to keep a conversation on track, but it rarely gets the hint when I express annoyance with its output. Claude has an uncanny ability to guess what I find wrong with its output (even when I just respond with "WTF!") and will try to fix it, often in actually useful ways; Gemini just keeps repeating its last output after acknowledging my annoyance.

11. alittlebee ◴[] No.44465322[source]
I like this perspective: it's not a replacement for thinking, it's a replacement for typing what has already been typed countless times.

12. swat535 ◴[] No.44467550[source]
Yes, if they make a mistake and you point it out, they will make another mistake. In many of those cases, it's better to scrap it and start with a fresh context, try a different prompt (or provide a smaller context).

Also, more importantly, they will ignore negative directives: telling one "don't do X" will get ignored. You are better off using positive directives instead, e.g. "do Y" rather than "don't do X".

13. adastra22 ◴[] No.44468135{4}[source]
Why? LLMs are fantastic tutors, if used right. It's about how you use them, not the background knowledge you bring.