But that's not what the marketing says. The marketing says it will do your entire job for you.
In reality, it will save you some typing if you already know what to do.
On HN at least, where most people come from startup/hustle culture and are experts in something, they don't think long-term enough to see the consequences for non-experts.
I'm not sure it's a lot of value. It probably is in the short term, but in the long run...
There have already been studies saying that you don't retain the information about what an LLM does for you. Even if you are already an expert (a status you attained the traditional way), that cuts you off from all those tiny improvements that happen every day without you noticing.
I've found that they get pretty wishy-washy when you correct them. As an example, yesterday I was working on porting a function from the open-source CUE4Parse project from C# to Python (for a hobby project), and the LLM (Gemini 2.5 Pro) suggested the following translation of a C# method:
@staticmethod
def from_name(name: str) -> 'FPackageId':
    # C# Encoding.Unicode is UTF-16 LE in Python
    lower_name_bytes = name.lower().encode('utf-16-le')
    hashed_id = CityHash64(lower_name_bytes)
    return FPackageId(hashed_id)
I noted that the original used a custom ToLower() implementation:

> This custom ToLower(), does that produce the same result as .lower() in Python?
Gemini answered with a lot of text and concluded: "You should use Python's standard lower() method for your port."
I pushed back with:
> Right, but for this to work (looking up an asset by its hash as contained in global.utoc), I probably have to match the behavior of Unreal Engine...
Gemini: You are absolutely right. My apologies, your insight is spot on. For the purpose of matching hashes that are pre-calculated by Unreal Engine and stored in files like `global.utoc`, you **must** replicate the engine's behavior exactly. My previous advice to use Python's `.lower()` was incorrect in this specific context.
You need to implement a function that behaves identically to the C# `ToLower` you highlighted. The engine uses this specific, non-Unicode-aware, ASCII-only lowercasing function before hashing the path to generate the `FPackageId`. Using Python's standard `.lower()` would produce a different byte sequence for any path containing non-English characters, leading to a hash mismatch.
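For reference, here is a minimal sketch of what that replication might look like in Python. This is my own illustration, not Gemini's output; it assumes the custom ToLower only remaps ASCII 'A'-'Z' (as described above) and that CityHash64 comes from the cityhash package.

from cityhash import CityHash64  # assumed source of CityHash64

def ascii_lower(s: str) -> str:
    # Lowercase only ASCII 'A'-'Z', leaving every other character untouched,
    # unlike str.lower(), which also lowercases non-ASCII letters.
    return ''.join(chr(ord(c) + 32) if 'A' <= c <= 'Z' else c for c in s)

class FPackageId:
    def __init__(self, value: int) -> None:
        self.value = value

    @staticmethod
    def from_name(name: str) -> 'FPackageId':
        # UTF-16 LE matches C#'s Encoding.Unicode
        lower_name_bytes = ascii_lower(name).encode('utf-16-le')
        return FPackageId(CityHash64(lower_name_bytes))

With something like this, ascii_lower("Ä") stays "Ä" while "A" becomes "a", so the hashed bytes can line up with what the engine precomputed, assuming the ASCII-only behavior described above.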
From my experience this is representative of the typical LLM interaction once one ventures into niche topics like Unreal Engine modding.

This goes too far in the other direction. LLMs can do far more than merely save you typing. I have successfully used coding agents to implement code which, at the outset, I had no business writing, as it was far outside my domain expertise. By the end I'd gained enough understanding to review the output and guide the LLM towards a correct solution, far faster than the weeks or months it would have taken to acquire enough background to attempt coding it myself.
I'm sure I can do what you describe as well. I've actually used LLMs to get myself current on some stuff I knew the (old) basics of, and they were indeed useful, as you say.
I'm also sure it wouldn't help your interns to grow to your level.
But, to make a comparison here with Claude Code: I was initially impressed with Gemini's ability to keep a conversation on track, but it rarely gets the hint when I express annoyance with its output. Claude has an uncanny ability to guess what I find wrong with its output (even when I just respond with "WTF!") and will try to fix it, often in actually useful ways; Gemini just keeps repeating its last output after acknowledging my annoyance.
Also, more importantly, they tend to ignore negative directives: telling one "don't do X" will simply get ignored. You are better off using positive directives instead, e.g. "use an iterative loop" rather than "don't use recursion".