
579 points paulpauper | 7 comments
wg0 ◴[] No.43609507[source]
Unlike many, I find the author's complaints spot on.

Once all the AI batch startups have sold subscriptions to the cohort and there's no further market growth, because businesses outside it don't want to roll the dice on a probabilistic model that doesn't really understand anything and is rather a clever imitation machine built on the content it has seen, the AI bubble will burst, with more startups packing up by the end of 2026, 2027 at the latest.

replies(1): >>43612749 #
consumer451 ◴[] No.43612749[source]
I would go even further than TFA. In my personal experience using Windsurf daily, Sonnet 3.5 is still my preferred model. 3.7 makes many more changes that I did not ask for, often breaking things. This is an issue with many models, but it got worse with 3.7.
replies(3): >>43612845 #>>43612928 #>>43616646 #
behnamoh ◴[] No.43612845[source]
3.7 is like a wild horse. You really have to ground it with clear instructions. It sucks that it doesn't know that automatically, but it's tameable.
replies(1): >>43613301 #
consumer451 ◴[] No.43613301[source]
Could you share any successful prompting techniques for grounding 3.7, even just a project-specific example?
replies(1): >>43614192 #
behnamoh ◴[] No.43614192[source]
I use this:

    I don't want to drastically change my current code, nor do I like being told to create several new files and numerous functions/classes to solve this problem. I want you to think clearly and be focused on the task and don't get wild! I want the most straightforward approach which is elegant, intuitive, and rock solid.
replies(1): >>43620515 #
pdimitar ◴[] No.43620515[source]
As a caveat, I told it to make minimal code for one task and it completely skipped a super important aspect of it, justifying it by saying that I said "minimal".

Not cool, Claude 3.7, not cool.

replies(1): >>43630506 #
namaria ◴[] No.43630506{3}[source]
Doesn't trading prompt patches to work around undefined behavior from the model make you wonder whether this is a net positive?
replies(1): >>43630677 #
1. pdimitar ◴[] No.43630677{4}[source]
Huh? I'm not even sure what you said, can you clarify?
replies(1): >>43630902 #
2. namaria ◴[] No.43630902[source]
I thought the value proposition of using LLMs to code was the lower cognitive load of just describing what you want in natural language. But if writing the prompt turns out to be this involved, if you end up trading snippets on forums, and if you often run into undefined behavior (the thing you described turned out to be ambiguous to the LLM and it gave you something you did not expect at all)...

I have to wonder, wouldn't just writing the code be more productive in the end?

replies(1): >>43631258 #
3. pdimitar ◴[] No.43631258[source]
Yes and no.

Yes: if you are an expert in the area. In this case I needed something fairly specific that I am far from an expert in. I know both Elixir and Rust quite well, but I couldn't quickly figure out how to wrap a Rust object in just the right container/data type(s) so it can be safely accessed from any OS thread, even though the object at hand is `Send` but not `Sync`. And I wanted it done without a mutex (one possible pattern is sketched below).

No: because most programming languages are just verbose. Many times I know _exactly_ what I will write 10 minutes later but I still have to type it out. If I can describe it to an LLM well enough then part of that time is saved.
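
One common pattern for that "without a mutex" requirement is to confine the `Send`-but-not-`Sync` value to a single owner thread and talk to it over a channel, so no other thread ever touches the value directly. Below is a minimal sketch of that pattern, with made-up names (`Resource`, `Handle`) and `Cell<u64>` standing in for the real type; it is only an illustration, not necessarily what the LLM (or the commenter) ended up with.

    // Minimal sketch, assumed names: a Send-but-not-Sync value confined to
    // one owner thread and driven over a channel, with no Mutex involved.
    use std::cell::Cell;
    use std::sync::mpsc;
    use std::thread;

    // Stand-in for the real type: Cell<u64> is Send but not Sync.
    struct Resource {
        counter: Cell<u64>,
    }

    // Requests the owner thread knows how to service.
    enum Request {
        Increment,
        Get(mpsc::Sender<u64>),
        Shutdown,
    }

    // Cloneable handle that any OS thread can hold; it only wraps a Sender.
    #[derive(Clone)]
    struct Handle {
        tx: mpsc::Sender<Request>,
    }

    impl Handle {
        fn spawn() -> (Self, thread::JoinHandle<()>) {
            let (tx, rx) = mpsc::channel();
            let join = thread::spawn(move || {
                // The Resource is moved in here (it is Send) and never leaves
                // this thread, so its lack of Sync never comes into play.
                let res = Resource { counter: Cell::new(0) };
                for req in rx {
                    match req {
                        Request::Increment => res.counter.set(res.counter.get() + 1),
                        Request::Get(reply) => {
                            let _ = reply.send(res.counter.get());
                        }
                        Request::Shutdown => break,
                    }
                }
            });
            (Handle { tx }, join)
        }

        fn increment(&self) {
            let _ = self.tx.send(Request::Increment);
        }

        fn get(&self) -> u64 {
            // The reply comes back over a one-shot channel.
            let (reply_tx, reply_rx) = mpsc::channel();
            let _ = self.tx.send(Request::Get(reply_tx));
            reply_rx.recv().unwrap_or(0)
        }
    }

    fn main() {
        let (handle, join) = Handle::spawn();
        // Any number of OS threads can use the handle concurrently.
        let workers: Vec<_> = (0..4)
            .map(|_| {
                let h = handle.clone();
                thread::spawn(move || h.increment())
            })
            .collect();
        for w in workers {
            w.join().unwrap();
        }
        println!("count = {}", handle.get());
        let _ = handle.tx.send(Request::Shutdown);
        join.join().unwrap();
    }

The trade-off is that every access becomes a message round-trip instead of a lock acquisition; whether that beats a plain Mutex depends entirely on the use case.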

Mind you, I am usually an LLM hater. They are over-glorified, they don't "reason" and they don't "understand" -- it baffles me to this day that an audience seemingly as educated as HN believes in that snake oil.

That being said, they are still a useful tool, and as good engineers it's on us to recognize a tool's utility, its strong and weak uses, and adapt our workflows accordingly. I believe many others and I do just that.

The rest... believe in forest nymphs.

So yeah. I agree that a significant part of the time it's just quicker to type it out. But people like me are good at articulating their needs, so with us it's often a coin toss. I choose to type the code out myself more often than not because (1) I don't want to pay for any LLM yet and (2) I don't want to forget my craft, which I love to this day and never did just for the money.

replies(1): >>43631277 #
4. namaria ◴[] No.43631277{3}[source]
Thanks for the perspective. I don't feel love or hate; I'm just perplexed (haha) by the discourse around it sometimes.
replies(1): >>43631353 #
5. pdimitar ◴[] No.43631353{4}[source]
It's difficult for me not to hate LLMs when there are literally hundreds of billions at stake and people are lying through their teeth for money, as they always do.

Which does lead to all the weird discourse around them indeed.

replies(1): >>43636677 #
6. namaria ◴[] No.43636677{5}[source]
> there are literal hundreds of billions at stake and people are lying through their teeth for money, as they always do.

That's pretty much how I've felt about life in this world ever since I can remember.

replies(1): >>43637690 #
7. pdimitar ◴[] No.43637690{6}[source]
And you were correct then, and you're correct now.

I have no clue where it came from, but I've been like that since I was, I don't know, 12 or 13 years old, and I still am (45 now).