
The Claude Code Framework Wars

(shmck.substack.com)
125 points ShMcK | 6 comments
troupo ◴[] No.45156574[source]
> a set of rules, roles, and workflows that make its output predictable and valuable.

Let me stop you right there. Are you seriously talking about predictability for a non-deterministic black box over which you have no control?

replies(6): >>45156592 #>>45156630 #>>45156764 #>>45156773 #>>45158223 #>>45159030 #
raincole ◴[] No.45156764[source]
Yes, and there is absolutely nothing wrong with that. Living creatures are mostly black boxes. That doesn't mean we don't aim to make medicine with predictable effects (and side effects).
replies(2): >>45156832 #>>45156841 #
troupo ◴[] No.45156832[source]
Medicine that can either kill you, cure you, or have no effect at any given time for the same disease is quite unlikely to even pass certification.

Do you know why?

replies(1): >>45156946 #
1. raincole ◴[] No.45156946[source]
That is exactly my point.
replies(1): >>45158844 #
2. troupo ◴[] No.45158844[source]
Then no one understands your point.
replies(1): >>45159112 #
3. raincole ◴[] No.45159112[source]
Maybe you should try reading the other comments below your original comment; they mostly make the same point, and I won't bother repeating what everyone else has already said.

I'll put it concisely:

Trying to build predictable results on top of unpredictable, not fully understood mechanisms is an extremely common practice in every field.

But anyway, you think LLMs are just a coin toss, so I won't engage with this sub-thread anymore.

replies(1): >>45159513 #
4. troupo ◴[] No.45159513{3}[source]
And you should read replies to those replies, including yours.

Nothing in the current AI world is as predictable as, say, the medicine you can buy or get prescribed. None of the shamanic "just one more prompt bro" rituals has the predictive power of physical laws. Etc.

You could reflect on that.

> But anyway, you think LLMs are just a coin toss

A person telling me to "try to read comments" couldn't read and understand my comment.

replies(1): >>45165771 #
5. touristtam ◴[] No.45165771{4}[source]
> Nothing in the current AI world is as predictable as, say, the medicine you can buy or you get prescribed.

Do you know there are approved drugs that were put on the market to treat one ailment and were later shown to have an effect on another, or shown to have unwanted side effects, and were therefore repurposed or withdrawn? The whole drug _market_ is full of them, and all that is needed is enough trials to prove the desired effect...

The LLM output is yours to decide if it is relevant to your work or not, but it seems that your experience is consistently subpar with what others have reported.

replies(1): >>45173725 #
6. troupo ◴[] No.45173725{5}[source]
> Do you know there are

Yes, I know. That doesn't really disprove my point.

> all that is needed is enough trials to prove the desired effect

"All that is needed," lol. You mean multi-stage trials with baselines, control groups, testing against placebos, etc.?

Compared to "yolo just believe me" of LLMs.

> The LLM output is yours to decide if it is relevant to your work or not, but it seems that your experience is consistently subpar with what others have reported.

Indeed, because all we have to do with those reports is have blind, unquestionable faith. "Just one more prompt, and I swear it will be 100% more efficient, with literally nothing to judge efficiency by, no baselines, nothing."