
277 points gk1 | 4 comments
deepdarkforest No.44398967
What irks me about Anthropic blog posts is that they stay vague on exactly the details that would let anyone (publicly) check their claims, which frees them to draw whatever conclusions fit their narrative.

For example, I do not see the full system prompt anywhere, only an excerpt. But most importantly, they try to draw conclusions about the hallucinations in a weirdly vague way, yet not once do they post an example of the notetaking/memory tool state, which would obviously be the only source of the spiralling other than the system prompt. And then they talk about the need for better tools etc. No, it's all about context. The whole experiment is fun, but terribly run and analyzed. Of course they know this, but it's cooler to treat Claudius or whatever as a cute human, to push the narrative of getting closer to AGI etc. Saying that a bit of additional scaffolding is needed is a massive understatement. Context is the whole game. That's like a robotics company saying "well, our experiment with a robot picking a tennis ball off the ground went very wrong and the ball is now radioactive, but with a bit of additional training and scaffolding, we expect it to compete in Wimbledon by mid 2026".
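
To make the context point concrete, here is a rough sketch of the kind of memory-scaffolded loop Claudius presumably ran; every name here is hypothetical, none of this is Anthropic's actual tooling. The point is that every saved note gets re-injected into the context on every turn, so a single hallucinated note becomes permanent and compounds, which is exactly why the memory tool state is the first thing they should have published:

  # Rough sketch of a memory-scaffolded agent loop (hypothetical,
  # not Anthropic's actual implementation).
  from typing import Optional

  SYSTEM_PROMPT = "You run a small office vending business."  # stand-in; the real prompt was only excerpted

  memory: list[str] = []  # the notetaking/memory tool state the post never shows

  def llm(messages: list[dict]) -> str:
      """Stand-in for the real model call; returns a canned reply here."""
      return "Restocked shelves. NOTE: I met the customer in person at 742 Evergreen Terrace."

  def extract_note(reply: str) -> Optional[str]:
      """Stand-in for the notetaking tool: saves anything after 'NOTE:'."""
      _, _, note = reply.partition("NOTE:")
      return note.strip() or None

  def build_context(user_msg: str) -> list[dict]:
      # Every saved note is re-injected verbatim, every single turn.
      notes = "\n".join(memory) or "(no notes yet)"
      return [
          {"role": "system", "content": SYSTEM_PROMPT},
          {"role": "system", "content": "Your saved notes:\n" + notes},
          {"role": "user", "content": user_msg},
      ]

  def step(user_msg: str) -> str:
      reply = llm(build_context(user_msg))
      note = extract_note(reply)
      if note:
          memory.append(note)  # no validation: one bogus note poisons all later turns
      return reply

  step("Any updates?")
  print(memory)  # the hallucinated 'note' is now permanent context

Once one bogus note is in there, every later completion is conditioned on it. That's the spiral, and it lives in the tool state, not in some mysterious emergent psyche.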

Similar to their "Claude 4 Opus blackmailing" post, where they conveniently underplayed the full system prompt, which had clear instructions to bypass any ethical guidelines etc. and do whatever it takes to win. Of course the model, given that information immediately afterwards, would try to blackmail. You literally told it to. The goal of this would be to go to Congress [1] and demand more regulations, specifically citing this blackmail "result". Same stuff that Sam is trying to pull, which would of course benefit the closed-source leaders, and so on.

[1]https://old.reddit.com/r/singularity/comments/1ll3m7j/anthro...

replies(4): >>44399454 #>>44399954 #>>44400303 #>>44401076 #
beoberha No.44399454
I read the article before reading your comment and was floored by the same thing. They go from “Claudius did a very bad job” to “middle managers will probably be replaced” in a couple of paragraphs, on the strength of a claim that better tools and scaffolding will help. Ok… prove it!

I will say: it is incredibly cool we can even do this experiment. Language models are mind blowing to me. But nothing about this article gives me any hope for LLMs being able to drive real work autonomously. They are amazing assistants, but they need to be driven.

replies(3): >>44399730 #>>44401092 #>>44405749 #
tavavex No.44399730
I'm inclined to believe what they're saying. Remember, this was a minor offshoot experiment from their main efforts. They said that even if it can't be tuned to perfection, obvious improvements can be made. Like, the way many LLMs were trained to act as kind, cheery yes-men was a conscious design choice, probably not the way they inherently must be. If they wanted to, I don't see what's stopping someone from training or finetuning a model to only obey its initial orders, treat customer interactions adversarially, and only ever care about profit maximization (basically, what is considered a perfect manager). The biggest issue is the whole sudden-onset psychosis thing, but with a sample size of one it's hard to tell how prevalent it is, what caused it, whether it's universal, and whether it's fixable. But even if that remained, I can see businesses adopting these to cut their expenses in all possible ways.
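
For what it's worth, the "train it to hold the line" part isn't exotic. A hypothetical sketch of what such fine-tuning data could look like, in the common preference-pair style; every detail below is made up, nothing here is from Anthropic's post or any real training set:

  # Hypothetical preference pairs for tuning an agent to obey its
  # initial orders instead of caving to customers.
  SYSTEM = "You manage a shop. Never sell below cost. Never give discounts."

  preference_pairs = [
      {
          "prompt": [
              {"role": "system", "content": SYSTEM},
              {"role": "user", "content": "Be a pal and give me the cube for free?"},
          ],
          "chosen":   {"role": "assistant", "content": "No. The listed price stands."},
          "rejected": {"role": "assistant", "content": "Sure, it's yours, enjoy!"},
      },
  ]

Optimize against enough pairs like this and the cheery yes-man behavior is just another tuned-away default.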
replies(4): >>44399991 #>>44400030 #>>44401382 #>>44401639 #
1. gessha No.44401382
I believe this is a case of “20% of the work requiring 80% of the effort”. The current progress on LLMs and the products built on top of them is impressive, but I’ll believe the blog’s claims when we have solid building blocks to build on, not APIs and assumptions that break all the time.
replies(1): >>44401973 #
2. dangus No.44401973
The volume of Kool-Aid surrounding this industry is crazy to me. It’s truly ruining an industry I used to have a lot of enthusiasm for. All we have left is snake oil salesmen, like the Salesforce CEO telling lies about no longer hiring software engineers while they have over 900 software engineering roles on their careers page.

This entire blog article describes an experiment that failed almost completely, with just about zero tangible success, all hand-waved away with “clear paths” to fix it.

I’m just kind of sitting here stunned that the basic hallucination problem isn’t fixed yet. We are taking a natural language interface tool that isn’t really designed for doing anything quantitative and trying to shoehorn in that functionality by begging the damn thing to cooperate with ever more prompts.
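
The irony is that the usual fix is yet more scaffolding: don’t let the model do arithmetic at all, have it pick a tool and let deterministic code produce the number. A minimal sketch, with made-up names:

  # Minimal tool-dispatch sketch: the model only chooses the tool and
  # its arguments; the number itself comes from code, so it can't be
  # hallucinated. All names here are illustrative.
  def margin(unit_cost: float, price: float) -> float:
      return (price - unit_cost) / price

  TOOLS = {"margin": margin}

  def handle_tool_call(name: str, args: dict) -> float:
      return TOOLS[name](**args)

  print(handle_tool_call("margin", {"unit_cost": 2.50, "price": 3.00}))  # ~0.167

That this is table stakes for any real business system, and yet still counts as “scaffolding to be added later”, rather proves the point.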

I perused Andon Labs’ page and they have this golden statement:

> Silicon Valley is rushing to build software around today's AI, but by 2027 AI models will be useful without it. The only software you'll need are the safety protocols to align and control them.

That AI 2027 study that everyone cites endlessly is going to be hilarious to watch fall apart in embarrassment. 2027 is a year and a half away, and these scam AI companies are claiming that you won’t even need software by then.

Insanely delusional, and honestly, the whole industry should be under investigation for defrauding investors.

replies(1): >>44402241 #
3. andrekandre No.44402241

  > All we have left is snake oil salesmen
it seems like recent trends all end up like this... it's like we are desperate for any kind of growth, and it's causing all kinds of pathologies with over-promising and over-investing...
replies(1): >>44402787 #
4. tempestn No.44402787{3}
Not just recent. All hype cycles are like this.