
760 points MindBreaker2605 | 2 comments
mehulashah ◴[] No.45899804[source]
Most of the folks on this topic are focused on Meta and Yann’s departure. But, I’m seeing something different.

This is the weirdest technology market I've seen. Researchers are getting rewarded with VC money to try what remains a science experiment. "Science experiment" used to be a bad word; now it gets rewarded with billions of dollars in valuation.

DebtDeflation ◴[] No.45899899[source]
That's been true for the last year or two, but it feels like we're at an inflection point. All of the announcements from OpenAI for the last couple of months have been product focused - Instant Checkout, AgentKit, etc. Anthropic seems 100% focused on Claude Code. We're not hearing as much about AGI/Superintelligence (thank goodness) as we were earlier this year, in fact the big labs aren't even talking much about their next model releases. The focus has pivoted to building products from existing models (and building massive data centers to support anticipated consumption).
ximeng ◴[] No.45901012[source]
If Claude Code is Anthropic’s main focus, why are they not responding to some of the most commented issues on their GitHub? https://github.com/anthropics/claude-code/issues/3648 has people begging for feedback and saying they’re moving to OpenAI; it has been open since July, and there are similar issues with 100+ comments.
sidewndr46 ◴[] No.45901217[source]
It's entirely possible they don't have the ability in-house to resolve it. Based on the report, this is a user interface issue. It could be as simple as some strange setting they enabled somewhere. But it's also possible it's caused by a dependency three or four levels removed from their product. Worse still, it could be the result of interactions between multiple dependencies that only become apparent at runtime.
1. bitbuilder ◴[] No.45901727[source]
>It's entirely possible they don't have the ability in house to resolve it.

I've started breathing a little easier about the possibility of AI taking all our software engineering jobs after using Anthropic's dev tools.

If the people making the models and tools that are supposed to take all our jobs can't even fix their own issues in a dependable and expedient manner, then we're probably going to be ok for a bit.

This isn't a slight against Anthropic; I love their products and use them extensively. It's more a recognition that the harder parts of engineering are still quite hard, and hard in a way LLMs just don't seem well suited for.

2. warkdarrior ◴[] No.45903193[source]
AGI/ASI does not need perfect terminal rendering to crush all humans like bugs.