
625 points | lukebennett | 1 comment
aresant No.42139647
Taking a holistic view informed by a disruptive OpenAI / AI / LLM Twitter habit, I would say this is AI's "What gets measured gets managed" moment, and the narrative will change.

This is supported by general observations and, more recently, by this tweet from an OpenAI engineer that Sam responded to and engaged with (1):

"scaling has hit a wall and that wall is 100% eval saturation"

Which I interpret to mean that, in his view, models are no longer yielding significant performance improvements because they have maxed out existing evaluation metrics.

Are those evaluations (or even LLMs) the RIGHT measures to achieve AGI? Probably not.

But have they been useful tools to demonstrate that the confluence of compute, engineering, and tactical models is leading towards significant breakthroughs in artificial (computer) intelligence?

I would say yes.

And is that in turn driving the funding, power innovation, public policy, etc. needed to take the next step?

I hope so.

(1) https://x.com/willdepue/status/1856766850027458648

1. ActionHank No.42139702
> And is that in turn driving the funding, power innovation, public policy, etc. needed to take the next step?

They are driving the shoveling of VC money into a furnace to power their servers.

Should that money run dry before they hit another breakthrough, "AI" popularity is going to drop like a stone. I believe that to be a far more likely outcome than AGI, or even the next big breakthrough.