prisenco
For junior devs wondering if they picked the right path: remember that the world still needs software, AI still breaks down at even a small bit of complexity, and the first to abandon this career will be the ones who only did it for the money anyway. They'll do the same once the trades have a rough year (as they always do).

In the meantime, keep learning and practicing CS fundamentals, ignore the hype, and build something interesting.

kragen
Nobody has any idea what AI is going to look like five years from now. Five years ago we had GPT-2; AI couldn't code at all. Five years from now AI might still break down at even a small bit of complexity, or it might be installing air conditioners, or it might be colonizing Mercury and putting humans in zoos.

Anyone who tells you they know what the future looks like five years from now is lying.

noosphr
Unless we have another breakthrough like attention, we do know that AI will keep struggling with context and that cost will keep growing quadratically with context length.

On a codebase of 10,000 lines, any action will cost on the order of 100,000,000 AI units. On one of 1,000,000 lines, it will cost 1,000,000,000,000 AI units.
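
A back-of-the-envelope sketch of that quadratic assumption (treating one line of code as one unit of context, which is of course a simplification):

    # cost of attention grows with the square of the context you feed it
    def attention_cost(context_lines: int) -> int:
        return context_lines ** 2

    print(attention_cost(10_000))     # 100_000_000
    print(attention_cost(1_000_000))  # 1_000_000_000_000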

I work on these things for a living, and no one else ever seems to think two steps ahead about what the mathematical limitations of the transformer architecture mean for transformer-based applications.

kragen
It's only been 8 years since the attention breakthrough. Since then we've had "sparsely-gated MoE", RLHF, BERT, "Scaling Laws", DALL-E, LoRA, CoT, AlphaFold 2, "Parameter-Efficient Fine-Tuning", and DeepSeek's training cost breakthrough. AI researchers rather than physicists or chemists won the Nobel Prizes in physics and (for AlphaFold) chemistry last year. Agentic software development, MCP, and video generation are more or less new this year.

Humans also keep struggling with context, so while large contexts may limit AI performance, that won't necessarily prevent AI from being strongly superhuman.

lossolo
> Since then we've had "sparsely-gated MoE", RLHF, BERT, "Scaling Laws", Dall-E, LoRA, CoT, AlphaFold 2, "Parameter-Efficient Fine-Tuning", and DeepSeek's training cost breakthrough.

OK, I will bite.

So "Sparsely-gated MoE" isn't some new intelligence; it's a sharding trick: you trade parameter count for FLOPs/latency with a router. And MoE predates transformers anyway.
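
A minimal sketch of what that routing amounts to (numpy, with shapes and names purely illustrative, not taken from any particular model):

    import numpy as np

    def moe_layer(x, experts, gate_w, k=2):
        # Sparsely-gated MoE in miniature: a router picks the top-k experts per token.
        # x: (d,) token activation; experts: list of callables; gate_w: (d, n_experts).
        logits = x @ gate_w                           # router scores, shape (n_experts,)
        top = np.argsort(logits)[-k:]                 # indices of the k highest-scoring experts
        weights = np.exp(logits[top] - logits[top].max())
        weights /= weights.sum()                      # softmax over just the selected experts
        # only k experts run: parameter count grows, FLOPs per token stay roughly flat
        return sum(w * experts[i](x) for w, i in zip(weights, top))

    # toy usage: four "experts" that are just random linear maps
    d, n_experts = 16, 4
    experts = [lambda x, W=np.random.randn(d, d): W @ x for _ in range(n_experts)]
    gate_w = np.random.randn(d, n_experts)
    print(moe_layer(np.random.randn(d), experts, gate_w).shape)   # (16,)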

RLHF is packaging: supervised finetuning on instructions, learn a reward model, then nudge the policy. That's a training-objective swap plus preference data. It's useful, but not a breakthrough.

CoT is a prompting hack that forces the same model to externalize intermediate tokens. The capability was already there; you're just sampling a longer trajectory. It's UX for sampling.
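
Concretely, the whole "technique" fits in a prompt string (an illustrative example, not anyone's production prompt):

    # same weights, different sampling trajectory
    question = ("Q: A bat and a ball cost $1.10 in total. The bat costs $1.00 "
                "more than the ball. How much does the ball cost? A:")
    plain_prompt = question
    cot_prompt = question + " Let's think step by step."
    # the second prompt just makes the model emit intermediate tokens before the answer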

Scaling laws are an empirical fit telling you to "buy more compute and data." That's a budgeting guideline, not new math or a new architecture. https://www.reddit.com/r/ProgrammerHumor/comments/8c1i45/sta...
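
For what it's worth, the whole fit is one line. The constants below are roughly the published Chinchilla numbers, quoted from memory, so treat them as illustrative:

    # fitted power law: irreducible loss + a term that shrinks with parameter
    # count N + a term that shrinks with training tokens D
    def predicted_loss(N, D, E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
        return E + A / N**alpha + B / D**beta

    print(predicted_loss(N=70e9, D=1.4e12))   # ~1.9: "buy more compute and data" as an equation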

LoRA is linear algebra 101: low-rank adapters to cut training cost without touching the full weights. The base capability still comes from the giant pretrained transformer.
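
The whole idea in a few lines of numpy (illustrative shapes):

    import numpy as np

    d, k, r = 4096, 4096, 8        # full weight is d x k; adapter rank r << d
    W = np.random.randn(d, k)      # frozen pretrained weight, never updated
    A = np.random.randn(r, k) * 0.01
    B = np.zeros((d, r))           # zero-init so the adapter starts as a no-op

    x = np.random.randn(k)
    y = W @ x + B @ (A @ x)        # train only A and B
    # trainable params: d*r + r*k = 65,536 vs d*k = 16,777,216 for a full finetune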

AlphaFold 2's magic is mostly attention plus a LOT of domain data and priors (MSAs, known structures, evolutionary signal). Again: an attention core plus data engineering.

"DeepSeek’s cost breakthrough" is systems engineering.

Agentic software dev/MCP is orchestration: middleware and protocols. It helps you use the model; it doesn't make the model smarter.

Video generation? Diffusion with temporal conditioning and better consistency losses. It’s DALL-E style tech stretched across time with tons of data curation and filtering.

Most headline "wins" are compiler and kernel wins: FlashAttention, paged KV-cache, speculative decoding, distillation, quantization (8/4 bit), ZeRO/FSDP/TP/PP... These only move the cost curve, not the intelligence.
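
To make that concrete, an 8-bit quantization "win" boils down to something like this (a toy symmetric scheme, not any specific library's kernel):

    import numpy as np

    def quantize_int8(w):
        # symmetric per-tensor int8: same model, ~4x less memory than fp32
        scale = np.abs(w).max() / 127.0
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    w = np.random.randn(1024, 1024).astype(np.float32)
    q, scale = quantize_int8(w)
    print(np.abs(w - q.astype(np.float32) * scale).max())   # small error: cheaper, not smarter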

The biggest single driver over the last few years has been the data: dedup, document quality scores, aggressive filtering, mixture balancing (web/code/math), synthetic bootstrapping, eval-driven rewrites, and so on. You can swap half a dozen training "tricks" and get similar results if your data mix and scale are right.
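
The unglamorous version of that, as a sketch (exact dedup only; real pipelines layer near-dup detection, quality scoring, and mixture balancing on top):

    import hashlib

    def exact_dedup(docs):
        # drop documents whose normalized text hashes to something already seen
        seen, kept = set(), []
        for doc in docs:
            h = hashlib.sha256(" ".join(doc.split()).lower().encode()).hexdigest()
            if h not in seen:
                seen.add(h)
                kept.append(doc)
        return kept

    print(exact_dedup(["Hello  world", "hello world", "something else"]))   # 2 docs survive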

For me, a real post-attention "breakthrough" would be something like training that learns abstractions with sample efficiency far beyond what scaling laws predict, reliable formal reasoning, or causal/world-model learning that transfers out of distribution. None of the things you listed does that.

Almost everything since attention is optimization, ops, and data curation. Give me the exact pretrain mix, filtering heuristics, and finetuning datasets for Claude/GPT-5, and without peeking at the secret-sauce architecture I could get close just by matching tokens, quality filters, and training schedule. The "breakthroughs" are mostly better ways to spend compute and clean data, not new ways to think.

BobbyTables2
Indeed. I’m shocked that we train “AI” pretty much as one would build a fancy auto-complete.

Not necessarily a bad approach, but it feels like something is missing for it to be "intelligent".

Should really be called “artificial knowledge” instead.

kragen
"What do you mean, they talk?"

"They talk by flapping their meat at each other!"