64 points m-hodges | 8 comments
prisenco ◴[] No.45078963[source]
For junior devs wondering if they picked the right path, remember that the world still needs software, AI still breaks down at even a small bit of complexity, and the first ones to abandon this career will be those who only did it for the money anyway. They'll do the same once the trades have a rough year (as they always do).

In the meantime, keep learning and practicing CS fundamentals, ignore the hype, and build something interesting.

replies(5): >>45079011 #>>45079019 #>>45079029 #>>45079186 #>>45079322 #
kragen ◴[] No.45079322[source]
Nobody has any idea what AI is going to look like five years from now. Five years ago we had GPT-2; AI couldn't code at all. Five years from now AI might still break down at even a small bit of complexity, or it might be installing air conditioners, or it might be colonizing Mercury and putting humans in zoos.

Anyone who tells you they know what the future looks like five years from now is lying.

replies(2): >>45079366 #>>45079457 #
noosphr ◴[] No.45079457[source]
Unless we have another breakthrough like attention, we do know that AI will keep struggling with context, and that costs will grow quadratically with context length.

On a codebase of 10,000 lines, any action will cost on the order of 100,000,000 AI units. On one with 1,000,000 lines, it will cost 1,000,000,000,000 AI units.
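A literal reading of that arithmetic (treating one line as one unit of context, as above; real token counts and constant factors will differ, this is only a sketch of the scaling):

    # Self-attention compares every position with every other position,
    # so the cost term in question grows with the square of the context.
    def attention_units(context_len: int) -> int:
        return context_len * context_len      # n^2 pairwise interactions

    for n in (10_000, 1_000_000):             # "lines" treated as context length
        print(f"{n:,} -> {attention_units(n):,} AI units")
    # 10,000 -> 100,000,000; 1,000,000 -> 1,000,000,000,000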

I work on these things for a living, and no one else ever seems to think two steps ahead about what the mathematical limitations of the transformer architecture mean for transformer-based applications.

replies(1): >>45079562 #
kragen ◴[] No.45079562[source]
It's only been 8 years since the attention breakthrough. Since then we've had "sparsely-gated MoE", RLHF, BERT, "Scaling Laws", Dall-E, LoRA, CoT, AlphaFold 2, "Parameter-Efficient Fine-Tuning", and DeepSeek's training cost breakthrough. AI researchers rather than physicists or chemists won the Nobel Prizes in physics and (for AlphaFold) chemistry last year. Agentic software development, MCP, and video generation are more or less new this year.

Humans also keep struggling with context, so while large contexts may limit AI performance, they won't necessarily prevent AI from becoming strongly superhuman.

replies(2): >>45080001 #>>45080114 #
1. lossolo ◴[] No.45080001[source]
> Since then we've had "sparsely-gated MoE", RLHF, BERT, "Scaling Laws", Dall-E, LoRA, CoT, AlphaFold 2, "Parameter-Efficient Fine-Tuning", and DeepSeek's training cost breakthrough.

OK, I will bite.

So "Sparsely-gated MoE" isn’t some new intelligence, it's a sharding trick. You trade parameter count for FLOPs/latency with a router. And MoE predates transformers anyway.

RLHF is packaging. Supervised finetune on instructions, learn a reward model, then nudge the policy. That's a training objective swap plus preference data. It's useful, but not a breakthrough.
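The recipe in toy form, with three "responses" instead of text (a Bradley-Terry reward fit plus a REINFORCE nudge; purely illustrative, nothing like production RLHF):

    import numpy as np

    rng = np.random.default_rng(0)
    n_actions = 3                       # stand-in for possible responses
    prefs = [(2, 0), (2, 1), (1, 0)]    # (preferred, rejected) pairs: toy preference data

    # 1) Reward model: Bradley-Terry fit on the preference pairs.
    r = np.zeros(n_actions)
    for _ in range(200):
        for w, l in prefs:
            p = 1 / (1 + np.exp(-(r[w] - r[l])))   # P(w preferred over l)
            r[w] += 0.1 * (1 - p)                   # gradient ascent on log-likelihood
            r[l] -= 0.1 * (1 - p)

    # 2) Policy nudge: REINFORCE toward the learned reward.
    logits = np.zeros(n_actions)                    # start from a uniform "SFT" policy
    for _ in range(500):
        probs = np.exp(logits) / np.exp(logits).sum()
        a = rng.choice(n_actions, p=probs)
        adv = r[a] - (probs * r).sum()              # reward minus a baseline
        grad = -probs; grad[a] += 1                 # d log pi(a) / d logits
        logits += 0.05 * adv * grad

    print("learned rewards:", r.round(2))
    print("policy probs:   ", (np.exp(logits) / np.exp(logits).sum()).round(2))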

CoT is a prompting hack to force the same model to externalize intermediate tokens. The capability was already there; you're just sampling a longer trajectory. It's UX for sampling.
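The whole trick fits in a prompt string (toy example; the actual model call is omitted):

    question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
                "more than the ball. How much is the ball?")

    plain_prompt = f"{question}\nAnswer:"
    cot_prompt   = f"{question}\nLet's think step by step."  # same model, longer sampled trajectory

    print(plain_prompt)
    print(cot_prompt)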

Scaling laws are an empirical fit telling you to "buy more compute and data." That's a budgeting guideline, not new math or architecture. https://www.reddit.com/r/ProgrammerHumor/comments/8c1i45/sta...
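For reference, the fits in question are of roughly the Chinchilla form, with every constant estimated empirically per model family:

    L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
    % N = parameter count, D = training tokens; E, A, B, alpha, beta are fit to data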

LoRA is linear algebra 101: low-rank adapters to cut training cost and avoid touching the full weights. The base capability still comes from the giant pretrained transformer.
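The whole idea in a few lines of numpy (toy shapes; real LoRA also scales the update by alpha/r):

    import numpy as np

    rng = np.random.default_rng(0)
    d_in, d_out, rank = 512, 512, 8

    W = rng.normal(size=(d_in, d_out))          # frozen pretrained weight
    A = rng.normal(size=(d_in, rank)) * 0.01    # trainable low-rank factor
    B = np.zeros((rank, d_out))                 # zero init, so the adapter starts as a no-op

    x = rng.normal(size=d_in)
    y = x @ W + x @ A @ B                       # forward pass: base + low-rank update

    # Trainable params vs. full finetuning of the matrix:
    print((d_in * rank + rank * d_out) / (d_in * d_out))   # ~0.03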

AlphaFold 2's magic is mostly attention + A LOT of domain data/priors (MSAs, structures, evolutionary signal). Again, an attention core + data engineering.

"DeepSeek’s cost breakthrough" is systems engineering.

Agentic software dev/MCP is orchestration: middleware and protocols. It helps you use the model; it doesn't make the model smarter.

Video generation? Diffusion with temporal conditioning and better consistency losses. It’s DALL-E style tech stretched across time with tons of data curation and filtering.

Most headline "wins" are compiler and kernel wins: FlashAttention, paged KV-cache, speculative decoding, distillation, quantization (8/4 bit), ZeRO/FSDP/TP/PP... These only move the cost curve, not the intelligence.

The biggest single driver over the last few years has been the data: dedup, document quality scores, aggressive filtering, mixture balancing (web/code/math), synthetic bootstrapping, eval-driven rewrites, etc. You can swap out half a dozen training "tricks" and get similar results if your data mix and scale are right.

For me, a real post-attention "breakthrough" would be something like: training that learns abstractions with sample efficiency far beyond what scaling laws predict, reliable formal reasoning, or causal/world-model learning that transfers out of distribution. None of the things you listed does that.

Almost everything since attention is optimization, ops, and data curation. I mean, give me the exact pretraining mix, filtering heuristics, and finetuning datasets for Claude/GPT-5, and without peeking at the secret-sauce architecture I could get close just by matching tokens, quality filters, and training schedule. The "breakthroughs" are mostly better ways to spend compute and clean data, not new ways to think.

replies(3): >>45080042 #>>45080140 #>>45080408 #
2. kragen ◴[] No.45080042[source]
I don't disagree with any of this, though it sounds like you know more about it than I do.
3. BobbyTables2 ◴[] No.45080140[source]
Indeed. I’m shocked that we train “AI” pretty much as one would build a fancy auto-complete.

Not necessarily a bad approach, but it feels like something is missing for it to be "intelligent".

Should really be called “artificial knowledge” instead.

replies(2): >>45080164 #>>45080325 #
4. kragen ◴[] No.45080164[source]
"What do you mean, they talk?"

"They talk by flapping their meat at each other!"

5. jofla_net ◴[] No.45080325[source]
This and the parent are both approaching what I see as the main obstacle: we as a species don't know, in its entirety, how a human mind thinks (and it varies among people), so trying to "model" it and reproduce it is reduced to a game of black-boxing. We black-box the mind in terms of what situations it's been seen in and how it has performed; the millions of correlative inputs/outputs are the training data. Yet since we don't know the fullness of the interior and can only see its outputs, it becomes somewhat of a Plato's cave situation. We believe it 'thinks' this way, but again we cannot empirically say it performed a task a certain way, so unlike most other engineering problems we are grasping at straws while trying to reconstruct it. This doesn't mean that a human mind's inner workings can't ever be 100% reproduced, but not until we know the mind further.
replies(2): >>45080593 #>>45123194 #
6. kianN ◴[] No.45080408[source]
This is a great summary of why, despite so much progress and so many tricks being discovered, so little headway is made on the core limitations of LLMs.
7. tempodox ◴[] No.45080593{3}[source]
And there is another important difference: Our environments have oodles of details that inform us, while LLM training data is just “everything humans have ever written”. Those are completely different things. And LLMs have no concept of facts, only statements about facts in their training data that may or may not be true.
8. BobbyTables2 ◴[] No.45123194{3}[source]
It's kinda interesting that a simple generator that probabilistically generates a word based on the previous one (less than 100 lines of simple Python code), when trained on a book or two, will fairly consistently capitalize the first word after a period and form somewhat reasonable sentences.

It's not that it knows grammar; it was just trained on a dataset that applied proper capitalization.
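Something like this sketch of a bigram generator (the book.txt filename is a placeholder; feed it any plain-text book):

    import random
    from collections import defaultdict

    def train_bigram(text):
        """Count which word follows which -- that's the entire 'model'."""
        words = text.split()
        follows = defaultdict(list)
        for prev, nxt in zip(words, words[1:]):
            follows[prev].append(nxt)
        return follows

    def generate(follows, start, n_words=30):
        out = [start]
        for _ in range(n_words):
            choices = follows.get(out[-1])
            if not choices:
                break
            out.append(random.choice(choices))   # sample the next word from observed successors
        return " ".join(out)

    text = open("book.txt").read()               # placeholder: any plain-text book
    print(generate(train_bigram(text), start="The"))

The capitalization falls out for free: words that follow a period-ended word are usually capitalized in the training text, so they're capitalized in the samples too.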

Humans learn from seeing patterns. I suspect AI only repeats them, more like a parrot.