64 points by m-hodges | 1 comment

prisenco ◴[] No.45078963[source]
For junior devs wondering if they picked the right path, remember that the world still needs software, AI still breaks down at even a small bit of complexity, and the first ones to abandon this career will be those who only did it for the money anyway. They'll do the same once the trades have a rough year (as they always do).

In the meantime, keep learning and practicing CS fundamentals, ignore the hype, and build something interesting.

replies(5): >>45079011 #>>45079019 #>>45079029 #>>45079186 #>>45079322 #
kragen ◴[] No.45079322[source]
Nobody has any idea what AI is going to look like five years from now. Five years ago we had GPT-2; AI couldn't code at all. Five years from now AI might still break down at even a small bit of complexity, or it might be installing air conditioners, or it might be colonizing Mercury and putting humans in zoos.

Anyone who tells you they know what the future looks like five years from now is lying.

replies(2): >>45079366 #>>45079457 #
noosphr ◴[] No.45079457[source]
Unless we have another breakthrough like attention, we do know that AI will keep struggling with context, and that costs will grow quadratically with context length.
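
A minimal sketch of where the quadratic term comes from (plain scaled dot-product self-attention in numpy; a toy illustration with made-up projections, not any real model's code): every token's query is scored against every token's key, so the score matrix alone has n^2 entries.

    import numpy as np

    def attention_scores(x: np.ndarray) -> np.ndarray:
        # Naive self-attention scores for n tokens of dimension d.
        # The (n x n) score matrix is the quadratic bottleneck.
        n, d = x.shape
        q = x @ np.random.randn(d, d)  # toy query projection
        k = x @ np.random.randn(d, d)  # toy key projection
        return (q @ k.T) / np.sqrt(d)  # shape (n, n): n^2 entries

    x = np.random.randn(1_000, 64)    # 1,000 tokens
    print(attention_scores(x).shape)  # (1000, 1000) -> a million scores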

On a codebase of 10,000 lines, any action will cost 100,000,000 AI units. On one of 1,000,000 lines, it will cost 1,000,000,000,000 AI units.
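
Those figures are just n^2 for n lines, a back-of-the-envelope reading of the quadratic claim (treating each line as one token and ignoring constant factors, not any particular model's pricing):

    # n^2 "AI units" per action on an n-line codebase -- a simplification
    # that treats each line as one token and drops all constant factors.
    for n in (10_000, 1_000_000):
        print(f"{n:>9,} lines -> {n * n:>19,} units")
    # 10,000 lines -> 100,000,000 units
    # 1,000,000 lines -> 1,000,000,000,000 units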

I work on these things for a living, and no one else ever seems to think two steps ahead about what the mathematical limitations of the transformer architecture mean for transformer-based applications.

replies(1): >>45079562 #
kragen ◴[] No.45079562[source]
It's only been 8 years since the attention breakthrough. Since then we've had "sparsely-gated MoE", RLHF, BERT, "Scaling Laws", DALL-E, LoRA, CoT, AlphaFold 2, "Parameter-Efficient Fine-Tuning", and DeepSeek's training cost breakthrough. AI researchers rather than physicists or chemists won the Nobel Prizes in physics and (for AlphaFold) chemistry last year. Agentic software development, MCP, and video generation are more or less new this year.

Humans also keep struggling with context, so while large contexts may limit AI performance, that won't necessarily prevent AI from being strongly superhuman.

replies(2): >>45080001 #>>45080114 #
BobbyTables2 ◴[] No.45080114[source]
I think it’s currently too easy to get drunk on easy success cases for AI.

It’s like asking a college student 4th grade math questions and then being impressed they knew the answer.

I’ve used Copilot a lot. It's faster than Google and gives great results.

Today I asked it for the name of a French restaurant that closed in my area a few years ago. The first answer was a Chinese fusion place… all the others were off too.

Sure, keep questions confined to something it was heavily trained on, and the answers will be great.

But yeah, AI is going to get rid of a lot of low-skilled labor.

replies(3): >>45080159 #>>45080590 #>>45081156 #
CamperBob2 ◴[] No.45080590[source]
> It’s like asking a college student 4th grade math questions and then being impressed they knew the answer.

No, it's more like asking a 4th-grader college math questions, and then desperately looking for ways to not be impressed when they get it right.

> Today I asked it for the name of a French restaurant that closed in my area a few years ago. The first answer was a Chinese fusion place… all the others were off too.

What would have been impressive is if the model had replied, "WTF, do I look like Google? Look it up there, dumbass."