
64 points m-hodges | 2 comments
prisenco ◴[] No.45078963[source]
For junior devs wondering if they picked the right path: remember that the world still needs software, AI still breaks down at even a small bit of complexity, and the first to abandon this career will be those who only did it for the money anyway. They'll do the same once the trades have a rough year (as they always do).

In the meantime, keep learning and practicing CS fundamentals, ignore the hype, and build something interesting.

replies(5): >>45079011 #>>45079019 #>>45079029 #>>45079186 #>>45079322 #
kragen ◴[] No.45079322[source]
Nobody has any idea what AI is going to look like five years from now. Five years ago we had GPT-2; AI couldn't code at all. Five years from now AI might still break down at even a small bit of complexity, or it might be installing air conditioners, or it might be colonizing Mercury and putting humans in zoos.

Anyone who tells you they know what the future looks like five years from now is lying.

replies(2): >>45079366 #>>45079457 #
tmn ◴[] No.45079366[source]
There's a significant difference between predicting what it will specifically look like and predicting sets of possibilities it won't look like.
replies(1): >>45079424 #
kragen ◴[] No.45079424[source]
No, there isn't. When speaking of logically consistent possibilities, the two problems are precisely isomorphic under Boolean negation.
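(Editorial aside, not part of the thread: the claimed isomorphism can be made precise as follows.)

```latex
% Let $\Omega$ be the set of logically possible futures.
% For any set of possibilities $S \subseteq \Omega$,
x \in S \;\iff\; \neg\bigl(x \in \Omega \setminus S\bigr),
% so the complement map $S \mapsto \Omega \setminus S$ is a bijection carrying
% ``the future will look like something in $S$'' to
% ``the future will not look like anything in $\Omega \setminus S$'':
% every prediction of one kind corresponds to exactly one prediction of the other.
```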
replies(1): >>45079980 #
bryanrasmussen ◴[] No.45079980[source]
Good point. Someone recently said:

> Five years from now AI might still break down at even a small bit of complexity, or it might be installing air conditioners, or it might be colonizing Mercury and putting humans in zoos.

Do all of these seem like logically consistent possibilities to you?

replies(1): >>45080025 #
kragen ◴[] No.45080025[source]
Yes, obviously. You presumably don't know what "consistent" means in logic, and your untutored intuition is misleading you into guessing that possibilities like those could conceivably be inconsistent.

https://en.m.wikipedia.org/wiki/Consistency

replies(1): >>45080085 #
bryanrasmussen ◴[] No.45080085{3}[source]
Or I just wanted to make sure you were adamant that those three possibilities are equally probable. To reiterate:

> AI might still break down at even a small bit of complexity, or it might be installing air conditioners, or it might be colonizing Mercury and putting humans in zoos.

that each of these things, being logically consistent, has an equal chance of being the case five years from now?

replies(1): >>45080090 #
kragen ◴[] No.45080090{4}[source]
No. Fuck off. There's no uniform probability distribution over the reals, so stop trying to put bullshit in my mouth.
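(Editorial aside, not part of the thread: the standard argument that no uniform probability distribution exists on the reals runs as follows.)

```latex
% Suppose $\mu$ were a translation-invariant (``uniform'') probability measure
% on $\mathbb{R}$. Every unit interval would have the same measure
% $c = \mu([0,1))$, and countable additivity would give
\mu(\mathbb{R}) \;=\; \sum_{n \in \mathbb{Z}} \mu\bigl([n, n+1)\bigr)
  \;=\; \sum_{n \in \mathbb{Z}} c \;\in\; \{0, \infty\},
% which can never equal $1$: the sum is $0$ if $c = 0$ and $\infty$ if $c > 0$.
```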
replies(1): >>45080962 #
bryanrasmussen ◴[] No.45080962[source]
OK, well, you obviously seem to be having a bad time about something in your life right now, so I won't continue, other than to note that the comment that started this said:

>There’s a significant difference between predicting what it will specifically look like, and predicting sets of possibilities it won’t look like

which I took to mean that there are probability distributions over what things will happen. It seemed to be your assertion that there weren't; that a number of things, only one of which seemed especially probable, were equally probable. I'm glad to learn you don't think this, as it seems totally crazy, especially for someone praising LLMs, which after all spend their time making millions of little choices based on probability.
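(Editorial aside, not part of the thread: the weighting bryanrasmussen gestures at can be sketched as a non-uniform categorical distribution over the three scenarios. The weights below are illustrative assumptions, not anything asserted by either commenter.)

```python
import random

# Illustrative, made-up weights over the three scenarios from the thread;
# the point is only that logically consistent possibilities need not be
# equally probable.
scenarios = {
    "still breaks down at small complexity": 0.90,
    "installing air conditioners": 0.09,
    "colonizing Mercury, humans in zoos": 0.01,
}

def sample_scenario(rng: random.Random) -> str:
    """Draw one scenario according to the (non-uniform) weights."""
    names = list(scenarios)
    weights = list(scenarios.values())
    return rng.choices(names, weights=weights, k=1)[0]

rng = random.Random(0)
draws = [sample_scenario(rng) for _ in range(10_000)]
# With these weights, the first scenario dominates the sample.
```

This is the same mechanism an LLM's sampler uses at each token: a weighted draw from a categorical distribution, not a uniform one.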