
130 points by whobre | 19 comments
1. GMoromisato ◴[] No.44642752[source]
I think Sinofsky is asking a question: what does the future look like given that (a) writing is thinking, (b) nobody reads, and (c) LLMs are being used to do both the writing and the reading?

It's that (already) old joke: we give the LLM 5 bullet points to write a memo and the recipient uses an LLM to turn it back to 5 bullet points.

Some plausible (to me) possibilities:

1. Bifurcation: Maybe a subset of knowledge workers continue to write and read and therefore drive the decisions of the business. The remainder just do what the LLM says and eventually get automated away.

2. Augmentation: Thinking is primarily done by humans, but augmented by AI. E.g., I write my thoughts down (maybe in 5 bullet points or maybe in paragraphs) and I give it to the LLM to critique. The LLM helps by poking holes and providing better arguments. The result can be distributed to everyone else by LLMs in customized form (some people get bullet points, some get slide decks, some get the full document).

3. Transformation: Maybe the AI does the thinking. Would that be so bad? The board of directors sets goals and approves the basic strategy. The executive team is far smaller and just oversees the AI. The AI decides how to allocate resources, align incentives, and communicate plans. Just as programmers let the compiler write the machine code, why bother with the minutiae of resource allocation? That sounds like something an algorithm could do. And since nobody reads anyway, the AI can direct people individually, but in a coordinated fashion. Indeed, the AI can be far more coordinated than an executive team.

replies(8): >>44642834 #>>44643064 #>>44643077 #>>44643096 #>>44643421 #>>44643644 #>>44644620 #>>44645453 #
2. oldge ◴[] No.44642834[source]
The executives in example three seem redundant: a cost center we can eliminate.
replies(2): >>44643075 #>>44644243 #
3. didericis ◴[] No.44643064[source]
4. Degradation: Humans with specialized knowledge lose that knowledge through over-reliance on AI, and AI itself degrades over time due to the lack of new human data and the spread of AI-contaminated data sets.
replies(1): >>44643140 #
4. bugbuddy ◴[] No.44643075[source]
Everyone without significant capital is redundant and can be eliminated.
5. Swizec ◴[] No.44643077[source]
> 1. Bifurcation: Maybe a subset of knowledge workers continue to write and read and therefore drive the decisions of the business. The remainder just do what the LLM says and eventually get automated away.

This already happens. Being the person who writes the doc [for what we wanna do next] gives you ridiculous leverage and sway in the business. Everyone else is immediately put in the position of giving feedback instead of driving and deciding.

Being the person who gives the feedback, in turn, gives you incredible leverage over people who just follow instructions from the final version.

replies(1): >>44645461 #
6. andai ◴[] No.44643096[source]
The other day I gave GPT a journal entry and asked it to rewrite it from the POV of a person with low Openness (personality trait). I found this very illuminating, as I am on the opposite end of that spectrum.
7. bugbuddy ◴[] No.44643140[source]
5. Society collapses in an Idiocratic fashion.
replies(2): >>44643384 #>>44647247 #
8. volemo ◴[] No.44643384{3}[source]
6. PROFIT?
replies(2): >>44644325 #>>44644629 #
9. makeitdouble ◴[] No.44643421[source]
> It's that (already) old joke: we give the LLM 5 bullet points to write a memo and the recipient uses an LLM to turn it back to 5 bullet points.

This is already how we moved from stupidly long and formal emails to Slack messages. And from messages to reactions.

I understand not every field went there, but I think it's only a matter of time before we collectively cut the traditional boilerplate, which would negate most of what LLMs are bringing to the table right now.

> 2. Augmentation

I see it as the equivalent of Intellisense but expanded to everything. As a concept, it doesn't sound so bad?

10. ◴[] No.44643644[source]
11. soco ◴[] No.44644243[source]
You might be missing that it's exactly the executives who can and will eliminate people. Why would they eliminate themselves? They'd rather use the same AI to invent a reason for them to stay.
12. pringk02 ◴[] No.44644325{4}[source]
That’s the hope
13. coldtea ◴[] No.44644620[source]
>Transformation: Maybe the AI does the thinking. Would that be so bad? The board of directors sets goals and approves the basic strategy.

We've already lost the war if we only consider this from a business aspect.

AI "doing the thinking" will cover all of society and aspects of life, not just office job automation.

14. coldtea ◴[] No.44644629{4}[source]
Short term, sure.
15. p_v_doom ◴[] No.44645453[source]
> The AI decides how to allocate resources, align incentives, and communicate plans

Not gonna happen. Resources, incentives, and even plans are almost exclusively about communication and people work. You can run as many optimization algorithms as you want, but an organization is ultimately made of people, and even in small startups the complexity and nuance involved in resource allocation, planning, and communication are too great for anything short of a mega-super-AI to handle. Hell, in most companies these parts are incredibly dysfunctional right now...

16. p_v_doom ◴[] No.44645461[source]
> Being the person who writes the doc

If only people read them. Everyone is so pressured by made-up goals, fake deadlines, and horrible communication that nobody ever wants to read more than a bullet point or two.

replies(1): >>44645700 #
17. skydhash ◴[] No.44645700{3}[source]
I think GP is talking about the docs at the source of the action, not the reports that come after.
replies(1): >>44656742 #
18. hshdhdhj4444 ◴[] No.44647247{3}[source]
Even if AI isn’t sentient and/or thinking, if AI can functionally emulate everything humans can do, I think that will raise significant existential questions about our future.
19. p_v_doom ◴[] No.44656742{4}[source]
And I meant all docs, not just reports