I’ve been thinking about this for months, and still don’t know what to make of it.
Organizations often seem able to diffuse blame for mistakes across their human bureaucracy, but as that bureaucracy is reduced by AI, individuals become more exposed.
This alone, in my view, is sufficient counterpressure against fully replacing humans in organizations.
Shorter reply: if my AI setup fails, I'm the one to blame. If I do a bad job of helping coworkers perform better, is the blame fully mine?
The LLM isn't always smart, but it's always attentive. It rewards careful effort in a way that people frequently don't. (Arguably this is a company culture issue, but it's also a widespread one.)
In organizations that value innovation, people will spend time reading and writing. It's a positive feedback loop, almost a litmus test of the quality of the work culture.
I also enjoy discussing solutions with people in real time. But writing documentation in a vacuum, without any feedback or even knowing whether someone will read the spec? Soul-draining stuff.
In fact, the best of both worlds would be having a discussion with another person while an AI agent listens, takes notes, and provides feedback and insights using different models, vetting your ideas as you go.