
117 points soraminazuki | 2 comments
1. tomjen3 No.45081073
Makes sense. I have seen far too many coworkers dismiss AI completely without ever trying it on their own work.

At this point, you need to learn what AI can and cannot do, for the same reason you need to keep up with new versions of whatever framework you use. Since AI develops so fast (e.g. many image use cases that models were terrible at four months ago, they now handle perfectly), you need to repeat that exercise frequently.

There are 5 problems with adoption as I see it:

1) Hype. Some people overhype what AI can do, which causes others to dismiss it when it doesn't immediately work;

2) Plenty of people don't like to change how they work, or feel threatened by change. Doubly so when that change is perceived (rightly or not) to impact their job;

3) AI is weird, so it sometimes fails spectacularly at simple things while working very well on more complex ones;

4) People use ChatGPT's free tier or other free AIs. These are older, less powerful models, so people end up with the wrong expectations of what current models can and cannot do;

5) Who likes to be told what to do? Especially by a clueless boss.

Were I running a company, I would ensure that my employees had access to a top-of-the-line model and Cursor/Windsurf. I would monitor usage and have a talk with those whose usage was drastically lower than their peers'.

However, it would be a talk only, with the aim of figuring out why AI did not work for that employee and what we could do to fix it.

replies(1): >>45081533 #
2. pmg101 No.45081533
This isn't too bad, but it still takes a kind of panopticon style of people management for granted.

Instead of letting everyone do their own thing and then "talking to" certain people, why not get people to work together and see how others do or don't get value from LLMs, to build institutional confidence and skills?