117 points | soraminazuki | 1 comment

topkai22 ◴[] No.45080616[source]
The answer, well documented in the article, is yes.

While the article presents cases that appear to be problematic in their particulars, I think it's incorrect to conclude that bosses/managers shouldn't push or mandate the use of AI tools in general.

It's quite possible that any one new AI tool is wrong, but it's unlikely that all of them are. Great historical analogies are the adoption of PCs in the 80s and of the internet/web in the 90s. Not everything we tried back then was an improvement on existing technologies or processes, but in general, if you weren't experimenting across a broad swath of your business, you were going to get left behind.

It's easy to defend the utility of these tools as long as you caveat them. For example, I've had a lot of success with AI-driven code generation for utility scripts, but it's less useful for full-fledged feature development in our main code base. AI-driven code summarization, and its ability to enforce coding standards on PRs, is a huge help.
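
To give a sense of how simple the PR-review piece can be, here's a rough sketch of the kind of script I mean (assuming an OpenAI-compatible API and a locally checked-out PR branch; the model name and prompt are placeholders, not what we actually run):

    # Rough sketch: ask an LLM to check a PR diff against coding standards.
    # Assumes OPENAI_API_KEY is set and the PR branch is checked out locally.
    import subprocess
    from openai import OpenAI

    # Collect the diff between the main branch and the PR branch.
    diff = subprocess.run(
        ["git", "diff", "main...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a code reviewer. Flag any violations of "
                        "our coding standards in the following diff."},
            {"role": "user", "content": diff},
        ],
    )
    print(resp.choices[0].message.content)

In practice you'd wire something like this into CI and post the output as a PR comment, but the core of it really is that small.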

Finally, I find the worries in the article about using these tools on sensitive data, or in scenarios such as ideation, to be rather overdrawn. They are just SaaS services. You shouldn't use the free version of most tools for business purposes due to often-problematic licensing, but purchasing and legal should be able to help find an appropriate service. After all, if you are using Google Docs or Microsoft 365 to create and store your documents, why would you treat Gemini or Copilot (or their other LLM options) as presenting higher legal peril, at least with some due diligence that they don't retain or train on your input?

replies(6): >>45080681 #>>45080761 #>>45081020 #>>45081023 #>>45081374 #>>45081429 #
beezlewax ◴[] No.45080761[source]
> but it is unlikely all of them are

How so? I have access to a huge number of these tools and they're all pretty similar.

replies(2): >>45080814 #>>45081103 #
1. Azrael3000 ◴[] No.45080814[source]
That's also what he writes in the article, i.e., they're all large language models, so the approach itself is generally flawed. A sentiment I agree with.