    117 points soraminazuki | 12 comments
    1. topkai22 No.45080616
    The answer, well documented in the article, is yes.

    While the article presents cases that appear to be problematic in their particulars, I think concluding that bosses/managers shouldn't be pushing or mandating the use of AI tools in general is incorrect.

    It's quite possible that any one new AI tool is wrong, but it is unlikely all of them are. Great historical analogies are the adoption of PCs in the 80s and of the internet/web in the 90s. Not everything we tried back then was an improvement on existing technologies or processes, but in general, if you weren't experimenting across a broad swath of your business, you were going to get left behind.

    It's easy to defend the utility of these tools so long as you caveat them. For example, I've had a lot of success with AI-driven code generation for utility scripts, but it is less useful for full-fledged feature development in our main code base. AI-driven code summarization and its ability to enforce coding standards on PRs is a huge help.

    Finally, I find the worries in the article about using these tools on sensitive data, or in scenarios such as ideation, to be rather overdrawn. They are just SaaS services. You shouldn't use the free version of most tools for business purposes due to often-problematic licensing, but purchasing and legal should be able to help find an appropriate service. After all, if you are using Google Docs or Microsoft 365 to create and store your documents, why would you treat Gemini or Copilot (or other LLM options) as presenting higher legal peril, at least with some due diligence that they don't retain or train on your input?

    replies(6): >>45080681 #>>45080761 #>>45081020 #>>45081023 #>>45081374 #>>45081429 #
    2. mgh95 No.45080681
    > It's quite possible that any one new AI tool is wrong, but it is unlikely all of them are. Great historical analogies are the adoption of PCs in the 80s and of the internet/web in the 90s. Not everything we tried back then was an improvement on existing technologies or processes, but in general, if you weren't experimenting across a broad swath of your business, you were going to get left behind.

    There is a difference between experimentation and mandated usage, however. With experimentation, you typically see "shadow IT" attempting to access useful tools outside the bounds of what is considered acceptable; this indicates a greater willingness to adopt than mandated usage does.

    There is also a difference between a technology replicating existing functionality in a new medium (email vs. USPS) and the introduction of a genuinely new technology. In the former, there is clear market demand, and it is only a matter of redirecting existing demand to new tools. In the latter, it is unclear whether the technology will be useful.

    I don't think that LLMs being a new technology that uses computing makes them the internet, and I don't think it's accurate to analyze them through the lens you propose.

    3. beezlewax No.45080761
    > but it is unlikely all of them are

    How so? I have access to a huge number of these tools and they're all pretty similar.

    replies(2): >>45080814 #>>45081103 #
    4. Azrael3000 No.45080814
    That's also what he writes in the article: they're all LLMs, i.e. large language models, so the approach is generally flawed. A sentiment I agree with.
    5. EagnaIonat No.45081020
    > It's quite possible that any one new AI tool is wrong, but it is unlikely all of them are.

    All of them can absolutely be wrong at the same time, but the tool isn't the main issue IMHO. It's the user.

    For simple generic stuff it's not an issue, but where you need expertise, it has to be an expert in that field who uses the AI, so that they know what is wrong.

    A good recent example is the OpenAI Academy. Clearly the site content is generated by ChatGPT, and completely misses the point of the areas it claims to be training you in.

    6. makeitdouble No.45081023
    > A great historical analogy is the adoption of PCs in the 80s

    Another historical analogy is Scientific Management, pushed top down and widely adopted by the industry. It has many flavors and all of them were wrong.

    We have examples pointing in basically any direction one would like to argue for. Historical precedent isn't a good argument IMHO.

    7. soraminazuki No.45081103
    The definition of insanity is doing the same thing over and over again and expecting a different result. The current hype is now officially insane.
    replies(1): >>45089487 #
    8. bigstrat2003 No.45081374
    > I think coming to the conclusion that bosses/managers shouldn't be pushing or mandating the use of AI tools in general is incorrect. It's quite possible that any one new AI tool is wrong, but it is unlikely all of them are.

    If the tool is good, then management won't need to mandate it. People will be tripping over themselves to get access to a tool that helps them do their job better. So perhaps you're right that some of the tools will be good (though I personally haven't yet had that experience), but I think that it is incorrect for managers to push for (let alone mandate) tool usage. Measure the result, not the path an employee takes to get there. If Bob uses AI tools to great effect, but Alice is doing just as well as him without them, it's a mistake to force her to change her workflow on the assumption that the tools will be just as good for her as for Bob.

    replies(1): >>45081455 #
    9. bitwize No.45081429
    Do you also believe that open-plan offices make people more productive by "fostering collaboration"?
    10. pmg101 No.45081455
    Somewhat true, but let's also recognise that all of us have a certain level of friction. Yes, Alice may be effective using tool A, due to her knowledge and experience, but not have the broader context to realise that she's at a local maximum and could, after a period of confusion and relearning, become even MORE effective using tool B.

    However, this is a subtle and nuanced situation requiring careful people management: nudging or leading people, letting them take risks, letting them fail, giving them psychological safety, and praising their attempts. Blanket mandates are just a very tone-deaf and stupid way to try to achieve this.

    11. strunz No.45089487
    That's not the definition of insanity.
    replies(1): >>45090832 #
    12. soraminazuki No.45090832
    Um, it's a famous quote.