Most of these efforts have questionable returns. The projects usually involve increasing test coverage or categorising customer incidents for better triage; apart from those low-hanging fruits, not much comes out of it.
People still play the visibility game though. Hey, look at what we did using LLMs. That's so cool, now where's my promotion? Business-outcomes-wise, a few low-hanging fruits have been plucked, but otherwise it doesn't live up to the hype.
Personally, I find it helpful in a few scenarios:
1. A much better search interface than traditional search engines. If I want to ramp up on a new technology or product, it gives me a good broad overview plus references to dive deeper. No more 10 blue links.
2. Better autocomplete than before, but it's still not as groundbreaking as the AI hype hucksters make it out to be.
3. If I want to learn some concept (say, how the ext4 filesystem works), it can give a good breakdown of the high-level ideas, and then I go off, study, and come back with more questions. This is the only use case I genuinely like: being able to iteratively ask questions to clarify and cement my understanding of a concept. I have used Claude Code and ChatGPT for this and can barely see any difference between the two.
This is my balanced take.