aeon_ai No.44984252
AI is a change management problem.

Using it well requires a competent team, working together with trust and transparency, to build processes designed to balance human guidance and expertise with what LLMs are good at. Small teams are doing very big things with it.

Most organizations, especially large organizations, are so far away from a healthy culture that AI is amplifying the impact of that toxicity.

Executives who interpret "Story Points" as "how much time is that going to take" are asking why everything isn't half a point now. They're so far removed from the process of building maintainable and effective software that they're simply looking for AI to serve as a pass-through to the bottom line.

The recent study showing that 95% of AI pilots failed to deliver ROI is a case study in the failure of modern management to actually do its job.

grey-area No.44984371
Or maybe it's just not as good as it's been sold to be. I haven't seen any small teams doing very big things with it; which ones are you thinking of?
sim7c00 No.44984398
You're not wrong. The only 'sane' approach I've seen with vibe coding is making a PoC to see if some concept works, then rewriting it entirely to make sure it's sound.

Besides just weird or broken code, anything exposed to user input is usually severely lacking sanity checks and the like.
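
For example, the kind of check that usually goes missing looks something like this (a hypothetical sketch; the function name, parameters, and use case are made up for illustration):

    # Hypothetical sketch of the sanity checks vibe-coded endpoints tend to skip.
    # The function name and use case are illustrative, not from any real project.
    def parse_page_size(raw: str, default: int = 20, maximum: int = 100) -> int:
        """Parse a user-supplied page size instead of trusting it blindly."""
        try:
            value = int(raw)
        except (TypeError, ValueError):
            return default          # non-numeric input -> safe default
        if value < 1:
            return default          # reject zero or negative sizes
        return min(value, maximum)  # clamp to an upper bound to avoid huge queries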

LLMs are not useless for coding, but IMHO letting LLMs do the coding will not yield production-grade code.

bbarnett No.44984665
Koko the gorilla understood language, but most others of her ilk simply make signs because a thing will happen.

Move hand this way and a human will give a banana.

LLMs have no understanding at all of the underlying language; they've just seen, a billion times over, that a task looks like such and such and has these tokens after it.
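
To make "these tokens after them" concrete, here is a toy next-token predictor built from nothing but co-occurrence counts. It's purely illustrative and far simpler than a real LLM, but it shows the flavour of continuing a sequence from seen examples:

    from collections import Counter, defaultdict

    # Toy illustration only: predict the next token from raw counts of what
    # followed it in a tiny corpus. Real LLMs are vastly more sophisticated.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1          # count which token follows which

    def predict_next(token: str) -> str:
        """Return the continuation seen most often after `token`."""
        counts = follows.get(token)
        return counts.most_common(1)[0][0] if counts else "<unk>"

    print(predict_next("the"))  # -> 'cat', simply because it was seen most often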

SirHumphrey No.44984791
What does it matter whether they have an understanding of the underlying language or not? Heck, do humans even have an "understanding of the underlying language"? What does that even mean?

It's a model. It either predicts usefully or not. How it works is mostly irrelevant.

grey-area No.45001767
Without understanding you can’t have creativity or fix mistakes. It matters a lot.