
54 points tudorizer | 22 comments
1. oytis ◴[] No.44367106[source]
I don't get his argument, and if it weren't Martin Fowler I would just dismiss it. He admits himself that it's not an abstraction over previous activity as it was with HLLs, but rather a new activity altogether - that is, prompting LLMs for non-deterministic outputs.

Even if we assume there is value in it, why should it replace (even if in part) the previous activity of reliably making computers do exactly what we want?

replies(2): >>44403162 #>>44403847 #
2. dist-epoch ◴[] No.44403162[source]
Because unreliably solving a harder problem with LLMs is much more valuable than reliably solving an easier problem without.
replies(4): >>44403214 #>>44403346 #>>44404165 #>>44407471 #
3. darkwater ◴[] No.44403214[source]
Which harder problems are LLMs going to (unreliably) solve in your opinion?
replies(1): >>44403853 #
4. oytis ◴[] No.44403346[source]
OK, so we have two classes of problems here - ones worth solving unreliably, and ones that are better solved without LLMs. Doesn't sound like a next level of abstraction to me.
replies(2): >>44403871 #>>44404015 #
5. kookamamie ◴[] No.44403847[source]
Funny, I dismiss the opinion based on the author in question.
replies(1): >>44403918 #
6. dist-epoch ◴[] No.44403853{3}[source]
Anything which requires "common sense".

A contrived example: there are only 100 MB of disk space left, but 1 GB of logs to write. LLM discards 900 MB of logs and keeps only the most important lines.

Sure, you can nitpick this example, but it's the kind of edge-case handling where LLMs can "do something reasonable" that previously required hard coding and special casing.
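For comparison, the hard-coded version of that edge case is itself only a few lines. A minimal sketch (the severity names, message format, and byte budget are all invented for illustration):

```python
# Keep the most important log lines that fit in a byte budget.
# Assumes each line starts with a severity prefix like "ERROR: ...".
SEVERITY = {"ERROR": 0, "WARN": 1, "INFO": 2, "DEBUG": 3}

def trim_logs(lines, budget_bytes):
    """Return the subset of lines, in original order, that fits the budget,
    preferring higher-severity lines."""
    # Rank line indices by severity (unknown prefixes rank last).
    order = sorted(range(len(lines)),
                   key=lambda i: SEVERITY.get(lines[i].split(":", 1)[0], 4))
    kept, used = set(), 0
    for i in order:
        size = len(lines[i].encode()) + 1  # +1 for the newline
        if used + size <= budget_bytes:
            kept.add(i)
            used += size
    return [lines[i] for i in sorted(kept)]
```

The point of the nitpick upthread is that a deterministic policy like this is predictable and testable, where an LLM's notion of "most important" is not.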

replies(1): >>44406838 #
7. dist-epoch ◴[] No.44403871{3}[source]
I was thinking more along this line: you can unreliably solve 100% of the problem with LLMs, or reliably solve only 80% of it.

So you trade reliability to get to that extra 20% of hard cases.

8. Insanity ◴[] No.44403918[source]
Serious question - why? I know of the author but don’t see a reason to value his opinion on this topic more or less because of this.

(Attaching too much value to the person instead of the argument is more of an ‘argument from authority’)

replies(1): >>44404170 #
9. pydry ◴[] No.44404015{3}[source]
The story of programming is largely not one of humans striving to be more reliable when programming, but of putting up better defenses against our own inherent unreliabilities.

When I watch juniors struggle, they seem to think it's because they don't think hard enough, whereas it's usually because they didn't build enough infrastructure to prevent them from needing to think too hard.

As it happens, when it comes to programming, LLM unreliabilities seem to align quite closely with ours so the same guardrails that protect against human programmers' tendencies to fuck up (mostly tests and types) work pretty well for LLMs too.

10. furyofantares ◴[] No.44404165[source]
I'm pretty deep into these things and have never had them solve a harder problem than I can solve. They just solve problems I can solve much, much faster.

Maybe that does add up to solving harder, higher-level real-world problems (business problems) from a practical standpoint; perhaps that's what you mean, rather than technical problems.

Or maybe you're referring to producing software which utilizes LLMs, rather than using LLMs to program software (which is what I think the blog post is about, but we should certainly discuss both.)

replies(1): >>44404503 #
11. kookamamie ◴[] No.44404170{3}[source]
Let's just say I think a lot of damage was caused by their OOP evangelism back in the day.
replies(2): >>44404432 #>>44406920 #
12. diggan ◴[] No.44404432{4}[source]
You don't think the damage was done by the people who religiously follow whatever loudmouths say? Those are the people I'd stop listening to, rather than ignoring what an educator says when sharing their perspective.

Don't get me wrong, I feel like Fowler is wrong about some things too, and wouldn't follow what he says as dogma, but I don't think I'd attribute companies going after the latest fad as his fault.

replies(2): >>44404971 #>>44406359 #
13. dist-epoch ◴[] No.44404503{3}[source]
> solve a harder problem than I can solve

If you've never done web dev and want to create a web app, where does that fall? In principle you could learn web dev in a week or a month, so technically you could do it.

> maybe you're referring to producing software which utilizes LLMs

But yes, this is what I meant: outsourcing "business logic" to an LLM instead of trying to express it in code.

14. kookamamie ◴[] No.44404971{5}[source]
Perhaps. Then again, advocating things like Singleton as anything beyond a glorified global variable is pretty high on my BS list.

An example: https://martinfowler.com/bliki/StaticSubstitution.html

replies(1): >>44405576 #
15. diggan ◴[] No.44405576{6}[source]
> gloriefied global variable is pretty high on my BS list

Say you have a test asserting the output of some code, and that code uses a global variable of some kind. How do you ensure your tests can use different values for that variable and still pass? You'd need to be able to change it during tests somehow.
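Concretely, the usual workaround is to reach the global only through a seam that tests can swap out - a rough sketch in the spirit of Fowler's StaticSubstitution note, with every name here invented for illustration:

```python
# A "global" reached only through an access point tests can substitute.
class Config:
    def __init__(self, retries):
        self.retries = retries

_current = Config(retries=3)  # the global default

def current_config():
    """The one access point production code uses."""
    return _current

def set_config_for_test(cfg):
    """Substitution point, intended for tests only. Returns the old
    value so the test can restore it afterwards."""
    global _current
    old, _current = _current, cfg
    return old

def fetch(url):
    # Production code never touches _current directly.
    return f"GET {url} with {current_config().retries} retries"
```

A test then swaps in its own value and restores the original when done, so tests with different config values don't interfere.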

Personally, I think a lot of the annoying parts of programming go away when you use a more expressive language (like Clojure), including this one. But in other languages you might need to work around the limitations of the language, and then approaches like Singletons can make more sense.

At the same time, Fowler's perspective is pretty much always in the context of "I have this piece of already-written code I need to make slightly better". Obviously the easy way is not to have global variables in the first place, but when working with legacy code you do stumble upon one or three non-optimal conditions.

16. alganet ◴[] No.44406359{5}[source]
You need to understand that Mr. Fowler works for a consultancy.

LLMs sound great for consultants. A messy hyped technology that you can charge to pretend to fix? Jackpot.

Everything these consultancies eventually promote comes from learnings with their own clients.

The OOP patterns he described in the past likely came from observing real developers while being in this consultant role, and _trying_ to document how they overcame typical problems of the time.

I have a feeling that the real people with skin in the game (not consultants) who came up with that stuff would describe it in much simpler terms.

Similarly, it is likely that some of these posts are based on real experience but "consultancified" (made vague and more complex than it needs to be).

replies(1): >>44407367 #
17. sarchertech ◴[] No.44406838{4}[source]
In that example, something simple - log the errors, or log only the first error of each type per 5-minute block - would have had a decent chance of solving 100% of the problem.
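That deterministic alternative really is small. A rough sketch, with the window length and the injectable clock as assumptions for testability:

```python
import time

class RateLimitedLogger:
    """Emit the first error of each type per window; suppress the rest."""

    def __init__(self, window_seconds=300, clock=time.monotonic):
        self.window = window_seconds
        self.clock = clock          # injectable for tests
        self._last_seen = {}        # error type -> time of last emit

    def log(self, error_type, message):
        """Return True if the message was emitted, False if suppressed."""
        now = self.clock()
        last = self._last_seen.get(error_type)
        if last is not None and now - last < self.window:
            return False  # same error type seen within the window
        self._last_seen[error_type] = now
        print(f"{error_type}: {message}")
        return True
```

Unlike the LLM version, its failure modes are enumerable: the worst case is a suppressed duplicate, never an invented log line.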

And it’s not just this specific problem. I don’t think letting an LLM handle edge cases is really ever an appropriate use case in production.

I’d much rather the system just fail so that someone will fix it. Imagine a world where, at every level, instead of failing and halting, every error just got bubbled up to an LLM that tried to do something reasonable.

Talk about emergent behavior, or more likely catastrophic cascading failures.

I can kind of see your point if you’re talking about a truly hopeless scenario. Like some imaginary autonomous spacecraft that is going to crash into the sun, so in a last-ditch effort the autopilot turns over the controls to an LLM.

But even in that scenario we have to have some way of knowing that we truly are in a hopeless scenario. Maybe it just appears that way and the LLM makes it worse.

Or maybe the LLM decides to pilot it into another spacecraft to reduce velocity.

My point is there aren’t many scenarios where “do something reasonable 90% of the time, but do something insane the other 10% of the time” is better than do nothing.

I’ve been using LLMs at work, and my gut feeling says I’m getting some productivity boost, but I’m not even certain of that, because I have also spent time chasing subtle bugs that I wouldn’t have introduced myself. I think I’m going to need to see the results of some large, well-designed studies and several years of output before I feel confident saying one way or the other.

18. Disposal8433 ◴[] No.44406920{4}[source]
His Refactoring book was a good thing at the time. But it ends there; he should have tried to program instead of writing all the other books that made no sense.
19. dcminter ◴[] No.44407367{6}[source]
I'm a bit too lazy to check, but didn't he leave Thoughtworks?

Apropos of nothing I saw him speak once at a corporate shindig and I didn't get the impression that he enjoyed it very much. Some of the engineering management were being super weird about him being a (very niche) famous person too...

replies(1): >>44408005 #
20. ◴[] No.44407471[source]
21. alganet ◴[] No.44408005{7}[source]
https://martinfowler.com/aboutMe.html

> [...] I work for Thoughtworks [...]

> [...] I don't come up with original ideas, but do a pretty good job of recognizing and packaging the ideas of others [...]

> [...] I see my main role as helping my colleagues to capture and promulgate what we've learned about software development to help our profession improve. We've always believed that this openness helps us find clients, recruit the best people, and help our clients succeed. [...]

So we should read him as such: he's a consultant, trying to capture what successful teams do. Sometimes succeeding, sometimes failing.

replies(1): >>44412862 #
22. dcminter ◴[] No.44412862{8}[source]
Yeah, seems I was misremembering - looks like he just doesn't do talks any more.