
301 points lukeio | 4 comments | | HN request time: 0.801s | source
Xenoamorphous ◴[] No.46233361[source]
Kinda tangential, but with the advent of AI I feel like there won't be a niche for "handcrafted software".

When quartz watches arrived, the makers of mechanical watches struggled. Quartz watches are cheaper, more accurate in many cases, and servicing is usually restricted to replacing a battery. However, some people appreciate a good mechanical watch (and the status-symbol aspect, of course), and nowadays the mechanical watch market is flourishing. Something similar happened with artificial fabrics (polyester, acrylic) and cheaply made clothes: there's a market for handmade clothes that use natural fabrics.

Nobody (well, barring a few HN readers) will ever care if the software was written by people or a bot, as long as it works.

replies(6): >>46233710 #>>46233804 #>>46233962 #>>46234833 #>>46235108 #>>46236684 #
jesse__ ◴[] No.46234833[source]
This is a bad analogy.

> more accurate in many cases

It's laughable that LLMs can be considered more accurate than human operators at the macro level. Sure, if I ask a search bot the date Notre Dame was built, it'll get it right more often than me, but if I ask it to write even a simple heap memory allocator, it's going to vomit all over itself.

> Nobody [...] will ever care if the software was written by people or a bot, as long as it works

Yeah... wake me up when LLMs can produce even nominally complex pieces of software that are on par with human quality. For anything outside of basic web apps, we're a long way off.

replies(1): >>46237554 #
mmooss ◴[] No.46237554[source]
> if I ask a search bot the date Notre Dame was built, it'll get it right more often than me

With both of you doing research in your own ways, you'll get it right more often (I hope).

replies(1): >>46240616 #
1. jesse__ ◴[] No.46240616[source]
I meant without looking it up
replies(1): >>46240645 #
2. mmooss ◴[] No.46240645[source]
The bot always looks it up, in a way.
replies(1): >>46241231 #
3. jesse__ ◴[] No.46241231[source]
I mean, so do I, if you think about it like that. I just have a much lower chance of successfully retrieving the correct information.

The comparison I was making with respect to accuracy was that the bot is much more likely to accurately answer fact-based queries, and much less likely to succeed at any task that requires actual 'thinking', especially when that task is not particularly common in the training set, such as writing a memory allocator. I can write and debug a simple allocator in half an hour, no worries. I'd be surprised if any of the current LLMs could.

replies(1): >>46241375 #
4. mmooss ◴[] No.46241375{3}[source]
I agree. I was just making a tangential point with a bit of exaggeration; sorry if it seemed to distract from your main point.

If you look up the factual question in a quality source, you'll be more accurate than the bot which looked at many sources. That's all I meant.