/top/
/new/
/best/
/ask/
/show/
/job/
^
slacker news
login
about
←back to thread
The "confident idiot" problem: Why AI needs hard rules, not vibe checks
(steerlabs.substack.com)
323 points
steerlabs
| 2 comments |
04 Dec 25 20:48 UTC
|
HN request time: 0.018s
|
source
Show context
tasuki
◴[
08 Dec 25 15:19 UTC
]
No.
46193203
[source]
▶
>>46152838 (OP)
#
I wish we didn't use LLMs to create test code. Tests should be the only thing written by a human. Let the AI handle the implementation so they pass!
replies(1):
>>46193583
#
1.
lxgr
◴[
08 Dec 25 15:43 UTC
]
No.
46193583
[source]
▶
>>46193203
#
Humans writing tests can only help against some subset of all problems that can happen with incompetent or misaligned LLMs. For example, they can game human-written and LLM-written tests just the same.
replies(1):
>>46210175
#
ID:
GO
2.
tasuki
◴[
09 Dec 25 20:27 UTC
]
No.
46210175
[source]
▶
>>46193583 (TP)
#
Not property-based tests. Either way, the human is there to tell the machine what to do: tests are one way of expressing that.
↑