
165 points | distalx | 1 comment
ilaksh [dead post] No.43948635
[flagged]
timewizard No.43948876
> 2023 is ancient history in the LLM space.

Okay, what specifically has improved in that time, which would allay the doctors specific concerns?

> do certain core aspects

And not others? Is there a delineated list of such failings in the current set of products?

> given the right prompts and framework

A flamethrower is perfectly safe given the right training and support. In the wrong hands it's likely to be a complete and total disaster in record short time.

> a weak prompt that was not written by a subject matter expert

So how do end users ever get to use a tool like this?

ilaksh No.43949126
The biggest thing that has improved is the intelligence of the models. The leading models are much more intelligent and robust. Still brittle in some ways, but fully capable of giving CBT advice.

The same way end users ever get to use a tool. Open source or an online service, for example.