46 points petethomas | 2 comments
gchamonlive No.44397333
If you put lemons in a blender and add water, it'll produce lemon juice. If you put your hand in a blender, however, you'll get a mangled hand. Is this exposing dark tendencies of mangling bodies hidden deep down in blenders all across the globe? Or is it just doing what it's supposed to be doing?

My point is, we can add all sorts of security measures, but at the end of the day nothing is a replacement for user education and intention.

replies(4): >>44397460 #>>44397741 #>>44397742 #>>44397831 #
1. _wire_ No.44397831
The industry sells these devices as "intelligent," which brings the expectation of maturity and wisdom -- dependability.

So the analogy is more like a cabin door on a 737. Some yahoo could try to open it in flight, but that doesn't justify it spontaneously blowing out at altitude.

But the elephant in the room is why we are perseverating over these silly dichotomies. If you've got a problem with an AI, why not just ask the AI? Can't it clean up after making a poopy?!

replies(1): >>44403683 #
2. gchamonlive No.44403683
Yeah, and that's a problem for the industry. At most it's exposing a problem in society. These companies are not interested in smart LLMs for their own sake. They are interested in smart LLMs only as long as they make them obscenely rich.

For the regular user, it's just a matter of changing the prompt to get better output from a capable model. So again, it comes down to education.

Of course model bias plays a role. If you train a model on racist posts, you'll get a racist model. But as long as you have a fairly capable model for the average use, these edge cases aren't of interest to the user, who can just adjust their prompts.