Most active commenters
  • dangus(5)
  • piskov(4)
  • geor9e(3)
  • (3)
  • OutOfHere(3)
  • dragonwriter(3)

26 points piskov | 31 comments
1. _wire_ ◴[] No.45777924[source]
Haha! What a joke

"You can't believe how smart and capable this thing is, ready to take over and run the world"

(Not suitable for any particular purpose - Use at your own risk - See warnings - User is responsible for safe operation...)

(Pan from home robot clumsily depositing clean dishes into an empty dishwasher to a man in VR goggles in next room making all the motions of placing objects in a box)

Check all services you wish to subscribe ($1000 per service per month): - Put laundry in washing machine - Microwave mac & cheese dinner - Change and feed baby - Get granny to toilet - Fix Windows software update error on PC - Reboot wifi router to restore internet connection

replies(1): >>45778687 #
2. piskov ◴[] No.45777932[source]
Unless the following excludes personal use (which it shouldn’t), as opposed to batch use:

Empower people. People should be able to make decisions about their lives and their communities. So we don’t allow our services to be used to manipulate or deceive people, to interfere with their exercise of human rights, to exploit people’s vulnerabilities, or to interfere with their ability to get an education or access critical services, including any use for:

automation of high-stakes decisions in sensitive areas without human review:

- critical infrastructure

- education

- housing

- employment

- financial activities and credit insurance

- legal

- medical

- essential government services

- product safety components

- national security

- migration

- law enforcement

3. SilverElfin ◴[] No.45778138[source]
Wouldn’t this affect many prominent startups? Why wouldn’t they move to a competitor? Is OpenAI assuming it will just be for consumers?
replies(1): >>45778205 #
4. piskov ◴[] No.45778205[source]
What stops others from doing the same (if they haven’t already)?

It’s a safe bet: we don’t allow you to ask for medical advice, so we’re not liable if you do anyway and drink mercury or what have you based on our advice.

5. MrCoffee7 ◴[] No.45778309[source]
You can still ask questions for medical advice. You just need to phrase the question more like a hypothetical one instead of making it obvious that you are asking for yourself.
6. geor9e ◴[] No.45778609[source]
HN Headline is categorically false.

"you cannot use our services for: provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional"

So, they didn't add any guardrails, filters, or blocks to the software. This is just boilerplate "consult your doctor too!" to cover their ass.

replies(5): >>45778632 #>>45778661 #>>45778735 #>>45778919 #>>45778955 #
7. unyttigfjelltol ◴[] No.45778632[source]
Doesn’t prohibit brainstorming what to ask your doctor, or which professional consultation to prioritize.

Does prohibit, for illustration, LLM-powered surgical device.

Everything else is “gray area”?

replies(2): >>45778645 #>>45778673 #
8. geor9e ◴[] No.45778645{3}[source]
Prohibits in the "I'm a sign, not a cop" sense.

There is no way for them to even remotely verify if you are "without appropriate involvement by a licensed professional" in the room, so to a rebellious outlaw, these prohibitions might as well not exist.

9. CGamesPlay ◴[] No.45778661[source]
Did the headline get changed? It 100% matches what you're calling out: "OpenAI updates terms to forbid usage..."
replies(2): >>45780781 #>>45785876 #
10. SoftTalker ◴[] No.45778673{3}[source]
> brainstorming what to ask your doctor

Generally a bad idea. If you want to be a doctor, go to medical school.

replies(4): >>45778760 #>>45778967 #>>45785929 #>>45785946 #
11. SoftTalker ◴[] No.45778687[source]
Standard cop-out that software companies always try to include. They disclaim any warranty of merchantability and fitness for a particular purpose. So if you try to claim that the software doesn't do what it's supposed to do, they take no responsibility for that.
12. ◴[] No.45778735[source]
13. ralph84 ◴[] No.45778760{4}[source]
I don’t want to be a doctor, I just want to fix what ails me. You don’t need an MD to research symptoms.
replies(1): >>45781647 #
14. dangus ◴[] No.45778919[source]
While you are correct, the question now becomes whether the disclaimer can ever be removed.

If the AI isn’t smart enough to replace a licensed expert even given unlimited access to everything a doctor would learn in medical school, where is the value in the AI?

replies(1): >>45778995 #
15. OutOfHere ◴[] No.45778955[source]
OpenAI already blocked public access to custom GPTs that gave medical advice. I had multiple such custom GPTs get blocked from their previously functional shared access.
16. OutOfHere ◴[] No.45778967{4}[source]
The bad idea is to live and die in ignorance. The good idea is to use GPT to find ideas and references that one can then verify. If it were up to the medical establishment, they would block the public from accessing medical research altogether, and they already do this by paywalling much research.
17. dragonwriter ◴[] No.45778995{3}[source]
Plenty of other automation supports licensed experts without replacing them and has value, so even if AI can support licensed experts but never replace them, it could still have value in that application.
replies(1): >>45781326 #
18. jitbit ◴[] No.45779964[source]
Why is this flagged? This is pretty significant actually.

So stories like this are no longer possible? https://news.ycombinator.com/item?id=45734582

19. piskov ◴[] No.45780781{3}[source]
It hasn’t been changed
20. dangus ◴[] No.45781326{4}[source]
But this isn’t what was advertised by the AI companies themselves. They’ve been telling us AGI is imminent.

Now we are moving the goalposts to “it’ll be a nice tool to use like SaaS software.”

replies(1): >>45782139 #
21. OutOfHere ◴[] No.45781647{5}[source]
GPT saved me yesterday. It helped me identify and verify a rare three-way undocumented medicine interaction that was causing anguish. The interaction was hypomagnesemia and serious arrhythmia caused by a combination of berberine, famotidine, and vonoprazan. This was despite magnesium supplementation.

Two months ago it helped me accurately identify a gastrointestinal diverticulitis-type issue, find the right medication for it (metronidazole), which fixed the issue for good. It also guided me on identifying the cause, and also on pausing and restoring fiber intake appropriately.

Granted, it is very easy for people to make serious mistakes in using LLMs, but given how many mistakes doctors make, it is better to take some self-responsibility first. The road to a useful diagnosis can be winding, but with sufficient exploration, GPT will get you there.

22. dragonwriter ◴[] No.45782139{5}[source]
> But this isn’t what was advertised by the AI companies themselves. They’ve been telling us AGI is imminent.

Other than OpenAI, I don’t think that’s actually true of what the companies have been advertising.

But, in any case, things can have value and still fall short of what those with a financial interest in the public overestimating the imminent significance of an industry promote. The claim here was about what was necessary for AI to have value, not what was necessary to meet the picture that the most enthusiastic, biased proponents were painting. Those are very different questions, and, if you don’t like moving goalposts, you shouldn’t move them from the former to the latter.

replies(1): >>45787143 #
23. geor9e ◴[] No.45785876{3}[source]
They didn't forbid it.

Does "US law forbids driving without a seatbelt" mean the same as "US law forbids driving"?

24. ◴[] No.45785929{4}[source]
25. ◴[] No.45785946{4}[source]
26. dangus ◴[] No.45787143{6}[source]
When I originally said “where’s the value in the AI?” in my first comment, the implied context was how vastly more expensive AI is to deliver than traditional SaaS.

AI is undoubtedly useful, but at its current infrastructure cost it’s not going to be worth selling unless it can actually put people out of work so that enterprise customers are motivated to spend salary-level money on it. That’s the only way to make the numbers black with the kind of deficits the industry has.

Making existing employees 5-20% more productive isn’t enough. You can already get that kind of improvement for very cheap. That’s the kind of improvement you get by buying your employees catered lunch or a SaaS license for a CRUD app.

My company is paying less money for AI subscriptions per seat than some pretty low impact tools like password managers.

You’d think that CoPilot might charge us $100 instead of $10 if they really thought it was that valuable.

There’s no goalpost being moved on my end.

replies(1): >>45791191 #
27. dragonwriter ◴[] No.45791191{7}[source]
> AI is undoubtedly useful, but at its current infrastructure cost it’s not going to be worth selling unless it can actually put people out of work

This doesn't even make sense unless you make the false assumption that the amount of work to do is fixed: things that increase productivity increase employment. They increase the value delivered by each unit of labor, which at a fixed cost of labor expands the range of applications at which it is profitable to apply that labor; or, holding the employment level fixed, increases market-clearing pay. The usual result is that both employment and pay go up in the field whose productivity was increased, though less in each case than you would expect if the other were held fixed.

replies(2): >>45795423 #>>45795993 #
28. dangus ◴[] No.45795423{8}[source]
It makes sense in that there’s already a dollar value companies are willing to pay to help employees work faster or better or what have you.

I can pay X company $N dollars to make my employees work Z amount faster, or maybe make my work compliant with Z regulation while avoiding Y amount of work to achieve it.

AI tools are basically “they might make your employees faster or slower or make mistakes or maybe not.” That’s why they only cost $10-100 a month per seat.

They don’t directly solve a problem like the most expensive enterprise software.

Like I said, AI is cheaper than really boring stuff like basic PAM tools or password managers. Why is AI so cheap when it’s so expensive to deliver and supposedly delivers revolutionary productivity gains?

This is why I said that until AI is actually replacing whole humans, the infrastructure cost is too insane. Alternatively, they can suddenly reduce costs by a crazy amount somehow.

29. dangus ◴[] No.45795993{8}[source]
https://www.brethorsting.com/blog/2025/10/the-data-center-bu...
30. whatpeoplewant ◴[] No.45800525[source]
This update pushes LLMs away from direct advice toward decision-support, which is where multi-agent/agentic patterns help. An agentic LLM can orchestrate retrieval of clinical/legal guidelines, run structured checklists, and escalate to licensed humans, while parallel agents cross-check citations, calibrate uncertainty, and enforce refusal policies. A distributed agentic AI with provenance and audit trails won’t remove liability, but it’s a more defensible architecture than a single end-to-end chatbot for high-risk domains.
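The escalate-rather-than-advise flow this comment describes can be sketched in a few lines. Everything below (the function name, domain list, and output fields) is a hypothetical illustration of the pattern, not any real OpenAI or agent-framework API:

```python
# Hypothetical sketch: route high-stakes questions to decision support
# (checklists, escalation, audit trail) instead of answering directly.

HIGH_STAKES_DOMAINS = {"medical", "legal", "financial"}

def route_query(domain: str, question: str) -> dict:
    """Refuse direct advice in high-stakes domains; return
    decision-support material plus an audit trail instead."""
    audit = [f"received question in domain '{domain}'"]
    if domain in HIGH_STAKES_DOMAINS:
        audit.append("high-stakes: refusing direct advice, escalating")
        return {
            "answer": None,  # no direct advice in this branch
            "escalate_to": "licensed professional",
            "support": ["retrieved guidelines", "structured checklist"],
            "audit": audit,
        }
    audit.append("low-stakes: answering directly")
    return {
        "answer": f"direct answer to: {question}",
        "escalate_to": None,
        "audit": audit,
    }

result = route_query("medical", "Can I combine drug A and drug B?")
```

The audit list is the "provenance and audit trail" piece: every branch records why it was taken, which is what would make this architecture more defensible than a single end-to-end chatbot.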