
339 points throw0101c | 2 comments
oytis ◴[] No.44609364[source]
I just hope when (if) the hype is over, we can repurpose the capacities for something useful (e.g. drug discovery etc.)
replies(16): >>44609452 #>>44609461 #>>44609463 #>>44609471 #>>44609489 #>>44609580 #>>44609632 #>>44609635 #>>44609712 #>>44609785 #>>44609958 #>>44609979 #>>44610227 #>>44610522 #>>44610554 #>>44610755 #
alphazard ◴[] No.44609712[source]
The rest of the world has not caught up to current LLM capabilities. If it all stopped tomorrow and we couldn't build anything more intelligent than what we have now, there would still be years of work automating away toil across various industries.
replies(3): >>44609814 #>>44609887 #>>44614001 #
Terr_ ◴[] No.44609814[source]
Creating oodles of new jobs in internally QAing LLM results, or finding and suing companies for reckless outcomes. :p
replies(1): >>44609973 #
1. alphazard ◴[] No.44609973[source]
As long as liability is clearly assigned, it doesn't have a negative economic impact. It's ambiguity about liability that creates the negative economic impact. Once liability is initially assigned by law, it can be reassigned via contract, in exchange for cash, to whoever can bear it most productively.

E.g. if OpenAI is responsible for any damages caused by ChatGPT, then the service shuts down until you waive liability, and then it's back up. Similarly, if companies are responsible for the chatbots they deploy, they can buy insurance, put up guardrails around the chatbot, or not use it.
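The reassignment-by-contract logic above is essentially the Coase theorem: whoever the law initially burdens, the parties can trade the liability to whoever can handle it most cheaply. A minimal sketch, with entirely hypothetical costs (the vendor/deployer split and all dollar figures are assumptions for illustration):

```python
# Coase-style liability reassignment, hypothetical numbers.
# Assumption: the deploying company can mitigate chatbot damages
# (domain guardrails, insurance) more cheaply than the model vendor.

vendor_cost = 80_000    # vendor's cost to bear the liability
deployer_cost = 30_000  # deployer's cost to bear the same liability

def transfer_price_range(holder_cost: float, taker_cost: float):
    """If the current holder's cost exceeds the taker's, any payment
    strictly between the two costs leaves both parties better off;
    otherwise no mutually beneficial transfer exists."""
    if holder_cost > taker_cost:
        return (taker_cost, holder_cost)
    return None

# Law assigns liability to the vendor; a contract moves it to the deployer.
print(transfer_price_range(vendor_cost, deployer_cost))
# any price in that open interval benefits both sides
```

The point is that the *initial* legal assignment matters less than its clarity: once costs are knowable, the contract price range exists whenever the parties' costs differ.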

replies(1): >>44610453 #
2. Terr_ ◴[] No.44610453[source]
> As long as liability is clearly assigned, it doesn't have an economic impact

In a reality with perfect knowledge, complete laws always applied, and populated by un-bankrupt-able immortals with infinite lines of credit, yes. :P