169 points mattmarcus | 16 comments | | HN request time: 0.001s | source | bottom
nativeit ◴[] No.43613007[source]
We’re all just elementary particles being clumped together in energy gradients, therefore my little computer project is sentient—this is getting absurd.
replies(4): >>43613040 #>>43613242 #>>43613636 #>>43613675 #
nativeit ◴[] No.43613040[source]
Sorry, this is more about the discussion of this article than the article itself. The way acolytes keep moving the goalposts for declaring consciousness is becoming increasingly cult-y.
replies(2): >>43613083 #>>43613334 #
1. wongarsu ◴[] No.43613083[source]
We spent 40 years moving the goal posts on what constitutes AI. Now we seem to have found an AI worthy of that title and instead start moving the goal posts on "consciousness", "understanding" and "intelligence".
replies(7): >>43613139 #>>43613150 #>>43613171 #>>43613259 #>>43613347 #>>43613368 #>>43613468 #
2. cayley_graph ◴[] No.43613139[source]
Indeed, science is a process of discovery and adjusting goals and expectations. It is not a mountain to be summited. It is highly telling that the LLM boosters do not understand this. Those with a genuine interest in pushing forward our understanding of cognition do.
replies(1): >>43613371 #
3. goatlover ◴[] No.43613150[source]
Those have all been difficult words to define, with much debate over the past 40 years or longer.
4. bluefirebrand ◴[] No.43613171[source]
> Now we seem to have found an AI worthy of that title and instead start moving the goal posts on "consciousness", "understanding" and "intelligence".

We didn't "find" AI, we invented systems that some people want to call AI, and some people aren't convinced it meets the bar

It is entirely reasonable for people to realize we set the bar too low when it is a bar we invented

replies(1): >>43613243 #
5. darkerside ◴[] No.43613243[source]
What should the bar be? Should it be higher than it is for the average human? Or even the least intelligent human?
replies(2): >>43613462 #>>43613561 #
6. Sohcahtoa82 ◴[] No.43613259[source]
> We spent 40 years moving the goal posts on what constitutes AI.

Who is "we"?

I think of "AI" as a pretty all-encompassing term. ChatGPT is AI, but so is the computer player in the 1995 game Command & Conquer, among thousands of other games. Heck, I might even call the ghosts in Pac-Man "AI", even if their behavior is extremely simple, predictable, and even exploitable once you understand it.

7. acchow ◴[] No.43613347[source]
> Now we seem to have found an AI worthy of that title and instead start moving the goal posts on "consciousness"

The goalposts already differentiated between "totally human-like" and "actually conscious".

See also the philosophical zombie thought experiment from the 1970s.

8. arkh ◴[] No.43613368[source]
The original Mechanical Turk was a chess hoax that managed to make people think it was a thinking machine. https://en.wikipedia.org/wiki/Mechanical_Turk

The current LLM anthropomorphism may soon be known as the silicon Turk: managing to make people think they're AI.

replies(1): >>43616596 #
9. delusional ◴[] No.43613371[source]
They believe that once they reach this summit everything else will be trivial problems that can be posed to the almighty AI. It's not that they don't understand the process, it's that they think AI is going to disrupt that process.

They literally believe that the AI will supersede the scientific process. It's crypto shit all over again.

replies(1): >>43613440 #
10. redundantly ◴[] No.43613440{3}[source]
Well, if that summit were reached and AI were able to improve itself trivially, I'd be willing to cede that they've reached their goal.

Anything less than that, meh.

11. joe8756438 ◴[] No.43613462{3}[source]
there is no such bar.

We don’t even have a good way to quantify human ability. The idea that we could suddenly develop a technique to quantify human ability because we now have a piece of technology that would benefit from that quantification is absurd.

That doesn’t mean we shouldn’t try to measure the ability of an LLM. But it does mean that the techniques used to quantify an LLM’s ability are not something that can be applied to humans outside of narrow focus areas.

12. 6510 ◴[] No.43613468[source]
My joke was that the debate over what it can't do has changed into a debate over what it shouldn't be allowed to do.
replies(1): >>43613749 #
13. bluefirebrand ◴[] No.43613561{3}[source]
Personally I don't care what the bar is, honestly

Call it AI, call it LLMs, whatever

Just as long as we continue to recognize it as a tool that humans can use, and don't start treating it as a human, or as a life, I won't complain

I'm saving my anger for when idiots start to argue that LLMs are alive and deserve human rights

14. wizardforhire ◴[] No.43613749[source]
There ARE no jokes aloud on HN.

Look I’m no stranger to love. you know the rules and so do I… you can’t find this conversation with any other guy.

But since the parent was making a meta commentary on this conversation I’d like to introduce everyone here as Kettle to a friend of mine known as #000000

replies(1): >>43646843 #
15. 6510 ◴[] No.43616596[source]
The Mechanical Turk did something truly magical. Everyone stopped moaning that automation was impossible, because most machines (while some were absurdly complex) were many orders of magnitude simpler than chess.

The initial LLMs simply lied about everything. If you happened to know something, it was rather shocking; but for topics you knew nothing about, you got a rather convincing answer. Then the arms race began, and now the lies are so convincing we're at the point of viable robot overlords.

16. 6510 ◴[] No.43646843{3}[source]
Is there reason to think language developers understand nullability?