
Human (quarter--mile.com)
717 points by surprisetalk | 5 comments
DeusExMachina No.43992849
> Perhaps you, a human, read this and think: Well, this world sounds kind of boring. Some of the machines think so, too.

> Most of the machines got bored of the project. But, all of a sudden, things began to get interesting.

> The result was like nothing the machines had ever seen. It was wonderful

> Machine society began obsessing over this development.

> The machines were impressed. And a bit scared.

Boredom, interest, wonder, obsession, being impressed and scared are all emotions that the machines in the story should not be able to experience.

1. killerstorm No.43992963
Jürgen Schmidhuber introduced curiosity/boredom mechanisms as a way to improve learning in reinforcement learning environments:

https://people.idsia.ch/~juergen/curiositysab/curiositysab.h...

This mechanism can be formalized.

> Zero reinforcement should be given in case of perfect matches, high reinforcement should be given in case of `near-misses', and low reinforcement again should be given in case of strong mismatches. This corresponds to a notion from `esthetic information theory' which tries to explain the feeling of `beauty' by means of the quotient of `subjective complexity' and `subjective order' or the quotient of `unfamiliarity' and `familiarity' (measured in an information-theoretic manner).
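
A minimal sketch of what that reward shape could look like, in Python. The inverted-U function and the `sweet_spot` scale are my own illustration, not Schmidhuber's actual formula:

    import math

    def curiosity_reward(prediction_error, sweet_spot=1.0):
        # Zero reward for perfect predictions (boredom), peak reward
        # for 'near-misses' around sweet_spot, and decaying reward
        # again for strong mismatches. sweet_spot is an assumed scale.
        e = abs(prediction_error) / sweet_spot
        return e * math.exp(1.0 - e)  # 0 at e=0, max 1 at e=1, -> 0 as e grows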

This type of architecture is very similar to GANs, which later became very successful.

2. littlestymaar No.43993482
While this is interesting, the GP's point still stands, as the text explicitly says “There is no emotion” in the world of the machines.
3. killerstorm No.43994562
The language of reward mechanisms can be translated into the language of emotions. Emotions are something humans experience and understand at an innate level; they are qualia. If a reward structure is translated into our language, we can get a better intuitive understanding of it.

For example, a direct negative reward associated with undesired states is often called "pain". If you want a robot to avoid bumping into walls, you give it "pain" feedback, and it will learn to avoid walls. That's essentially how it works for humans, animals, etc. Obviously the robot does not literally experience "pain" as an emotion; it's just a reward structure.
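
A toy sketch of that idea, assuming plain tabular Q-learning (none of this is from the thread; it's just to make the reward structure concrete): a 1-D corridor where bumping into the wall at position 0 gives a negative reward and reaching position 5 gives a positive one.

    import random

    ACTIONS = (-1, +1)                       # step left / step right
    Q = {(s, a): 0.0 for s in range(6) for a in ACTIONS}
    alpha, gamma, eps = 0.5, 0.9, 0.1        # assumed hyperparameters

    for _ in range(3000):
        s = random.randint(1, 4)
        while 0 < s < 5:
            a = random.choice(ACTIONS) if random.random() < eps \
                else max(ACTIONS, key=lambda x: Q[(s, x)])
            s2 = s + a
            r = -1.0 if s2 == 0 else (1.0 if s2 == 5 else 0.0)  # "pain" / goal
            nxt = 0.0 if s2 in (0, 5) else max(Q[(s2, b)] for b in ACTIONS)
            Q[(s, a)] += alpha * (r + gamma * nxt - Q[(s, a)])
            s = s2

    # After training, Q[(1, -1)] is close to -1: the agent has "learned
    # pain" and prefers stepping away from the wall.
    print(Q[(1, -1)], Q[(1, +1)])

The emotion word is just a readable label for the sign and placement of the reward.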

4. littlestymaar No.44002278
What you've written doesn't change the fact that there's a contradiction in the author's writing. And as ithkuil said in another comment[1], it's not surprising at all that such a contradiction would occur in a work of fiction written by a human, because we are first and foremost emotional beings, and we cannot really imagine what a society of purely rational beings would be like.

I don't really understand why you want to pretend there's no inconsistency in a piece of fiction by invoking pseudo-technical arguments that are entirely foreign to the piece of fiction itself.

[1]: https://news.ycombinator.com/item?id=43992932

5. killerstorm No.44013552
One way to interact with a piece of fiction is to find a way to connect it to things you know or have thought about.