

170 points bookofjoe | 95 comments
1. slibhb ◴[] No.43644865[source]
LLMs are statistical models trained on human-generated text. They aren't the perfectly logical "machine brains" that Asimov and others imagined.

The upshot of this is that LLMs are quite good at the stuff Asimov thought only humans would be able to do. What they aren't so good at (yet) is really rigorous reasoning, exactly the opposite of what 20th-century people assumed.

replies(5): >>43645899 #>>43646817 #>>43647147 #>>43647395 #>>43650058 #
2. n4r9 ◴[] No.43646621[source]
I've only read the first Foundation novel by Asimov. But what you write applies equally well to many other Golden Age authors, e.g. Heinlein and Bradbury, plus slightly later writers like Clarke. I doubt there was much in the way of autism awareness or diagnosis at the time, but it wouldn't be surprising if some of these writers landed somewhere on the spectrum.

Alfred Bester's "The Stars My Destination" stands out as a shining counterpoint in this era. You don't get much character development like that in other works until the sixties imo.

replies(1): >>43649293 #
3. beloch ◴[] No.43646817[source]
What we used to think of as "AI" at one point in time becomes a mere "algorithm" or "automation" by another point in time. A lot of what Asimov predicted has come to pass, very much in the way he saw it. We just no longer think of it as "AI".

LLMs are just the latest form of "AI" that, for a change, doesn't quite fit Asimov's mold. Perhaps that's because they're being designed to replace humans in creative tasks rather than liberate humans to pursue them.

replies(2): >>43647847 #>>43648825 #
4. wubrr ◴[] No.43647147[source]
> LLMs are statistical models trained on human-generated text.

I mean, not only human-generated text. Also, human brains are arguably statistical models trained on human-generated/collected data as well...

replies(2): >>43647173 #>>43647476 #
5. slibhb ◴[] No.43647173[source]
> Also, human brains are arguably statistical models trained on human-generated/collected data as well...

I'd say no, human brains are "trained" on billions of years of sensory data. A very small amount of that is human-generated.

replies(1): >>43647230 #
6. wubrr ◴[] No.43647230{3}[source]
Almost everything we learn in schools, universities, most jobs, history, news, Hacker News, etc. is literally human-generated text. Our brains have an efficient structure for learning language, which has evolved over time, but the process of actually learning a language happens after you are born, based on human-generated text and speech. Things like balance, walking, motor control, and speaking (physical voice control) are trained on sensory data, but there's no reason LLMs/AIs can't be trained on similar data (and in many cases they already are).
replies(1): >>43647571 #
7. BeetleB ◴[] No.43647395[source]
Reminds me of an old math professor I had. Before word processors, he'd write up the exam on paper, and the department secretary would type it up.

Then, when word processors came around, it was expected that faculty members would type it up themselves.

I don't know if there were fewer secretaries as a result, but professors' lives got much worse.

He misses the old days.

replies(2): >>43647446 #>>43650143 #
8. zusammen ◴[] No.43647446[source]
To be truthful, though, that’s only like 0.01 percent of the “academia was stolen from us and being a professor (if you ever get there at all) is worse” problem.
replies(1): >>43652256 #
9. 827a ◴[] No.43647476[source]
Maybe; at some level, are dogs' brains also simple sensory-collecting statistical models? Suppose a human baby and a dog are born on the same day, and the dog never leaves the baby's side for 20 years. It sees everything the baby sees, it hears everything the baby hears, and it gets to interact with its environment in roughly the same way the baby does, to the degree both are physically capable. The intelligence differential after that time will still be extraordinary.

My point in bringing up that metaphor is to focus the analogy: when people say "we're just statistical models trained on sensory data", we tend to focus way too much on the "sensory data" part, which has led, for example, to AI manufacturers investing billions of dollars in slurping up as much human intellectual output as possible to train "smarter" models.

The focus on the sensory input inherently devalues our quality of being; it implies that who we are is predominantly explicable by the world around us.

However, we should be focusing on the "statistical model" part: even if it is accurate to holistically describe the human brain as a statistical model trained on sensory data (which I have doubts about, but those are fine to leave aside), it's very clear that the fundamental statistical model itself is simply so far superior in human brains that comparing it to an LLM is like comparing us to a dog.

That should also be a focal point for AI manufacturers and researchers. If you are hunting for something on the spectrum of human-level intelligence, and during this hunt you provide it ten thousand lifetimes of sensory data to produce something that, maybe, if you ask it right, behaves similarly to a human who has trained in the domain for only a few years: you're barking up the wrong tree. What you're producing isn't even on the same spectrum. That doesn't mean it isn't useful, but it's not human-like intelligence.

replies(1): >>43647535 #
10. wubrr ◴[] No.43647535{3}[source]
Well the dog brain and human brain are very different statistical models, and I don't think we have any objective way of comparing/quantifying LLMs (as an architecture) vs human brains at this point. I think it's likely LLMs are currently not as good as human brains for human tasks, but I also think we can't say with any confidence that LLMs/NNs can't be better than human brains.
replies(1): >>43647881 #
11. skydhash ◴[] No.43647571{4}[source]
What we generate is probably a function of our sensory data plus what we call creativity. At least humans still have access to the sensory data, so we can separate the two (with varying success).

LLMs have access to what we generate, but not to the source. So they embed how we may use words, but not why we use one word and not another.

replies(2): >>43647697 #>>43649780 #
12. wubrr ◴[] No.43647697{5}[source]
> At least humans still have access to the sensory data

I don't understand this point - we can obviously collect sensory data and use that for training. Many AI/LLM/robotics projects do this today...

> So it embed how we may use words, but not why we use this word and not others.

Humans learn language by observing other humans use language, not by being taught explicit rules about when to use which word and why.

replies(1): >>43647979 #
13. israrkhan ◴[] No.43647847[source]
Exactly... as someone said, "I need AI to do my laundry and dishes, while I can focus on art and creative stuff." But AI is doing the exact opposite, i.e. the creative stuff (drawing, poetry, coding, document creation, etc.), while we are left to do the dishes and laundry.
replies(4): >>43648114 #>>43648246 #>>43649501 #>>43653897 #
14. 827a ◴[] No.43647881{4}[source]
For sure; we don't have a way of comparing the architectural substrate of human intelligence versus LLM intelligence. We don't even have a way of comparing the architectural substrate of one human brain with another.

Here's my broad concern: on the one hand, we have an AI thought leader (Sam Altman) who defines superintelligence as surpassing human intelligence at all measurable tasks. I don't believe it's controversial to say that the goal of LLM intelligence is something along these lines: it exists on the spectrum of human intelligence, it's trained on human intelligence, and we want it to surpass human intelligence on that spectrum.

On the other hand: we don't know how the statistical model of human intelligence works at any level that would enable reproduction or comparison, and there's really good reason to believe that the human statistical model is vastly superior to the LLM model. The argument for this lies in my previous comment: the vast majority of intelligence advances in LLMs comes from increasing the volume of training data. Some intelligence likely comes from statistical-modeling breakthroughs since the transformer, but by and large it's from training data. Comparatively speaking, the most intelligent humans are not more intelligent because they've been alive longer and thus had access to more sensory data. Some minor level of intelligence comes from the quality of your sensory data (studying, reading, education). But the vast majority of the intelligence difference between humans is inexplicable; Einstein was just Born Smarter; God granted him a unique and better statistical model.

This points to the undeniable reality that, at the very least, the statistical model of the human brain and that of an LLM are very different, which should raise eyebrows at Sam Altman's claim that superintelligence will evolve along the spectrum of human intelligence. It might, but it's like arguing that the app you're building will be the highest-quality and fastest macOS app ever built, while you're building it with WPF and compiling it for x86 to run on WINE and Rosetta. GPT isn't human intelligence; at best, it might be emulating, extremely poorly and inefficiently, some parts of human intelligence. But they didn't get the statistical model right, and without that it's like forcing a square peg into a round hole.

replies(1): >>43648051 #
15. skydhash ◴[] No.43647979{6}[source]
> I don't understand this point - we can obviously collect sensory data and use that for training.

Sensory data is not the main issue, but how we interpret them.

In Jacob Bronowski's The Origins of Knowledge and Imagination, IIRC, there's an argument that our eyes are very coarse sensors. They do only basic analysis, from which the brain infers the real world around us using data from other organs. Like Plato's cave, but with many more dimensions.

But we humans all come with the same mechanisms, which roughly interpret things the same way. So there's some commonality in the final interpretation.

> Humans learn language by observing other humans use language, not by being taught explicit rules about when to use which word and why.

Words are symbols that refer to things and to the relations between them. In the same book, there's a rough explanation of language that describes the three elements defining it: symbols or terms; the grammar (the rules for using the symbols); and a dictionary that maps the symbols to things, and the rules to interactions, in another domain we already accept as truth.

Maybe we are not taught the rules explicitly, but there's a lot of training done via corrections when we say a sentence incorrectly. We also learn the symbols and the dictionary as we grow and explore.

So LLMs learn the symbols and the rules, but not the whole dictionary. They can use the rules to create correct sentences and relate some symbols to others, but ultimately there's no dictionary behind it.

replies(2): >>43648249 #>>43648880 #
16. matheusd ◴[] No.43648051{5}[source]
Attempting to summarize your argument (please let me know if I succeeded):

Because we can't compare human and LLM architectural substrates, LLMs will never surpass human-level performance on _all_ tasks that require applying intelligence?

If my summary is correct, is there any hypothetical replacement for LLMs (for example, LLMs plus robotics, LLMs with CoT, multi-modal LLMs, multi-modal generative AI systems, etc.) that would cause you to consider the argument invalid, i.e. that the replacement could someday replace humans at all tasks?

replies(1): >>43648441 #
17. TheOtherHobbes ◴[] No.43648114{3}[source]
As someone else said - maybe you haven't noticed but there's a machine washing your clothes, and there's a good chance it has at least some very basic AI in it.

It's been quite a while since anyone in the developed world has had to wash clothes by slapping them against a rock while standing in a river.

Obviously this is really wishing for domestic robots, not AI, and robots are at least a couple of levels of complexity beyond today's text/image/video GenAI.

There were already huge issues with corporatisation of creativity as "content" long before AI arrived. In fact one of our biggest problems is the complete collapse of the public's ability to imagine anything at all outside of corporate content channels.

AI can reinforce that. But - ironically - it can also be very good at subverting it.

replies(4): >>43648543 #>>43650053 #>>43650713 #>>43651530 #
18. bad_user ◴[] No.43648246{3}[source]
I have yet to enjoy any of the "creative" slop coming out of LLMs.

Maybe some day I will, but I find that hard to believe, given that an LLM just copies its training material. All the creativity comes from the human input; even though people can now cheaply copy the style of actual artists, that doesn't mean they can make it work.

Art is interesting because it is created by humans, not despite it. For example, poetry is interesting because it makes you think about what the author meant. With LLMs there is no author, which makes those generated poems garbage.

I'm not saying it can't work at all; it can, but not in the way people think. I subscribe to George Orwell's dystopian view from 1984, which already imagined the "versificator".

replies(1): >>43648380 #
19. wubrr ◴[] No.43648249{7}[source]
> In the same book, there's a rough explanation for language which describe the three elements that define it: Symbols or terms, the grammar (or the rules for using the symbols), and a dictionary which maps the symbols to things and the rules to interactions in another domain that we already accept as truth.

There are two types of grammar for natural language: descriptive (how the language actually works and is used) and prescriptive (a set of rules about how a language should be used). There is no known complete and consistent rule-based grammar for any natural human language; all such grammars are based on some person or people, in a particular period, selecting a subset of the real descriptive grammar of the language and saying 'this is the better way'. Prescriptive, rule-based grammar is not at all how humans learn their first language, nor is prescriptive grammar generally complete or consistent. Babies can easily learn any language, even ones with no prescriptive grammar rules, just by observing; many studies confirm this.

> there's a lot of training done with corrections when we say a sentence incorrectly.

There's a lot of the same training for LLMs.

> So LLMs learn the symbols and the rules, but not the whole dictionary. It can use the rules to create correct sentences, and relates some symbols to other, but ultimately there's no dictionary behind it.

LLMs definitely learn 'the dictionary' (more accurately, a set of relations/associations between words and other kinds of data), and much better than humans do; not that such a 'dictionary' is an actual, fixed structure in the human brain.

20. ChrisMarshallNY ◴[] No.43648380{4}[source]
> I have yet to enjoy any of the "creative" slop coming out of LLMs.

Oh, come on. Who can't love the "classic" song, I Glued My Balls to My Butthole Again[0]?

I mean, that's AI "creativity," at its peak!

[0] https://www.youtube.com/watch?v=wPlOYPGMRws (Probably NSFW)

replies(3): >>43648785 #>>43648786 #>>43651366 #
21. 827a ◴[] No.43648441{6}[source]
Well, my argument is directed more at the people who say "well, the human brain is just a statistical model with training data". If I say both birds and airplanes are just a fuselage with wings, then proceed to dump billions of dollars into developing better wings, we're missing the bigger picture of how birds and airplanes are different.

LLM luddites often call LLMs stochastic parrots or advanced text-prediction engines. They're right, in my view, and I feel that LLM evangelists often don't understand why. Because LLMs have a vastly different statistical model, even when they showcase signs of human-like intelligence, what we're seeing cannot possibly be human-like intelligence, because human intelligence is inseparable from its statistical model.

But, it might still be intelligence. It might still be economically productive and useful and cool. It might also be scarier than most give it credit for being; we're building something that clearly has some kind of intelligence, crudely forcing a mask of human skin over it, oblivious to what's underneath.

22. Qworg ◴[] No.43648543{4}[source]
The wits in robotics would say we already have domestic robots; we just call them dishwashers and washing machines. Once something becomes good enough to take over a job completely, it gets its own name and drops the "robotic"; that's why we still have robotic vacuums.
replies(3): >>43648760 #>>43650740 #>>43664528 #
23. j_bum ◴[] No.43648760{5}[source]
Oh that’s an interesting idea.

I know I could google it, but I wonder whether washing machines were originally called "automatic clothes washers" or something similar before they became widely adopted.

24. ninkendo ◴[] No.43648785{5}[source]
I haven’t cried from laughing like this in a good while, thanks!
25. codethief ◴[] No.43648786{5}[source]
Apparently, the lyrics were not AI-generated, see https://www.reddit.com/r/Music/comments/1byjm7m/comment/l0wm...
replies(1): >>43648790 #
26. ChrisMarshallNY ◴[] No.43648790{6}[source]
Good find!

A friend demoed Suno to me, a couple of days ago, and it did generate lyrics (but not NSFW ones).

27. protocolture ◴[] No.43648825[source]
The bottom line from Kasparov's book on AI was that AI researchers want to build AGI, but every decade they are forced to release something to generate revenue, and it's branded as AI until the next time.

And they often get so caught up supporting the latest fake-AI craze that they don't get to research AGI.

28. jstanley ◴[] No.43648880{7}[source]
> there's an argument that our eyes are very coarse sensors. Instead they do basic analysis from which the brain can infer the real world around us with other data from other organs

I don't buy it. I think our eyes are approximately as fine as we perceive them to be.

When you look through a pair of binoculars at a boat and some trees on the other side of a lake, the only organ that's getting a magnified view is the eyes, so any information you derive comes from the eyes and your imagination, it can't have been secretly inferred from other senses.

replies(1): >>43653630 #
29. throwanem ◴[] No.43649293{3}[source]
Heinlein doesn't develop his characters? Oh, come on. You can't have read him at all!
replies(1): >>43651479 #
30. __MatrixMan__ ◴[] No.43649501{3}[source]
We ended up here because we have a propensity to share our creative outputs, and keep our laundry habits to ourselves. If we instead went around bragging about how efficiently we can fold a shirt, complete with mocap datasets of how it's done, we'd have gotten the other kind of AI first.
replies(1): >>43650075 #
31. throwaway7783 ◴[] No.43649780{5}[source]
One can look at creativity as discovery of a hitherto unknown pattern in a very large space of patterns.

No reason to think an LLM (a few generations down the line if not now) cannot do that

replies(1): >>43650668 #
32. hn_throwaway_99 ◴[] No.43650053{4}[source]
> As someone else said - maybe you haven't noticed but there's a machine washing your clothes, and there's a good chance it has at least some very basic AI in it.

This really seems like an "akshually" argument to me...

Nobody is denying that there are dishwashers and washing machines, or that they are big time-savers. But is it really unclear what people mean when they say "I want AI to wash my dishes and do my laundry"? I still spend hours doing the dishes and laundry every week, and I have a dishwasher and a washing machine. What I want is something that folds my laundry, something that lets me just dump my dishes in the sink and have them come out clean, ideally put away in the cabinets.

> Obviously this is really wishing for domestic robots, not AI

I don't mean this to be an "every Internet argument is over semantics" example, but literally every company and team I know that's working on autonomous robots refers heavily to them as AI. And there is a fundamental difference between "old school" robotics, i.e robots following procedural instructions, and robots that use AI-based models, e.g https://deepmind.google/discover/blog/gemini-robotics-brings... . I think it's doubly weird that you say that today's washing machines "has at least some very basic AI in it" (I think "very basic" is doing a lot of heavy lifting there...), but don't think AI refers to autonomous robots.

replies(1): >>43651601 #
33. Lerc ◴[] No.43650058[source]
"LLMs are statistical models"

I see this referenced over and over again to trivialise AI, as if it were a fait accompli.

I'm not entirely sure why invoking statistics feels like a rebuttal to me. Putting aside the fact that LLMs are not purely statistics: even if they were, what proof is there that you cannot make a statistical intelligent machine? It would not at all surprise me to learn that someone has made a purely statistical Turing-complete model. To then argue that it couldn't think is to say that computers can never think, and by that, plus the fact that we think, you are invoking a soul, God, or Penrose.

replies(3): >>43650367 #>>43653955 #>>43674231 #
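[Editor's note: for concreteness, a purely statistical text model can indeed be tiny. The sketch below is not from the thread; it is a toy bigram model that predicts text from nothing but a word-pair frequency table, with no rules or meaning anywhere in it.]

```python
from collections import Counter, defaultdict


def train_bigram(corpus: str) -> dict:
    """Count word -> next-word transitions: the whole 'model' is a frequency table."""
    words = corpus.split()
    model = defaultdict(Counter)
    for w1, w2 in zip(words, words[1:]):
        model[w1][w2] += 1
    return model


def most_likely_next(model: dict, word: str) -> str:
    """Predict the most frequent successor: pure statistics, no grammar rules."""
    return model[word].most_common(1)[0][0]


# Toy corpus; "the" is followed by "cat" twice, "mat" and "fish" once each.
corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigram(corpus)
print(most_likely_next(model, "the"))  # prints "cat"
```

Modern LLMs are of course vastly more sophisticated than this table lookup, but the sketch shows why "it's just statistics" is not by itself a rebuttal: the question is what the statistics are computed over, not whether statistics are involved.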
34. hn_throwaway_99 ◴[] No.43650075{4}[source]
> We ended up here because we have a propensity to share our creative outputs, and keep our laundry habits to ourselves

Somehow I doubt that the reason gen AI is way ahead of laundry-folding robots is because it's some kind of big secret about how to fold a shirt, or there aren't enough examples of shirt folding.

Manipulating a physical object like a shirt (especially a shirt or other piece of cloth, as opposed to a rigid object) is orders of magnitude more complex that completing a text string.

replies(1): >>43650879 #
35. ◴[] No.43650143[source]
36. lelandbatey ◴[] No.43650367[source]
In this one case it's not meant to trivialize; it's meant to point out that LLMs don't behave the way we thought AI would behave. We thought we'd have 100% logically sound thinking machines because we built them on top of digital logic. We thought they'd be obtuse; we thought they'd be "book smart but not wise". LLMs are just different from that: hallucinations, the whole "fancy words and great sentences but no substance to a paragraph" thing, all of that is different from the rigid but perfect brains we thought AI would bring. That's what "statistical machine" seems to be trying to point out.

It was assumed that if you asked the same AI the same question, you'd get the same answer every time. But that's not how LLMs work (I know you can seed them the same every time and get the same output, but we don't do that, so how we experience them is different).

replies(1): >>43650455 #
37. Lerc ◴[] No.43650455{3}[source]
That's a very archaic view of AI, like 70's era symbolic AI.
38. skydhash ◴[] No.43650668{6}[source]
Not really; sometimes it's just plausible lies. We distort the world but respect some basic rules, making it believable. Another difference from LLMs is that we can store this distortion and lay upon it as $TRUTH.

And we can distort quite far (see cartoons in drawing, dubstep in music, ...)

replies(1): >>43667663 #
39. tshaddox ◴[] No.43650713{4}[source]
> maybe you haven't noticed but there's a machine washing your clothes

Well sure, there’s also a computer recording, storing, and manipulating the songs I record and the books I write. But that’s not what we mean by “AI that composes music and writes books.”

This isn’t a quibble about the term “AI.” It’s simply clear from context that we’re talking about full automation of these tasks initiated by nothing more than a short prompt from the human.

40. tshaddox ◴[] No.43650740{5}[source]
I think that’s a bit silly. The reason we don’t commonly refer to a dishwasher as a robot isn’t because dishwashers exist and we only use “robot” for things that don’t exist.

(This should already be clear given that robots do exist, and we do call them robots, as you yourself noted, but never mind that for now.)

It’s not even about the level of mechanical or computational complexity. Automobiles have a lot of mechanical and computational complexity, but also aren’t called robots (ignoring of course self-driving cars).

replies(1): >>43651638 #
41. __MatrixMan__ ◴[] No.43650879{5}[source]
If you wanted finger-positioning data for how millions of different people fold thousands of different shirts, where would you go looking for that dataset?

My point is just that the availability of training data is vastly different between these cases. If we want better AI we're probably going to have to generate some huge curated datasets for mundane things that we've never considered worth capturing before.

It's an unfortunate quirk of what we decide to share with each other that has positioned AI to do art and not laundry.

42. bad_user ◴[] No.43651366{5}[source]
I don't find that very funny. It's interesting to see what AI can do, but wait a month or two and watch it again.

Compare that to the parodies made by someone like "Weird Al" Yankovic. I get that these tools will get better, but the best parodies work because of the human performer. They are funny because they aren't fake.

This goes for other art forms. People mention photography a lot, comparing it with painting. Photography works because it captures a real moment in time and space; it works because it's not fake. Painting also works because it shows what human imagination and skill with brushes can do. When it's fake (e.g., not made by a human painting with brushes on canvas, but by a Photoshop filter), it's meaningless.

replies(1): >>43652373 #
43. n4r9 ◴[] No.43651479{4}[source]
[The italics and punctuation suggest your comment is sarcastic, but I'm going to treat it as serious just in case.]

Yeah, I'd say characterisation is a weakness of his. I've read Stranger in a Strange Land, The Moon is a Harsh Mistress, Starship Troopers, and Double Star. Heinlein does explore characters more than, say, Clarke, but he doesn't go much for internal change or emotional growth. His male characters typically fall into one of two cartoonish camps: either supremely confident, talented, intelligent and independent (e.g. Jubal, Bernardo, Mannie, Bonforte...) or vaguely pathetic and stupid (e.g. moon men). His female characters are submissive, clumsily sexualised objects who contribute very little to the plot. There are a few partial exceptions - e.g. Lorenzo in Double Star and the female pilots in Starship Troopers - but the general atmosphere is one of teenage-boy wish fulfilment.

replies(2): >>43653902 #>>43657088 #
44. GeoAtreides ◴[] No.43651530{4}[source]
>there's a good chance it has at least some very basic AI in it.

lol no, what it has is a finite state machine; you don't want undefined or novel behaviour in user appliances

45. lannisterstark ◴[] No.43651601{5}[source]
> I still spend hours doing the dishes and laundry every week, and I have a dishwasher and washing machine.

I don't mean to sound insensitive, but, how? Literal hours?

46. Qworg ◴[] No.43651638{6}[source]
What is or isn't a robot is a point of debate; there are many different definitions.

Generally, it has to automate a task with some intelligence, so dishwashers qualify. That isn't an existence proof (nor did I claim it was).

replies(1): >>43656087 #
47. jhbadger ◴[] No.43652256{3}[source]
This wasn't just an "academia" thing, though. All business executives (even low-level ones) had secretaries in the 1980s and earlier too. Typing wasn't something most people could do, and it was seen as a waste of their time to learn. So people dictated letters to secretaries, who typed them. After personal computers became popular, typing one's own correspondence just became part of everyone's job, and secretaries (greatly reduced in number and rebranded as "assistants" who deal more with planning meetings and the like) became limited to upper management.
48. ChrisMarshallNY ◴[] No.43652373{6}[source]
Seems that you may have a point. As noted in another comment[0], the [rather puerile] lyrics were completely bro-sourced. They used Suno to mimic an old-style band.

[0] https://news.ycombinator.com/item?id=43648786

49. andsoitis ◴[] No.43653630{8}[source]
The brain turns the raw input from the eyes into the rich, layered visual experience we have of the world:

- basic features (color, brightness and contrast, edges and shapes, motion and direction)

- depth and spatial relationships

- recognition

- location and movement

- focus and attention

- prediction and filling in gaps

“Seeing” the real world requires much more than simply seeing with one eye.

50. schwartzworld ◴[] No.43653897{3}[source]
We thought machines were gonna do the work so we could pursue art and music. Instead, machines get to make the art and music, while humans work in the Amazon warehouses.
replies(1): >>43653945 #
51. throwanem ◴[] No.43653902{5}[source]
Thank you for confirming, especially at such effort, when a simple "No, I haven't; I just spend too much time uncritically reading feminism Twitter," would have amply sufficed. There's an honesty to this response in spite of itself, and in spite of itself I respect that.
replies(2): >>43654169 #>>43654286 #
52. aaronbaugher ◴[] No.43653945{4}[source]
It was kind of funny to see the shift in the media reaction when they realized the new batch of machines are better at replacing writers than at replacing truckers.
53. vacuity ◴[] No.43653955[source]
Personally, I have a negative opinion of LLMs, but I agree completely. Many people are motivated to reject LLMs solely because they see them as "soulless machines". Judge based on the facts of the matter, and make your values clear if you must bring them into it, but don't pretend you're not applying values when you are. You can do worse: kneejerk emotional reactions are just pointless.
54. Balgair ◴[] No.43654169{6}[source]
I sincerely have no idea if any of your comments in this thread are sarcastic or not. (This comment is also not sarcastic FYI).

Generally, I also agree that Heinlein's characters are one dimensional and could benefit from greater character growth, though that was a bit of a hallmark of Golden Age sci-fi.

replies(1): >>43655066 #
55. n4r9 ◴[] No.43654286{6}[source]
Not sure if it will help me saying this, but that's a disappointingly dismissive and avoidant response well below HN standards. I'm very willing to engage with any counter-arguments in good faith. I don't use Twitter (or Mastodon, or BlueSky, or TikTok, or Facebook, or Threads etc...), but I do enjoy discussing sci fi of different periods on Goodreads groups.
replies(1): >>43655032 #
56. throwanem ◴[] No.43655032{7}[source]
It seems filthy rich of you to claim good faith at this time, but I have recently begun to gather that in some quarters lately, it is considered offensively unreasonable to expect working knowledge of any material as a prerequisite for participating competently in discussion thereof. So though your claim is facially false, I ironically can't fairly consider that it is other than honestly made. Your precepts are in any case your problem. Good luck with it, you Hacker News expert.
replies(1): >>43657705 #
57. throwanem ◴[] No.43655066{7}[source]
"Teenage boy wish fulfillment" is well beneath any reasonable standard of criticism, and I've addressed that with about as much respect as it deserves.

There is much worthy of critique in Heinlein, especially in his depiction of women. I've spent about a quarter century off and on both reading and formulating such critiques, much more recently than I've spent meaningful time with his fiction. I've also read what he had to say for himself before he died, and what Mrs. Heinlein - she kept the name - said about him after. If we want to talk about, for example, how the themes of maternal incest and specifically feminine embodiment of weakly superhuman AGI in his later work reflect a degree of senescence and the wish for a supercompetent maternal figure to whom to surrender the burden of responsibility, or if we want to talk about how Heinlein seems to spend an enormous amount of time just generally exploring stuff from female characters' perspectives that an honest modern inquiry would recognize as fumbling badly but earnestly in the direction of something like a contemporary understanding of gender, then we could talk about that.

No one wants to, though. You can't use anything like that as a stick to beat people with, so it never gets a look in, and those as here who care nothing for anything of the subject save if it looks serviceable as a weapon claim to be the only ones in the talk who are honest. They don't know the man's work well enough to talk about the years he spent selling stories that absolutely revolve around character development, which exist solely to exemplify it! Of course these are universally dismissed as his 'juveniles' - a few letters shy of 'juvenilia' - because science fiction superfans are all children and so are science fiction superhaters, neither of whom knows how to respond in any way better than a tantrum on the rare occasion of being told bluntly it's well past time they grew up.

But they're the honest ones. Why not? So it goes. It's a conversation I know better than to try to have, especially on Hacker News; if I don't care for how it's proceeding, I've no one but myself to blame.

58. tshaddox ◴[] No.43656087{7}[source]
I'm more interested in how we regularly use the term, rather than how we might attempt to come up with a rigorous definition (particularly when that rigorous definition conflicts awkwardly with regular usage).

My point is simply that we absolutely do not refer to a home dishwasher as a robot. Nor an old thermostat with a bimetallic strip and a mercury switch. Nor even a normal home PC.

59. throwanem ◴[] No.43657088{5}[source]
Excuse me for giving the impression of a pedant, but do you mean Clarke, as in Arthur C., there? I've been trying since I first read your comment to puzzle out to whom by that name you could possibly be referring in this context, and it's only just dawned on me to wonder if you simply have not bothered to learn the spelling of the name you intended to mention.
replies(1): >>43657559 #
60. n4r9 ◴[] No.43657559{6}[source]
Yes, that Clarke. Sorry for putting you to the extra effort. I spelled it correctly in the initial post you replied to. Guess I assumed that people would spot the back-reference.
replies(1): >>43657766 #
61. n4r9 ◴[] No.43657705{8}[source]
I'd be happy to receive any pointers on how I'm wrong - perhaps I've misinterpreted what I've read, or there are characters in the rest of his work that defy my stance.
replies(1): >>43657794 #
62. throwanem ◴[] No.43657766{7}[source]
> Yes, that Clarke. Sorry for putting you to the extra effort. I spelled it correctly in the initial post you replied to. Guess I assumed that people would spot the back-reference.

In entire fairness, I was distracted by you having said he and his contemporaries must all have been autistic, as if either you yourself were remotely competent to embark upon any such determination, or as though it would in some way indict their work if they were.

I'm sure you would never in a million years dare utter "the R-slur" in public, though I would guess that in private the violation of taboo is thrilling. That's fine as far as it goes, but you really should not expect to get away with pretending you can just say "autistic" to mean the same thing and have no one notice, you blatantly obvious bigot.

63. throwanem ◴[] No.43657794{9}[source]
> I'd be happy to receive any pointers on how I'm wrong - perhaps I've misinterpreted what I've read, or there are characters in the rest of his work that defy my stance.

If you meant that honestly, you would already have found ample directions for further research, easily enough not to need asking. Everything you claim to want lies just a Google search away, on any of the various and I should hope fairly easily identifiable search terms I have mentioned. "It is not my job to educate you."

Or, rather, it would still not be my job even if to learn were what you really want here. You don't, of course. That's why you haven't bothered so much as trying a few searches that might turn up something you would have to pretend to have read. Much easier to try to make me look emotionally unstable - 'defy?' Really. - because you can't actually answer anything I've said and you know it. Good luck with that, too.

replies(1): >>43658041 #
64. n4r9 ◴[] No.43658041{10}[source]
I've read the books, mulled them over, discussed them with others, and done some reading of what other critics have to say online. I've given my opinion and some of the reasoning behind it. If you want more of my reasoning I'm happy to give it. You have given nothing in response. It feels a lot like you've jumped to conclusions because my opinion is very different to yours. So you've immediately decided not to engage but are nevertheless hellbent on making me out to be uninformed or stupid.

We've clearly got off on the wrong foot here. I don't want to make out like I think Heinlein is crap. He had a lot of fantastic, creative ideas about science, technology, culture, sexuality and governance. He was extremely daring and sometimes quite subtle in the way he explored those ideas. But - in the novels I've read - his characters lack a certain depth and relatability. They express very little of the self-doubt, empathy, growth, and deep-seated motivations that are core to the human condition. So it goes also with Asimov, Clarke, Bradbury, and others. And it's fine that those weren't their strong suits. They had other strengths. And there were other writers like Bester, Dick, Le Guin, Zelazny, Herbert etc... who could fill the gaps.

replies(1): >>43658422 #
65. throwanem ◴[] No.43658422{11}[source]
Herbert for better gender and emotional politics than Heinlein. Herbert! And to think I imagined there was nothing left you could say to surprise me.

Don't expect me to stop discussing what your behavior displays of your character, just because you've finally shown the modicum of rhetorical sense or tactical cunning required to minimally amend that behavior. Again, if you actually meant even a fraction of what you say, you would now be reading instead of still speaking. If it bothers you that you continue to indict yourself by your actions this way, consider acting differently.

Should you at any future point opt to develop a thesis in this area which is capable of supporting knowledgeable discussion, I confide it will find an audience in accord with its quality. In the meantime, please stop inviting me to participate in the project of recovering your totally unforced embarrassment.

Believe it or not by the look of things, I already have enough else to do today. Wiping your nose as you struggle and fail to learn from your vastly overprivileged young life's first encounter with entirely merited and obviously unmitigated contempt doesn't really make the list, at least not past the point at which it ceases to amuse, which I admit is now fast approaching.

replies(2): >>43658805 #>>43659127 #
66. fofff ◴[] No.43658805{12}[source]
Fuck off, you condescending prick.
replies(1): >>43658836 #
67. throwanem ◴[] No.43658836{13}[source]
> Fuck off, you condescending prick.

Ah, here we go. I understand why you're using a fresh throwaway for this sort of thing, of course. Can't risk being seen for no better than you have to be, eh? But this at least - and, I strongly suspect, at last - is honest.

You can't abuse me in any way you're wise or sensible enough to imagine finding, so now you'll go mistreat someone inside the span of your arm's reach, blaming me all the while for your own infantile urge to do so. I wish you every bit as much joy of it as you deserve. And I hope they know your current Hacker News handle.

replies(1): >>43658975 #
68. fofff ◴[] No.43658975{14}[source]
Sitting here rolling my eyes at your response. Seriously, fuck off.
replies(1): >>43659011 #
69. throwanem ◴[] No.43659011{15}[source]
> Sitting here rolling my eyes at your response. Seriously, fuck off.
replies(1): >>43659136 #
70. n4r9 ◴[] No.43659127{12}[source]
In Dune there are female characters with their own desires and designs on the world, who go out and take what they want. There is profound loss, and personal transformation. There is coming to terms with intensely sad or painful circumstances. There is overcoming doubt, building resilience, and taking responsibility and control of one's destiny. These things were not really explored in what I've read of Heinlein.
replies(1): >>43659187 #
71. fofff ◴[] No.43659136{16}[source]
Bye, asshole.
replies(1): >>43659146 #
72. throwanem ◴[] No.43659146{17}[source]
> Bye, asshole.

If you didn't want to prove me right when I said six hours ago [1] that you were throwing a tantrum, why continue throwing the tantrum?

[1] https://news.ycombinator.com/item?id=43655066

replies(1): >>43664202 #
73. throwanem ◴[] No.43659187{13}[source]
> These things were not really explored in what I've read of Heinlein.

I don't know how much further you expect me to need to boil down "read more" and still be able to take you seriously. How do you expect that, when you haven't even bothered trying to justify how you chose those four novels to represent forty years?

I see that 'seriously' very much describes how you like to regard yourself. You've insisted most thoroughly others must regard you likewise, regardless of what you show yourself anywhere near capable of actually rewarding or indeed even appreciating. Do you have a favorable impression of your efforts thus far? Have they had the results that you hoped?

We would now be having a different conversation if you had said anything to suggest to me it would be worth my trouble to continue in the attempt. I'd have enjoyed that conversation, I think; as most days here, I had hopes of learning something new. You've felt the need to spend the day doing this instead. If you don't like how that's working out, whom fairly may you blame?

replies(1): >>43659211 #
74. n4r9 ◴[] No.43659211{14}[source]
At this point I'm mostly just intrigued to see whether you'll keep replying and whether you'll make any substantial points.
replies(2): >>43659234 #>>43661555 #
75. throwanem ◴[] No.43659234{15}[source]
> At this point I'm mostly just intrigued to see whether you'll keep replying and whether you'll make any substantial points.

Every substantive point I've actually made all day you have totally ignored, and this is what it's worth your time still to do. But sure. You can stop paying me rent to live in your head any time you like. Keep telling yourself that. I don't doubt you need to, to get through a day.

Also, 122d40d7236cd3ade496d0101d8029ec.

replies(1): >>43662005 #
76. throwanem ◴[] No.43661555{15}[source]
And then it turns out that having taken bits and bites out of my entire mortal day, to pursue this pointless argument with you, was just what I needed even if nothing at all what I wanted. It put me in a state of mind where I could find some kind words to say to my family that I think some folks there may have been quite a bit, if in a small way, needing to hear for a while.

That's not even slightly to your credit, of course. But I can't fairly say you weren't involved, and I have to admit I genuinely appreciate this result, however inadvertent and I'm sure unimaginable on your part it may be. So, though I say it through gritted teeth, thank you for your time. If for absolutely nothing else whatsoever, for that at least I must express my genuine gratitude.

Intolerable though you've been throughout, and despite what I assume to have been your every intention, something good may yet come of your ill efforts. You deserve to know that. May it heap as many coals of fire on your head as your heart should prove small enough to deserve.

77. n4r9 ◴[] No.43662005{16}[source]
Substantive as in about Heinlein's work, rather than attacks on me or my motivations.
replies(1): >>43662114 #
78. throwanem ◴[] No.43662114{17}[source]
> Substantive as in about Heinlein's work, rather than attacks on me or my motivations.

We could have done that fifteen hours ago [1], or eleven hours ago [2], or nine hours ago [3] [4], or any time you wanted. You haven't. What's changed?

[1] https://news.ycombinator.com/item?id=43655066

[2] https://news.ycombinator.com/item?id=43657766

[3] https://news.ycombinator.com/item?id=43659136

[4] https://news.ycombinator.com/item?id=43659187

replies(1): >>43662541 #
79. n4r9 ◴[] No.43662541{18}[source]
I've given you lots of opportunity to offer a defense to the points I raised in my first reply to you. I've offered to go into more detail. I've contrasted Heinlein's work with contemporaneous works. Saying "you should go and read more" is not compelling, especially given the amount of effort you've expended to avoid saying anything of substance. I wonder if you feel insecure about whether such a defense is possible.
replies(1): >>43662556 #
80. throwanem ◴[] No.43662556{19}[source]
> I've given you lots of opportunity to offer a defense to the points I raised in my first reply to you...Saying "you should go and read more" is not compelling...I wonder if you feel insecure about whether such a defense is possible.

No, you don't. I've said nothing I need defend, and you've said nothing you can. It would be one thing if I had to say not to piss on my boots and tell me it's raining, but this doesn't even count as pissing. It's just you repeating yourself from yesterday and that's boring for both of us.

"You are a bigot" is a factual claim I have made [1], now quite a number of hours and comments ago. You haven't addressed it. You won't. You can't. You have no choice now but to let it stand. You have shown it more true than even you yourself can pretend to ignore. You need someone to tell you it isn't really true, in a way you can believe. No one is here to tell you that.

There are other embarrassments, of course; you've shown yourself not a tenth the scholar you fancy yourself to be, nor able to handle yourself even slightly in the face of someone who needs nothing from you and cares neither for nor against you. You would care more that I called you an abuser, but you don't see the people you try to treat that way as human. So what you're really stuck on is that I called you a bigot and you can't answer back. Hence still finding it worth your while to try to talk me into letting you off that hook.

Sorry, not sorry. Go back to bed. Read a book while you're there, why don't you? It might help you sleep.

edit: You also haven't explained what makes those four books you named as exemplary as you called them. Can you describe the common thread? I ask because I actually have read them, in no case fewer than three times, and they really haven't all that much in common. Oh, by the same author, certainly. But you've only dropped names. You haven't tried to draw any comparisons or demonstrate anything by the rhetorical juxtaposition of those characters, though I grant you keep insisting it must count for something that you listed them. You haven't, so far as I can see, discussed or even mentioned a single event in the plot of any of those novels. For all the nothing you've had to say with any actual reference to them, even the few texts you named might as well not exist!

It is extremely risible at this time of you to try to claim you are the one here interested in talking about Heinlein. If there were a God, it would not be safe to tell a lie of that magnitude near a church. But no matter. To get back to the first question I asked here just above: Did anyone actually explain to you why those four should be the first and last of Heinlein worth talking about? Did you ever think to ask? Or was it that they were part of an assignment? - you turned in a paper and assumed the passing grade meant you must have learned something by the transaction, and that for you was where the matter and all semblance of curiosity ended.

I hope it isn't that last one. I already believed firmly that student loan relief was the correct action both ethically and economically; as I have said in other quarters lately, it is not possible for you to be enough of an asshole to change my politics. But if this is you recapitulating something you paid to be taught - if you're currently pursuing or God forfend have completed an American university education, and the best approximation of clear thought you can manage is this - then whoever sold you and your family that bill of goods ought damn well be horsewhipped, and that they merely see the loan annulled instead would be a considerable mercy.

[1] https://news.ycombinator.com/item?id=43657766

replies(1): >>43662722 #
81. n4r9 ◴[] No.43662722{20}[source]
I meant that you might offer a defense of Heinlein against my initial points: for example, that there's a strong element of wish fulfilment in his characters. This is neither an extreme nor an uncommon critique. You clearly disagree with it quite strongly. I just want to know what about it you personally find unconvincing.
replies(1): >>43662761 #
82. throwanem ◴[] No.43662761{21}[source]
You ask what I find unconvincing. I'm happy to further oblige you:

> His male characters typically fall into one of two cartoonish camps: either supremely confident, talented, intelligent and independent (e.g. Jubal, Bernardo, Mannie, Bonforte...) or vaguely pathetic and stupid (e.g. moon men). His female characters are submissive, clumsily sexualised objects who contribute very little to the plot. There are a few partial exceptions - e.g. Lorenzo in Double Star and female pilots in Starship Troopers - but the general atmosphere is one of teenage boy wish fulfilment.

"Cartoonish." "Pathetic." "Stupid." "Submissive." "Clumsily sexualized." "Teenage boy." 'Moon men' - you mean Loonies? And this all was you yesterday [1]. How far do you really expect to get with this farcical pantomime of sweet reason now? I ask again: What's changed?

This all began when I said you obviously hadn't read what you claimed to have [2], and it got so far up your nose you couldn't help going and proving me right. You've made a lot more bad decisions since then, but don't worry: I'll keep reminding you as long as you show you need me to that you can amend your behavior at any time.

[1] https://news.ycombinator.com/item?id=43651479

[2] https://news.ycombinator.com/item?id=43649293

replies(1): >>43664360 #
83. Balgair ◴[] No.43664202{18}[source]
To be clear, I'm not the newbie account with the expletives. I've no idea who that is.
replies(1): >>43666737 #
84. n4r9 ◴[] No.43664360{22}[source]
Will email you some links/screenshots later today to demonstrate that I've read them (and expand on my points). Would post them here but keen to keep accounts separate.
replies(1): >>43666118 #
85. mylittlebrain ◴[] No.43664528{5}[source]
Similarly, we already have AI, which is really MI (Machine Intelligence). Long before the current hype cycle, the defense industry and others were using the same tools being applied now. Of course, there are differences, such as scale and architecture.
86. throwanem ◴[] No.43666118{23}[source]
Okay. Before you do so and for no particular reason, I feel I should note two things.

First, assuming you are not in fact a public figure, I will not publicly reveal your identity or any information I believe could lead to its disclosure, and that is exactly as far into my confidence as you may expect to come. That caveat excepted, I hereby explicitly disclaim any presumption you may have of privacy in any communication you make with me via email or other nonpublic means.

I won't dox you. I understand it isn't as safe for everyone as for me to have their name in the world. And I'm not saying I intend to publish all, or indeed any, of what you send; if it deserves in my view to remain in confidence, I will keep it so. But if you think taking this conversation to email will give you a chance to play games where no one else can see, you had better think twice.

(Should you by any of several plausible means dig up my phone number and try giving me a call, I hereby explicitly advise that any such action on your part constitutes "prior consent" per Md. Code §10–402 [1], and I will exercise my option under that law without further notice.)

Second, there exists an organization with which I have a legal agreement, binding on all our various heirs and assigns, to the effect we are quits forever. I will refer to this company as "Name Redacted for Legal Reasons" or "Name Redacted" for short, and describe it as the brainchild of a fascinating and tight-knit group of siblings, any of the three (technically four) of whom I'd have liked the chance to know better than I did.

I will also note, not for the first time, that I signed that agreement in entire good faith which has endured from that day through this, and I earnestly believe the same of my counterparty both collectively, and in the individual and separate persons of those who represented Name Redacted to me throughout that process as well as through my prior period of employment.

Now, if I were an employee of Name Redacted for Legal Reasons, and I had started a day's worth of shit in public with a signatory of such an agreement as I describe - that is, if I had acted in a way which could be construed to compromise my employer's painstakingly arrived-upon mutual quitclaim - then the very last thing I would ever want to do would be to allow to come into existence documentary evidence of my possibly somewhat innocent but certainly very grave foolishness. Because if that did happen, I would understand I may confidently expect very soon to become 'the most fired-for-causedest person in the history of fuck.'

As I said, I signed in good faith. In that same good faith, what choice really would I have but to privately disclose in full detail? It would be irresponsible of me to assume this was the only problem such intemperate behavior might be creating for Name Redacted, any or all of which might be far more consequential than this.

I'm sure at this point I'm only talking to hear myself speak, though. In any case, I look forward to your email.

[1] https://mgaleg.maryland.gov/mgawebsite/Laws/StatuteText?arti...

replies(1): >>43668306 #
87. throwanem ◴[] No.43666737{19}[source]
Oh, I know; I don't blame you at all for feeling some need to clarify, but I was under no confusion. Sorry you got tangled up in all this. I hope it hasn't been totally lacking in literary-critical interest, at least.
88. throwaway7783 ◴[] No.43667663{7}[source]
What you are saying does not seem to contradict what I'm saying. Any distortion would be another hitherto unknown pattern.
89. n4r9 ◴[] No.43668306{24}[source]
Sent. I don't have the patience to parse whatever you're trying to say here, but it's not you I'm worried about either way.
replies(1): >>43669153 #
90. throwanem ◴[] No.43669153{25}[source]
For posterity, I repeat here my entire response to your email, omitting only the signature where no new information is present:

> Is that all? You mistake your opinions for facts, and when motivated you freely ignore the difference between an author's voice and that of a viewpoint character. This you share with millions, and feel the need to be secretive about? I thought you had something serious to talk about. Go away.

replies(1): >>43670374 #
91. n4r9 ◴[] No.43670374{26}[source]
You now acknowledge at least that I have read the books?
replies(1): >>43671696 #
92. throwanem ◴[] No.43671696{27}[source]
No, I acknowledge that there's a good few paragraphs more of the superficial, doctrinaire nonsense you have been parroting than was immediately obvious to me here. Enough to be worth pasting through GPTZero, more than enough to say anything novel or interesting, and what a shame you never got there.

One example to shut you up: about the first thing every serious critic of The Moon is a Harsh Mistress addresses is that it's intentionally and explicitly written with about two-thirds of an eye toward the American "revolution" - hence the correspondences between Professor de la Paz and Benjamin Franklin (with a generous dash of Jefferson) on the one hand, and Mannie as an obvious George Washington expy on another. These are intentional similarities! Heinlein mentions it explicitly in Expanded Universe (don't hold me to that, it may have been in Grumbles from the Grave) and it's treated at length in one or another of the crit histories I've read, or maybe it was the Patterson biography, I'm not reading back hundreds of pages of diary notes for your lazy ass. It may have been the novel's own preface! He was intentionally loose with the correspondences, both in character and in plot, for narrative and didactic reasons, and that has proven a fruitful vein for both critical analysis and outright criticism over the years, and you can't even talk about any of it. You didn't notice any of this. Because you never learned the difference between looking at books and reading them. I'm sure you looked at every page, though!

These essays of yours are, generously, on the level of a college freshman who parties too much, studies too little, and treats English as a dump course. I did more thoughtful work as a high-school senior. For this you feel the need to be secretive? What a joke. Get lost, flyweight.

replies(1): >>43671877 #
93. n4r9 ◴[] No.43671877{28}[source]
Just to note: it would be really helpful if you could ease up on the ad hominem. It's not going to stop me and doesn't add to the weight of your arguments. It just drags the discussion down and makes it harder to figure out what your arguments are.

> These are intentional similarities!

I said that there are "clear comparisons" to the American revolution. I didn't suggest that the comparison was accidental. If anything, I assumed it was supposed to be read that way.

> One example to shut you up

Well, you've failed there. Perhaps we should focus on the cause of your initial outrage: Heinlein's (lack of) character depth?

> For this you feel the need to be secretive?

It's privacy rather than secrecy. I don't want it to be too easy to link this account to my Goodreads.

replies(1): >>43672061 #
94. ◴[] No.43672061{29}[source]
95. slibhb ◴[] No.43674231[source]
I did not in any way "trivialise AI". LLMs are amazing and a massive accomplishment. I just wanted to contrast them with Asimov's conception of AI.

> I'm not entirely sure why invoking statistics feels like a rebuttal to me. Putting aside the fact that LLMs are not purely statistics, even if they were what proof is there that you cannot make a statistical intelligent machine. It would not at all surprise me to learn that someone has made a purely statistical Turing complete model. To then argue that it couldn't think you are saying computers can never think, and by that and the fact that we think you are invoking a soul, God, or Penrose.

I don't follow this. I don't believe that LLMs are capable of thinking. I don't believe that computers, as they exist now, are capable of thinking (regardless of the program they run). I do believe that it is possible to build machines that can think -- we just don't know how.

To me, the strange move you're making is assuming that we will "accidentally" create thinking machines while doing AI research. On the contrary, I think we'll build thinking, conscious machines after understanding our own consciousness, or at least the consciousness of other animals, and not before.