The upshot of this is that LLMs are quite good at the stuff that he thinks only humans will be able to do. What they aren't so good at (yet) is really rigorous reasoning, exactly the opposite of what 20th century people assumed.
Alfred Bester's "The Stars My Destination" stands out as a shining counterpoint in this era. You don't get much character development like that in other works until the sixties imo.
LLMs are just the latest form of "AI" that, for a change, doesn't quite fit Asimov's mold. Perhaps it's because they're being designed to replace humans in creative tasks rather than liberate humans to pursue them.
I'd say no, human brains are "trained" on billions of years of sensory data. A very small amount of that is human-generated.
Then when word processors came around, it was expected that faculty members would type it up themselves.
I don't know if there were fewer secretaries as a result, but professors' lives got much worse.
He misses the old days.
My point in bringing up that metaphor is to focus the analogy: when people say "we're just statistical models trained on sensory data", we tend to focus way too much on the "sensory data" part, which has led, for example, to AI manufacturers investing billions of dollars in slurping up as much human intellectual output as possible to train "smarter" models.
The focus on the sensory input inherently devalues our quality of being; it implies that who we are is predominantly explicable by the world around us.
However: We should be focusing on the "statistical model" part. Even if it is accurate to holistically describe the human brain as a statistical model trained on sensory data (which I have doubts about, but those are fine to leave to the side), it's very clear that the fundamental statistical model itself is simply so far superior in human brains that comparing it to an LLM is like comparing us to a dog.
It should also be a focal point for AI manufacturers and researchers. If you are on the hunt for something along the spectrum of human-level intelligence, and during this hunt you are providing it ten thousand lifetimes of sensory data to produce something that, maybe, if you ask it right, behaves similarly to a human who has trained in the domain for only years: You're barking up the wrong tree. What you're producing isn't even on the same spectrum. That doesn't mean it isn't useful, but it's not human-like intelligence.
LLMs have access to what we generate, but not the source. So they embed how we may use words, but not why we use one word and not another.
I don't understand this point - we can obviously collect sensory data and use that for training. Many AI/LLM/robotics projects do this today...
> So they embed how we may use words, but not why we use one word and not another.
Humans learn language by observing other humans use language, not by being taught explicit rules about when to use which word and why.
Here's my broad concern: On the one hand, we have an AI thought leader (Sam Altman) who defines superintelligence as surpassing human intelligence at all measurable tasks. I don't believe it is controversial to say that the goal of LLM intelligence is something along these lines: it exists on the spectrum of human intelligence, it's trained on human intelligence, and we want it to surpass human intelligence, on that spectrum.
On the other hand: we don't know how the statistical model of human intelligence works, at any level that would enable reproduction or comparison, and there's really good reason to believe that the human statistical model is vastly superior to the LLM model. The argument for this lies in my previous comment: the vast majority of advances in LLM intelligence comes from increasing the volume of training data. Some likely comes from statistical-modeling breakthroughs since the transformer, but by and large it's from training data. Humans are different: comparatively speaking, the most intelligent humans are not more intelligent because they've been alive longer and thus had access to more sensory data. Some minor level of intelligence comes from the quality of your sensory data (studying, reading, education). But the vast majority of the intelligence difference between humans is inexplicable; Einstein was just Born Smarter; God granted him a unique and better statistical model.
This points to the undeniable reality that, at the very least, the statistical models of the human brain and of an LLM are very different, which should cause you to raise eyebrows at Sam Altman's claim that superintelligence will evolve along the spectrum of human intelligence. It might, but it's like arguing that the app you're building is going to be the highest-quality and fastest macOS app ever built, while you're building it with WPF and compiling it for x86 to run on WINE and Rosetta. GPT isn't human intelligence; at best, it might be emulating, extremely poorly and inefficiently, some parts of human intelligence. But they didn't get the statistical model right, and without that it's like forcing a square peg into a round hole.
Sensory data are not the main issue; the main issue is how we interpret them.
In Jacob Bronowski's The Origins of Knowledge and Imagination, IIRC, there's an argument that our eyes are very coarse sensors. Instead, they do basic analysis from which the brain, combined with data from other organs, can infer the real world around us. Like Plato's cave, but with many more dimensions.
But we humans all come with the same mechanisms, which roughly interpret things the same way. So there's some commonality there in the final interpretation.
> Humans learn language by observing other humans use language, not by being taught explicit rules about when to use which word and why.
Words are symbols that refer to things and the relations between them. In the same book, there's a rough explanation of language which describes the three elements that define it: symbols or terms, the grammar (or the rules for using the symbols), and a dictionary which maps the symbols to things and the rules to interactions in another domain that we already accept as truth.
Maybe we are not taught the rules explicitly, but there's a lot of training done with corrections when we say a sentence incorrectly. We also learn the symbols and the dictionary as we grow and explore.
So LLMs learn the symbols and the rules, but not the whole dictionary. They can use the rules to create correct sentences, and relate some symbols to others, but ultimately there's no dictionary behind it.
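A minimal sketch of that "no dictionary" point, in plain Python over a made-up corpus (a toy illustration, nothing like a real LLM): co-occurrence statistics relate symbols to other symbols, but nothing in the model maps a symbol to a referent outside the text.

    from collections import Counter

    # Toy corpus, invented for the example.
    sentences = [
        "the cat chased the mouse",
        "the dog chased the cat",
        "the dog ate the bone",
        "the cat ate the fish",
    ]

    def context_counts(target):
        # Count words appearing in the same sentence as `target`.
        ctx = Counter()
        for s in sentences:
            words = s.split()
            if target in words:
                ctx.update(w for w in words if w != target)
        return ctx

    # "cat" and "dog" end up with overlapping contexts (chased, ate...),
    # so the symbols get related to each other - yet nothing anywhere
    # maps "cat" to an actual cat. Rules and relations, no dictionary.
    print(context_counts("cat"))
    print(context_counts("dog"))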
Because we can't compare human and LLM architectural substrates, LLMs will never surpass human-level performance on _all_ tasks that require applying intelligence?
If my summary is correct, then is there any hypothetical replacement for LLMs (for example, LLM+robotics, LLMs with CoT, multi-modal LLMs, multi-modal generative AI systems, etc.) which would cause you to consider this argument invalid (i.e., the replacement could, at some point, replace humans for all tasks)?
It's been quite a while since anyone in the developed world has had to wash clothes by slapping them against a rock while standing in a river.
Obviously this is really wishing for domestic robots, not AI, and robots are at least a couple of levels of complexity beyond today's text/image/video GenAI.
There were already huge issues with corporatisation of creativity as "content" long before AI arrived. In fact one of our biggest problems is the complete collapse of the public's ability to imagine anything at all outside of corporate content channels.
AI can reinforce that. But - ironically - it can also be very good at subverting it.
Maybe some day I will, but I find it hard to believe, given that an LLM just copies its training material. All the creativity comes from the human input, and even though people can now cheaply copy the style of actual artists, that doesn't mean they can make it work.
Art is interesting because it is created by humans, not despite it. For example, poetry is interesting because it makes you think about what the author meant. With LLMs there is no author, which makes those generated poems garbage.
I'm not saying that it can't work at all - it can, but not in the way people think. I subscribe to George Orwell's dystopian view from 1984; he already imagined the "versificator".
There are two types of grammar for natural language - descriptive (how the language actually works and is used) and prescriptive (a set of rules about how a language should be used). There is no known complete and consistent rule-based grammar for any natural human language - all of these grammars are based on some person or people, in a particular period of time, selecting a subset of the real descriptive grammar of the language and saying 'this is the better way'. Prescriptive, rule-based grammar is not at all how humans learn their first language, nor is prescriptive grammar generally complete or consistent. Babies can easily learn any language, even ones that do not have any prescriptive grammar rules, just by observing - there have been many studies that confirm this.
> there's a lot of training done with corrections when we say a sentence incorrectly.
There's a lot of the same training for LLMs.
> So LLMs learn the symbols and the rules, but not the whole dictionary. They can use the rules to create correct sentences, and relate some symbols to others, but ultimately there's no dictionary behind it.
LLMs definitely learn 'the dictionary' (more accurately a set of relations/associations between words and other types of data) and much better than humans do, not that such a 'dictionary' is an actual determined part of the human brain.
Oh, come on. Who can't love the "classic" song, I Glued My Balls to My Butthole Again[0]?
I mean, that's AI "creativity," at its peak!
[0] https://www.youtube.com/watch?v=wPlOYPGMRws (Probably NSFW)
LLM luddites often call LLMs stochastic parrots or advanced text prediction engines. They're right, in my view, and I feel that LLM evangelists often don't understand why. Because LLMs have a vastly different statistical model, even when they showcase signs of human-like intelligence, what we're seeing cannot possibly be human-like intelligence, because human intelligence is inseparable from its statistical model.
But, it might still be intelligence. It might still be economically productive and useful and cool. It might also be scarier than most give it credit for being; we're building something that clearly has some kind of intelligence, crudely forcing a mask of human skin over it, oblivious to what's underneath.
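For anyone who hasn't seen one, here's what "advanced text prediction engine" means at its barest: a bigram model, sketched in plain Python over an invented corpus. (Real LLMs learn representations over long contexts; this only shows the statistical core of the idea.)

    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    # Count how often each word follows each other word.
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def predict_next(word):
        # Sample the next word in proportion to observed frequency.
        followers = counts[word]
        if not followers:  # dead end: no observed continuation
            return None
        words = list(followers)
        weights = [followers[w] for w in words]
        return random.choices(words, weights=weights)[0]

    word, out = "the", ["the"]
    for _ in range(8):
        word = predict_next(word)
        if word is None:
            break
        out.append(word)
    print(" ".join(out))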
A friend demoed Suno to me, a couple of days ago, and it did generate lyrics (but not NSFW ones).
And often they get so caught up supporting the latest fake AI craze that they don't get to research AGI.
I don't buy it. I think our eyes are approximately as fine as we perceive them to be.
When you look through a pair of binoculars at a boat and some trees on the other side of a lake, the only organ that's getting a magnified view is the eyes, so any information you derive comes from the eyes and your imagination; it can't have been secretly inferred from other senses.
No reason to think an LLM (a few generations down the line if not now) cannot do that
This really seems like an "akshually" argument to me...
Nobody is denying that there are dishwashers and washing machines, and that they are big time savers. But is it really a mystery what people are referring to when they say "I want AI to wash my dishes and do my laundry"? That is, I still spend hours doing the dishes and laundry every week, and I have a dishwasher and washing machine. But I still want something to fold my laundry, something that lets me just dump my dishes in the sink and have them come out clean, ideally put away in the cabinets.
> Obviously this is really wishing for domestic robots, not AI
I don't mean this to be an "every Internet argument is over semantics" example, but literally every company and team I know that's working on autonomous robots refers heavily to them as AI. And there is a fundamental difference between "old school" robotics, i.e robots following procedural instructions, and robots that use AI-based models, e.g https://deepmind.google/discover/blog/gemini-robotics-brings... . I think it's doubly weird that you say that today's washing machines "has at least some very basic AI in it" (I think "very basic" is doing a lot of heavy lifting there...), but don't think AI refers to autonomous robots.
I see this referenced over and over again to trivialise AI, as if it were a fait accompli.
I'm not entirely sure why invoking statistics is supposed to be a rebuttal. Putting aside the fact that LLMs are not purely statistics, even if they were, what proof is there that you cannot make a statistical machine intelligent? It would not at all surprise me to learn that someone has made a purely statistical Turing-complete model. To then argue that it couldn't think is to say computers can never think, and since we ourselves think, you are invoking a soul, God, or Penrose.
Somehow I doubt that the reason gen AI is way ahead of laundry-folding robots is because it's some kind of big secret about how to fold a shirt, or there aren't enough examples of shirt folding.
Manipulating a physical object like a shirt (especially a shirt or other piece of cloth, as opposed to a rigid object) is orders of magnitude more complex than completing a text string.
It was assumed that if you asked the same AI the same question, you'd get the same answer every time. But that's not how LLMs work (I know you can seed them the same every time and get the same output, but we don't do that, so how we experience them is different).
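A minimal sketch of the seeding point, in plain Python with stand-in probabilities (not any real model's API): the final step of generation is a weighted random draw, so pinning the seed pins the output.

    import math
    import random

    # Stand-in next-token scores; any model's numbers would do here.
    logits = {"yes": 2.0, "no": 1.0, "maybe": 0.5}
    z = sum(math.exp(v) for v in logits.values())
    probs = {k: math.exp(v) / z for k, v in logits.items()}

    tokens = list(probs)
    weights = [probs[t] for t in tokens]

    rng = random.Random(42)  # fixed seed: same draw on every run
    print(rng.choices(tokens, weights=weights)[0])

    # An unseeded draw - random.choices(tokens, weights=weights) - can
    # differ run to run, which is how most people actually experience
    # chat LLMs. Greedy decoding, max(probs, key=probs.get), is the
    # other way to make answers repeatable.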
And we can distort quite far (see cartoons in drawing, dubstep in music,...)
Well sure, there’s also a computer recording, storing, and manipulating the songs I record and the books I write. But that’s not what we mean by “AI that composes music and writes books.”
This isn’t a quibble about the term “AI.” It’s simply clear from context that we’re talking about full automation of these tasks initiated by nothing more than a short prompt from the human.
(This should already be clear given that robots do exist, and we do call them robots, as you yourself noted, but never mind that for now.)
It’s not even about the level of mechanical or computational complexity. Automobiles have a lot of mechanical and computational complexity, but also aren’t called robots (ignoring of course self-driving cars).
My point is just that the availability of training data is vastly different between these cases. If we want better AI we're probably going to have to generate some huge curated datasets for mundane things that we've never considered worth capturing before.
It's an unfortunate quirk of what we decide to share with each other that has positioned AI to do art and not laundry.
Compare that to the parodies made by someone like "Weird Al" Yankovic. And I get that these tools will get better, but the best parodies work due to the human performer. They are funny because they aren't fake.
This goes for other art forms. People mention photography a lot, comparing it with painting. Photography works because it captures a real moment in time and space; it works because it's not fake. Painting also works because it shows what human imagination and skill with brushes can do. When it's fake (e.g., not made by a human painting with brushes on canvas, but by a Photoshop filter), it's meaningless.
Yeah, I'd say characterisation is a weakness of his. I've read Stranger in a Strange Land, The Moon is a Harsh Mistress, Starship Troopers, and Double Star. Heinlein does explore characters more than, say, Clarke, but he doesn't go much for internal change or emotional growth. His male characters typically fall into one of two cartoonish camps: either supremely confident, talented, intelligent and independent (e.g. Jubal, Bernardo, Mannie, Bonforte...) or vaguely pathetic and stupid (e.g. moon men). His female characters are submissive, clumsily sexualised objects who contribute very little to the plot. There are a few partial exceptions - e.g. Lorenzo in Double Star and female pilots in Starship Troopers - but the general atmosphere is one of teenage boy wish fulfilment.
lol no, what it has is a finite state machine; you don't want undefined or new behaviour in user appliances
I don't mean to sound insensitive, but, how? Literal hours?
- basic features (color, brightness and contrast, edges and shapes, motion and direction)
- depth and spatial relationships
- recognition
- location and movement
- focus and attention
- prediction and filling in gaps
“Seeing” the real world requires much more than simply seeing with one eye.
Generally, I also agree that Heinlein's characters are one dimensional and could benefit from greater character growth, though that was a bit of a hallmark of Golden Age sci-fi.
There is much worthy of critique in Heinlein, especially in his depiction of women. I've spent about a quarter century off and on both reading and formulating such critiques, much more recently than I've spent meaningful time with his fiction. I've also read what he had to say for himself before he died, and what Mrs. Heinlein - she kept the name - said about him after. If we want to talk about, for example, how the themes of maternal incest and specifically feminine embodiment of weakly superhuman AGI in his later work reflect a degree of senescence and the wish for a supercompetent maternal figure to whom to surrender the burden of responsibility, or if we want to talk about how Heinlein seems to spend an enormous amount of time just generally exploring stuff from female characters' perspectives that an honest modern inquiry would recognize as fumbling badly but earnestly in the direction of something like a contemporary understanding of gender, then we could talk about that.
No one wants to, though. You can't use anything like that as a stick to beat people with, so it never gets a look in, and those as here who care nothing for anything of the subject save if it looks serviceable as a weapon claim to be the only ones in the talk who are honest. They don't know the man's work well enough to talk about the years he spent selling stories that absolutely revolve around character development, which exist solely to exemplify it! Of course these are universally dismissed as his 'juveniles' - a few letters shy of 'juvenilia' - because science fiction superfans are all children and so are science fiction superhaters, neither of whom knows how to respond in any way better than a tantrum on the rare occasion of being told bluntly it's well past time they grew up.
But they're the honest ones. Why not? So it goes. It's a conversation I know better than to try to have, especially on Hacker News; if I don't care for how it's proceeding, I've no one but myself to blame.
My point is simply that we absolutely do not refer to a home dishwasher as a robot. Nor an old thermostat with a bimetallic strip and a mercury switch. Nor even a normal home PC.
In entire fairness, I was distracted by you having said he and his contemporaries must all have been autistic, as if either you yourself were remotely competent to embark upon any such determination, or as though it would in some way indict their work if they were.
I'm sure you would never in a million years dare utter "the R-slur" in public, though I would guess that in private the violation of taboo is thrilling. That's fine as far as it goes, but you really should not expect to get away with pretending you can just say "autistic" to mean the same thing and have no one notice, you blatantly obvious bigot.
If you meant that honestly, you would already have found ample directions for further research, easily enough not to need asking. Everything you claim to want lies just a Google search away, on any of the various and I should hope fairly easily identifiable search terms I have mentioned. "It is not my job to educate you."
Or, rather, it would still not be my job even if to learn were what you really want here. You don't, of course. That's why you haven't bothered so much as trying a few searches that might turn up something you would have to pretend to have read. Much easier to try to make me look emotionally unstable - 'defy?' Really. - because you can't actually answer anything I've said and you know it. Good luck with that, too.
We've clearly got off on the wrong foot here. I don't want to make out like I think Heinlein is crap. He had a lot of fantastic, creative ideas about science, technology, culture, sexuality and governance. He was extremely daring and sometimes quite subtle in the way he explored those ideas. But - in the novels I've read - his characters lack a certain depth and relatability. They express very little of the self-doubt, empathy, growth, and deep-seated motivations that are core to the human condition. So it goes also with Asimov, Clarke, Bradbury, and others. And it's fine that those weren't their strong suits. They had other strengths. And there were other writers like Bester, Dick, Le Guin, Zelazny, Herbert etc... who could fill the gaps.
Don't expect me to stop discussing what your behavior displays of your character, just because you've finally shown the modicum of rhetorical sense or tactical cunning required to minimally amend that behavior. Again, if you actually meant even a fraction of what you say, you would now be reading instead of still speaking. If it bothers you that you continue to indict yourself by your actions this way, consider acting differently.
Should you at any future point opt to develop a thesis in this area which is capable of supporting knowledgeable discussion, I confide it will find an audience in accord with its quality. In the meantime, please stop inviting me to participate in the project of recovering your totally unforced embarrassment.
Believe it or not by the look of things, I already have enough else to do today. Wiping your nose as you struggle and fail to learn from your vastly overprivileged young life's first encounter with entirely merited and obviously unmitigated contempt doesn't really make the list, at least not past the point at which it ceases to amuse, which I admit is now fast approaching.
Ah, here we go. I understand why you're using a fresh throwaway for this sort of thing, of course. Can't risk being seen for no better than you have to be, eh? But this at least - and, I strongly suspect, at last - is honest.
You can't abuse me in any way you're wise or sensible enough to imagine finding, so now you'll go mistreat someone inside the span of your arm's reach, blaming me all the while for your own infantile urge to do so. I wish you every bit as much joy of it as you deserve. And I hope they know your current Hacker News handle.
If you didn't want to prove me right when I said six hours ago [1] that you were throwing a tantrum, why continue throwing the tantrum?
I don't know how much further you expect me to need to boil down "read more" and still be able to take you seriously. How do you expect that, when you haven't even bothered trying to justify how you chose those four novels to represent forty years?
I see that 'seriously' very much describes how you like to regard yourself. You've insisted most thoroughly others must regard you likewise, regardless of what you show yourself anywhere near capable of actually rewarding or indeed even appreciating. Do you have a favorable impression of your efforts thus far? Have they had the results that you hoped?
We would now be having a different conversation if you had said anything to suggest to me it would be worth my trouble to continue in the attempt. I'd have enjoyed that conversation, I think; as most days here, I had hopes of learning something new. You've felt the need to spend the day doing this instead. If you don't like how that's working out, whom fairly may you blame?
Every substantive point I've actually made all day you have totally ignored, and this is what it's worth your time still to do. But sure. You can stop paying me rent to live in your head any time you like. Keep telling yourself that. I don't doubt you need to, to get through a day.
Also, 122d40d7236cd3ade496d0101d8029ec.
That's not even slightly to your credit, of course. But I can't fairly say you weren't involved, and I have to admit I genuinely appreciate this result, however inadvertent and I'm sure unimaginable on your part it may be. So, though I say it through gritted teeth, thank you for your time. If for absolutely nothing else whatsoever, for that at least I must express my genuine gratitude.
Intolerable though you've been throughout, and despite what I assume to have been your every intention, something good may yet come of your ill efforts. You deserve to know that. May it heap as many coals of fire on your head as your heart should prove small enough to deserve.
We could have done that fifteen hours ago [1], or eleven hours ago [2], or nine hours ago [3] [4], or any time you wanted. You haven't. What's changed?
[1] https://news.ycombinator.com/item?id=43655066
[2] https://news.ycombinator.com/item?id=43657766
No, you don't. I've said nothing I need defend, and you've said nothing you can. It would be one thing if I had to say not to piss on my boots and tell me it's raining, but this doesn't even count as pissing. It's just you repeating yourself from yesterday and that's boring for both of us.
"You are a bigot" is a factual claim I have made [1], now quite a number of hours and comments ago. You haven't addressed it. You won't. You can't. You have no choice now but to let it stand. You have shown it more true than even you yourself can pretend to ignore. You need someone to tell you it isn't really true, in a way you can believe. No one is here to tell you that.
There are other embarrassments, of course; you've shown yourself not a tenth the scholar you fancy yourself to be, nor able to handle yourself even slightly in the face of someone who needs nothing from you and cares neither for nor against you. You would care more that I called you an abuser, but you don't see the people you try to treat that way as human. So what you're really stuck on is that I called you a bigot and you can't answer back. Hence still finding it worth your while to try to talk me into letting you off that hook.
Sorry, not sorry. Go back to bed. Read a book while you're there, why don't you? It might help you sleep.
edit: You also haven't explained what makes those four books you named as exemplary as you called them. Can you describe the common thread? I ask because I actually have read them, in no case fewer than three times, and they really haven't all that much in common. Oh, by the same author, certainly. But you've only dropped names. You haven't tried to draw any comparisons or demonstrate anything by the rhetorical juxtaposition of those characters, though I grant you keep insisting it must count for something that you listed them. You haven't, so far as I can see, discussed or even mentioned a single event in the plot of any of those novels. For all the nothing you've had to say with any actual reference to them, even the few texts you named might as well not exist!
It is extremely risible at this point for you to try to claim you are the one here interested in talking about Heinlein. If there were a God, it would not be safe to tell a lie of that magnitude near a church. But no matter. To get back to the first question I asked here just above: Did anyone actually explain to you why those four should be the first and last of Heinlein worth talking about? Did you ever think to ask? Or was it that they were part of an assignment? - you turned in a paper and assumed the passing grade meant you must have learned something by the transaction, and that for you was where the matter and all semblance of curiosity ended.
I hope it isn't that last one. I already believed firmly that student loan relief was the correct action both ethically and economically; as I have said in other quarters lately, it is not possible for you to be enough of an asshole to change my politics. But if this is you recapitulating something you paid to be taught - if you're currently pursuing or God forfend have completed an American university education, and the best approximation of clear thought you can manage is this - then whoever sold you and your family that bill of goods ought damn well be horsewhipped, and that they merely see the loan annulled instead would be a considerable mercy.
> His male characters typically fall into one of two cartoonish camps: either supremely confident, talented, intelligent and independent (e.g. Jubal, Bernardo, Mannie, Bonforte...) or vaguely pathetic and stupid (e.g. moon men). His female characters are submissive, clumsily sexualised objects who contribute very little to the plot. There are a few partial exceptions - e.g. Lorenzo in Double Star and female pilots in Starship Troopers - but the general atmosphere is one of teenage boy wish fulfilment.
"Cartoonish." "Pathetic." "Stupid." "Submissive." "Clumsily sexualized." "Teenage boy." 'Moon men' - you mean Loonies? And this all was you yesterday [1]. How far do you really expect to get with this farcical pantomime of sweet reason now? I ask again: What's changed?
This all began when I said you obviously hadn't read what you claimed to have [2], and it got so far up your nose you couldn't help going and proving me right. You've made a lot more bad decisions since then, but don't worry: I'll keep reminding you as long as you show you need me to that you can amend your behavior at any time.
First, assuming you are not in fact a public figure, I will not publicly reveal your identity or any information I believe could lead to its disclosure, and that is exactly as far into my confidence as you may expect to come. That caveat excepted, I hereby explicitly disclaim any presumption you may have of privacy in any communication you make with me via email or other nonpublic means.
I won't dox you. I understand it isn't as safe for everyone as for me to have their name in the world. And I'm not saying I intend to publish all, or indeed any, of what you send; if it deserves in my view to remain in confidence, I will keep it so. But if you think taking this conversation to email will give you a chance to play games where no one else can see, you had better think twice.
(Should you by any of several plausible means dig up my phone number and try giving me a call, I hereby explicitly advise that any such action on your part constitutes "prior consent" per Md. Code §10–402 [1], and I will exercise my option under that law without further notice.)
Second, there exists an organization with which I have a legal agreement, binding on all our various heirs and assigns, to the effect we are quits forever. I will refer to this company as "Name Redacted for Legal Reasons" or "Name Redacted" for short, and describe it as the brainchild of a fascinating and tight-knit group of siblings, any of the three (technically four) of whom I'd have liked the chance to know better than I did.
I will also note, not for the first time, that I signed that agreement in entire good faith which has endured from that day through this, and I earnestly believe the same of my counterparty both collectively, and in the individual and separate persons of those who represented Name Redacted to me throughout that process as well as through my prior period of employment.
Now, if I were an employee of Name Redacted for Legal Reasons, and I had started a day's worth of shit in public with a signatory of such an agreement as I describe - that is, if I had acted in a way which could be construed to compromise my employer's painstakingly arrived-upon mutual quitclaim - then the very last thing I would ever want to do would be to allow to come into existence documentary evidence of my possibly somewhat innocent but certainly very grave foolishness. Because if that did happen, I would understand I may confidently expect very soon to become 'the most fired-for-causedest person in the history of fuck.'
As I said, I signed in good faith. In that same good faith, what choice really would I have but to privately disclose in full detail? It would be irresponsible of me to assume this was the only problem such intemperate behavior might be creating for Name Redacted, any or all of which might be far more consequential than this.
I'm sure at this point I'm only talking to hear myself speak, though. In any case, I look forward to your email.
[1] https://mgaleg.maryland.gov/mgawebsite/Laws/StatuteText?arti...
> Is that all? You mistake your opinions for facts, and when motivated you freely ignore the difference between an author's voice and that of a viewpoint character. This you share with millions, and feel the need to be secretive about? I thought you had something serious to talk about. Go away.
One example to shut you up: about the first thing every serious critic of The Moon is a Harsh Mistress addresses is that it's intentionally and explicitly written with about two-thirds of an eye toward the American "revolution" - hence the correspondences between Professor de la Paz and Benjamin Franklin (with a generous dash of Jefferson) on the one hand, and Mannie as an obvious George Washington expy on another. These are intentional similarities! Heinlein mentions it explicitly in Expanded Universe (don't hold me to that, it may have been in Grumbles from the Grave) and it's treated at length in one or another of the crit histories I've read, or maybe it was the Patterson biography, I'm not reading back hundreds of pages of diary notes for your lazy ass. It may have been the novel's own preface! He was intentionally loose with the correspondences, both in character and in plot, for narrative and didactic reasons, and that has proven a fruitful vein for both critical analysis and outright criticism over the years, and you can't even talk about any of it. You didn't notice any of this. Because you never learned the difference between looking at books and reading them. I'm sure you looked at every page, though!
These essays of yours are, generously, on the level of a college freshman who parties too much, studies too little, and treats English as a dump course. I did more thoughtful work as a high-school senior. For this you feel the need to be secretive? What a joke. Get lost, flyweight.
> These are intentional similarities!
I said that there are "clear comparisons" to the American revolution. I didn't suggest that the comparison was accidental. If anything, I assumed it was supposed to be read that way.
> One example to shut you up
Well, you've failed there. Perhaps we should focus on the cause of your initial outrage: Heinlein's (lack of) character depth?
> For this you feel the need to be secretive?
It's privacy rather than secrecy. I don't want it to be too easy to link this account to my Goodreads.
> I'm not entirely sure why invoking statistics is supposed to be a rebuttal. Putting aside the fact that LLMs are not purely statistics, even if they were, what proof is there that you cannot make a statistical machine intelligent? It would not at all surprise me to learn that someone has made a purely statistical Turing-complete model. To then argue that it couldn't think is to say computers can never think, and since we ourselves think, you are invoking a soul, God, or Penrose.
I don't follow this. I don't believe that LLMs are capable of thinking. I don't believe that computers, as they exist now, are capable of thinking (regardless of the program they run). I do believe that it is possible to build machines that can think -- we just don't know how.
To me, the strange move you're making is assuming that we will "accidentally" create thinking machines while doing AI research. On the contrary, I think we'll build thinking, conscious machines after understanding our own consciousness, or at least the consciousness of other animals, and not before.