However, if we are counting on AI researchers to take the advice and slow down, I wouldn't hold my breath. The author indicated they stepped away from a high-paying finance job for moral reasons, which is admirable. But Wall Street continues on, and it does not lack for people willing to play the "make as much money as you can" game.
Anthropic is apparently starting to notice the possible danger their work poses to others. I'm not sure what they are referring to.
Are they being vague about the danger? If possible, please link to a communique from them. I've missed it somehow. Thanks.
Discussed here yesterday: https://news.ycombinator.com/item?id=43633383
https://www.anthropic.com/news/anthropic-education-report-ho...
The AI angle is not even hypothetical: there is no attempt to describe or reason about a concrete "x leading to y", just "see, the same principle probably extrapolates".
There is no argument there that is sounder than the "the high velocities of steam locomotives might kill you" warnings people made 200 years ago.
I doubt OP is counting on it; it's more that they're expressing what an optimal world would look like, so people can work toward it if they feel like it, or just to put the idea out there.
One recent HN comment [0] comparing corporations and institutions to AI really stuck with me - those are already superhuman intelligences.
To me it just seems like the same old knee-jerk Luddite response people have had to any powerful new technology that challenges the status quo since the dawn of time. The calculator did not erase math wizards, the television did not replace books, and so on. It just made us better, faster, more productive.
Sometimes there is an adjustment period (we still haven't figured out how to deal with short dopamine hits from certain types of entertainment and social media), but things will balance themselves out eventually.
Some people may go full-on Wall-E, but I for one will never stop tinkering, and many of my friends won't either.
The things I could have done if I had had an LLM as a kid... I think I've learned more in the past two years than ever before.
This obviously seems silly in hindsight. Warnings about radium watches or asbestos sound less silly, or even wise. But neither had any solid scientific studies showing clear hazard and risk. Just people being good Bayesian agents, trying to ride the middle of the exploration vs. exploitation curve.
Maybe it makes sense to spend some percentage of AI development resources on trying to understand how they work, and how they can fail.
It's not about AI development, it's about something mentioned earlier in the article: "make as much money as I can". The problems that we see with AI have little to do with AI "development", they have to do with AI marketing and promulgation. If the author had gone ahead and dammed the creek with a shovel, or blown off his hand, that would have been bad, but not that bad. Those kinds of mistakes are self-limiting because if you're doing something for the enjoyment or challenge of it, you won't do it at a scale that creates more enjoyment than you personally can experience. In the parable of the CEO and the fisherman, the fisherman stops at what he can tangibly appreciate.
If everyone working on and using AI were approaching it like damming a creek for fun, we would have no problems. The AI models we had might be powerful, but they would be funky and disjointed because people would be more interested in tinkering with them than making money from them. We see tons of posts on HN every day about remarkable things people do for the gusto. We'd see a bunch of posts about new AI models and people would talk about how cool they are and go on not using them in any load-bearing way.
As soon as people start trying to use anything, AI or not, to make as much money as possible, we have a problem.
The second missed takeaway is at the end. He says Anthropic is noticing the coquinas as if that means they're going to somehow self-regulate. But in most of the examples he gives, he wasn't stopped by his own realization, but by an external authority (like parents) telling him to stop. Most people are not as self-reflective as this author and won't care about "winning zero sum games against people who don't necessarily deserve to lose", let alone about coquinas. They need a parent to step in and take the shovel away.
As long as we keep treating "making as much money as you can" as some kind of exception to the principle of "you can't keep doing stuff until you break something", we'll have these problems, AI or not.
I don't think that's the argument the article was making. It was, to my understanding, a more nuanced question about whether we want to destroy or severely disturb systems at equilibrium by letting AI systems infiltrate our society.
> Sometimes there is an adjustment period (we still haven't figured out how to deal with short dopamine hits from certain types of entertainment and social media), but things will balance themselves out eventually.
One can zoom out a little bit. The issue didn't start with social media, nor with AI. "Star Wars: A New Hope" is, to my understanding, an incredibly good film. It came out in 1977 and it's a great story made to be appreciated by the masses. And in trying to achieve that goal, it really wasn't intellectually challenging. We have continued downhill from there for a while, and now we are at 16-second stingers on TikTok and YouTube. So, the way I see it, things are not balancing out. Worse, people in the USA elected D.J. Trump because somehow they couldn't understand how this real-world Emperor Palpatine was the bad guy.
So it is with AI. Except corporations are made of people who work at human speeds, have vague morals, and are tied to society in ways AI might not be. AI might also be able to operate faster and with less error. So extra care is required.
To be fair, many people did die on level crossings and by wandering on to the tracks.
We learned over time to put in place safety fences and tunnels.
Yet, as a regular user of SOTA AI models, it's far from clear to me that the risk exists on any foreseeable time horizon. Even today's best models are credulous and lack a certain insight and originality.
As Dwarkesh once asked:
> One question I had for you while we were talking about the intelligence stuff was, as a scientist yourself, what do you make of the fact that these things have basically the entire corpus of human knowledge memorized and they haven’t been able to make a single new connection that has led to a discovery? Whereas if even a moderately intelligent person had this much stuff memorized, they would notice — Oh, this thing causes this symptom. This other thing also causes this symptom. There’s a medical cure right here.
> Shouldn’t we be expecting that kind of stuff?
I noticed this myself just the other day. I asked GPT-4.5 "Deep Research" what material would make the best [mechanical part]. The top response I got was directly copied from a laughably stupid LinkedIn essay. The second response was derived from some marketing-slop press release. There was no original insight at all. What I took away from my prompt was that I'd have to do the research and experimentation myself.
Point is, I don't think that LLMs are capable of coming up with terrifyingly novel ways to kill all humans. And this hasn't changed at all over the past five years. Now they're able to trawl LinkedIn posts and browse the web for press releases, is all.
More than that, these models lack independent volition and they have no temporal/spatial sense. It's not clear, from first principles, that they can operate as truly independent agents.
We have tools to help us with that, and maybe it isn't a big loss? And they also bring new arenas and abilities.
And maybe in the future we will be worse at critical thinking (https://news.ycombinator.com/item?id=43484224), and maybe that isn't a big loss? It is hard to imagine what new abilities and arenas will emerge. Still, I think critical thinking is a worse loss than memory and mental arithmetic. Then again, we are probably a lot less good at it than we think we are, generally.
Don't get me wrong, I'm not immune to these feelings either. I want to do good work and I want people to love what I do. But there's something so... so fucking nakedly exhibitionist and narcissistic about these kinds of posts. Like, so, GO FUCKING LAY WITH CLAMS, write a novel, the world is waiting for it if you're really a genius. Have the courage to say you have a conscience if you actually do. Leave the rest of us alone and stop polluting a world you don't understand with your childish greed and self-obsession.
Dumb bombs kill people just as easily. One 80-year-old nuke is, at least potentially, more effective than the entirety of the world's drones.
The major difference is that in order to use a calculator, you need to know and understand the math you're doing. It's a tool you can work with. I always had a calculator for my math exams and I always had bad grades :)
You don't have to know how to program to ask ChatGPT to build yet another app for you. It's a substitute for your brain. My university students get good grades on their take-home exams, but can't spot an off-by-one error in a three-line Golang for loop during an in-person exam.
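To make that concrete, here is a hypothetical example of the kind of loop I mean (the function and names are invented for illustration); the bug is in the loop condition:

    package main

    import "fmt"

    // printFirst claims to print the first n elements of s.
    func printFirst(s []string, n int) {
        for i := 0; i <= n; i++ { // off-by-one: <= should be <, so s[n] is read
            fmt.Println(s[i])
        }
    }

    func main() {
        printFirst([]string{"a", "b", "c"}, 3) // panics: index out of range [3]
    }

Spotting that <= takes a few seconds if you've actually written loops yourself.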
... only because "unsafe" and "leaky" are a Ponzi's best-and-loves-to-be-roofied-and-abused friend ... you see, intelligence is only good when it doesn't irreversibly break everything to the point where most of the variety of the physical structure that evolved it and maintains it is lost.
you could argue, of course, and this is an abbreviated version, that a new physical structure then evolves a new intelligence that is adapted (emerged from and adjusts to) to the challenges of the new environment but that's not the point of already capable self-healing systems;
except if the destructive part of the superhuman intelligence is more successful with its methods of sabotage and disruption of
(a) 'truthy' information flow and
(b) individual and collective super-rational agency -- for the good of as many systems-internal entities as possible, as a precaution due to always living in uncertainty and being surrounded by an endless amount of variables currently tagged "noise"
-- than its counterpart is in enabling and propagating (a) and (b) ...
in simpler words, if the red team FUBARs the blue team or vice versa, the superhuman intelligence can be assumed to have cancer, or at least that some vital part of the force is otherwise corrupted.
Funny, that is what my father taught me when I was 12, because we had compassion. What is it with glorifying all these logic-loving, Spock-like people? Don't you know Captain Kirk was the real hero of Star Trek? Because he had compassion?
It is no wonder the Zizians were birthed from LW.
I noticed that, around the turn of the century, when "The Web" was suddenly all about the Benjamins.
It's sort of gone downhill, since.
For myself, I've retired, and putter around in my "software garden." I do make use of AI, to help me solve problems, and generate code starts, but I am into it for personal satisfaction.
In the case of asbestos, this is incorrect. Many people knew it was deadly, but the corporations selling it hid it for decades, killing thousands of people. There are quite a few other examples besides asbestos, like leaded fuel or cigarettes.
Me, I don't have billions of dollars, but I might be in the top 10% or something. And I just cringe when I see guys use their money and status or job title, or connections, or cars or shoes or... anything they have as opposed to who they are as a way to impress people. (Women, usually). I understand this is what they think they have to do. Like, I understand that's how primates function, and you're just doing what apes do, but do they seriously think they'll ever be able to trust anyone who pretends to like them after that person thinks they're rich?
Maybe I'm just lucky I got to watch it up close when I was a teenager. Lol. My brother's first wife, at his wedding, got up and gave a speech... she said, "my friends all said he was too short, but I told them he was taller when he was standing on his wallet". Some people laughed. I didn't. After fifteen years of screaming at each other and drug abuse, she committed suicide and he got with the next secretary who hated him but wanted his money. Oh well.
My answer has always been to appear to be poor as fuck until I know what drives someone. When I meet a girl, I'll open doors and always buy dinner... at a $2 taco joint. And make sure she offers to buy the next round of drinks. I'll play piano in a random bar, and make her sing along. I'll order her the cheapest beer. I'll show her a painting I made and tell her I can't make any money selling 'em, is why I'm broke. If anyone asks me what I do, I don't say SWE or CTO, I say I'm a writer or a musician between things. And I'll do this for months until I get to know a person. Yeah, it's a test. The girls I've had relationships with, the girl I'm with right now, passed it. She doesn't even want to know. She says, whatever you got, I could've been with someone richer than you but I didn't want that life, so play piano for me. I'm not saying I've got the key to happiness, or humility, and maybe I'm a total asshole too, but... at least I'm not an asshole who's so hollow they have to crow about their job or their money to find "love" from people who - let's say this - can not, and will not ever love them.
Couldn't this very same argument have been used against any form of mental augmentation, like written language and computers? Or, in an extended interpretation, against any form of physical augmentation, like tool use?
A useless AI isn't a threat: nobody will use it.
LLMs, as they exist today, are between these two. They're competent enough to get used, but will still give incorrect (and sometimes dangerous) answers that the users are not equipped to notice.
Like designing US trade policy.
> Yet, as a regular user of SOTA AI models, it's far from clear to me that the risk exists on any foreseeable time horizon. Even today's best models are credulous and lack a certain insight and originality.
What does the latter have to do with the former?
> Point is, I don't think that LLMs are capable of coming up with terrifyingly novel ways to kill all humans.
Why would the destruction of humanity need to use a novel mechanism, rather than a well-known one?
> And this hasn't changed at all over the past five years.
They're definitely different now than 5 years ago. I played with the DaVinci models back in the day, nobody cared because that really was just very good autocomplete. Even if there's a way to get the early models to combine knowledge from different domains, it wasn't obvious how to actually make them do that, whereas today it's "just ask".
> Now they're able to trawl LinkedIn posts and browse the web for press releases, is all.
And write code. Not great code, but "it'll do" code. And use APIs.
> More than that, these models lack independent volition and they have no temporal/spatial sense. It's not clear, from first principles, that they can operate as truly independent agents.
While I'd agree they lack the competence to do so, I don't see how this matters. Humans are lazy and just tell the machine to do the work for them, give themselves a martini and a pay rise, then wonder why "The Machine Stops": https://en.wikipedia.org/wiki/The_Machine_Stops
The human half of this equation has been shown many times in the course of history. Our leaders treat other humans as machines or as animals, give themselves pay rises, then wonder why the strikes, uprisings, rebellions, and wars of independence happened.
Ironically, the lack of imagination of LLMs, the very fact that they're mimicking us, may well result in this kind of AI doing exactly that kind of thing even with the lowest interpretation of their nature and intelligence — the mimicry of human history is sufficient.
--
That said, I agree with you about the limitations of using them for research. Where you say this:
> I noticed this myself just the other day. I asked GPT-4.5 "Deep Research" what material would make the best [mechanical part]. The top response I got was directly copied from a laughably stupid LinkedIn essay. The second response was derived from some marketingslop press release. There was no original insight at all. What I took away from my prompt was that I'd have to do the research and experimentation myself.
I had similar with NotebookLM, where I put in one of my own blog posts and it missed half the content and re-interpreted half the rest in a way that had nothing much in common with my point. (Conversely, this makes me wonder: how many humans misunderstand my writing?)
The analogy is with stock market flash-crashes, but those can be undone if everyone agrees "it was just a bug".
Software operates faster than human reaction times, so there's always pressure to fully automate aspects of military equipment, e.g. https://en.wikipedia.org/wiki/Phalanx_CIWS
Unfortunately, a flash-war from a bad algorithm, from a hallucination, from failing to specify that the moon isn't expected to respond to IFF pings even when it comes up over the horizon from exactly the direction you've been worried about finding a Soviet bomber wing… those are harder to undo.
https://richardswsmith.wordpress.com/2017/11/18/we-are-here-...
In reflecting on my career I can say I got into it for the right reasons. That is, I liked programming — but I also found out fairly quickly that not everyone could do it, and so it could be a career path that would prove lucrative. And this in particular for someone who had no other likely path, for example, to ever owning a home. I was probably not going to be able to afford graduate school (I had barely paid for state college by working minimum-wage jobs throughout college and over the summers), and regardless, I was not the most studious person. (My degree was in Education — I had expected a modest income as a career high school teacher.)
But as I say, I enjoyed programming at first. And when it arrived, the web was just a giant BBS as far as I was concerned, and so of course I liked it. But it is possible to find that a thing you really like can go to shit over the ensuing decades. (And for that matter, my duties as an engineer got shittier as well as the career "evolved". I had not originally signed up for code reviews, unit tests, scrum, etc. Oh well.)
Money as a pursuit made sense to me after I was in the field and saw that others around me were doing quite well — able, as I say, to afford to buy a home — something I had assumed would always be out of reach for me (my single mother had always rented, and I assumed I would as well — oh, I still had a modest college loan to pay off too). So I learned about 30-year home loans, learned about the real estate market in the Bay Area, learned also about RSUs, capital gains tax, 401Ks, index funds, etc.
But as is becoming a theme in this thread (?) at some point I was satisfied that I had done enough to secure a home, tools for my hobbies, and had raised three girls — paid for their college. I began to see the now burdensome career I was in as an albatross around my soul. The technology that I had once enjoyed, made my career on the back of, had gone sour.
“(talking about when he tells his wife he’s going out to buy an envelope) Oh, she says well, you’re not a poor man. You know, why don’t you go online and buy a hundred envelopes and put them in the closet? And so I pretend not to hear her. And go out to get an envelope because I’m going to have a hell of a good time in the process of buying one envelope. I meet a lot of people. And, see some great looking babes. And a fire engine goes by. And I give them the thumbs up. And, and ask a woman what kind of dog that is. And, and I don’t know. The moral of the story is, is we’re here on Earth to fart around. And, of course, the computers will do us out of that. And, what the computer people don’t realize, or they don’t care, is we’re dancing animals.”
― Kurt Vonnegut
You got yours. Now what?
Guide to Bow Tillering:
https://straightgrainedboard.com/beginners-guide-on-bow-till...
I could have skipped sooner maybe?
Once I had kids though I found I had a higher tolerance for a job getting shittier, a lower tolerance for restarting in a new career. So I put up with a worsening job for them.
I quit the moment my last daughter left for college.
You're also not the only one doing charity work.
Just sayin'.
I think it's a little deeper than that. It's the democratization of capability.
If few people have the tools, the craftsman is extremely valuable. He can make a lot of money without a glut of knowledge or real skill. In general, people don't have the tools and skills to catch up to where he is. He is wealthy with only front-loaded effort.
If everyone has the same tools, the craftsman still has value, because of the knowledge and skillset developed over time. He makes more money because his skills are valuable and remain scarce; he's incentivized to further this skillset to stay above the pack, continue to be in demand, and make more money.
If the tools do the job for you, the craftsman has limited value. He's an artifact. No matter how much he furthers his expertise, most people will just turn the tool on and get a good-enough product.
We're in between phase 2 and 3 at the moment. We still test for things like algorithm design and ask questions in interviews about the complexity of approaches. A lot of us still haven't moved on to the "ok but now what?" part of the transition.
The value now lies less in knowing how the automation works and improving our knowledge of the underlying design, and more in knowing how to use the tools in ways that produce more value than the average Joe does. It's a hard transition for people who grew up thinking the former was all you needed to get a comfortable or even lucrative life.
I'm past my SDE interview phase of life now, and in seeking engineers I'm looking less for people who know how to build a version of the tool and more for people who operate in the present, have accepted the change, and want to use what they have access to and add human utility, making the sum of the whole greater than the parts.
To me the best part of building software was the creativity. That part hasn't changed. If anything it's more important than ever.
Ultimately we're building things to be consumed by consumers. That hasn't changed. The creek started flowing in a different direction, and your job in this space is less about putting rocks where the water used to go and more about accepting that things are different and adapting.
I can't spell for shit anymore. Ever since auto correct became omnipresent in pretty much all writing fields, my brain just kinda ditched remembering how to spell words.
buuuttt
Manual labor has been obsolete for at least 100 years now for certain classes of people, and fitness is still an enormous recreational activity people partake in. So even in an AI heavy society, I still strongly suspect there will be "brain games" that people still enjoy and regularly play.
> If men learn [writing], it will implant forgetfulness in their souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.
You've precisely defined why nobody takes LessWrong seriously.
There is another side to this, which is maybe we don’t need to know a lot of things.
It was true with search engines already, but maybe truer with LLMs. That thing you’re querying probably doesn’t actually matter. It’s neurotic digging and searching for an object you will never use or benefit from. The urge to seek is strong but you won’t find the thing you’re searching for this way.
You might learn more by just going for a walk.
People should spend more of their time doing things because they're fun, not because they want to get better at it.
Maybe the apocalypse will happen in our lifetime, maybe not. I intend to have fun as much as I can in my life either way.
Jesus turned over tables when they were trying to profit inside the church. His movement seemed to turn out pretty good.
I get it that money coming into the industry made the whole industry suck. Honestly, Apple was a much more fun place to work when there was no money to be made there (no more than a paycheck, anyway). Others may disagree, but I found its success made it an increasingly shittier place to work. (Others, though, as I say, may have enjoyed the wider reach the platform gained with its success.)
> Jesus turned over tables when they were trying to profit inside the church. His movement seemed to turn out pretty good.
Applying this story to posting anonymous comments on an internet forum seems like a stretch. There are hardly any meaningful consequences for your decision to write in this way, whereas Jesus very much became a target after that demonstration.
If that isn't one of the deepest aphorisms on psychology out there, I don't know what is.
I'm not religious, but for this alone you deserve a life of blessings and happiness. The fact that I never ever have to fuck around with Adobe PDF apps to juggle PDFs is one of the load-bearing things keeping me sane in an insane world.
See https://www.streetepistemology.com/ for content about this. It is possible to guide discussions in a healthy manner and with positive goals in mind.
Hey, as long as they are both up front and clear about what they are getting out of their relationship. They're grown adults after all. I knew someone who proudly would admit he was a "sugar daddy" and both he and his "girlfriends" would fully agree that their relationships were transactional and contingent on the money flow. I knew someone in college who was very open and unapologetic that her plan was to find and marry someone rich. There's no right and wrong.
I can’t stand Adobe Reader, and use Preview all the time.
Your explanation makes more sense, however.
A few years ago, I’d have quietly filed this kind of article under “too hard” or passed a log analysis request from the CIO down the line. Now? I get AI to draft the query, check it, run it, and move on. It’s not about thinking less — it’s about clearing the clutter so I can focus where it counts.
Like with the calculator, why would you need to be able to calculate things on paper if you can just have a machine do it for you? Same goes for more advanced AI: what's the point of being able to do things without them?
Not to offend, but in my opinion that's nothing more than a romantic view of what humans "should be capable of". 10 years from now we can all laugh at the idea of people defending doing stuff without AI assistance.
However, it allows you to do things you don't understand. I'm again taking examples from what I see at my university (n=1): almost all students deliver complex programming projects involving multi-threading, but can't answer a basic quiz about the same language in person. And by basic question I mean "select among the propositions listed below the correct keyword used to declare a variable in Golang". I'm not kidding: at least one-third of the class answers something wrong here.
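For readers who don't write Go, a quick sketch of the level I'm talking about (the quiz wording above is paraphrased); all of these are valid ways to declare a variable:

    var count int    // explicit declaration with the var keyword, zero value 0
    var name = "go"  // var with the type inferred from the initializer
    age := 30        // short declaration, only valid inside functions

That's the whole question.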
So yeah, maybe we as a society will agree that those people will not be software engineers but prompt engineers. They'll send instructions to an agent that will display text in a strange and cryptic language, and maybe when they press "Run" the lights will be green. But as a professional, why should I hire them once they've earned their diploma? They are far from ready for the professional world, can't debug systems without using LLMs (and maybe those LLMs can't help them because the company context matters too much), and most importantly they are way less capable than freshly graduated engineers from a few years back.
> 10 years from now we can all laugh at the idea of people defending doing stuff without AI assistance.
I hope so, but unfortunately I'm quite pessimistic. Expertise and the capacity for focus are dying, and we are relying more and more on artificial "intelligence" and its biases. But time will tell.
But it did. Quick, what's 67 * 49? A math wiz would furrow their brow for a second and be able to spit out an answer, while the rest of us have to pull out a calculator. When you're doing business in person and have to move numbers around, having to stop and use a calculator slows you down. If you don't have a role where that's useful then it's not a needed skill and you don't notice it's missing, like riding a horse, but that doesn't mean the skill itself wouldn't be useful to have.
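(For anyone reaching for the calculator: the wiz's shortcut is 67 * 49 = 67 * 50 - 67 = 3350 - 67 = 3283.)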
Besides, that's simply not what the LW crowd is talking about. They're talking about, e.g., hypercompetent AIs developing novel undetectable biological weapons that kill all humans on purpose. (This is the "AI 2027" scenario.)
Yet, as far as I'm aware, there's not a single important discovery or invention made by AI. No new drugs, no new solar panel materials, no new polymers, etc. And not for want of trying!
They know what humans know. They're no more competent than any human; they're as competent as low-level expert humans, just with superhuman speed and memory. It's not clear that they'll ever be able to move beyond what humans know and develop hypercompetence.
I don't believe having this option will make people a lot less functional. Sure, some may slip through the cracks by faking it, but we'll soon develop different metrics to judge somebody's true capabilities. Actually, we'll probably create AI for that as well.
As a professional, you hire people who get things done. If that means hiring skilled LLM users who do not fully understand what they produce, but what they make consistently works about as often as classic dev output does, and they do this in a fraction of the time... you would be crazy _not_ to hire them.
It's true that inexperienced developers will probably generate a massive tech debt during the time where AI is good enough to provide code, but not good enough to fish out hidden bugs. It will soon surpass humans at that skill though, and can then quickly clean up all the spaghetti.
Over the last two years my knowledge of how to perform and automate repetitive and predictable tasks has gradually worn away, replaced by a higher-level understanding of software architecture. I use it to guide language models to a desired outcome. For those who want to learn, LLMs excel at explaining code. For this, and plenty of other subjects, they are the greatest learning tool we have ever had! All it takes is a curious mind.
We are in a transitional time and we simply need to figure out how to deal with this new technology, warts and all. It's not like there is an alternative scenario; it's not going to go away...
This is increasingly happening to me every day. Hope the alien overlords don't have spelling tests (as their version of IQ tests) to separate the serfs from the field-masters.
Fair point. But they are heavily metaphor-laden paragraphs.
Textual interpretation is a highly subjective activity. Entire careers consist of interpreting, reinterpreting, and discussing texts that others have already interpreted. Film critics, book reviewers, political pundits, TV anchors, podcasters, etc.
'In 1972, Chinese premier Zhou Enlai was asked about the impact of the French Revolution. "Too early to say," he replied'
I had my own sense of what the "coquina" metaphor stood for. I wanted to see other peoples' interpretations. Turns out my interpretation was wrong.
They are, however, going to enable credulous idiots to drive humanity completely off a cliff. (And yes, we're seeing that in action right now). They don't need to be independent agents. They just need to seem smart.
And mines, and the CIWS I linked to and several like it (I think SeaRAM is similar autonomy to engage), and the Samsung SGR-A1 whose autonomy led to people arguing that we really ought to keep humans in the loop: https://en.wikipedia.org/wiki/Lethal_autonomous_weapon
The problem is, the more your adversaries automate, the more you need to automate to keep up. Right now we can even have the argument about the SGR-A1 because it's likely to target humans, who operate at human speeds, and therefore a human in the loop isn't a major risk to operational success. Counter Rocket, Artillery, and Mortar (C-RAM) systems already need to be autonomous because human eyes can't realistically track a mortar in mid-flight.
There were a few times in the Cold War when it was luck that the lack of technology forced us to rely on humans in the loop, humans who said "no".
People are protesting against fully autonomous weapons because they're obviously useful enough to be militarily interesting, not just because they're obviously threatening.
> Besides, that's simply not what the LW crowd is talking about.
LW talks about every possible risk. I got the flash-war idea from them.
> Yet, as far as I'm aware, there's not a single important discovery or invention made by AI. No new drugs, no new solar panel materials, no new polymers, etc. And not for want of trying!
For about a decade after Word Lens showed the world that it was possible to run real-time augmented reality translations on a smartphone, I've been surprising people — even fellow expat software developers — that this exists and is possible.
Today, I guess I have to surprise you with the 2024 Nobel Prize in Chemistry. Given my experience with Word Lens, I fully expect to keep on surprising people with this for another decade.
Drugs/biosci:
• DSP-1181: https://www.bbc.com/news/technology-51315462
• Halicin: https://en.wikipedia.org/wiki/Halicin
• Abaucin: https://en.wikipedia.org/wiki/Abaucin
• The aforementioned 2024 Nobel Prize for AlphaFold: https://en.wikipedia.org/wiki/List_of_Nobel_laureates_in_Che...
PV:
• Materials: https://www.chemistryworld.com/news/ai-aids-discovery-of-sol...
• Other stuff: https://www.weforum.org/stories/2024/08/how-ai-can-help-revo...
Polymers:
• https://arxiv.org/abs/2312.06470
• https://arxiv.org/abs/2312.03690
• https://arxiv.org/abs/2409.15354
> They know what humans know. They're no more competent than any human; they're as competent as low-level expert humans, just with superhuman speed and memory. It's not clear that they'll ever be able to move beyond what humans know and develop hypercompetence.
One of the things humans know is "how to use lab equipment to get science done": https://www.nature.com/articles/s44286-023-00002-4
"Just with superhuman speed and memory" is a lot, even if they were somehow otherwise limited to a human equivalent of IQ 90.
Fewer people can read dead comments. And what others said.
It is funny that people think Daoists do not get angry, and yet all of you suppress anger in some unnatural way to "get along" or to make sure you are not downvoted.
If the quote threw in some casual racism, or advocated for stealing the envelope instead of paying for it, I would similarly disregard the overall philosophy.