Most active commenters
  • profsummergig(7)
  • FollowingTheDao(5)
  • ben_w(5)
  • DrSiemer(3)
  • A_D_E_P_T(3)
  • criddell(3)
  • ChrisMarshallNY(3)
  • JKCalhoun(3)
  • chipsrafferty(3)

Playing in the Creek

(www.hgreer.com)
343 points c1ccccc1 | 117 comments
1. profsummergig ◴[] No.43651005[source]
Could someone please explain the "coquina" metaphor?
replies(5): >>43651071 #>>43651073 #>>43651084 #>>43651280 #>>43651415 #
2. hecanjog ◴[] No.43651071[source]
I think that they're saying a little bit of playing around with replacing thinking and composing with automated tools is recoverable, but at an industrial or societal scale the damage is significant. Like the difference between shoveling away some sand with your hands to bury the small creatures temporarily and actually destroying their habitat by "lobbying city council members to put in a groin or seawall, and seriously move that beach sand."
replies(1): >>43651091 #
3. xmprt ◴[] No.43651073[source]
My understanding is that the author is this superior being trying to accomplish a massive task (damming a beach) while knowing that it could cause problems for these clams. In the real world, Anthropic is trying to accomplish a massive task (building AGI) and they're finally starting to notice the potential impacts this has on people.
4. doctoboggan ◴[] No.43651075[source]
This is an excellent essay, and I feel similar to the author but couldn't express it as nicely.

However, if we are counting on AI researchers to take the advice and slow down, then I wouldn't hold my breath waiting. The author indicated they stepped away from a high-paying finance job for moral reasons, which is admirable. But Wall Street continues on and does not lack for people willing to play the "make as much money as you can" game.

replies(4): >>43651579 #>>43651607 #>>43652279 #>>43654374 #
5. jjcob ◴[] No.43651084[source]
Coquinas are clams that bury themselves in the sand very close to the surface [1]. The author worries that while they are playing with the sand, they might accidentally bury coquina clams too deep and kill them because they can no longer reach the surface.

Anthropic apparently is starting to notice the possible danger of their work to others. I'm not sure what they are referring to.

[1]: https://www.youtube.com/watch?v=KZUlf7quu3o

replies(2): >>43651125 #>>43655879 #
6. unwind ◴[] No.43651086[source]
Ah, this [1] meaning of tillering (bending wood to form a bow), not this [2] (production of side shoots in grasses). The joys of new words.

[1]: https://www.howtomakealongbow.co.uk/part-5-tillering

[2]: https://en.wikipedia.org/wiki/Tiller_(botany)

replies(1): >>43651182 #
7. profsummergig ◴[] No.43651091{3}[source]
I skimmed the Anthropic report and didn't catch the negative effects. Did they mention any? Good on them if they did.
replies(1): >>43651180 #
8. profsummergig ◴[] No.43651125{3}[source]
> Anthropic apparently is starting to notice the possible danger to others of their work. I'm not sure what they are referring to.

Are they being vague about the danger? If possible, please link to a communique from them. I've missed it somehow. Thanks.

replies(1): >>43651136 #
9. vermilingua ◴[] No.43651136{4}[source]
https://www.anthropic.com/news/anthropic-education-report-ho...

Discussed here yesterday: https://news.ycombinator.com/item?id=43633383

replies(1): >>43651163 #
10. profsummergig ◴[] No.43651163{5}[source]
Thank you.
11. hecanjog ◴[] No.43651180{4}[source]
Yes, they mention a few times the concern that students are offloading critical thinking rather than using the tool for learning.
replies(1): >>43651410 #
12. defrost ◴[] No.43651182[source]
As I recall tillering is more about the shaping of the bow to achieve an optimal bend and force delivery on release.

It's an iterative process of bending and shaping, bending again, and wood removal in stages.

13. axpvms ◴[] No.43651212[source]
My backyard creek also had crocodiles in it.
replies(1): >>43652342 #
14. ern ◴[] No.43651280[source]
Maybe I’m not smart enough, or too tired to decode these metaphors, so I plugged the essay into ChatGPT and got a clear explanation from 4o.
replies(2): >>43651887 #>>43652491 #
15. Cthulhu_ ◴[] No.43651410{5}[source]
I just hope the educational institutions catch on, stick to their principles, and don't hand over the paperwork. The paper / title should be evidence of students' learning and thinking abilities, not just of their output.
16. cubefox ◴[] No.43651415[source]
Anthropic (Claude.ai) is mentioning in their report on LLMs and education that students use Claude to cheat and do their work for them:

https://www.anthropic.com/news/anthropic-education-report-ho...

17. MrBuddyCasino ◴[] No.43651572[source]
That was a well-written essay with a non-sequitur AI safety point tacked onto the end. His real-world examples were concrete, and the reason to stop escalating was easy to understand ("don't flood the neighbourhood by building a real dam").

The AI angle is not even hypothetical: there is no attempt to describe or reason about a concrete "x leading to y", just "see, the same principle probably extrapolates".

There is no argument there that is sounder than the "high velocities of steam locomotives might kill you" warnings people made 200 years ago.

replies(2): >>43651756 #>>43652038 #
18. yapyap ◴[] No.43651579[source]
> However, if we are counting on AI researchers to take the advice and slow down, then I wouldn't hold my breath waiting. The author indicated they stepped away from a high-paying finance job for moral reasons, which is admirable. But Wall Street continues on and does not lack for people willing to play the "make as much money as you can" game.

I doubt OP is counting on it; it's more that they're expressing what an optimal world would look like, so people can work towards it if they feel like it, or just to put the idea out there.

19. khazhoux ◴[] No.43651604[source]
Parents: you know how every day you look at your child and you’re struck with wonder at the amazing and quirky and unique person your little one is?

I swear that’s what lesswrong posters see every day in the mirror.

20. dachris ◴[] No.43651607[source]
The paperclip maximizers are already here, but they are maximizing money.

One recent HN comment [0] comparing corporations and institutions to AI really stuck with me - those are already superhuman intelligences.

[0] https://news.ycombinator.com/item?id=43580681

replies(3): >>43651783 #>>43652605 #>>43656596 #
21. DrSiemer ◴[] No.43651677[source]
So many articles and comments claim AI will destroy critical thinking in our youth. Is there any evidence that this conviction, which so many people share, is even remotely true?

To me it just seems like the same old knee-jerk Luddite response people have had, since the dawn of time, to any powerful new technology that challenges the status quo. The calculator did not erase math wizards, the television did not replace books, and so on. It just made us better, faster, more productive.

Sometimes there is an adjustment period (we still haven't figured out how to deal with short dopamine hits from certain types of entertainment and social media), but things will balance themselves out eventually.

Some people may go full-on Wall-E, but I for one will never stop tinkering, and many of my friends won't either.

The things I could have done if I had had an LLM as a kid... I think I've learned more in the past two years than ever before.

replies(5): >>43651964 #>>43651975 #>>43652300 #>>43652603 #>>43657305 #
22. luc4sdreyer ◴[] No.43651756[source]
> the high velocities of steam locomotives might kill you

This obviously seems silly in hindsight. Warnings about radium watches or asbestos sound less silly, or even wise. But neither had any solid scientific studies showing clear hazard and risk. Just people being good Bayesian agents, trying to ride the middle of the exploration vs. exploitation curve.

Maybe it makes sense to spend some percentage of AI development resources on trying to understand how they work, and how they can fail.

replies(2): >>43652249 #>>43652661 #
23. actionfromafar ◴[] No.43651783{3}[source]
I could imagine a Star Trek episode where someone says "I always assumed the paperclip optimizer was a parable for unchecked capitalism?"
24. profsummergig ◴[] No.43651887{3}[source]
Ah. Should have thought of that. Going to do that now. Thanks.
25. iNic ◴[] No.43651964[source]
I don't think you got the point of the article. It is saying that we as wise humans know (sometimes) when to stop optimizing for a goal, due to the negative side effects. AIs (and, as some other people have pointed out, corporations) do not naturally have this line in their heads, and we must draw such lines carefully and with purpose for these superhuman beings.
26. BrenBarn ◴[] No.43651969[source]
It's a nice article. In a way though it kind of bypasses what I see as the main takeaways.

It's not about AI development, it's about something mentioned earlier in the article: "make as much money as I can". The problems that we see with AI have little to do with AI "development", they have to do with AI marketing and promulgation. If the author had gone ahead and dammed the creek with a shovel, or blown off his hand, that would have been bad, but not that bad. Those kinds of mistakes are self-limiting because if you're doing something for the enjoyment or challenge of it, you won't do it at a scale that creates more enjoyment than you personally can experience. In the parable of the CEO and the fisherman, the fisherman stops at what he can tangibly appreciate.

If everyone working on and using AI were approaching it like damming a creek for fun, we would have no problems. The AI models we had might be powerful, but they would be funky and disjointed because people would be more interested in tinkering with them than making money from them. We see tons of posts on HN every day about remarkable things people do for the gusto. We'd see a bunch of posts about new AI models and people would talk about how cool they are and go on not using them in any load-bearing way.

As soon as people start trying to use anything, AI or not, to make as much money as possible, we have a problem.

The second missed takeaway is at the end. He says Anthropic is noticing the coquinas as if that means they're going to somehow self-regulate. But in most of the examples he gives, he wasn't stopped by his own realization, but by an external authority (like parents) telling him to stop. Most people are not as self-reflective as this author and won't care about "winning zero sum games against people who don't necessarily deserve to lose", let alone about coquinas. They need a parent to step in and take the shovel away.

As long as we keep treating "making as much money as you can" as some kind of exception to the principle of "you can't keep doing stuff until you break something", we'll have these problems, AI or not.

replies(3): >>43652468 #>>43652620 #>>43653836 #
27. dsign ◴[] No.43651975[source]
> Ai will destroy critical thinking in our youths

I don't think that's the argument the article was making. It was, to my understanding, a more nuanced question about whether we want to destroy or severely disturb systems at equilibrium by letting AI systems infiltrate our society.

> Sometimes there is an adjustment period (we still haven't figured out how to deal with short dopamine hits from certain types of entertainment and social media), but things will balance themselves out eventually.

One can zoom out a little bit. The issue didn't start with social media, nor with AI. "Star Wars: A New Hope" is, to my understanding, an incredibly good film. It came out in 1977 and it's a great story made to be appreciated by the masses, and in trying to achieve that goal, it really wasn't intellectually challenging. We have continued downhill from there for a while, and now we are at 16-second stingers on TikTok and YouTube. So, the way I see it, things are not balancing out. Worse, people in the USA elected D.J. Trump because somehow they couldn't understand how this real-world Emperor Palpatine was the bad guy.

28. iNic ◴[] No.43652038[source]
The progress-care trade-off is a difficult one to navigate, and is clearly more important with AI. I've seen people draw analogies to companies, which have often caused harm in pursuit of greater profits, both purposefully and simply as byproducts: oil spills, overmedication, pollution, ecological damage, bad labor conditions, hazardous materials, mass lead poisoning. Of course, the profit-seeking company as an invention has been one of the best humans have ever made, but that doesn't mean we shouldn't take "corp safety" seriously. We pass various laws on how corps can operate and what they can and cannot do, to limit harms and _align_ them with the goals of society.

So it is with AI. Except corps are made of people who work at people speeds, have vague morals, and are tied to society in ways AI might not be. AI might also be able to operate faster and with less error. So extra care is required.

29. praptak ◴[] No.43652194[source]
If there's money to be made, there will always be someone with a shovel or a truckload of sparklers who is willing to take the risk (especially if the risk can be externalized to the public) and reap the reward.
30. ripe ◴[] No.43652249{3}[source]
> This [steam locomotives might kill you] obviously seems silly in hindsight.

To be fair, many people did die on level crossings and by wandering on to the tracks.

We learned over time to put in place safety fences and tunnels.

replies(1): >>43653372 #
31. A_D_E_P_T ◴[] No.43652264[source]
The author seems concerned about AI risk -- as in, "they're going to kill us all" -- and that's a common LW trope.

Yet, as a regular user of SOTA AI models, it's far from clear to me that the risk exists on any foreseeable time horizon. Even today's best models are credulous and lack a certain insight and originality.

As Dwarkesh once asked:

> One question I had for you while we were talking about the intelligence stuff was, as a scientist yourself, what do you make of the fact that these things have basically the entire corpus of human knowledge memorized and they haven’t been able to make a single new connection that has led to a discovery? Whereas if even a moderately intelligent person had this much stuff memorized, they would notice — Oh, this thing causes this symptom. This other thing also causes this symptom. There’s a medical cure right here.

> Shouldn’t we be expecting that kind of stuff?

I noticed this myself just the other day. I asked GPT-4.5 "Deep Research" what material would make the best [mechanical part]. The top response I got was directly copied from a laughably stupid LinkedIn essay. The second response was derived from some marketing-slop press release. There was no original insight at all. What I took away from my prompt was that I'd have to do the research and experimentation myself.

Point is, I don't think that LLMs are capable of coming up with terrifyingly novel ways to kill all humans. And this hasn't changed at all over the past five years. Now they're able to trawl LinkedIn posts and browse the web for press releases, is all.

More than that, these models lack independent volition and they have no temporal/spatial sense. It's not clear, from first principles, that they can operate as truly independent agents.

replies(6): >>43652313 #>>43652314 #>>43653096 #>>43658616 #>>43659076 #>>43659525 #
32. ◴[] No.43652279[source]
33. Tistron ◴[] No.43652300[source]
I would expect people today to be quite a lot worse at mental arithmetic than we used to be before calculators. And worse at memorizing stuff than before writing.

We have tools to help us with that, and maybe it isn't a big loss? And they also bring new arenas and abilities.

And maybe in the future we will be worse at critical thinking (https://news.ycombinator.com/item?id=43484224), and maybe it isn't a big loss? It is hard to imagine what new abilities and arenas will emerge. I do think critical thinking is a worse loss than memory and mental arithmetic, though we are probably a lot less good at it than we think we are, generally.

34. tvc015 ◴[] No.43652313[source]
Aren’t semiautonomous drones already killing soldiers in Ukraine? Can you not imagine a future with more conflict and automated killing? Maybe that’s not seen as AI risk per se?
replies(2): >>43652564 #>>43653668 #
35. miningape ◴[] No.43652314[source]
If anything the AI would want to put itself out of its misery after having memorised all those LinkedIn posts
36. seafoamteal ◴[] No.43652342[source]
Florida?
replies(1): >>43652617 #
37. noduerme ◴[] No.43652468[source]
This is such a well-written response. There's something intentionally soothing about this post that slowly turns into a jarring form of self-congratulation as it goes along. Congratulations for knowing there's a limit to wrecking your parents' property. Congratulations for being able to appreciate the sand on the beach, in some no doubt instagrammable moment of existential simplicity. Congratulations for being so smart that you could have blown up your hand. And for "Leetcoding", whatever the fuck that means. And for claiming you quit a shady job because you got bored (but possibly also grew a conscience). And then topped off by the final turn: "This is, of course, about artificial intelligence development".

I'd only add one thing to your analysis: we've got a demo right here of a psyche that would prefer love to money (but mostly both), and it's still determined to foist bad things onto the world in a load-bearing way, as a bid for either, or whatever it can get. My parents used to call that "a kid that doesn't care if he gets good or bad attention, as long as he gets attention." I think that's the root driver for almost all the tech billionaires of the past 20 years, and the one thing that unites Bezos, Zuck, Jobs, Dorsey, Musk... it's: "Look dad, I didn't just take your money. I'm so smart I could'a blown off my hand with all those fireworks you bought me, but see? Two hands! Look how much money I made from your money! Why aren't you proud of me?! Where can I find love? Maybe if I tell people what a leetcoder I am and how I could be making BAD AI but I'm just making GOOD AI, then everyone will love me."

Don't get me wrong, I'm not immune to these feelings either. I want to do good work and I want people to love what I do. But there's something so... so fucking nakedly exhibitionist and narcissistic about these kinds of posts. Like, so, GO FUCKING LAY WITH CLAMS, write a novel, the world is waiting for it if you're really a genius. Have the courage to say you have a conscience if you actually do. Leave the rest of us alone and stop polluting a world you don't understand with your childish greed and self-obsession.

replies(2): >>43652496 #>>43654184 #
38. criddell ◴[] No.43652491{3}[source]
Are you at all concerned that plugging stuff like this into ChatGPT is leaving you with weaker cognitive muscles? Or is it more similar to what people do when they see a new word and reach for their dictionary?
replies(2): >>43652812 #>>43656156 #
39. bombcar ◴[] No.43652496{3}[source]
I’ve often wondered: with billions of dollars, how do you know someone actually loves you and not your money?

Complicated!

replies(1): >>43652662 #
40. A_D_E_P_T ◴[] No.43652564{3}[source]
That's not "AI risk" because they're still tools that lack independent volition. Somebody's building them and setting them loose. They're not building themselves and setting themselves loose, and it's far from clear how to get there from here.

Dumb bombs kill people just as easily. One 80-year-old nuke is, at least potentially, more effective than the entirety of the world's drones.

replies(1): >>43653239 #
41. hacb ◴[] No.43652603[source]
> The calculator did not erase math wizards

The major difference is that in order to use a calculator, you need to know and understand the math you're doing. It's a tool you can work with. I always had a calculator for my math exams and I always had bad grades :)

You don't have to know how to program to ask ChatGPT to build yet another app for you. It's a substitute for your brain. My university students get good grades on their do-at-home exams, but can't spot an off-by-one error in a three-line Golang for loop during an in-person exam.
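
For illustration, a minimal hypothetical sketch of the kind of bug I mean (not the actual exam code):

    package main

    import "fmt"

    func main() {
        nums := []int{1, 2, 3}
        // Off-by-one: "<=" runs one index past the end, so the final
        // iteration reads nums[3] and panics at runtime.
        for i := 0; i <= len(nums); i++ {
            fmt.Println(nums[i])
        }
    }

The fix is i < len(nums); spotting that in three lines is exactly the check they fail in person.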

replies(1): >>43656731 #
42. bitethecutebait ◴[] No.43652605{3}[source]
> those are already superhuman intelligence(s)

... only because "unsafe" and "leaky" are a Ponzi's best-and-loves-to-be-roofied-and-abused friend ... you see, intelligence is only good when it doesn't irreversibly break everything to the point where most of the variety of the physical structure that evolved it and maintains it is lost.

you could argue, of course, and this is an abbreviated version, that a new physical structure then evolves a new intelligence that is adapted (emerged from and adjusts to) to the challenges of the new environment but that's not the point of already capable self-healing systems;

except if the destructive part of the superhuman intelligence is more successful with its methods of sabotage and disruption of

(a) 'truthy' information flow and

(b) individual and collective super-rational agency -- for the good of as many systems-internal entities as possible, as a precaution due to always living in uncertainty and being surrounded by an endless amount of variables currently tagged "noise"

-- than its counterpart is in enabling and propagating (a) and (b) ...

in simpler words, if the red team FUBARS the blue team or vice versa, the superhuman intelligence can be assumed to have cancer or that at least some vital part of the force is corrupted otherwise.

43. FollowingTheDao ◴[] No.43652610[source]
"It was only once I got it that I realized I no longer could play the game "make as much money as I can.""

Funny, that is what my father taught me when I was 12 because we had compassion. What is it with glorifying all these logic loving Spock like people? Don't you know Captain Kirk was the real hero of Star Trek? Because he had compassion?

It is no wonder the Zizians were birthed from LW.

44. tilne ◴[] No.43652617{3}[source]
No they’ve got a little place on the Nile
45. ChrisMarshallNY ◴[] No.43652620[source]
> As soon as people start trying to use anything, AI or not, to make as much money as possible, we have a problem.

I noticed that, around the turn of the century, when "The Web" was suddenly all about the Benjamins.

It's sort of gone downhill, since.

For myself, I've retired, and putter around in my "software garden." I do make use of AI, to help me solve problems, and generate code starts, but I am into it for personal satisfaction.

replies(1): >>43652904 #
46. MrBuddyCasino ◴[] No.43652661{3}[source]
> Warnings about radium watches or asbestos sound less silly, or even wise. But neither had any solid scientific studies showing clear hazard and risk.

In the case of asbestos, this is incorrect. Many people knew it was deadly, but the corporations selling it hid it for decades, killing thousands of people. There are quite a few other examples besides asbestos, like leaded fuel or cigarettes.

47. noduerme ◴[] No.43652662{4}[source]
I've got a particularly strong view on this, because I've got a brother who tried to get wildly rich in some seriously unethical ways to impress our father, and still never got a single word of praise from him. And who's miserable and unloved and been betrayed by the women he married... who married him for his money. He's so desperate for someone to come admire his cars and his TVs, to just come hang out with him. He pays for friends.

Me, I don't have billions of dollars, but I might be in the top 10% or something. And I just cringe when I see guys use their money and status or job title, or connections, or cars or shoes or... anything they have as opposed to who they are as a way to impress people. (Women, usually). I understand this is what they think they have to do. Like, I understand that's how primates function, and you're just doing what apes do, but do they seriously think they'll ever be able to trust anyone who pretends to like them after that person thinks they're rich?

Maybe I'm just lucky I got to watch it up close when I was a teenager. Lol. My brother's first wife, at his wedding, got up and gave a speech... she said, "my friends all said he was too short, but I told them he was taller when he was standing on his wallet". Some people laughed. I didn't. After fifteen years of screaming at each other and drug abuse, she committed suicide and he got with the next secretary who hated him but wanted his money. Oh well.

My answer has always been to appear to be poor as fuck until I know what drives someone. When I meet a girl, I'll open doors and always buy dinner... at a $2 taco joint. And make sure she offers to buy the next round of drinks. I'll play piano in a random bar, and make her sing along. I'll order her the cheapest beer. I'll show her a painting I made and tell her I can't make any money selling 'em, is why I'm broke. If anyone asks me what I do, I don't say SWE or CTO, I say I'm a writer or a musician between things. And I'll do this for months until I get to know a person. Yeah, it's a test. The girls I've had relationships with, the girl I'm with right now, passed it. She doesn't even want to know. She says, whatever you got, I could've been with someone richer than you but I didn't want that life, so play piano for me. I'm not saying I've got the key to happiness, or humility, and maybe I'm a total asshole too, but... at least I'm not an asshole who's so hollow they have to crow about their job or their money to find "love" from people who - let's say this - can not, and will not ever love them.

replies(4): >>43652738 #>>43653621 #>>43655591 #>>43671762 #
48. bombcar ◴[] No.43652738{5}[source]
One of the things I’ve heard, and found to be true, is that if you don’t love yourself it’s going to be terribly hard for others to love you
replies(1): >>43654608 #
49. adwn ◴[] No.43652812{4}[source]
> Are you at all concerned that plugging stuff like this into ChatGPT is leaving you with weaker cognitive muscles?

Couldn't this very same argument have been used against any form of mental augmentation, like written language and computers? Or, in an extended interpretation, against any form of physical augmentation, like tool use?

replies(2): >>43653365 #>>43654152 #
50. hobs ◴[] No.43653061{4}[source]
Are you jealous or mad that they didn't do more for you? Neither is a good look really. What have you done for me lately?
replies(1): >>43653413 #
51. ben_w ◴[] No.43653096[source]
A perfect AI isn't a threat: you can just tell it to come up with a set of rules whose consequences would never be things that we today would object to.

A useless AI isn't a threat: nobody will use it.

LLMs, as they exist today, are between these two. They're competent enough to get used, but will still give incorrect (and sometimes dangerous) answers that the users are not equipped to notice.

Like designing US trade policy.

> Yet, as a regular user of SOTA AI models, it's far from clear to me that the risk exists on any foreseeable time horizon. Even today's best models are credulous and lack a certain insight and originality.

What does the latter have to do with the former?

> Point is, I don't think that LLMs are capable of coming up with terrifyingly novel ways to kill all humans.

Why would the destruction of humanity need to use a novel mechanism, rather than a well-known one?

> And this hasn't changed at all over the past five years.

They're definitely different now than 5 years ago. I played with the DaVinci models back in the day, nobody cared because that really was just very good autocomplete. Even if there's a way to get the early models to combine knowledge from different domains, it wasn't obvious how to actually make them do that, whereas today it's "just ask".

> Now they're able to trawl LinkedIn posts and browse the web for press releases, is all.

And write code. Not great code, but "it'll do" code. And use APIs.

> More than that, these models lack independent volition and they have no temporal/spatial sense. It's not clear, from first principles, that they can operate as truly independent agents.

While I'd agree they lack the competence to do so, I don't see how this matters. Humans are lazy and just tell the machine to do the work for them, give themselves a martini and a pay rise, then wonder why "The Machine Stops": https://en.wikipedia.org/wiki/The_Machine_Stops

The human half of this equation has been shown many times in the course of history. Our leaders treat other humans as machines or as animals, give themselves pay rises, then wonder why the strikes, uprisings, rebellions, and wars of independence happened.

Ironically, the lack of imagination of LLMs, the very fact that they're mimicking us, may well result in this kind of AI doing exactly that kind of thing even with the lowest interpretation of their nature and intelligence — the mimicry of human history is sufficient.

--

That said, I agree with you about the limitations of using them for research. Where you say this:

> I noticed this myself just the other day. I asked GPT-4.5 "Deep Research" what material would make the best [mechanical part]. The top response I got was directly copied from a laughably stupid LinkedIn essay. The second response was derived from some marketing-slop press release. There was no original insight at all. What I took away from my prompt was that I'd have to do the research and experimentation myself.

I had similar with NotebookLM, where I put in one of my own blog posts and it missed half the content and re-interpreted half the rest in a way that had nothing much in common with my point. (Conversely, this makes me wonder: how many humans misunderstand my writing?)

52. appleorchard46 ◴[] No.43653197[source]
Could someone explain the metaphor? I'm struggling to see the connection between AI and the rest of the post.
replies(1): >>43653562 #
53. ben_w ◴[] No.43653239{4}[source]
Oh, but it is an AI risk.

The analogy is with stock market flash-crashes, but those can be undone if everyone agrees "it was just a bug".

Software operates faster than human reaction times, so there's always pressure to fully automate aspects of military equipment, e.g. https://en.wikipedia.org/wiki/Phalanx_CIWS

Unfortunately, a flash-war from a bad algorithm, from a hallucination, from failing to specify that the moon isn't expected to respond to IFF pings even when it comes up over the horizon from exactly the direction you've been worried about finding a Soviet bomber wing… those are harder to undo.

replies(1): >>43657902 #
54. migueldeicaza ◴[] No.43653292[source]
Vonnegut said it best:

https://richardswsmith.wordpress.com/2017/11/18/we-are-here-...

replies(2): >>43653410 #>>43653925 #
55. JKCalhoun ◴[] No.43653338{4}[source]
I'm retired as well, dislike what we have for the internet these days.

In reflecting on my career I can say I got into it for the right reasons. That is, I liked programming — but also found out fairly quickly that not everyone could do it and so it could be a career path that would prove lucrative. And this in particular for someone who had no other likelihood, for example, of ever owning a home. I was probably not going to be able to afford graduate school (had barely paid for state college by working minimum wage jobs throughout college and over the summers) and regardless I was not the most studious person. (My degree was Education — I had expected a modest income as a career high school teacher).

But as I say, I enjoyed programming at first. And when it arrived, the web was just a giant BBS as far as I was concerned and so of course I liked it. But it is possible to find a thing that you really like can go to shit over the ensuing decades. (And for that matter, my duties as an engineer got shittier as well as the career "evolved". I had not originally signed up for code reviews, unit tests, scrum, etc. Oh well.)

Money as a pursuit made sense to me after I was in the field and saw that others around me were doing quite well — able, as I say, to afford to buy a home — something I had assumed would always be out of reach for me (my single mother had always rented, I assumed I would as well — oh, I still had a modest college loan to pay off too). So I learned about 30-year home loans, learned about the real estate market in the Bay Area, learned also about RSUs, capital gains tax, 401Ks, index funds, etc.

But as is becoming a theme in this thread (?) at some point I was satisfied that I had done enough to secure a home, tools for my hobbies, and had raised three girls — paid for their college. I began to see the now burdensome career I was in as an albatross around my soul. The technology that I had once enjoyed, made my career on the back of, had gone sour.

replies(1): >>43653433 #
56. criddell ◴[] No.43653365{5}[source]
You can argue whatever you want to argue.

I make my living with my brain so I do worry about the downsides of removing boredom and mental struggle from my days.

replies(2): >>43653975 #>>43654314 #
57. Gracana ◴[] No.43653372{4}[source]
People thought that the speed itself was dangerous, that the wind and vibration and landscape screaming by at 25mph would cause physical and mental harm.
58. broabprobe ◴[] No.43653410[source]
huh, I wonder if he has relayed this story multiple times, I’m only familiar with this version, https://www.goodreads.com/quotes/12020560-talking-about-when...

“(talking about when he tells his wife he’s going out to buy an envelope) Oh, she says well, you’re not a poor man. You know, why don’t you go online and buy a hundred envelopes and put them in the closet? And so I pretend not to hear her. And go out to get an envelope because I’m going to have a hell of a good time in the process of buying one envelope. I meet a lot of people. And, see some great looking babes. And a fire engine goes by. And I give them the thumbs up. And, and ask a woman what kind of dog that is. And, and I don’t know. The moral of the story is, is we’re here on Earth to fart around. And, of course, the computers will do us out of that. And, what the computer people don’t realize, or they don’t care, is we’re dancing animals.”

― Kurt Vonnegut

replies(1): >>43654889 #
59. FollowingTheDao ◴[] No.43653433{5}[source]
It went sour for the same reason that you were "satisfied that I had done enough to secure a home, tools for my hobbies, and had raised three girls — paid for their college"; Money and selfishness. You were looking out for you and your little group.

You got yours. Now what?

replies(1): >>43653650 #
60. ido ◴[] No.43653562[source]
That AI is dangerous, and that the closer we get to the danger zone, the more important it is for the companies developing these technologies to understand that it might be better to slow down and make sure it's safe than to push ahead at maximum speed.
replies(2): >>43655294 #>>43674048 #
61. Isamu ◴[] No.43653575[source]
>After I cracked the trick of tillering

Guide to Bow Tillering:

https://straightgrainedboard.com/beginners-guide-on-bow-till...

62. cafard ◴[] No.43653621{5}[source]
During most of my single days, I didn't have to pretend to be poor as fuck. On the other hand, I didn't really need to impress my father.
63. JKCalhoun ◴[] No.43653650{6}[source]
If you're trying to convince me that I was somehow part of the problem, it's not reaching me. I was as low(ly) as you can get in the "tech industry stack". While I still had some measure of agency as an engineer I added a crayon color picker to MacOS, added most of the PDF features people like in MacOS Preview. That was as much "driving the ship" as I was allowed — until I wasn't even allowed that.

I could have skipped sooner maybe?

Once I had kids though I found I had a higher tolerance for a job getting shittier, a lower tolerance for restarting in a new career. So I put up with a worsening job for them.

I quit the moment my last daughter left for college.

replies(3): >>43654108 #>>43654745 #>>43659774 #
64. UncleMeat ◴[] No.43653668{3}[source]
The LessWrong-style AI risk is "AI becomes so superhuman that it is indistinguishable from God and decides to destroy all humans and we are completely powerless against its quasi-divine capabilities."
replies(1): >>43659835 #
65. ChrisMarshallNY ◴[] No.43653747{6}[source]
Sorry to hear that. Not our fault, and it won't make your life any better to be bitter about it. It certainly doesn't help you, in the least, to be attacking folks in a public professional forum.

You're also not the only one doing charity work.

Just sayin'.

replies(1): >>43653923 #
66. nkozyra ◴[] No.43653836[source]
> it's about something mentioned earlier in the article: "make as much money as I can".

I think it's a little deeper than that. It's the democratization of capability.

If few people have the tools, the craftsman is extremely valuable. He can make a lot of money without a glut of knowledge or real skill. In general, people don't have the tools and skills to catch up to where he is. He is wealthy with only frontloaded effort.

If everyone has the same tools, the craftsman still has value, because of the knowledge and skillset developed over time. He makes more money because his skills are valuable and remain scarce; he's incentivized to further this skillset to stay above the pack, continue to be in demand, and make more money.

If the tools do the job for you, the craftsman has limited value. He's an artifact. No matter how much he furthers his expertise, most people will just turn the tool on and get good enough product.

We're in between phase 2 and 3 at the moment. We still test for things like algorithm design and ask questions in interviews about the complexity of approaches. A lot of us still haven't moved on to the "ok but now what?" part of the transition.

The value now lies less in knowing how the automation works and improving our knowledge of the underlying design, and more in knowing how to use the tools in ways that produce more value than the average Joe. It's a hard transition for people who grew up thinking this was all you needed to get a comfortable or even lucrative life.

I'm past my SDE interview phase of life now and in seeking engineers I'm looking less for people who know how to build a version of the tool and more people who operate in the present, have accepted the change, and want to use what they have access to and add human utility to make the sum of the whole greater than the parts.

To me the best part of building software was the creativity. That part hasn't changed. If anything it's more important than ever.

Ultimately we're building things to be consumed by consumers. That hasn't changed. The creek started flowing in a different direction, and your job in this space is not to keep putting rocks where the water used to go, but to accept that things are different and adapt.

replies(1): >>43657382 #
67. OisinMoran ◴[] No.43653925[source]
I love Vonnegut and this specific piece you link, but not sure it's really talking about the same thing as the main link.
68. Workaccount2 ◴[] No.43653975{6}[source]
It's almost certainly going to be bad, and almost certainly going to be unavoidable.

I can't spell for shit anymore. Ever since auto correct became omnipresent in pretty much all writing fields, my brain just kinda ditched remembering how to spell words.

buuuttt

Manual labor has been obsolete for at least 100 years now for certain classes of people, and fitness is still an enormous recreational activity people partake in. So even in an AI heavy society, I still strongly suspect there will be "brain games" that people still enjoy and regularly play.

replies(2): >>43654744 #>>43658609 #
69. sepositus ◴[] No.43654094{8}[source]
Do you think this style of argumentation is constructive and beneficial for the broader good of society? I can't think of a single person I've met who would respond positively to being labeled a (partial) sociopath after being able to only express a couple of paragraphs of thought.
replies(1): >>43654429 #
70. chipsrafferty ◴[] No.43654108{7}[source]
I don't think they're blaming you per se; they're saying the reason you didn't enjoy it is that you did it for money.
replies(1): >>43654541 #
71. TimorousBestie ◴[] No.43654152{5}[source]
In fact it has been, dating all the way back to Phaedrus.

> If men learn [writing], it will implant forgetfulness in their souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.

72. chipsrafferty ◴[] No.43654184{3}[source]
> But there's something so... so fucking nakedly exhibitionist and narcissistic about these kinds of posts.

You've precisely defined why nobody takes LessWrong seriously.

73. steve_adams_86 ◴[] No.43654314{6}[source]
Me too.

There is another side to this, which is maybe we don’t need to know a lot of things.

It was true with search engines already, but maybe truer with LLMs. That thing you’re querying probably doesn’t actually matter. It’s neurotic digging and searching for an object you will never use or benefit from. The urge to seek is strong but you won’t find the thing you’re searching for this way.

You might learn more by just going for a walk.

74. red_admiral ◴[] No.43654355[source]
The story of playing at damming the creek or on the sand at the seaside is wholesome and brought a smile to my face. Cracking the "puzzle" is almost the bad ending of the game, if you don't get any fun out of playing it anymore.

People should spend more of their time doing things because they're fun, not because they want to get better at it.

Maybe the apocalypse will happen in our lifetime, maybe not. I intend to have fun as much as I can in my life either way.

75. chipsrafferty ◴[] No.43654374[source]
A finance job is a zero-sum game. Most tech jobs are negative sum, in that they make the world worse. You have the wrong takeaway here. Companies like Amazon and Google and OpenAI and the like are not-so-slowly destroying our planet and companies like Citadel just move money around.
76. FollowingTheDao ◴[] No.43654429{9}[source]
Yes, I do. I'm probably never going to change his mind (maybe I will, but probably not), but other people reading this can take sides and think about it in a non-direct way.

Jesus turned over tables when they were trying to profit inside the church. His movement seemed to turn out pretty good.

replies(5): >>43654565 #>>43655510 #>>43655910 #>>43657422 #>>43660685 #
77. JKCalhoun ◴[] No.43654541{8}[source]
Hmmm... Is it because I came to tolerate it for the paycheck that it sucked or is it possible it began to suck first?

I get that money coming into the industry made the whole industry suck. Honestly, Apple was a much more fun place to work at when there was no money to be made there (no more than a paycheck, anyway). Others may disagree, but I found its success made it an increasingly shittier place to work. (Others though, as I say, may have enjoyed the wider reach the platform enjoyed with its success.)

78. sepositus ◴[] No.43654565{10}[source]
Fair enough, but I would be concerned about the people who think that making (offensive) psychiatric diagnoses over the internet is a good thing that should be promoted. In the best case, it only confirms people's biases and does nothing to move the needle towards unity rather than continued division.

> Jesus turned over tables when they were trying to profit inside the church. His movement seemed to turn out pretty good.

Applying this story to posting anonymous comments on an internet forum seems like a stretch. There are hardly any meaningful consequences for your decision to write in this way, whereas Jesus very much became a target after that demonstration.

79. bogdanoff_2 ◴[] No.43654594[source]
The solution to this problem is to choose a "game" that you 100% believe will positively impact the world.
80. munificent ◴[] No.43654608{6}[source]
It tickles me that this quote came from a YA novel of all places, but in The Perks of Being a Wallflower, Chbosky writes "We accept the love we think we deserve".

If that isn't one of the deepest aphorisms on psychology out there, I don't know what is.

81. criddell ◴[] No.43654744{7}[source]
We aren't talking about something like spelling or digging a hole. We're talking about a fundamental cognitive skill: reading eight short paragraphs of text and extracting meaning from it.
replies(1): >>43658661 #
82. ToucanLoucan ◴[] No.43654745{7}[source]
> added most of the PDF features people like in MacOS Preview.

I'm not religious, but for this alone you deserve a life of blessings and happiness. The fact that I never ever have to fuck around with Adobe PDF apps to juggle PDFs is one of the load-bearing things keeping me sane in an insane world.

replies(2): >>43655302 #>>43655597 #
83. Thorrez ◴[] No.43654889{3}[source]
He ignores his wife's suggestion because, among other things, he wants to see some great looking babes. Maybe this isn't a guy whose philosophy I want to follow.
replies(1): >>43655640 #
84. appleorchard46 ◴[] No.43655294{3}[source]
Thank you.
85. wulfstan ◴[] No.43655302{8}[source]
Yes. I used these features in Preview several times today. You have made my life easier on many occasions. For that, sir, I salute you.

May you enjoy your retirement tinkering in your software garden.

replies(1): >>43655965 #
86. dvaun ◴[] No.43655510{10}[source]
Have you considered Socratic questioning and other forms of conversation, in order to effect more change?

See https://www.streetepistemology.com/ for content about this. It is possible to guide discussions in a healthy manner and with positive goals in mind.

replies(1): >>43663722 #
87. ryandrake ◴[] No.43655591{5}[source]
> she said, "my friends all said he was too short, but I told them he was taller when he was standing on his wallet". Some people laughed. I didn't.

Hey, as long as they are both up front and clear about what they are getting out of their relationship. They're grown adults after all. I knew someone who proudly would admit he was a "sugar daddy" and both he and his "girlfriends" would fully agree that their relationships were transactional and contingent on the money flow. I knew someone in college who was very open and unapologetic that her plan was to find and marry someone rich. There's no right and wrong.

88. ChrisMarshallNY ◴[] No.43655597{8}[source]
I second [third?] this.

I can’t stand Adobe Reader, and use Preview, all the time.

89. ryandrake ◴[] No.43655640{4}[source]
Looks like you're completely missing the point of the quote and instead rat-holing on one word that you don't like. HN in a nutshell.
replies(1): >>43665097 #
90. ziofill ◴[] No.43655807[source]
This is a tangent, but I would love so much to be able to give my kids memories of playing in a creek in the backyard...
91. deathanatos ◴[] No.43655879{3}[source]
As a child at the beach, I'd have thought noticing the clams would result in attempting to unearth them: childhood curiosity about why there are bubbles.

Your explanation makes more sense, however.

92. saagarjha ◴[] No.43655910{10}[source]
Other people reading it (hello) are also unlikely to take your side if you call random commenters here sociopaths.
93. ToucanLoucan ◴[] No.43655965{9}[source]
For tax season alone! I'm in and out of Preview constantly, looking at PDFs, sorting the pages out, flipping scans. Utterly indispensable software. It feels crazy that it just comes free with the OS.
94. ern ◴[] No.43656156{4}[source]
I see AI like the reading glasses I’ll soon need — not because I can’t think clearly, but because it helps cut through things faster when my brain’s juggling too much.

A few years ago, I’d have quietly filed this kind of article under “too hard” or passed a log analysis request from the CIO down the line. Now? I get AI to draft the query, check it, run it, and move on. It’s not about thinking less — it’s about clearing the clutter so I can focus where it counts.

95. LinuxAmbulance ◴[] No.43656596{3}[source]
Corporations certainly have advantages over individuals, but classifying them as superhuman intelligences misses the mark. I'd go with a blind ravenous titan instead.
96. DrSiemer ◴[] No.43656731{3}[source]
This is incorrect. You very much need to know how to program to make an AI build an app for you. Language models are not capable of creating anything new without significant guidance and at least some understanding of the code, unless you're asking them to create projects that tutorials have been written about. AI in its current form is also just "a tool you can work with".

Like with the calculator, why would you need to be able to calculate things on paper if you can just have a machine do it for you? Same goes for more advanced AI: what's the point of being able to do things without them?

Not to offend, but in my opinion that's nothing more than a romantic view of what humans "should be capable of". 10 years from now we can all laugh at the idea of people defending doing stuff without AI assistance.

replies(1): >>43657122 #
97. hacb ◴[] No.43657122{4}[source]
Of course, an AI will not "magically" code an app the same way 10 developers will do in a year, I don't think we disagree on this.

However, it allows you to do things you don't understand. I'm again taking examples from what I see at my university (n=1): almost all students deliver complex programming projects involving multi-threading, but can't answer a basic quiz about the same language in person. And by basic question I mean "select among the propositions listed below the correct keyword used to declare a variable in Golang". I'm not kidding: at least one-third of the class actually answers this wrong.

So yeah, maybe we as a society agree on the fact that those people will not be software engineers, but prompt engineers. They'll send instructions to an agent that will display text in a strange and cryptic language, and maybe when they'll press "Run" lights will be green. But as a professional, why should I hire them once they earned their diploma? They are far from being ready for the professional world, can't debug systems without using LLMs (and maybe those LLMs can't help them because the company context is too important), and most importantly they are way less capable than freshly graduated engineers from a few years back.

> 10 years from now we can all laugh at the idea of people defending doing stuff without AI assistance.

I hope so, but I'm quite pessimistic, unfortunately. Expertise and the capacity for focus are dying, and we are relying more and more on artificial "intelligence" and its biases. But time will tell.

replies(1): >>43658209 #
98. fragmede ◴[] No.43657305[source]
> The calculator did not erase math wizards

But it did. Quick, what's 67 * 49? A math wiz would furrow their brow for a second and be able to spit out an answer, while the rest of us have to pull out a calculator. When you're doing business in person and have to move numbers around, having to stop and use a calculator slows you down. If you don't have a role where that's useful, then it's not a needed skill and you don't notice it's missing, like riding a horse; but that doesn't mean the skill itself wouldn't be useful to have.
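
To spell out the worked answer (the shortcut a mental-math wiz would reach for): 67 * 49 = 67 * 50 - 67 = 3350 - 67 = 3283.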

99. BrenBarn ◴[] No.43657382{3}[source]
I don't agree. "Capability" is a red herring. It's not about what we can do, it's about what we allow ourselves to do.
100. babelfish ◴[] No.43657422{10}[source]
Have you tried following the dao, instead?
replies(1): >>43663748 #
101. A_D_E_P_T ◴[] No.43657902{5}[source]
"AI Safety" in that particular context is easy: Keep humans in the loop and don't give AIs access to sensitive systems. With certain small antipersonnel drones excepted, this is already the policy of all serious militaries.

Besides, that's simply not what the LW crowd is talking about. They're talking about, e.g., hypercompetent AIs developing novel undetectable biological weapons that kill all humans on purpose. (This is the "AI 2027" scenario.)

Yet, as far as I'm aware, there's not a single important discovery or invention made by AI. No new drugs, no new solar panel materials, no new polymers, etc. And not for want of trying!

They know what humans know. They're no more competent than any human; they're as competent as low-level expert humans, just with superhuman speed and memory. It's not clear that they'll ever be able to move beyond what humans know and develop hypercompetence.

replies(1): >>43659622 #
102. DrSiemer ◴[] No.43658209{5}[source]
Isn't it irrelevant that students do not have the answer to a basic quiz though? In a real life situation, they can just _ask an LLM_ if they need to know something.

I don't believe having this option will make people a lot less functional. Sure, some may slip through the cracks by faking it, but we'll soon develop different metrics to judge somebody's true capabilities. Actually, we'll probably create AI for that as well.

As a professional, you hire people who get things done. If that means hiring skilled LLM users, that do not fully understand what they produce, but what they make consistently works about as often as classic dev output does, and they do this in a fraction of the time... You would be crazy _not_ to hire them.

It's true that inexperienced developers will probably generate a massive tech debt during the time where AI is good enough to provide code, but not good enough to fish out hidden bugs. It will soon surpass humans at that skill though, and can then quickly clean up all the spaghetti.

Over the last two years my knowledge on how to perform and automate repetitive and predictable tasks has gradually worn away, replaced by a higher level understanding of software architecture. I use it to guide language models to a desired outcome. For those that want to learn, LLM's excel at explaining code. For this, and plenty of other subjects, it's the greatest learning tool we have ever had! All it takes is a curious mind.

We are in a transitional time and we simply need to figure out how to deal with this new technology, warts and all. It's not like there is an alternative scenario; it's not going to go away...

103. profsummergig ◴[] No.43658609{7}[source]
> I can't spell for shit anymore.

This is increasingly happening to me every day. Hope the alien overlords don't have spelling tests (as their version of IQ tests) to separate the serfs from the field-masters.

104. turtleyacht ◴[] No.43658616[source]
State of the Art (SOTA)
105. profsummergig ◴[] No.43658661{8}[source]
> eight short paragraphs of text

Fair point. But they are heavily metaphor-laden paragraphs.

Textual interpretation is a highly subjective activity. Entire careers consist of interpreting, reinterpreting, and discussing texts that others have already interpreted. Film critics, book reviewers, political pundits, TV anchors, podcasters, etc.

'In 1972, Chinese premier Zhou Enlai was asked about the impact of the French Revolution. "Too early to say," he replied'

I had my own sense of what the "coquina" metaphor stood for. I wanted to see other peoples' interpretations. Turns out my interpretation was wrong.

106. lukeschlather ◴[] No.43659076[source]
The thing about LLMs is that they're trained exclusively on text, and so they don't have much insight into these sorts of problems. But I don't know if anyone has tried making a multimodal LLM that is trained on x-ray tomography of parts under varying loads tagged with descriptions of what the parts are for - I suspect that such a multimodal model would be able to give you a good answer to that question.
107. groby_b ◴[] No.43659525[source]
No, the LLMs aren't going to kill us all. Neither are they going to help a determined mass murderer to get us all.

They are, however, going to enable credulous idiots to drive humanity completely off a cliff. (And yes, we're seeing that in action right now). They don't need to be independent agents. They just need to seem smart.

108. ben_w ◴[] No.43659622{6}[source]
> With certain small antipersonnel drones excepted

And mines, and the CIWS I linked to and several like it (I think SeaRAM is similar autonomy to engage), and the Samsung SGR-A1 whose autonomy led to people arguing that we really ought to keep humans in the loop: https://en.wikipedia.org/wiki/Lethal_autonomous_weapon

The problem is, the more your adversaries automate, the more you need to automate to keep up. Right now we can even have the argument about the SGR-A1 because it's likely to target humans who operate at human speeds and therefore a human in the loop isn't a major risk to operational success. Counter Rocket Artillery Mortar systems already need to be autonomous because human eyes can't realistically track a mortar in mid-flight.

There were a few times in the Cold War when it was lucky that a lack of technology forced us to rely on humans in the loop, humans who said "no".

People are protesting against fully autonomous weapons because they're obviously useful enough to be militarily interesting, not just because they're obviously threatening.

> Besides, that's simply not what the LW crowd is talking about.

LW talks about every possible risk. I got the flash-war idea from them.

> Yet, as far as I'm aware, there's not a single important discovery or invention made by AI. No new drugs, no new solar panel materials, no new polymers, etc. And not for want of trying!

For about a decade after Word Lens showed the world that it was possible to run real time augmented reality translations on a smartphone, I've been surprising people — even fellow expat software developers — that this exists and is possible.

Today, I guess I have to surprise you with the 2024 Nobel Prize in Chemistry. Given my experience with Word Lens, I fully expect to keep on surprising people with this for another decade.

Drugs/biosci:

• DSP-1181: https://www.bbc.com/news/technology-51315462

• Halicin: https://en.wikipedia.org/wiki/Halicin

• Abaucin: https://en.wikipedia.org/wiki/Abaucin

The aforementioned 2024 Nobel Prize for AlphaFold: https://en.wikipedia.org/wiki/List_of_Nobel_laureates_in_Che...

PV:

• Materials: https://www.chemistryworld.com/news/ai-aids-discovery-of-sol...

• Other stuff: https://www.weforum.org/stories/2024/08/how-ai-can-help-revo...

Polymers:

https://arxiv.org/abs/2312.06470

https://arxiv.org/abs/2312.03690

https://arxiv.org/abs/2409.15354

> They know what humans know. They're no more competent than any human; they're as competent as low-level expert humans, just with superhuman speed and memory. It's not clear that they'll ever be able to move beyond what humans know and develop hypercompetence.

One of the things humans know is "how to use lab equipment to get science done": https://www.nature.com/articles/s44286-023-00002-4

"Just with superhuman speed and memory" is a lot, even if they were somehow otherwise limited to a human equivalent of IQ 90.

109. ben_w ◴[] No.43659774{7}[source]
> While I still had some measure of agency as an engineer I added a crayon color picker to MacOS

Oh, that was you? One of the few bits of skeuomorphism I can still find in the OS. I wish we still had more of that.

110. ben_w ◴[] No.43659835{4}[source]
With the side-note that, historically, humans have found themselves unable to distinguish a lot of things from God, e.g. thunderclouds — and, more recently, toast.
111. pseudalopex ◴[] No.43660685{10}[source]
> other people reading this can take sides

Fewer people can read dead comments. And what others said.

112. FollowingTheDao ◴[] No.43663722{11}[source]
I am not trying to affect change. I am expressing myself.
113. FollowingTheDao ◴[] No.43663748{11}[source]
If you encounter a mother bear with her cubs in the woods she will attack you. Her natural response has no morality. I am the same, that is how you follow the Dao.

It is funny that people think Daoists do not get angry, and yet all of you suppress anger in some unnatural way to "get along" or to make sure you are not downvoted.

114. Thorrez ◴[] No.43665097{5}[source]
I understand the point of the quote. My point was that if someone advocates for something I consider immoral, I prefer to disregard the overall philosophy that led the person to advocate for that.

If the quote threw in some casual racism, or advocated for stealing the envelope instead of paying for it, I would similarly disregard the overall philosophy.

115. selimthegrim ◴[] No.43671762{5}[source]
This man understands why I do what I do. Nobody passed the test so far though. Previous classmates of mine just heckled me and ran and hid when I saw them.
116. nektro ◴[] No.43674048{3}[source]
if they were capable of that they wouldn't have started. but the second best time to stop is now, so here's hoping they're open to our calls