We need to teach our students that the employment they take doesn't exist in a vacuum. Your choice of employer can impact not only yourself but the wider world. There's more to life than intellectual satisfaction.
They pay well, and that’s where the interest ends. There are plenty of challenges in gluing CRUD together at a large enough scale, but it’s not exactly valuable to the greater world.
I think this is important, especially in tech. Our contributions often change the world, even in little ways, but this compounds.
The minimal standard we should teach our students is to be part of the solution, not the problem, and that sitting on the fence counts as being on the side of the problem. Working for a "neutral" employer is just not good enough. There are plenty of worthwhile alternatives out there. We all should try to make the world a better place in some small way.
1. https://archive.ph/LwvMA
2. https://time.com/6293398/palantir-future-of-warfare-ukraine/
And I have no worries that the billionaires will make sure their views and values are aired and widely known, so students will be very much able to make up their own mind.
The problem with leaving it to parents is that parents are not uniformly qualified or interested in doing so, and it’s in society’s best interests not to leave important things to chance.
That seems like a very uncharitable take. For instance, don't you think the section on morality[1] addresses this head on?
Grey areas. By this I mean ‘involve morally thorny, difficult decisions’: examples include health insurance, immigration enforcement, oil companies, the military, spy agencies, police/crime, and so on.
Every engineer faces a choice: you can work on things like Google search or the Facebook news feed, all of which seem like marginally good things and basically fall into category 1. You can also go work on category 2 things like GiveDirectly or OpenPhilanthropy or whatever.
The critical case against Palantir seemed to be something like “you shouldn’t work on category 3 things, because sometimes this involves making morally bad decisions”. An example was immigration enforcement during 2016-2020, aspects of which many people were uncomfortable with.
But it seems to me that ignoring category 3 entirely, and just disengaging with it, is also an abdication of responsibility. Institutions in category 3 need to exist. The USA is defended by people with guns. The police have to fight crime, and - in my experience - even people who are morally uncomfortable with some aspects of policing are quick to call the police if their own home has been robbed. Oil companies have to provide energy. Health insurers have to make difficult decisions all the time. Yes, there are unsavory aspects to all of these things. But do we just disengage from all of these institutions entirely, and let them sort themselves out?
I don’t believe there is a clear answer to whether you should work with category 3 customers; it’s a case by case thing. Palantir’s answer to this is something like “we will work with most category 3 organizations, unless they’re clearly bad, and we’ll trust the democratic process to get them trending in a good direction over time”. Thus:
On the ICE question, they disengaged from ERO (Enforcement and Removal Operations) during the Trump era, while continuing to work with HSI (Homeland Security Investigations).
They did work with most other category 3 organizations, on the argument that they’re mostly doing good in the world, even though it’s easy to point to bad things they did as well.
I can’t speak to specific details here, but Palantir software is partly responsible for stopping multiple terror attacks. I believe this fact alone vindicates this stance.
This is an uncomfortable stance for many, precisely because you’re not guaranteed to be doing 100% good at all times. You’re at the mercy of history, in some ways, and you’re betting that (a) more good is being done than bad, and (b) being in the room is better than not. This was good enough for me. Others preferred to go elsewhere.
The danger of this stance, of course, is that it becomes a fully general argument for doing whatever the power structure wants. You are just amplifying existing processes. This is where the ‘case by case’ comes in: there’s no general answer, you have to be specific. For my own part, I spent most of my time there working on healthcare and bio stuff, and I feel good about my contributions. I’m betting the people who stopped the terror attacks feel good about theirs, too. Or the people who distributed medicines during the pandemic.
Even though the tide has shifted and working on these ‘thorny’ areas is now trendy, these remain relevant questions for technologists. AI is a good example – many people are uncomfortable with some of the consequences of deploying AI. Maybe AI gets used for hacking; maybe deepfakes make the world worse in all these ways; maybe it causes job losses. But there are also major benefits to AI (Dario Amodei articulates some of these well in a recent essay).
As with Palantir, working on AI probably isn’t 100% morally good, nor is it 100% evil. Not engaging with it – or calling for a pause/stop, which is a fantasy – is unlikely to be the best stance. Even if you don’t work at OpenAI or Anthropic, if you’re someone who could plausibly work in AI-related issues, you probably want to do so in some way. There are easy cases: build evals, work on alignment, work on societal resilience. But my claim here is that the grey area is worth engaging in too: work on government AI policy. Deploy AI into areas like healthcare. Sure, it’ll be difficult. Plunge in.
When I think about the most influential people in AI today, they are almost all people in the room - whether at an AI lab, in government, or at an influential think tank. I’d rather be one of those than one of the pontificators. Sure, it’ll involve difficult decisions. But it’s better to be in the room when things happen, even if you later have to leave and sound the alarm.
However, this differs from universities teaching students which business areas are more moral to work in than others. Who would have the authority to decide which businesses are more ethical? Some argue that working in the defense industry is the least ethical career choice, while others claim it would be immoral not to support a country's right to purchase weapons for self-defense. These judgments are often subjective and could be heavily influenced by individual teachers' biases.
> leave moral instruction to parents and other institutions like it should be.
Should be, according to what doctrine? It certainly sounds like you're attempting to establish institutional moral instruction by imposing limits on when and where morality can be discussed.
Why are we allowed to teach students astronomy but not morality? Go back further and we couldn't even freely teach astronomy. Do you remember Galileo's trial for heresy? Or Socrates' condemnation to death for "corrupting the youth"? This war over teaching the ability to capably assess ethics and morality has been raging since before you, I, Hacker News, universities, the internet, the printing press...
If you don't think it was right to kill Socrates for simply spreading the message of critical thinking, then you have to accept that adults can organize to teach whatever they wish at universities, assuming it doesn't run afoul of Constitutional protections.
For the most part it's an accurate representation of how morals are appropriated into institutions like academia.
As important as qualities like community and a shared notion of a common good in humanity are, the system as it stands will render them according to its own interests, and students will exit none the wiser. Character becomes standardized into a set of "values" of an entirely different sort.
The problem is that students inevitably become parents, and some inevitably branch out into "other institutions" professionally, espousing Moral Character®, and we're left to figure out who contaminated what?
The baby or the bathwater?
However, I cannot more strongly disagree with your implicit assumption of innocence for "category 1." Facebook alone is unquestionably more harmful than Palantir, and any purely for-profit entity is by necessity intentionally unanchored to any ethical foundation at all. Facebook is known for explicitly supporting genocidal regimes abroad, and for intentionally ignoring white supremacy, child abuse, and domestic terrorism here in the US, all while being very explicit about not cooperating with the government agencies responsible for combatting these issues.
To that end, I would extend your thesis to the effect that people who eschew category 3 for category 1 aren't simply abdicating social responsibility, but are hypocritically engaged in substantially more socially harmful behaviors.
Sure, Palantir leads to people dying, and sometimes those people are innocent bystanders, but those actions are the result of any engagement with the public sector. Facebook is a direct progenitor of genocide abroad and fascism stateside, and is wholly untethered from either conscience or consequence. Category 1 is worse.
I assume you've lived long enough to witness an internet stewarded by those who place ethics or morality above purely capitalistic motivation, vs. an internet stewarded by a generation of new-age, fake-ethical "They 'trust me'. Dumb fucks" tech entrepreneurs.
Similarly, doctors learn medical ethics, and, of course, not every question has the "right" answer. Partly, medical (and research) ethics is about knowing what constitutes malpractice under current law, but it's also about some more general ideas (on which the law might be based) that are hard to quantify. Here's one example: during a drug trial, if the interim results show that the newly suggested treatment is unambiguously better than the one given to the control group, the researcher is compelled to stop the trial and just move everyone to the new drug. But the reality is rarely so clear-cut. The researcher might not be confident in the accuracy of the interim results. While the average outcome from a particular treatment might improve, it might also worsen the situation for some outliers in the target group, etc. All this can leave the researcher needing to choose between continuing and stopping the trial with no clear best choice.
followed by a one clause stone-throw. Irony?
So, the major democracies are imperialist powers? Do you live in a small dictatorship? If not, to be consistent with the rock you just threw, you don't pay your taxes? Do you just not take responsibility for anything? Because that's what he's arguing Palantir does.
Here's another take: since WW2 there's been a messy but semi-stable competition between the great powers expressed most visibly through a series of proxy wars near the perimeter of Russia and China. However, the competition is also expressed in the global economy, on the networks, in space, in the oceans. Turns out good people are often forced into ethically tenuous situations and in a world with 8 billion people, every one of whom has lots of opinions, there's a lot of possibility for entirely reasonable people to find themselves in life-and-death struggles.
Wolf packs defend their resources, mainly by marking their territorial boundaries but occasionally they fight. Are they unethical in doing so? Are we any different?
Words have meanings and neither of the terms you used are appropriate for this context. It’s possible that there could be an issue with the way standards are formulated but that’d be specific to a particular situation rather than inherent to the concept.
It's all well and good to say that your chosen priest caste won't exert hierarchical control, pinky swear, but history and human nature disagree with you.
It's also odd to suggest that we can teach a system of deriving ethics or morality. Philosophers have been hard at work on this for a long time and haven't gotten terribly far, and they disagree with each other quite strenuously.
That's a priestly caste. Or if, as you say, there may be a problem in the formulated standards, then the body that formulates the standards would be the priestly caste. I don't have a problem with the concept, actually, but it's best to call it what it is. Pretending that this would be perfectly neutral is daft.
When I taught design I ended one of my courses with a lecture and discussion on ethics, and I'd like to think I was pretty even-handed. One common issue that most young designers encounter is being asked to implement dark patterns that improve the company's profits at the expense of the end-user's well-being. The goal of that lecture was not to tell students what is right and what is wrong, but to get them to think critically about the effects of their decisions on end-users, customers, society, and the planet. But those answers are different for everyone; in my case, for example, I was more ethically comfortable working on US military projects than on projects involving advertising, social media, gambling, or other forms of psychological manipulation.
They teach the different ethical frameworks, where they come from, and then get you to apply them to different situations. The classes don't tell you what's right and what's wrong, but rather, the different frameworks people can use to determine that.
Put another way, real engineers, doctors, scientists who work with human subjects, lawyers, finance people, etc. do not seem to have a conceptual hazard from professional ethics codes. Why would we expect software development to be so different?
In terms of parents not being qualified: who are you, or anyone else for that matter, to say who is or is not qualified to instruct their own children in morality? It is an entirely subjective topic and certainly should not be given over to corrupted institutions. Moreover, do you really believe folks like Elizabeth Holmes, Jeff Skilling, SBF (whose mother is a legal ethicist!!), and all the nameless, white-shoed McKinsey criminals haven't received "ethical" instruction in their coursework? And how has that panned out? SBF is a particularly great example, as his mother, whom you would no doubt have deemed "qualified", reared one of the worst criminals of this generation.
Let the universities focus on the efficient discovery and dissemination of truth, and discard the wasteful, useless mis-education. Fire 90% of the admins, and tie student lending to financial outcomes of students. All of the grievance studies degrees that purport to provide ethical and moral training would vaporize overnight!
It isn't. Recall the big push on DEI initiatives, quite similar to the push to remove blacklist/whitelist or master/slave in the software world. Or the guardrails put onto LLMs so they don't become antisemitic or whatever. Why was it a good thing to do? Because the priestly caste said it was, and tolerated no questions about it. You seem to be unaware of the concept of institutional capture.
And, yes, all teaching involves some sort of bias. We haven't yet created the human that is free from bias.
In any event, the poster I replied to also included "morality" alongside "ethics", which is why I suggest it's not as cut and dried as you imply.
Parents can teach right and wrong, but they seldom teach about things like utilitarianism or hedonic treadmills.
That said, who did SBF largely derive his ethics from? His parents, at home, not at his mom's lectures. So all this does is illuminate why it's important for people to get exposed to a wider variety of opinions and ethical considerations.