
1630 points by dang | 14 comments

Like everyone else, HN has been on a political binge lately. As an experiment, we're going to try something new and have a cleanse. Starting today, it's Political Detox Week on HN.

For one week, political stories are off-topic. Please flag them. Please also flag political threads on non-political stories. For our part, we'll kill such stories and threads when we see them. Then we'll watch together to see what happens.

Why? Political conflicts cause harm here. The values of Hacker News are intellectual curiosity and thoughtful conversation. Those things are lost when political emotions seize control. Our values are fragile—they're like plants that get forgotten, then trampled and scorched in combat. HN is a garden, politics is war by other means, and war and gardening don't mix.

Worse, these harsher patterns can spread through the rest of the culture, threatening the community as a whole. A detox week seems like a good way to strengthen the immune system and to see how HN functions under altered conditions.

Why don't we have some politics but discuss it in thoughtful ways? Well, that's exactly what the HN guidelines call for, but it's insufficient to stop people from flaming each other when political conflicts activate the primitive brain. Under such conditions, we become tribal creatures, not intellectually curious ones. We can't be both at the same time.

A community like HN deteriorates when new developments dilute or poison what it originally stood for. We don't want that to happen, so let's all get clear on what this site is for. What Hacker News is: a place for stories that gratify intellectual curiosity and civil, substantive comments. What it is not: a political, ideological, national, racial, or religious battlefield.

Have at this in the thread and if you have concerns we'll try to allay them. This really is an experiment; we don't have an opinion yet about longer-term changes. Our hope is that we can learn together by watching what happens when we try something new.

tarikjn ◴[] No.13108655[source]
I find this experiment a bit strange/disturbing; avoiding political subjects is a way of burying our heads in the sand. HN is a community of hackers and entrepreneurs, and politics affects these subjects one way or another whether we want to avoid it or not; it is an important component of entrepreneurial and technical subjects. It might be fine if HN were a scientific community, but that is not the case, and even then politics interacts with science: one can conduct scientific experiments on government decisions, and politics can attack the positions of the scientific community (e.g. climate change).

The way this sounds is that you are more concerned about politics in the sense of people who take party positions and may feel excluded as a group when the majority of the community takes a different position. That is a slightly different issue, i.e. party politics, and I think addressing it is fine/a good thing, but it is also important to distinguish the two. It should essentially fall under the same umbrella as personal attacks, since they are essentially the same thing.

replies(36): >>13108789 #>>13108826 #>>13108956 #>>13109024 #>>13109085 #>>13109124 #>>13109126 #>>13109160 #>>13109168 #>>13109250 #>>13109253 #>>13109552 #>>13109613 #>>13109650 #>>13109771 #>>13109861 #>>13109881 #>>13110130 #>>13110143 #>>13110264 #>>13110288 #>>13110291 #>>13110317 #>>13110358 #>>13110359 #>>13110619 #>>13110735 #>>13110742 #>>13110784 #>>13110864 #>>13110921 #>>13110996 #>>13111010 #>>13111196 #>>13111315 #>>13111420 #
1. golemotron ◴[] No.13109024[source]
I find it disturbing too. Right now one of the top stories on HN is about Amazon Go and the top comment is about whether the destruction of jobs it could cause is socially acceptable.

I don't know whether that is politics or not, but I can't imagine discussing Amazon Go as a technology without having that discussion. In fact, when you look at HN, very little is about particular technologies. Most of our discussion is about the implications.

replies(4): >>13109093 #>>13109229 #>>13109320 #>>13112337 #
2. mattnewton ◴[] No.13109093[source]
I can't agree enough. I'm all for flagging uncivilized discussions, but preemptively censoring things that might turn into uncivilized discussions seems like throwing the baby out with the bathwater - we need to be able to talk about difficult things.
replies(1): >>13110131 #
3. michaelchisari ◴[] No.13109229[source]
I've been attending a lot of AI/data science conferences lately, and it's incredible to me that this is not a major part of any conversation about AI and automation.

At a talk I recently attended, a data scientist from Amazon gloated about how many jobs he could eliminate.

Ironically, the only speaker who brought it up as a major social problem we'll have to tackle is someone from Uber. His solution was less than satisfactory, but at least he recognized the issue.

I don't want to pretend we live in a world of algorithms without consequence.

replies(2): >>13109462 #>>13109908 #
4. karambahh ◴[] No.13109320[source]
As technology-inclined people, we have a duty to have that discussion.

We work on technologies that impact, in one way or another, other people's lives.

As you correctly point out, the discussion of the social impacts of Amazon Go is currently open in another thread and I consider that a must.

Another example of why we need politics here, and in our heads when we design things, is the case of Tristan Harris [0], a former Google employee.

I am not saying I agree with Tristan Harris, or with one side or the other in the Amazon Go thread, but I consider HN as a place where civil political debate needs to take place, because we have a moral duty to have it.

We are, in a way, the 1% of "technologically aware people" (and probably among the world's top 10% wealthiest...). We need to discuss these issues and we need to think before we act. I'm not trying to re-enact the 99% battle, but our privileges do come with a price, and that price is thinking before we act...

I urge the people on the Amazon Go team to have that discussion. Do they consider working on that project socially acceptable, and why?

Do I, as a SaaS marketing provider, consider my job socially acceptable, and why? That is something I, both as a citizen and a business owner, need to think about and openly discuss with my customers, shareholders and consumers/fellow citizens if need be.

I'm probably stating the obvious, but the etymology of "politics" is politika, "affairs of the cities": aren't we all, as technology workers/operators/..., living in these cities?

[0]http://www.realclearlife.com/2016/10/27/former-google-produc...

5. karambahh ◴[] No.13109462[source]
As a "data scientist" myself and more importantly as a human being, I witness the same behaviour almost every day and it baffles me.

Yes, what we do can have consequences. We need to think about that!

I have friends working for weapons manufacturers. They don't gloat about building things that can blow children up! Why the hell should we be absolved of the moral consequences of our acts?

I am not equating eliminating jobs with killing children, but I would prefer that our industry not abstain from thinking about the consequences of its trades.

I once had the choice of working for a weapons manufacturer for a very nice salary. I chose not to. But I thought about it thoroughly, and I don't blame my friends for making a different choice. I politically object to that choice, but it does not mean I am some sort of white knight... and it does not mean that sometime in the future, if presented with another opportunity, I wouldn't make a different choice...

replies(1): >>13110059 #
6. CN7R ◴[] No.13109908[source]
I think we're still in the early phase of AI, where things seem more theoretical and thus ethics is not included in the discussion. However, as we near the time when policies will have large-scale implications for our society, those consequences will be weighed. This is why I do not think AI will be a revolution but rather a gradual process. Already, the automation of cars is subject to government regulation.
replies(2): >>13109934 #>>13110273 #
7. michaelchisari ◴[] No.13109934{3}[source]
True; that said, it's much less theoretical to the people doing it than to the average blue-collar worker whose life they're disrupting.

That's why I think the industry has a moral and practical responsibility to push society to properly prepare for the results. Because we understand the implications better than anyone.

8. bduerst ◴[] No.13110059{3}[source]
It's the gun-manufacturer/shooter dissociation.

Anecdotally, I had a neighbor who programmed the guidance systems for bombs, and the only reason I remember him is because immediately after introducing himself as such, he followed up with, "But I'm not the one who's dropping them. By making them smarter I can save lives".

I think that no matter how technically intelligent a field's operators are, they are still subject to the same dissociations as everyone else.

replies(1): >>13110423 #
9. mightybyte ◴[] No.13110131[source]
We need to be able to talk about them, but it doesn't need to be here. There's value in places where you know you can come to discuss certain categories of things and avoid others. That is what's going on here.
replies(2): >>13110155 #>>13111393 #
10. golemotron ◴[] No.13110155{3}[source]
It seems that because the social implications of technology can lead to political discussion, they're out of bounds.
11. karambahh ◴[] No.13110273{3}[source]
We have been working on AI in one form or another for 50 years or more; I think it is high time for a serious debate about this.

Lisp dates from 1958, and some would argue that rule-based programming is AI. ELIZA is also more than 50 years old.

The ethics of AI have been extensively discussed for a very long time.

In essence, the debate taking place around AI is an heir of the 19th-century debate over automated looms. Karel Čapek's play R.U.R. was written in 1920, and it was already an ethical discussion of "autonomous machines"...

My first introduction to AI and its consequences and dilemmas came from Isaac Asimov's Foundation cycle, which dates back to the 1950s.

AFAIK, the Three Laws of Robotics invented by Asimov are actually used by philosophers & AI practitioners.

(I added and then removed references to the Golem, but...it could be argued as relevant to this discussion)

I am quite vehement in this discussion precisely because I am currently debating whether or not I should release a new piece of AI software I have designed. From a technical standpoint, I am quite proud of it; it is a nice piece of engineering. From a political standpoint, I feel that the tool could be used for goals that I am not sure I agree with...

12. karambahh ◴[] No.13110423{4}[source]
You are absolutely right and I can totally relate to both your experience and your neighbor's.

I don't program guidance systems for bombs, but I do program marketing tools which are, in essence, tricking consumers into buying stuff. I dissociate myself from that issue by considering that any commercial relationship is based on tricking the other party into buying more, but I would totally understand if someone objected that my software is not morally acceptable to them (and I would politely suggest that they go bother someone else :p).

Further down the line, we could end up discussing whether living in a society based on capitalism is "right" or "wrong". I would totally understand if people considered that "not an HN-worthy submission", but I think that inside a thread on the moral, philosophical and social consequences of AI, it could come up as a subject... and be downvoted if need be, not flagged as off-topic.

13. mattnewton ◴[] No.13111393{3}[source]
I'm not arguing for a free-for-all. I am arguing that the proposed ban on politics for a week is too broad a brush, because it catches several relevant conversations, and the serious offenses are already against the rules.
14. yagga ◴[] No.13112337[source]
With automation, robotics and AI, the world does not need so many bio-robots anymore. Before, we needed people to do manual labor. Not anymore; machines can do it. Most people don't want to learn and change. What to do with the bio-mass that can only eat, shit and have fun?