
101 points by eleye | 14 comments
1. JimDabell ◴[] No.45790549[source]
This is something I’ve been saying for a while[0,1]:

Services need the ability to obtain an identifier that:

- Belongs to exactly one real person.

- That a person cannot own more than one of.

- That is unique per-service.

- That cannot be tied to a real-world identity.

- That can be used by the person to optionally disclose attributes like whether they are an adult or not.

Services generally don’t care about knowing your exact identity. Being able to ban a person without them simply registering a new account, and being able to stop people from registering thousands of accounts, would go a long way towards wiping out inauthentic and abusive behaviour.

[0] https://news.ycombinator.com/item?id=41709792

[1] https://news.ycombinator.com/item?id=44378709

The ability to “reset” your identity is the underlying hole that enables a vast amount of abuse. It’s possible to have persistent, pseudonymous access to the Internet without disclosing real-world identity. Being able to permanently ban abusers from a service would have a hugely positive effect on the Internet.
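
To make the ban-persistence point concrete, here is a minimal service-side sketch, assuming the client can already present an attested per-service pseudonym plus optional attributes (all names are hypothetical, not an existing API):

    # Hypothetical sketch: using a per-service pseudonym to make bans stick
    # across re-registrations. The pseudonym is assumed to come from some
    # attestation scheme with the properties listed above.
    banned_ids: set[str] = set()          # pseudonyms this service has banned
    accounts_by_id: dict[str, str] = {}   # pseudonym -> account name

    def register(pseudonym: str, attributes: dict, username: str) -> str:
        """pseudonym is the attested per-service identifier; attributes are
        optional disclosures such as {"is_adult": True}."""
        if pseudonym in banned_ids:
            return "rejected: banned"
        if pseudonym in accounts_by_id:
            return "rejected: this person already has an account"
        if not attributes.get("is_adult", False):
            return "rejected: adults only"
        accounts_by_id[pseudonym] = username
        return "ok"

    def ban(pseudonym: str) -> None:
        # Banning the pseudonym, not the account, is what makes it permanent:
        # a fresh registration by the same person presents the same pseudonym.
        banned_ids.add(pseudonym)
        accounts_by_id.pop(pseudonym, None)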

replies(6): >>45790613 #>>45790646 #>>45790899 #>>45791291 #>>45791379 #>>45791692 #
2. justsomehnguy ◴[] No.45790613[source]
> - Belongs to exactly one real person.

> - That a person cannot own more than one of.

These are mutually exclusive. Especially if you add 'cannot be tied to a real-world identity'.

replies(1): >>45790879 #
3. lowkey_ ◴[] No.45790646[source]
A lot of folks give it flak for being incredibly dystopian, but this: https://world.org/orb

I first thought this was just a crypto play with 1 wallet per real person (wasn't a huge fan), but with the proliferation of AI, it makes sense we'll eventually need safeguards to ensure a user's humanity, ideally without any other identifiers needed.

replies(1): >>45791421 #
4. gruez ◴[] No.45790879[source]
The way that this is usually implemented is with some sort of HSM (e.g. a smart card, like on an e-ID) that holds a private key shared with hundreds (or more) of other HSMs. The HSM part ensures the key can't be copied out to forge an unlimited number of other identities, and the shared private key ensures it's vaguely anonymous.
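
Roughly, the relying party's check only ever involves the shared group key, so it learns that some genuine HSM signed the proof, not which one. A toy sketch of that check, assuming Ed25519 and with a plain set of trusted group public keys standing in for the real certificate chain:

    # Toy sketch (assumed design, not a real protocol): the proof is signed
    # with a key shared by many HSMs, so verifying it tells the site only
    # "a genuine HSM signed this", never which person's HSM it was.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Stand-in for "certificate chain rooted at a government CA": a set of
    # group public keys the site already trusts.
    TRUSTED_GROUP_KEYS: list[ed25519.Ed25519PublicKey] = []

    def proof_is_from_genuine_hsm(proof: bytes, signature: bytes) -> bool:
        for pub in TRUSTED_GROUP_KEYS:
            try:
                pub.verify(signature, proof)
                return True   # some valid HSM signed it; we can't tell which
            except InvalidSignature:
                continue
        return False
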
replies(1): >>45790937 #
5. eqvinox ◴[] No.45790899[source]
https://en.wikipedia.org/wiki/Sybil_attack

This is generally considered an unsolvable problem when trying to fulfill all of these requirements (cf. sibling post). Most subsets are easy, but not the full list.

replies(1): >>45797642 #
6. eqvinox ◴[] No.45790937{3}[source]
> - Belongs to exactly one real person.

I don't see how you can prevent multiple people sharing access to one HSM. Also, if the key is the same in hundreds of HSMs, this isn't fulfilled to begin with? Is this assuming the HSM holds multiple keys?

btw: "usually". Can you cite an implementation?

replies(1): >>45791794 #
7. Ukv ◴[] No.45791379[source]
> - That a person cannot own more than one of.

Exactly one seems hard to implement (some kind of global registry?). I think relaxing this requirement slightly, such that a user could for instance get a small number of different identities by going to different attestors, would be easier to implement while also making for a better balance. That is, I don't want users to be able to trivially make thousands of accounts, but I also don't want websites to be able to entirely prevent privacy throwaway accounts, a false ban from Google's services to be bound to your soul for life, people to be permanently locked out of anything digital because their identifier was compromised by malware and can't be "reset", or so on.

replies(1): >>45797637 #
8. tredre3 ◴[] No.45791421[source]
> A lot of folks give it flak for being incredibly dystopian, but this: https://world.org/orb

The flak should be because it's from Sam Altman. A billionaire tech bro giving us both the disease and the cure, and profiting massively along the way, is what's truly dystopian.

replies(1): >>45791438 #
9. armchairhacker ◴[] No.45791692[source]
An issue, also in crypto, is that people will get their "identifiers" stolen. How do you prevent stealing, or recover stolen identifiers, without compromising anonymity?

Another issue is that people will hire (or enslave) others to effectively lend their identifiers, and it's very hard to distinguish between someone "lending" their identifier vs using it for themselves.

I've been thinking about hierarchical management. Roughly, your identifier is managed by your town, which has its own identifier managed by your state, which has its own identifier managed by your government, which has its own identifier managed by a bloc of governments, which has its own identifier managed by an international organization. When you interact with a foreign website and it requests your identity, you forward the request to your town with your personal identifier, your town forwards the request to your state with the town's identifier, and so on. Town "management" means that towns generate, assign, and revoke stolen personal identifiers, and authenticate requests; state "management" means that states generate, assign, and revoke town identifiers, and authenticate requests (not knowing who in the town sent the request); etc.

The idea is to prevent a much more powerful organization, like a state, from persecuting a much less powerful one, like an individual. In the hierarchical system, your town can persecute you: they can refuse to give you an identifier, give yours to someone else, track what sites you visit, etc. But then, especially if you can convince other town members (which ideally happens if you're unjustly persecuted), it's easier for you to confront the town and convince them to change, than it is to confront and convince a large government. Likewise, states can persecute entire towns, but an entire town is better at resisting than an individual, especially if that town allies with other towns. And governments can persecute entire states, and blocs can persecute entire governments, and the international organization can persecute entire blocs, but not the layer below.

In practice, the hierarchy probably needs many more layers; today's "towns" are sometimes big cities, states are much larger than towns, governments are much more powerful than states, etc., so there must be layers in between for the layer below to effectively challenge the layer above. Assigning layers may be particularly hard because it requires balance, to enable most justified persecutions, e.g. a bloc punishing a government for not taking care of its scam centers, while preventing most unjustified persecutions. And there will inevitably be towns, states, governments, etc. where the majority of citizens are "unjust", and the layer above can only punish them entirely. So yes, hierarchical management still has many flaws, but is there a better alternative?
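
A rough sketch of the forwarding idea, with HMAC tags standing in for whatever each layer would actually use to vouch for the layer below (everything here is hypothetical, just to illustrate the shape):

    # Hypothetical sketch of hierarchical forwarding: each layer strips the
    # identifier it received and attaches its own, so the website only ever
    # sees an attestation from the top layer.
    from __future__ import annotations
    import hashlib
    import hmac
    from dataclasses import dataclass

    @dataclass
    class Layer:
        name: str              # e.g. "town", "state", "government", "bloc"
        secret: bytes          # key this layer uses to vouch upward
        parent: Layer | None = None

        def forward(self, request: bytes, child_tag: bytes) -> bytes:
            # A real layer would verify child_tag (authenticating the layer
            # below, checking revocation, etc.) before vouching; omitted here.
            my_tag = hmac.new(self.secret, request, hashlib.sha256).digest()
            if self.parent is None:
                return my_tag  # what the foreign website finally receives
            return self.parent.forward(request, my_tag)

    # person -> town -> state -> government -> international org
    intl = Layer("international-org", b"intl-secret")
    gov = Layer("government", b"gov-secret", parent=intl)
    state = Layer("state", b"state-secret", parent=gov)
    town = Layer("town", b"town-secret", parent=state)

    personal_tag = hmac.new(b"my-personal-secret", b"request", hashlib.sha256).digest()
    attestation = town.forward(b"request", personal_tag)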

replies(1): >>45797654 #
10. gruez ◴[] No.45791794{4}[source]
>btw: "usually". Can you cite an implementation?

u2f has it: https://security.stackexchange.com/questions/224692/how-does...

>I don't see how you can prevent multiple people sharing access to one HSM.

Obviously that's out of scope unless the HSM has a retina scanner or whatever, but even then there's nothing preventing someone from consensually using their cousin's government-issued ID (i.e. HSM) to access an 18+ site.

> Also, if the key is the same in hundreds of HSMs, this isn't fulfilled to begin with? Is this assuming the HSM holds multiple keys?

The idea is that the HSM will sign arbitrary proofs to give to relying parties. The relying parties can validate that the key used to sign the proof is legitimate through some sort of certificate chain that is ultimately rooted at some government CA. However, because the key is shared among hundreds/thousands/tens of thousands of HSMs/ids, it's impossible to tie it to a specific person/id/HSM.

> Is this assuming the HSM holds multiple keys?

Yeah, you'd need a separate device-specific key to sign/generate an identifier that's unique per-service. To summarize:

each HSM contains two keys:

1. K1: device-specific key, specific to the given HSM

2. K2: shared across some large number of HSMs

Both keys are resistant to extraction from the HSM, and the HSM will only use them for signing.

To authenticate to a website (relying party):

1. The HSM generates an id, using something like hmac(site domain name, K1)

2. The HSM generates a blob containing the above id, whatever additional attributes the user wants to disclose (e.g. their name or whether they're 18+), plus a timestamp/anti-replay token (or similar), signs it with K2, and returns it to the site. The HSM also returns a certificate certifying that K2 is issued by some national government.

The site can verify the response comes from a genuine HSM because the certificate chains to some national government's CA. The site can also be sure that users can't create multiple accounts, because each HSM will generate the same id given the same site. However two sites can't correlate identities because the id changes depending on the site, and the signing key/certificate is shared among a large number of users. Governments can still theoretically deanonymize users if they retain K1 and work with site operators.
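
A condensed sketch of that flow, assuming Ed25519 for K2 and HMAC-SHA256 for the per-site id, with the certificate chain abbreviated to an opaque blob shipped alongside the signature (illustrative only, not a real standard):

    # Sketch of the two-key flow described above (illustrative only): K1
    # derives a stable per-site pseudonym, K2 (shared by many HSMs) signs the
    # attestation the site actually verifies against a government CA.
    import hashlib
    import hmac
    import json
    import os
    import time
    from cryptography.hazmat.primitives.asymmetric import ed25519

    K1 = os.urandom(32)                        # device-specific secret
    K2 = ed25519.Ed25519PrivateKey.generate()  # stands in for the shared group key
    K2_CERT = b"<certificate chaining K2 to a government CA>"  # placeholder

    def authenticate(site: str, disclosed: dict) -> dict:
        # 1. Per-site id: the same site always gets the same id, different
        #    sites get unlinkable ids, and nothing names the real person.
        site_id = hmac.new(K1, site.encode(), hashlib.sha256).hexdigest()

        # 2. Signed blob: the id, whatever attributes the user chooses to
        #    disclose, and an anti-replay timestamp/nonce.
        blob = json.dumps({
            "id": site_id,
            "attrs": disclosed,          # e.g. {"over_18": True}
            "ts": time.time(),
            "nonce": os.urandom(16).hex(),
        }).encode()
        return {
            "blob": blob,
            "sig": K2.sign(blob),
            "cert": K2_CERT,             # site checks this against the gov CA
        }

    response = authenticate("example.com", {"over_18": True})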

11. JimDabell ◴[] No.45797637[source]
> Exactly one seems hard to implement (some kind of global registry?).

Governments. Make it a digital passport.

> I also don't want websites to be able to entirely prevent privacy throwaway accounts, for a false ban from Google's services to be bound to your soul for life

People should be free to refuse to interact with you.

> to be permanently locked out using anything digital because your identifier was compromised by malware and can't be "reset", or so on.

Make it as difficult to reset as a passport. Not impossible, but enough friction that you wouldn’t want to keep doing it every time you get banned for spamming.

replies(1): >>45801731 #
12. JimDabell ◴[] No.45797642[source]
Sybil attacks are when you attack something with a vast number of identities. The whole point of what I am suggesting is that you limit the number of identities to one.
13. JimDabell ◴[] No.45797654[source]
> An issue, also in crypto, is that people will get their "identifiers" stolen. How do you prevent stealing, or recover stolen identifiers, without compromising anonymity?

You don’t; you invalidate them. Let the real owner explain to the issuing authority what happened.

> Another issue is that people will hire (or enslave) others to effectively lend their identifiers, and it's very hard to distinguish between someone "lending" their identifier vs using it for themselves.

It doesn’t matter. If somebody uses your Facebook account to hurl abuse at people, you can expect your Facebook account to be banned. If somebody uses your email account to spam people, you can expect your email account to be added to spam filters.

14. Ukv ◴[] No.45801731{3}[source]
> Governments. Make it a digital passport.

Some places don't have a sufficiently functional/digitally-competent government to manage it securely, and others would likely withhold/invalidate identifiers from groups they disfavor (like an ethnic/religious/political minority) - which would be fairly consequential if this is to dictate one's ability to communicate online. It's not the only way a government can do that, but it would be one that's alarmingly easy (requiring just inaction) and effective (to whatever extent the system works "as intended" in thwarting workarounds).

Presumably there also needs to be recourse against a corrupt government accepting bribes in exchange for giving out identifiers to spammers/etc., which to my understanding of the proposal would cut off all legitimate citizens of that country too if there's no redundancy.

Relaxing the requirement to allow for fallbacks (such that you can also apply to ICANN or some other international organization to get an identifier) should help, and if anything gives you more room to be picky about which organizations are accepted as attestors.

> People should be free to refuse to interact with you.

I think this conflates negative/passive rights (like the right to bear arms) with positive/active rights (like the right to counsel). Someone is free to refuse to interact with anyone who has worn fur if they can make that distinction, but that doesn't obligate me/society/governments to implement infrastructure to ensure that they can distinguish people who have worn fur - and people are (in general, not under oath/etc.) also free to lie about whether they have.