
656 points EthanHeilman | 5 comments
staticassertion ◴[] No.30102061[source]
This is pretty incredible. These aren't just good practices, they're fairly bleeding-edge best practices.

1. No more SMS and TOTP. FIDO2 tokens only.

2. No more unencrypted network traffic - including DNS, which is such a recent development, and they're already mandating it. Incredible. (See the sketch at the end of this comment.)

3. Context-aware authorization. So not just "can this user access this?" but attestation about device state! That's extremely cutting edge - almost no one does that today.

My hope is that this makes things more accessible. We do all of this today at my company, except where we can't - for example, a lot of our vendors don't offer FIDO2 2FA or webauthn, so we're stuck with TOTP.
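
On point 2, for anyone who hasn't touched encrypted DNS yet: resolvers like Cloudflare expose DNS-over-HTTPS with a JSON API, so a lookup is just an HTTPS GET. A minimal sketch, assuming the requests package is available (the endpoint and accept header are Cloudflare's documented ones; the rest is illustrative plumbing):

    import requests

    # Resolve a name over DNS-over-HTTPS instead of plaintext port 53.
    def doh_lookup(name: str, record_type: str = "A") -> list[str]:
        resp = requests.get(
            "https://cloudflare-dns.com/dns-query",
            params={"name": name, "type": record_type},
            headers={"accept": "application/dns-json"},
            timeout=5,
        )
        resp.raise_for_status()
        return [ans["data"] for ans in resp.json().get("Answer", [])]

    print(doh_lookup("example.com"))  # e.g. a list of A-record IPs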

replies(15): >>30103088 #>>30103131 #>>30103846 #>>30104022 #>>30104121 #>>30104716 #>>30104840 #>>30105344 #>>30106941 #>>30107798 #>>30108481 #>>30108567 #>>30108916 #>>30111757 #>>30112413 #
c0l0 ◴[] No.30104121[source]
I think 3. is very harmful for actual, real-world use of Free Software. If only specific builds of software on a vendor-sanctioned allowlist, admitted to that list by the signature of a "trusted" party, can meaningfully access networked services, then everyone who compiles their own artifacts (even from completely identical source code) will be excluded from that remote site/service.

Banks and media corporations are doing it today by requiring a vendor-sanctioned Android build/firmware image, attested and allowlisted by Google's SafetyNet (https://developers.google.com/android/reference/com/google/a...), and it will only get worse from here.

Remote attestation really is killing practical software freedom.

replies(16): >>30104148 #>>30104166 #>>30104241 #>>30104603 #>>30105136 #>>30106352 #>>30106792 #>>30107048 #>>30107250 #>>30107515 #>>30108070 #>>30108409 #>>30108716 #>>30108754 #>>30109550 #>>30123243 #
seibelj ◴[] No.30104148[source]
Reproducible builds are a thing, though I don't know how widespread they are. I know the Monero project has them built in, so everyone compiles the exact same executable regardless of environment and can verify the hash against the official version: https://github.com/monero-project/monero
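
A minimal sketch of that verification step, assuming you've built a binary locally and copied the published digest from the project's release notes (both the path and the expected value below are placeholders, not real Monero release data):

    import hashlib

    # Hash a locally built artifact in chunks and compare it to the
    # digest published for the official release.
    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    expected = "0" * 64            # placeholder: digest from the release notes
    actual = sha256_of("monerod")  # placeholder: your reproducible build
    print("match" if actual == expected else "MISMATCH", actual)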
replies(3): >>30104553 #>>30104740 #>>30107844 #
nybble41 ◴[] No.30104553[source]
Reproducible builds allow the user of the software to verify the version that they are using or installing. They do not, by themselves, allow the sort of remote attestation which would permit a service to verify the context for authentication—the user, or a malicious actor, could simply modify the device to lie about the software being run.

Secure attestation about device state requires something akin to Secure Boot (with a TPM), and in the context of a BYOD environment precludes the device owner having full control of their own hardware. Obviously this is not an issue if the organization only permits access to its services from devices it owns, but no organization should have that level of control over devices owned by employees, vendors, customers, or anyone else who requires access to the organization's services.
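
To make the first paragraph concrete, here's a toy model (every name here is illustrative) of why self-reported integrity proves nothing without a hardware root of trust:

    import hashlib

    # Without hardware-backed attestation, "what are you running?" is
    # answered by the client, on its honor.
    APPROVED_SHA256 = hashlib.sha256(b"official build").hexdigest()

    def honest_client(running_binary: bytes) -> str:
        return hashlib.sha256(running_binary).hexdigest()

    def malicious_client(running_binary: bytes) -> str:
        return APPROVED_SHA256  # claim the approved hash regardless

    # The service receives identical, "valid" reports from both. A TPM
    # quote differs: the measurement is taken and signed by hardware
    # the client can't override.
    print(honest_client(b"official build") == APPROVED_SHA256)     # True
    print(malicious_client(b"tampered build") == APPROVED_SHA256)  # True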

replies(1): >>30105074 #
1. InitialLastName ◴[] No.30105074[source]
> no organization should have that level of control over devices owned by employees, vendors, customers, or anyone else who requires access to the organization's services.

It seems like the sensible rule of thumb is: If your organization needs that level of control, it's on your organization to provide the device.

replies(1): >>30106523 #
2. jacobr1 ◴[] No.30106523[source]
Or we could better adopt secure/confidential computing enclaves. This would allow the organization to control its siloed apps and validate some degree of security (code tampering, memory encryption, etc.) without needing to trust that other apps on the device, or even the OS, weren't compromised.
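
A toy model of that enclave handshake, with Python's hmac standing in for a key fused into the CPU (nothing here is a real enclave SDK; all names are illustrative):

    import hashlib, hmac, os

    HARDWARE_KEY = os.urandom(32)  # stand-in for a key fused into the CPU
    ALLOWED = {hashlib.sha256(b"approved enclave code").hexdigest()}

    def enclave_quote(code: bytes, nonce: bytes):
        # The enclave measures (hashes) its own code; the hardware signs
        # that measurement together with a fresh nonce from the server.
        measurement = hashlib.sha256(code).hexdigest()
        sig = hmac.new(HARDWARE_KEY, measurement.encode() + nonce,
                       hashlib.sha256).digest()
        return measurement, sig

    def relying_party_verify(measurement: str, sig: bytes, nonce: bytes) -> bool:
        # Recompute the signature (in reality: verify via the silicon
        # vendor's attestation service) and check an allowlist. A
        # compromised OS on the same device can't forge this.
        good = hmac.new(HARDWARE_KEY, measurement.encode() + nonce,
                        hashlib.sha256).digest()
        return hmac.compare_digest(sig, good) and measurement in ALLOWED

    nonce = os.urandom(16)
    m, s = enclave_quote(b"approved enclave code", nonce)
    print(relying_party_verify(m, s, nonce))  # True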
replies(2): >>30107746 #>>30108200 #
3. wizzwizz4 ◴[] No.30107746[source]
I'm uncomfortable letting organisations have control over the software that runs on my hardware. (Or, really, any hardware I'm compelled to use.)

Suppose the course I've been studying for the past three years now uses $VideoService, but $VideoService uses remote attestation and gates the videos behind a retinal scan, ten distinct fingerprints, the last year's GPS history and the entire contents of my hard drive?¹ If I could spoof the traffic to $VideoService, I could get the video anyway, but every request is signed by the secure enclave. (I can't get the video off somebody else, because it uses the webcam to identify when a camera-like object is pointed at the screen. They can't bypass that, because of the remote attestation.)

If I don't have ten fingers, and I'm required to scan ten fingerprints to continue, and I can't send fake data because my computer has betrayed me, what recourse is there?

¹: exaggeration; no real-world company has quite these requirements, to my knowledge

replies(1): >>30118137 #
4. nybble41 ◴[] No.30108200[source]
Secure enclaves are still dependent on someone other than the owner (usually the manufacturer) having ultimate control over the device. Otherwise the relying party has no reason to believe that the enclave is secure.
5. jacobr1 ◴[] No.30118137{3}[source]
So there are two levels of tradeoffs:

1) The requirements themselves. These are different for consumer vs. employee scenarios. So in general, I'd prefer we err on the side of DRM-free for things like media, but there are legitimate concerns around things like data privacy when you are an employee of an organization handling sensitive data.

2) Presuming there are legitimate reasons to have strong validation of the user and untampered software, we have the choice of A) using only organization-supplied hardware in those cases or B) using your own with some kind of restriction. I'd much prefer to use my own as much as possible ... if I can be assured that it won't spy on me, or limit what I can do, outside the organization-specific purposes I've explicitly opted in to enable.

> I'm uncomfortable letting organisations have control over the software that runs on my hardware.

I'm not, if we can sandbox. I'm fine with organizations running JavaScript in my browser, for instance, or running mobile apps that can access certain data with explicit permissions (like granting access to my photos so that I can share them in-app). I think we can do better with more granular permissions, better UX, and cryptographic guarantees to both the user and the organization that the computation and data are handled at the agreed level.