
276 points chei0aiV | 9 comments
jbob2000 No.10458486
So I read the blog post and skimmed the PDF, and I'm left with some questions. If these security issues have been present for 10 years but there hasn't been any widespread malicious exploitation of them, are they really issues?

To create an analogy: my car doesn't have bulletproof glass, so someone could easily shoot it up and I'd be dead. But nobody really goes around shooting up cars, so is it an issue?

replies(6): >>10458619 #>>10458631 #>>10458642 #>>10458718 #>>10458809 #>>10460889 #
1. pgeorgi No.10458619
The problem is that if you're trying to build a secure computing environment (like Joanna is with Qubes OS), you run into limitations all the time.

Those platform issues may not be a problem for Jane Doe on Windows 10, but when users decide that they need more security than that (and Qubes points in the right direction, although there are still some miles to go), they may have a reason (or just paranoia).

In either case, they won't be very happy with the sad state that is x86 "security", because there are way too many places where an undue trust in Intel is implied.

E.g. the SGX feature, which can run userland code in such a way that even the kernel (or SMM) can't read it: the keys are likely mediated by the Management Engine (ME), which also comes with network access and a huge operating system (huge for an embedded system: the smallest version is 2MB) that you, the user, can't get rid of.

So who is SGX protecting you from if you fear involvement by nation-state actors? x86 isn't for you in that case (Intel's version in particular, but pretty much all the alternatives are just as bad) - and that's what this paper points out.
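To make that concrete: from inside an enclave, the programming model looks roughly like this (a minimal C sketch in the style of Intel's SGX SDK; the function and variable names are mine, not Intel's):

    /* Trusted, in-enclave side. The buffer below lives in EPC (enclave)
       pages: encrypted while in RAM and not readable by the kernel, a
       hypervisor, or SMM code. */
    #include <string.h>
    #include <stdint.h>

    static uint8_t secret_key[32];

    /* Reached only through the SDK's generated ECALL stubs. */
    int ecall_set_key(const uint8_t *key, size_t len)
    {
        if (key == NULL || len != sizeof secret_key)
            return -1;
        memcpy(secret_key, key, len);  /* the copy ends up inside the enclave */
        return 0;
    }

The catch, as above, is who ultimately holds the keys that make that guarantee hold.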

replies(1): >>10459270 #
2. pdkl95 No.10459270
Intel describes[1] SGX as a feature designed to "enable software vendors to deliver trusted[2] applications", where applications would "maintain confidentiality even when an attacker has physical control of the platform and can conduct direct attacks on memory".

This already suggests the owner of the CPU isn't who they are protecting, but it gets worse (even before we consider the risk from AMT). Starting an SGX enclave seems to require[3] a "launch key" that is only known by Intel, allowing Intel to control what software is allowed to be protected by SGX.

[1] https://software.intel.com/en-us/blogs/2013/09/26/protecting...

[2] Before the term "DRM" was coined, the same crap used to be called "trusted computing" (back when Microsoft was pushing Palladium/NGSCB)

[3] https://jbeekman.nl/blog/2015/10/intel-has-full-control-over...
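The control point described in [3] surfaces directly in the SDK's host-side API: starting an enclave takes a launch token, and valid tokens trace back to Intel's launch enclave. A minimal sketch (the enclave file name is a placeholder):

    #include <stdio.h>
    #include "sgx_urts.h"  /* untrusted runtime: sgx_create_enclave() */

    int main(void)
    {
        sgx_launch_token_t token = {0};  /* opaque; only Intel-blessed tokens work */
        int token_updated = 0;
        sgx_enclave_id_t eid = 0;

        sgx_status_t rc = sgx_create_enclave("enclave.signed.so",
                                             1 /* debug-mode launch */,
                                             &token, &token_updated, &eid, NULL);
        if (rc != SGX_SUCCESS) {
            fprintf(stderr, "launch refused: 0x%x\n", (unsigned)rc);
            return 1;
        }
        /* ... ECALLs into the enclave would go here ... */
        sgx_destroy_enclave(eid);
        return 0;
    }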

replies(1): >>10460437 #
3. Spivak No.10460437
This kind of feature would be amazing for security if it weren't going to be immediately abused by DRM-encumbered vendors, MS, and vague yet menacing government agencies trying to lock users out of their own devices.

If I could provide all the keys, my machine could be completely locked down and damn near impossible to break into, even with complete physical access and an ECE degree.

replies(2): >>10460651 #>>10461737 #
4. derefr No.10460651{3}
One thing we would actually want, here, though, is a setup where you can rent out your computer (i.e. as an IaaS provider), without being capable of monitoring the renter. In that kind of setup, the tenant does not want you to own "all the keys to your machine"; at the very least, they want to have some way to verify that you have disabled/discarded those keys.
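What the tenant actually gets in SGX terms is remote attestation: the enclave produces a signed "quote", and the tenant checks it before handing over any secrets. A hypothetical sketch of the tenant-side check in C; the struct and helper names here are invented, and the real flow goes through Intel's attestation service, so you are still trusting Intel:

    #include <stdbool.h>
    #include <string.h>

    typedef struct {
        unsigned char mrenclave[32];  /* measurement of the loaded enclave code */
        unsigned char sig[64];        /* signature chaining to the CPU vendor */
    } quote_t;

    /* Stub standing in for the vendor/attestation-service signature check. */
    static bool verify_vendor_signature(const quote_t *q) { (void)q; return false; }

    bool tenant_trusts_host(const quote_t *q,
                            const unsigned char expected_mrenclave[32])
    {
        if (!verify_vendor_signature(q))   /* genuine SGX hardware at all? */
            return false;
        /* Exactly the code the tenant expects, not an instrumented copy? */
        return memcmp(q->mrenclave, expected_mrenclave, 32) == 0;
    }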
replies(4): >>10461408 #>>10461759 #>>10461982 #>>10462598 #
5. AnthonyMouse No.10461408{4}
> One thing we would actually want, here, though, is a setup where you can rent out your computer (i.e. as an IaaS provider), without being capable of monitoring the renter.

That would require all hardware to be secure against all attackers. As soon as one attacker breaks one hardware model, they can start extracting and selling private keys that allow anyone to emulate that piece of hardware in software.

I'm also having a hard time seeing the use case. What kind of thing has hard secrecy requirements but demands so much hardware that you can't justify owning it?

6. pgeorgi No.10461737{3}
Immediately abused for DRM? If you look (somewhat) closely, it's hard to avoid the impression that this stuff was designed and built around DRM use cases in the first place.

The "Protected A/V Path" could be a neat feature for high security computers (consider the GPU driver, a horrible, buggy, complex piece of software, being unable to grab pixels of a security-labelled window) - but that's not what this was built for. SGX, the same.

Non-DRM use cases seem to be an afterthought, if they're possible at all (typically they're not).

7. MichaelGG No.10461759{4}
Bingo! You could implement, for instance, a verifiable, safe Bitcoin mixer with it. (I pick this as a nice example because it's something that is in demand (for better or worse) and is impossible to do at the moment.)
8. andreasvc No.10461982{4}
I don't see the point of this. Either you trust your cloud provider, or you don't put it in the cloud. You could think of a technical solution to prevent monitoring, but how can you ever be sure that your provider has actually implemented it? Plus, I don't think providers would want something like this; if there's something gravely illegal going on, you want to be able to know and ban that user from your service.
9. anonymousDan No.10462598{4}
Exactly, cloud computing is potentially a much more important market for SGX than DRM. Even though Intel could no doubt hand over machine keys to any government agency on request without you knowing, it potentially protects you against e.g. malicious admins at a cloud provider. There has been some really interesting research recently on running applications in an SGX enclave where the OS itself runs outside the enclave and is completely untrusted (see e.g. the Haven paper from Microsoft Research at OSDI last year; it's extremely cool).
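The core trick in that line of work is that the enclave stops trusting the OS's answers. A hypothetical sketch of the idea in C (the names are illustrative, not Haven's actual interfaces):

    #include <stddef.h>

    /* Leaves the enclave and asks the untrusted OS to do the read. */
    extern long ocall_read(int fd, void *buf, size_t len);

    long shielded_read(int fd, void *buf, size_t len)
    {
        long n = ocall_read(fd, buf, len);
        /* An Iago-style malicious kernel can return an out-of-range
           length to trick the enclave into overrunning its own buffer,
           so results are validated before anything uses them. */
        if (n < 0 || (size_t)n > len)
            return -1;  /* refuse implausible answers from the OS */
        return n;
    }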