To make an analogy: my car doesn't have bulletproof glass, so someone could easily shoot it up and I'd be dead. But nobody really goes around shooting up cars, so is it an issue?
Those platform issues may not be a problem for Jane Doe on Windows 10, but when users decide that they need more security than that (and Qubes points in the right direction, although there are still some miles to go), they may have a reason (or just paranoia).
In either case, they won't be very happy with the sad state that is x86 "security", because there are far too many places where undue trust in Intel is implied.
E.g. the SGX feature, which can run userland code in a way that even the kernel (or SMM) can't read it: the keys are likely mediated by the Management Engine (ME), which also comes with network access and a huge operating system (huge for an embedded system: the smallest version is 2 MB) that you, the user, can't get rid of.
So who's SGX protecting you from if you fear involvement by nation state actors? x86 isn't for you in that case (Intel's version in particular, but pretty much all alternatives are just as bad) - and that's what this paper points out.
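(Whether a given part even exposes SGX is a single CPUID bit, for anyone who wants to check their own machine. A minimal sketch, assuming a GCC/Clang toolchain on x86:)

    #include <cpuid.h>   /* GCC/Clang wrapper around the CPUID instruction */
    #include <stdio.h>

    int main(void) {
        unsigned eax, ebx, ecx, edx;
        /* Structured extended feature leaf: EAX=7, ECX=0.
           EBX bit 2 reports SGX support. */
        if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
            puts("CPUID leaf 7 not supported");
            return 1;
        }
        printf("SGX: %s\n", (ebx & (1u << 2)) ? "present" : "absent");
        return 0;
    }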
(= just because something isn't in widespread use yet / may be hard to do doesn't mean it isn't used in targeted attacks, or that it won't become widespread after new discoveries or in combination with other vectors. And a lot of her work (e.g. Qubes OS) aims at making things secure at a very low level.)
Also, some of these features are marketed and sold to us as additional protections, and I think it is important to see if they can actually do what they promise or if they just add complications, especially if they inconvenience users.
Exact same story with error oracle attacks in cryptography.
Attackers go after the low-hanging fruit first, and then they move up the tree.
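(Concretely, an "error oracle" is nothing more than code whose failure modes are distinguishable from the outside. A toy sketch of the classic shape; the padding_valid/mac_valid functions below are hypothetical stand-ins, not real crypto:)

    #include <stdint.h>
    #include <stddef.h>

    /* Toy stand-ins so the sketch compiles; real code would do a CBC
       padding check and a MAC verification here. */
    static int padding_valid(const uint8_t *ct, size_t len) {
        return len > 0 && ct[len - 1] <= 16;
    }
    static int mac_valid(const uint8_t *ct, size_t len) {
        (void)ct; return len > 16;
    }

    enum { OK, ERR_PADDING, ERR_MAC };

    /* Vulnerable: the attacker can tell WHICH check failed. That single
       bit, queried a few thousand times with crafted ciphertexts,
       recovers plaintext (Vaudenay-style padding oracle). */
    int decrypt_vulnerable(const uint8_t *ct, size_t len) {
        if (!padding_valid(ct, len)) return ERR_PADDING;
        if (!mac_valid(ct, len))     return ERR_MAC;
        return OK;
    }

    /* Safer shape: evaluate both checks, return one generic error. */
    int decrypt_safer(const uint8_t *ct, size_t len) {
        int ok = padding_valid(ct, len) & mac_valid(ct, len);
        return ok ? OK : ERR_MAC;
    }

Padding oracles were dismissed as impractical for years before they broke ASP.NET and TLS (Lucky 13), among others.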
Does this mean we should stop worrying about hardware bugs? I don't know the answer to that. A principal engineer in the group that does Intel's hardware security validation and pentesting told me that they felt their job was to maintain the status quo of hardware bugs being harder to exploit than software bugs. More security than that is probably not justified from a risk-vs-cost perspective, while less security than that would probably break a lot of assumptions that people designing software make.
My car has a software vulnerability that would allow somebody clever to take control of the steering remotely while I drive, but nobody really goes around remote-controlling other people's cars, so is it an issue?
The fact that it's very hard to achieve means it's not something that's likely, but if a government decides that it wants to commandeer your computing hardware, there's nothing you could do to stop them, and you'd never know that it had occurred.
This already suggests the owner of the CPU isn't who they are protecting, but it gets worse (even before we consider the risk from AMT). Starting an SGX enclave seems to require[3] a "launch key" that is known only to Intel, allowing Intel to control what software is allowed to be protected by SGX (a sketch of peeking at the relevant MSRs follows the footnotes).
[1] https://software.intel.com/en-us/blogs/2013/09/26/protecting...
[2] Before the term "DRM" was coined, the same crap used to be called "trusted computing" (back when Microsoft was pushing Palladium/NGSCB)
[3] https://jbeekman.nl/blog/2015/10/intel-has-full-control-over...
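(That enforcement is visible from software: the SHA-256 hash of the public key allowed to sign launch enclaves sits in the IA32_SGXLEPUBKEYHASH0..3 MSRs, 0x8C through 0x8F. On most shipping parts they are read-only and preloaded with the hash of Intel's own key, which is what "Intel controls launch" means in practice. A Linux sketch, assuming root and "modprobe msr":)

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/dev/cpu/0/msr", O_RDONLY);
        if (fd < 0) { perror("open /dev/cpu/0/msr"); return 1; }
        for (uint32_t msr = 0x8C; msr <= 0x8F; msr++) {
            uint64_t val;
            /* pread's offset selects the MSR number. The read fails on
               parts without SGX Launch Control, where the hash is
               hardwired and not even visible. */
            if (pread(fd, &val, sizeof val, msr) != sizeof val) {
                perror("rdmsr");
                return 1;
            }
            printf("IA32_SGXLEPUBKEYHASH%u = %016llx\n", msr - 0x8C,
                   (unsigned long long)val);
        }
        close(fd);
        return 0;
    }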
If I could provide all the keys, my machine could be completely locked down and damn near impossible to break into, even with complete physical access and an ECE degree.
But it gets worse: every processor from the Pentium Pro (1995) up to Sandy Bridge has a gaping security hole, reported (conveniently only AFTER Intel had patched it two generations ago) by a guy working for Battelle Memorial Institute, a known CIA front and black-budget sink:
https://www.blackhat.com/docs/us-15/materials/us-15-Domas-Th...
surprisingly good writeup: http://www.theregister.co.uk/2015/08/11/memory_hole_roots_in...
List of CIA fronts: http://www.jar2.com/2/Intel/CIA/CIA%20Fronts.htm - Battelle is on it.
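(The bug is the "memory sinkhole": the local APIC's MMIO window takes decode priority over SMRAM, so ring-0 code that relocates it over the SMM entry point can shadow what SMM executes. The core move, heavily simplified; smram_base is hypothetical, and real exploitation needs far more setup, see the slides above:)

    #include <stdint.h>

    #define IA32_APIC_BASE 0x1B  /* MSR holding the local APIC MMIO base */

    /* wrmsr is privileged; this sketch is ring-0 (kernel) code. */
    static inline void wrmsr(uint32_t msr, uint64_t val) {
        __asm__ volatile("wrmsr" : : "c"(msr),
                         "a"((uint32_t)val), "d"((uint32_t)(val >> 32)));
    }

    void sinkhole_sketch(uint64_t smram_base) {
        uint64_t apic_enable = 1ULL << 11;  /* APIC global enable bit */
        /* Park the APIC MMIO page on top of the SMM entry point; on
           affected parts the APIC page wins the decode, so the next SMI
           fetches attacker-influenced bytes instead of SMRAM. */
        wrmsr(IA32_APIC_BASE, (smram_base & ~0xFFFULL) | apic_enable);
        /* ...then trigger an SMI. */
    }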
On top of that, there are dozens of designs in academia, and even less risky options in industry, that counter most of this stuff with various trade-offs. So anyone who wants to build something better has quite the options. Far as I can tell, the problems are literally there for backwards compatibility and cost avoidance.
That would require all hardware to be secure against all attackers. As soon as one attacker breaks one hardware model, they can start extracting and selling private keys that allow anyone to emulate that piece of hardware in software.
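(That's the whole failure mode in a few lines. A toy sketch, not real crypto; the "mac" mixer stands in for something like HMAC-SHA256, and FUSED_KEY is a hypothetical per-model key:)

    #include <stdint.h>
    #include <stdio.h>

    /* Placeholder mixer standing in for a real keyed MAC. */
    static uint64_t mac(uint64_t key, uint64_t nonce) {
        return (key ^ nonce) * 0x9E3779B97F4A7C15ULL;
    }

    #define FUSED_KEY 0x0123456789ABCDEFULL  /* hypothetical device key */

    /* Genuine device: the key lives in on-die fuses. */
    static uint64_t attest_hw(uint64_t nonce) {
        return mac(FUSED_KEY, nonce);
    }

    /* Emulator holding a key extracted from one decapped chip: it gives
       byte-identical answers, so a remote verifier can't tell it apart
       from real silicon. */
    static uint64_t attest_emu(uint64_t stolen_key, uint64_t nonce) {
        return mac(stolen_key, nonce);
    }

    int main(void) {
        uint64_t nonce = 42;  /* verifier's challenge */
        printf("hw:  %016llx\n", (unsigned long long)attest_hw(nonce));
        printf("emu: %016llx\n",
               (unsigned long long)attest_emu(FUSED_KEY, nonce));
        return 0;
    }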
I'm also having a hard time seeing the use case. What kind of thing has hard secrecy requirements but demands so much hardware that you can't justify owning it?
The "Protected A/V Path" could be a neat feature for high security computers (consider the GPU driver, a horrible, buggy, complex piece of software, being unable to grab pixels of a security-labelled window) - but that's not what this was built for. SGX, the same.
Non-DRM use cases seem to be an afterthought, if they're possible at all (typically they're not).