    276 points chei0aiV | 20 comments
    1. jbob2000 ◴[] No.10458486[source]
    So I read the blog post and skimmed the PDF, and I'm left with some questions. If these security issues have been present for 10 years but there hasn't been any widespread malicious action on them, are they really issues?

    To create an analogy: my car doesn't have bulletproof glass, so someone could easily shoot it up and I'd be dead. But nobody really goes around shooting up cars, so is it an issue?

    replies(6): >>10458619 #>>10458631 #>>10458642 #>>10458718 #>>10458809 #>>10460889 #
    2. pgeorgi ◴[] No.10458619[source]
    The problem is that if you're trying to build a secure computing environment (like Joanna is with Qubes OS), you run into limitations all the time.

    Those platform issues may not be a problem for Jane Doe on Windows 10, but when users decide that they need more security than that (and Qubes points in the right direction, although there are still some miles to go), they may have a reason (or just paranoia).

    In either case, they won't be very happy with the sad state that is x86 "security", because there are way too many places where undue trust in Intel is implied.

    E.g. the SGX feature, which can run userland code in a way that even the kernel (or SMM) can't read: the keys are likely mediated by the Management Engine (ME), which also comes with network access and a huge operating system (for the purposes of an embedded system: the smallest version is 2MB) that you, the user, can't get rid of.

    So who's SGX protecting you from if you fear involvement by nation-state actors? x86 isn't for you in that case (Intel's version in particular, but pretty much all the alternatives are just as bad), and that's what this paper points out.
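
    For concreteness, here is roughly what that trust boundary looks like from the untrusted side, using Intel's SGX SDK (a sketch only; the enclave file name and the ECALL are placeholders for a real signed enclave):

        /* Untrusted host side, Intel SGX SDK. Sketch only: "enclave.signed.so"
           and ecall_do_secret_work() are placeholders for a real signed enclave. */
        #include <stdio.h>
        #include "sgx_urts.h"
        #include "enclave_u.h"   /* generated by the SDK's edger8r tool */

        int main(void)
        {
            sgx_enclave_id_t eid;
            sgx_launch_token_t token = {0};
            int updated = 0;

            /* EINIT only succeeds with a valid launch token, and launch
               tokens are minted by an Intel-keyed launch enclave -- the
               control point discussed above. */
            sgx_status_t rc = sgx_create_enclave("enclave.signed.so",
                                                 SGX_DEBUG_FLAG, &token,
                                                 &updated, &eid, NULL);
            if (rc != SGX_SUCCESS) {
                fprintf(stderr, "enclave launch failed: 0x%x\n", rc);
                return 1;
            }

            /* ECALLs transfer control into enclave memory, which even
               ring 0 and SMM cannot read while the enclave runs. */
            ecall_do_secret_work(eid);

            sgx_destroy_enclave(eid);
            return 0;
        }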

    replies(1): >>10459270 #
    3. detaro ◴[] No.10458631[source]
    Depends, are there people that might try to shoot you specifically, or does non-bullet-proof glass have weaknesses against other things that might happen more commonly?

    (= just because something isn't in widespread use yet, or may be hard to do, doesn't mean it isn't used in targeted attacks; it might also become widespread after new discoveries or in combination with other vectors. And a lot of her work (e.g. Qubes OS) aims at making things secure at a very low level.)

    Also, some of these features are marketed and sold to us as additional protections, and I think it is important to see if they can actually do what they promise or if they just add complications, especially if they inconvenience users.

    4. tptacek ◴[] No.10458642[source]
    Of course they are. We ran the Internet on C code that was positively riddled with trivially exploitable stack overflows for 7 years after the Morris Worm demonstrated RCE through overflows --- 6 years after the "microscope and tweezers" paper explained how the attack worked.
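
    For reference, the bug class is as simple as this (a deliberately vulnerable sketch):

        /* The classic pattern: a fixed-size buffer filled with
           attacker-controlled input and no bounds check. */
        #include <string.h>

        void handle_request(const char *input)
        {
            char buf[64];
            strcpy(buf, input);  /* input longer than 63 bytes overflows buf
                                    and the saved return address above it --
                                    the same class the Morris worm exploited
                                    in fingerd (via gets()) */
            /* ... parse buf ... */
        }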

    Exact same story with error oracle attacks in cryptography.
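
    Same shape: the moment a decryptor returns distinguishable failures, the error channel becomes the oracle. A sketch of the anti-pattern (the helpers are stand-ins):

        #include <stddef.h>

        int padding_ok(const unsigned char *pt, size_t len);  /* stand-in */
        int mac_ok(const unsigned char *pt, size_t len);      /* stand-in */

        /* Anti-pattern: an attacker who can tell "bad padding" apart from
           "bad MAC" can decrypt CBC ciphertexts byte by byte (Vaudenay's
           padding oracle). */
        int decrypt_record(const unsigned char *pt, size_t len)
        {
            if (!padding_ok(pt, len)) return -1;  /* leaks: padding wrong */
            if (!mac_ok(pt, len))     return -2;  /* leaks: padding right */
            return 0;
        }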

    Attackers go after the low hanging fruit first, and then they move up the tree.

    replies(1): >>10458852 #
    5. wsxcde ◴[] No.10458718[source]
    The short answer is that there is a plethora of software-level issues that are much easier to exploit, so people don't bother with hardware bugs.

    Does this mean we should stop worrying about hardware bugs? I don't know the answer to this question. A principal engineer in the group that does Intel's hardware security validation and pentesting told me that they felt their job was to maintain the status quo of hardware bugs being harder to exploit than software bugs. More security than this is probably not justified from a risk-vs-cost perspective, while less security than that would break a lot of assumptions that people designing software make.

    6. mangeletti ◴[] No.10458809[source]
    I think a more fitting analogy would be:

    My car has a software vulnerability that would allow somebody clever to take control of the steering remotely while I drive, but nobody really goes around remote-controlling other people's cars, so is it an issue?

    7. jbob2000 ◴[] No.10458852[source]
    Well that was kind of my point, that hardware is so far up the security tree, it's almost moot (that's kind of my question I guess. Is it far enough up that tree to be moot?). To compare with my analogy, a hitman doesn't need to shoot me up while I'm driving my car, he can wait until I've exited the vehicle and negated any protection I might have had. Similarly, a hacker can avoid the hardware entirely and wait by a printer to read those secure financial documents. Or they can watch over your shoulder while you type your password. Etc. Etc.
    replies(3): >>10458940 #>>10458959 #>>10461111 #
    8. tehmaco ◴[] No.10458940{3}[source]
    It's the 'Holy Grail' of exploitation though - if you can back-door the hardware as she's suggested in the paper, nothing in the software stack can detect it, which means you cannot know if your machine is secure or not.

    The fact that it's very hard to achieve means it's not likely, but if a government decides it wants to commandeer your computing hardware, there's nothing you could do to stop it, and you'd never know it had occurred.

    replies(1): >>10461592 #
    9. tptacek ◴[] No.10458959{3}[source]
    Computer platform security is not like physical security. Once you write the software to accomplish a platform attack, it's usually about as simple to execute it as it would be to execute a simpler attack. The complexity is in the software, not the attack execution.
    10. pdkl95 ◴[] No.10459270[source]
    Intel describes[1] SGX as a feature designed to "enable software vendors to deliver trusted[2] applications", where applications would "maintain confidentiality even when an attacker has physical control of the platform and can conduct direct attacks on memory".

    This already suggests the owner of the CPU isn't who they're protecting, but it gets worse (even before we consider the risk from AMT). Starting an SGX enclave seems to require[3] a "launch key" known only to Intel, allowing Intel to control what software is allowed to be protected by SGX (conceptual sketch after the footnotes).

    [1] https://software.intel.com/en-us/blogs/2013/09/26/protecting...

    [2] Before the term "DRM" was coined, the same crap used to be called "trusted computing" (back when Microsoft was pushing Palladium/NGSCB)

    [3] https://jbeekman.nl/blog/2015/10/intel-has-full-control-over...
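
    Conceptually, the gate [3] describes looks something like this (an illustrative sketch, not real microcode; all types and helpers are simplified stand-ins):

        #include <stdint.h>
        #include <string.h>

        typedef struct { uint8_t signer_hash[32]; } sigstruct_t;  /* simplified SIGSTRUCT */
        typedef struct { int valid; uint8_t mac[16]; } einittoken_t;

        extern const uint8_t INTEL_SIGNER_HASH[32];  /* hash of Intel's own signing key */
        void cmac_launch_key(const einittoken_t *t, uint8_t out[16]);  /* CMAC under the
                                                                          launch key */

        int einit_allows(const sigstruct_t *sig, const einittoken_t *token)
        {
            if (!token->valid)  /* token-less launch: Intel-signed enclaves only */
                return memcmp(sig->signer_hash, INTEL_SIGNER_HASH, 32) == 0;

            uint8_t expected[16];
            cmac_launch_key(token, expected);  /* only Intel's launch enclave holds
                                                  the key to mint valid tokens */
            return memcmp(expected, token->mac, 16) == 0;
        }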

    replies(1): >>10460437 #
    11. Spivak ◴[] No.10460437{3}[source]
    This kind of feature would be amazing for security if it weren't going to be immediately abused by DRM-encumbered vendors, MS, and vague yet menacing government agencies trying to lock users out of their own devices.

    If I could provide all the keys, my machine could be completely locked down and damn near impossible to break into, even with complete physical access and an ECE degree.

    replies(2): >>10460651 #>>10461737 #
    12. derefr ◴[] No.10460651{4}[source]
    One thing we would actually want here, though, is a setup where you can rent out your computer (i.e. as an IaaS provider) without being capable of monitoring the renter. In that kind of setup, the tenant does not want you to own "all the keys to your machine", or at least they want some way to verify that you have disabled/discarded those keys.
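
    That verification is exactly what remote attestation is supposed to provide: before provisioning any secrets, the tenant checks a signed statement of what code the rented hardware is actually running. A sketch of the tenant-side check (the helper names and types are illustrative, not a real attestation API):

        #include <stdint.h>
        #include <string.h>

        typedef struct {
            uint8_t enclave_measurement[32];  /* hash of the loaded enclave code */
            uint8_t signature[64];            /* rooted in a key fused into the CPU */
        } quote_t;

        int verify_quote_signature(const quote_t *q);   /* checks the vendor cert chain */
        extern const uint8_t EXPECTED_MEASUREMENT[32];  /* hash of code the tenant audited */

        int safe_to_provision(const quote_t *q)
        {
            /* 1. Did genuine hardware produce this quote? */
            if (!verify_quote_signature(q))
                return 0;
            /* 2. Is it running exactly the code we audited, i.e. code with no
                  path for the landlord to read tenant data? */
            return memcmp(q->enclave_measurement, EXPECTED_MEASUREMENT, 32) == 0;
        }

    (Which of course just moves the trust from the landlord to the CPU vendor, as the rest of the thread points out.)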
    replies(4): >>10461408 #>>10461759 #>>10461982 #>>10462598 #
    13. rasz_pl ◴[] No.10460889[source]
    25 years: SMM was born with the 386SL in 1990.

    But it gets worse: every processor from the PPro (1995) up to Sandy Bridge has a gaping security hole, reported (conveniently only AFTER Intel had patched it two generations earlier) by a guy working for Battelle Memorial Institute, a known CIA front and black-budget sink. The gist is sketched below.

    https://www.blackhat.com/docs/us-15/materials/us-15-Domas-Th...

    surprisingly good writeup: http://www.theregister.co.uk/2015/08/11/memory_hole_roots_in...

    list of CIA fronts: http://www.jar2.com/2/Intel/CIA/CIA%20Fronts.htm (Battelle is on it)
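
    The gist of the attack in those slides, sketched (illustrative only; a real exploit needs an unpatched CPU and far more setup):

        #include <stdint.h>

        #define IA32_APIC_BASE   0x1B
        #define APIC_BASE_ENABLE (1ULL << 11)

        /* ring-0 only */
        static inline void wrmsr(uint32_t msr, uint64_t val)
        {
            __asm__ volatile("wrmsr" :: "c"(msr), "a"((uint32_t)val),
                                        "d"((uint32_t)(val >> 32)));
        }

        void sinkhole(uint64_t smram_base)  /* base of protected SMRAM */
        {
            /* Relocate the local APIC's MMIO window on top of SMRAM. Fixed
               parts refuse this; the affected PPro-through-Sandy-Bridge-era
               parts honored it. */
            wrmsr(IA32_APIC_BASE, (smram_base & ~0xFFFULL) | APIC_BASE_ENABLE);
            /* On the next SMI, memory accesses the SMM entry code expects to
               hit SMRAM land in the APIC page instead, hijacking execution
               below the OS (the "memory sinkhole"). */
        }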

    14. nickpsecurity ◴[] No.10461111{3}[source]
    Hardware weaknesses are being exploited right now by high-strength attackers in intelligence services and stealthy contractors; the NSA TAO catalog supports this. There was even malware in the past that used CPU errata for obfuscation. So we can't ignore this.

    On top of that, there are dozens of designs in academia, and even less risky options in industry, that counter most of this stuff with various tradeoffs. So anyone who wants to build something better has plenty of options. As far as I can tell, the problems are there purely for backwards compatibility and cost avoidance.

    15. AnthonyMouse ◴[] No.10461408{5}[source]
    > One thing we would actually want, here, though, is a setup where you can rent out your computer (i.e. as an IaaS provider), without being capable of monitoring the renter.

    That would require all hardware to be secure against all attackers. As soon as one attacker breaks one hardware model, they can start extracting and selling private keys that allow anyone to emulate that piece of hardware in software.

    I'm also having a hard time seeing the use case. What kind of thing has hard secrecy requirements but demands so much hardware that you can't justify owning it?

    16. ◴[] No.10461592{4}[source]
    17. pgeorgi ◴[] No.10461737{4}[source]
    Immediately abused for DRM? If you look (somewhat) closely, it's hard to avoid the impression that this stuff was designed and built around DRM use cases.

    The "Protected A/V Path" could be a neat feature for high security computers (consider the GPU driver, a horrible, buggy, complex piece of software, being unable to grab pixels of a security-labelled window) - but that's not what this was built for. SGX, the same.

    Non-DRM use cases seem to be an afterthought, if possible at all (typically not).

    18. MichaelGG ◴[] No.10461759{5}[source]
    Bingo! You could implement, for instance, a verifiable, safe Bitcoin mixer with it. (I pick this as a nice example because it's something that is in demand (for better or worse) and impossible to do at the moment.)
    19. andreasvc ◴[] No.10461982{5}[source]
    I don't see the point of this. Either you trust your cloud provider, or you don't put it in the cloud. You could think of a technical solution to prevent monitoring, but how can you ever be sure that your provider has actually implemented it? Plus, I don't think providers would want something like this; if there's something gravely illegal going on, you want to be able to know and ban that user from your service.
    20. anonymousDan ◴[] No.10462598{5}[source]
    Exactly, cloud computing is a potentially much more important market for SGX than DRM. Even though Intel could no doubt hand over machine keys to any government agency on request without you knowing, it potentially protects you against e.g. malicious admins at a cloud provider. There has been some really interesting research recently on running applications in an SGX enclave where the OS itself runs outside the enclave and is completely untrusted (see e.g. the Haven paper from Microsoft Research at OSDI last year; it's extremely cool).