482 points sanqui | 28 comments
1. noitpmeder ◴[] No.42285295[source]
Not clear (to me) in the original post -- was this done accidentally or intentionally?
replies(4): >>42285340 #>>42285374 #>>42285593 #>>42285609 #
2. fguerraz ◴[] No.42285340[source]
Carelessly is the answer
3. ruined ◴[] No.42285374[source]
does that matter?
replies(1): >>42285414 #
4. altairprime ◴[] No.42285414[source]
Yes; malice is indefensible no matter the circumstances, whereas mistakes may be defensible under certain circumstances or depending on how the person who made them responds.
replies(1): >>42285461 #
5. sabbaticaldev ◴[] No.42285461{3}[source]
As a Brazilian, I'm not sure whether I'd prefer it to be malice or incompetence.
replies(3): >>42285615 #>>42285686 #>>42286077 #
6. tptacek ◴[] No.42285593[source]
The certificate was registered in CT, so a reasonable assumption would be that this was accidental, because it was guaranteed to be noticed and to generate drama that would threaten the capability they arranged, presumably at some significant expense.
replies(1): >>42286349 #
7. woodson ◴[] No.42285609[source]
As a CA, how does one accidentally issue a certificate for google.com? I mean, is there a scenario that isn't malicious?
replies(3): >>42285625 #>>42286101 #>>42288078 #
8. griomnib ◴[] No.42285615{4}[source]
As an American…why not both?
9. tptacek ◴[] No.42285625[source]
Yes, if the interception system involved was meant only for resources within Brazil’s own agency networks.
replies(2): >>42285842 #>>42286581 #
10. lazide ◴[] No.42285686{4}[source]
There is also the option of malicious incompetence, of course.
11. lxgr ◴[] No.42285842{3}[source]
But that's not allowed for publicly trusted roots under any circumstances, right? Not sure if that would qualify as an accident.
replies(1): >>42285964 #
12. foota ◴[] No.42285964{4}[source]
I think the parent is saying that if they meant to use the cert only internally (e.g., to monitor employees) then that would arguably not be malicious.
replies(4): >>42285966 #>>42286063 #>>42286215 #>>42286226 #
13. lxgr ◴[] No.42285966{5}[source]
Not malicious, but also not exactly purely accidental, i.e. as part of some otherwise totally legitimate activity.
replies(1): >>42289711 #
14. grayhatter ◴[] No.42286063{5}[source]
> (e.g., to monitor employees) then that would arguably not be malicious.

If only there were a way to monitor company equipment without issuing a cert for a public third party.

replies(1): >>42289210 #
15. altairprime ◴[] No.42286077{4}[source]
Incompetence: operating a CA is difficult enough that sometimes people fuck up, but if the CA is corrupted, then that’s much worse.
16. Thaxll ◴[] No.42286101[source]
You know, testing stuff, like example.com ...
17. tptacek ◴[] No.42286215{5}[source]
It would not be malicious. I don't think there's a serious argument here (bearing in mind that in the airless vacuum of a message we can, of course, argue anything).

I don't know that that's what happened here, though; there are possible malicious explanations!

replies(1): >>42289730 #
18. JumpCrisscross ◴[] No.42286226{5}[source]
> if they meant to use the cert only internally (e.g., to monitor employees)

Or to redirect to an internal, no doubt pitched as more secure, search engine.

19. px43 ◴[] No.42286349[source]
What is CT here? Central Time? Connecticut? Maybe Certificate Transparency? I guess that last one might make the most sense. Abbreviations are hard.
replies(2): >>42287078 #>>42287199 #
20. 8organicbits ◴[] No.42286581{3}[source]
Note that this scenario happened with ANSSI and MCS Holdings, so there is precedent. I'm eager to see what Google concludes this time.

https://security.googleblog.com/2013/12/further-improving-di...

https://security.googleblog.com/2015/03/maintaining-digital-...

21. FergusArgyll ◴[] No.42287078{3}[source]
Computed Tomography?
22. bart__ ◴[] No.42287199{3}[source]
Certificate Transparency: all CAs log their issued certificates to central log servers managed by Cloudflare, Google, etc. If this is not done, the certificate will not be seen as trusted by browsers. It was designed to provide a publicly auditable record of issued certificates, exactly so we can notice rogue google.com certs.
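
For example, here's a rough sketch of checking what has been logged for a domain. It assumes crt.sh's public JSON endpoint, which is just one search front-end over the CT logs, not something defined by CT itself:

    import json, urllib.request

    # Ask crt.sh (a CT log search front-end) which certificates have been
    # logged for a domain. The endpoint and the JSON field names are
    # assumptions about crt.sh, not part of Certificate Transparency.
    def logged_certs(domain):
        url = "https://crt.sh/?q=" + domain + "&output=json"
        with urllib.request.urlopen(url) as resp:
            entries = json.load(resp)
        for e in entries:
            print(e.get("issuer_name"), "->", e.get("name_value"))

    logged_certs("google.com")

A rogue google.com cert issued by a participating CA shows up in a query like that, which is the whole point of the logs.
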
replies(2): >>42288032 #>>42288645 #
23. tialaramex ◴[] No.42288032{4}[source]
Technically you don't have to log certificates during issuance, and actually doing so is slightly more trouble (because of a chicken-and-egg problem: you want the log proof in the certificate, so you must log special "poisoned" certificates to get that proof and then fasten it to the certificate).

A customer can take an unlogged cert, log it themselves, and then use the certificate together with the separate proof of logging they received, and that works just fine. Google have some services which do this. One clever thing this enables: you can buy the cert for secret-product-name.example, unlogged, build the web site, check everything works, and log the certificate seconds before the product launch event, so snoops can't tell your new product is secret-product-name until the moment you announce it, yet the site works immediately. I have very rarely seen this done but it's possible. When there's an ordinary White House transition process, both plausible transition site certs get logged, even though in practice one of those sites is never published. Since Trump I have no idea if this process is so smooth any more.

A CA can choose whether to have this "issue unlogged certs" process as something they offer, it's a niche thing, but it could make sense. They need to keep adequate records of every certificate they issue (that's required) and logging is a very easy way to satisfy that requirement, but it's not the only way.

In practice, the logged certificates are the easy consumer option, like selling ready-to-eat food in a deli. Some customers might be prepared to buy ingredients and go away to make food, but, many customers probably want to eat food immediately so for extra money you sell products that can just be eaten immediately. So, yes, the vast majority of certificates issued every day are indeed logged immediately so as to provide the product people want.
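
If you're curious which route a given site took, here's a small sketch (assuming the pyca/cryptography library; the host is just an example) that checks whether the served leaf certificate carries embedded SCTs, i.e. proofs that it was logged at issuance time:

    import ssl
    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID

    # Fetch the leaf certificate a server presents and look for embedded
    # SCTs (proofs of CT logging). A cert logged only after issuance won't
    # carry this extension; its proof travels separately (TLS extension or
    # stapled OCSP) instead.
    pem = ssl.get_server_certificate(("google.com", 443))
    cert = x509.load_pem_x509_certificate(pem.encode())

    try:
        scts = cert.extensions.get_extension_for_oid(
            ExtensionOID.PRECERT_SIGNED_CERTIFICATE_TIMESTAMPS
        ).value
        for sct in scts:
            print("logged at", sct.timestamp, "log id", sct.log_id.hex())
    except x509.ExtensionNotFound:
        print("no embedded SCTs; any logging proof is delivered another way")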

24. tialaramex ◴[] No.42288078[source]
Most Certificate Authorities have manual issuance†, at least as an option. There's a UI where an authorized employee can issue whatever they want; the UI may be fairly crude, or something quite polished that's used in ordinary business processes.

So an employee can type in google.com, check any boxes asking whether they verified this is the correct name and it's OK to issue, and then hit issue, and the certificate is minted, just like that.

Why google.com? Well, if you're testing something, say a web browser, what web site comes to mind? Maybe google.com? Doesn't work. Oh - the cable is unplugged. Doesn't work. Wait, this checkbox isn't checked, try again. Aha, now it works... Oops we issued a certificate for google.com

This is a "Never" event; there should be countless things in place to ensure it doesn't happen. In practice, just like with safety guards on dangerous machinery, too many people just can't be bothered with safety. It's a cultural issue.

† Let's Encrypt famously does not. As part of the Mozilla application process, they need to show their certificates expire properly; usually people either manually issue a back-dated certificate which has expired already, or manually issue one with a deliberately short lifetime so it expires. Since they can't issue manually, Let's Encrypt obtained an ordinary certificate from their own service and then waited ninety days for it to expire, like a fucking boss.

25. probstal ◴[] No.42288645{4}[source]
Actually, it won't be trusted by most browsers. As of today, Firefox hasn't implemented it yet. [0]

[0] https://bugzilla.mozilla.org/show_bug.cgi?id=1281469

26. switch007 ◴[] No.42289210{6}[source]
AI screen monitoring, right?
27. foota ◴[] No.42289711{6}[source]
I think the accidental part would be in the scope. I'm not an expert on these things, but they could have intended to create a self-signed cert valid only within the scope of their IT, and accidentally created one from their CA instead.
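
For illustration, a minimal pyca/cryptography sketch (the name below is hypothetical): a self-signed cert is signed by its own key and trusted only on machines where you deliberately install it, whereas anything minted by a publicly trusted CA is trusted everywhere by default:

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    # A self-signed certificate: subject == issuer, signed with its own key.
    # Its trust is scoped to wherever you choose to install it, unlike a
    # cert issued from a publicly trusted CA.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "intranet.example")])
    now = datetime.datetime.now(datetime.timezone.utc)
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)   # issuer is the subject itself
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=90))
        .sign(key, hashes.SHA256())
    )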
28. foota ◴[] No.42289730{6}[source]
I largely agree, although I think there's something of a slippery slope specifically when it comes to government, since you could argue that a government monitoring its citizens is also not malicious because (in a democratic society) the government derives its mandate from the people.

This isn't too different from the argument that (I believe reasonably) applies to a company's right to monitor its employees, but I think many people are opposed to even democratic governments monitoring people and would consider such use malicious.

So a government monitoring its employees is one step closer even than a company, since it's the same organization in this case (though again, I think it's largely reasonable for a government to monitor its employees).