
Stop Breaking TLS

(www.markround.com)
170 points by todsacerdoti | 36 comments
1. samuel ◴[] No.46215799[source]
I agree with the sentiment, but I think it's a pretty naive view of the issue. Companies will want all the info they can get in case some of their workers do something illegal or inappropriate, so they can deflect the blame. That's a much more palpable risk than "local CA certificates being compromised" or something like that.

And some of the arguments are just very easily dismissed. You don't want your employer to see your medical records? Why were you browsing them during work hours, on your employer's device, in the first place?

replies(3): >>46215855 #>>46216169 #>>46216703 #
2. immibis ◴[] No.46215855[source]
In Europe they prefer not to go to jail for privacy violations. It turns out most of these "communist" regulations are actually pretty great.
replies(1): >>46215994 #
3. johncolanduoni ◴[] No.46215994[source]
Does GDPR (or similar) establish privacy rights to an employee’s use of a company-owned machine against snooping by their employer? Honest question, I hadn’t heard of that angle. Can employers not install EDR on company-owned machines for EU employees?
replies(5): >>46216082 #>>46216180 #>>46216380 #>>46216557 #>>46218221 #
4. apexalpha ◴[] No.46216082{3}[source]
Yes, at least in the Netherlands it is generally accepted that employees can use their work device personally, too.

Using a device owned by your company to access your personal GMail account does NOT void your legal right to privacy.

replies(1): >>46216551 #
5. itopaloglu83 ◴[] No.46216169[source]
I’m all for privacy of individuals, but a work network is not the public internet either.

A solution is required to limit the network to work-related activities and also to inspect server communications for unusual patterns.

In one example someone’s phone was using the work WiFi to “accidentally” stream 20 GB of Netflix a day.

replies(1): >>46216814 #
6. zeeZ ◴[] No.46216180{3}[source]
They can, but the list of "if..." and "it depends..." is much longer and more complicated, especially when you get to the part about how the obtained information may be used.
7. Msurrow ◴[] No.46216380{3}[source]
Yes. GDPR covers all handling of PII that a company does. And it's sort of default-deny, meaning that a company is not allowed to handle (process and/or store) your data UNLESS it has a reason that makes it legal. This is where it becomes more blurry: figuring out whether the company has a valid reason. Some are simple, e.g. if required by law => valid reason.

GDPR does not care how the data got “in the hands of” the company; the same rules apply. Another important thing is the principles of GDPR. They sort of underlie everything. One principle to consider here is that of data minimization. This basically means that IF you have a valid reason to handle an individual's PII, you must limit the data points you handle to exactly what you need and not more.

So - company proxy breaking TLS and logging everything? Well, the company obviously has a valid reason to handle some employee data. But if I use my work laptop to access private health records, then that is very much outside the scope of what my company is allowed to handle. And logging (storing) my health data without a valid reason is not GDPR compliant.

Could the company fire me for doing private stuff on a work laptop? Yes probably. Does it matter in terms of GDPR? Nope.

Edit: Also, “automatic” or “implicit” consent is not valid. So the company cannot say something like “if you access private info on your work PC then you automatically consent to $company handling your data”. All consent must be specific, explicit and retractable.

replies(1): >>46216537 #
8. johncolanduoni ◴[] No.46216537{4}[source]
What if your employer says “don’t access your health records on our machine”? If you put private health information in your Twitter bio, Twitter is not obligated to suddenly treat it as if they were collecting private health information. Otherwise every single user-provided field would be maximally radioactive under GDPR.
replies(3): >>46216618 #>>46218243 #>>46222987 #
9. johncolanduoni ◴[] No.46216551{4}[source]
So does nobody in Europe use an EDR or intercepting proxy since GDPR went into force?
replies(2): >>46217001 #>>46229843 #
10. samuel ◴[] No.46216557{3}[source]
(IANAL) I don't think there is a simple response to that, but I guess that given that the employer:

- has established a detailed policy about personal use of corporate devices

- makes a fair attempt to block work-unrelated services (Hotmail, Gmail, Netflix)

- ensures the security of the monitored data and deletes it after a reasonable period (such as 6–12 months)

- and uses it only to apply cybersecurity-related measures like virus detection, UNLESS there is a legitimate reason to target a particular employee (legal inquiry, misconduct, etc.)

I would say that it's very much doable.

Edit: More info from the Dutch regulator https://english.ncsc.nl/publications/factsheets/2019/juni/01...

11. Msurrow ◴[] No.46216618{5}[source]
If the employer says so and I do it anyway, then that's an employment issue. I still have to follow company rules. But the point is that the company needs to delete the collected data as soon as possible. They are still not allowed to store it.
replies(1): >>46222518 #
12. NicolaiS ◴[] No.46216703[source]
TLS inspection can _never_ be implemented in a good way: you will always have cases where it breaks something, and most commonly you will see very bad implementations that break most tools (e.g. it is very hard to trust a new CA because each of OS/browser/Java/Python/... has its own CA store).

This means devs/users will skip TLS verification ("just make it work"), setting a dangerous precedent. Companies want to protect their data? Well, just protect it! Least privilege, data minimization, etc. are all good strategies for avoiding data leaks.
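The "just make it work" failure mode described above usually looks like verification being switched off wholesale. A minimal Python sketch contrasting that anti-pattern with the safer alternative of trusting the proxy's CA while keeping verification on (the CA bundle path is hypothetical):

```python
import ssl

def insecure_context() -> ssl.SSLContext:
    """The "just make it work" anti-pattern: accepts ANY certificate,
    so a man-in-the-middle goes completely unnoticed."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # must be disabled before verify_mode
    ctx.verify_mode = ssl.CERT_NONE     # dangerous: defeats TLS entirely
    return ctx

def corp_context(ca_bundle_path: str) -> ssl.SSLContext:
    """Keep hostname and chain verification on, but additionally trust the
    corporate root CA. ca_bundle_path is a hypothetical path to the
    intercepting proxy's CA certificate in PEM form."""
    ctx = ssl.create_default_context()  # still verifies hostname + chain
    ctx.load_verify_locations(cafile=ca_bundle_path)
    return ctx
```

The second function is the shape of a sane deployment; the first is what tends to get copy-pasted when the corporate CA was never distributed properly.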

replies(1): >>46226756 #
13. sceptic123 ◴[] No.46216814[source]
What's the security risk of someone streaming Netflix?

There are better ways to ensure people are getting their work done that don't involve spying on them in the name of "security".

replies(2): >>46217226 #>>46218211 #
14. samuel ◴[] No.46217001{5}[source]
I have found a definitive answer from the Dutch regulator (although it could be out of date).

https://english.ncsc.nl/binaries/ncsc-en/documenten/factshee...

replies(1): >>46222315 #
15. treesknees ◴[] No.46217226{3}[source]
Security takes many forms, including Availability.

Having branch offices with 100 Mbps (or less!) Internet connections is still common. I’ve worked tickets where the root cause of network problems such as dropped calls ended up being due to bandwidth constraints. Get enough users streaming Spotify and Netflix and it can get in the way of legitimate business needs.

Sure, there’s shaping/QoS rules and DNS blocking. But the point is that some networks are no place for personal consumption. If an employer wants to use a MITM box to enforce that, so be it.

replies(1): >>46217505 #
16. sceptic123 ◴[] No.46217505{4}[source]
I think that's a very loose interpretation of Availability in the CIA triad.

This looks a lot like using the MITM hammer to crack every nut.

If this is an actual concern, why not deny personal devices access to the network? Why not restrict the applications that can run on company devices? Or provide a separate connection for personal devices/browsing/streaming?

Why not treat them like people and actually talk to them about the potential impacts? Give people personal responsibility for what they do at work.

replies(2): >>46218274 #>>46231878 #
17. itopaloglu83 ◴[] No.46218211{3}[source]
What’s wrong with watching Netflix at work instead of working? That’s not for me to say, but I understand employers not wanting to allow it.
18. immibis ◴[] No.46218221{3}[source]
It has to have a good purpose. Obviously there are a lot of words written about what constitutes a good purpose. Antivirus is probably one. Wanting to intimidate your employees is not. The same thing applies to security cameras.

Privacy laws are about the end-to-end process, not technical implementation. It's not "You can't MITM TLS" - it's more like "You can't spy on your employees". Blocking viruses is not spying on your employees. If you take the logs from the virus blocker and use them to spy on your employees, then you are spying on your employees. (Virus blockers aiming to be sold in the EU would do well not to keep unnecessary logs that could be used to spy on employees.)

19. immibis ◴[] No.46218243{5}[source]
Many programmers tend to treat the legal system as if it was a computer program: if(form.is_public && form.contains(private_health_records)) move(form.owner, get_nearest_jail()); - but this is not how the legal system actually works. Not even in excessively-bureaucratic-and-wording-of-rules-based Germany.
replies(1): >>46222388 #
20. itopaloglu83 ◴[] No.46218274{5}[source]
Yes, but also it’s not an employer’s job to provide entertainment during work hours on a factory floor where there are machines that can kill you if you’re not careful.

There’s a famous fable where everyone is questioning the theft victim about what they should’ve done and the victim says “doesn’t the thief deserve some words about not stealing?”

Similarly, it’s a corporate network designed and controlled for work purposes. Connecting your personal devices or doing personal work on work devices is already not allowed per policy, but people still do it, so I don’t blame network admins for blocking such connections.

replies(1): >>46227464 #
21. johncolanduoni ◴[] No.46222315{6}[source]
What’s the definitive answer? From what I can tell that document is mostly about security risks and only mentions privacy compliance in a single paragraph (with no specific guidance). It definitely doesn’t say you can or can’t use one.
replies(2): >>46222484 #>>46228209 #
22. johncolanduoni ◴[] No.46222388{6}[source]
Yeah, that’s my point. I don’t understand why the fact that you could access a bunch of personal data via your work laptop in express violation of the laptop owner’s wishes would mean that your company has the same responsibilities to protect it that your doctor’s office does. That’s definitely not how it works in general.
replies(1): >>46222952 #
23. immibis ◴[] No.46222484{7}[source]
That's probably because there is no answer. Many laws apply to the total thing you are creating end-to-end.

Even the most basic law like "do not murder" is not "do not pull gun triggers" and a gun's technical reference manual would only be able to give you a vague statement like "Be aware of local laws before activating the device."

Legal privacy is not about whether you intercept TLS or not; it's about whether someone is spying on you, which is an end-to-end operation. Should someone be found to be spying on you, then you can go to court and they will decide who has to pay the price for that. And that decision can be based on things like whether some intermediary network has made poor security decisions.

This is why corporations do bullshit security by the way. When we on HN say "it's for liability reasons" this is what it means - it means when a court is looking at who caused a data breach, your company will have plausible deniability. "Your Honour, we use the latest security system from CrowdStrike" sounds better than "Your Honour, we run an unpatched Unix system from 1995 and don't connect it to the Internet" even though us engineers know the latter is probably more secure against today's most common attacks.

replies(1): >>46222638 #
24. johncolanduoni ◴[] No.46222518{6}[source]
I’ll give an example I’m more familiar with. In the US, HIPAA has a bunch of rules about how private health information can be handled by everyone in the supply chain, from doctor's offices to medical-record SaaS systems. But if I'm running a SaaS note-taking app and some doctor's office puts PHI in there without an express contract with me saying they could, I'm not suddenly subject to enforcement. It all falls on them.

I’m trying to understand the GDPR equivalent of this, which seems to exist, since every text field in a database does not appear to require the full PII treatment in practice (and that would be kind of insane).

25. johncolanduoni ◴[] No.46222638{8}[source]
Okay, thanks for explaining the general concept of law to me, but this provides literally no information to figure out the conditions under which an employer using a TLS-intercepting proxy to snoop on the internet traffic of a work laptop violates GDPR. I never asked for a definitive answer, just, you know, an answer that is remotely relevant to the question.

I don’t really need to know, but a bunch of people seemed really confident they knew the answer and then provided no actual information except vague gesticulation about PII.

replies(1): >>46236465 #
26. immibis ◴[] No.46222952{7}[source]
The legal default assumption seems to be that you can use your work laptop for personal things that don't interfere with your work. Because that's a normal thing people do.
27. immibis ◴[] No.46222987{5}[source]
I suspect they should say "this machine is not confidential" and have good reasons for that - you can't impose extra restrictions on your employees just because you want to.

The law (as executed) will weigh the normal interest in employee privacy, versus your legitimate interest in doing whatever you want to do on their computers. Antivirus is probably okay, even if it involves TLS interception. Having a human watch all the traffic is probably not, even if you didn't have to intercept TLS. Unless you work for the BND (German Mossad) maybe? They'd have a good reason to watch traffic like a hawk. It's all about balancing and the law is never as clear-cut as programmers want, so we might as well get used to it being this way.

28. tptacek ◴[] No.46226756[source]
Sure it can; it just requires endpoint cooperation, which is a realistic expectation for most corporate IT shops.
replies(1): >>46232930 #
29. lisbbb ◴[] No.46227464{6}[source]
I agree with all you said, but it's not like it is well advertised by the companies--they should come right out and say "we MITM TLS" but they don't. It's all behind the scenes smoke and mirrors.
replies(1): >>46230076 #
30. samuel ◴[] No.46228209{7}[source]
Your question: "So does nobody in Europe use an EDR or intercepting proxy since GDPR went into force?"

Given that a regulator publishes a document with guidelines about DPI, I think it rules out the impossibility of implementing it. If that were the case it would simply say "it's not legal". It's true that it doesn't explicitly state all the conditions you should meet, but that wasn't your question.

31. apexalpha ◴[] No.46229843{5}[source]
You can do it but you'd have to have a good case for it to trump the right to privacy.

It's not as simple as in the US, where companies consider everything on a company device their property even if employees use it privately.

32. itopaloglu83 ◴[] No.46230076{7}[source]
I agree, that’s a bad business practice.

Normally no personal device has the firewall's root certs installed, so they just experience network issues from time to time, and DNS queries and ClientHello packets are used to understand the network traffic.

However, with recent privacy-focused enhancements, which I love by the way because they protect us from ISPs and others, we (as in everybody) need a way to monitor and allow only certain connections on the work network. How? I don’t know; it’s an open question.
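For what it's worth, filtering without breaking TLS typically keys off exactly the two plaintext signals mentioned above: the DNS query and the SNI hostname in the ClientHello. A toy sketch of an SNI-based allow-list check (the domain names are made up for illustration):

```python
# Hypothetical allow-list, keyed purely off plaintext metadata (DNS name /
# SNI hostname) - no TLS interception and no payload inspection involved.
ALLOWED_SUFFIXES = ("corp.example.com", "github.com")

def is_allowed(sni_hostname: str) -> bool:
    """Decide from the hostname alone whether to let the connection through.

    Matches either the suffix itself or any subdomain of it, so
    "evilgithub.com" does NOT sneak past as a suffix match.
    """
    host = sni_hostname.lower().rstrip(".")
    return any(
        host == suffix or host.endswith("." + suffix)
        for suffix in ALLOWED_SUFFIXES
    )
```

Of course this says nothing about what flows inside the allowed connections, which is exactly the trade-off encrypted ClientHello will force networks to confront.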

33. treesknees ◴[] No.46231878{5}[source]
It’s not at all a loose interpretation.

Availability: Ensures that information and systems are accessible and operational when needed by authorized users

replies(1): >>46244398 #
34. acdha ◴[] No.46232930{3}[source]
You also need some decent support + auditing. There are a couple of places to configure (e.g. setting CURL_CA_BUNDLE globally covers multiple OSS libraries) but there will be cases where someone hits one of the edge clients and tries to ignore the error, which ideally would lead to a scanner-triggered DevOps intervention. I think a fair amount of the rancor on this issue is really highlighting deeper social problems in large organizations, where a CIO should be seeing that resentment/hostility toward the security group is a bigger risk than the surface problem.
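As a sketch of that "couple of places to configure": the usual environment-variable sweep for distributing a corporate root CA looks something like the following (the CA path is hypothetical, and Java still needs its own keystore import):

```shell
# Hypothetical corporate root CA, exported once so the common HTTP stacks
# all verify against it instead of failing (or being told to skip checks):
export CORP_CA=/etc/ssl/certs/corp-root-ca.pem

export CURL_CA_BUNDLE="$CORP_CA"       # curl and libcurl consumers
export SSL_CERT_FILE="$CORP_CA"        # OpenSSL-based tools, Python's ssl, Ruby
export REQUESTS_CA_BUNDLE="$CORP_CA"   # Python requests
export NODE_EXTRA_CA_CERTS="$CORP_CA"  # Node.js (appended to its built-in store)
export GIT_SSL_CAINFO="$CORP_CA"       # git over HTTPS

# Java keeps its own trust store and needs a separate import, e.g.:
#   keytool -importcert -cacerts -alias corp-root -file "$CORP_CA"
```

Even with all of these set, some runtimes (Go with custom cert pools, Deno, certain Electron apps) still need per-tool handling, which is where the "edge case leads to verification being ignored" problem creeps back in.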
35. immibis ◴[] No.46236465{9}[source]
Are they using it to snoop on the traffic, or are they merely using it to block viruses? Lack of encryption is not a guarantee of snooping. I know in the USA it can be assumed that you can do whatever you want with unencrypted traffic, which guarantees that if your traffic is unencrypted, someone is snooping on it. In Europe, this might not fly outside of three-letter agencies (who you should still be scared of, but they are not your employer).
36. sceptic123 ◴[] No.46244398{6}[source]
I would still say that is loose — are connection issues caused by staff using streaming services generally considered to be DoS?

And on balance I'd say losing Integrity is a bad trade off to make here.