Most active commenters
  • davidmurdoch(6)
  • theandrewbailey(4)
  • gear54rus(4)
  • allan_s(4)
  • evilpie(3)
  • pama(3)
  • eru(3)

180 points evilpie | 71 comments | HN request time: 2.058s | source | bottom
1. davidmurdoch ◴[] No.43630753[source]
Firefox really needs to fix its CSP handling for extensions before doing this kind of thing.

Here is the 9 year old bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1267027

And their extension store does not permit workarounds, even though they themselves have confirmed it's a bug.

replies(4): >>43630784 #>>43630796 #>>43630948 #>>43630984 #
2. Semaphor ◴[] No.43630784[source]
Having fewer permissions for extensions than one might want seems far less important than making the browser more secure…
replies(2): >>43631143 #>>43641315 #
3. pama ◴[] No.43630796[source]
Wouldn’t fixing this bug reduce security?
replies(2): >>43630891 #>>43631166 #
4. theandrewbailey ◴[] No.43630873[source]
CSP is really great at plugging these kinds of security holes, but it flummoxes me that most developers and designers don't take it seriously enough to implement it properly (styles must only be set through <link>, and JS likewise exists only in external files). Doing any styling or scripting inline should be frowned upon as hard as table-based layouts.
replies(6): >>43630934 #>>43631184 #>>43631253 #>>43632334 #>>43633733 #>>43635528 #
5. shakna ◴[] No.43630891{3}[source]
If you are using filter scripts to block specific domains or script payloads, that extension can't act on a properly secured CSP page. And that page may be using CSP to protect the ads it throws up... or malware.
replies(1): >>43633800 #
6. myfonj ◴[] No.43630918[source]
I am surprised there is no policy that would allow inline event handlers set in the initial payload (or stuff emitted by document.write), but neuter any done after initial render by `….setAttribute('on…', …)`.

That would keep "static form" helpers still functional, but disable (malicious) runtime templating.

7. pocketarc ◴[] No.43630934[source]
> should be frowned upon as hard as table-based layouts

I absolutely agree with you. I've been very very keen on CSP for a long time, it feels SO good to know that that vector for exploiting vulnerabilities is plugged.

One thing that's very noticeable: It seems to block/break -a lot- of web extensions. Basically every error I see in Sentry is of the form of "X.js blocked" or "random script eval blocked", stuff that's all extension-related.

8. gear54rus ◴[] No.43630948[source]
One of the possible workarounds would be to just remove the damn header before it causes any further inconvenience. I think they do allow `webRequest` API usage in the store, don't they?
replies(2): >>43630991 #>>43631303 #
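(Editor's note: a rough sketch of the workaround this comment describes, purely illustrative — as the thread goes on to note, AMO policy forbids removing security headers. The function name is hypothetical; the `{ name, value }` header shape matches the `webRequest` API.)

```javascript
// Remove any Content-Security-Policy headers from a webRequest-style
// `responseHeaders` array of { name, value } objects.
function stripCsp(responseHeaders) {
  return responseHeaders.filter(
    (h) => h.name.toLowerCase() !== 'content-security-policy'
  );
}

// In an extension it would be wired up roughly like this (browser-only):
// browser.webRequest.onHeadersReceived.addListener(
//   (details) => ({ responseHeaders: stripCsp(details.responseHeaders) }),
//   { urls: ['<all_urls>'] },
//   ['blocking', 'responseHeaders']
// );
```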
9. evilpie ◴[] No.43630984[source]
While this is definitely annoying, most of the time the extension can work around it without resorting to workarounds that themselves weaken security.

For example I helped uBlock Origin out in 2022 when they ran into this: https://github.com/uBlockOrigin/uBlock-issues/issues/235#iss...

replies(2): >>43631179 #>>43631287 #
10. evilpie ◴[] No.43630991{3}[source]
Removing security headers like Content-Security-Policy is forbidden by the addons.mozilla.org policy.

https://extensionworkshop.com/documentation/publish/add-on-p...

replies(1): >>43631001 #
11. gear54rus ◴[] No.43631001{4}[source]
I don't think this is being enforced in practice, thankfully.
replies(1): >>43631306 #
12. SebFender ◴[] No.43631014[source]
CSP is a soothing cream, but it is usually easily bypassed by other simple attacks relying on poor DOM management and security. To this day, my team has never found so many web vulnerabilities as by just going into the DOM...
replies(1): >>43631204 #
13. joshuaissac ◴[] No.43631143{3}[source]
Arguably, it can make it less secure by reducing the user's control over what content the browser loads or what scripts it executes. For example, users may be using extensions to selectively replace harmful content (like intrusive JavaScript, tracking) with benign content. It is a balance between security for the user and security for the website owner.
replies(2): >>43631244 #>>43633386 #
14. foobar9898989 ◴[] No.43631152[source]
Mozilla's finally realizing what my paranoid uncle has been shouting for years: "They're coming for your browser UI!"

Jokes aside, it's pretty cool seeing them implement CSP in the front-end. Kind of like putting a security guard at the entrance of a bank that already has 50 guards inside. But hey, that 51st guard might be the one who catches the bad guy!

The separation between privileged and unprivileged processes reminds me of my relationship with coffee - I know I shouldn't let it access my system too often, but somehow it always finds a way in.

What's actually impressive is how Firefox keeps evolving despite being around forever (in internet years). Most of us would have given up and said "eh, good enough" years ago. Next thing you know they'll be securing the about:config page with a pop quiz on quantum physics.
15. davidmurdoch ◴[] No.43631166{3}[source]
No, it's explained in more detail in the issue. An extension is part of the "User Agent". The CSP header in FF is applied to extensions almost arbitrarily.
replies(1): >>43633794 #
16. KwanEsq ◴[] No.43631179{3}[source]
And it's worth noting that since your comment later in that thread about sandbox being an issue, that's been fixed too as of Firefox 128: https://bugzilla.mozilla.org/show_bug.cgi?id=1411641
17. chrismorgan ◴[] No.43631184[source]
> Doing any styling or scripting inline should be frowned upon as hard as table-based layouts.

I strongly disagree: inlining your entire CSS and JS is absurdly good for performance, up to a surprisingly large size. If you have less than 100KB of JS and CSS (which almost every content site should be able to, most trivially, and almost all should aim to), there’s simply no question about it, I would recommend deploying with only inline styles and scripts. The threshold where it becomes more subjective is, for most target audiences, possibly over half a megabyte by now.

Seriously, it’s ridiculous just how good inlining everything is for performance, whether for first or subsequent page load; especially when you have hundreds of milliseconds of latency to the server, but even when you’re nearby. Local caches can be bafflingly slow, and letting the browser just execute it all in one go without even needing to look for a file has huge benefits.

It’s also a lot more robust. Fetching external resources is much more fragile than people tend to imagine.

replies(4): >>43631249 #>>43631792 #>>43632338 #>>43632478 #
18. sixaddyffe2481 ◴[] No.43631204[source]
Their blog has a lot of posts on trying to attack Firefox. If it's so simple, why are you not in the bug bounty hall of fame? :)
replies(2): >>43639666 #>>43640632 #
19. gear54rus ◴[] No.43631244{4}[source]
Exactly. It's been clearly established that web extensions' code is more privileged than page code, as it should be. The amount of people going 'muh sesoority' in this thread is baffling.
20. allan_s ◴[] No.43631249{3}[source]
note that for inline style/script, as long as you're not using `style=''` or `onclick=''`, you can use `nonce=` (or a hash) and, to my understanding, newly added inline scripts will not be tolerated, giving you the best of both worlds
replies(1): >>43632491 #
21. athanagor2 ◴[] No.43631253[source]
Honest question: I don't understand how forbidding inline scripts and style improves security. Also it would be a serious inconvenience to the way we distribute some of our software right now lol
replies(4): >>43631299 #>>43631348 #>>43631357 #>>43633910 #
22. yanis_t ◴[] No.43631255[source]
CSP is great in mitigating a whole bunch of security concerns, and it also forces some good practices (e.g. not using inline scripts).

I recently implemented a couple of tools to generate[1] and validate[2] a CSP. Would be glad if anybody tries it.

[1] https://www.csphero.com/csp-builder [2] https://www.csphero.com/csp-validator
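(Editor's note: a toy sketch of the kind of parsing a CSP validator performs — not the linked tools' actual implementation — splitting a policy into directives and their source lists.)

```javascript
// Parse a Content-Security-Policy string into a map of
// directive name -> array of source expressions.
function parseCsp(policy) {
  const directives = {};
  for (const part of policy.split(';')) {
    const tokens = part.trim().split(/\s+/).filter(Boolean);
    if (tokens.length === 0) continue;
    const [name, ...sources] = tokens;
    directives[name.toLowerCase()] = sources;
  }
  return directives;
}

const parsed = parseCsp("default-src 'self'; script-src 'self' 'unsafe-inline'");
// parsed['script-src'] → ["'self'", "'unsafe-inline'"]
```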

23. davidmurdoch ◴[] No.43631287{3}[source]
Thanks for this! I'll look into implementing it soon.
24. theandrewbailey ◴[] No.43631299{3}[source]
CSP tells the browser where scripts and styles can come from (not just inline, but origins/domains, too). Let's pretend that an attacker can inject something into a page directly (like a SQL injection, but HTML). That script can do just about anything, like steal data from any form on the page, like login, address, or payments, or substitute your elements for theirs. If inline resources are forbidden, the damage can be limited or stopped.

https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/CSP

replies(1): >>43631729 #
25. davidmurdoch ◴[] No.43631303{3}[source]
We modified the CSP to inject a per-user generated nonce that exempts its script from the policy.

They said this was not allowed and removed it from the extension store.

26. davidmurdoch ◴[] No.43631306{5}[source]
It is. It happened to us a few weeks ago.
replies(1): >>43631428 #
27. bbarnett ◴[] No.43631341[source]
Do this, and then use Firefox's profiles to have weaker instances without these configs.

Why? Some sites implement then break this, sadly.

I have extremely locked down instances for banks and so on. On Linux I have an icon which lets me easily launch those extra profiles.

I also use user.js, which means I can just drop in changes, and write comments for each config line, and keep it version controlled too. Great for cloning to other devices too.

28. bryanrasmussen ◴[] No.43631348{3}[source]
sounds weird to me too, although I guess there could be a script that was not allowed to do CORS that then instead created an inline script and did its CORS stuff in that script - about the only way I can think of it being bad.
replies(1): >>43634318 #
29. flir ◴[] No.43631357{3}[source]
Cross-Site Scripting. If a user injects a malicious script into the page, it doesn't get run.
30. gear54rus ◴[] No.43631428{6}[source]
That's crazy. Did it happen to a public extension or an unlisted one?
replies(1): >>43631459 #
31. davidmurdoch ◴[] No.43631459{7}[source]
Public, with about half a million installations.

I think it was noticed only because this version had a major bug that broke a bunch of websites.

32. lol768 ◴[] No.43631552[source]
This is an entire class of vulnerabilities that would've never been possible with XUL, is that correct?

I appreciate they had to move for other reasons but I also really don't like the idea that the DevTools and browser chrome itself now has all of the same security issues/considerations as anything else "web" does. It was bad with Electron (XSS suddenly becoming an RCE) and makes me pretty nervous here too :(

replies(1): >>43631677 #
33. emiliocobos ◴[] No.43631677[source]
Xul would've had the same issues.
replies(2): >>43632263 #>>43633749 #
34. magicalhippo ◴[] No.43631729{4}[source]
Still recall the classic forum exploits of including Javascript in your signature or similar, before such software started escaping input.
35. eru ◴[] No.43631792{3}[source]
I think that's a limitation of our implementations. In principle, it's just bytes that we're shoving down the pipe to the browser, so it shouldn't matter for performance whether those bytes are 'inline' or in 'external resources'.

In principle, you could imagine the server packing all the external resources that the browser will definitely ask for together, and just sending them together with the original website. But I'm not sure how much re-engineering that would be.

replies(2): >>43631989 #>>43632398 #
36. erikerikson ◴[] No.43631989{4}[source]
In principle there's no difference between principle and practice.
replies(1): >>43633267 #
37. WorldMaker ◴[] No.43632263{3}[source]
XUL would have had worse issues, because it could make arbitrary XPCOM calls to all sorts of native components, exposing nearly the full gamut of issues in native code written mostly in C/C++.

XUL was in many ways always a ticking time bomb.

replies(1): >>43639122 #
38. myko ◴[] No.43632334[source]
I find not being able to use inline styles extremely frustrating
39. theandrewbailey ◴[] No.43632338{3}[source]
It's called Content Security Policy, not Content Performance Policy. My thoughts:

1. Inlining everything burns bandwidth, even if it's 100KB each. (I hope your cloud hosting bills are small.) External resources can be cached across multiple pageloads.

2. Best practice is to load CSS files as early as possible in the header, and load (and defer) all scripts at the end of the page. The browser can request the CSS before it finishes loading the page. If you're inlining scripts, you can't defer them.

3. If you're using HTTP/2+ (it's 2025, why aren't you?[0]), the connection stays open long enough for the browser to parse the DOM to request external resources, cutting down on RTT. If you have only one script and CSS, and they're both loaded from the same server as the HTML, the hit is small.

4. As allan_s mentioned, you can use nonce values, but those feel like a workaround to me, and the values should change on each page load.

> Local caches can be bafflingly slow, and letting the browser just execute it all in one go without even needing to look for a file has huge benefits.

Source? I'd really like to know how and when slow caches can happen, and possibly how to prevent them.

[0] Use something like nginx, HAProxy, or Cloudflare in front of your server if needed.

replies(2): >>43632605 #>>43639858 #
40. Perseids ◴[] No.43632398{4}[source]
This feature actually existed (see https://en.wikipedia.org/wiki/HTTP/2_Server_Push ) but was deemed a failure unfortunately (see https://developer.chrome.com/blog/removing-push )
replies(1): >>43633273 #
41. bgirard ◴[] No.43632478{3}[source]
> If you have less than 100KB of JS and CSS (which almost every content site should be able to, most trivially, and almost all should aim to), there’s simply no question about it

Do you have data to back this up? What are you basing this statement on?

My intuition agrees with you for the reasons you state but when I tested this in production, my workplace found the breakeven point to be at around 1KB surprisingly. Unfortunately we never shared the experiment and data publicly.

replies(3): >>43632660 #>>43633346 #>>43642901 #
42. LegionMammal978 ◴[] No.43632491{4}[source]
It does seem like CSP nonces do not play well with caching (since they must have a different value on each page load), which would make them a detriment to performance.
replies(1): >>43632772 #
43. bgirard ◴[] No.43632605{4}[source]
> Source? I'd really like to know how and when slow caches can happen.

I don't have a source I can link to or share. But cache outliers are a real thing. If you aggregate Resource Timing results, you'll find some surprising outliers in that dataset where transferSize=0 (aka cached load on Chrome). You'll have users with a slow/contended disk even though they have a fast link, but you'll also have the reverse: users with a fast cache and a slow network link (high latency, low bandwidth, or both).

There's no universal answer here, and I feel like the above poster tries to oversimplify a complex problem into one-size-fits-all answers. You'll have different users making up your distribution, and you'll have to decide how you weight optimizations. This could very much depend on your product and its expectations, and on whether your users are power users running a complex SaaS frontend or a news site supporting a range of mobile devices.

A few years ago I traced and noticed that Chrome has a pseudo O(n^2) behavior when pulling a bunch of sequential resources from its cache. I reported it, but I'm not sure if it got fixed.

replies(1): >>43632670 #
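(Editor's note: a hedged sketch of the aggregation described above, with mocked data and a hypothetical function name. Entry shapes follow the Resource Timing API; in a real page the entries would come from `performance.getEntriesByType('resource')`.)

```javascript
// Partition Resource Timing entries into cached loads (transferSize === 0,
// per Chrome's convention) and network loads, then compare mean durations
// to surface "slow cache" outliers.
function cacheStats(entries) {
  const cached = entries.filter((e) => e.transferSize === 0);
  const network = entries.filter((e) => e.transferSize > 0);
  const avg = (xs) =>
    xs.length ? xs.reduce((s, e) => s + e.duration, 0) / xs.length : 0;
  return { cachedAvgMs: avg(cached), networkAvgMs: avg(network) };
}

const stats = cacheStats([
  { name: 'app.js', transferSize: 0, duration: 120 }, // slow cached load
  { name: 'app.css', transferSize: 0, duration: 2 },
  { name: 'api.json', transferSize: 5120, duration: 40 },
]);
// Here the cached average (61ms) exceeds the network average (40ms):
// exactly the counter-intuitive outlier pattern the comment describes.
```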
44. ◴[] No.43632660{4}[source]
45. theandrewbailey ◴[] No.43632670{5}[source]
> If you aggregate Resource Timing results, you'll find some surprising outliers in that dataset where transferSize=0 (aka cached load on Chrome).

I've been digging into the JS resource timing API. You've suggested a fascinating avenue for me to explore!

46. SahAssar ◴[] No.43632772{5}[source]
You can also include a hash of the contents in the CSP, which plays well with caching.
replies(1): >>43638620 #
47. eru ◴[] No.43633267{5}[source]
Simple models are still useful: understanding exactly how and why they fail is instructive. There's a reason spherical cows in a vacuum come up again and again.
48. eru ◴[] No.43633273{5}[source]
Thanks for the links! Yes, my comment was based on a vague recollection of this kind of thing.

I'll read up on '103 early hints', 'preload', and 'preconnect', which might be close enough in practice.

49. ◴[] No.43633346{4}[source]
50. pessimizer ◴[] No.43633386{4}[source]
> It is a balance between security for the user and security for the website owner.

Which in the case of browsers should always be decided for the user, rather than balanced. The browser is a user agent. It is running on the user's hardware.

51. sebazzz ◴[] No.43633733[source]
> it flummoxes me that most developers and designers don't take them seriously enough to implement properly (styles must only be set though <link>, and JS likewise exists only in external files). Doing any styling or scripting inline should be frowned upon as hard as table-based layouts.

At our place we do abide by those rules, but we also use 3rd party components like Telerik/Kendo which require unsafe-inline for both scripting and styling. Sometimes you have no choice but to relax your security policy.

52. sebazzz ◴[] No.43633749{3}[source]
It still surprises me parts of Firefox still use XUL.
53. pama ◴[] No.43633794{4}[source]
Thanks!
54. pama ◴[] No.43633800{4}[source]
Thanks.
55. allan_s ◴[] No.43633910{3}[source]
Forbidding inline scripts protects you from

``` <h3> hello $user </h3> ```

with $user being equal to `<script>/* sending your session cookie out, or the value of the tag #credit-card etc. */</script>`

You would be surprised how many template libraries that supposedly escape things for you are actually vulnerable to this, so "React escapes for me" is not something you should rely on 100%. In a company I worked for, the most common vulnerability found was

`<h3> {{ 'hello dear <strong>$user</strong>' | translate | unsafe }}`

with `unsafe` deactivating the auto-escape, because people wanted the feature released and thinking of a way to translate a string intermixed with HTML was too time-consuming.

As for inline styles, an injection may hide elements so that you enter a sensitive value into the wrong field, or load a background image (which will 'ping' a recipient host).

With CSP activated the vulnerability may still exist, but the JavaScript/style will not be executed/applied, so it's a safety net covering the 0.01% case of "somebody has found an exploit in one of your libraries".

replies(2): >>43634274 #>>43641386 #
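(Editor's note: a minimal illustration of the injection hole discussed above. `escapeHtml` is the usual five-entity replacement, not any particular library's implementation.)

```javascript
// Replace the characters that let user input break out of text context.
function escapeHtml(s) {
  return s.replace(/[&<>"']/g, (c) => ({
    '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;',
  }[c]));
}

const user = '<script>steal(document.cookie)</script>';
const unsafe = `<h3> hello ${user} </h3>`;            // script tag survives
const safe = `<h3> hello ${escapeHtml(user)} </h3>`;  // rendered as text
```

With CSP forbidding inline scripts, even the `unsafe` variant's payload would be refused by the browser — which is the safety-net point the comment makes.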
56. diggan ◴[] No.43634274{4}[source]
> forbidding inline script protect you from

What you use as an example has nothing to do with inline/"external" scripts at all, but everything to do with setting DOM contents vs text content. Most popular frameworks/libraries handle that as securely by default as one could (including React) and only when you use specific ways (like React's dangerouslySetInnerHTML or whatever it is called today) are you actually opening yourself up to a hole like that.

If you cannot rely on the escaping of whatever templating engine/library you're using, you're using someone's toy templating library and should probably switch to a proper one you can trust, ideally after actually reviewing that it lives up to whatever expectation you have.

> `<h3> {{ 'hello dear <strong>$user</strong>' | translate | unsafe }}`

This would have been the exact same hole regardless if it was put there/read by external/inline JavaScript.

I do agree with your last part that CSP does help slightly with "defense in depth", for when you do end up with vulnerabilities.

replies(1): >>43643839 #
57. diggan ◴[] No.43634318{4}[source]
> although I guess there could be a script that was not allowed to do CORS that then instead created an inline script and did its CORS stuff in that script

Wouldn't even matter, as it's the origin of wherever it ends up being executed that matters, not where the code was loaded from. So JS code loaded from cdn.jquery.com on mywebsite.com would have the origin mywebsite.com, even if loaded with a typical <script> tag.

In short, CORS applies to network requests made by scripts, not to the scripts themselves

replies(1): >>43634536 #
58. bryanrasmussen ◴[] No.43634536{5}[source]
ah yeah, sorry wasn't thinking clearly.
59. CamouflagedKiwi ◴[] No.43635427[source]
I can't help but wonder if this HTML-based setup is actually more trouble than it's worth. It seems there's a very complex ecosystem in there that is hard to reason about in this way, and it's a top-level requirement for a browser to sandbox the various bits of code being executed from a web page.

Obviously hard to say what those tradeoffs are worth, but I'd be a bit nervous about it. The work covered by this post is a good thing, of course!

60. midtake ◴[] No.43635528[source]
Why? If you're the content owner, you should be able to. If you factor out inline code, you will likely just trust your own other domain. When everything is a cdn this can lead to less security not more.

Do you mean people should be banned from inlining Google Analytics or Meta Pixel or Index Now or whatever, which makes a bunch of XHRs to who knows where? Absolutely!

But nerfing your own page performance just to make everything CSP-compliant is a fool's errand.

61. LegionMammal978 ◴[] No.43638620{6}[source]
True, a hash works as a good alternative. (Unless you're doing super weird stuff like generating inline scripts at runtime.)
62. fabrice_d ◴[] No.43639122{4}[source]
The current frontend still has the same XPCOM privilege access from JS, so as emiliocobos said, XUL vs. HTML does not change the security boundary. It's only a different markup language.
63. SebFender ◴[] No.43639666{3}[source]
Professional limits...
64. thayne ◴[] No.43639858{4}[source]
> It's called Content Security Policy, not Content Performance Policy

As is often the case with security, the downsides of locking something down may not be worth the increased security.

Another reason not to prohibit inline scripts and stylesheets is if you need to dynamically generate them (although I think strict-dynamic would allow that).

> External resources can be cached across multiple pageloads.

That only matters if the resource is actually shared across multiple pages

65. h4ck_th3_pl4n3t ◴[] No.43640632{3}[source]
The problem with CSP is that it's fixing the effect, not the cause.

It is also made in a way that it is optional (never break the web mentality), so what happens in practice is the same as with CORS: allow all, because web devs don't understand what to do, and don't have time to read the RFC.

For example: try getting a web page to run that uses a web assembly binary _and_ an external JS library. Come back after 2 weeks of debugging and let me know what your experience was like, and why you eventually gave up on it.

66. raxxorraxor ◴[] No.43641315{3}[source]
In the current browser landscape I would think not. Firefox is no less secure than Chrome or Safari and both are subject to economic incentives. You could even argue these issues negatively relate to security as well.
67. raxxorraxor ◴[] No.43641386{4}[source]
What is the security difference between someone injecting something into your page and injecting something into an external resource?
replies(1): >>43643914 #
68. wizzwizz4 ◴[] No.43642901{4}[source]
I would expect it to be closer to 1KB, as well. 100KB is (at time of writing) about 5× the size of this webpage, and this doesn't load instantly for me.
69. allan_s ◴[] No.43643839{5}[source]
my point is that all your templates are in the end written by humans who make mistakes (https://nvd.nist.gov/vuln/detail/CVE-2024-6578); you may not be using "dangerouslySetInnerHTML" directly, but your dependencies may

> What you use as an example has nothing to do with inline/"external" scripts at all, but everything to do with setting DOM contents vs text content.

I fail to understand your point (it's certainly an understanding problem on my side)

What I wanted to express is that:

1. you can, with some CSP directives, forbid cross-origin scripts (i.e. forbid hacker.ru/toto.js from being loaded)

2. but even if you do, you also want to block inline scripts (or use inline+nonce), because an evil script can be executed from within your origin via a vulnerability somewhere between the code and the final DOM

> This would have been the exact same hole regardless if it was put there/read by external/inline JavaScript.

yes, we both agree that it's the same vulnerability in the end; I'm just saying that you can arrive there by different paths, and these different paths are protected by different CSP mechanisms

70. allan_s ◴[] No.43643914{5}[source]
First, in the end the effect is the same (XSS, which can be used to extract information); the difference is "how someone can get to it" and "how you protect yourself from it".

1. to be honest, you should refrain from loading resources outside of domains you control

2. even if you do, you can protect yourself from somebody replacing jquery.js with something totally different by using `<script integrity='the_hash'>`

3. if it's a CDN you control, it's usually quite hard to inject something into the resource, because a potential hacker has no obvious input on it (and you can still protect it with the above `integrity=`)

4. so most people feel safe and forget that inline scripts can still be dynamically created if you have a hole in your libraries generating DOM code, so this path of attack needs to be blocked completely (by forbidding inline scripts entirely) or protected (with a nonce)