Here is the 9 year old bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1267027
And their extension store does not permit workarounds, even though they themselves have confirmed it's a bug.
That would keep "static form" helpers still functional, but disable (malicious) runtime templating.
I absolutely agree with you. I've been very, very keen on CSP for a long time; it feels SO good to know that that vector for exploiting vulnerabilities is plugged.
One thing that's very noticeable: it seems to block/break -a lot- of web extensions. Basically every error I see in Sentry is of the form "X.js blocked" or "random script eval blocked", stuff that's all extension-related.
For example I helped uBlock Origin out in 2022 when they ran into this: https://github.com/uBlockOrigin/uBlock-issues/issues/235#iss...
https://extensionworkshop.com/documentation/publish/add-on-p...
I strongly disagree: inlining your entire CSS and JS is absurdly good for performance, up to a surprisingly large size. If you have less than 100KB of JS and CSS (which almost every content site should be able to manage, most trivially, and almost all should aim for), there’s simply no question about it: I would recommend deploying with only inline styles and scripts. The threshold where it becomes more subjective is, for most target audiences, possibly over half a megabyte by now.
Seriously, it’s ridiculous just how good inlining everything is for performance, whether for first or subsequent page load; especially when you have hundreds of milliseconds of latency to the server, but even when you’re nearby. Local caches can be bafflingly slow, and letting the browser just execute it all in one go without even needing to look for a file has huge benefits.
It’s also a lot more robust. Fetching external resources is much more fragile than people tend to imagine.
I recently implemented a couple of tools to generate[1] and validate[2] a CSP. I'd be glad if anybody tries them.
[1] https://www.csphero.com/csp-builder [2] https://www.csphero.com/csp-validator
https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/CSP
They said this was not allowed and removed it from the extension store.
Why? Some sites implement this and then break it, sadly.
I have extremely locked down instances for banks and so on. On Linux I have an icon which lets me easily launch those extra profiles.
I also use user.js, which means I can just drop in changes, and write comments for each config line, and keep it version controlled too. Great for cloning to other devices too.
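For anyone curious, a user.js is just a list of user_pref() calls that Firefox applies on every startup, with comments allowed; the specific prefs below are only illustrative examples of the kind of thing you might pin down, not a recommendation:

```
// user.js -- Firefox re-applies these prefs at every startup, overriding the UI.
// Example prefs only; pick ones that match your own threat model.

// Harden against fingerprinting (can break some sites)
user_pref("privacy.resistFingerprinting", true);

// Wipe site data when the browser closes
user_pref("privacy.sanitize.sanitizeOnShutdown", true);

// No disk cache for this locked-down profile
user_pref("browser.cache.disk.enable", false);
```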
I think it was noticed only because this version had a major bug that broke a bunch of websites.
I appreciate they had to move for other reasons but I also really don't like the idea that the DevTools and browser chrome itself now has all of the same security issues/considerations as anything else "web" does. It was bad with Electron (XSS suddenly becoming an RCE) and makes me pretty nervous here too :(
In principle, you could imagine the server packing all the external resources that the browser will definitely ask for together, and just sending them together with the original website. But I'm not sure how much re-engineering that would be.
XUL was in many ways always a ticking time bomb.
1. Inlining everything burns bandwidth, even if it's 100KB each. (I hope your cloud hosting bills are small.) External resources can be cached across multiple pageloads.
2. Best practice is to load CSS files as early as possible in the header, and load (and defer) all scripts at the end of the page. The browser can request the CSS before it finishes loading the page. If you're inlining scripts, you can't defer them.
3. If you're using HTTP/2+ (it's 2025, why aren't you?[0]), the connection stays open long enough for the browser to parse the DOM to request external resources, cutting down on RTT. If you have only one script and CSS, and they're both loaded from the same server as the HTML, the hit is small.
4. As allan_s mentioned, you can use nonce values, but those feel like a workaround to me, and the values should change on each page load.
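For reference, the nonce approach looks roughly like this; the nonce value here is made up and would have to be freshly generated for every response:

```
Content-Security-Policy: script-src 'nonce-d29vZHkK'

<script nonce="d29vZHkK">
  // allowed: carries the nonce the server put in the header
</script>
<script>
  // blocked: inline script without the nonce
</script>
```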
> Local caches can be bafflingly slow, and letting the browser just execute it all in one go without even needing to look for a file has huge benefits.
Source? I'd really like to know how and when slow caches can happen, and possibly how to prevent them.
[0] Use something like nginx, HAProxy, or Cloudflare in front of your server if needed.
Do you have data to back this up? What are you basing this statement on?
My intuition agrees with you for the reasons you state but when I tested this in production, my workplace found the breakeven point to be at around 1KB surprisingly. Unfortunately we never shared the experiment and data publicly.
I don't have a source I can link to or share, but cache outliers are a real thing. If you aggregate Resource Timing results, you'll find some surprising outliers in that dataset where transferSize=0 (i.e. a cached load on Chrome). You'll have users with a slow/contended disk even though they have a fast network link, and you'll also have the reverse: users with a fast cache but a slow network link (high latency, low bandwidth, or both).
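If you want to poke at this yourself, here's a rough sketch using the Resource Timing API (the 100 ms cutoff is arbitrary):

```
// Chrome reports transferSize === 0 for resources served from the HTTP cache.
// List cached resources that were nonetheless slow to come back.
const cachedButSlow = performance.getEntriesByType("resource")
  .filter(e => e.transferSize === 0 && e.duration > 100)
  .map(e => ({ name: e.name, duration: Math.round(e.duration) }));
console.table(cachedButSlow);
```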
There's no universal answer here, and I feel like the above poster is trying to oversimplify a complex problem into a one-size-fits-all answer. You'll have different users making up your distribution, and you'll have to decide how you weight optimizations. This could very much depend on your product, on user expectations, and on whether your users are power users running a complex SaaS frontend or readers of a news site on a range of mobile devices.
A few years ago I traced this and noticed that Chrome has pseudo-O(n^2) behavior when pulling a bunch of sequential resources from its cache. I reported it, but I'm not sure if it ever got fixed.
I've been digging into the JS resource timing API. You've suggested a fascinating avenue for me to explore!
Which in the case of browsers should always be decided for the user, rather than balanced. The browser is a user agent. It is running on the user's hardware.
At our place we do abide by those rules, but we also use third-party components like Telerik/Kendo which require unsafe-inline for both scripting and styling. Sometimes you have no choice but to relax your security policy.
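Concretely, that means shipping something like the following policy (a sketch, with the rest of the policy elided), which is obviously much weaker than a nonce- or hash-based one:

```
Content-Security-Policy: script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline'
```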
``` <h3> hello $user </h3> ```
with $user being equal to `<script>/* sending your session cookie out, or the value of the tag #credit-card etc. */</script>`
You would be surprised how many template libraries that supposedly escape things for you are actually vulnerable to this, so "React escapes for me" is not something you should 100% rely on. At a company I was working for, the common vulnerability found was
`<h3> {{ 'hello dear <strong>$user</strong>' | translate | unsafe }} </h3>` with `unsafe` deactivating the auto-escaping, because people wanted the feature released and thinking of a way to translate a string intermixed with HTML was too time-consuming.
As for inline styles: injected CSS may hide elements so that you end up typing a sensitive value into the wrong field, or load a background image (which will 'ping' a recipient host).
With CSP activated, the vulnerability may still exist, but the JavaScript/style will not be executed/applied, so it's a safety net to cover the 0.01% case of "somebody has found an exploit in one of your libraries".
What you use as an example has nothing to do with inline/"external" scripts at all, but everything to do with setting DOM contents vs text content. Most popular frameworks/libraries handle that as securely by default as one could (including React) and only when you use specific ways (like React's dangerouslySetInnerHTML or whatever it is called today) are you actually opening yourself up to a hole like that.
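To make that concrete, a minimal sketch (userInput stands in for any attacker-controlled value):

```
const userInput = '<img src=x onerror="alert(document.cookie)">'; // attacker-controlled

// Escaped by default: rendered as text, the payload never executes.
const safe = React.createElement("h3", null, "hello " + userInput);

// Explicit opt-out: raw HTML injection, this is where the hole opens up.
const risky = React.createElement("h3", { dangerouslySetInnerHTML: { __html: userInput } });
```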
If you cannot rely on the escaping of whatever templating engine/library you're using, you're using someone's toy templating library and probably should switch to a proper one that you can trust, ideally after actually reviewing that it lives up to whatever expectations you have.
> `<h3> {{ 'hello dear <strong>$user</strong>' | translate | unsafe }}`
This would have been the exact same hole regardless if it was put there/read by external/inline JavaScript.
I do agree with your last part that CSP does help slightly with "defense in depth", for when you do end up with vulnerabilities.
Wouldn't even matter, as it's the origin of wherever it ends up being executed that matters, not where the code was loaded from. So JS code loaded from cdn.jquery.com on mywebsite.com would have the origin mywebsite.com, even if loaded with a typical <script> tag.
In short, CORS applies to network requests made by scripts, not to the scripts themselves
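A small illustration (hypothetical page, reusing the URLs from above):

```
<!-- served from https://mywebsite.com -->
<script src="https://cdn.jquery.com/jquery.min.js"></script>
<script>
  // Both the external and the inline script run with this document's origin:
  console.log(location.origin); // "https://mywebsite.com"
  // A fetch() they make to mywebsite.com is same-origin; CORS only comes into
  // play when they request a *different* origin.
</script>
```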
Obviously hard to say what those tradeoffs are worth, but I'd be a bit nervous about it. The work covered by this post is a good thing, of course!
Do you mean people should be banned from inlining Google Analytics or Meta Pixel or Index Now or whatever, which makes a bunch of XHRs to who knows where? Absolutely!
But nerfing your own page performance just to make everything CSP-compliant is a fool's errand.
As is often the case with security, the downsides of locking something down may not be worth the increased security.
Another reason not to prohibit inline scripts and stylesheets is if you need to dynamically generate them (although I think strict-dynamic would allow that).
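If I understand 'strict-dynamic' correctly, the idea is roughly the sketch below: a nonce'd bootstrap script can create further scripts at runtime via DOM APIs and they inherit its trust, without needing their own nonce (the nonce and path here are made up):

```
Content-Security-Policy: script-src 'nonce-Ym9vdHN0cmFw' 'strict-dynamic'

<script nonce="Ym9vdHN0cmFw">
  // Trusted bootstrap: scripts it inserts via DOM APIs are allowed to run.
  const s = document.createElement("script");
  s.src = "/generated/bundle.js"; // illustrative path
  document.head.appendChild(s);
</script>
```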
> External resources can be cached across multiple pageloads.
That only matters if the resource is actually shared across multiple pages
It is also designed to be optional (the "never break the web" mentality), so what happens in practice is the same as with CORS: allow everything, because web devs don't understand what to do and don't have time to read the RFC.
For example: try getting a web page to run that uses a web assembly binary _and_ an external JS library. Come back after 2 weeks of debugging and let me know what your experience was like, and why you eventually gave up on it.
> What you use as an example has nothing to do with inline/"external" scripts at all, but everything to do with setting DOM contents vs text content.
I fail to understand your point (it's certainly an understanding problem on my side)
What I wanted to express is that: 1. with some CSP you can forbid cross-origin scripts (i.e. forbid hacker.ru/toto.js) from being loaded, 2. but even if you do this, you also want to block inline scripts (or use inline+nonce), because an evil script can be executed from within your origin via a vulnerability somewhere between the code and the final DOM.
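Rough sketch of both points (stealCookies() just stands in for whatever the injected payload would be):

```
Content-Security-Policy: script-src 'self'

<!-- blocked: cross-origin script (point 1) -->
<script src="https://hacker.ru/toto.js"></script>

<!-- also blocked: injected inline script (point 2), unless you switch to nonces -->
<script>stealCookies()</script>
```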
> This would have been the exact same hole regardless if it was put there/read by external/inline JavaScript.
Yes, we both agree that it's the same vulnerability in the end; I'm just saying that you can arrive there from different paths, and these different paths are protected by different CSP mechanisms.
1. To be honest, you should refrain from loading resources outside of domains you control.
2. Even if you do, you can protect yourself against somebody replacing jquery.js with something totally different by using <script integrity='the_hash'> (sketch after this list).
3. If it's a CDN you control, it's usually quite hard to inject something into the resource, because a potential hacker has no obvious input on it (and you can still protect it with the integrity attribute above).
4. So then most people feel safe and forget that inline scripts can still be dynamically created if you have a hole in your libraries generating DOM code, so this path of attack needs to be blocked completely (forbidding inline scripts altogether) or protected (using a nonce).
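For point 2, subresource integrity looks roughly like this; the hash is a placeholder you'd compute from the real file, and crossorigin is needed for SRI checks on cross-origin loads:

```
<script src="https://cdn.example.com/jquery.min.js"
        integrity="sha384-REPLACE_WITH_REAL_BASE64_HASH"
        crossorigin="anonymous"></script>
```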