My only concern is that they lose patience with their hype-driven competition and start doing hype-driven stuff themselves.
What’s horrible about V8?
From a security standpoint it really irks me when projects prominently ask their users to do the `curl mywebsite.com/foo.sh | sh` thing. I know risk acceptance is different for many people, but if you download a file before executing it, at least you or your antivirus can check what it actually does.
As supply chain attacks are a significant security risk for a Node/Deno stack application, the `curl | sh` is a red flag that signals to me that the author of the website prefers convenience over security.
With a curl request directly executed, this can happen:
- the web server behind mywebsite.com/foo.sh provides malware for the first request from your IP, but when you request it again it will show a different, clean file without any code
- a MITM attack gives you a different file than everyone else receives
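The lower-risk alternative is simply splitting download from execution, so the bytes you inspect are the bytes you run (URL taken from the example above):

```shell
# Download to a file instead of piping straight into sh
curl -fsSLo foo.sh https://mywebsite.com/foo.sh
# Read it (or hand it to a scanner) before executing
less foo.sh
sh foo.sh
```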
Node/Deno applications using the npm ecosystem put a lot of blind trust into npm servers, which are hosted by Microsoft, and therefore easily MITM'able by government agencies.
Looking at the official Deno docs at https://docs.deno.com/runtime/getting_started/installation/, the second option they offer after `curl | sh` is the much more secure `npm install -g deno`. Here at least some file-integrity checks and basic malware scanning are done by npm when downloading and installing the package.
Even though Deno has excellent programmers working on the main project, the deno.land website might not always be as secure as the main codebase.
Just my two cents, I know it's a slippery slope in terms of security risk but I cannot say that `curl | sh` is good practice.
And depending on what "interesting" IP address you are coming from, NSA/Microsoft/Apple will MITM your npm install / Windows update / iOS update accordingly.
Same in the Linux ecosystem: if you look at the maintainers of popular distributions, some of them had .ru / .cn email addresses before switching to more official addresses using the project domain - IMO this change of email addresses happened due to public pressure on Russia after the Ukraine invasion. With access to a distribution's main package signing keys, you can serve special packages from your package mirror to interesting targets.
All of these scenarios are extremely hard to prove after the fact and the parties involved are not the type of people who do public writeups.
See `man systemd.exec`, `systemd-analyze security`, https://wiki.archlinux.org/title/Systemd/Sandboxing
E.g. --allow-net --deny-net=1.1.1.1
You cannot fetch "http://1.1.1.1" but any domain that resolves to 1.1.1.1 is a bypass...
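A hedged sketch of that bypass (the `--allow-net`/`--deny-net` flags are real; `evil.example` is a made-up domain assumed to have an A record pointing at 1.1.1.1):

```shell
# Blocked: the literal IP matches the deny list
deno eval --allow-net --deny-net=1.1.1.1 'await fetch("http://1.1.1.1/")'

# Allowed: Deno compares the hostname string, not the resolved address,
# so a domain that resolves to 1.1.1.1 slips through the deny list
deno eval --allow-net --deny-net=1.1.1.1 'await fetch("http://evil.example/")'
```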
It's crap security
How do I install npm? The npm webpage tells me to go and install nvm. And that tells me to use curl | sh.
So using npm for a new user is still requiring a curl | sh, just in a different place.
However, for more generic code, Linux-isms often signal a certain "works-on-my-machine" mentality that might even hinder cross-distro compatibility, let alone getting things to work on Windows and/or macOS development machines.
I guess a Rust binding for V8 is a tad borderline, not necessarily low-level but still an indicator that there's a lack of care for getting things to work on other machines.
If this were a real blocker, then C/C++ wouldn't be used in production either, since both just lean on the language-agnostic CVE/GHSA/etc databases for any relevant vulnerabilities there... and C also heavily encourages just vendoring in entire files from the internet with no way to track down versions.
Anyway, doesn't "deno.lock" exist, and anyone who cares can opt-in to that, and use the versions in there to check vulnerability databases?
If the Deno runtime implements the fetch module itself, then post-resolution checking definitely should be done. It's more of a bug than a principled security lapse, though.
It boils down to the question: "is it more likely that the attacker can impersonate or control `npm` servers than our own servers?" If the answer to that question is "no", then curl pipe sh is no less secure than `npm install`.
This is security theater. If you're assuming an attacker can impersonate anyone on the internet, your only secure option is to cut the cable.
However, in general I don't think Deno's permission system is all that amazing, and I am annoyed that people sometimes call it "capability-based" (I don't know whether this ever came from the Deno team or just from misinformed third parties).
I do like that "deno run https://example.com/arbitrary.js" has a minimum level of security by default, and I can e.g. restrict it to read and write my current working dir. It's just less helpful for combining components of varying trust levels into a single application.
deno can do this via --(allow/deny)-read and --(allow/deny)-write for the file system.
You can do the same for net too
https://docs.deno.com/runtime/fundamentals/security/#permiss...
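For example (the paths and script name here are made up):

```shell
# Read anywhere under the current dir except ./secrets; write only to ./output
deno run --allow-read=. --deny-read=./secrets --allow-write=./output main.ts
```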
Most of these installation scripts are just simple bootstrappers that will eventually download and execute millions of lines of code authored and hosted by the same people behind the shell script.
You simply will not be capable of personally auditing those millions of lines of code, so this problem boils down to your trust model. If you have so little trust in the authors behind the project that you'd suspect them of pulling absurdly convoluted ploys like:
> the web server behind mywebsite.com/foo.sh provides malware for the first request from your IP, but when you request it again it will show a different, clean file without any code
How can you trust them to not hide even more malicious code in the binary itself?
I believe the reason this flawed argument has spread like a mind virus throughout the years is that it is easy to do and easy to parrot in every mildly relevant thread.
It is easy to audit a 5-line shell script. But to personally audit the millions of lines of code behind the binary that that script will blindly download and run anyway? Nah, that's real security work and no one wants to actually do hard work here. We're just here to score some easy points and signal that we're a smart and security-conscious person to our peers.
> which are hosted by microsoft, and therefore easily MITM'able by government agencies.
If your threat model includes government agencies maliciously tampering with your Deno binaries, you have far more things to worry about than just curl | sh.
See for instance...
Setup instructions for Pkgsrc on macOS with the SmartOS people's binary caches: https://pkgsrc.smartos.org/install-on-macos/
Spack installation instructions: https://spack-tutorial.readthedocs.io/en/latest/tutorial_bas...
Guix setup used to look like this, but now they offer a shell script for download. Even so, the instructions advise saving it first and walk you through what to expect during installation.
Anyway, my point is that there are other ways to instruct people about the same kind of install process.
# Download package and its checksum
curl -fsSLO https://example.com/example-1.0.0.tar.gz
curl -fsSLO https://example.com/example-1.0.0.tar.gz.sha256
# Verify the checksum
sha256sum -c example-1.0.0.tar.gz.sha256
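For what it's worth, that check does catch tampering that happens after the checksum was published; a local sketch of what `sha256sum -c` actually detects (file names made up):

```shell
# Simulate a published artifact and its checksum
printf 'original contents\n' > pkg.tar.gz
sha256sum pkg.tar.gz > pkg.tar.gz.sha256

# Verification passes while the file is untouched
sha256sum -c pkg.tar.gz.sha256        # prints "pkg.tar.gz: OK"

# Modify the artifact after the checksum was taken
printf 'tampered contents\n' > pkg.tar.gz
sha256sum -c pkg.tar.gz.sha256        # prints "pkg.tar.gz: FAILED", exit status 1
```

Of course, this only helps when the checksum is obtained over a channel the attacker doesn't also control.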
But if the server is compromised, the malicious actor would likely be able to serve a matching hash for their file?

Here is an example of a small microcut I faced (which might be fixed now): https://github.com/honojs/hono/issues/1216
In contrast, Bun had less cognitive overhead and just "worked", even though it didn't feel as clean as Deno. Some things aren't perfect with Bun either, like the lack of a Bun runtime on Vercel.
Running `deno install` in a directory with a package.json will create a leaner version of node_modules, and running `deno task something` will run the scripts defined in `package.json`.
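For example, in a directory with an existing package.json (the script name here is made up):

```shell
# package.json contains something like: { "scripts": { "build": "tsc" } }
deno install      # installs dependencies from package.json into node_modules
deno task build   # runs the "build" script defined in package.json
```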
Deno's way of doing things is a bit problematic, as I too find it is often a timesink where things don't work, and if you then have to escape back to node/npm it becomes a bigger hassle. Using Deno with package.json is easier.
FWIW, I don't have a strong opinion here, besides that I like Debian's model the most. Just felt it was worth pointing out the above.
Also, yeah, Deno's npm compatibility keeps getting better; as mentioned in these 2.4 release notes, there are a few new improvements. As another comment in these threads points out, for a full stack like the one you were trying, going package.json-first with Deno can give a better compatibility experience than deno.json-first, even if the deno.json-first approach is the nicer/cleaner one long term, or when you can go 0-60 in a Deno-native/ESM-native greenfield.
That's a bit of a silly model.
No, you can allow access to specific domains, IP addresses, filesystem paths, environment variables, etc, while denying everything else by default. You can for instance allow access to only a specific IP (e.g. `deno run --allow-net='127.0.0.1' main.ts`), while implicitly blocking every other IP.
What the commenter is complaining about is that Deno doesn't check which IP address a domain name actually resolves to. So if you explicitly deny '1.1.1.1', and the script you're running fetches from a domain with an A record pointing to '1.1.1.1', Deno will allow it.
In practice, I usually use allow lists rather than deny lists, because I very rarely have an exhaustive list on hand of every IP address or domain I'm expecting a rogue script to attempt to access.