I've definitely felt the awkwardness of gems being so compartmentalized by project: system-level dev tools that I like to have available for all my projects feel out of place in my project's Bundler-centric world.
You likely get better per-language features from something specific to the language, though. But we end up in exactly the same kind of frustration: for some random project you need this one specific tool that does dependency management for that specific runtime. asdf and mise both respect a .tool-versions file; I'd rather see things go more in that direction, with some kind of standard.
We really don’t have the features they’ve been discussing, including the npx-like feature and the ability to just run Ruby without installer headaches, which it seems they’ve set out to solve.
Reframing: I'd like to ask that .tool-versions be supported as a place where we can define Ruby versions. Then, with a little tweaking, both tools could pretty much be used side by side.
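For anyone unfamiliar, .tool-versions is just plain tool/version pairs, one per line, which both asdf and mise pick up when you cd into the directory (the versions below are purely illustrative):

```
ruby 3.4.1
nodejs 22.11.0
```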
When I cd into a project directory I get all of the tools that project needs, at the exact versions they were tested with. When I cd out of the directory, they go away. If the dependencies are well behaved (e.g. they don't do any self-modification, which is annoyingly common in the npm world) then it's often pretty easy to track all of your deps this way, imported from your npm package-lock.json or similar.
Ask your favorite LLM to write your flake.nix file for you, they're pretty good at it. I've been able to drop all of the bespoke language-specific tool versioning stuff `nvm`, `uv`, `rvm`, etc for all my personal projects, and it makes it easy to add deps from outside the language-specific package managers, like ffmpeg and lame.
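For the curious, a minimal dev-shell flake of that kind is short. This is a sketch: the exact package attribute names (e.g. ruby_3_4) depend on the nixpkgs revision you pin, and the system is hardcoded here for brevity:

```nix
{
  description = "Per-project dev shell";

  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      devShells.x86_64-linux.default = pkgs.mkShell {
        # Language runtime and non-language deps side by side
        packages = [ pkgs.ruby_3_4 pkgs.ffmpeg pkgs.lame ];
      };
    };
}
```

Pair it with direnv (`use flake` in .envrc) and the tools appear when you cd in and go away when you cd out.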
I've been very happy with `mise` since switching from asdf, and also very happy with uv in general. I think they play nice together.
Language Orchestrator?
Typically I wire up something like uv or rv (or Poetry or Bundler, which are fine but slower) to shell activation using Devenv, then autogenerate the Nix package from the language-native dependency specification files.
Microsoft chose Go for the tsc rewrite. https://devblogs.microsoft.com/typescript/typescript-native-...
And then there's esbuild, also in Go, which revolutionized web bundling speed https://esbuild.github.io
Do I understand right that it doesn't use Bundler's code for resolving the gem dependency tree, but uses its own code meant to be compatible? Hmmm.
And it also produces the `Gemfile.lock`, a format that has seen quite a lot of churn in Bundler; Bundler has had to work to keep it from breaking for people, even under the assumption that everyone is using (different versions of) Bundler.
It's impossible to do this kind of rewrite from a GC language to a non-GC one, especially Rust, where the object soup of TypeScript would probably cause the borrow checker to explode.
I think that if MS or someone else decided to write a typescript type checker from scratch there is a high chance Rust will be chosen.
Which is not to say that Go can't do well in tooling. Only that Go was not necessarily their first choice.
It doesn't even have advanced generics like TypeScript, nor union types. No classes and no inheritance either.
Unless you have a source, I'd say that's a very debatable speculation.
My guess is they chose Go for the same reason most users do: it's good enough, easy to grasp, and has a decent std lib.
Around the 13 minute mark, Anders goes into it. IIRC, the big things were the GC and them both supporting cyclic data structures.
I've only ever just straight up downloaded the source and installed it myself, never had any issues with Ruby updates...
flake.nix is Nix-specific, I would guess?
I can think of one meaningful breaking change going from 2.7 to 3.0, where the longtime behavior of creating an implicit "options hash" as the last argument to a method was finally removed. It was gradual though. First we got keyword arguments. Then keyword arguments got faster. Then there were two versions of warnings about the removal of the old, implicit options hash. Thus if you really wanted to kick the can down the road, you had about 5 years where you could be on a "supported" Ruby version without having fixed your code. Plus the change required was relatively simple outside of a few edge cases.
The best part was that most of the community was pretty good about upgrading gems across the years of warnings we had for this. Hats off to the maintainers and contributors for that!
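As a reminder of what that migration looked like in practice, here's a sketch of the implicit-options-hash change (the method and variable names are made up):

```ruby
# Ruby <= 2.6 silently converted a trailing hash into keyword
# arguments; 2.7 printed a deprecation warning; 3.0 raises.
def configure(job, retries: 1)
  [job, retries]
end

opts = { retries: 3 }

# configure("import", opts)  # 2.7: warning; 3.0+: ArgumentError
configure("import", **opts)  # the fix: an explicit double-splat
# => ["import", 3]
```

As the comment above notes, in most code the fix really was just adding `**` at call sites (or `**options` in signatures).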
What if Google had spent all that time and money on something from the outside instead of inventing their own language? Like, Microsoft owns npm now.
mise looks nice: it uses PATH manipulation rather than asdf's slow wrappers, and it supports Windows, which is a point over nix. nix only supports unixy environments like Linux, macOS, and WSL.
What might tempt a mise user to try nix is its truly stupendous collection of packages, so more tools are available. You can also easily add your own packages, either upstream or privately. nix is bigger, more ambitious, and more principled, but more complicated. You can build an entire fully reproducible operating system from a short nix config. It's really cool! But it's also a lot more to learn, more surface area, more places to get confused or to spend time fiddling with configs rather than solving the actual problem.
> nix-locate -r 'bin/uv'
Not perfect, but sort of useful for choosing names for executables for internal corporate projects, little wrapper scripts, etc. It's definitely still possible to find reasonable names!
But I haven't done that analysis for such short names yet :D
As an idea: add the advantages compared to rvm, rbenv, etc., or a comparison table.
> Ruby Versions: Ruby 3.4.1 and up
It turns out this only covers the latest Ruby versions :(
But I will follow the development!
If I understand correctly, rvm/rbenv only install Ruby versions, and you use bundler to install dependencies. rv seems to manage everything (like uv in Python) – Ruby versions and dependencies, and adds things on top of that (like `rv tool` for global tools installation).
Deserializing JSON and XML is a breeze from my experience. And it's available out of the box. But I guess C++ will get there with reflection having been approved in C++26.
So I don't think it will go away (in the coming years at least), since a lot of tools are written in it.
In your case, StringIO used to just be stdlib code, so Bundler (or RubyGems) uses it. Later on it became a default gem, so by requiring it before reading the Gemfile, Bundler ran into this problem of having already loaded the wrong version.
Every time this happens the Bundler team has to modify Bundler, and as a user the fix is to upgrade Bundler.
You can see they had to vendor a lot of code from default gems to avoid this problem: https://github.com/rubygems/rubygems/tree/c230844f2eab31478f...
Depending on a library that uses duck typing (like any sane library following Ruby conventions)? Good luck writing a wrapper for it. Or just disable type checking.
This goes so much against the Ruby vibe that I'd advise you to just go use Rust instead, if you hate Ruby so much that you want to butcher it with types.
It does welcome nil proliferation though! Just sprinkle some `nilable` around and you're set to continue the actual scourge of the language.
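To illustrate the duck-typing point with a toy example (the method here is hypothetical): anything responding to `<<` works, so there's no single nominal type a wrapper could annotate without widening it to "anything":

```ruby
require "stringio"

# Accepts any "sink" that responds to #<< -- Array, String,
# StringIO, a logger, an open File, ...
def log_to(sink, message)
  sink << message
  sink
end

log_to([], "hello")            # => ["hello"]
log_to(+"", "hello")           # => "hello"
log_to(StringIO.new, "hello")  # a StringIO containing "hello"
```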
- By default uv creates isolated environments in the project directory and downloads all dependencies over the network. For small stuff this isn't too bad, but re-downloading 700 MB of pytorch each time you clone a repo gets annoying very fast. Of course there are trade-offs with running updates less frequently (and uv has flags such as --offline and --refresh to avoid or force online access), but more sensible default behavior would be nice, so that uv (and rv) keep you on the happy path during development. Maybe updates could run in the background by default.
- Also, because the environments aren't shared in any way, each project directory consumes a lot of disk space (10x checkouts = 10x pytorch on disk). More sensible caching across environments would be nice (links?).
- Using uv to turn Python files into standalone/self-contained scripts is really great (https://peps.python.org/pep-0723/) and I hope rv can mirror this capability well. Because the lock file isn't included in the script header it requires some configuration options to make runs repeatable (e.g. https://docs.astral.sh/uv/guides/scripts/#improving-reproduc...).
- I am wondering if rv will support version operators such as rv install "~> 3.4.4" to get ruby ">= 3.4.4, < 3.5.0", which I think would help ensure everyone is running Ruby versions with security patches applied.
- uv includes the pip sub-command (similar to using gem install rather than bundle add) but because the environments are isolated this feels rather weird and I haven't really understood in which cases you should not just use "uv add" to update your project dependencies.
- Uv tries hard to support migration from legacy Python projects which don't have a pyproject.toml, but the Python eco-system is too fragmented for this to always work. I hope rv can avoid adding new config files, but really stick to the existing Gemfile approach.
- If a Python package includes a script with a different name than the package then the syntax is a bit annoying ('uvx --from package script' but not 'uvx script --from package' because this would get passed to the script). Uv already uses square brackets for optional project dependencies (e.g. 'uvx --from huggingface_hub[cli] hf') but since Ruby doesn't have these, maybe this would be an option for rv.
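On the `~>` point above: RubyGems already models the pessimistic operator, so rv could presumably reuse the same semantics. A quick sketch with Gem::Requirement:

```ruby
require "rubygems"  # normally loaded automatically by Ruby

req = Gem::Requirement.new("~> 3.4.4")  # i.e. >= 3.4.4, < 3.5.0

req.satisfied_by?(Gem::Version.new("3.4.9"))  # => true
req.satisfied_by?(Gem::Version.new("3.5.0"))  # => false
```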
I put my hope in mise-en-place - https://mise.jdx.dev
What do people think? One tool per language, or one to rule them all?
However…more than once we've seen language runtimes that used to be available exclusively via plug-ins be migrated to be internal to mise, which broke everyone's setups in strange and hilarious ways, and caused countless hours of debugging.
Less bad overall than using individual runtime version managers, for sure. But the next time mise costs us a bunch of hours fixing multiple engineers' setups, I intend to find another solution, even if that means writing my own. It has come close to burning us one time too many.
On the other, I'm not sure if this is really needed. Most of this stuff already works fine in Ruby with Bundler. Did you know that Bundler already has a really nice syntax for inline requirements for single-file scripts?[0] Seems like a lot of people forgot. Installing Ruby hasn't generally been much of a hassle either AFAIK. Bundler also doesn't seem to have the Python venv problem - it works fine for keeping a bunch of gem versions around in the same Ruby install and only activating the specified ones. I think Gemfile and Gemfile.lock is what Python always wished they had. I guess more speed never hurt, but it never felt like bundler was painfully slow for me, even on huge codebases. So is there really a big win here?
Though I guess plenty of Python gurus probably feel the same way about the uv craze when their existing tooling works well enough for them.
[0] https://bundler.io/guides/bundler_in_a_single_file_ruby_scri...
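For reference, the inline Gemfile syntax from [0] looks like this (the gem and version constraint here are illustrative; already-installed gems are used as-is and auto-required, and passing `gemfile(true)` installs missing ones):

```ruby
require "bundler/inline"

gemfile do
  source "https://rubygems.org"
  gem "json", "~> 2.0"
end

puts JSON.generate({ hello: "world" })  # prints {"hello":"world"}
```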
* manage dependencies
* format and lint code
* publish package on crates.io
* open the project documentation
* install binaries
* build/run the project
* run tests
* run benchmarks
Uv/rv don't (yet?) do all of that but they also manage Ruby/Python versions (which is done separately by rustup in Rust).
Pip subcommands are here to ease the transition from the old ecosystem to the new.
I'm also excited about `rv tool` because I've been having to re-install rubocop and ruby-lsp gems every time the minor version of the system Ruby is updated. It's just a few commands every year (and I'm sure it's a skill issue) but having things "just work" with a single `rvx rubocop` command will be sweet.
Ruby/Bundler doesn’t have any of these problems, and nothing on their roadmap really excites me.
Except maybe the binary Ruby distribution, but it’s a once or twice a year thing, so not exactly a major pain point.
https://github.com/regularfry/rv
(Kidding! Looks interesting.)
If indirect is salty that rubygems/bundler hasn't yet turned out to be what he wanted, I wonder whether a simpler and faster alternative to Bundler written in RUBY wouldn't be the answer, with incremental merges into Bundler. Gel was mostly there, even if most people never knew about it, but at least it got the Bundler people to merge the PubGrub resolver.
(looks like they've only got installing certain ruby versions working for now.)
[0] https://github.com/spinel-coop/rv/blob/main/docs/PLANS.md
Not true, each virtual environment has its own physical copy on disk.
Since venvs are usually created per project, you end up with many downloads/copies.
def somemethod(foo, bar)
  # Rightward pattern matching (Ruby 3.0+) doubles as a runtime
  # type assertion: it raises NoMatchingPatternError on mismatch.
  foo => Integer
  bar => MyBarClass
end
Personally I think the RBS::Inline format is the way forward. Sorbet has experimental support for it too.

That, and with Ruby, Node, and at least one other language/tool IIRC, when support for those things moved internal, we had to make a bunch of changes to our scripts to handle the change with effectively no warning. That involved checking whether the third-party plug-in was installed, uninstalling it if so, and then installing the language via the built-in support. In the meantime, the error messages encountered were not super helpful in understanding what was going on.
I’m hopeful that these types of issues are behind us now that most of the things we care about are internal, but still, it’s been pretty annoying.
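On the RBS::Inline format mentioned above: the annotations are ordinary comments read by the rbs-inline tooling, so annotated code runs on plain Ruby. A sketch (the class and signatures are made up):

```ruby
# rbs_inline: enabled

class Greeter
  # @rbs name: String
  # @rbs return: String
  def greet(name)
    "Hello, #{name}!"
  end
end

Greeter.new.greet("Ruby")  # => "Hello, Ruby!"
```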
After decades of Java (ant/maven), then Scala (sbt), then JS (npm), then TS, switching to Go (Make) some years ago made many problems go away.
The tooling (test, ...) lives inside the main tool (Rust is still a little better, I think), tools/versions now live inside go.mod, and the result of my work is a binary I can run with systemd, with files embedded into it. All of that removed a lot of dependency-management issues, build issues, and bundling issues, and the need for e.g. Docker for many use cases (which I feel many people only use to package .jar files or gems into something that can run in production).
Seems rv wants the same, "Our end goal is a completely new kind of management tool, [...] Not a version manager, or a dependency manager, but both of those things and more. I’m currently calling this category a “language manager”"
Actually, I'd say this is where Go has a real advantage. Are any other mainstream languages both garbage-collected (for ease of development) and native-compiled (for ease of distribution)?
This: "Then, run the script in an isolated virtual environment"
Universal applicability may not be necessary to write a Ruby installer, but it certainly is to have any hope of taking C's crown.
My environment reasserts that the correct things are installed every time I change into the directory. A no-op `bundle install` takes a couple hundred milliseconds, which is not great for something you want to run constantly and automatically. Getting that down to tens of milliseconds will be really nice for me!
There's also benefits to dynamically typed languages, namely runtime metaprogramming.
Dynamic typing + type annotations is the worst of both worlds for questionable benefit.
Python, JavaScript (via TypeScript), PHP, and Elixir
have embraced gradual typing/type inference. You use typed variables and typed function signatures when it's convenient; they give you some compile-time contracts, easy documentation, and probably even speed. Otherwise they don't exist. I don't do Ruby, but gradual typing/type inference is a no-brainer for dynamic languages: practically no drawbacks, only benefits. (And popular statically typed languages such as C/C++, Java, and Rust support type inference, or are heading there too.)
> I don't do Ruby
So why have an opinion?
Languages I use: Ruby, C++, Odin, R. I'm not about to go around telling Rust, Python, or TypeScript people they're doing their languages wrong, even if there are things I hate about those languages. I just don't use them.
I could start taking bets every time I now see a "new kind of tool" for XYZ.
I am firmly in the camp that tools for a specific ecosystem should be written in the language of that ecosystem.
If one is hindered by the language, maybe that isn't the right ecosystem, or therein lies the opportunity for language toolchain improvements.
Ruby has a strong history and tradition of TDD/unit testing. This is partly a crutch for its lack of static type analysis. Ultimately, checking for bugs before running code is very useful.
It seems you have a lot of opinion here without really discussing your problem with type hints though. What is it you dislike?
I use Ruby regularly, have used it for more than a decade, and I wish it had something like a TypeScript equivalent (Sorbet is sorta this, but not enough). Every time I work with a Ruby codebase without Sorbet, it's a lot of guessing and praying that test coverage is good enough. It's not fun finding out in prod that there are dumb bugs that would have been caught by static analysis, something I experience virtually never in TypeScript, the other language I've also used for a decade.
It adds runtime overhead (or you write a transpiler, which then makes the language less dynamic), it makes metaprogramming more annoying (other languages solve this with an "any" type, which just defeats the purpose), and the "problem" it solves can be solved with tests, which you should be writing anyway.
I do use statically typed languages BTW, I just prefer that if I'm going to go through the rigmarole of types that I'm going to get some actual benefit, namely an order of magnitude more performance.
My opinion is probably this since I don't work for a large corporation, I have my own startup, so I value my time a lot. I'm ok with trade-offs, I just think adding type hints to a dynamic language isn't a good trade off; it's almost all downside.
Edit:
> guessing and praying that test coverage is good enough.
You can always improve your test coverage...
On the other hand, a uv-like all-in-one for Ruby is really interesting/tempting.
If you actually want types in Ruby, you should check out Sorbet: https://sorbet.org/
I really appreciate cargo a lot for what it brings, even if it's calling different tools under the covers. Similarly, I appreciate deno in that it brings all the tooling and the runtime in a single executable (box). I've migrated most of my scripting to TypeScript using deno at this point because of that distributive ease. Even if the shebang itself is a bit sloppy.
Aside: it would be cool to have a VS Code extension that does file-type detection based on a shebang at the top of the file, making it easier to write extensionless script files with language support.
It's not as painful as earlier versions and is generally okay to work with... but there are definitely times you need to work around the typing as opposed to with it. For example extending/using context in hono/oak/koa so you get hinting, but want your endpoint handlers in separate modules. It gets very messy, very quickly.
Also if you're ok being dirty, short-lived processes can just leak. Some of those Go CLIs are probably not even GCing before the program exits.
I don't know if the trend started with uv, but please for future engineers trying to debug/learn make it Google-able.
Plus since when is AOT compilation required for CLI tools?
Half of UNIX userspace would be gone.
It's not uncommon to have to download pytorch many times, because some lib may specify one slightly different version than another: https://docs.astral.sh/uv/guides/integration/pytorch/#config...
pytorch causes many headaches in the ecosystem, and a better handling of it is one thing their commercial offering, pyx, wants to fix: https://astral.sh/blog/introducing-pyx
Agree to disagree!