This is one of the nicer ones.
It looks pretty conservative in its use of Rust's advanced features. The code is easy to read and follow, and there's actually a decent amount of comments (for Rust code).
Not bad!
> Currently, Asterinas only supports x86-64 VMs. However, our aim for 2024 is to make Asterinas production-ready on x86-64 VMs.
I'm confused.
> By 2024, we aim to achieve production-ready status for VM environments on x86-64.

> In 2025 and beyond, we will expand our support for CPU architectures and hardware devices.
It's more of a research OS, but still cool.
The big concern I have however is hardware support, specifically networking hardware.
I think a very interesting approach would be to boot the machine with a FreeBSD or Linux kernel purely for hardware and network support, and use a sort of Rust OS/abstraction layer for everything else, bypassing (or simply not using) the originally booted kernel for all userland-specific stuff.
p.s.: i was wrong
>While we prioritize compatibility, it is important to note that Asterinas does not, nor will it in the future, support the loading of Linux kernel modules.
https://asterinas.github.io/book/kernel/linux-compatibility....
They might not yet implement everything that's needed to boot a standard Linux userland, but you could, say, boot straight into a web server built for Linux instead of booting into init.
> While we prioritize compatibility, it is important to note that Asterinas does not, nor will it in the future, support the loading of Linux kernel modules.
> If everything goes well, Asterinas is now up and running inside a VM.
Seems like the developers are very confident about it too
> [...] we accommodate the business need for proprietary kernel modules. Unlike GPL, the MPL permits the linking of MPL-covered files with proprietary code.
Glancing at the readme, it also looks like they are treating it as a big feature:
> Asterinas surpasses Linux in terms of developer friendliness. It empowers kernel developers to [...] choose between releasing their kernel modules as open source or keeping them proprietary, thanks to the flexibility offered by MPL.
Can't wait to glue some proprietary blobs to this new, secure rust kernel /s
You can check out hardware support here: https://core.dpdk.org/supported/nics/
There's no specification of that ABI, much less a compliance test suite. How complete is this compatibility?
Their conclusion is io_uring is still slower but not by much, and future improvements may make the difference negligible. So you're right, at least in part. Given the tradeoffs, DPDK may not be worth it anymore.
So lemme ask: what other languages and projects (open/closed, big/small, web/mobile/desktop, game/consumer app/biz app) do you have experience with that led you to this conclusion?
That's basically what you're getting with Docker containers and a shared kernel. AWS Lambda does something similar, with dedicated kernels, via Firecracker VMs.
https://asterinas.github.io/book/kernel/linux-compatibility....
This is all paraphrased from my memory, so take it with a grain of salt. I think the gist of it is still valid: Projects like Asterinas are interesting and have a place, but they will not replace Linux as we have it today.
(Asterinas, from what I understood, doesn't claim to replace Linux, but it's a common expectation.)
> Torvalds seemed optimistic that "some clueless young person will decide 'how hard can it be?'" and start their own operating system in Rust or some other language. If they keep at it "for many, many decades", they may get somewhere; "I am looking forward to seeing that". Hohndel clarified that by "clueless", Torvalds was referring to his younger self; "Oh, absolutely, yeah, you have to be all kinds of stupid to say 'I can do this'", he said to more laughter. He could not have done it without the "literally tens of thousands of other people"; the "only reason I ever started was that I didn't know how hard it would be, but that's what makes it fun".
Unprivileged services can exploit known compiler bugs and do anything they want in safe Rust. How does this affect their security model?
Sure is a lot of text to say: We try to use unsafe as little as possible.
Which is the minimum you'd expect anyways ¯\_(ツ)_/¯
0. https://asterinas.github.io/book/kernel/the-framekernel-arch...
> utilize the more productive Rust programming language
Nitpick: it's 2024 and these 'more productive' comparisons are silly, completely unscientific, and a bit of a red flag for your project. The most productive language for a developer is the one where they understand what is happening one layer below the level of abstraction they are working with. Unless you're comparing something like Ruby vs RISC-V assembly, it's just hocus-pocus.
As the saying goes "We do this not because it is easy, but because we thought it would be easy."
Occasionally these are starts of great things.
lol. No. They just added a CPU and then offloaded all the closed-source userspace driver code to it, leaving behind the same dumb open-sourceable kernel driver shim as before (i.e. instead of talking to userspace it talks to the GPU's CPU).
> The past 30 years of the Linux kernel's evolution has proven that there is no need for a stable kernel ABI.
What the last 30 years have shown is that there is actually a need for it, otherwise DKMS wouldn't be a thing. Heck, Intel's performance profiler can't keep up with the kernel changes, which means you get to pick between running an up-to-date kernel and being able to use the open source out-of-tree kernel module. The fact that Linux is alone in this should make it clear it's wrong. Heck, Android even wrote their own HAL to try to make it possible to update the kernel on older devices. It's an economics problem that the Linux kernel gets to pretend doesn't exist, but it's a bad philosophical position. It's possible to support refactoring and porting to new platforms while providing ABI compatibility, and Linux is way past the point where it would even be a minor inconvenience: all the code has ossified quite a bit anyway.
Ah good times.
That's an interesting example because Huawei equipment is currently being removed by several Western countries (UK, Canada, US, Germany) specifically because it's Chinese.
https://www.nytimes.com/2024/07/11/business/huawei-germany-b...
https://www.cbc.ca/news/politics/huawei-5g-decision-1.631083...
https://www.gov.uk/government/news/huawei-to-be-removed-from...
https://www.reuters.com/business/media-telecom/us-open-progr...
A lot of the benefit of DPDK is colocating your data and network stack in the same virtual memory context. io_uring I can see getting you there if you're serving fixed files as a CDN, kind of like Netflix's appliances, but for cases where you're actually doing branchy work on the individual requests, DPDK is probably a little easier to scale up to the faster network cards.
If this catches on and has generally been subject to significant third party code review with positive results, I'm not sure any backdoor is lower cost to use than an equivalent linux vulnerability. To be fair, I'm not sure it isn't either.
You look at Linux's syscall table[0], read through the documentation to figure out the arguments, data types, flags, return values, etc., and then implement that in your kernel. The Linux ABI is just its "library" interface to userspace.
It's probably not that difficult; writing the rest of the kernel itself is more challenging, and, frankly, more interesting. Certainly matching behavior and semantics can be tricky sometimes, I'm sure. And I wouldn't be surprised if the initial implementation of some things (like io_uring, for example, if it's even supported yet) might be primitive and poorly optimized, or might even use other syscalls to do their work.
But it's doable. While Linux's internal ABI is unstable, the syscall interface is sacred. One of Torvalds' golden rules is you don't break userspace.
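A toy sketch of what "implement the syscall table" means in practice (handler and numbers shown as stubs; only the dispatch idea is the point): a Linux-ABI-compatible kernel matches on the same syscall numbers Linux assigns on x86-64, and returns -ENOSYS for anything not yet implemented, exactly as Linux itself does.

```rust
// Linux returns -ENOSYS (-38) for unimplemented syscalls; a compatible
// kernel must do the same so userspace fallback paths keep working.
const ENOSYS: i64 = -38;

// Stub handler: a real kernel would return the current task's pid.
fn sys_getpid() -> i64 {
    42
}

// Dispatch on the Linux x86-64 syscall number (39 = getpid).
fn dispatch(nr: u64) -> i64 {
    match nr {
        39 => sys_getpid(),
        _ => ENOSYS, // anything not yet implemented
    }
}

fn main() {
    assert_eq!(dispatch(39), 42);
    assert_eq!(dispatch(9999), ENOSYS);
}
```

The hard part isn't the table itself; it's matching the exact semantics, flags, and error codes of each entry.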
I guess it depends on what they mean by "easy". Certainly it's easier in the sense that you can just write code all day long, and not have to deal with the politics about Rust inside Linux, or deal with all the existing C interfaces, finding ways to wrap them in Rust in good, useful ways that leverage Rust's strengths but don't make it harder to evolve those C interfaces without trouble on the Rust side.
But the bulk of Linux is device drivers. You can build a kernel in Rust (like Asterinas) that can run all of a regular Linux userland without recompilation, and I imagine it's maybe not even that difficult to do so. But Asterinas only runs on x86_64 VMs right now, and won't run on real hardware. Getting to the point where it could -- especially on modern hardware -- might take years. Supporting all the architectures and various bits of hardware that Linux supports could take decades. I suppose limiting themselves to three or four architectures, and only supporting hardware made more recently could cut that down. But still, it's a daunting project.
But then there's this Arc, Ref, Pinning and what not - how deep is that rabbit hole?
Let's take example of network. There's IP address, gateway, DNS, routes etc. Depending on distribution we might see something like netplan reading config files and then calling ABI functions?
Or Linux kernel directly also reads some config files? Probably not...
It’s probably easier if the kernel’s key goal is to be compatible with the Linux ABI rather than being compatible with its earlier self while bolting on Linux compatibility.
FWIW that’s what the Linux compatibility layer in the BSDs does and also what WSL 1 did (https://jmmv.dev/2020/11/wsl-lost-potential.html).
It’s hard to get _everything_ perfectly right but not that difficult to get most of it working.
The _static_ borrow checker can only check what is _statically_ verifiable, which is but a subset of valid programs. There are few things more frustrating than doing something you know is correct, but that you cannot express in your language.
Edit: looks like iproute2 uses NETLINK, but non-networking tools might use syscalls or device ioctls.
What tends to make Rust complex is advanced use of traits, generics, iterators, closures, wrapper types, async, error types… You start getting these massive semi-autogenerated nested types, the syntax sugar starts generating complex logic for you in the background that you cannot see but have to keep in mind.
It’s tempting to use the advanced type system to encode and enforce complex API semantics, using Rust almost like a formal verifier / theorem prover. But things can easily become overwhelming down that rabbit hole.
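A small illustration of the "semi-autogenerated nested types" point: each iterator adapter wraps the previous one in a new generic type that you never write out, but that the compiler tracks and surfaces in error messages.

```rust
// The intermediate `chain` value has the type
// Map<Filter<std::vec::IntoIter<i32>, {closure}>, {closure}>; you never
// spell it out, but it exists and shows up in compiler diagnostics.
fn doubled_odds(v: Vec<i32>) -> Vec<i32> {
    let chain = v
        .into_iter()
        .filter(|x| x % 2 == 1) // keep odd numbers
        .map(|x| x * 2);        // double them
    chain.collect()
}

fn main() {
    assert_eq!(doubled_odds(vec![1, 2, 3, 4, 5]), vec![2, 6, 10]);
}
```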
Rust was a great idea before LLMs, but I don't see the motivation for Rust when LLMs can be the initial solution for C/C++ 'problems'.
It doesn't matter how the language gets deployed: whether the runtime is in a container, a distroless container, or running directly on a hypervisor.
The runtime provides enough OS like services for the programming language purposes.
Lifetimes aren't bad, the learning curve is admittedly a bit high. Post-v1 rust significantly reduced the number of places you need them and a recent update allows you to elide them even more if memory serves.
Arc isn't any different than other languages, not sure what you're referring to by ref but a reference is just a pointer with added semantic guarantees, and Pin isn't necessary unless you're doing async (not a single Pin shows up in the kernel thus far and I can't imagine why I'd have one going forward).
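To illustrate the elision point: in most signatures the compiler infers the lifetimes for you, and the explicit form is just the desugared equivalent.

```rust
// With elision, the compiler infers that the returned &str borrows from `s`.
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

// The fully spelled-out equivalent; both compile to exactly the same thing.
fn first_word_explicit<'a>(s: &'a str) -> &'a str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    assert_eq!(first_word("hello world"), "hello");
    assert_eq!(first_word_explicit("hello world"), "hello");
}
```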
"SR-IOV was used on the NIC to enable the use of virtual functions, as it was the only NIC that was available during the study for testing and therefore the use of virtual functions was a necessity for conducting the experiments."
But what about tools like Valgrind in the context of C?
My theory is that this is essentially a long-term project to make the core of ChromeOS and Android rely on Fuchsia, which gives them syscall-level compatibility with what they both use at the moment, so that they would both essentially sit as products on top of that.
This is essentially the exact strategy they used if I remember correctly with the Nest devices where they swapped out the core and left the product on top entirely unchanged. Beyond that in a longer term scenario we might also just see a Fuchsia OS as a combined mobile / desktop workstation setup and I think part of that is also why we are seeing ChromeOS starting to take a dependency on Android’s networking stack as well right now.
[1] https://www.androidauthority.com/microfuchsia-on-android-345...
Wut? More than 10 years ago, a cheap beige box could saturate a 1 Gbps link with a kernel as it came from e.g. Debian, without special tuning. A somewhat more expensive box could get a good share of a 10 Gbps link (using jumbo frames), so these new results are, er, somewhat underwhelming.
The company I work for has both Rust and Python projects (though the Python ones partially predate "reasonable Python type linting" with mypy and co.), and the general consensus there is that overall Rust is noticeably more productive (and stable/reliable in usage), especially if you have code that changes a lot.
A company I previously worked for had used Rust in the very early days (around 1.0) and had one of those "let's throw up a huge prototype code base in a matter of days and then rewrite it later" situations (basically 90% of the code carried huge tech debt). But that code base stuck around way longer than intended and caused way fewer issues than expected. I had to maintain it a bit, and given my experience with similar code in Python and JS (and a bit of Java) I expected it to be very painful, but surprisingly it wasn't, like, at all.
Similarly, comparing the massive time wasted debugging soundness/UB issues in C/C++ with my experience in Rust, it's again way more productive.
So as long as you don't do bad stuff like over-obsessing with the type system, everything in my experience tells me using Rust is more productive (for many tasks, definitely not all; there are some really great frameworks doing a ton of work for you in some languages that the Rust ecosystem currently can't compete with).
---
> Most productive language for a developer is the one they understand what is happening one layer below the level of abstraction they are working with.
I strongly disagree: the most productive language is the one where the developer doesn't have to care much about what happens in the layer below, in most cases. At least as long as you don't obsess over micro-optimizations, which aren't worth the time and opportunity cost for most companies/use cases.
Arc is nothing more than reference counting. C++ can do that too, and I'm sure there are C libraries for it. That's not an admission of anything, it's actually solving the problem rather than ignoring it and hoping it doesn't crash your program in fun and unexpected ways.
Using Arc also comes with a performance hit because validation needs to be done at runtime. You can go back to the faster C/C++ style data exchange by wrapping your code in unsafe {} blocks, though, but the risks of memory corruption, concurrent access, and using deallocated memory are on you if you do it, and those are generally the whole reason people pick Rust over C++ in the first place.
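A minimal example of the pattern being discussed: Arc pays for an atomic reference count on each clone, and Mutex pays for locking on each access, both at runtime, in exchange for safe sharing across threads.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Each thread gets its own Arc clone (bumping an atomic refcount) and must
// lock the Mutex before touching the value: both costs are paid at runtime,
// which is the overhead mentioned above.
fn parallel_count(n_threads: usize) -> i32 {
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..n_threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || *counter.lock().unwrap() += 1)
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let n = *counter.lock().unwrap();
    n
}

fn main() {
    assert_eq!(parallel_count(10), 10);
}
```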
Generally this is a very interesting question that could be discussed in a very long thread, but still the reader will not get any value from it.
Is that the new generation of curl | bashism in action?
Provided you have virtio support you are ticking a lot of boxes already.
https://asterinas.github.io/book/osdk/guide/run-project.html
Have a look at AMD GPU driver. Massive, and full of 'stabilization/work around' code... happening all the time, for years.
I guess the real "first things first" is to design hardware (performant hardware on the latest silicon process) with a modern, standard, stable, and as-simple-as-possible hardware programming interface. Because for many types of hardware, 'now we know how to do it properly' (command hardware ring buffers usually, or a good compromise for modern CPU architectures, like RISC-V).
Another angle of "cleanup", I guess, would be the removal of many of the C compiler extension (or "modern C") tantrums from Linux, or at least proper alternatives with non-inline assembly, to allow small and alternative compilers to step in.
Personally, I tend to write rv64 assembly (which I interpret on x86_64), but for the userland. If I code C, I push towards mostly "simple and plain C99".
The more I think about it, the more I get the following coming to my mind: 'hardware with simple standard interfaces' and standard assembly for the kernel.
Several kernels for example use type-stable memory, memory that is guaranteed to only hold objects of a particular type, though perhaps only providing that guarantee for as long as you hold an RCU read-lock (this is the case in Linux with SLAB_TYPESAFE_BY_RCU). It is possible in some cases to be able to safely deal with references to objects where the "lifetime" of the referent has ended, but where by dint of it being guaranteed to be the same type of object, you can still do what you want to do.
This comes in handy when you have a problem that commonly appears in kernels where you need to invert a typical lock ordering (a classic case is that the page fault codepath might want to lock, say, VM object then page queue, but the page-replacement codepath will want to lock page-queue then VM object.)
Unfortunately it's hard to think of how the preconditions for these tricks could be formally expressed.
Absolutely! Let's dive into writing a device driver for the Intel i350 4 Port Gigabit Ethernet Controller using Rust. This is an exciting project that combines low-level hardware interaction with the safety and performance benefits of Rust. I'll create a basic structure for our driver, focusing on the key components needed to interact with the device.
#![no_std]
#![feature(abi_x86_interrupt)]
...
but I'm not qualified to judge the quality from eyeballing it, and I'm certainly not going to go to the trouble of trying to test it. It's not a great jump from that to "port Linux device driver for XYZ to this new OS in Rust". It won't be perfect, but it's a lot less hassle than doing it from scratch.
curl | bash has an actual problem: potential execution of an incomplete script (which can be mitigated by wrapping the whole script in a function that is invoked on the last line). And there's the mostly theoretical problem of the server being pwned / sending malicious code just to you (which of course also applies to any other unsigned channel). Arbitrary code execution is never a problem unique to it, but people dunk on it all the time because they saw another person dunking on it in the past.
- Technical problem (like connection problems) means I don't know what's in the db
- No technical problem, but no user entry
- No technical problem, and a user entry
You need the Result for the technical problems, and the Option for whether there's a user entry or not.
There was so little trust, given the fragility of the original, that it took a few months to convince everyone the refactored TS branch was safe.
After that, feature development was a lot faster in terms of productivity again.
If you're just starting out or doing something relatively simple, your goal is to get something working. This is so true regardless of the language.
No. I've written code that returns Result<Option<T>>. It was a wrapper for a server's query web API.
The Result part determines whether the request succeeded and the response is valid.
The Option part is because the parameter being queried might not exist. For example, if I ask the API for the current state of the user session with a given Session ID, but that Session ID does not exist, then the Rust wrapper could return OK(None) meaning that the request succeeded, but that no such session was found.
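A minimal sketch of that wrapper shape (the session lookup and its outcomes are invented for illustration): the Result layer reports transport/server failures, the Option layer reports whether the queried thing exists.

```rust
// Hypothetical session lookup showing the three distinct outcomes.
fn find_session(id: u32) -> Result<Option<String>, String> {
    match id {
        0 => Err("connection failed".to_string()), // request itself failed
        1 => Ok(Some("alice".to_string())),        // request ok, session exists
        _ => Ok(None),                             // request ok, no such session
    }
}

fn main() {
    assert_eq!(find_session(1), Ok(Some("alice".to_string())));
    assert_eq!(find_session(999), Ok(None));
    assert!(find_session(0).is_err());
}
```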
From the looks of it, this seems like a serious corporate-backed project made by employees of Ant Group, the Chinese fintech giant. A fairer comparison would be with Google's Fuchsia OS (defunct) or Huawei's HarmonyOS. It may succeed, it may fail, but it's nothing like a couple of kids doing a passion project to learn Rust.
As well as some subset of the files expected in /dev, /proc, /sys, and similar, which are also part of the userspace ABI. And the startup mechanisms for processes, and the layout of AUXV...
It's absolutely doable, but the interface is wider than just the syscall layer.
An example that illustrates this: https://lwn.net/Articles/22991/
(And wow, it's been 22 years already...?)
That is why a query that successfully returns no items can be represented as Ok(None).
A successful query with items returned would instead be Ok(Vec<Item>).
An error in completing the query (for example, a problem with the database) would instead be Err(DatabaseError) or Err(SomeOtherError).
But in this case, a query using an invalid session ID is not an error. It is asking for details about something that does not exist.
>> cf File::open that could return Result<Option<File>> if file is not found.
This type of query is not like File::open which gets a handle to a resource. Trying to get a handle to a resource that does not exist is an error.
This type of query is read-only and does not allocate any resources or prepare to do anything with the session.
It simplifies the control flow because it distinguishes between errors in completing a query versus the presence or absence of items returned from the query.
The complexity on the other hand is architectural and logical to achieve scale to hundreds of CPUs, maximise bandwidth and reduce latency as much as possible.
Any normal Rust kernel will either have issues scaling on multi-cores or use tax-heavy synchronisation primitives. The kernel RCU and lock-free algorithm took a long time to be discovered and become mature and optimised aggressively to cater for the complex modern computer architectures of out-of-order execution, pipelining, complex memory hierarchies (especially when it comes to caching) and NUMA.
They've been working on it for a while so they can get rust into the linux kernel
Pin leverages the type system to tell the programmer receiving a pointer to a Pin'ned object that this object may have a pointer to itself (a self-referential struct). You had better be mindful not to move this object to a different memory location unless you know for sure that it is safe to do so. The Pin abstraction makes this harder to forget, and easier to notice during code review, by forcing you to use the keyword unsafe for any operation on the pinned object that could move it around.
In C, there is no such way to warn the programmer besides documentation. It is up to the programmer to be very careful.
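A condensed sketch of what that looks like in Rust (struct and names invented for illustration): a self-referential struct opts out of Unpin, and every operation that could invalidate its internal pointer must go through `unsafe`.

```rust
use std::marker::PhantomPinned;
use std::pin::Pin;

// A self-referential struct: `ptr` points at `data` in the same allocation.
// PhantomPinned makes the type !Unpin, so obtaining &mut through a Pin
// (which would allow moving it) requires `unsafe`.
struct SelfRef {
    data: String,
    ptr: *const String,
    _pin: PhantomPinned,
}

fn make_and_read() -> String {
    let mut boxed = Box::pin(SelfRef {
        data: "hello".to_string(),
        ptr: std::ptr::null(),
        _pin: PhantomPinned,
    });

    // Wiring up the self-reference forces us through `unsafe`: exactly the
    // speed bump Pin is designed to put in the way.
    let data_ptr: *const String = &boxed.data;
    unsafe {
        Pin::get_unchecked_mut(boxed.as_mut()).ptr = data_ptr;
    }

    // The heap allocation behind the Box never moves, so `ptr` stays valid.
    unsafe { (*boxed.ptr).clone() }
}

fn main() {
    assert_eq!(make_and_read(), "hello");
}
```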
Intuition tells me that Rust is young enough to attract a certain type of early adopter, the kind of programmer who is more likely to document their code well from the outset.
even if the LLM is trained on flawless C code (which it isn't), it still has no way of reasoning about a complex system; it's just "what token is statistically most likely to come next"
To reach a useful state, you only need to be highly performant on a handful of currently popular server architectures.
> Any normal Rust kernel will either have issues scaling on multi-cores or use tax-heavy synchronisation primitives.
I'm not sure how that applies to Asterinas. Is Asterinas any normal Rust kernel?
https://asterinas.github.io/book/kernel/the-framekernel-arch...
But like I said, I've not looked at any Rust despite its marketing success.
But in that case you're stuck paying the overhead 100% of the time, even though 90% of the lifetimes are simple. (Perhaps a little less so with escape analysis etc., but doing it at compile time in a way that's understandable in the source feels a lot more reliable)
Documentation and how SQL database queries work.
The documentation states that a valid session id will return a SessionInfo struct (since it is an Option the type is Some(SessionInfo) ), and that an invalid session id will return None.
If the SQL query is something like "SELECT * FROM USER_SESSIONS WHERE USER_SESSION_ID = $1" then if an invalid session id is provided the database returns zero rows. The query was successful, but there were no matching sessions with that session id.
>> You have conflated empty response with an erroneous situation. The simplest solution is just Result<T, E>.
Again, an empty response is not an error in this situation. If your database query returns zero rows, is it an error? The database query succeeded. There are no sessions with the provided session id. What error occurred?
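A tiny sketch of that mapping (the row type is invented; a real wrapper would deserialize database rows): the query helper turns an empty row set into None rather than an error.

```rust
// Hypothetical helper for rows returned by something like
// "SELECT * FROM USER_SESSIONS WHERE USER_SESSION_ID = $1".
// Zero rows is a normal, successful outcome, so it maps to None.
fn first_row(rows: Vec<String>) -> Option<String> {
    rows.into_iter().next()
}

fn main() {
    // Session id matched one row: Some(row).
    assert_eq!(first_row(vec!["alice".to_string()]), Some("alice".to_string()));
    // Session id matched nothing: None, not an error.
    assert_eq!(first_row(Vec::new()), None);
}
```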
The kernel is doing so much anyway with memory maps and flipping in / out pages for scheduling and context switching that Pin doesn't add any value in such cases anyway.
It was also specifically built for async rust. I've never personally seen it in the wild in any other context.
I was hired at Joyent largely to work on bhyve so that Triton and Joyent’s public cloud had a way to run Linux VMs when full Linux compatibility was more important than the efficiency of zones/containers.
As soon as I can financially retire, I'll make contributing to this my full time job!
You still have kernel modules for microkernel-like functionality
I guess the NT kernel needs to. Does Darwin?
- Overall too complex
- Wrong philosophy: demanding the user to solve problems instead of solving problems for the user
- Trying to provide infinite backwards compatibility with crates, which leads to hidden bitrot
- Slow compilation times
- Claims to be "safe" but allows arbitrary unsafe code, and it's everywhere.
- Adding features to fix misfeatures (e.g. all that lifetime cruft; arc pointers) instead of fixing the underlying problem
- Hiding implementations with leaky abstractions (traits)
- Going at great length to avoid existing solutions so users re-invent it (e.g. OOP with inheritance; GC), or worse, invent more complex paradigms to work around the lack (e.g. some Rust GUI efforts; all those smart pointer types to work around the lack of GC)
- A horrendous, convoluted syntax that encourages bad programming style: lots of unwrap, and_then, etc., which makes programs hard to read and audit.
- Rust's safe code is not safe: "Rust’s safety guarantees do not include a guarantee that destructors will always run. [...] Thus, allowing mem::forget from safe code does not fundamentally change Rust’s safety guarantees."
It already has complexity and cognitive demands similar to C++'s, and it's going to get worse. IMHO, that's also why it's popular. Programmers love shitty languages that allow them to show off. Boring is good.
Why would that be the case at all? What has Rust anything to do with that?
I find a lot of the complexities tend to come from devs with more experience in communities that tend to add complexity by nature (C# and Java devs in particular). YMMV of course, that's just been my take so far. I've written a few simple web (micro)services in Rust and a couple of playground Tauri apps. I will say the simpler tasks have been incredibly easy to work through.
Though I may not have always taken the absolutely most performant, least memory path of work, it's been smaller/faster than other platforms and languages I have more experience with. And that's without even getting into build/compile time optimization options.
Completely subjective. I've learned all there is to learn about Rust's syntax and most of its standard libraries, I think, and it's really not all that, in my personal opinion. There are certainly much more complex languages out there, even dynamic languages. I'd argue Typescript is more complex than Rust as a language.
> Wrong philosophy: demanding the user to solve problems instead of solving problems for the user
I have no idea what you mean by this. Do you mean you want more magic?
> Trying to provide infinite backwards compatibility with crates, which leads to hidden bitrot
Backwards compatibility reduces bitrot. Bitrot is when the ecosystem has moved on to a point of not supporting features used by stale code, thus making the code partially or completely unusable in newer environments as time progresses and the code doesn't update.
The Rust editions explicitly and definitively solve the bitrot problem, so I'm not sure what you're on about here.
> Slow compilation times
Sure, of course. That's really the biggest complaint most people have, though I've had C++ programs take just as long. Really depends on how the code is structured.
> Claims to be "safe" but allows arbitrary unsafe code, and it's everywhere.
Unsafe isn't a license to kill. It also doesn't allow "arbitrary" code. I suggest reading the Rustonomicon, the book about unsafe Rust and undefined behavior. All `unsafe` code must adhere to the postcondition that no undefined behavior is present. It also doesn't turn off borrow checking and the like. Without `unsafe` you couldn't do much of anything a systems language needs to do in certain cases; e.g. writing a kernel requires doing inherently unsafe things (like switching out CR3) whose semantics no compiler currently written can understand.
People seem to parrot this same "unsafe nullifies Rust's safety" line without really understanding it. I suppose they could have renamed the `unsafe` keyword `code_that_does_stuff_unverifiable_by_the_compiler_so_must_still_adhere_to_well_formed_postrequisites_at_risk_of_invoking_undefined_behavior`, but alas, I think it'd be pretty annoying to write that so often.
It's pretty typical to abstract away `unsafe` code into a safe API, as most crates do.
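A minimal example of that pattern (function invented for illustration): the safe function establishes the invariant its internal `unsafe` block relies on, so callers can never trigger UB through the API.

```rust
// A safe API whose body uses `unsafe` internally: the emptiness check
// establishes the invariant that get_unchecked relies on.
fn first_or_zero(v: &[i32]) -> i32 {
    if v.is_empty() {
        0
    } else {
        // SAFETY: we just checked that index 0 is in bounds.
        unsafe { *v.get_unchecked(0) }
    }
}

fn main() {
    assert_eq!(first_or_zero(&[7, 8, 9]), 7);
    assert_eq!(first_or_zero(&[]), 0);
}
```

This is the shape most crates follow: the `unsafe` is contained and audited in one place, and everything built on top stays safe.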
> Adding features to fix misfeatures (e.g. all that lifetime cruft; arc pointers) instead of fixing the underlying problem
Lifetimes aren't "cruft", not sure what you mean. They've also been elided in a ton of cases.
An "arc pointer" isn't a thing; there's Arc, atomic reference counting, which is present in every unmanaged language, including C++, Objective-C, Swift, etc. I'm not sure what the "underlying problem" is you're referring to. Rust takes the position that the standard library shouldn't automatically make e.g. mutexes atomically reference-counted abstractions, but instead let the user decide whether reference counting is even necessary (Rc<Mutex>) and whether it should be atomic so as to be shareable across cores (Arc<Mutex>). This type composition is exactly why Rust's type system is so easy to work with, refactor, and optimize.
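A small sketch of that composition choice (helper functions invented for illustration): you only pay for atomics when you actually share across threads.

```rust
use std::cell::RefCell;
use std::rc::Rc;
use std::sync::{Arc, Mutex};
use std::thread;

// Single-threaded sharing: Rc + RefCell, no atomic operations. Rc is !Send,
// so the compiler would reject moving `local` into another thread.
fn bump_local() -> i32 {
    let local = Rc::new(RefCell::new(0));
    let alias = Rc::clone(&local);
    *alias.borrow_mut() += 1;
    let n = *local.borrow();
    n
}

// Cross-thread sharing: Arc + Mutex pays for atomic refcounts and locking,
// but the result is Send + Sync.
fn bump_shared() -> i32 {
    let shared = Arc::new(Mutex::new(0));
    let alias = Arc::clone(&shared);
    thread::spawn(move || *alias.lock().unwrap() += 1)
        .join()
        .unwrap();
    let n = *shared.lock().unwrap();
    n
}

fn main() {
    assert_eq!(bump_local(), 1);
    assert_eq!(bump_shared(), 1);
}
```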
> Hiding implementations with leaky abstractions (traits)
Sorry for being blunt but this is a word salad. Traits aren't leaky abstractions. In my personal experience they compose so, so much better and have better optimization strategies than more rigid OOP class hierarchies. So I'm not sure what you mean here.
> Going at great length to avoid existing solutions so users re-invent it (e.g. OOP with inheritance; GC), or worse, invent more complex paradigms to work around the lack (e.g. some Rust GUI efforts; all those smart pointer types to work around the lack of GC)
Trait theory has been around for ages. GC is not a silver bullet and I wish people would stop pretending it was. There are endless drawbacks to GC. "All those smart pointer types" -- which ones? You just seem to want GC. I'm not sure why you want GC. GC solves few problems and creates many more. It can't be used in a ton of environments, either.
> A horrendous convoluted syntax that encourages bad programming style: lots of unwrap, and_then, etc. that makes programs hard to read and audit.
This is completely subjective. And no, there's not a lot of `and_then`, I don't think you've read much Rust. Sorry if I'm sounding rude, but it's clear to me by this point in my response that you've played with the language only at a very surface level and have come to some pretty strong (and wrong) conclusions about it.
If you don't like it, fine, but don't try to assert it as being a bad language and imply something about the people that use it or work on it.
> Rust's safe code is not safe: "Rust’s safety guarantees do not include a guarantee that destructors will always run. [...] Thus, allowing mem::forget from safe code does not fundamentally change Rust’s safety guarantees."
You misunderstand what it's saying there but I'm honestly tired of rehashing stuff that's very easily researched that you seem to not be willing to do.
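For anyone following along: the quoted passage is about leaks, not memory unsafety. `std::mem::forget` is safe precisely because skipping a destructor cannot violate memory safety. A small sketch (the `leak_one` helper is my own name for illustration):

```rust
use std::rc::Rc;

// Safe code may leak: `forget` skips the destructor -- here, the
// decrement of the reference count. That's a leak, not UB.
fn leak_one(rc: Rc<i32>) {
    std::mem::forget(rc);
}

fn main() {
    let x = Rc::new(42);
    assert_eq!(Rc::strong_count(&x), 1);
    let y = Rc::clone(&x);
    leak_one(y); // y's Drop never runs...
    // ...so the strong count stays at 2 even though `y` is gone.
    assert_eq!(Rc::strong_count(&x), 2);
}
```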
As long as the Rust fans stick to their favorite language, everybody can be happy.
Sigh. This is not true. Not the first part, and especially not the last part. `unsafe` doesn't allow arbitrary, unsafe code. It drops the compiler down to the level where most manually managed languages operate all the time. You still have to uphold all the guarantees the compiler normally provides, just manually. That's why Miri exists.
"Unchecked" or "Unconfirmed" would've perhaps been better choices, but Rust considers all other manual memory and reference management unsafe, so the word stuck.
It doesn't seem you're making an informed statement at all anywhere in this thread, choosing instead to be hung up on semantics rather than the facts plainly laid out for you.
If that makes me an "enthusiast" then so be it.
The only times I've used unsafe code are for FFI and, very rarely, on bare-metal machines.
A typical Rust programmer will never use unsafe directly. They will use safe abstractions from the standard library. There is no need for direct use of unsafe in application code, and only very rarely in library code.
In fact, [1] reports that most unsafe calls in libraries are FFI calls into existing C/C++ code or system calls.
[1]: https://foundation.rust-lang.org/news/unsafe-rust-in-the-wil...
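As an illustration of the FFI point (a generic sketch, not taken from the report): even calling something as mundane as libc's `strlen` requires an `unsafe` block, because the compiler can't verify the C side's pointer contract.

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// libc's strlen, declared by hand. Calling it is `unsafe` because the
// compiler cannot check that the pointer is valid and NUL-terminated.
extern "C" {
    fn strlen(s: *const c_char) -> usize;
}

fn c_strlen(s: &str) -> usize {
    let c = CString::new(s).expect("no interior NUL");
    // SAFETY: `c` is a valid, NUL-terminated C string for the duration
    // of the call.
    unsafe { strlen(c.as_ptr()) }
}

fn main() {
    assert_eq!(c_strlen("hello"), 5);
}
```

The `unsafe` block is a marker that the safety argument lives in a human-written comment rather than in the type system.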
I love C and I have used it for more than a decade, but I wouldn't choose it again. The most important thing I save with Rust is time, and also my sanity. The very fact that I can trust my code if it compiles and that I don't have to spend hours in GDB anymore makes it worth my while.
That's a lot of unsafe code for an allegedly safe language. Of course, most of it calls into system libraries. I never claimed or insinuated anything to the contrary (except perhaps in your imagination). But if you compare that to typical Ada code, the latter is much safer. Ada programmers try to do more things in Ada, probably because many of them need to write high integrity software.
Anyway, Rust offers nothing of value for me. It's overengineered and the languages I use are already entirely memory safe. Languages are mere tools, if it suits you well, continue using your Rust. No problem for me. By the way, I welcome when people re-write C++ code in Rust. Rust is certainly better than that, but that's a low-hanging fruit!
Ada is fine, just verbose, kinda fun, no complaints about it except that it's kinda sad how weak its formal verification is. I prefer Frama-C over it. You can compare Ada and Rust, but Ada is horrible, sincerely horrible, at working with ownership. Frama-C can run laps around it, as you can verify EVEN arbitrary pointer arithmetic.
Calling Rust a horrible abomination is weird. As someone who dabbled in CL for a year, I love the fact that it has proc macros, and even though they're harder to use, I can make my own DSLs and build-time compilers!
That opens up a world of possibilities. We could actually have safer and stricter math libraries! Maybe put us back in the era of non-Electron applications?
The horrible part might be the syntax but eh, it's a stupid thing to care about.
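On the DSL point: full proc macros need their own crate, but even declarative `macro_rules!` macros get you a small compile-time DSL. A toy sketch (the `polynomial!` macro is invented for illustration) that expands to Horner evaluation at compile time:

```rust
// A tiny embedded "DSL": polynomial!(x; a, b, c) expands to Horner's
// rule for a*x^2 + b*x + c, with no runtime coefficient array.
macro_rules! polynomial {
    ($x:expr; $($coef:expr),+) => {{
        let x = $x;
        let mut acc = 0.0;
        $( acc = acc * x + $coef; )+
        acc
    }};
}

fn main() {
    // 2x^2 + 3x + 1 at x = 2.0 -> 8 + 6 + 1 = 15
    let y = polynomial!(2.0; 2.0, 3.0, 1.0);
    assert_eq!(y, 15.0);
}
```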
It could not verify dynamic allocations; that's why it has such a huge toolset for working with static allocations.
Frama-C allows you to program in a safe subset of the unsafe language called C.
And these languages are the backbone of everything where lives are at risk. You can have a language that allows both unsafe and safe.
Safety is not binary, and our trains run on C/C++ [BOTH UNSAFE LANGUAGES].
If Ada were used in the domains where Rust is used, like desktop applications, servers, high-perf stuff, it would also do unsafe things you could never verify using SPARK.
But instead it is used on microcontrollers with runtimes provided by AdaCore and other vendors. Can you fully know whether those pieces of code are 100% verified and safe? The free ones are not; at least the free x86 one isn't.
How ridiculous. The language you use is not memory safe, btw. Unchecked_Deallocation can easily be used without any pragmas, iirc. You need to enable SPARK_Mode, which will restrict you to an even smaller subset! You cannot even safely write a doubly linked list in it! [You can, with great pain, in Rust] [with less pain in Frama-C] [never tried ATS]
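For reference, the "great pain" version in safe Rust usually means `Rc`/`Weak`/`RefCell` (or falling back to `unsafe`, as `std::collections::LinkedList` does internally). A minimal two-node sketch with invented helper names:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Each node owns `next` strongly; `prev` is a Weak back-pointer so the
// two reference counts never form a cycle that would leak.
struct Node {
    value: i32,
    next: Option<Rc<RefCell<Node>>>,
    prev: Option<Weak<RefCell<Node>>>,
}

fn new_node(value: i32) -> Rc<RefCell<Node>> {
    Rc::new(RefCell::new(Node { value, next: None, prev: None }))
}

fn link(a: &Rc<RefCell<Node>>, b: &Rc<RefCell<Node>>) {
    a.borrow_mut().next = Some(Rc::clone(b));
    b.borrow_mut().prev = Some(Rc::downgrade(a));
}

fn main() {
    let first = new_node(1);
    let second = new_node(2);
    link(&first, &second);

    // Forward traversal through the strong edge...
    assert_eq!(first.borrow().next.as_ref().unwrap().borrow().value, 2);
    // ...and backward through the weak one.
    let back = second.borrow().prev.as_ref().unwrap().upgrade().unwrap();
    assert_eq!(back.borrow().value, 1);
}
```

Painful, yes, but the pain is visible in the types rather than hidden in aliasing rules.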
Not really. It's mostly a modernized version of Zetalisp. In many cases simpler as that, with some added new stuff (like type declarations).
Well, since Rust is explicitly a system programming language, you would expect it to call into underlying systems more often, hence the use of unsafe.
The difference is this: like all systems programming languages, Rust lives close to the metal. The "unsafe" keyword is merely a marker that a system call might happen here, which might be inherently unsafe (think of C's localization functions, which are not thread safe).
That's it. You can call Ada safer, but it still has to adhere to the underlying complexity of the system it runs on, and upon interaction with it via FFI calls it will be just as unsafe, just without a marker.
The low-hanging fruit is exactly what Rust is made for. It's explicitly overengineered for that one use case, where GC languages cannot be used for whatever reasons. It lives in the twilight zone between a GC and calling alloc/free yourself.
I disagree with people rewriting everything in Rust that could be simpler and better done with Python/Csharp/Go/etc. But if you need to work with manual memory management or concurrency with shared references, Rust is certainly your best bet.
CL is fairly carefully designed with regards to compiling. This is why the math functions are not generic, for instance. Redefining standard functions is undefined behavior, as is self-modifying code. It omits features that don't integrate well with conventional run-time and machine models, like continuations. It doesn't even require implementations to optimize tail calls.
I have no idea why ANSI CL has such a large page count. In my mind it's such a small language. I think it could have benefited from an editorial pass to get it down to 600-something pages. But that would have delayed it even longer.
Once the horse escapes the barn it's risky. When you rewrite technical text you can very easily change the meaning of something, or take a particular interpretation where multiple are possible and such.
There were many unhappy, but from very different camps. Some were unhappy (for example people in the small Standard Lisp camp) because Common Lisp was not dynamic enough (it has features of static compilation, no fexprs, ...). Others were unhappy because it was too dynamic and difficult to compile to efficient code on small machines with stock CPUs. Some complained that it was too large and hard to fit onto some of the tiny machines of that time. Others complained that it was too small and lacked critical features from larger Lisp implementations (like stack groups, threads, a fully integrated object system, the first version had no useful error handling, gui functionality, extensible streams, ...).
Many more users/implementors from other Lisp dialects were unhappy, because it was clear that their favorite Lisp dialect would slowly fade away - funding was going away, new users would avoid it, existing users would port their code away, ...
> This is why math functions are not generic for instance
The math functions are generic in the sense that they work for several types, but no machinery behind that was specified. They were not generic in the sense of CLOS generic functions (or similar), partly because CLtL1 had no such machinery in the language; yet it does have (non-extensible) generic numeric functions. CLOS later added a machinery for generic functions, but there was no experience in generating optimized & fast code for it. The CLtL1 way to get fast numeric code was to declare types and let the compiler generate type-specific (non-generic) code. ANSI CL left the language in that state: the generic numeric functions were not implemented via CLOS. Similarly, much of the language specification avoids further integration of CLOS and leaves it to implementations to decide how to implement I/O, condition handling, ...
> I have no idea why ANSI CL has such a large page count.
It was supposed to be a language specification for industrial users, with detailed information. There were standard templates for how to specify a function, macro, ...
The Scheme reports OTOH were made to have the smallest page count possible with 2 columns of text, leaving out much of the detail of a real language spec. Why? Because it was material for a teaching language and thus was supposed to be read by students learning the language in a semester course at the university. Thus R5RS specified a teaching language, just barely, not as a full application programming language (for example it has zero error handling and basic things were just barely specified in its behavior and implementation).