I just want to mention that I disagree with the section titled "Rule: Resolve Paths Before Comparing Them". Generally, it is better to make calls to fstat and compare the st_dev and st_ino. However, that was mentioned in the article. A side effect that seems less often considered is the performance impact. Here is an example in practice:
$ mkdir -p $(yes a/ | head -n $((32 * 1024)) | tr -d '\n')
$ while cd $(yes a/ | head -n 1024 | tr -d '\n'); do :; done 2>/dev/null
$ echo a > file
$ time cp file copy
real 0m0.010s
user 0m0.002s
sys 0m0.003s
$ time uu_cp file copy
real 0m12.857s
user 0m0.064s
sys 0m12.702s
I know people are very unlikely to do something like that in real life. However, GNU software tends to work very hard to avoid arbitrary limits [1]. Also, the larger point still stands, but the article says "The Rust rewrite has shipped zero of these [memory safety bugs], over a comparable window of activity." However, this is not true [2]. :)
[1] https://www.gnu.org/prep/standards/standards.html#Semantics [2] https://github.com/advisories/GHSA-w9vv-q986-vj7x
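The fstat-based identity check this comment favors is short enough to sketch. A minimal sketch using std's Unix `MetadataExt` (my own illustration, not uutils code); it costs two stat() calls no matter how deep the paths are, which is exactly why it avoids the pathological slowdown shown above:

```rust
use std::fs;
use std::io;
use std::os::unix::fs::MetadataExt;
use std::path::Path;

/// Do `a` and `b` name the same underlying file? Compares (st_dev, st_ino)
/// instead of canonicalizing and comparing path text.
fn same_file(a: &Path, b: &Path) -> io::Result<bool> {
    let (ma, mb) = (fs::metadata(a)?, fs::metadata(b)?);
    Ok(ma.dev() == mb.dev() && ma.ino() == mb.ino())
}
```

To rule out time-of-check/time-of-use races as well, one would open both files first and call `File::metadata` (an fstat on the handle) rather than stat'ing paths.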
So how can I learn from this? (Asking very aggressively, especially for Internet writing, to make the contrast unmistakable. And contrast helps with perceiving differences and mistakes.) (You also don’t owe me any of your time or mental bandwidth, whatsoever.)
So here goes:
Question 1:
How come "speed", "performance", race conditions and st_ino keep getting brought up?
Speed (latency), physically writing things out to storage (sequentially, atomically (ACID), all of HDD NVME SSD ODD FDD tape, "haskell monad", event horizons, finite speed of light and information, whatever) as well as race conditions all seem to boil down to the same thing. For reliable systems like accounting the path seems to be ACID or the highway. And "unreliable" systems forget fast enough that computers don’t seem to really make a difference there.
Question 2:
Does throughput really matter more than latency in everyday application?
Question 3 (explanation first, this time):
The focus on inode numbers is at least understandable with regards to the history of C and unix-like operating systems and GNU coreutils.
What about this basic example? Just make a USB thumb drive "work" for storing files (ignoring nand flash decay and USB). Without getting tripped up in libc IO buffering, fflush, kernel buffering (Hurd if you prefer it over Linux or FreeBSD), more than one application running on a multi-core and/or time-sliced system (to really weed out single-core CPUs running only a single user-land binary with blocking IO).
In my experience latency and throughput are intrinsically linked unless you have the buffer-space to handle the throughput you want. Which you can't guarantee on all the systems where GNU Coreutils run.
> Does throughput really matter more than latency in everyday application?
IME as a user, hell yes
Getting a video I don't mind if it buffers a moment, but once it starts I need all of that data moving to my player as quickly as possible
OTOH if there's no wait, but the data is restricted (the amount coming to my player is less than the player needs to fully render the images), the video is "unwatchable"
The perception of speed in using a computer is almost entirely latency driven these days. Compare using `rg` or `git` vs loading up your banking website.
Linux desktop (and the kernel) felt awful for such a long time because everyone was optimizing for server and workstation workloads. It's the reason CachyOS (and before that Linux Zen and... Liquorix?) are a thing.
For good UX, you heavily prioritize latency over throughput. No one cares if copying a file stalls for a moment or takes 2 seconds longer if that ensures no hitches in alt tabbing, scrolling or mouse movement.
EDIT: got it. -bash: cd: a/a/a/....../a/a/: File name too long
You could probably make the loop more efficient, but it works well enough. Also, some shells don't allow you to enter directories that deep at all. It doesn't work on mksh, for example.
> However, GNU software tends to work very hard to avoid arbitrary limits [1].
[0] https://learn.microsoft.com/en-us/windows/win32/fileio/maxim...
So I don't see why they would want to do that.
Why?
----
Many GNU, Linux and other utils are pretty awesome, and obviously some effort has been spent in the past to port them to Windows. However those projects are either old, abandoned, hosted on CVS, written in platform-specific C, etc.
Rust provides a good platform-agnostic way of writing systems utils that are easy to compile anywhere, and this is as good a way as any to try and learn it.
https://github.com/uutils/coreutils/blob/9653ed81a2fbf393f42...
Currently their usage is actively worsening the security of their distro
Didn't we learn from C, and isn't the entire raison d'être for Rust, that coders cannot be trusted to follow rules like this?
If coders could "(document) safety invariants that must be manually upheld in order for usage to be memory-safe," there'd be no need for Rust.
This is the tautology underlying rust as I see it
Inversely, you can write whole applications in rust without ever touching `unsafe` directly, so that keyword by itself signals the need for attention (both to the programmer and the reviewer or auditor). An unsafe block without a safety comment next to it is a very easy red flag to catch.
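As a tiny illustration of that reviewing convention (my own example, not from uutils): the expectation is that every `unsafe` block carries a `// SAFETY:` comment stating the invariant that makes it sound, so a block without one is immediately suspect:

```rust
/// Returns the first byte of `v`, if any.
fn first_byte(v: &[u8]) -> Option<u8> {
    if v.is_empty() {
        return None;
    }
    // SAFETY: we just checked that `v` is non-empty, so index 0 is in bounds.
    Some(unsafe { *v.get_unchecked(0) })
}
```

(Here the `unsafe` is gratuitous, since safe indexing would do, but it shows the shape: the comment ties the block to the check that justifies it.)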
They knew how to write Rust, but clearly weren't sufficiently experienced with Unix APIs, semantics, and pitfalls. Most of those mistakes are exceedingly amateur from the perspective of long-time GNU coreutils (or BSD or Solaris base) developers, issues that were identified and largely hashed out decades ago, notwithstanding the continued long tail of fixes--mostly just a trickle these days--to the old codebases.
I would not want to run any code on my machines made by people who think like this. And I'm pro-Rust. Rust is only "more secure" all else being equal. But all else is not equal.
A rewrite necessarily has orders of magnitude more bugs and vulnerabilities than a decades-old well-maintained codebase, so the security argument was only valid for a long-term transition, not a rushed one. And the people downplaying user impact post-rollout, arguing that "this is how we'll surface bugs", and "the old coreutils didn't have proper test cases anyway" are so irresponsible. Users are not lab rats. Maintainers have a moral responsibility to not harm users' systems' reliability (I know that's a minority opinion these days). Their reasoning was flawed, and their values were wrong.
The snap BS wasn't enough to move me since I was largely unaffected once stripping it out, but this might finally convince me to ditch.
I'm used to running experimental software but I wasn't ready for my computer to not boot one day because of uutils. The `-Z` flag for `cp` wasn't implemented in the 9-month-old version shipped in Debian at that time, so initramfs creation failed...
sudo apt install coreutils-from-gnu
https://computingforgeeks.com/ubuntu-2604-rust-coreutils-gui...
And, yeah, the Unix syscalls are very prone to mistakes like this. For example, Unix's `rename` syscall takes two paths as arguments; you can't rename a file by handle; and so Rust has a `rename` function that takes two paths rather than an associated function on a `File`. Rust exposes path-based APIs where Unix exposes path-based APIs, and file-handle-based APIs where Unix exposes file-handle-based APIs.
So I agree that Rust's stdlib is somewhat mistake-prone; not so much because it's being opinionated and "nudg[ing] the developer towards using neat APIs", but because it's so low-level that it's not offering much "safety" in filesystem access over raw syscalls beyond ensuring that you didn't write a buffer overflow.
`openat()` and the other `*at()` syscalls are also raw syscalls, which Rust's stdlib chose not to expose. While I can understand that this may not be straightforward for a cross-platform API, I have to disagree with your statement that Rust's stdlib is mistake-prone because it's so low-level. It's more mistake-prone than POSIX (in some aspects) because it is missing a whole family of low-level syscalls.
Why can I easily use "*at" functions from Python's stdlib, but not Rust's?
They are much safer against path traversal and symlink attacks.
Working safely with files should not require *const c_char.
This should be fixed.
The parent was asking for access to the C syscall, and C syscalls are unsafe, including in C. You can wrap that syscall in a safe interface if you like, and many have. And to reiterate, I'm all for supporting this pattern in Rust's stdlib itself. But openat itself is a questionable API (I have not yet seen anyone mention that openat2 exists), and if Rust wanted to provide this, it would want to design something distinct.
> Why can I easily use "*at" functions from Python's stdlib, but not Rust's?
I'm not sure you can. The supported pattern appears to involve passing the optional `opener` parameter to `os.open`, but while the example of this shown in the official documentation works on Linux, I just tried it on Windows and it throws a PermissionError exception because AFAIK you can't open directories on Windows.
And then there’s renameat(2) which takes two dirfd… and two paths from there, which mostly has all the same issues rename(2) does (and does not even take flags so even O_NOFOLLOW is not available).
I’m not sure what you’d need to make a safe renameat(), maybe a triplet of (dirfd, filefd, name[1]) from the source, (dirfd, name) from the target, and some sort of flag to indicate whether it is allowed to create, overwrite, or both.
As the recent https://blog.sebastianwick.net/posts/how-hard-is-it-to-open-... talks about (just for file but it applies to everything) secure file system interaction is absolutely heinous.
[1]: not path
I can't think of a case this API doesn't cover, but maybe there is one.
And you need to do that because nothing precludes having multiple entries to the same inode in the same directory, so you need to know specifically what the source direntry is, and a direntry is just a name in the directory file.
This can also be a pain on microcontrollers sometimes, but there you're free to pretend you're on Unix if you want to.
Almost all languages/standard libraries pick the latter, and many choose UNIX or Linux as the preferred platform, even though its file system API has flaws we’ve known about for decades (example: using file paths too often) or made decisions back in 1970 we probably wouldn’t make today (examples: making file names sequences of bytes; not having a way to encode file types and, because of that, using heuristics to figure out file types. See https://man7.org/linux/man-pages/man1/file.1.html)
A standard library for files and paths that lacks things like ACLs and locks is weirdly Unixy for a supposedly modern language. Most systems support ACLs now, though Windows uses them a lot more. On the other hand, the lack of file descriptors/handles is weird from all points of view.
Had Windows been an uncommon target, I would've understood this design, but Windows is still the most common PC operating system in the world by a great margin. Not even considering things like "multiple filesystem roots" (drive letters) "that happen to not exist on Linux", or "case insensitive paths (Windows/macOS/some Linux systems)" is a mistake for a supposedly generic language, in my opinion.
The point of Rust is that you shouldn't have to worry about the biggest, easiest to fall in pitfalls.
I think the author's point in this article is that a proper file system API should do the same.
We're looking solely at the few things they got wrong, and not the thousands of correct lines around them.
(Actually ideally there's formal verification tools that can accurately test for all of the issues found in this review / audit, like the very timing specific path changes, but that's a codebase on its own)
Cloudflare crashed a chunk of the internet with a rust app a month or so ago, deploying a bad config file iirc.
Rust isn’t a panacea, it’s a programming language. It’s ok that it’s flawed, all languages are.
Less than that actually, considering Rust has its own definition of what "safe" means.
> It isn't possible to create a programing language that doesn't allow bugs to happen
Yes, that’s true. No one doubts this. Except you seem to think that Rust promises no bugs at all? I don’t know where you got this impression from, but it is incorrect.
Rust promises that certain kinds of bugs like use-after-free are much, much less likely. It eliminates some kinds of bugs, not all bugs altogether. It’s possible that you’ve read the claim on kinds of bugs, and misinterpreted it as all bugs.
I’ve had this conversation before, and it usually ends like https://www.smbc-comics.com/comic/aaaah
On the other hand, there are too many less-experienced Rust fans who do claim that "Rust" promises this and that any project that does not use Rust is doomed and that any of the existing decades-old software projects should be rewritten in Rust to decrease the chances that they may have bugs.
What is described in TFA is not surprising at all, because it is exactly what has been predicted about this and other similar projects.
Anyone who desires to rewrite in Rust any old project, should certainly do it. It will be at least a good learning experience and whenever an ancient project is rewritten from scratch, the current knowledge should enable the creation of something better than the original.
Nonetheless, the rewriters should never claim that what they have just produced has currently less bugs than the original, because neither they nor Rust can guarantee this, but only a long experience with using the rewritten application.
Such rewritten software packages should remain for years as optional alternatives to the originals. Any aggressive push to substitute the originals immediately is just stupid (and yes, I have seen people trying to promote this).
Moreover, someone who proposes the substitution of something as basic as coreutils, must first present to the world the results of a huge set of correctness tests and performance benchmarks comparing the old package with the new package, before the substitution idea is even put forward.
You’ve constructed a strawman with no basis in reality.
You know what actual Rust fans sound like? They sound like Matthias Endler, who wrote the article we’re discussing. Matthias hosts a popular podcast, Rust in Production, where he talks with people about sharp edges and difficulties they experienced using Rust.
A true Rust advocate like him writes articles titled “Bugs Rust Won’t Catch”.
> Such rewritten software packages should remain for years as optional alternatives to the originals.
This project was started a decade ago. (https://news.ycombinator.com/item?id=7882211)
> must first present to the world the results of a huge set of correctness tests and performance benchmarks
Yeah, you can see those in https://github.com/uutils/coreutils. This project has also worked with GNU coreutils maintainers to add more tests over time. Check out the graph where the total number of tests increases over time.
> before the substitution idea is even put forward
I partly agree. But notice that these CVEs come from a thorough security audit paid for by Canonical. Canonical is paying for it because they have a plan to substitute in the immediate future.
Without a plan to substitute it’s hard to advocate for funding. Without funding it’s hard to find and fix these issues. With these issues unfixed it’s hard to plan to substitute.
Chicken and egg problem.
> less bugs
Fewer.
The goal claimed by all these rewrites is the elimination of bugs.
There are plenty of strong arguments to be made against rewriting something in Rust, but this is a pretty weak one.
Exactly what is the controversial take here?
> I don’t think brushing the bad parts off with “most of the code was really good!” is a fair way to look at this.
Nope. this is fine.
> Cloudflare crashed a chunk of the internet with a rust app a month or so ago, deploying a bad config file iirc.
Maybe this?
> Rust isn’t a panacea, it’s a programming language. It’s ok that it’s flawed, all languages are.
Nope, this is fine too.
What many do not accept among the claims of the Rust fans is that rewriting a mature and very big codebase from another language into Rust is likely to reduce the number of bugs of that codebase.
For some buggier codebases, a rewrite in Rust or any other safer language may indeed help, but I agree with the opinion expressed by many other people that in most cases a rewrite from scratch is much more likely to have bugs, regardless in what programming language it is written.
If someone has the time to do it, a rewrite is useful in most cases, but it should be expected that it will take a lot of time after the completion of the project until it will have as few bugs as mature projects.
Whether or not it was wise for Canonical to attempt to then take that codebase and uplift it into Ubuntu is a different story altogether, but one that has no bearing on the motivations of the people behind the original port itself.
You can see an alternative approach with the authors of sudo-rs. Rather than porting all of userspace to Rust for fun, they identified a single component of a particularly security-critical nature (sudo), and then further justified their rewrite by removing legacy features, thereby producing an overall simpler tool with less surface area to attack in the first place. It was not "we're going to rewrite sudo in Rust so it has fewer bugs", it was "we're going to rewrite sudo with the goal of having fewer bugs, and as one subcomponent of that, we're going to use Rust". And of course sudo-rs has had fresh bugs of its own, as any rewrite will. But the mere existence of bugs does not invalidate their hypothesis, which is that a conscientious rewrite of a tool can result in fewer bugs overall.
Similarly, sudo-rs dropping "legacy" features leaves a bad taste in my mouth. There are multiple privilege escalation tools that exist (doas being the first that comes to mind), and doing something better without claiming the name "sudo" (instead providing a compat mode, a la podman for docker) would seem to me a better long-term path than causing more breakage (and as uutils shows, breakage in "core" utils can very easily lead to security issues).
I personally find uutils' lack of care concerning, because I've been writing (as a very low-priority side project) a network utility in Rust, and while it's not aiming to be a drop-in rewrite of anything, I would much rather not attract the same drama.
This kind of melodramatic reaction to rust code is fatiguing, honestly. Rust does not bill itself as some programming panacea or as a bug free language, and neither do any of the people I know using it. That's a strawman that just won't go away.
Rust applies constraints regarding memory use and that nearly eliminates a class of bugs, provided safe usage. And that's compelling to enough people that it warrants migration from other languages that don't focus on memory safety. Bugs introduced during a rewrite aren't notable. It happens, they get fixed, life moves on.
Your argument does not work as a praise for Rust because the bugs in any program are caused by programmer errors, except the very rare cases when there are bugs in the compiler tool chain, which are caused by errors of other programmers.
The bugs in a C or C++ program are also caused by programmer errors, they are not inherent to C/C++. It is rather trivial to write C/C++ carefully, in order to make impossible any access outside bounds, numeric overflow, use-after-free, etc.
The problem is that many programmers are careless, especially when they might be pressed by tight time schedules, so they make some of these mistakes. For the mass production of software, it is good to use more strict programming languages, including Rust, where the compiler catches as many errors as possible, instead of relying on better programmers.
(grandparent comment): "Cloudflare crashed a chunk of the internet with a rust app a month or so ago"
The actual bug had nothing to do with rust, yet rust is specifically brought up here.
(grandparent comment): "Rust isn’t a panacea, it’s a programming language. It’s ok that it’s flawed, all languages are."
No Rust programmer thinks it's a panacea! Rust has never advertised itself this way.
And then, it turned out to not really be any better than exceptions.
Most Rust evangelism is like this. "In Rust you do X and this makes your code have fewer bugs!" Well no it doesn't. Manually propagating exceptions still makes the program crash and requires more typing, and doesn't emit a stack trace.
Shows how good Rust is, that even inexperienced Unix devs can write stuff like this and make almost no mistakes.
From what I understand, "assigned" probably isn't the best way to put it. uutils started off back in 2013 as a way to learn Rust [0] way before the present kerfuffle.
[0]: https://github.com/uutils/coreutils/tree/9653ed81a2fbf393f42...
https://stackoverflow.com/questions/392022/whats-the-best-wa...
The problem is that -DIGIT doubles as both "signal number" and process group. The right way to invoke kill for a process group however would be "kill [OPTS]... -- -PGID".
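To make that ambiguity concrete, here is a hypothetical sketch (the function and its behavior are my own illustration, not any real kill implementation) of how a kill-style CLI might classify `-N` arguments before and after `--`:

```rust
/// Classify the arguments of a hypothetical `kill`-style CLI.
/// Before `--`, a leading dash means an option or a signal spec and the
/// argument is not a target; after `--`, `-N` is a negative PID, i.e. the
/// process group N, following the POSIX `--` end-of-options convention.
fn targets(args: &[&str]) -> Vec<i32> {
    let mut past_dashes = false;
    let mut out = Vec::new();
    for a in args {
        if !past_dashes && *a == "--" {
            past_dashes = true;
            continue;
        }
        if !past_dashes && a.starts_with('-') {
            continue; // option or signal spec, not a pid
        }
        if let Ok(pid) = a.parse::<i32>() {
            out.push(pid);
        }
    }
    out
}
```

So `kill -9 1234` signals process 1234, while `kill -9 -- -1234` signals everything in process group 1234.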
If they do not like the design mistakes, great, they should set for themselves the goal to write a new operating system together with all base applications, where all these mistakes are corrected.
As long as they have not chosen the second goal, but the first, they are constrained by the existing interfaces and they must use them correctly, no matter how inconvenient that may be.
Anyone who learns English may be frustrated by many design mistakes of English, but they must still use English as it is spoken by the natives, otherwise they will not be understood.
That "perfectly good code" that it sounds like no one should question included "split --line-bytes has a user controlled heap buffer overflow".
The code gets silently encumbered with those lessons, and unless they are documented, there's a lot of hidden work that needs to be done before you actually reach parity.
TFA is a good list of this exact sort of thing.
Before you call people amateur for it, also consider it's one of the most softwarey things about writing software. It was bound to happen unless coreutils had really good technical docs and included tests for these cases that they ignored.
uutils would be so much better imo if it was GPL and took direct inspiration from the coreutils source code.
It does not matter if it's in the GPL explicitly or not since we're talking about uutils and their stance on it, and they've written that:
https://github.com/uutils/coreutils/blob/6b8a5a15b4f077f8609...
> we cannot accept any changes based on the GNU source code [..]. It is however possible to look at other implementations under a BSD or MIT license like Apple's implementation or OpenBSD.
The wording of that clearly implies that you should not look at GNU source code in order to contribute to uutils.
Hmmmm....
This feels like a golden quote. Don't know if you intended for it to rhyme, but well done :D
It should be stressed that failure to document such lessons, or at least the bugs/vulnerabilities avoided, is poor practice. Of course one can't document the bugs/vulnerabilities one has avoided implicitly by writing decent code to begin with, but it is important to share these lessons with the future reader, even if that means "wasting" time and space on a bunch of documentation such as "In here we do foo instead of bar because when we did bar in conditions ABC then baz happens which is bad because XYZ."
If you do a rewrite, you should fully understand and learn from the predecessor, otherwise you're bound to repeat all the mistakes. Embarrassing.
To be clear; I love Rust, I use it for various projects, and it's great. It doesn't save you from bad engineering.
[1]: https://www.joelonsoftware.com/2000/04/06/things-you-should-...
> If you do a rewrite, you should fully understand and learn from the predecessor, otherwise you're bound to repeat all the mistakes. Embarrassing.
Interestingly, the uutils project uses the GNU coreutils test suite.
EDITED to add: they also have a stated position of not allowing contributions based on reading the GPL'd source.
It's actually even worse than that somewhat, because the attacker with write access to a parent directory can mess with hard links as well... sure, it only messes with the regular files themselves, but there are basically no mitigations. See e.g. [0] and other posts on the site.
[0] https://michael.orlitzky.com/articles/posix_hardlink_heartac...
I agree with you that that's more the story here than "OMG, somebody wrote Rust code with bugs in it".
I don't really care that some very amateur enthusiasts wrote some bad code for fun, but how in the world did anyone who knows anything about linux take this seriously as a coreutils replacement?
So does this mean that the original utils didn't have any test harness, and that the process of rewriting them didn't start by creating one either?
Sure there are many edge cases, but surely the OS and FS can just be abstracted away and you can verify that "rm .//" actually ends up doing what is expected (Such as not deleting the current directory)?
This doesn't seem like sloppy coding, nor a critique of the language, it's just the same old "Oh, this is systems programming, we don't do tests"?
Alternatively: if the original utils _did_ have tests, and there were this many holes in the tests, then maybe there is a massive lack in the original utils test suite?
Yes.
> Sure there are many edge cases, but surely the OS and FS can just be abstracted away and you can verify that "rm .//" actually ends up doing what is expected (Such as not deleting the current directory)?
I think people have been trying that since before I was born and haven't yet been successful, so I am much less sure than you are.
For example: How do you decide how many `/` characters to try?
For a better one: Can you imagine if "rm" could simply decide to refuse to delete files containing "important" as first 9 bytes? How would you think of a test for something like that without knowing the letters in that order? What if the magic word wasn't in a dictionary?
> This doesn't seem like sloppy coding, nor a critique of the language, it's just the same old "Oh, this is systems programming, we don't do tests"?
I've never heard anyone say that except as a straw man.
I've heard people say tests don't do what people think they do.
Of the bugs mentioned I think the most unforgivable one is the lossy UTF conversion. The mind boggles at that one!
That's kind of horrifying. Is there a reliable list somewhere of all the functions that do that? Is that list considered stable?
Sun engineers Thomas Maslen and Sanjay Dani were the first to design and implement the Name Service Switch. They fulfilled Solaris requirements with the nsswitch.conf file specification and the implementation choice to load database access modules as dynamically loaded libraries, which Sun was also the first to introduce.
Sun engineers' original design of the configuration file and runtime loading of name service back-end libraries has withstood the test of time as operating systems have evolved and new name services are introduced. Over the years, programmers ported the NSS configuration file with nearly identical implementations to many other operating systems including FreeBSD, NetBSD, Linux, HP-UX, IRIX and AIX.[citation needed] More than two decades after the NSS was invented, GNU libc implements it almost identically.
It's by design, you see.
> The trap is that get_user_by_name ends up loading shared libraries from the new root filesystem to resolve the username. An attacker who can plant a file in the chroot gets to run code as uid 0.
To me such a get_user_by_name function is like a booby trap, an accident that is waiting to happen. You need to have user data, you have this get_user_by_name function, and then it goes and starts loading shared libraries. This smells like mixing of concerns to me. I'd say, either split getting the user data and loading any shared libraries into two separate functions, or somehow make it clear in the function name what it is doing.
Some, maybe, but if you've decided to rewrite coreutils from scratch, understanding the POSIX APIs is literally your entire job.
And in any case, their test for whether a path was pointing to the fs root was `file == Path::new("/")`. That's not an API problem, the problem is that whoever wrote that is uniquely unqualified to be working on this project.
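That check is easy to falsify from safe code alone: on POSIX, `/..` denotes the root directory itself, yet a component-wise `Path` comparison rejects it. A minimal demonstration (the robust alternative is the st_dev/st_ino comparison discussed upthread):

```rust
use std::path::Path;

/// The shape of the check flagged in the audit: a textual/component-wise
/// comparison of the path against "/", with no look at the filesystem.
fn naive_is_root(p: &Path) -> bool {
    p == Path::new("/")
}
```

Canonicalizing first would paper over this particular case, but at the cost of the path-resolution overhead discussed at the top of the thread, and it still says nothing about bind mounts reached under another name.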
> That's not an API problem, the problem is that whoever wrote that is uniquely unqualified to be working on this project.
To be fair, uutils started out with far smaller ambitions. It was originally intended to be a way to learn Rust.
[0]: https://github.com/uutils/coreutils/commit/7abc6c007af75504f...
Until we have a filesystem that can present a snapshot, everything has to checked all the time.
i.e. we need an API which gives input -> good result or failure. Not input -> good result or failure or error.
"Seems" and "smells" are weasel words. The root cause is not thinking: why is root chrooting into a directory it does not control?
Whatever you chroot into is under control of whoever made that chroot, and if you cannot understand this you have no business using chroot()
> To me such a get_user_by_name function is like a booby trap
> I'd say, either split getting the user data and loading any shared libraries in two separate functions, or somehow make it clear in the function name what it is doing.
You'd probably still be in the trap: there's usually very little difference between writing to newroot/etc/passwd and newroot/usr/lib/x86_64-linux-gnu/libnss_compat.so or newroot/bin/sh or anything else.
So I think there's no reason for /usr/sbin/chroot to look up the user id in the first place (toybox chroot doesn't!); the bug was doing the lookup at all.
Because you can't call chroot(2) unless you're root. And "control a directory" is weasel words; root technically controls everything in one sense of the word. It can also gain full control (in a slightly different sense of the word) over a directory: kill every single process that's owned by the owner of that directory, then don't setuid into that user in this process and in any other process that the root currently executes, or will execute, until you're done with this directory. But that's just not useful for actual use, is it?
Secure things should be simple to do, and potentially unsafe things should be possible.
I did not choose the term to confuse you, that's from the definition document linked to the CVE:
https://cwe.mitre.org/data/definitions/426.html
The CVE itself uses the language "If the NEWROOT is writable by an attacker" which could refer to a shared library (as indicated in the report), or even a passwd file as would have been true since the origin of chroot()
> root technically controls everything in one sense of the word.
But not the sense we're talking about.
> Because you can't call chroot(2) unless you're root
Well you can[1], but this is /usr/sbin/chroot aka chroot(8) when used with a non-numeric --userspec, and the point is to drop root to a user that root controls with setuid(2). Something needs to map user names to the numeric userids that setuid(2) uses, and that something is typically the NSS database.
Now: Which database should be used to map a username to a userid?
- The one from before the chroot(2)?
- Or the one that you're chroot(2)ing into
If you're the author of the code in question, you chose the latter, and that is totally obvious to anyone who can read because that's the order the code appears in, but it's also obvious that only the first one is under control of root, and so only the first one could be correct.
[1]: If you're curious: unshare(CLONE_USERNS|CLONE_FS) can be used; this is part of how rootless containers work.
No, you can't; it's an entirely different syscall that does something vaguely similar. IMHO there are a bit too many root-restricted operations that shouldn't have been restricted; but they are, so we're stuck with setuid-enabled "confused deputies". Arguably, it's root that should be prohibited from calling chroot(2).
> Now: Which database should be used to map a username to a userid? If you're the author of the code in-question, you chose the latter
That's the problem: the choice is implicit. If the author had moved the setuid/setgid calls way up in the call order, the implicit choice would also have been the safe one, but that ordering was literally impossible (you still have to be root at the moment you call chroot(2)).
> unshare(CLONE_USERNS|CLONE_FS) can be used
Wait, CLONE_USERNS? That's not a real flag. Did you mean CLONE_NEWUSER?
Yes. And I agree, but it also enables chroot(2) to work without being root, which was the syscall we are talking about, and which I still maintain is not as important as reading.
> arguably, it's the root that should be prohibited from calling chroot(2).
> IMHO there are a bit too many root-restricted operations that should not have been
It's a popular opinion. It's also cheap. So what?
> so we're stuck with setuid-enabled "confused deputies"
chroot(8) is not setuid-enabled. This has nothing to do with anything.
> That's the problem: the choice is implicit. If the author moved setuid/setgid calls way up in the call order, the implicit choice would've also been the safe one but it was literally impossible.
False. The setuid/setgid calls are in the right place. The lookup of the database mapping usernames to userids is in the wrong place.
If the Rust programmer just read what they wrote, they would see this.
If you just read what they wrote, you would see this.
Also, hi how's things? :)
Rust won't catch it, but now the agents will.
Edit: https://gist.github.com/fschutt/cc585703d52a9e1da8a06f9ef93c... for anyone who needs to copy this
On a separate note: I have a private "coretools" reimplementation in Zig (not aiming to replace anything, just for fun), and I'm striving to keep it 100% Zig with no libc calls anywhere. Which may or may not turn out to be possible, we'll see. However, cross-checking uutils I noticed it does have a bunch of unsafe blocks that call into libc, e.g. https://github.com/uutils/coreutils/blob/77302dbc87bcc7caf87.... Thankfully they're pretty minimal, but every such block can reduce the safety provided by a Rust rewrite.
Probably will depend on what platform(s) you're targeting and/or your appetite for dealing with breakage. You can avoid libc on Linux due to its stable syscall interface, but that's not necessarily an option on other platforms. macOS, for instance, can and does break syscall compatibility and requires you to go through libSystem instead. Go got bit by this [0]. I want to say something similar applies to Windows as well.
This Unix StackExchange answer [1] says that quite a few other kernels don't promise syscall compatibility either, though you might be able to somewhat get away with it in practice for some of them.
This is what grinds my gears. Why all the hate against GNU?
Honestly, this is why I don't learn Rust, and why I didn't bother to read the rest of the article.
Plus AI is also good at catching, in other languages, errors that Rust tooling enforces. Like race conditions, use after free, buffer overflows, lifetimes, etc.
So maybe AI will become the ultimate "rust checker" for any language.
Countless times I have seen other people complain as well. There are even articles about it. Can't find the YouTube link now, but recently a gamedev abandoned Rust due to compilation speed alone, because iteration speed was paramount to their creative process.
Handwaving isn't going to make it any better. And thinking Go/TS compilation speeds are comparable to Rust's is a handwave and a half, to say the least.
Cargo check and friends are subpar for AI because efficient agentic loops actually need to build and run the thing, unit tests included.
A single loop might recompile and rerun the application/unit tests enough times that slow compilers like Rust and Scala become detrimental.
> uutils read it as “send the default signal to PID -1”, which on Linux means every process you can see.
What's the use case for killing all processes you can see?
* Let's rewrite the thing in X, it is better
* Let's not look at the existing code, X is better so writing it from scratch will look nicer
* Whoops, the existing code was written like this for a reason
* Whoops, we re-introduced decade-old problems that the original had already fixed
TOCTOU means "Time-of-check to time-of-use"
See also: https://en.wikipedia.org/wiki/Time-of-check_to_time-of-use
[0]: https://github.com/uutils/coreutils-tracking/commits/main/?a...
---
> What’s notable is that all of these bugs landed in a production Rust codebase, written by people who knew what they were doing
...
[List of bugs a diligent person would be mindful of, unix expert or not]
---
The only conclusion I can draw is that, unfortunately, the people writing these tools are not good software developers, and certainly not sufficiently good for this line of work.
For comparison, I am neither a unix neckbeard nor a rust expert, but with the magic of LLMs I am using rust to write a music player. The amount of tokens I've sunk into watching for undesirable panics or dropped errors is pretty substantial. Why? Because I don't want my music player to suck! Simple as that. If you don't think about panics or errors, your software is going to be erratic, unpredictable and confusing.
Now, coreutils isn't my hobby music player, it's fundamental Internet infrastructure! I hate sounding like a Breitbart commenter but it is quite shocking to see the lack of basic thought going into writing what is meant to be critical infrastructure. Wow, honestly pathetic. Sorry to be so negative and for this word choice, but "shock" and "disappointment" are mild terms here for me.
Anyway, thanks to the author of this post! This is a red flag that should be distributed far and wide.
uutils did not start off as "let's make critical infrastructure in Rust"; it started off as "coreutils are small and have tests, so we're rewriting them in Rust for fun". As a result, a bunch of cleanup work has been needed.
> For fun
My idea of fun is reviewing my code and making sure I'm handling errors correctly so that my software doesn't suck. Maybe the people who are doing this, for fun, should be more aligned with that mentality?
How the f** did this sub-amateur slop end up in a big-name Linux distribution? We've de-professionalized software engineering to such a degree that people don't even know what baseline-competent software looks like anymore.
I hate to armchair general, but I clicked on this article expecting subtle race conditions or tricky ambiguous corners of the POSIX standard, and instead found that it seems to be amateur hour in uutils.
1. uutils as a project started back in 2013 as a way to learn Rust, by no means by knowledgeable developers or in a mature language
2. uutils didn't even have a consideration to become a replacement of GNU Coreutils until.... roughly 2021, I think? 2021 is when they started running compliance/compatibility tests, anyway
3. The choice of licensing (made in 2013) effectively forbids them from looking at the original source
They're a group of people who want to replace pro-user software (GPL) with pro-business software (MIT).
I don't really want them to achieve their goal.
I'd be interested in a comparison of the number of bugs and CVEs in GNU coreutils at the start of its lifetime with this rewrite. Same with the number of memory bugs that are impossible in (safe) Rust.
Don't just downvote me, tell me how I'm wrong.
So let's talk about that. Well-written C code, especially in a mature, continuously maintained project like GNU coreutils, is not a big CVE risk. Between an inexperienced Rust developer and an extremely experienced C developer (who's been through all the motions), I'd say the latter is likely the safer option.
The software with the best security track record of all time is written in C.
> I'd be interested in a comparison with the amount of bugs and CVE's in GNU coreutils at the start of its lifetime
The point is, those bugs had been discovered and fixed decades ago. Do you want to wait decades for coreutils_rs to reach the same robustness? Why do a rewrite when the alternative is to help improve the original which is starting from a much more solid base?
And even when a complete rewrite would make sense, why not do a careful line-by-line porting of the original code instead of doing a clean-room implementation to at least carry over the bugfixes from the original? And why even use the Rust stdlib at all when it contains footguns that are not acceptable for security-critical code?
For a project of this kind, this seems a rather stupid choice, and it is enough to make it hard to trust the rewritten tools.
Even supposing that replacing the GPL were an acceptable goal, it would make sense only for a library, not for executable applications. For an executable, avoiding the GPL matters only if you want to extract parts of it and insert them into other programs.
Perhaps one good reason is that once the initial bugs are fixed, over time the number of security issues will be lower than the original? If it could reach the same level of stability and robustness in months or a small number of years, the downsides aren't totally obvious. We will have to wait to judge I suppose. Maybe it's not worth it and that's fine, but it doesn't speak to Rust as a language.
Granted, the uutils authors are well experienced in Rust, but that alone is not enough for a large-scale rewrite like this, and you can't assume the result is "secure" just because of memory safety.
In this case, this post tells us that Unix itself has thousands of gotchas, that re-implementing the coreutils in Rust is not a silver bullet, and that even the bugs Unix (and the POSIX standard) has are part of the specification and can later be revealed as real-world vulnerabilities.
I'm not sure that they were all that experienced in Rust when most of this code was written. uutils has been a bit of a "good first Rust issue" playground for much of its existence.
Which makes it pretty unsurprising that the authors also weren't all that well versed in the details of the low-level POSIX APIs.
In this case the filesystem API was perhaps not as well designed as it could have been. That can potentially be fixed though.
Some of the other bugs would be hard to statically prevent though. But nobody ever claimed otherwise.