The dedicated website: https://copy.fail
1. I would hope the default seccomp policy blocks AF_ALG in these containers. I bet it doesn’t. Oh well.
2. The write-to-RO-page-cache primitive STILL WORKED! It’s just that the particular exploit used had no meaningful effect in the already-root-in-a-container context. If you think you are safe, you’re probably wrong. All you need to make a new exploit is an fd representing something that you aren’t supposed to be able to write. This likely includes CoW things where you are supposed to be able to write after CoW but you aren’t supposed to be able to write to the source.
So:
- Are you using these containers with a common image, or even a common layer in an image, to isolate dangerous workloads from each other? Oops: they can modify the image layers and corrupt each other. There goes any sort of cross-tenant isolation.
- What if you get an fd backed by the zero page and write to it? This can’t result in anything that the administrator would approve of.
- What if you ro-bind-mount something in? It’s not ro any more.
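On point 1 above: it's easy to check for yourself whether the seccomp policy in your container actually blocks AF_ALG. A minimal probe sketch (not from the article; the numeric fallback 38 is Linux's AF_ALG value, so the probe degrades gracefully on other platforms):

```python
import errno
import socket

# AF_ALG is Linux-only; fall back to its numeric value (38) so the
# probe still runs, and fails cleanly, on non-Linux systems.
AF_ALG = getattr(socket, "AF_ALG", 38)

def af_alg_status():
    """Try to create an AF_ALG socket and classify the result.

    "blocked" usually means a seccomp/LSM policy denied it (EPERM/EACCES);
    "unsupported" means the kernel exposes no AF_ALG at all (EAFNOSUPPORT).
    """
    try:
        s = socket.socket(AF_ALG, socket.SOCK_SEQPACKET, 0)
    except OSError as e:
        if e.errno in (errno.EPERM, errno.EACCES):
            return "blocked"
        return "unsupported"
    s.close()
    return "allowed"

print(af_alg_status())
```

Run it inside the container: "allowed" means the default policy let the socket through.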
I see a lot of projects blocking those sockets in containers as a response to this exploit, but it seems rather strange to me. We're disabling a cryptographic performance feature entirely because there was a security bug in it that one time? It's a rather weird default to use. We don't mass-disable kernel modules everywhere every time someone discovers an EoP bug, do we? Did we blacklist OpenSSL's binaries after Heartbleed?
I suppose it makes sense as a default on vulnerable kernels (though people running vulnerable kernels should put effort into patching rather than workarounds in my opinion), but these defaults are going to be around ten years from now when copy.fail is a distant memory.
The need for this feature/functionality in the first place is questioned by some:
> As someone who works on the Linux kernel's cryptography code, the regularly occurring AF_ALG exploits are really frustrating. AF_ALG, which was added to the kernel many years ago without sufficient review, should not exist. It's very complex, and it exposes a massive attack surface to unprivileged userspace programs. And it's almost completely unnecessary, as userspace already has its own cryptography code to use. The kernel's cryptography code is just for in-kernel users (for example, dm-crypt).
> The algorithm being used in this [specific] exploit, "authencesn", is even an IPsec implementation detail, which never should have been exposed to userspace as a general-purpose en/decryption API. […]
* https://news.ycombinator.com/item?id=47952181#unv_47956312
But I am disappointed that we still don't have a clear OpenSSL successor; there is nothing to be salvaged from this mess of a project.
Yes, the syscall API is (famously) stable, but the drivers, for example, are such a mess that many non-Linux projects prefer to take BSD drivers for e.g. WiFi despite them supporting far fewer devices (even if the Linux ones would be license compatible).
To my knowledge, not many things were using the in-kernel code anyway; the recommended approach is to use userland tools...
It's optional for openssl, systemd apparently needs it, but deleting the module from one of my systems didn't cause any issues. /shrug
There is no reason this should be the default policy. Otherwise we might as well block every socket and just multiplex everything on stdin/stdout.
share and enjoy!
You may be on to something…
Oh, and this [2] just happened
[1] https://github.com/containers/oci-seccomp-bpf-hook/pull/209 [2] https://github.com/moby/moby/pull/52501
Although using this to justify their migration to micro-VMs is very strange to me. Sure, for this CVE it would have been better, but surely a future attack could hit a component shared across VMs but not containers? Are people really choosing technology based on the CVE-of-the-week?
VMs are considered vastly better because the surface area where exploits can happen is smaller and/or better isolated within the kernel.
If you are arguing the latter is not true — and we are all collectively hand-waving away a big chunk of the surface area, so that may be the case — it would help to be explicit about why you believe an exploit in that area is similarly likely.
I would have thought they provide better isolation than using multiple users which is the traditional security boundary.
It might depend on what you mean by a container. Are sandboxes such as Bubblewrap and Firejail containers?
It is easy for security scanners to scan a Linux system, but will they inspect your containers, and snaps, and flatpaks, and VMs? It is easy for DevOps to ssh into your Linux server, but can they also get logged in to each container, and do useful things? Your patches and all dependencies are up-to-date on your server, but those containers are still dragging around legacy dependencies, by design. Is your backup system aware of containers and capable of creating backup images or files, that are suitable for restoring back to service?
Does this increase complexity? Yes, it does. Is it worth the cost? Depends on each individual case IMO.
Couldn't you then simply re-run the exploit again as unprivileged podman user and gain root on the host?
However, copy.fail can be used in many other ways not contained by containers or the above settings. For example, it can modify /etc/ssl/certs to prepare for MitM attacks. If you have multiple containers based on the same image, then one compromised CA set affects the others.
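One mitigating habit for the CA-bundle scenario: pin a digest of the bundle at image-build time and re-verify it at runtime — a read goes through the same page cache the attacker poisoned, so the hash would catch the tampering. A minimal sketch (paths and the pinned digest are up to you; nothing here is from the article):

```python
import hashlib

def sha256_of(path, bufsize=65536):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(bufsize), b""):
            h.update(chunk)
    return h.hexdigest()

def bundle_unmodified(path, pinned_hex):
    """Compare the bytes currently served (via the page cache) against a
    digest recorded at image-build time; a mismatch flags a poisoned file."""
    return sha256_of(path) == pinned_hex

# e.g. bundle_unmodified("/etc/ssl/certs/ca-certificates.crt", PINNED)
```

This only detects, it doesn't prevent — but detection across tenants sharing an image layer is better than nothing.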
AmbientCapabilities=CAP_NET_BIND_SERVICE
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
NoNewPrivileges=yes
to my .service. Is it good enough?

But… does this escape the container? If not (the author seems to indicate it does not), then does it matter whether you are in Docker or rootless Podman, since the end result is always the same: you have elevated to root within the container. If the rest of the container's filesystem isolation does its job, the outcome is identical? Though I guess another chained exploit to escape the container would be worse in Docker. Do I have that right?
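Regarding the hardened unit above: if the goal is to cut off AF_ALG specifically, systemd can also whitelist socket families at the unit level. A sketch of what the fragment might look like (directive names from systemd.exec(5); adjust the allowed families to what your service actually needs):

    [Service]
    AmbientCapabilities=CAP_NET_BIND_SERVICE
    CapabilityBoundingSet=CAP_NET_BIND_SERVICE
    NoNewPrivileges=yes
    # Any family not listed (including AF_ALG) is denied.
    RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6

RestrictAddressFamilies= is allow-list semantics, so AF_ALG is denied simply by not appearing in the list.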
“While rootless containers prevent the attacker from escalating to host root, the page cache is still shared across the host. Containers that re-use the same base image layers share the same cached pages for those layers — if a malicious CI job corrupts a binary in the page cache, other containers launched from that same image could end up executing the poisoned version.”
echo 'install algif_aead /bin/false' > /etc/modprobe.d/disable-algif.conf
that just prevents the faulty module from loading, so you have time to fix it properly (kernel upgrade). Technically there should be zero impact (the very few tools that use it will fall back to userspace); I haven't even found that module loaded anywhere in my infrastructure.
Then check if it is loaded, and if it is, unload/reboot
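The "check if it is loaded" step can be scripted by reading /proc/modules (the same data lsmod prints). A small sketch — it reports any algif_* module, not just algif_aead, and is a no-op on non-Linux systems:

```python
from pathlib import Path

def algif_modules_loaded():
    """Return the names of loaded algif_* modules, per /proc/modules.

    On systems without /proc/modules (non-Linux, or a modules-disabled
    kernel) this simply returns an empty list.
    """
    mods = Path("/proc/modules")
    if not mods.is_file():
        return []
    names = (line.split()[0] for line in mods.read_text().splitlines() if line.strip())
    return [n for n in names if n.startswith("algif_")]

print(algif_modules_loaded() or "no algif_* modules loaded")
```

If it prints module names, unload them (modprobe -r) or reboot after dropping in the modprobe.d blacklist.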
Wouldn’t the exploit then just use ip addresses directly?