47 points by dxs 5 hours ago | 13 comments
INTPenis 28 minutes ago
I use flatpaks daily, though not many apps. I've been on an atomic Linux distro for a couple of years now, so flatpak has become part of my daily life.

On this work laptop I have three flatpaks: Signal, Chromium and Firefox. Together they take 1.6 GiB.

On my gaming PC I have Signal, Flatseal, Firefox, PrismLauncher, Fedora MediaWriter and Steam. They obviously take over 700 GB because of the games in Steam, but counting just the other flatpaks, it's 2.2 GiB.

So yeah, not great, but on the other hand I don't care, because I love the packaging of cgroups-based software and I don't need many of them. I mean, my container images take up a lot more space than my flatpaks do.

loloquwowndueo 3 hours ago
“Storage is cheap,” goes the saying. Other people’s storage has a cost of zero, so why not just fill it up with 100 copies of the same dependency?

These package formats (I’m looking at you too, snap) are disrespectful of users’ computers to the point of creating a problem where, because of their size, things take so long and bog the computer down so much that the resource being consumed is no longer storage but time (system time and human time). And THAT is not cheap at all.

Don’t believe me? Install a few dozen snaps, turn the computer off for a week, and watch in amazement as you turn it back on and see it brought to its knees as your computer and network are taxed to the max downloading and applying updates.

musicnarcoman 8 minutes ago
"Storage is cheap" if you do not have to pay for it. It is not so cheap when you are the one paying for the organizations storage.
wtarreau 3 hours ago
Not to mention the catastrophic security that comes with these systems. On a local Ubuntu install, I've had exactly four different versions of the sudo binary: one in the host OS and three in different snaps (some were the same, but there were four distinct versions in total). If they had a reason to be different, it was likely bug fixes, yet not all of them were updated, meaning that even after my main OS was updated there were still three buggy binaries exposed to users and waiting for an exploit to happen. I find this the most shocking aspect of these systems (and I'm really not happy with the disrespect for my storage, as you mention).
brlin2021 1 hour ago
The sudo binaries in the snaps are likely to have their SUID bit stripped, so they won't cause any trouble even if they have known vulnerabilities.
yjftsjthsd-h 1 hour ago
Why do snaps have sudo at all?
neuroelectron 1 hour ago
For a long time storage kept getting cheaper, but we've hit scaling walls in both CPUs and drives. I remember when I was a kid and bought Mechwarrior 2, a game that could use up to 500 MB! The guy working the video game locker warned me, "Are you sure you have enough hard drive space?" after I had just bought a 2 GB drive for something like $60 (I don't remember exactly). A concern that would have been valid maybe a year earlier.
m4rtink 3 hours ago
Snaps do zero deduplication and bundle everything, AFAIK. Flatpak at least does some deduplication at the file level and has shared runtimes.
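For anyone wondering what "deduplication on the file level" looks like in practice: Flatpak's OSTree-based storage keeps each unique file once, addressed by a checksum of its content, and deployments are hard links into that store. A toy Python sketch of the general idea (the store path and function here are made up for illustration, not Flatpak's actual code):

    import hashlib, os, shutil

    STORE = "/tmp/objects"  # hypothetical content-addressed store directory

    def dedup_install(src_file, dest_file):
        """Store the file once under its content hash, then hard-link it into place."""
        with open(src_file, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        obj = os.path.join(STORE, digest)
        os.makedirs(STORE, exist_ok=True)
        if not os.path.exists(obj):          # the first copy pays the storage cost
            shutil.copy2(src_file, obj)
        os.makedirs(os.path.dirname(dest_file) or ".", exist_ok=True)
        if os.path.lexists(dest_file):
            os.remove(dest_file)
        os.link(obj, dest_file)              # every later identical file is "free"

Two apps shipping the same library then cost one copy on disk instead of two.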
brlin2021 1 hour ago
This statement is false, as snaps also have shared runtimes, known as "content snaps".

Common examples are the ones with the gnome- prefix and the ones that end with the -themes suffix.

loloquwowndueo 49 minutes ago
Wherein snaps found themselves reinventing shared libraries; at which point, what’s really the point?
Seattle3503 39 minutes ago
I think the point is that maintainers and developers now have a choice of whether they want to share libraries or not. Before, the only choice was to share dependencies.
api 1 hour ago
There are things like content-defined chunking and content-based lookup. Evidently that’s too hard.
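To illustrate: content-defined chunking cuts a file at positions determined by the bytes themselves (via a hash over a small sliding window), so identical data produces identical chunks even when it has shifted, and duplicate chunks can be stored once and found again by hash. A deliberately slow toy sketch in Python, not the algorithm any real tool uses:

    import hashlib

    def chunk(data, window=48, mask=0x1FFF, min_size=2048):
        """Cut wherever the hash of the trailing window matches a bit pattern."""
        pieces, start = [], 0
        for i in range(len(data)):
            if i - start < min_size:
                continue
            h = int.from_bytes(
                hashlib.blake2b(data[i - window:i], digest_size=4).digest(), "big")
            if (h & mask) == mask:          # roughly one boundary per 8 KiB of data
                pieces.append(data[start:i])
                start = i
        pieces.append(data[start:])
        return pieces

Content-based lookup is then just indexing those pieces by their own hashes.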
throwaway314155 36 minutes ago
> watch in amazement as you turn it back on and see it brought to its knees as your computer and network are taxed to the max downloading and applying updates.

A touch overly dramatic...

zdragnar 3 hours ago
It would be fantastic if there was a way for these to declare what libraries they needed bundled, and a manager that would install the necessary dependencies into a shared location, so that only what wasn't already installed got downloaded.

Oh wait...

eikenberry 1 minute ago
It would be even more fantastic if there were a way to compile everything into a single binary and distribute that, so that there are no dependencies (other than the kernel).

Oh wait...

gjsman-1000 3 hours ago
Sure, but we’ve tried that technique for about 20 years.

We learned that most app developers hate it, to the point that they don’t even bother supporting the platform unless they are FOSS diehards.

Those that do support it screech about not using the packaged version on almost all of their developer forums, most often because the packages are out of date and users blame them for bugs that were already fixed.

This actually is infuriating - imagine fixing a bug, but 2 years later, the distribution isn’t shipping your update, and users are blaming you and still opening bug reports. The distribution also will not be persuaded, because it’s the “stable” branch for the next 3 years.

Basically, Linux sucks terribly, either way, with app distribution. Linux distributions have nobody to blame but themselves for being ineffectual here.

dredmorbius 56 minutes ago
...imagine fixing a bug, but 2 years later, the distribution isn’t shipping your update...

This grossly misstates the concept of a stable distribution (e.g., Debian stable, with which I'm most familiar).

Debian stable isn't "stable" in the sense that packages don't change to the point that updates aren't applied at all; it's stable in that functionality and interfaces are stable. The user experience (modulo bugs and security fixes) does not change.

Stable does receive updates that address bugs and security issues. What Stable does not do is radically revise programs, applications, and libraries.

Though it's more nuanced than that even: stable provides several options for tracking rapidly-evolving software, the most notorious and significant of which are Web browsers, with the major contenders updating quite frequently (quarterly or monthly, for example, for Google Chrome "stable" and "dev" respectively). That's expanded further with Flatpak, k8s, and other options in recent years.

The catch is that updates require package maintainers to do the work of integrating and backporting fixes. More prominent and widely-used packages get this. The issue of old bugs being reported to upstream is a breakage of the system in several ways: distros' bug-tracking systems (BTSes) should catch (and be used by) their users, and upstream BTSes arguably should reject tickets opened against older (and backported) versions. The solutions are neither purely technical nor social, which makes them challenging. But in reality we should admit that:

- Upstream developers don't like dealing with the noise of stale bugs.

- Users are going to rant to upstream regardless of distro-level alternatives.

- Upstreams' BTSes should anticipate this and automate redirection of bugs to the appropriate channel with as little dev intervention as possible. Preferably none.

- Distros should increase awareness and availability of their own BTS systems to address bugs specific to the context of that distro.

- Distro maintainers should be diligent about being aware of and backporting fixes, and only fixes.

- Distros should increase awareness and availability of alternatives for running newer versions of software which aren't in the distro's own stable repos.

Widespread distance technological education is a tough nut regardless; there will be failings. The key is that, to the extent possible, those shouldn't fall on upstream devs. Though part of that responsibility, and awareness of the overall problem, does fall on those upstream devs.

rlpb 3 hours ago
> The distribution also will not be persuaded, because it’s the “stable” branch for the next 3 years.

This is exactly what users want, though. E.g. if they want to receive updates more frequently on Ubuntu, they can use the six-monthly releases, but most Ubuntu users deliberately choose the LTS over that option because they don't want everything updated.

martinald 13 minutes ago
At the end of the day, the 'traditional' Linux packaging system, where distributions do it all for you, is totally outdated. Tbh I can remember being extremely annoyed with this in the early/mid 2000s, so I don't know if it was ever a good model.

On SaaS/mobile apps you often have new versions of software coming out daily. That's what users and developers want. They do not want 3+ year stale versions of their software being 'supported' by a third-party distro. I put 'supported' in quotes as it only really applies to security and the like, not to terrible bugs in the software that are fixed in later versions.

Even on servers, where it arguably makes more sense, it has been entirely supplanted by Docker, which ships more or less the _entire OS_ as the 'app'. And even more damningly, most (if not nearly all) people will use a third-party Docker repo to manage updates of the Docker 'core' software itself.

And the reason no one uses the six-monthly releases is that the upgrade process is too painful and regresses too much. But even if it were 100% bulletproof, no one wants to be running software that is 6-12 months out of date either. Chrom(ium) is updated monthly and gains a lot of important new features. You don't really want to be running 6-9 months behind on that.

gjsman-1000 2 hours ago
But if you’re a developer, that doesn’t change the fact that many users do not understand, will not understand, and will open bug reports regularly.

When that happens, guess what you do? You trademark your software’s name and use the law to force distributions not to package it unless concessions are granted. We’re beginning to see this with OBS, but Firefox also did this for a while.

As Fedora quickly found, when trademark law gets involved, any hope of forcing developers to allow packaging through a policy or opinion vote becomes hilariously, comically ineffectual.

The other alternative is to just not support Linux. Almost all major software has been happily taking that path, and the whole packaging mess gives no incentive to change.

mananaysiempre 2 hours ago
> When that happens, guess what you do?

Ban the user who did not read "go to the distro’s maintainers first".

dredmorbius 55 minutes ago
What's the Fedora trademark issue?
dheera 2 hours ago
To be fair, shared libraries have been problematic since the beginning of time.

In the Python world, something wants numpy>=2.5.13, another wants numpy<=2.5.12, yet Python has still not come up with a way to just do "import numpy==2.5.13" and have it pluck exactly that version and import it.
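About the best a program can do today is fail fast when the environment has the wrong version installed. A rough sketch, reusing the made-up numpy version numbers above and the third-party "packaging" module:

    from importlib.metadata import version   # stdlib since Python 3.8
    from packaging.version import Version    # third-party "packaging" package

    required = Version("2.5.13")              # hypothetical version from the example above
    installed = Version(version("numpy"))
    if installed < required:
        raise RuntimeError(f"need numpy>={required}, found {installed}")

    import numpy  # only now is it (somewhat) safe to use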

In the C++ world, I've seen code that spits out syntax errors if you use a newer version of gcc, other code that spits out syntax errors if you use an older version of gcc, apt-get overwriting the shared library you depended on with a newer version, and lots of other issues. Install CUDA 11.2 and it tries to uninstall CUDA 11.1, never mind that you had something linked against it, and that everything else in that ecosystem disobeys semantic versioning and doesn't work with later minor revisions.

It's such a shitshow that it fully makes sense to bundle all your dependencies if you want to ship something that "just works".

For your customer, storage is cheaper than employee time wasted getting something to work.

loloquwowndueo 48 minutes ago
Right, but snaps don’t solve dependency hell (see content snaps, which are shared library bundles).
o11c 36 minutes ago
That's what everybody uses `venv` for. Or `virtualenv` if you're stuck on old Python.

But as a rule, `<=` dependencies mean there's either a disastrous fault with the library, or else the caller is blatantly passing all the "do not enter" signs. `!=` dependencies by contrast are meaningful just to avoid a particular bug.

2OEH8eoCRo0 1 hour ago
Devil's advocate: we have hundreds of stupid distros making choices, and the less I need to deal with their builds the better.
api 59 minutes ago
Containerization is and always was the ultimate “fuck it” answer to these problems.

“Fuck it, just distribute software in the form of tarballs of the entire OS.”

delusional 41 minutes ago
Yeah, I only trust the random developers who are probably running Windows to package my Linux software.

The people making those "stupid distros" are (most likely by number) volunteers working hard to give us an integrated experience, and they deserve better than to be called "stupid".

qbane 3 hours ago
I hope articles like this can at least provide some hints for when the size of a flatpak store grows without bound. It is definitely more involved than "it bundles everything like a node_modules directory, hence..."

[Bug]: /var/lib/flatpak/repo/objects/ taking up 295GB of space: https://github.com/flatpak/flatpak/issues/5904

Why flatpak apps are so huge in size: https://forums.linuxmint.com/viewtopic.php?t=275123

Flatpak using much more storage space than installed packages: https://discussion.fedoraproject.org/t/flatpak-using-much-mo...

account-5 4 hours ago
I can't really comment on snap since I don't use Ubuntu, but I thought flatpaks would work similarly to how portable apps on Windows do. Clearly I'm wrong, but how is it that Windows can have portable apps of a similar size to their installable versions and Linux cannot? I know I'm missing something fundamental here, like how people blame Linux for lack of hardware support without acknowledging that hardware vendors do the work to make Windows work correctly.

Either way, disk space is cheap and abundant now. If I need the latest version of something I will use flatpaks.

blahaj 4 hours ago
Just a guess, but Windows executables probably depend on a bunch of Windows APIs that are guaranteed to be there, while Linux systems are much more modular and do not have a common, let alone stable, ABI in userspace. You can probably get small graphically capable binaries if you depend on Qt and just assume it to be present, but Flatpak precisely does not do that: it bundles all the dependencies so as to be independent from shared dependencies of the OS outside of its control. The article also mentions that AppImages can be smaller, probably because they assume some common dependencies to be present.

And of course there are also tons of huge Windows software that come with all sorts of their own dependencies.

Edit: I think I somewhat misread your comment, and progval is more spot on. On Linux you usually install software with a package manager that resolves dependencies and only installs the unsatisfied ones, resulting in a small install size in many cases, while on Windows that is not really a thing: installers just package all the dependencies they cannot expect to be present, and the portable version does the same.

badsectoracula 4 hours ago
The equivalent of "Windows portable apps" on Linux isn't flatpaks (these add a bunch of extra stuff and need some sort of support from the OS) but AppImages[0]. AppImages are still not 100% the same (and can never be as Windows applications can rely on A LOT more stuff to be there than Linux desktop apps) but functionally/UX-wise they're the closest: you download some program, chmod +x it and run it like any other binary you'd have on your PC.

Personally I vastly prefer AppImages to flatpaks (in fact I do not use flatpaks at all; I'd rather build the program from source, or not use it if the build process is too convoluted).

[0] https://appimage.org/

kmeisthax 4 hours ago
It's a matter of standardization and ABI stability. Linux itself promises an eternally stable syscall ABI, but everything else around it changes constantly. Windows is basically the opposite: no public syscall ABI, but you can always get a window on screen by linking USER.dll and poking it with the correct structures. As a result, Windows apps can assume more, while desktop Linux apps have to ship more.
progval 4 hours ago
Installable versions of Windows apps still bundle most of the libraries like portable apps do, because Windows does not have a package manager to install them.
maccard 4 hours ago
Windows does have a package manager and has for the last 5 years.
kbolino 3 hours ago
Apart from the Microsoft Visual C++ Runtime, there's not much in the way of third-party dependencies that you as a developer would want to pull in from there. Winget is great for installing lots of self-contained software that you as an end user want to keep up to date. But it doesn't really provide a curated ecosystem of compatible dependencies in the way that the usual Linux distribution does.
maccard 3 hours ago
Ok but that’s a different argument to “windows doesn’t have a package manager”
homebrewer 9 minutes ago
Not as it is understood by users of every other operating system, even macOS. It's more of an "application manager". Microsoft has a history of developing something and reusing a well-understood term to mean something completely different.
kbolino 2 hours ago
No, this is directly relevant to the comparison, especially since the original context of this discussion is about how Windows portable apps are no bigger than their locally installed counterparts.

A typical Linux package manager provides applications and libraries. It is very common for a single package install with yum/dnf, apt, pacman, etc. to pull in dozens of dependencies, many of which are shared with other applications. Whereas, a single package install on Windows through winget almost never pulls in any other packages. This is because Windows applications are almost always distributed in self-contained format; the aforementioned MSVCRT is a notable exception, though it's typically bundled as part of the installer.

So yes, Windows has a package manager, and it's great for what it does, but it's very different from a Linux package manager in practice. The distinction doesn't really matter to end users, but it does to developers, and it has a direct effect on package sizes. I don't think this situation is going to change much even as winget matures. Linux distributions carefully manage their packages, while Microsoft doesn't (and probably shouldn't).

maccard 1 minute ago
I never said that WinGet was a drop-in replacement for yum, but the parent's claim that Windows doesn’t have a package manager isn’t true.

There are plenty of packages that require you to add extra sources to your package manager and that are not maintained by the distro. Docker [0] has official instructions for installing via their package source. WinGet allows third-party sources, so there’s no reason you can’t use it. It natively supports dependencies too. It's true that applications are packaged in a way that doesn’t utilise this for WinGet, but again, I was responding to the claim that Windows doesn’t have a package manager.

[0] https://docs.docker.com/engine/install/fedora/#install-using...

keyringlight 3 hours ago
Assuming you're talking about winget, it seems to operate either as an alternative CLI interface to the MS Store, with a separate database developers would need to add their manifests to, or by downloading and running normal installers in silent mode. For example, if you do winget show "adobe acrobat reader (64-bit)" you can see what it will grab. It's a far cry from how most Linux package managers operate.
mjevans 3 hours ago
Windows in 2020: welcome to Linux in 1999, where the distro has a package manager offering just about everything most users will ever need as options to install from the web.
maccard 3 hours ago
I can say the same thing about Linux: it’s 2025 and multi-monitor, Bluetooth and Wi-Fi support still don’t work.
yjftsjthsd-h 1 hour ago
Er, yes they do? I guess things could be spotty if you don't have drivers (which... is true of any OS), but IME that's rare. But I have to ask because I keep hearing variations of this: What exactly is wrong with */Linux handling of multi-monitor? The worst I think I've ever had with it is having to go to the relevant settings screen and tell it how my monitors are laid out and hitting apply.
maccard 7 minutes ago
>I guess things could be spotty if you don’t have drivers

Sure, and this unfortunately isn’t uncommon.

> What exactly is wrong with */Linux handling of multi-monitor?

X11’s support for multiple monitors with mismatched resolutions/refresh rates is… wrong. Wayland improves on this but doesn’t support G-Sync with Nvidia cards (even in the proprietary drivers). You might say that’s not important to you, and that’s fine, but it’s a deal breaker for me.

account-5 2 hours ago
The only things you can say about the few pieces of bleeding-edge hardware that aren't supported by Linux are that:

1. The hardware vendors are still not providing support the way they do for Windows.

2. The Linux devs haven't managed to adapt to this new hardware.

mjevans 2 hours ago
FUD (Fear Uncertainty Doubt).

Every OS has its quirks, things you might not recall as friction points because they're expected.

I haven't found any notable issues with quality hardware, possibly with some need to verify support in the case of radio transmitter devices. You'd probably have the same issue on, e.g., Mac OS X.

As consumers we'd have an easier time if: 1) The main chipset and 'device type ID' had to be printed on the box. 2) Model numbers had to change in a visible way for material changes to the Bill of Materials (any components with other specifications, including different primary chipset control methods). 3) Manufacturers at least tried one flavor of Linux, without non-GPL modules (common firmware blobs are OK) and gave a pass / fail on that.

maccard 14 minutes ago
I don’t think I am spreading FUD. Hardware issues with Linux off the well-trodden path are a well-known problem. X11 (still widely used on many distros) has a myriad of problems with multi-monitor setups, particularly when resolutions and refresh rates don’t match.

You’re right that the manufacturers could provide better support, but they don’t.

wmf 3 hours ago
Unfortunately a lot of Windows devs are targeting 10-year-old versions.
account-5 3 hours ago
I'm replying to myself in reply to everyone who replied to me.

Thanks all for the explanations, much appreciated; I thought I was missing something. I really should have known, though: I've been using portable apps on Windows for over 20 years and remember .NET apps not being considered portable way back when, which are now considered portable since the runtime is on all modern Windows versions.

dismalaf 4 hours ago
"Portable" apps on Windows just don't write into the registry or save state in a system directory. They can still assume every Windows DLL since the beginning of time will be there.

Versus Linux where you have Gnome vs. KDE vs. Other and there's less emphasis on backwards compatibility and more on minimalism, so they need to package a lot more dependencies (potentially).

If you only install Gnome Flatpaks they end up smaller since they can share a bunch of components.

butz 3 hours ago
If you are space-conscious, you should try to select Flatpak apps that use the same runtime (Freedesktop, GNOME or KDE), and make sure all of them use exactly the same version of that runtime. Correct me if I'm wrong, but only two versions of a Flatpak runtime are supported at a time: current and previous. So while a transition to a newer runtime is under way, some applications are not upgraded at once, and the user ends up with more than one (and sometimes more than two) runtimes. In addition to higher disk space usage, one must account for the usual updates too: the more programs and runtimes you have, the more updates to download. Good thing that at least updates are partial.
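If you want to see what your installed runtimes actually cost, a rough sketch like the one below totals on-disk size per runtime, assuming the default system-wide location /var/lib/flatpak/runtime (note that OSTree hard links mean this naive sum over-counts data shared between runtimes):

    import os
    from collections import defaultdict

    ROOT = "/var/lib/flatpak/runtime"   # default system-wide install location
    sizes = defaultdict(int)

    for name in os.listdir(ROOT):                       # e.g. org.gnome.Platform
        for dirpath, _, files in os.walk(os.path.join(ROOT, name)):
            for f in files:
                try:
                    sizes[name] += os.path.getsize(os.path.join(dirpath, f))
                except OSError:
                    pass                                # skip dangling symlinks

    for name, size in sorted(sizes.items(), key=lambda kv: -kv[1]):
        print(f"{size / 2**30:6.2f} GiB  {name}")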
jasonpeacock 4 hours ago
The article mentions that Flatpak is not suitable for servers because it uses desktop features.

Does anyone know what those features are or have more details?

Linux generally draws a thin line between server and desktop; having "desktop only" dependencies is unusual unless it's something like needing the KDE or GNOME GUI libraries?

mananaysiempre 3 hours ago
This may refer to xdg-desktop-portal[1], which is usable without Flatpak, but Flatpak forces you to go through it to access anything outside the app’s private sandbox. In particular, access to user files is mediated through a powerbox (trusted file dialog) [2] provided by the desktop environment. In a sense, Flatpak apps are normal Linux apps to about the same extent that WinRT/UWP apps are normal Windows apps—close, but more limited, and you’re going to need significant porting in either direction.

(This has also made an otherwise nice music player[3] unusable to me other than by dragging and dropping individual files from the file manager, as all of my music lives in git-annex, and accesses through git-annex symlinks are indistinguishable from sandbox escape attempts. On one hand, understandable; on the other, again, the software is effectively useless because of this.)

[1] https://wiki.archlinux.org/title/XDG_Desktop_Portal

[2] https://wiki.c2.com/?PowerBox

[3] https://apps.gnome.org/Amberol

ponorin 3 hours ago
It assumes that you have a DE running and depends on features like D-Bus. So it's not designed to run headless except for building flatpak packages.
LtWorf 4 hours ago
AFAIK it cannot do CLI applications at all.
jeroenhd 3 hours ago
It can, but because the Flatpak system depends on APIs like D-Bus, getting those to work in headless environments (SSH, framebuffer console, raw TTY) is a pain.

Flatpak will even helpfully link binaries you install to a directory you can add to your $PATH to make command line invocation easy.

wltr 2 hours ago
That was so useless and the style was so bad, I’m pretty sure it was written with (if not by) LLMs. Not even sure if I’m disappointed finding this low effort content here, or rather not surprised at all. I wish the content here would be more interesting, but maybe I’d want to find some other community for that.

I mean, the comments are much more interesting than this piece of content, but the content itself is almost offending. At least the discussion is much more valuable than what I’ve just read by following that link.

haunter 2 hours ago
What made Flatpak more popular than AppImage? I thought the latter was "vastly" superior and really portable?
gjsman-1000 3 hours ago
It feels, to me, like the Linux desktop has become an overly complicated behemoth, never getting anywhere due to its weight.

I still feel the pinnacle for modern OS design might be Horizon, by Nintendo of all people. A capability-based microkernel OS that updates in seconds, fits into under 400 MB (WebKit and NVIDIA drivers included), is fast enough for games, and hasn’t had an unsigned code exploit in half a decade. (The OS is extremely secure, but NVIDIA’s boot code wasn’t.)

Why can’t we build something like that?

wk_end 3 hours ago
We can't build something quite like that because we demand a whole lot more from our general-purpose computing devices than we demand from our Switches.

For instance, the Switch - and I don't know where in the stack this limitation lies - won't even let you run multiple programs that use the network. You can't, say, download a new game while playing another one that happens to have online connectivity - even if you aren't using it!

On a computer, we want to be able to run dozens of programs at the same time, freely and seamlessly; we want them to be able to interoperate: share data, resources, libraries, you name it; we want support for a vast array of hardware and peripherals. And on and on.

A Switch, fundamentally, is a device that plays games. Simpler requirements leads to simpler software.

gjsman-1000 3 hours ago
This isn’t actually true, as you can use the Nintendo Switch Online app, or the eShop, while downloading games.

You just can’t play games at the same time one is downloading. That’s a deliberate storage-speed and network-use optimization rather than a software limitation. You can also tell this from the system notifications about online players, even while you are playing an online game.

(Edit for posting too fast: The Switch does have a web browser, full WebKit even, which is used for the eShop and for logging in to captive portal Wi-Fi. Exploits are found occasionally, but the sandboxing has so far rendered these exploits mostly useless. Personally, I support this, as then Nintendo doesn’t have to worry about website parental controls.)

m4rtink 2 hours ago
But AFAIK it still does not have a web browser, because they are scared of all the WebKit exploits people used to enable custom software on the PlayStation Vita. So rather than risk that, they released the Switch without a built-in web browser, even though it would be perfectly usable on the hardware and very useful in many cases.
yjftsjthsd-h 1 hour ago
> fits into under 400 MB (WebKit and NVIDIA drivers included),

I don't think that's particularly hard if you only include support for one set of hardware and a single API/ABI for applications. Notably, no general-purpose OS does either of these things and people would probably not be pleased if one tried.

jeroenhd 2 hours ago
Linux has supported online replacement for a while now, and can be compiled to dozens of megabytes in size. Whatever cruft Nvidia adds in their binary drivers will push the OS beyond 400MiB, but building a Linux equivalent isn't exactly impossible.

The problem with it is that it's a lot of work (just getting secure boot to work is a massive pain in itself) and there are a lot of drivers you need to manually disable or settings to manually toggle to get a Switch equivalent system. The Switch includes only code paths necessary for the Switch, so anything that looks like a USB webcam should be completely useless. Bluetooth/WiFi chipset drivers are necessary, but of course you only need the BLOBs for the specific hardware you're using.

Part of Nintendo's security strategy is the inability to get binary code onto the system that wasn't signed by Nintendo. You can replicate this relatively easily (basic GPG/etc. signature checks + marking externally accessible mount points as non-executable + only allowing execution of those mounts/copies from those mounts after full signature verification). Also add some kind of fTPM-based encryption mechanism to make sure your device storage can't be altered. You then need to figure out some method of signing all the software any user of your OS could possibly need to execute, but if you're an OEM that shouldn't be impossible.
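As a stripped-down illustration of the "only execute what passes verification" idea, here is a sketch using a plain SHA-256 allowlist in place of real signatures; the allowlist path is made up, and a real system would use dm-verity, IMA or proper signature checks instead:

    import hashlib, subprocess

    ALLOWLIST = "/etc/allowed-binaries.sha256"   # hypothetical, shipped read-only

    def allowed_hashes():
        with open(ALLOWLIST) as f:
            return {line.split()[0] for line in f if line.strip()}

    def run_verified(path, *args):
        """Refuse to execute anything whose content hash isn't on the allowlist."""
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest not in allowed_hashes():
            raise PermissionError(f"{path} is not an approved binary")
        return subprocess.run([path, *args], check=True)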

Once you've locked down the system enough, you can start enforcing whatever sandboxing you need on top of whatever UI you prefer so your games can't be hacked. Flatpak/Snap/Docker all provide APIs for this already.

The tooling is all there, but there's no incentive for anyone to actually make it work. Some hardware OEMs do a pretty good job (Samsung's Tizen, for instance) but anything with a freely accessible debug interface or development interface is often quickly hacked. Most of the Linux user base want to use the same OS on their laptop and desktop and have all of their components work, and would also like the ability to run their own programs. To accomplish that, you have to give up a lot of security layers.

I doubt Nintendo's kernel is that secure, but without access to the source code and without a way to attack it, exploiting it is much harder. Add to that the tendency of Nintendo to sue, harass, and intimidate people trying to get code execution on their devices, and they end up with hardware that looks pretty secure from the outside.

Android and ChromeOS are also pretty solid operating systems in terms of general security, but their dependence on supporting a range of (vendor) drivers makes them vulnerable. Still, escalating from webkit to root on Android is quite the challenge, you'll need a few extra exploits for that, and those will probably only work on specific phones running specific software.

For what it's worth, you can get a pretty solid system by installing an immutable OS (Silverblue style) without root privileges. That still has some security mechanisms disabled for usability purposes, but it's a solid basis for an easily updateable, low-security-risk OS when installed correctly.

anthk 3 hours ago
Alpine Linux?
gjsman-1000 3 hours ago
Close; but the security still isn’t anywhere close.

On Alpine, if there’s a zero-day in WebKit, you’d better check how your security is set up, and hope there isn’t an escalation chain.

On Horizon, dozens of bugs in WebKit, the Broadcom Bluetooth stack, and the games have been found; they are still found regularly. They are also boring and completely useless, because the sandboxing is so tight.

You also can’t update Alpine in 5 seconds flat, even between a dozen major versions. That alone is amazing.

yjftsjthsd-h 1 hour ago
> Close; but the security still isn’t anywhere close. [...]

I think a lot of the security comes down to what compromises you're willing to make. Horizon doesn't have to support the same breadth of hardware or software as we expect out of normal OSs, so they can afford to reinvent the world on a secure microkernel. If we want to maintain backwards-compatibility (and we do, because otherwise it's dead on arrival) then we have to take smaller steps. Of course, we can take those steps; if you care about security then you should run your browser in a sandbox (firejail, bubblewrap, docker/podman) at which point a zero-day in the browser is lower impact (not zero risk, true, but again I don't see any way to fix that without throwing out performance or compatibility).

> You also can’t update Alpine in 5 seconds flat, even between a dozen major versions. That alone is amazing.

I rather assumed that the Switch doesn't actually install OS updates in 5s either? The obvious way to do what they're doing is A/B updates in the background, after which you "apply" by rebooting, which Linux can do in 5s.
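The "apply" step is cheap because the new image is already staged; switching is just an atomic pointer flip. A minimal sketch of the symlink-swap flavor of that idea (paths are hypothetical; a real A/B scheme flips a bootloader slot flag instead):

    import os

    def activate(new_deployment, current_link="/sysroot/current"):   # hypothetical paths
        """Atomically repoint the 'current' symlink at an already-staged deployment.
        rename() over an existing path is atomic on POSIX, so a crash mid-switch
        leaves either the old or the new tree active, never a half-applied state."""
        tmp = current_link + ".tmp"
        if os.path.lexists(tmp):
            os.remove(tmp)
        os.symlink(new_deployment, tmp)
        os.rename(tmp, current_link)   # the actual "update applied" moment
        # the next reboot (or service restart) picks up new_deployment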

ReptileMan 3 hours ago
Why does it seem that we try to both avoid and poorly reinvent the static linker with every new technology and generation? Windows has been fighting DLL hell for 30 years now. Linux doesn't seem able to produce an alternative to DLL hell either. Not sure how the macOS world fares.
pdimitar 4 hours ago
[flagged]
dang 1 hour ago
Can you please follow the site guidelines when posting to HN? You broke them badly in this thread, and we've had to ask you this many times before.

https://news.ycombinator.com/newsguidelines.html

pdimitar 1 hour ago
Apparently I did. Seems I underestimated the impact of what I perceived as a small rant.

[no longer replying non-constructively to anyone in this sub-thread]

yjftsjthsd-h 1 hour ago
Okay, fair enough. Which part are you working on and how far have you gotten?
pdimitar 1 hour ago
Elixir -> Rust -> SQLite library (FFI bridge). The FFI library is completed (without some of SQLite's advanced features that I don't deem important for a v1) and I am just adding more tests now, though the integration layer with Elixir's de-facto data mapper library (Ecto) has not been started yet. Which means that an announcement would be met with crickets, hence I'll work on that integration layer VerySoon™. Otherwise the whole thing wouldn't ever help anyone.

I do feel strongly about it as I believe most apps don't need a full-blown database server. I've seen what SQLite can do and to me it's still a hidden gem (or a blind spot) to many programmers.

So I am putting my sweat where my mouth is and will provide that to the Elixir community, for 100% free, no one-time payments and no subscription software.

And yes, I do get annoyed by privileged people casually working on completely irrelevant stuff that's never going to move anything forward. Obviously everyone has the right to do whatever they like in their free time, but announcements on HN I can't combine with that and they do annoy me greatly. "Oh look, it's just a hobby project but I want you all to look at it!" -- don't know, it does not make any sense to me. Seems pretentious and ego-serving but I could be looking at it wrong. Triggers tend to remove nuance after all.

renewiltord 4 hours ago
But I don’t want to solve actual problems. I want to write the 3689th lisp interpreter in the world.
pdimitar 3 hours ago
Your right and prerogative, obviously.

But out there, a stranger you care nothing about, will think less of you.

Wish I had that free time and freedom though... The things I would do.

renewiltord 3 hours ago
You can have that free time. Stop posting on HN and write some code. I can do both but if I couldn’t I’d pick the latter.
pdimitar 2 hours ago
[flagged]
bigyabai 3 hours ago
Pay me
pdimitar 3 hours ago
[flagged]
bigyabai 3 hours ago
[flagged]