On this work laptop I have three flatpaks: Signal, Chromium, and Firefox. They take 1.6 GiB in total.
On my gaming PC I have Signal, Flatseal, Firefox, PrismLauncher, Fedora MediaWriter, and Steam. Obviously they take over 700G because of the games in Steam, but counting just the other flatpaks, they're 2.2 GiB.
So yeah, not great, but on the other hand I don't care, because I love this style of cgroups-based packaging and I don't need many of them. I mean, my container images take up a lot more space than my flatpaks.
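For reference, a minimal sketch of how to check these numbers yourself (assuming a flatpak new enough to support --columns, and the default system-wide install path):

    # Per-app installed sizes, then the total on-disk footprint.
    # Assumes flatpak >= 1.2 (--columns) and a system install in /var/lib/flatpak.
    import subprocess

    apps = subprocess.run(
        ["flatpak", "list", "--app", "--columns=application,size"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(apps)  # one app per line with its installed size

    # Total including shared runtimes; ostree hardlinks mean shared objects
    # are only counted once by du.
    subprocess.run(["du", "-sh", "/var/lib/flatpak"])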
These package formats (I’m looking at you as well, snap) are disrespectful of users’ computers, to the point of creating a problem where, due to size, things take so long and bog the computer down so much that the resource being consumed is no longer storage but time (system and human time). And THAT is not cheap at all.
Don’t believe me? Install a few dozen snaps, turn the computer off for a week, and watch in amazement as you turn it back on and see it brought to its knees, your computer and network taxed to the max downloading and applying updates.
Common examples are the packages with the gnome- prefix and the ones that end with the -themes suffix.
A touch overly dramatic...
Oh wait...
We learned that most app developers hate it, to the point that they don’t even bother supporting the platform unless they are FOSS diehards.
Those that do support it screech about not using the packaged version on almost all of their developer forums, most often because those packages are out of date and users blame the developers for bugs that were already fixed.
This actually is infuriating - imagine fixing a bug, but two years later the distribution still isn’t shipping your update, and users are blaming you and opening bug reports. The distribution also won’t be persuaded, because it’s the “stable” branch for the next three years.
Basically, either way, app distribution on Linux sucks terribly. Linux distributions have nobody to blame but themselves for being ineffectual here.
This grossly misstates the concept of a stable distribution (e.g., Debian stable, with which I'm most familiar).
Debian stable isn't "stable" in the sense that packages never change and updates aren't applied at all; it's stable in that functionality and interfaces are stable. The user experience (modulo bugs and security fixes) does not change.
Stable does receive updates that address bugs and security issues. What Stable does not do is radically revise programs, applications, and libraries.
Though it's more nuanced even than that: stable provides several options for tracking rapidly evolving software, the most notorious and significant of which are web browsers, with the major contenders updating quite frequently (quarterly or monthly, for example, for Google Chrome "stable" and "dev" respectively). That's expanded further with Flatpak, k8s, and other options in recent years.
The catch is that updates require package maintainers to work on integrating and backporting fixes. More prominent and widely used packages get this. The issue of old bugs being reported to upstream is a breakage of the system in several ways: distros' bug-tracking systems (BTSes) should catch those reports (and be used by their users), and upstream BTSes arguably should reject tickets opened against older (and backported) versions. The solutions are neither purely technical nor purely social, which makes them challenging. But in reality we should admit that:
- Upstream developers don't like dealing with the noise of stale bugs.
- Users are going to rant to upstream regardless of distro-level alternatives.
- Upstreams' BTSes should anticipate this and automate redirection of bugs to the appropriate channel with as little dev intervention as possible. Preferably none.
- Distros should increase awareness and availability of their own BTS systems to address bugs specific to the context of that distro.
- Distro maintainers should be diligent about being aware of, and backporting, fixes and only fixes.
- Distros should increase awareness and availability of alternatives for running newer versions of software which aren't in the distro's own stable repos.
Widespread distance technological education is a tough nut regardless; there will be failings. The key is that, to the extent possible, those shouldn't fall on upstream devs. Though part of that responsibility, and awareness of the overall problem, *does* fall on those upstream devs.
This is exactly what users want, though. E.g., if they want to receive updates more frequently on Ubuntu, they can use the six-monthly releases, but most Ubuntu users deliberately choose the LTS over that option because they don't want everything updated.
On SaaS/mobile apps you often have new versions of software coming out daily. That's what users and developers want. They do not want 3+ year-old stale versions of their software being 'supported' by a third-party distro. I put 'supported' in quotes as it only really covers security and the like, not terrible bugs in the software that are fixed in later versions.
Even on servers, where it arguably makes more sense, it has been entirely supplanted by Docker, which ships more or less the _entire OS_ as the 'app'. And even more damningly, most people will use a third-party Docker repo to manage updates of the Docker 'core' software itself.
And the reason no one uses the six-monthly releases is that the upgrade process is too painful and regresses too much. But even if it were 100% bulletproof, no one wants to be running 6-12-month out-of-date software on that either. Chrom(ium) is updated monthly and gains a lot of important new features. You don't really want to be running 6-9 months out of date on that.
When that happens, guess what you do? You trademark your software’s name and use the law to force distributions not to package it unless concessions are granted. We’re beginning to see this with OBS, and Firefox also did it for a while.
As Fedora quickly found, when trademark law gets involved, any hope of forcing developers to allow packaging through a policy or opinion vote becomes hilariously, comically ineffectual.
The other alternative is to just not support Linux. Almost all major software has been happily taking that path, and the whole packaging mess gives no incentive to change.
Ban the user who didn’t read the “go to the distro’s maintainers first” notice.
In the Python world, something wants numpy>=2.5.13 and something else wants numpy<=2.5.12, yet Python has still not come up with a way to just do "import numpy==2.5.13" and have it pluck exactly that version and import it.
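The closest thing available is a workaround: install each pinned version into its own directory and choose one on sys.path before importing. A rough sketch (the paths and the numpy version are hypothetical):

    # Vendor each pinned version separately, then pick one at import time.
    # Not "import numpy==2.5.13", but it achieves roughly the same effect.
    import subprocess, sys

    def install_pinned(pkg, version, target):
        # pip --target installs into an isolated directory, not site-packages
        subprocess.check_call([sys.executable, "-m", "pip", "install",
                               f"{pkg}=={version}", "--target", target])

    install_pinned("numpy", "2.5.13", "/tmp/vendor/numpy-2.5.13")  # hypothetical
    sys.path.insert(0, "/tmp/vendor/numpy-2.5.13")
    import numpy  # now resolves to the vendored copy first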
In the C++ world, I've seen code that spits out syntax errors if you use a newer version of gcc, and code that spits out syntax errors if you use an older version of gcc; apt-get overwrites the shared library you depended on with a newer version; lots of other issues. Install CUDA 11.2 and it tries to uninstall CUDA 11.1, never mind that you had something linked against it, and everything else in that ecosystem disobeys semantic versioning and doesn't work with later minor revisions.
It's such a shitshow that it fully makes sense to bundle all your dependencies if you want to ship something that "just works".
For your customer, storage is cheaper than employee time wasted getting something to work.
But as a rule, `<=` dependencies mean there's either a disastrous fault in the library, or else the caller is blatantly driving past all the "do not enter" signs. `!=` dependencies, by contrast, are meaningful precisely for avoiding a particular bug.
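To make the distinction concrete, a small sketch using the packaging library (the versions are hypothetical):

    # != dodges one known-bad release; <= pins you out of every future fix.
    from packaging.specifiers import SpecifierSet

    avoid_one_bug = SpecifierSet("!=2.5.12")
    cap_forever = SpecifierSet("<=2.5.12")

    print("2.5.13" in avoid_one_bug)  # True: newer releases still allowed
    print("2.5.13" in cap_forever)    # False: every future fix is refused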
“Fuck it, just distribute software in the form of tarballs of the entire OS.”
The people making those "stupid distros" are (most likely by number) volunteers working hard to give us an integrated experience, and they deserve better than to be called "stupid".
[Bug]: /var/lib/flatpak/repo/objects/ taking up 295GB of space: https://github.com/flatpak/flatpak/issues/5904
Why flatpak apps are so huge in size: https://forums.linuxmint.com/viewtopic.php?t=275123
Flatpak using much more storage space than installed packages: https://discussion.fedoraproject.org/t/flatpak-using-much-mo...
Either way, disk space is cheap and abundant now. If I need the latest version of something, I will use flatpaks.
And of course there are also tons of huge Windows programs that come with all sorts of their own dependencies.
Edit: I think I somewhat misread your comment, and progval is more spot on. On Linux you usually install software with a package manager that resolves dependencies and installs only the unsatisfied ones, resulting in small install sizes in many cases; on Windows that is not really a thing, and installers just package all the dependencies they cannot expect to be present (the portable versions do the same).
Personally, I vastly prefer AppImages to flatpaks (in fact I do not use flatpaks at all; I'd rather build the program from source, or not use it if the build process is too convoluted).
A typical Linux package manager provides applications and libraries. It is very common for a single package install with yum/dnf, apt, pacman, etc. to pull in dozens of dependencies, many of which are shared with other applications. Whereas, a single package install on Windows through winget almost never pulls in any other packages. This is because Windows applications are almost always distributed in self-contained format; the aforementioned MSVCRT is a notable exception, though it's typically bundled as part of the installer.
So yes, Windows has a package manager, and it's great for what it does, but it's very different from a Linux package manager in practice. The distinction doesn't really matter to end users, but it does to developers, and it has a direct effect on package sizes. I don't think this situation is going to change much even as winget matures. Linux distributions carefully manage their packages, while Microsoft doesn't (and probably shouldn't).
There are plenty of packages that require you to add extra sources, not maintained by the distro, to your package manager. Docker [0] has official instructions to install via their package source. WinGet allows third-party sources, so there’s no reason you can’t do the same; it natively supports dependencies too. It's true that applications aren't packaged in a way that utilises this for WinGet, but again, I was responding to the claim that Windows doesn’t have a package manager.
[0] https://docs.docker.com/engine/install/fedora/#install-using...
Sure, and this unfortunately isn’t uncommon.
> What exactly is wrong with */Linux handling of multi-monitor?
X11’s support for multiple monitors with mismatched resolutions/refresh rates is… wrong. Wayland improves upon this but doesn’t support G-Sync with Nvidia cards (even in the proprietary drivers). You might say that’s not important to you, and that’s fine, but it’s a deal breaker for me.
1. The hardware vendors are still not providing support the way they do for Windows.
2. Linux devs haven't managed to adapt to this new hardware.
Every OS has its quirks, things you might not recall as friction points because they're expected.
I haven't found any notable issues with quality hardware, though you may need to verify support in the case of radio-transmitter devices. You'd probably have the same issue for, e.g., Mac OS X.
As consumers we'd have an easier time if: 1) the main chipset and 'device type ID' had to be printed on the box; 2) model numbers had to change in a visible way for material changes to the bill of materials (any components with different specifications, including different primary chipset control methods); 3) manufacturers at least tried one flavor of Linux, without non-GPL modules (common firmware blobs are OK), and gave a pass/fail on that.
You’re right that the manufacturers could provide better support, but they don’t.
Thanks all for the explanations, much appreciated; I thought I was missing something. I really should have known, though: I've been using portable apps on Windows for over 20 years and remember .NET apps not being considered portable way back when, which are now considered portable since the runtime is on all modern Windows.
Versus Linux, where you have Gnome vs. KDE vs. other, and there's less emphasis on backwards compatibility and more on minimalism, so they (potentially) need to package a lot more dependencies.
If you only install Gnome Flatpaks, they end up smaller since they can share a bunch of components.
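You can see the sharing directly: runtimes are installed once, and every app on the same runtime reuses them. A quick check (again assuming a flatpak with --columns support):

    # Each runtime (e.g. org.gnome.Platform) is listed once, no matter how
    # many installed apps depend on it.
    import subprocess

    subprocess.run(["flatpak", "list", "--runtime", "--columns=application,size"])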
Does anyone know what those features are or have more details?
Linux generally draws a thin line between server and desktop; having “desktop only” dependencies is unusual unless it’s something like needing the KDE or Gnome GUI libraries.
(This has also made an otherwise nice music player[3] unusable to me other than by dragging and dropping individual files from the file manager, as all of my music lives in git-annex, and accesses through git-annex symlinks are indistinguishable from sandbox escape attempts. On one hand, understandable; on the other, again, the software is effectively useless because of this.)
Flatpak will even helpfully link the binaries you install into a directory (typically /var/lib/flatpak/exports/bin for system-wide installs) that you can add to your $PATH to make command-line invocation easy.
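A trivial sketch of wiring that up for the current process (for your shell, you'd add the same directory to PATH in your profile; the path assumes a system-wide install):

    # Make flatpak-exported launchers visible to subprocesses of this script.
    import os

    os.environ["PATH"] += os.pathsep + "/var/lib/flatpak/exports/bin"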
I mean, the comments are much more interesting than the piece itself; the content is almost offensive. At least the discussion is much more valuable than what I’ve just read by following that link.
I still feel the pinnacle of modern OS design might be Horizon, by Nintendo of all people. A capability-based microkernel OS that updates in seconds, fits into under 400 MB (WebKit and NVIDIA drivers included), is fast enough for games, and hasn’t had an unsigned-code exploit in half a decade. (The OS is extremely secure; NVIDIA’s boot code wasn’t.)
Why can’t we build something like that?
For instance, the Switch - and I don't know where in the stack this limitation lies - won't even let you run multiple programs that use the network. You can't, say, download a new game while playing another one that happens to have online connectivity - even if you aren't using it!
On a computer, we want to be able to run dozens of programs at the same time, freely and seamlessly; we want them to be able to interoperate: share data, resources, libraries, you name it; we want support for a vast array of hardware and peripherals. And on and on.
A Switch, fundamentally, is a device that plays games. Simpler requirements lead to simpler software.
You just can’t play a game at the same time one is downloading. That’s a deliberate storage-speed and network-use optimization rather than a software limitation. You can also tell this from the system’s notifications about online players, even while you’re playing an online game.
(Edit for posting too fast: the Switch does have a web browser, full WebKit even, which is used for the eShop and for logging in to captive-portal Wi-Fi. Exploits are found occasionally, but the sandboxing has so far rendered them mostly useless. Personally, I support this, as it means Nintendo doesn’t have to worry about website parental controls.)
I don't think that's particularly hard if you only include support for one set of hardware and a single API/ABI for applications. Notably, no general-purpose OS does either of these things and people would probably not be pleased if one tried.
The problem with it is that it's a lot of work (just getting secure boot to work is a massive pain in itself) and there are a lot of drivers you need to manually disable or settings to manually toggle to get a Switch equivalent system. The Switch includes only code paths necessary for the Switch, so anything that looks like a USB webcam should be completely useless. Bluetooth/WiFi chipset drivers are necessary, but of course you only need the BLOBs for the specific hardware you're using.
Part of Nintendo's security strategy is the inability to get binary code onto the system that wasn't signed by Nintendo. You can replicate this relatively easily (basic GPG/etc. signature checks, plus marking externally accessible mount points as non-executable, plus only allowing execution from those mounts, or from copies off them, after full signature verification). Also add some kind of fTPM-based encryption mechanism to make sure your device's storage can't be altered. You then need to figure out some method of signing all the software any user of your OS could possibly need to execute, but if you're an OEM that shouldn't be impossible.
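As a sketch of the verify-then-execute half of that (assuming gpg is installed and the publisher's key is already in the keyring; the paths are hypothetical):

    # Refuse to launch a binary unless its detached GPG signature verifies.
    import subprocess, sys

    def run_if_signed(binary, signature):
        # gpg --verify exits non-zero when the signature is bad or missing
        if subprocess.run(["gpg", "--verify", signature, binary]).returncode != 0:
            sys.exit(f"refusing to run {binary}: signature check failed")
        subprocess.run([binary])

    run_if_signed("/opt/apps/game", "/opt/apps/game.sig")  # hypothetical paths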
Once you've locked down the system enough, you can start enforcing whatever sandboxing you need on top of whatever UI you prefer so your games can't be hacked. Flatpak/Snap/Docker all provide APIs for this already.
The tooling is all there, but there's no incentive for anyone to actually make it work. Some hardware OEMs do a pretty good job (Samsung's Tizen, for instance) but anything with a freely accessible debug interface or development interface is often quickly hacked. Most of the Linux user base want to use the same OS on their laptop and desktop and have all of their components work, and would also like the ability to run their own programs. To accomplish that, you have to give up a lot of security layers.
I doubt Nintendo's kernel is that secure, but without access to the source code and without a way to attack it, exploiting it is much harder. Add to that the tendency of Nintendo to sue, harass, and intimidate people trying to get code execution on their devices, and they end up with hardware that looks pretty secure from the outside.
Android and ChromeOS are also pretty solid operating systems in terms of general security, but their dependence on supporting a range of (vendor) drivers makes them vulnerable. Still, escalating from WebKit to root on Android is quite the challenge: you'll need a few extra exploits, and those will probably only work on specific phones running specific software.
For what it's worth, you can get a pretty solid system by installing an immutable OS (Silverblue style) without root privileges. That still has some security mechanisms disabled for usability purposes, but it's a solid basis for an easily updateable, low-security-risk OS when installed correctly.
On Alpine, if there’s a zero day in WebKit, you’d better check how your security is set up, and hope there’s not an escalation chain.
On Horizon, dozens of bugs in WebKit, the Broadcom Bluetooth stack, and the games have been found; they are still found regularly. They are also boring and completely useless, because the sandboxing is so tight.
You also can’t update Alpine in 5 seconds flat, even between a dozen major versions. That alone is amazing.
I think a lot of the security comes down to what compromises you're willing to make. Horizon doesn't have to support the same breadth of hardware or software as we expect out of normal OSs, so they can afford to reinvent the world on a secure microkernel. If we want to maintain backwards-compatibility (and we do, because otherwise it's dead on arrival) then we have to take smaller steps. Of course, we can take those steps; if you care about security then you should run your browser in a sandbox (firejail, bubblewrap, docker/podman) at which point a zero-day in the browser is lower impact (not zero risk, true, but again I don't see any way to fix that without throwing out performance or compatibility).
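For the browser case, a minimal bubblewrap sketch (illustrative only; a real profile needs more binds for libraries, fonts, a home directory, and the display socket):

    # Launch a browser with every namespace unshared except the network.
    # Flags are from bwrap's man page; the bind list is deliberately minimal.
    import subprocess

    subprocess.run([
        "bwrap",
        "--ro-bind", "/usr", "/usr",      # read-only system files
        "--ro-bind", "/etc", "/etc",
        "--symlink", "usr/bin", "/bin",   # for merged-/usr distros
        "--symlink", "usr/lib", "/lib",
        "--symlink", "usr/lib64", "/lib64",
        "--proc", "/proc",                # fresh /proc
        "--dev", "/dev",                  # minimal /dev
        "--tmpfs", "/tmp",                # throwaway /tmp
        "--unshare-all", "--share-net",
        "firefox",
    ])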
> You also can’t update Alpine in 5 seconds flat, even between a dozen major versions. That alone is amazing.
I rather assumed that the Switch doesn't actually install OS updates in 5s either? The obvious way to do what they're doing is A/B updates in the background, after which you "apply" by rebooting, which Linux can do in 5s.
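A toy sketch of that pattern, with hypothetical paths: stage the image into the idle slot in the background, and the visible "update" is just an atomic rename plus a reboot.

    # A/B sketch: the bootloader follows one symlink; updates land in the
    # inactive slot, and switching is a single atomic rename(2).
    import os

    ACTIVE = "/systems/active"                      # hypothetical symlink
    SLOTS = ("/systems/slot-a", "/systems/slot-b")

    current = os.path.realpath(ACTIVE)
    idle = SLOTS[1] if current == SLOTS[0] else SLOTS[0]
    # ... download and unpack the new system image into `idle` here ...

    os.symlink(idle, ACTIVE + ".new")
    os.replace(ACTIVE + ".new", ACTIVE)  # atomic; next boot uses the idle slot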
[no longer replying non-constructively to anyone in this sub-thread]
I do feel strongly about it, as I believe most apps don't need a full-blown database server. I've seen what SQLite can do, and to me it's still a hidden gem (or a blind spot) for many programmers.
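For anyone who hasn't tried it: the whole "server" is one file plus a driver. A minimal sketch (Python for brevity, since its driver ships in the standard library; the same idea applies from Elixir via an SQLite driver):

    # No daemon, no socket, no credentials: the database is a local file.
    import sqlite3

    conn = sqlite3.connect("app.db")  # created on first use
    conn.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")
    conn.execute("INSERT INTO notes (body) VALUES (?)", ("hello",))
    conn.commit()
    print(conn.execute("SELECT body FROM notes").fetchall())
    conn.close()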
So I am putting my sweat where my mouth is and will provide that to the Elixir community, 100% free: no one-time payments, no subscriptions.
And yes, I do get annoyed by privileged people casually working on completely irrelevant stuff that's never going to move anything forward. Obviously everyone has the right to do whatever they like in their free time, but I can't square that with the announcements on HN, and those annoy me greatly. "Oh look, it's just a hobby project, but I want you all to look at it!" -- I don't know, it makes no sense to me. It seems pretentious and ego-serving, but I could be looking at it wrong. Triggers tend to remove nuance, after all.
But out there, a stranger you care nothing about will think less of you.
Wish I had that free time and freedom though... The things I would do.