Are Immutable Linux Distros right for you?
Comments
toprerules
Wowfunhappy
> 99.9999% of Linux deployments are not Arch installs on old Thinkpads.
Yes, but a majority of the remaining 0.0001% are people on Hacker News, so it's going to get discussed here!
rrix2
After all the nixpkgs/nix leadership failures, and after having dozens of hours of packaging improvement work ignored by individual package maintainers and entire SIGs, I've been evaluating bootc and other declarative options. I'm quite disappointed that they've been completely uninterested in providing any declarative solution for per-host "state" with bootc systems. Having to set up a boot Dockerfile (sorry daddy shadowman, I meant Containerfile!!!) and then also use Ansible and/or cloud-init on top of that to set up a new host is just a complete non-starter when NixOS can handle both in one language framework and development environment, even if that framework is heterodox jank that everyone outside of nixpkgs resents.
bigstrat2003
> Immutable distros are becoming a de-facto standard for server deployments
What? No they're not. I literally have never once in my career seen a server deployed with an immutable distro.
ghshephard
Plus 1 on this - I've probably had direct responsibility for managing fleets of roughly 50,000 Linux hosts and have never seen an immutable distro. We usually just burn a fresh image of whatever mainline Ubuntu is offering into the fleet every week or two. Saying that containers are becoming a de facto standard is reasonable though - pretty much every company that I or my coworkers have worked with has shifted everything into containers (at least in companies with x00k microservice instances running on ~100k machine environments).
SR2Z
That's probably because, in practice, they're mostly used as the underlying OS for a Kubernetes host (seeing as it's difficult, by definition, to configure an immutable install).
If you really think about it, what's the difference between spinning up a VM with a preconfigured image and spinning up a VM with an _immutable_ preconfigured image?
ilbeeper
The difference is that one is immutable and the other is not. One can be rolled back to an earlier version while retaining user data; the other doesn't offer that ability.
SR2Z
Sure, but the GKE autoscaler will happily erase and recreate machines whenever it wants.
Divergences from the base image are inherently limited because of that.
ilbeeper
Despite the name, that's not what immutable distros are for. GKE won't let you restore a previous generation of a configured and component-versioned base image.
devops99
I personally know someone who runs an "endpoint as a service" with full MDM configurable via web UI; the endpoints (the laptops) run on a Linux kernel.
smilliken
Every professional programmer needs a desktop OS, and NixOS is really hard to beat. Switching to NixOS is like going from a car that is breaking down all the time to one that's reliable and easy to mod and repair. I don't recommend it to family members, but I do recommend it to programmers that care about their tools.
Of course there are many more Linux servers out there than there are programmers, but the OS a programmer develops on is just as important as the OS they deploy to.
Thaxll
This assumes that things break all the time. I've been using Ubuntu/Debian for the last 20 years and have never had to reinstall because something broke.
Nowadays Linux is very stable; you can have things that don't work properly, but you won't need a full reinstall.
sosodev
Genuine question… how?
Every time I've tried to run a standard Linux distro like Ubuntu for more than a couple of years, I inevitably end up breaking something in a way I can't recover from.
Are you taking snapshots to roll back to?
ChocolateGod
Don't use custom repos; use container technologies (e.g. Flatpak, Docker, etc.) to install applications; and update the system regularly (at least once a week).
Usually the broken distro upgrades I see are because people run "curl randomdomain.ck/totallysafescript.sh | sudo bash -" to install things or use custom repos.
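For what it's worth, a minimal sketch of that workflow (the app name is illustrative; the remote URL is Flathub's standard one):

    # add the Flathub remote once, then install apps from it
    # instead of piping a vendor script into a root shell
    flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
    flatpak install flathub org.signal.Signal
    # updates all Flatpaks independently of the base system
    flatpak update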
79a6ed87
This is why I like Arch's Pacman a lot, and the reason why I avoid Debian derivatives.
That `totallysafescript.sh` could at least be within the package manager's scope. Most of the time someone has already done it and published it to the AUR.
IMO the reason so many people run random scripts on Ubuntu/Debian is how much more difficult/inconvenient it is to produce a dpkg .deb compared to a PKGBUILD file. The same goes for macOS, where you have to either rely on Homebrew wizardry or just run the script.
ChocolateGod
> That `totallysafescript.sh` could at least be within the package manager's scope. Most of the time someone has already done it and published it to the AUR.
The AUR is still not as good as proper package management and shouldn't be considered a stable or reliable method of software distribution at scale.
binkHN
I hate Flatpaks; they're bloated monstrosities and I only run them when I have no other choice. Outside of that, distribution package maintainers tend to do a good job and that is my preferred way of running programs.
omolobo
In other words, you won't break your system if you keep the system installation pristine.
calvinmorrison
Container stuff breaks the MOST for me. The hooks into the subsystems invariably don't work correctly, be it xdg preferences or finding things that are global. It's nice to package things into their own sandboxes, but those sandboxes have not played well with my wider system. I am still thankful for snap getting me recent copies of popular software on my aged Debian installs, however.
ChocolateGod
I can't speak for Snap, other than the nightmare it is on non-Debian based systems, but I've not really had any issues with Flatpak.
calvinmorrison
My biggest issues are xdg, sound through PulseAudio, printer stuff, things like that, which have connections in snap but don't seem to work quite right.
IgorPartola
I have had the same experience. Don’t run random commands from the internet, don’t install anything that doesn’t come from the distro vendor (a few very notable exceptions can be made for things like Docker if you really must), don’t mess with configuration files, do upgrades their way. Generally speaking you will have zero problems. Sometimes they will do something like switch from one network manager to something like netplan but overall that stuff is trending towards ease of use, not complexity.
If you install the newest versions of whatever from random repos or compile stuff yourself you are very likely to mess things up. But nowadays there is very little reason to do that. And you can pick a distro that releases at a pace you are comfortable with, so you have choices.
marcosdumay
By not using Ubuntu.
It's not a good distro. I don't know why people insist on using it. Notice that the GP said Debian instead. (Probably Stable, because testing and unstable will break within 10 years.)
jokethrowaway
This experience has been unique to (K)ubuntu (more than 15 years ago) for me.
I've been running rolling release distros for a decade and never had any problems - you have to follow some software migrations when needed, but I managed to migrate to systemd on Arch without an issue, while any dist-upgrade on Ubuntu would wreck my system.
mixmastamyk
LiveCDs and flashdrives mean no issue is unrecoverable.
taeric
Agreed. I confess I assume I'm living in an alternative reality whenever I read folks talk about how hard it is to run Linux as a main OS. I have broken things, sure. I can't remember the last time, though. I have had issues trying to get CUDA working correctly. But even that hasn't been an issue in a long time, at this point.
My gut is that if I was to try and get my 3 monitor setup such that the seams are all pixel aligned, I would be in for a world of pain. I imagine that would be the same for other OSes, as well?
satvikpendem
That's funny, I've had more errors and disasters with NixOS than I've ever had with Windows or macOS. Repairing it is actually a pain.
shatsky
Repairing it is as easy as rebooting and selecting last generation which worked.
jokethrowaway
Indeed, that's nice
Fixing the issue ends up being rather difficult though
bsder
Nix the idea is fantastic. Nix the implementation is currently a disaster.
I liken Nix to source control in the time of CVS. We need two more implementation iterations before it's going to be useful to the general public.
mongol
> ... disaster ...
Such an exaggregation. Many thousands of people use it by choice, despite all alternatives. Hardly a disaster, by any definition.
nightfly
Many thousands of people is a very small fraction
drdaeman
Very few people (among the world population) know what GNU/Linux is. Fewer care enough to switch to it. Even fewer know enough (or have the willpower, time and mental capacity to learn) to actually be proficient.
But among those who do, there are plenty of people who have learned Nix well enough it's no longer a weird arcane thingy that spews out incomprehensible errors for them. Although, I guess, among those no one will deny Nix can be better (but there are no multi-billion-dollar corporations spending tons of their resources on it).
It's like vim. First time you run it you probably can't even exit it - so, of course you think it's a disaster ;)
Terr_
> exaggregation
Is that an aggregation of exaggerations? :p
Actually, that portmanteau kinda-works in this context.
yoavm
Can you elaborate? Why is it a disaster? I've only used Nix as a package manager when my work distro doesn't have some tools I wanted to install, but the few people I know that use NixOS seem to swear by it.
clvx
Debugging and error messages are still hard to deal with. Also, flakes should become standard at this point. Documentation on how to load modules and explore modules using nix repl is also lacking and/or frustrating. It definitely has rough edges. I do hope it will improve.
ChocolateGod
The CLI is also pretty badly documented, or the documentation is outdated.
Chris_Newton
For perspective, I’ve been running NixOS on my main workstation going back a few releases now.
When it works, it’s great. I like that I can install (and uninstall) much of the software I use declaratively, so I always have a “clean” base system that doesn’t accumulate numerous little packages at strange versions over time in the way that most workstations where software is installed more manually tend to do.
This is a trade-off, though. Much is made of the size of the NixOS package repository compared to other distros, but anecdotally I have run into more problems getting a recent version of a popular package installed on my NixOS workstation than I had in probably a decade of running Debian/Ubuntu flavoured distros.
If the version of the package you want isn’t available in the NixOS repo, it can be onerous to install it, because by its nature NixOS doesn’t follow some popular Linux conventions like FHS. Typically, you write and maintain your own Nix package, which often ends up similar to fetching a known version of a package from a trusted source and then following the low-level build-from-source process, but all wrapped up in Nix incantations that may or may not be very well documented, and sometimes with a fair bit of detective work to figure out all the versions and hashes of not just the package you want but also all its dependencies, which may in turn need packaging similarly themselves if you’re unlucky.
It’s also possible to run into this when you’re not installing whole software applications, including those that are available from the NixOS package repository, but rather things like plug-ins for an application or libraries for a programming language. You might end up needing a custom package for the main application so that its plug-in architecture or build system can find the required dependencies in the expected places when you try to install the extra things. Again, this is all complexity and hassle that just doesn’t happen on mainstream Linux distros. If I install Python and then `pip install somepackage` then 99.9% of the time that just works everywhere else but frequently it won’t work out of the box on NixOS.
It’s one of those things that is actually perfectly reasonable given the trade-offs that are explicitly being made, yet still makes NixOS time-consuming and frustrating in a way that other systems simply aren’t when you do run into the limitations.
This comment is already way too long, so I’ll just mention as a footnote that NixOS also tries to reconcile two worlds, and not all Linux software is particularly nicely arranged to be managed declaratively. So in practice, you still end up with some things being done more traditionally/imperatively anyway, and then you have a hybrid system that compromises some of the main benefits of the declarative/immutable pattern. There are tools like Flakes and Home Manager that help to overcome some of this as well, and as others have said, they are promising steps in good directions, but we’re not yet realising the full potential of this declarative style and it’s hard to see how we get from here to there quickly.
wesapien
My Linux desktop experience has been 2 years of Ubuntu/Debian, 4 years of Fedora, and 2 years of NixOS. Hands down, NixOS is my favorite. It's easy to recover from issues since I've gotten the hang of the build error messages, and/or I can just reset my config to the last commit. It took me one year before jumping into flakes, and I'm glad I did. Next year, I'm going into Home Manager.
A custom GPT has been surprisingly helpful after feeding it the manuals for Nix, nixpkgs and NixOS, and other Linux books.
from-nibly
Until someone from security/hr needs you to install a proprietary package.
devops99
Like what? Crowdstrike's EDR?
jokethrowaway
I had the opposite experience because I want to run a lot of software in random repos.
I can make a nix-shell for each project, but then every nix upgrade forced me to go through a lengthy reinstall, and sometimes wrecked compatibility.
Not to mention the number of derivations I had to write myself just to use the latest packages.
Using things like virtualenv instead of nix-shell can fix the general instability, but packaging is too big of a problem.
I went back to Arch.
devops99
> because I want to run a lot of software in random repos.
Containers, and snapshots+clones are your friend. For a while I was doing ZFS snapshots and clones of Gentoo userlands.
However, if you knew how bad things really are with glibc, how poorly designed Linux is at resisting badly behaving software, and how easily some big players can inject badly behaving software into the channels you are fetching from, you would probably seriously consider Qubes.
illumos is a kernel you can rely on to run somewhat arbitrary software.
shatsky
Another opinion: immutability is required to guarantee software integrity, but there is no need to make the whole system or "apps" the immutable units. NixOS also consists of "immutable units", but its "granularity" is similar to the packages of traditional Linux distros, with each unit (Nix store item) representing a single program, library or config file. This provides a better tradeoff, allowing the system to be changed relatively easily (much more easily than in the immutable distros described here, and in many cases as easily as in traditional Linux distros) while keeping the advantages of immutability.
UltraSane
Immutable distros are a good fit for very mature Infrastructure as Code setups. They make drift from the original config impossible.
shatsky
> make drift from the original config impossible
NixOS does that too: its whole "system output path closure" is as immutable as every single store unit within it. But NixOS "reuses" units which are unaffected by config changes when applying a new config, making a "system rebuild" super fast and light on resources when something like a single config file is changed in the NixOS config. And it can be done "in place", unlike with a "conventional immutable distro".
packetlost
IME you don't need a mature IaC setup to have it work well, especially if you've bought into containerization
toprerules
You don't understand what immutable distros are for. Imagine you need to upgrade 500k machines and your options are either to run an agent that has to make the same changes 500k times and hopefully converges onto the same working state no matter the previous state of the machines it's running on, or to pull a well tested image that can be immediately rolled back to the previous image if something goes wrong.
Saying it's just about integrity is like saying Docker images are just about integrity... they absolutely are not. They give you atomic units of deployment, the ability to run the same thing in prod as you do in dev, and many other benefits.
shatsky
> hopefully converges onto the same working state

> Saying it's just about integrity is like saying Docker images are just about integrity

> atomic units of deployment, the ability to run the same thing in prod as you do in dev

In my understanding, integrity is exactly about software being in a certain known correct state, including the absence of anything which is not part of that state. Of course, integrity of parts of a software system, like the contents of individual packages, does not make it really reliable when the whole system does not have it. NixOS has it, and also allows you to "run the same thing".
IshKebab
> and hopefully converges onto the same working state no matter the previous state of the machines it's running on
Isn't that exactly the point of NixOS?
flakes
I think the point they're getting at is that there are typically a lot of intermediate states between the pre-upgrade and post-upgrade states when using package managers. With immutable distros, the upgrade becomes more of an atomic operation than what is offered by more incremental package manager updates.
It also means you can completely leave out the package manager from the target machines, as it’s only used to bootstrap creation of the single deployable unit. Implementing that bootstrapping step is where nix and friends are helpful in this setup.
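As a sketch of what that looks like in practice with bootc (the image reference is illustrative), the host never runs a package manager at all:

    # switch the host to an image built in CI
    sudo bootc switch quay.io/fedora/fedora-bootc:41
    # later: pull and stage the next build of the same image
    sudo bootc upgrade
    # if the new image misbehaves, boot the previous one
    sudo bootc rollback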
danieldk
> With immutable distros, the upgrade becomes more of an atomic operation than what is offered by more incremental package manager updates.
Nix offers the same guarantees. The point was that you don't need the whole system as a single unit (e.g. an image). A system can also be a tree of immutable output paths in a store (where a single output path often, but not necessarily, corresponds to a package).
In that model a system is basically an output path in the store (in reality it's a bit more complex) and has other output paths as transitive dependencies.
Upgrades/downgrades are atomic, because they just consist of selecting a different output path that represents a system. Upgrading creates a new output path that represents a system (either by downloading it from a binary cache or building it) and booting into that output path. Rolling back consists of booting into the previous output path that represents a system.
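Concretely (the hostname and version in the example path are illustrative), it looks like this on NixOS:

    # the running system is just a store path behind a symlink
    readlink /run/current-system
    # -> /nix/store/<hash>-nixos-system-myhost-24.11
    # build a new system closure and atomically switch to it
    sudo nixos-rebuild switch
    # switch back to the previous system closure
    sudo nixos-rebuild switch --rollback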
josephg
This sort of atomic change should be something the filesystem provides. I think it’s crazy that databases have had mechanisms for transactions and rollbacks since the 70s and they’re still considered a weird feature on a filesystem.
There’s all sorts of ways a feature like that could provide value. Adding atomicity to system package managers would be a large, obvious win.
pxc
NixOS has this. Atomicity is implemented via the filesystem: the final switch between states is changing where a single symlink points from one place to another, which is an atomic operation on Linux and maybe elsewhere.
Transactional filesystems still sound cool, but Unix filesystems already have enough features to implement atomic package installation on top of them.
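A sketch of the underlying trick (the paths are illustrative; this is roughly what a Nix profile switch does): create the new symlink under a temporary name, then rename it over the old one, since rename(2) is atomic on POSIX filesystems, while `ln -sfn` is not (it unlinks and recreates).

    ln -s /nix/store/<new-hash>-system /nix/var/nix/profiles/system.tmp
    mv -T /nix/var/nix/profiles/system.tmp /nix/var/nix/profiles/system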
josephg
> Unix filesystems already have enough features to implement atomic package installation on top of them.
Yes, but it’s pretty awkward. You need to do the final commit via a rename or symlink. It would be far more convenient if you didn’t need to do that.
IshKebab
I agree. The fact that we're still doing atomic writes by renaming files is laughable. That's also pretty much the only atomic thing you can do.
I think the issue is the posix filesystem API. Nobody writes better filesystems because no software would use the new features, and no software supports fancier filesystem features because the posix API doesn't expose them.
It'll probably take someone like Apple or Google to fix this. Similar to 16kB pages.
jodrellblank
> Nobody writes better filesystems
People tried: https://en.wikipedia.org/wiki/Transactional_NTFS and https://learn.microsoft.com/en-gb/windows/win32/fileio/depre...
"[Transactional NTFS (TxF)] was introduced with Windows Vista as a means to introduce atomic file transactions to Windows. It allows for Windows developers to have transactional atomicity for file operations in transactions with a single file, in transactions involving multiple files, and in transactions spanning multiple sources – such as the Registry (through TxR), and databases (such as SQL). While TxF is a powerful set of APIs, there has been extremely limited developer interest in this API platform since Windows Vista primarily due to its complexity and various nuances which developers need to consider as part of application development. As a result, Microsoft is considering deprecating TxF APIs in a future version of Windows"
marcosdumay
Not really.
It was tried by one specific team, in one politics-driven environment. It was tied to a ton of unrelated features because of politics, tied to a specific API because of politics, tied to a timeline and a team because of politics, and it was on a completely different OS.
We have no idea even how it went, because we can't trust that the people reporting on it are talking about the correct thing.
jodrellblank
"Nobody has written a hashing/bit-rot detecting filesystem"
"ZFS is one"
"It's not really because it was written by a team, and because it was made for Solaris which is a different operating system, and there was lots of politics involved with its licensing, and because it was bound to the POSIX file API, and because it does complex volume management as well, and because it happened within a timeline before Sun went out of business, and we even have no idea how ZFS went because every documentation and article which says it's about ZFS might be talking about something else".
uh huh.
lmm
Iteration is a lot easier in userspace than in the base system.
tkz1312
NixOS updates are completely atomic.
bezier-curve
Are immutability's benefits not the "integrity" of a system? This seems pedantic.
mrkeen
The comment was reaffirming immutability's benefits, against the previous comment which said traditional packaging provides a better tradeoff.
__david__
The original comment was not about traditional packaging, it was about Nix which is different (and immutable in its own way).
javitury
I totally see the advantages of immutable distros, particularly in a professional or cloud environment. Even as a hobbyist, I'd feel tempted to use immutable distros if it were not for:
- Learning. Figuring out how to migrate a setup even to the most mainstream immutable distro (Fedora Silverblue) can take a while, and to a niche distro like Talos even longer. However, a k8s-friendly setup with low customization requirements would help speed up the migration (but it requires more powerful machines).
- Long term support. Regular distros like Debian and AlmaLinux offer free 5 and 10 year support cycles which means maintenance can be done every 1 or 2 years. On the other hand, immutable distros would require much more frequent maintenance, once every 6 months. A weekend every 6 months is a sizeable part of my time budget for hobbies.
One aspect in which immutable distros have improved a lot is resource usage. They used to require significantly more disk space and have slightly higher minimum requirements than regular distros, but that doesn't seem to be the case anymore.
mikae1
> Long term support. Regular distros like Debian and AlmaLinux offer free 5 and 10 year support cycles which means maintenance can be done every 1 or 2 years.
What's maintenance in the context of immutable distros? Running "ujust upgrade"? That's done automatically in the background for my Aurora installation.
Also, they're working on CentOS based LTS versions of Bluefin: https://universal-blue.discourse.group/t/call-for-testing-bl...
javitury
Yes, system upgrade is the main maintenance task. With some monitoring, security updates can be automated, but after system upgrades I must check manually that everything is working. E.g. incompatible configuration files, changes in 3rd party repos, errors that surface one week after the upgrade, ...
There are also smaller maintenance tasks that are typically ad-hoc solutions to unsolved problems or responses to monitoring alerts. One of these ad-hoc routines was checking that logs do not grow too large, which used to be a problem in my first systemd CentOS, although not anymore.
PS: thanks for the Bluefin read, it made me discover devpod/devcontainer as an interesting alternative to compose files
flomo
> Long term support
Intuitively, this seems backwards, because you could obviously 'mutate' (or mutilate) your Debian system until the updates break. Isolating user changes should make updates easier, not harder. Also macOS uses a 'sealed' system volume and updates are like butter there.
talldayo
> Also macOS uses a 'sealed' system volume and updates are like butter there.
Smooth as in "no data loss", sure. Smooth as in "supports the software I buy and use for long periods of time" is most certainly not true, even with half the software for Mac being statically linked. Windows and Linux arguably do better at keeping system functionality across updates, even with their fundamental disadvantages.
Groxx
While true, this isn't even slightly related to the OS being "immutable" or not. Immutable-OS upgrades can and do break things - that's the reason it's even a thing. They just give you a reliable rollback.
heresie-dabord
> the advantages of immutable distros
The high availability of ChromeOS is a good example of these advantages in a business or educational context.
toprerules
You're missing the whole point of an immutable distro. If you have a hobby project on a regular distro, you run apt-get update or whatever, it installs 200 packages, and half of them run scripts that do some script-specific thing to your machine. If something goes wrong you just bought yourself a week's worth of debugging to figure out how to roll back the state.
If you update using an immutable distro, you rebase back onto your previous deployment or adjust a pin and you're done. Immutable distros save you tons of time handling system upgrades, and the best part is you can experimentally change to a beta or even alpha version of your distro without any fear at all.
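For instance, on an rpm-ostree based system (the branch name is illustrative):

    # go back to the previous deployment after a bad update
    rpm-ostree rollback
    # keep a known-good deployment around so it's never garbage-collected
    sudo ostree admin pin 0
    # rebase onto a different branch, e.g. to try the next release
    rpm-ostree rebase fedora:fedora/41/x86_64/silverblue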
bmicraft
> If something goes wrong you just bought yourself a week’s worth of debugging to figure out how to roll back the state.
But that basically doesn't happen between release upgrades, not unless you're doing something with third party repos at least.
> If you update using an immutable distro, you rebase back onto your previous deployment or adjust a pin and you're done
I genuinely don't know, but can you do security updates without rebasing? Just keeping some working version pinned sounds like a bad idea to me, and doesn't even save you time, because you'll need to resolve that problem eventually anyway.
ChocolateGod
> But that basically doesn't happen between release upgrades
Nvidia would like a word
bmicraft
For pre-Turing closed drivers I'd count that as an unsupported third party, even if the distribution in question tries to support it.
bigstrat2003
I have an Nvidia card and I've never had it cause problems.
ChocolateGod
Many people install Nvidia drivers using the shipped .run binary (which is a bad idea), and those break when the kernel is updated to something newer than the DKMS module supports.
plagiarist
I found Fedora is terrible at documentation, or at least they are around rpm-ostree. It has made learning more of a struggle than necessary. I think the basics are that there is some sort of container image builder that can work from a manifest, and then some way to create a distro out of a container image. All the content I can find is fragmented across many sites and not complete enough to actually use. Extremely frustrating.
gavindean90
Yea the docs on the Fedora side are rough. I would help but I don’t know enough because the learning was so hard.
eraser215
Fair call. In any case I think you'll find things moving towards bootc and away from having to know rpm-ostree at all. The bootc documentation for Fedora is pretty good, and the Universal Blue project has built some awesome distros that use bootc.
tayo42
I don't see how it helps in a cloud environment. With correct permissions, users aren't making changes to live servers or even logging in, and if you want to roll out upgrades you can already do that with OS images.
Maybe it would help in a datacenter.
zuntaruk
In some respects, I'd hope there are potential benefits on the security side of things as well. Since the host FS is generally read-only in these types of distros, there is the potential to make some security teams happy.
javitury
Immutable distros typically use a declarative configuration that is easier to manage with Terraform.
immibis
Exactly, and if it's immutable, you know they aren't. Not through SSH, and not through a vulnerability either. I assume there's something you can hash to prove that you haven't been hacked, as well.
xrd
I'm not seeing any discussion about disk space when using immutable distros. I was running nix for a while and generally loved it. I know I can run nix-gc to clean up unused components. But, when I'm using docker I'm constantly running out of disk space. Again, I know how to use docker system prune, but it's an annoyance.
The discussion in the article talks about using containers and flatpak and snap and all those things bundle dependencies and really swell the disk usage requirements. Is there a good solution other than owning a massive SSD?
It isn't as big a problem for servers, which don't change as often and where you need instant rollbacks, but I'm using immutable (or atomic, like NixOS) distros on my laptop and having trouble.
It makes me think I'm not using these systems correctly.
ChocolateGod
Docker and Flatpak use storage in different ways, despite both being implementations on top of Linux namespaces/cgroups.
Docker uses layered images, one on top of another for each step of the build process. To deduplicate in Docker, you try to reuse layers, but it's not perfect and having duplicate files is very, very common.
Flatpak uses OSTree, which has a content-addressable file store, meaning files are stored and linked based on their checksum, so duplicate files only exist once and are linked into their respective locations.
There is work to make Docker use a system similar to Flatpak (see composefs).
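One rough way to see the deduplication, assuming a system-wide Flatpak install under /var/lib/flatpak: the per-app sizes Flatpak reports sum to more than the repo actually occupies on disk, because identical files are hardlinked out of the OSTree object store.

    flatpak list --columns=size,name
    du -sh /var/lib/flatpak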
trissi1996
I just set up systemd-timers to nix-gc/docker-prune daily.
Still a bit of an annoyance, but one I don't notice once it's set up.
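A minimal sketch of such a timer pair for the Docker half (the unit names and prune flags are a matter of taste, and running Docker as a user unit assumes your user can talk to the daemon; the Nix side can run `nix-collect-garbage --delete-older-than 7d` the same way):

    mkdir -p ~/.config/systemd/user
    cat > ~/.config/systemd/user/docker-prune.service <<'EOF'
    [Service]
    Type=oneshot
    ExecStart=/usr/bin/docker system prune -af
    EOF
    cat > ~/.config/systemd/user/docker-prune.timer <<'EOF'
    [Timer]
    OnCalendar=daily
    Persistent=true
    [Install]
    WantedBy=timers.target
    EOF
    systemctl --user daemon-reload
    systemctl --user enable --now docker-prune.timer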
xrd
Great idea. I don't know why I didn't think of that.
nicksbg
Exactly. I think this is a massive problem, and as someone who works on one of the Ubuntu distributions, I always wonder how much strain it introduces together with Flatpaks and Snaps.
YorickPeterse
My `~/.var/app` directory is 14 GiB in size, 12 GiB of which is used by Signal (which is mostly photos and videos) and 1.8 GiB by Firefox. All other programs only take up a few MiB of space.
In terms of installation size it's not a problem either, as one can verify using `flatpak list --columns=size,name`:
1.3 MB Flatseal
2.9 MB Extension Manager
47.7 MB Celluloid
604.6 MB Freedesktop Platform
680.0 MB Freedesktop Platform
533.8 MB Mesa
533.8 MB Mesa (Extra)
469.8 MB Mesa
469.8 MB Mesa (Extra)
46.9 MB Intel VAAPI driver
50.9 MB Intel VAAPI driver
20.3 MB FFmpeg extension with extra codecs
790.0 kB openh264
763.9 kB openh264
243.7 MB GNU Image Manipulation Program
7.7 MB HEIC
17.6 MB Characters
14.2 MB Connections
25.4 MB Image Viewer
946.7 kB HEIC
25.5 MB Sushi
39.8 MB Papers
941.3 MB GNOME Application Platform version 46
1.0 GB GNOME Application Platform version 47
794.1 kB Fonts
137.7 MB gThumb Image Viewer
1.1 MB adw-gtk3 Gtk Theme
269.6 MB Firefox
482.3 MB Signal Desktop
The duplicate entries are because certain Flatpaks may require different versions of e.g. the Freedesktop platform (that being possible is one of its big selling points). In short, storage isn't a problem at all for any computer produced in the last 20 years.
nicksbg
Fair, but I didn't mean strain in terms of storage space. More in terms of disk writes and how write-intensive it could be, which can be a problem, especially for SSDs.
robador
After a couple of years of running Manjaro I ended up switching to Bazzite, a Fedora Silverblue based distro. For the past years, I stopped being a tinkerer and started turning on my personal laptop less and less. But when I did, I'd find that doing an update would break things, leading to hours of figuring out what broke or why an update wouldn't install. It was so incredibly frustrating. My personal circumstances have changed such that I don't have the time to spend on those shenanigans anymore.

I looked at NixOS for a long time, but the steep learning curve always held me back. A Fedora atomic desktop started to look pretty good, and it took getting so fed up with not being able to do an update after a couple of months without things breaking again that I got over the fact that I would probably need to switch to GNOME or KDE to run a well supported atomic desktop. I settled on Bazzite with GNOME because of its promise of setting up my hardware for casual gaming without effort.

I changed a couple of months ago and honestly, it's made Linux fun for me again. The things I don't want to have to tinker with - the UI, desktop, software - all just work and seem very stable. Software is installed with Flatpaks, AppImages, or in distrobox. If I want to tinker, I do what I always used to do: use docker (podman and distrobox on Fedora). It's been an absolute pleasure so far, with hardly a learning curve for me (based on previous experience and practices, I suppose). Highly recommend.
et1337
I just went all-in on Bluefin DX. It’s my first time using Linux where almost everything worked out of the box, even my 4070. Had to disable Bluetooth to get suspend working, but otherwise, this is the year of Linux on the desktop for me.
vondur
Having to disable Bluetooth seems like a big deal to me in order to get the computer to sleep correctly.
sphars
I know that bug; Bluetooth has been messing with sleep since I installed the 6.11 kernel on my Fedora 40 desktop, and I've seen many users reporting the same thing. My current solution is a script that disables BT on sleep.
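In case it helps anyone, mine is just a systemd sleep hook (the install path is the usual one on Fedora; rfkill does the toggling):

    #!/bin/sh
    # install as /usr/lib/systemd/system-sleep/bluetooth.sh, mode 0755
    # systemd passes "pre" before sleep and "post" after resume
    case "$1" in
      pre)  rfkill block bluetooth ;;
      post) rfkill unblock bluetooth ;;
    esac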
3eb7988a1663
Never heard of this, but I just rebuilt my machine, which is still having issues with sleep. Seeing as how I have zero bluetooth devices (wires never fail me!), I will be disabling bluetooth immediately to see if this resolves my woes.
devops99
Something about the Bluefin artwork and outward communication really turns me off. The project is using some good concepts, for sure. Though Bluefin will never be eligible for production in the way that other commercially supported Linux-based user endpoints (Linux-based systems with immutable patterns) have been for a while.
eraser215
I'm all in on Bluefin DX too, and Bluetooth is working fine for me on my Lenovo X1 Carbon. Fingers crossed you can sort your issue out.
swaits
Been running Kinoite for a good bit (~1 year). I'm a bit over it. Love the idea of immutability, but rebooting every time I get a new system image via rpm-ostree, which is often, is tiresome. Of course, I could update less frequently; alas, habits formed from years of using rolling releases.
I switched to EndeavourOS. Between flatpak and brew and mise, I have relatively well sandboxed applications. This gives me most of the benefits of the immutable OSes, although nowhere near as rigorous, obviously. For a technologist, though, it's fine.
pimeys
You might be interested in Serpent OS, which offers immutability but without reboots after each upgrade.
They just hit their first alpha release, but it has been under development for years already. They focus on rust-based tooling, so even their coreutils are the rust versions instead of GNU. I read the alpha announcement yesterday, and might give it a spin later next year.
So far I've been very happy with Kinoite. I upgrade the base system once a week, but everything is installed in my Arch based container, so updates are fast and do not require a reboot.
On my workstation I use Aurora Linux, a spin of Kinoite with extra tools such as Tailscale added to the base image. On that machine I haven't needed to use rpm-ostree at all.
swaits
Thanks for pointing me to Serpent!
I gave Aurora a quick spin before going back to Endeavour. Didn’t work well for me.
toprerules
The whole point of ostree is that your system image has a minimal amount of stuff in it, i.e. you're only doing upgrades when there is a kernel update (which is essentially impossible to avoid rebooting for no matter what OS you're using; even Serpent OS, which the other commenter linked, can't do kexec updates).
You use something like distrobox to get a rolling release with regular package updates on top of the atomic core.
swaits
I understand the point of it. I'm enthusiastic about it. It was my daily driver at home for more than a year. In the end, the benefits didn't outweigh the pain for me.
johnny22
I think we're closer to the time where live updates are more feasible if you aren't changing the kernel although a log in/out might be required.
mrbluecoat
Isn't NixOS immutable? If so, surprised it wasn't mentioned.
phire
I certainly consider it to be immutable.
But NixOS is immutable in a very different way from all the mentioned distros, which are focused on containers, isolation, and layers. Maybe the author doesn't consider it to be in the same category?
Personally, I've decided that NixOS is not for me. The concept is great, but the actual experience seems to be held back by Nix (the language and the tool) being hard to understand and debug.
whateveracct
Have you ever used the nix repl? Between that and having the failing build keep its working directory around for inspection, I find it's always easy to debug things. I guess the third tool is overlaying the equivalent of debugging into derivations, but that's rare.
phire
I did.
The problem with nix repl is that it only seems to help if you already understand both nix and how the derivations are actually implemented in nix. It's pretty useless as a learning tool.
whateveracct
I don't think that's true. Because the main way I learned those things was poking in the repl. Hitting tab and stuff. And reloading changes to files or overlays and debugging what happened by inspecting things.
It's the same as learning Haskell. Outside of syntax and some basics, you don't need to have deep knowledge to use ghci. And Nix and Haskell are both just substitution-based evaluation (lambda calculus) which imo is 80% of understanding them.
phire
I really don't know how much of the problem is me.
For some reason, I have a really hard time grokking Haskell, and Nix seems to fit in the same boat. I don't know why.
It's not the functional programming. I love doing functional-style programming in Rust, Python and recent versions of C++. And I didn't have any problem with Prolog and Lisp in those few university courses. I have a suspicion that my brain just finds the concept of lazy-by-default evaluation deeply offensive.
What I do know is that, personally: I could never grasp Nix; the repl didn't help; the repl was the extent of the debug tooling; I never found good documentation to help me learn; and I was getting anxious at the thought of doing anything on that server.
globular-toast
Tried Guix?
brnt
I think openSUSE also calls their rpm+btrfs snapshot solution immutable, but AFAIK it doesn't use containers.
drdaeman
Not by default - some things like /etc/machine-id or SSH keys are not part of your configuration; they're just generated in place and kept untouched. Plus you (or your software) can litter arbitrary files around and they'll stay. And of course $HOME is a mess.
But with impermanence it can be effectively immutable.
bmacho
Nor have they mentioned Puppy Linux.
It uses SquashFS images and layers them on top of each other. You can choose to save your modifications in a new image, or discard them. E.g. you can run Puppy Linux from a CD-R (write-once) by appending all your changes.
I think that's a great model for immutability, but AFAIK Puppy Linux doesn't have convenient tools to manage these snapshots, switch between them, roll back and such, and they don't seem to be going in that direction. (I used Puppy Linux as my default system for a while, but I lost touch with it and don't know how they're doing now.)
evanjrowley
It gets super immutable when the impermanence modules are used.
colordrops
I think technically NixOS is considered an atomic distro rather than immutable. You could mount the store rw and modify it, though you really shouldn't except in extreme cases.
arianvanp
Same for Fedora CoreOS. rpm-ostree is just a bunch of symlinks and hard links, just like NixOS, if I recall correctly. Or at least it used to be.
wkat4242
For me: no.
I want full control over my system. Immutability means leaving part of that to the OS developer. Definitely don't want that. Even though it's ostensibly better for security (though it's only really making one step in the kill chain harder, which is establishing persistence).
toprerules
First, you don't have full control of your system. Your system is running an unknown amount of code as binary firmware blobs, even if you're using a completely open source kernel. Hopefully you're compiling every package yourself and not using pre-compiled binaries from your distribution's repositories.
Second, immutable distros are primarily a distribution and update mechanism that vastly improves on the current model of running X number of package updates and scriptlets on every machine and hoping it works. There's nothing stopping you from remounting a filesystem as rw, at least on any of the distributions that I know of. There are also plenty of stateful, rw holes for data and configuration on "immutable" distros.
wkat4242
I like the traditional package system; I don't like containerising everything (though I know that is not necessarily coupled with immutable distros), because then every package can have different library versions and the dynamic loader can't do its thing.
But it's more that I want to be able to adjust the configuration, or to recompile things. As a typical example, on Alpine I always need to recompile sudo because their standard version doesn't allow PAM, which I need. On an immutable system such tools would usually be in the immutable part.
I had problems with macOS when they switched to immutable (and if you turn off the protection, it turns off a whole load of other things too). If I so much as changed /etc/ssh/sshd_config, it would revert with updates.
And really the traditional package system works totally fine for me.
mazambazz
I think you're taking the term "immutable" too literally.
Immutable does not mean you cannot change it according to your wishes. It just means that each change must be explicitly declared in order to be included in the next system image.
In some ways, having a declarative, immutable distribution makes the process even easier, as is the case with NixOS. If you want to patch your sudo, it would be as easy as doing
    security.sudo.package = pkgs.sudo.overrideAttrs (old: {
      patches = (old.patches or [ ]) ++ [
        (pkgs.fetchpatch { url = "<url>"; sha256 = "<patch sha256sum>"; })
      ];
    });
And then you're done.
100% truth be told, having a declarative, immutable distro has allowed me to experiment with and configure my system way more than I would have otherwise. I mean, I can do anything, because I have the safety net of rolling back if I mess up.
Furthermore, being declarative means I know exactly how I got to my end solution, instead of having to memorize a bunch of steps from different attempts that may or may not have been successful.
wkat4242
Declarative is very different from immutable. They're two separate concepts.
See Apple's implementation, where the OS files are protected by signatures and the system won't boot if they're changed. Immutable does mean you can't change it, though in some cases you can enable and disable some parts. Nix is declarative and perhaps not immutable: it's advised not to mess with config files, but you still can if you wish; it's just a bad idea because they'll be overwritten.
I'm not really against declarative management, though I'd consider it more appropriate for servers, where I don't want to change stuff on the fly. On my workstation I don't want to do a complete rebuild every time I want to modify something. I also don't want to learn the complex syntax, so I've never really dived into Nix.
I like FreeBSD's compromise of having most configuration in one file but still a traditional system.
akikoo
He's talking about the management of his system, not the development of his system.
devops99
If you aren't already doing private CI/CD, effectively acting as "the OS developer", you don't have "full control" over your system.
akvadrako
I don't see how you are giving up any control. Look at how to make a custom ublue distro. It's basically just a docker build that results in an image. But you can do anything you want before that image is finalized.
0xDEAFBEAD
>it's only really making one step in the kill chain harder, which is establishing persistence
Yep. An attacker can just surreptitiously add a line to your .bashrc instead of modifying the base OS.
wkat4242
Indeed, though it won't give them root persistence but yes. It's a bit harder to weed out when it's hidden somewhere in the OS, but it's not a serious protection IMO, even if file signatures are validated on every boot like Apple does.
But they also use this to enforce DRM, for example if you turn off system integrity protection you can't run iOS apps anymore. This is exactly the kind of thing that bothers me about it.
ChocolateGod
> Indeed, though it won't give them root persistence but yes.
If you gain root by editing .bashrc to replace sudo, or by placing a file in .local/share/applications to replace an application the user trusts (like the Settings app) so that they give it their password, then you can just inject your payload into the initramfs and get persistence.
I don't believe any desktop distro is signing and verifying the initramfs.
wkat4242
Yeah I know, there's always options to get persistence.
And yeah the problem is that the initramfs is built on the machine itself. So it would have to have the signing keys which defeats the purpose.
Apple does sign the entire boot process but they have the benefit of a strictly defined hardware set to support.
ChocolateGod
> Apple does sign the entire boot process but they have the benefit of a strictly defined hardware set to support
Windows can also sign the entire boot process, but unlike Apple it can't make the system folders read-only, due to backwards compatibility (Windows 10 S experimented with this idea but was scrapped).
There's nothing stopping Linux distros from having a secure boot process, but the initramfs either has to be scrapped or pre-built by the distro.
0xDEAFBEAD
>And yeah the problem is that the initramfs is built on the machine itself. So it would have to have the signing keys which defeats the purpose.
Just brainstorming here.
What if the initramfs was rebuilt every time the OS was upgraded? During an OS upgrade, the user is asked to cold boot, the machine does a special boot, requests the user's disk decryption password, and uses it to build and sign the new initramfs, based on files signed by distro maintainers.
Then for every ordinary boot, immediately after disk decryption, we keep the disk decryption password in memory for just a little bit longer, and use it to check the signature on the initramfs before continuing with the boot.
The "signature" could be the secure hash of [the disk decryption password concatenated with the initramfs binary], or something (ask a crypto expert -- perhaps KDF+HMAC is better?)
I'm guessing the disk decryption password is much harder to steal than the user's root password?
(I might be totally out to lunch here, I know nothing about Linux boot. The above comment is written in the spirit of "learning about things by asking dumb questions"!)
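To make the brainstorm concrete, here's a toy sketch with openssl (the file paths are illustrative, and a real design would derive the key from the LUKS passphrase with a proper KDF rather than hashing it directly):

    # at upgrade time: store an HMAC of the new initramfs, keyed off the passphrase
    key=$(printf '%s' "$DISK_PASSPHRASE" | sha256sum | cut -d' ' -f1)
    openssl dgst -sha256 -hmac "$key" /boot/initramfs.img > /boot/initramfs.hmac
    # at boot time: recompute and compare before handing control to the initramfs
    openssl dgst -sha256 -hmac "$key" /boot/initramfs.img | cmp - /boot/initramfs.hmac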
AMD_DRIVERS
I run Fedora Kinoite full time on my primary machine, and it's great. Obviously a bit of a learning curve, but if your workflow can be achieved using Flatpaks and Toolbox, it's fine. You can (and I do) layer packages but I have only 3 or so I need to layer (asusctl, supergfxctl and asusctl-rog-gui).
My only real gripe is that Firefox still ships as an rpm in the base image. I understand that they want to include a working web browser at all costs, and I don't think they can distribute the Flatpak version with the base image, but it's annoying that I have to mess with the image (removing Firefox) to then re-install the (more up to date) Flatpak.
bogwog
And if you have an Nvidia card and want to use CUDA, Bazzite offers the same experience as Kinoite, but with Nvidia drivers preinstalled out of the box.
A CUDA dev environment is a 'toolbox create' away.
jillesvangurp
I've been on Manjaro (arch based) for the past four years. It's mostly been fine but I've had to recover it from a botched Grub update once (an update randomly self destructed its configuration), which wasn't fun. But after four years it's in good shape, everything works, I run the latest kernel, etc. I have zero reason to wipe its installation and reinstall it again. Most other Linux distributions never lasted four years until I found a need to reinstall them or install some newer version.
And it's Linux, so regardless of the distribution you'll be dealing with some amount of weird shit on a regular basis. That has been true since I cycled home with a stack of Slackware floppies almost thirty years ago. There's always configuration files to fiddle with, weird shit to install, etc.
But an immutable base OS makes a lot of sense and it's not mutually exclusive with that being updated regularly. Containerization is the norm for a lot of server side stuff. Effectively, I've been using immutable server operating systems for almost a decade. It's fine. All the stuff I care about runs in a container. And that container can run on anything that can run containers. Which is literally almost anything these days. I generally don't care much about the base OS aside from just running my containers hassle free on a server.
Containerization would make sense for a lot of end user software as well. IMHO things like flatpak and snap would be fine if they weren't so anal/flaky about "security", because they are protecting a mutable OS from the evil foreign software. Running a bit of software that needs a GPU isn't a security problem; it's the main FFing reason I'm using the computer at all, or own a GPU. This needs to be easy, not hard. And it shouldn't need a lot of manual overrides.
If I run a browser or things like Darktable, I usually have no reason to run them in crippled/unaccelerated mode. Sorry, that's not a thing. It's the main reason I bypass flatpak on Manjaro for both packages. And I bypass PAC as well, because I trust Firefox to have a good release process. So I use the tarball, and it self-updates without unnecessary delay. Which, considering a lot of its updates are about security, is exactly what I want.
Same with development tools. I use vs code and intellij. Both can self update. I have no need for a third party package manager second guessing those updates or dragging their heels getting those updates to me.
zelphirkalt
Your GNU/Linux distribution and its package manager act like a shield against unwanted updates. If you rely on the auto-updates of VS Code or IntelliJ, you open yourself up to immediate damage inflicted by them. No maintainer with any kind of idea or vision stands between you and whatever MS and other tech giants push onto you.
jillesvangurp
What I like about the notion of an immutable OS is getting package maintainers to do their thing before it reaches my laptop in immutable form. Just put it in the next version of the immutable image and I'll get that when I next reboot. All the stuff that just needs to work should be tested and integrated before it hits my laptop. And it being immutable means no package manager can break it.
For the stuff I care about and use every day, I like the direct connection to the developers. Mostly, repackaging adds very little value. If somebody finds a bug, they should be reporting it upstream, not providing some workaround. Most mature projects are pretty good about releasing, packaging and testing their software. The only reason Linux package managers exist is the gazillion ways there are to package things up for different distributions.
johnny22
I still use containers for all the stuff that is not yet suitable for Flatpaks (or perhaps never will be), via distrobox or toolbox, while leaving the host OS untouched.
fsflover
Did you consider Qubes OS? It's the same, except more secure/isolated and better UX than containers.
johnny22
I did not consider it at all. I'm already pushing the resource limits on my current machine as it is; adding VMs to the mix would kill it. Also, that security doesn't come for free: it makes things that are currently easy much more difficult, for security I don't personally concern myself with.
Modified3019
Have they made giving the GPU to things that need it easy yet?
Lariscus
I have been using Fedora Kinoite for a year now. It is finally the stable desktop Linux experience I was looking for. The limitations people constantly talk about really don't seem like a big deal to me. For everything not available as a Flatpak there is distrobox, and there is always layering as an escape hatch.
amluto
I’m in the process of installing Kinoite, and the installer is awful. There are two different manual partitioning tools and one automatic one, none of which work well at all.
For some reason, immutable Linux distros seem to struggle with the idea that a single physical disk might contain both volumes owned by the distribution and persistent volumes owned by the user that are not managed by the distro. Last time I checked, Talos was basically unusable on a single-disk system if you want persistent volumes.
Sadly, most M.2 NVMe devices don’t seem to support namespaces, which would otherwise be a decent way to kludge around this problem.
Lariscus
I don't understand half of the stuff you just wrote. I just selected the SSD in the installer and told it to install the OS. Why make things more difficult?
amluto
Because I want a data partition that I can keep if I decide to switch to a different distro. Or because I already have a data partition I want to keep. Or because I’m doing something that requires some space backed by a different filesystem.
Most old distros can do things like this with no particular difficulty. But the Kinoite installer (which is presumably the same as the Silverblue installer) is half-baked and buggy.
Lariscus
Have you submitted a bug report?
nilslindemann
I worked for a while with Silverblue; it is great, but they should use Distrobox instead of Toolbox. In Distrobox one can also encapsulate the home folder, and one can export a link to software running in a box to the outer system. The latter is pleasant, for example, with VS Code, which will only work properly when installed in a box.
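For anyone curious, the export flow is roughly this (the image and package names are illustrative):

    distrobox create --name dev --image archlinux:latest
    distrobox enter dev
    # inside the box: install the app, then export a launcher to the host menu
    sudo pacman -S code
    distrobox-export --app code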
kccqzy
I was about to ask why openSUSE Aeon exists when normal Tumbleweed supported an immutable mode where / is mounted read-only, and then I realized that they actually removed it in https://bugzilla.opensuse.org/show_bug.cgi?id=1221742
But I'll share my experience: I think an immutable / really is the way forward. Just the ability to roll back and boot using an older snapshot is great: I have had an update break the boot, but I have the option of running a single command to roll back while I investigate the issue. At the time the issue happened I was busy with life and I simply rolled back and used that version for three months before I had time to investigate.
Strictly speaking this does not require the current / to be mounted read-only; it merely requires that periodic bootable snapshots be taken and kept available for use as a read-only /.
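On openSUSE with btrfs and snapper, that single roll-back command is presumably something like this (boot the older snapshot from the GRUB menu first):

    # Make the currently booted snapshot the new default, then reboot:
    sudo snapper rollback
    sudo reboot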
cybercatgurrl
i don’t feel like immutable distros are ready for prime time because there are still some really big limitations with flatpaks that will take time to resolve. using a secondary drive on steam is still painful and works inconsistently. 1password can’t talk to firefox to unlock it. applications like steam can’t share rpc status with discord
bjoli
Yes. I am already running an Aeon desktop base system with GNU Guix for the userland.
It's great.
amelius
Is this how embedded folks make sure that a device starts with exactly the same installation every time a machine is booted?
I wonder why embedded products like Nvidia Jetson do not come with an immutable Linux (and instead are based on Ubuntu which updates itself on every opportunity via apt and snap and whatnot).
chrisdalke
There are lots of companies using NixOS for this, BalenaOS (Yocto + Docker), or building their own bespoke tooling on top of a minimal Linux setup.
Although many places start with Ubuntu or Debian, in my experience it's common to invest a lot of time and energy in getting out of that unmanaged setup once the company scales.
amelius
The hardware usually comes with vendor-specific libraries (e.g. cuda in the case of nvidia) which are based on a specific version of libc, so then you will have to build your entire alternative OS around that version also.
chrisdalke
Which is… never trivial. I’d say 25-50% of my career so far has been repeatedly “fixing” clunky deployments of ROS, OpenCV, L4T, CUDA, cudnn, libc, etc. in Docker and Nix. Fun stuff!
throwaway173738
It’s common for hardware vendors to provide a working system for demonstration purposes so you can evaluate the hardware without having to learn an immutable OS toolkit. Then when you pick hardware you also do the bring up work to get the kernel compiling from source and integrated with your userspace of choice. At that point you’ll switch to an immutable system.
Hardware vendors in this space can’t be trusted, so you need to make sure the board is actually fit for purpose. Outside of the hobbyist space you have to be really careful. There are often business objectives that rely on the board working a certain way.
amelius
This is nice in theory but the amount of vendor-specific libraries can be quite large (e.g. Nvidia's CUDA, libcudnn etc.), which you then have to get working on your new OS.
KyleSanderson
OpenWrt is pretty much the oldest one still running (and popular), with UCI. There are the classic nvram-based ones, but those are hardly manageable manually.
colordrops
What is UCI?
rollcat
https://openwrt.org/docs/guide-user/base-system/uci
Also the web UI counterpart, LuCI: https://openwrt.org/docs/guide-user/luci/luci.essentials
I've been running OpenWRT on my home router since ca 2017, and I found LuCI both quite intuitive, and immensely powerful. Simple things are simple, complex or difficult things are possible, with just clicking around.
Unfortunately if something can't be done with LuCI, you're pretty much on your own - the documentation for the internals is scarce and expects you to be an expert/developer.
fragmede
> The abbreviation UCI stands for Unified Configuration Interface, and is a system to centralize the configuration of OpenWrt services.
> UCI is the successor to the NVRAM-based configuration found in the White Russian series of OpenWrt.
evanjrowley
My first immutable distro was Illumos-based SmartOS. Everything the system needs is read from a read-only USB stick and run from RAM. I wish more distros worked that way. A recent submission on here gives me hope: https://news.ycombinator.com/item?id=42428722
I suppose TinyCore Linux in its default configuration also counts.
yjftsjthsd-h
The whole illumos family has degrees of this; SmartOS is of course full immutable with a ro OS that can be replaced to update, but even ex. OpenIndiana applies core OS updates to a clone of the root filesystem and you can always roll back to a snapshot.
cosmic_cheese
Are there any immutable distros that cleanly divide system/desktop and end-user programs, with only the former being immutable and the latter being business as usual for desktop Linux? So the kernel, drivers, and KDE/GNOME would fall into the immutable “core”, but apps like Firefox, Krita, and Anki would be in a mutable space managed by a traditional package manager like apt.
Just wondering because it’s really just the system itself and my desktop environment that I find the benefits of immutability most pertinent, whereas it’s something of a bad fit for applications with the woes flatpak and friends bring for desktop integration and such.
tcrenshaw
Silverblue also has really good distrobox integration. Anything not available via flatpak (or things I don't want via flatpak for whatever reason) goes in an arch or debian container. You can then export apps or binaries from the container and have it show up in your desktop menu or path.
Silverblue also supports package management via brew, which works pretty well for CLI utilities
zamalek
Silverblue and family are like that. The user bits are installed with flatpak.
einsteinx2
> Just wondering because it’s really just the system itself and my desktop environment that I find the benefits of immutability most pertinent, whereas it’s something of a bad fit for applications with the woes flatpak and friends bring for desktop integration and such.
irunmyownemail
I don't use Snap on my Ubuntu Desktop systems because I don't like apps secretly updating without my awareness and also for the immense amount of additional disk space used by Snap.
Having said that, no, I don't see any usage of immutable Linux in my future.
mkl
> I don't use Snap on my Ubuntu Desktop systems because I don't like apps secretly updating without my awareness
My experience with Snap is that it bugs me about Firefox updates multiple times a day for two weeks. Okay, it does then update automatically and break the running program, but I can't claim to be unaware.
Apt is the thing that updates packages completely without my awareness, with Unattended Upgrades. Mostly it works, but I have to blacklist NVidia utilities, as they need to be in sync with the driver in use.
rlpb
> I don't use Snap on my Ubuntu Desktop systems because I don't like apps secretly updating without my awareness
https://snapcraft.io/docs/managing-updates#p-32248-pause-or-...
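For reference, the short version of that page is something like this (needs a fairly recent snapd; flags from memory):

    # Hold automatic refreshes for all snaps indefinitely:
    sudo snap refresh --hold
    # Or hold a single snap for a fixed window:
    sudo snap refresh --hold=72h firefox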
irunmyownemail
Unfortunately that creates a choice between an app that updates in an aloof manner or allowing it to exist in an insecure, not updated state.
rlpb
What do you mean by "aloof manner"? As far as I'm aware, snaps' updating mechanism is quite reasonable and doesn't suffer from the many update related issues that apt/debs have, especially when users want packages not included by their distribution.
amelius
You can also block the updater's internet access by adding this to your /etc/hosts file:
127.0.0.1 api.snapcraft.io
And for other updates:
127.0.0.1 archive.ubuntu.com
127.0.0.1 security.ubuntu.com
127.0.0.1 mirrors.kernel.org
127.0.0.1 deb.debian.org
127.0.0.1 ppa.launchpad.net
127.0.0.1 flathub.org
127.0.0.1 dl.flathub.org
Use at your own risk of course.
setuid
Or you can just avoid hacking your hosts file and breaking other tools, and set your Snap and Apt proxy configuration to a non-existent value, or firewall their ability to reach those hosts.
Or configure them properly: disable auto-updates, configure unattended-upgrades appropriately for your needs, and only update your apt packages from a known, internal mirror endpoint that doesn't change until you point it to a new timestamp.
That's how it works in the real world, in production. It's not 1994, we don't hack hosts files anymore.
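For the apt side, the relevant knobs live in a small config file; a sketch (the "0" values disable the periodic jobs, set "1" to enable):

    # /etc/apt/apt.conf.d/20auto-upgrades
    APT::Periodic::Update-Package-Lists "0";
    APT::Periodic::Unattended-Upgrade "0";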
0xDEAFBEAD
>I don't like apps secretly updating without my awareness
Any particular reason?
eraser215
Flatpak doesn't auto update out of the box on any distro I have used.
nullify88
Configuration as code has come a long way too, along with these immutable OSes. For example, I do not miss messing with preseed or kickstart files (I preferred working with kickstart files). Ignition/Butane I find much easier to work with, and it is a core part of configuring the OS.
deknos
I like immutable distros. What I do not like is that developers and maintainers do not give admins and power users the ability to build an immutable core themselves. This removes choice and learning experiences for the customer/user/admin.
Maybe this will finally change.
cprecioso
For CoreOS, you can create immutable images as easily as you can create Docker containers: https://coreos.github.io/rpm-ostree/container/ You can later just point the installer at your OCI image and it will just work.
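A minimal sketch of what that looks like (base tag and package are just examples):

    # Containerfile: derive a custom immutable OS image from Fedora CoreOS
    FROM quay.io/fedora/fedora-coreos:stable
    RUN rpm-ostree install vim-enhanced && \
        ostree container commit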
pimeys
Could the Universal Blue image builder solve this for you?
lrvick
If you also need determinism and full source bootstrapping (you care about supply chain security) check out https://codeberg.org/stagex/stagex
phendrenad2
I don't really understand the exact problem that immutable distros solve. Seems like it's some vague "instability" in normal distros?
> An immutable Linux distribution has its core system locked as read-only. This ensures the base operating system remains untouched during normal use, protecting it from accidental changes, unauthorized modifications and corruption.
So, in other words, I'm using an immutable system already! (Windows 11)
cosmic_cheese
The places where immutability is a benefit for most people are protecting against cases where the package manager gets confused and screws things up (as famously happened to Linus of LTT years ago when installing Steam on Mint rendered the system unbootable) and for the ability to cleanly roll back the system when an update does something like break video or networking drivers (surprisingly common with some hardware).
pxc
> as famously happened to Linus of LTT years ago when installing Steam on Mint rendered the system unbootable
The system booted fine! It just didn't have a graphical desktop environment installed anymore. But it was up and running, not crashing or anything like that! It was no more 'unbootable' for lacking a GUI than the server hosting this website is. :)
But yeah rollbacks are a great way to handle situations like that, so it's a great feature for a package manager to have.
kissgyorgy
Not sure how NixOS didn't make it to the list.
jmclnx
They are not for me, but I am glad they exist.
udev4096
A new breed of distros for sure but how immutable is it, really? What I'm interested in knowing is the mechanisms and techniques in place for making sure no one can change any core components of the system. It's just like randomness. At first, it sounds super secure but we all know nothing is truly random
linsomniac
Around 2000 I made a firewall-oriented Linux distro that made use of immutable bits and SELinux and various other security hardening. The bulk of the filesystem was immutable, and the system was then put into multi-user mode, where the kernel enforced that the filesystem couldn't go back to mutable.
During boot time, a directory was checked for update packages, and if the public key signature of the package matched, the updates would be applied before the filesystem went into immutable mode. This update directory was one of the few mutable directories on the system.
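Not their actual code, of course, but on kernels of that era the general mechanism looked something like this (the bounding-set mask is illustrative; check capability.h for the real bit positions):

    # Mark the core filesystem immutable (ext2 attribute):
    chattr -R +i /bin /sbin /lib /usr
    # Drop CAP_LINUX_IMMUTABLE from the capability bounding set so that
    # even root can no longer clear the +i flags (2.2/2.4-era sysctl):
    echo 0xFFFFFDFF > /proc/sys/kernel/cap-bound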
Fnoord
Back around that time I remember running such a firewall OS on a floppy disk. You would set the floppy readonly, and you could update the floppy by taking it out. It ran entirely in RAM. I forgot the name, it was either Linux 2.0.x or 2.2.x. I don't even remember if settings were kept after reboot. I installed it for a friend of a friend in his student apartment.
Years later, I gave a daughter of a friend of my mother my old PC. It would boot up a Linux live CD. That, too, is immutable, and you'd update it by burning a new live CD.
But how did we arrive at this? Well, computers had all services enabled for some reason (not with the big bad internet in mind, but the LAN). And updates were distributed via CDs or other media. Some airgapped environments are still going to work akin to that. Now, if devices are connected to the internet, they have to be updated, because security vulnerabilities will inevitably be discovered.
tcrenshaw
I don't think most immutable distros are designed to prevent users from mounting the root filesystem as read write. They're instead designed around delivering a core system that's guaranteed to work
TacticalCoder
> I don't think most immutable distros are designed to prevent users from mounting the root filesystem as read write.
Someone mentioned running Puppy Linux from a CD/DVD (write once).
I do wonder: it'd probably be possible for me to boot a Linux distro from a DVD and then launch Proxmox and my VMs/containers automatically. I take it I'd have to burn a new DVD every time a security patch affecting programs installed on the bare system comes out.
The "main" OS would be hard to compromise in a persistent way, as you cannot remount a write-once DVD read-write.
dagmx
I’m honestly surprised immutable distros are so controversial. I get why people choose not to use them, but I don’t know why I see so much hate towards them in a lot of Linux communities.
SteamOS is immutable and incredibly successful. macOS (not Linux of course) is also immutable and very successful.
As long as the OSes have a concept of overlays, an immutable system rarely gives up much in the way of flexibility either.
bayindirh
macOS hides its immutability pretty well, with fine grained image immutability and keeping the behavior mostly unchanged.
Immutable Linux distros, esp. NixOS, pull all kinds of shenanigans (ELF patching to begin with) to achieve what they want, with complete disregard for how they change the behavior and structure of the system. What you get is something which resembles a Linux distro, but with monkey patches everywhere and a thousand paper cuts.
When a Linux distro can become transparently immutable, then we can talk about end user adoption en masse. Other than that, immutable distros are just glorified containers for the cloud and enthusiast applications, from my perspective.
dagmx
That’s fair. I agree that the ergonomics of the immutability matter, but I think that’s true of any aspect of a distro.
I think there’s been well done immutable systems and it’s something that can be achieved with a mainstream Linux distro.
bayindirh
> but I think that’s true of any aspect of a distro.
That's true, but these problems have been worked on for quite a long time, and the core ethos of a Linux distribution is being able to be on both sides of the fence (i.e. as a user and as an administrator who can do anything).
For example, in macOS, you haven't been able to customize the core of the operating system for ages; it's now sent to you as a binary delta image for the OS, and you have no chance to build these layers or tweak them. This increases the ergonomics a ton, because you're not allowed to touch that part to begin with.
However, with Linux, you need to be able to change anything and everything on the immutable part, and this requires a new philosophy and set of tools. Adding esoteric targets like "I really need to be able to install two slightly different compilations of the same library at the exact same version" creates hard problems.
When these needs meet the mentality of "this is old, let's bring it down with sledgehammers; they didn't know anything, they're old and wrinkly people", we get reinventions of wheels and returns to the tried and working mechanisms (e.g.: oh, dynamic linking is a neat idea, maybe we should try that!).
Immutable systems are valuable, but we need less hype and more sane and down-to-earth development. I believe if someone can sit down and design something immutable with consideration to how a POSIX system works and what is reasonable and what's not, a good immutable system can be built. Yes it won't be able to do that one weird trick, but it'd work for 99% of the scenarios where an immutable system would make sense.
dagmx
I very much disagree with this sentence
> However, with Linux, you need to be able to change anything and everything on the immutable part, and this requires a new philosophy and set of tools.
Taking macOS as a North Star for a successful immutable OS, most people don’t need to be, and shouldn’t be, touching the immutable parts. I know the assumption is that you need to do that on Linux, but I don’t think a successful distro should require most casual users to go anywhere near that level of access.
If they do for some reason on Linux specifically, why would overlays not be sufficient? They achieve the same results, with minor overhead and significant reliability gains.
> I believe if someone can sit down and design something immutable with consideration to how a POSIX system works and what is reasonable and what's not, a good immutable system can be built.
But someone has done that. macOS is POSIX. SteamOS just works for most people.
bayindirh
I think there are a couple of problems with taking macOS as the so-called North Star of the immutable OSes.
First of all, macOS doesn't have package management for the core OS. You can't update anything brought in as part of the OS, from libraries to utilities like zsh, perl, even cp. .pkg files bring applications into mutable parts of the operating system and can be removed. However, OS updates are always brought in as images and applied as deltas.
In Linux, you need a way to modify this immutable state atomically, package by package, file by file. That'd work if you "unseal, update, seal" while using the ext4 FS. If you want revisions, you need the heavier btrfs instead, which is not actually designed for single-disk systems to begin with.
You can also use overlays, but considering the lifetime of a normal Linux installation is closer to a decade (for Debian, it can even be eternal), overlays will consume tons of space in the long run. So you need to be able to flatten the disk at some point.
On the other hand, NixOS and Guix are obsessed with reproducibility (which is not wrong, but they're not the only and true ways to achieve that), and make things much more complicated. I have no experience with RPM-OSTree approach.
So, if you ask me we need another, simpler approach to immutable Linux systems, which doesn't monkey patch tons of things to make things work.
> But someone has done that. macOS is POSIX.
Yes, but it's shipped as a set in stone monolithic item, with binary delta updates which transform it to another set in stone item. You can't customize the core OS. Can you? Even SIP sits below you, and has the capability to stop you with an "Access denied." error even if you're root on that system. It's a vertically integrated silicon to OS level monolithic structure.
> SteamOS just works for most people.
SteamOS is as general purpose as a PlayStation OS or car entertainment system. It's just an "installable embedded OS". Their requirements are different. Building and maintaining a mutable distro is a nightmare fuel already, making it immutable yet user customizable is a challenge on another level.
It's not impossible, it's valuable, but we're not there yet. We didn't find the solutions, heck even can't agree on requirements and opinions yet.
dagmx
Even though macOS won’t let you replace the actual binaries that ship with the OS, you can still replace the binaries that get resolved when they’re called via brew/nix/macports etc.
I again disagree with your assertions that you need to replace the OS contents. You just need to be able to reflow them to other ones. That’s the macOS way, the flatpak way etc..
I think the issue is that you are claiming that you MUST be able to do these things on Linux, and I’d push back and say no you do not.
And your comparison of SteamOS to a console or in-car entertainment is flat-out incorrect. Have you never booted it into desktop mode? For 90% of users, what are you expecting they need to do in that mode that it’s not “general purpose” enough to do?
Yes, it’s not impossible. It’s been done multiple times , successfully as well.
imcritic
SteamOS is not a general purpose OS, yet you mention it as if it is one.
macOS is not immutable at all.
TheCapeGreek
For most users, including many developers, SteamOS absolutely can be general purpose.
There are two main "daily driver" usability issues on SteamOS by default if you need to do technical work:
- Limited software availability via the flatpak repositories.
- Not being able to install certain programs as easily without needing containerisation of some kind (if that even solves the problem in some cases).
Distrobox solves a good amount of both issues on SteamOS, for coding work at least. Slap a virtual Ubuntu on and you're off to the races.
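Concretely, the "slap a virtual Ubuntu on" step is about two commands (image tag is just an example):

    # Create an Ubuntu box and drop into it (no changes to the host):
    distrobox create --image ubuntu:24.04 --name dev
    distrobox enter dev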
dagmx
How would you define an immutable distro that would exclude macOS with SIP?
And steamOS is totally a general purpose OS, it’s just got a non-general purpose frontend it defaults to.
Figs
> SteamOS is not a general purpose OS
Uh... yeah it is. Have you ever switched it into desktop mode? I haven't pushed my Steam Deck as hard as my daily driver Linux system, but I've done all sorts of fun things on it like run a web server and write new Python scripts directly on the device. You can hook up a keyboard and mouse and monitor and use it like any other desktop Linux environment. It's basically just an Arch distro with KDE and some extra stuff on top to make it easy for people to run games.
fragmede
For a couple of years now, macOS has had an RO system volume image, and then mounts an RW data volume on top of that, similar to how overlayfs works on Linux. That system volume isn't modifiable; if it is modified, the system won't boot. So I'd say it's a little bit immutable.
https://eclecticlight.co/2021/10/29/how-macos-is-more-reliab...
freeone3000
/ being mounted ro with /etc and /home being mutable... kind of ruins the point? like you can still mutate whatever, you just have to install it as a user, and if you have overlays then what gain is there?
dagmx
The advantage is you always have a core system that you can revert into by removing any problematic overlays, and you can always quickly verify that a system is in a verified and unmodified state.
This is how macOS works with SIP, and how it handles rapid response updates for example.
It greatly reduces the ability for user space to compromise the system.
freeone3000
Ah, so malware can no longer manipulate boot state, just steal all of your passwords and credit cards and cryptocurrency and make user-level persistent processes.
dagmx
If that’s your view, then why have any security at all?
Why not just go back to the days when every process could access each others memory?
Let’s just let every process run as root too while we’re at it, right?
Heck, you can still crash a car, so why wear seatbelts or have airbags?
This is the problem with strawman arguments like yours. They’re not rooted in reason and extrapolate infinitely.
Imperfect safety is better than no safety. More safety is better than less safety.
It also completely ignores that you can have different approaches to security for different parts of any system. You don’t just have single silver bullets
TacticalCoder
> Ah, so malware can no longer manipulate boot state,
Which is an immense benefit.
> just steal all of your passwords
2FA, often now thanks to an HSM (Hardware Security Module) shielding your secrets precisely should your account be compromised (YubiKey, passkeys, ...)
> and credit cards
2FA. My credit card companies (EU) ask me to sign, on a physical hardware device the bank gave me, any transaction I make with my credit card when it's either above a certain amount or to an unknown vendor (or both).
> and cryptocurrency
2FA. Cryptocurrency hardware wallets use an HSM which shields the secret from attackers.
> and make user-level persistent processes.
Which you can detect from root, but only as long as root ain't compromised too.
A local exploit which can be detected and patched is bad, but it's not anywhere near as bad as a root exploit, which could potentially control the entire boot chain (maybe not SecureBoot if it's set up properly) and lie to you about everything.
Put it another way: it's precisely because a local exploit is not a root exploit that a system can be configured in such a way that should a local exploit happen, the system can make sure that that local exploit doesn't get to stay persistent.
A non root exploit cannot lie to root, which is why there's a distinction between a local exploit and a root one.
Now we begin to have the possibility to boot a minimal immutable Linux distro (maybe even from a read-only medium like a DVD [1]), maybe from a UKI and a signature enforced by SecureBoot, and from that minimal immutable system, maybe launch something like a VM and/or containers (I prefer my containers to run inside VMs but YMMV).
For example we can begin to envision the following:
SecureBoot -> signed UKI -> Proxmox -> VM -> stateless containers
I am very excited that this now begins to be possible.
Don't you see any value in that?
I don't run an immutable distro yet but I already have throwaway user accounts, mounted on temporary and "noexec" mountpoints.
If you tell me: "Here's a system where it's guaranteed a malware can never ever manipulate boot state", I'll manage to find a way to build a system on top of that where a local exploit cannot possibly persist.
Immutable distros are working towards that goal.
And I definitely see where the value is.
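For the "signed UKI" link in that chain, the tooling already exists today; a hedged sketch using systemd's ukify and sbsigntools (all paths and key names are placeholders):

    # Build a unified kernel image (kernel + initrd + cmdline in one EFI binary):
    ukify build --linux=/boot/vmlinuz --initrd=/boot/initrd.img \
        --cmdline="root=LABEL=root ro" --output=uki.efi
    # Sign it with your enrolled SecureBoot key:
    sbsign --key db.key --cert db.crt --output uki.signed.efi uki.efi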
dartharva
For a long time Windows has dominated PCs, and all the software that runs on it comes packaged in its own little installer, with its own little updater to manage versions, leaving users free to not care and focus on just using it. For us old desktop users, Linux's "enforced" centralization of software packaging and distribution is just too divergent a concept to get immediately used to. Immutable distros take the restrictions even further, and they make one think you might as well just have an Android/Chromebook at that point.
I switched to Ubuntu after witnessing Windows 11 and am seeing there's now yet another confusing delivery channel (snap) added on top of what was already an overcomplicated system (apt). At least it still allows single installers (.deb files) so that works for now.
_-_-__-_-_-
I agree, in a sense. Windows has had mostly .exe (.msi) packaged installers that run their own install scripts to place files and modify the system to run the software. The software vendors all used the same format. Installing software was a practised exercise of double-clicking on an installer.exe file and then hitting next a bunch of times.
OSX (before the App Store model) had .dmg disk images and installer files, where you drop the entire application into a folder and that .app file contained everything the application needed to run. Easy enough to install applications.
But what happened when you wanted to update a program or application? You went and manually downloaded the new version, or found the update button in the context menu and updated. This meant that many users would not update very often, because the process was manual and because you were never sure what would work, or whether the new version would refuse to work with your hardware or OS version. Even with an app store, you still need to open the store app and click update, or look through menus to find which software packages need updating.
For a long time, as both a Windows and OSX user, I saw the benefits of both approaches. Now, as a Linux user, I can update my entire system with one bash alias: not just my chosen installed applications, but also my operating system files, my flatpak apps or containers, and even my firmware (fwupd). It has changed the paradigm of computing for me. It makes me feel like a superhero. I don't have to worry about an update breaking my system. Better still, I don't have to manually update my operating system separately from each application I've chosen to install. I can do it all from the terminal, and I can see all of the changes my system will make. It has been a great experience. Do things sometimes break? Yep. But they broke on the other OSes too. I take it as a learning experience.
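Such an alias might look like this on a Debian-family system (hypothetical; swap in your own package manager):

    # ~/.bashrc: update system packages, flatpaks, and firmware in one go
    alias up='sudo apt update && sudo apt full-upgrade -y \
        && flatpak update -y \
        && sudo fwupdmgr refresh && sudo fwupdmgr update'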
Maybe I've just talked myself into trying an immutable OS.
dagmx
There’s a few inaccuracies in your points.
1. You can have apps on both windows and Mac auto update from their respective stores.
2. Macs still use DMG and PKG installers, for stuff outside the App Store.
fragmede
I'm curious which part of apt you see as being overcomplicated.
1oooqooq
because it's silly. period.
yeah, it has lots of advantages, and that's why it was the default decades ago for everything (windows, bsd, etc).
then people had lots of trouble installing different software or patching security issues. so we invented package managers and took all the time in the world to make the base as small as possible.
its advantages still make sense in some places, like modems with old flash memory. openwrt is a static base with overlays. but again, it still carries the same downsides; it just makes sense there because of the particular constraints of the hardware.
it would make sense for tech-illiterate end users (hence android, ios, chrome os, wii, macos to a degree, etc) and containers (which already have infinite ways to convert from package to static). but anything else will literally harm the distro's ability to evolve and adapt to software changes. imagine every change like systemd or a new browser or wm being atomic.
now people forgot decades of history. and it's so tiring.
Timon3
I don't think I understand any of your objections.
When was Windows ever immutable in the sense of current immutable Linux distros? I wasn't able to find any reference to this ever being the case.
What do package managers and making the base as small as possible have to do with immutable distros? Package managers still exist, and the base is pretty much the same size as the non-immutable version of the same distro.
Why do immutable distros make more sense on modems with old flash memory?
How does being immutable harm the distro's ability to evolve?
Either I'm not understanding your position at all, or you have a very different understanding of "immutable" than I do (after using Kinoite as my daily driver for a year).
dagmx
Can you share an example of when Windows was immutable? I’ve been using windows since 3.1, and I can’t recall a Windows version where I wasn’t able to muck around in the system itself. Closed source != immutable.
On an unrelated note, I despise the constant insinuation that using Linux is an indicator of intelligence whereas users on other systems are tech illiterate.
Actually, I believe your entire argument contradicts itself multiple times because you give examples on both sides, and don’t stop to reconcile your views.
talldayo
> but anything else will literally harm the distro's ability to evolve and adapt to software changes.
Comparing nixpkgs to the AUR seems to reflect the opposite trend. Arch is hamstrung by a dependence on outdated and poorly maintained repos that cannot coexist with up-to-date packages. Unless you fully embrace static linking or atomic installs, you'll end up with breakage.
MacOS went the static linking route, and Windows wrote back-compat for most old software by hand. The "decades of history" hasn't proven any approach right or wrong. They're all flawed.
dagmx
macOS does not go the statically linked route. Apple encourage developers towards dynamic libraries (usually in the form of frameworks) in most scenarios.
Modified3019
I went to check and see if proxmox had any immutability proposed for it yet, and I came across this: https://github.com/ashos/ashos#proxmox
I’m not quite sure what’s going on here yet, but seems interesting
tmtvl
I've been meaning to fully commit to GNU Guix one of these days, now that Plasma has fully landed. I've tried Fedora Kinoite in the past, but I can't handle Plasma without Oxygen. I know that Kinoite has some kind of a way to force packages to be installed into the base system, but it kinda feels like it defeats the purpose.
dismalaf
I've been using immutable distros for a couple of years now (Silverblue earlier, openSUSE MicroOS/Aeon now), and all I can say is they're much, much better than "normal" distros.
Containerized apps are nice, containers for development is nice, but you can have those in "normal" distros with a lil work setting things up.
The real killer feature is you can have a bleeding edge system with zero fear of breakage.
maztaim
Somehow I managed to answer no to all the questions…
RalfWausE
I have been running EndlessOS for a while now and I love it: it's a bit like going back to the home computer days, when the OS resided in ROM and you didn't really have to care.
johanneskanybal
10k lines; I quick-browsed the top 500. Meanwhile, in 2024, the year of reducing costs, all anyone cares about is arm64, not distro flavor.
sys_64738
This just sounds like a problem solved a long time ago in the embedded space by using squashfs for the bootable Linux image.
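For the unfamiliar, that embedded pattern is roughly (zstd compression assumes a recent squashfs-tools):

    # Pack a root filesystem into a compressed, read-only image:
    mksquashfs rootfs/ rootfs.squashfs -comp zstd
    # At boot, the initramfs mounts it read-only:
    mount -t squashfs -o loop,ro rootfs.squashfs /sysroot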
xorcist
The term "stability" should not be used outside of the major Linux distributions such as Debian and Fedora. For a distribution to be stable over the long term it needs a large enough community, a stable governance model, and a reasonable build system where one maintainer cannot take unilateral action without it being discovered.
A cute name and a university student somewhere do not constitute stability, no matter how good the intentions. It's not a bad thing, but you have to know what you're getting yourself into. Most of the distributions listed in the article belong to the latter category.
Immutable systems are great for embedded, network equipment, appliances and industrial applications, and specialized distributions for those applications have largely been immutable for a long time already. Nobody really wants an immutable system for their main desktop, because working is all about mutating state. You may write documents, save bookmarks, install plugins, or try new software. Those are the things truly immutable systems like kiosks want to disallow.
So in order to make for usable software these desktops generally split your system into a mutable user part and an immutable system part. That's basically how unix-like desktops have worked since forever. Stuff in /bin and /sbin is only changed by the package manager. So the fit is quite good, but it also means it really isn't as useful as it's made out to be. That's why most people don't use them.
The use case is mostly for rolling back updates, not really running from readonly filesystems or preventing change in other ways, but most distributions already do that. You can roll back updates with both dnf and apt. It's not perfect and doesn't always work, but mainly from a lack of testing. With snapshots it's pretty much infallible though.
My recommendation if you really want something that "just works" is to install one of the major and time tested distributions. Pick Debian if you don't know what to choose. And then learn how to use it. Anything these tiny experimental distributions offer, such as running off read only filesystems or rebuilding it for your brand of cpu, or testing a new desktop environment, is likely possible in Debian too. With the added benefit of it being around in 20 years. And the core distribution is less likely to break in some way because some maintainer found inspiration for something. As long as you don't run untrusted stuff as root, stay out of the system files, and generally let the package manager do its job, you're going to be fine.
What I would like to see a desktop distribution work on is basically the same things as 20 years ago which still isn't really done outside some exploratory work (probably because it's actually hard):
- Packages on a user level where it is easy to install new stuff without touching the system area. More tricky in practice than in theory because of state changes to configuration files, saved file formats etc. But some should be easier than others.
- Desktop software service accounts, just like we do for server software. Mostly relevant for larger packages such as Firefox, Libre Office, movie players.
- Integration with popular third party package managers from the language ecosystems. Most language packages are anemic. All the powers that a package manager gives, reporting, listing untracked files, listing changes, rolling back updates, should be available for them by integrating directly with them. Package definitions should be able to be imported without manual work.
- Package managers should have at least some knowledge of an application's access patterns to help with application confinement. Still today, things like SELinux policies are packaged as separate entities and managed with external tools, which brings a lot of complexity since all possible configurations must be supported there. A package manager knows more about the system and could handle these files. Confining desktop software is a usability problem more than a technical one, but it is clear that desktop environments need something to build on to make it practical.
talldayo
> The term "stability" should not be used outside of the major Linux distributions such as Debian and Fedora.
RHEL peeks from behind some furniture
We all believe foolish things at some point in our life...
okasaki
You don't need a whole new distro
$ apt search btrfs apt
Sorting... Done
Full Text Search... Done

apt-btrfs-snapshot/noble,noble 3.5.7 all
  Automatically create snapshot on apt operations
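Rolling back then looks roughly like this (subcommands from memory; check the man page, and <snapshot-name> is whatever `list` prints):

    $ sudo apt-btrfs-snapshot list
    $ sudo apt-btrfs-snapshot set-default <snapshot-name>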
There are a lot of comments in here about desktops, but IMO why even discuss Linux on the desktop… 99.9999% of Linux deployments are not Arch installs on old Thinkpads. Immutable distros *are* becoming a de-facto standard for server deployments, IoT devices, etc. They improve security, enable easy rollbacks, and give systems/hardware developers a single, non-moving target to validate against…
There’s also been a ton of very advanced development in the space. You can now take bootable containers and use them to reimage machines and perform upgrades. Extend your operating system using a Dockerfile as you would your app images:
https://github.com/containers/bootc
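A minimal sketch of that workflow (base image tag and package are just examples):

    # Containerfile: a bootable OS image, built and shipped like an app image
    FROM quay.io/fedora/fedora-bootc:41
    RUN dnf -y install cockpit && systemctl enable cockpit.socket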