

A few months ago I accidentally dd’d ~3GiB to the beginning of one of the drives in a 4 drive array… That was fun to rebuild.
It seems a lot of new developers want to do some things differently; old guard devs can either make some compromises, or accept that fewer new devs will want to be part of upstream.
Dunno man, when what the dev of 30+ years said was more or less “fuck off”, it seems that advice was in fact heeded.
It’s a chicken-and-egg problem; manufacturers aren’t going to care to upstream drivers if not enough of their users are on Linux, which slows support for new hardware. It’s much better than it was, but still ongoing.
AMD’s 7000-series amdgpu driver was busted in several ways for like a year post-launch, and is still missing tunables for many GPU features.
Manufacturers are capable of making out-of-tree and unfree modules, but honestly I prefer the slow progress if it means most driver work stays in-tree.
I actually think this is more an attempt to exploit Trump’s worldview; he’s well-known to view inter-state relationships as purely transactional, and from that lens it seems like a good deal.
Thing is, depending on how the war goes, either Russia or the US will take everything they possibly can from Ukraine. It may well be that offering Trump something the US was probably going to try to take anyway is just about the smartest way to turn somebody initially hostile to continued aid into someone personally invested in the outcome.
Either Linus or Greg K-H, likely after feedback from many others.
We should be looking at his given reasons, not making assumptions based on some ineffable set of considerations that he might have.
Christoph’s given reason of complexity is sensible, but it’s also one that was already weighed when allowing R4L in the first place; adding Rust language support has been deemed worth the additional complexity.
~/.config is probably a poor comparison on my part; its management is actually done by home-manager rather than NixOS proper, and I can’t think of another OS that fills this same role.
NixOS generates (for example) /etc/systemd/network to a path in /nix/store and symlinks it to its appropriate locations. After the files are generated, the appropriate /nix/store paths are made read-only by default (re-mounted? over-mounted? I’m not sure of the implementation), but anything that isn’t generated is absolutely both mutable and untracked, and that “not tracking everything in /etc” is more what I’m going on about.
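A minimal sketch of that mechanism, using the stock systemd-networkd options (the interface name is made up):

```nix
# Declaring a network like this makes NixOS render the unit file into
# /nix/store and symlink it under /etc, roughly:
#   /etc/systemd/network/40-lan.network -> /etc/static/... -> /nix/store/...
{
  systemd.network.enable = true;
  systemd.network.networks."40-lan" = {
    matchConfig.Name = "enp3s0";   # made-up interface name
    networkConfig.DHCP = "ipv4";
  };
}
```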
If you use NixOS as intended (when you find that a package is lacking a config option you want, create your own Nix option internally; see the sketch below), the distro is effectively immutable. But if you use NixOS for anything moderately complex that changes frequently, e.g. a desktop OS, you eventually run into a choice: become competent enough to basically be a nixpkgs contributor, or abandon absolute immutability.
I think the first option is worth it, and did go down that route, but it is unreasonable to expect the average Linux consumer to do so, and so something like Fedora Atomic is going to remain more “immutable” for them than NixOS.
This need to git gud is thankfully lessening with every commit to nixpkgs, and most people can already get to most places without writing their own set of Nix options or learning how to translate some random markup language into Nix, but you’ll eventually run into the barrier.
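As a hedged sketch of what that “write your own option” route tends to look like (the module, the my.fooTool namespace, and the key=value config format are all made up for illustration):

```nix
# A tiny custom module: expose a typed option, render it to a file in
# /nix/store, and let NixOS link it into /etc like any other managed file.
{ lib, config, ... }:
let
  cfg = config.my.fooTool;   # "my.fooTool" is a made-up namespace
in
{
  options.my.fooTool.settings = lib.mkOption {
    type = lib.types.attrsOf lib.types.str;
    default = { };
    description = "Key=value settings rendered into fooTool's config file.";
  };

  config.environment.etc."fooTool.conf".text =
    lib.concatStringsSep "\n"
      (lib.mapAttrsToList (k: v: "${k}=${v}") cfg.settings);
}
```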
I’d argue it’s closer to a mutable distro than an immutable one.
NixOS tends to lean on the term reproducible instead of immutable, because you can have settings (e.g. files in /etc & ~/.config) changed outside of Nix’s purview; they just won’t be reproducible and may be overwritten by Nix.
You can build an ‘immutable’ environment on Nix, but rather than storing changes as transactions like rpm-ostree, it’ll write a new path in /nix/store and symlink it into place. Sure, you can store the internal representation of those changes in a git repo, but that is not the same thing as the changes themselves; if the nixpkgs implementation of a config option changes, the translation on your machine does too.
Is that why they prevented it from being open sourced? I thought I read a while back that they just wanted to keep the code in-house.
It definitely has its roots in Debian, but when you need to use that weird closed-source application for work, if it has a “supported” (for a given value of support) Linux distro it’ll be Ubuntu.
I prefer straight Debian myself, or something entirely different, but when asked for a recommendation by friends it’s Ubuntu.
Cost to manufacture is not more than wages, but the cost to purchase a good is always more than the total cost of the labour needed to produce it, so long as profit exists: every input is ultimately somebody’s labour plus somebody’s margin, and those margins stack on top of the total labour cost.
The money isn’t free so much as redistributed from taxation elsewhere; think of it as the same as subsidising industry, except giving it only to the workers of that industry (instead of giving it to owners and expecting the savings to trickle downwards). You could also consider it an income tax rebate with more fine-grained control over who gets it.
It doesn’t seem like a particularly ground-breaking concept; I see the value in investing money into necessary but unprofitable industry, though my concern is that if you subsidise the wages of a business with a profit incentive, management may lower wages to compensate and capture the subsidy.
I disagree about rejecting funding from intelligence agencies. I hate the concept of their existence, as well as what orgs like the CIA have done (and continue to do), but given the fact of their existence they do have legitimate reasons to fund Signal (in this case I mean reasons that align with Signal’s current goals, rather than reasons meant to change them), and if that results in funding secure software, all the better.
I used ZFS with Arch for a while; the volume manager was what I’d call the largest benefit, and in my opinion nothing else comes close to being as useful and well integrated.
I stopped because ZFS incompatibility with recent kernels (which I needed for GPU reasons) meant I had to rescue my system more often than was ideal.
Some other minor downsides:
In addition to the downsides mentioned here about privacy regarding Google, there is a major upside to using this service: it offloads all of the authentication logic to Google, so in theory it reduces your risk surface area; it may be more accurate to say it concentrates your risk in your Google account.
You’d like to hope most websites follow common security best practices and keep on top of things, but the number of sites I had accounts on (many long forgotten) that have been pwned over the years tells me otherwise. Using Google auth makes those accounts exactly as secure as your Google account.
What you’re after, transparent wifi roaming, is actually mostly handled by the client; what you need is wifi access points that don’t get in the way.
I don’t have much experience with newer OpenWRT-supporting products, but the kicker is you only need one of them; if you have multiple routers, they’ll require some setup to play nice with each other. An access point just provides wifi: it can be hooked up to serve whatever the one router manages, and access points are generally cheaper than routers.
To that end, I’d suggest a single router and multiple access points. I do this with Ubiquiti access points in my home; their PoE has been nice and they’ve been pretty “set up once and forget” for a few years now. I’m sure there are some other brands that’ll do well; Ruckus and Mikrotik come to mind.
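If you happen to have a NixOS box around, the Ubiquiti controller side is nearly a one-liner (a sketch assuming the stock services.unifi module; you still adopt the APs through its web UI):

```nix
# Minimal sketch: run the UniFi controller as a NixOS service.
{
  services.unifi.enable = true;
  services.unifi.openFirewall = true;  # let the APs reach the controller
  nixpkgs.config.allowUnfree = true;   # the unifi package is unfree
}
```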
Man, if I was concerned about sinking the time into making a configuration for the compositor with a bus factor of one man-child and a toxic community, I can’t imagine anybody investing the time to make a compositor wanting to hitch themselves to that cart.
The compositor is really solid and makes for a great user experience, but I’ll be fucked if every word vaxry writes doesn’t make me want to move to sway or niri.
Nixpkgs has more and newer packages than the AUR.
The initial time to get shit done is longer; you can’t simply make install, but honestly you shouldn’t have been doing that on Arch anyway.
Making your own derivation is harder than making your own PKGBUILD, but it should be considered in those terms: you’re not just shoving some binary into /usr/bin for it to explode later when glibc updates.
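For a rough illustration of what you get for that effort, a hypothetical derivation for a prebuilt binary (the name, URL, and hash are all placeholders):

```nix
# Hypothetical derivation for a prebuilt binary. autoPatchelfHook re-links
# it against libraries from /nix/store, so a later glibc update on the
# host can't pull the rug out from under it.
{ stdenv, fetchurl, autoPatchelfHook }:

stdenv.mkDerivation {
  pname = "some-tool";   # placeholder name
  version = "1.0";
  src = fetchurl {
    url = "https://example.com/some-tool-1.0.tar.gz";  # placeholder
    hash = "sha256-0000000000000000000000000000000000000000000=";  # placeholder
  };
  nativeBuildInputs = [ autoPatchelfHook ];
  # real run-time dependencies would go in buildInputs so autoPatchelf
  # can find and pin them
  installPhase = ''
    install -Dm755 some-tool $out/bin/some-tool
  '';
}
```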
When things fuck up, reverting to your previous config is at worst a reboot away.
I have much less time than I used to, so moving from Arch to NixOS has saved me the time otherwise wasted in an arch-chroot trying to fix issues like the kernel upgrading past what zfs-dkms supports.
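For what it’s worth, the idiom that handles this on NixOS (assuming the stock nixpkgs options; latestCompatibleLinuxPackages may be renamed or deprecated on newer channels) looks something like:

```nix
# Build the ZFS module in lockstep with the kernel, and pin the kernel to
# the newest release ZFS actually supports, so an upgrade can't outrun it.
{ config, ... }:
{
  boot.supportedFilesystems = [ "zfs" ];
  boot.kernelPackages =
    config.boot.zfs.package.latestCompatibleLinuxPackages;
}
```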
If you’re using specialised proprietary tools, working them in with Nix can be an absolute nightmare, but I use a Debian container for them.
While I think the cynicism is well-earned, we should pay attention to when we’re proven wrong and highlight when companies do something right. Bitwarden’s fuck-up gave them an opportunity to signal that they’re not intending to build a wall for their garden, and they took it.
I wish.
It was a bcachefs array with data replicas set to a mix of 1, 2 & 4 depending on what was most important, but thankfully I had the foresight to set metadata to be mirrored across all 4 drives.
I didn’t get the good fortune of only having to do a resilver, but all I really had to do was run fsck to remove references to non-existent nodes until the filesystem would mount read-only, then back it up and rebuild it.
NixOS did save my bacon re: being able to get back to work on the same system by morning.