

For anyone too lazy to follow two links, here’s the demo: https://joelseverin.github.io/linux-wasm/




The file that’s in use may not survive.
Yes, but at least the rest of the filesystem will


I’m not too versed in the intricacies of Windows, but I don’t think that’s the case on Linux at least.
There’s a difference between telling the processes to “fuck off” (by using umount -f) and actually yanking the drive.
umount -f will at least flush the caches to the drive, including all filesystem metadata and the journal, while just yanking the drive out definitely will not, and if you’re unlucky you can ruin the FS (especially if it’s not a journaling one). I’ve lost data like that before and have been using umount -f ever since.
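For reference, a minimal sketch of the “flush first, then force” approach; the /mnt/usb mount point is a placeholder, substitute your own:

```shell
# Flush dirty pages for all filesystems first, then force-unmount.
sync
if mountpoint -q /mnt/usb; then
    fuser -vm /mnt/usb   # show which processes still hold the mount open
    umount -f /mnt/usb   # force the unmount even if the device is busy
fi
```

The fuser step is optional but tells you who to blame before you pull the plug.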


I don’t fear for my career because of AI. I fear for it because of kids like this one.
Kudos to him! I definitely couldn’t maintain something so big and complex at 10. Heck, not sure I’d be able to pull it off now!


To this day I still plug in my wh-mx10004 … (or whatever their stupid number is, because Sony thinks a ten-digit alphanumeric code is the catchiest name for their products…) because every time they connect over BT they will ONLY do the “handsfree” codec, y’know, the one that sounds like shit, meant for phone calls. I have tried everything… pavucontrol, pipewire, wireplumber, blueman, CLI system-level stuff, and yes, I can force it to proper high-def audio after some really annoying steps… but then I’ll start up a game or something and it suddenly goes “nope! This calls for handset audio!” and switches itself back.
If you’re talking about WH-1000XM4, they work for me. Sometimes on first connect they only have mSBC codec for me too, but if I just disconnect/reconnect them then all other codecs appear. If I switch to SBC-XQ or LDAC they then work fine until I turn them off (which can be hours and many different playback streams). I’m on pipewire+pipewire-pulse.
I’ve heard about Linux being highly customizable and decentralized OS, and suddenly I can’t define my own shortcuts because there is a list of un-features?
You can customize it to do whatever you want. Heck, you can write your own terminal emulator that does exactly what you need. But some things can be harder to do than others and require skills and experience. Once someone implements those harder things, they become a “feature”. Before then, therefore, they are an “un-feature”. See https://xkcd.com/1349/
E.g. it is probably possible to set up your shell to use shift-selection for the command you’re currently editing, but shift-selection for the output of a previous command will require terminal support. You will have to make sure that the two don’t interfere with each other, which can be quite complicated.
I already have my workflow and I’m trying to transfer it to Linux
Linux is a different OS that, by default, does things differently from others. You can configure it to emulate some other UX, but it won’t necessarily be easy. In the meantime, you can install the micro editor, set EDITOR=micro, and then Ctrl+x Ctrl+e in bash to edit the command in a more familiar setting.
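If you want to try that, a two-line sketch for ~/.bashrc (micro is just the suggestion above; any editor works here):

```shell
# ~/.bashrc: editor opened by Ctrl+x Ctrl+e in bash (and used by many other tools)
export EDITOR=micro
export VISUAL=micro   # some programs consult VISUAL instead of EDITOR
```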
I prefer compose keys because they are easier to remember.
Oh, also, I think GTK apps have that Ctrl+Shift+U thing which lets you enter characters by code. Never really got used to it, though.
I think this is one of many un-features in Linux world where
As such, for you individually, I suggest just getting more comfortable with tmux (or zellij) and vi-like keybinds for text manipulation. Once you learn those (and set your readline mode to vi), you won’t look back. Oh, also, try Ctrl+x Ctrl+e in bash, it might help.
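Setting readline to vi mode is a one-liner; this affects bash and anything else that uses readline:

```
# ~/.inputrc
set editing-mode vi
```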
Or switch to emacs and using it as a terminal emulator. In that case you will have to learn and use emacs keybindings, but the selection semantics in the “terminal” will be the same as in the “editor”.


I assume many people just live in a sanitized, sterile internet created by Google/Meta et al. They might have never encountered the gooner/pervert culture before. Again, when most people see “cameo” their mind doesn’t jump to “fetish porn cameo”. As such, I don’t think there was real consent here.
Apart from the other comment: rootless podman is easier to set up (there’s basically no setup needed).


She consented to something but didn’t consider/understand what that something implies. While it might be obvious for terminally online people, most people don’t expect “cameos” to necessarily mean “fetish porn cameos”.
Also check out Nix. It’s a pretty weird package manager but has a lot of packages. Also allows you to configure your entire macOS install declaratively with a text-based config file, if you’re into that.
Are there any risks or disadvantages to building software from source, compared to installing a package?
Well, compiling from source is the “installing dodgy freeware .exe” of the Linux world. You have to trust whoever is distributing that particular version of the source code, and ideally vet it yourself. When installing a binary package from your distro’s repositories, presumably someone else has already done that vetting for you. Another risk is that you end up running the project’s build scripts before you can even run the application itself, which is extra attack surface.
Can it mess with my system in any way?
Yeah, unless you take precautions and compile in a container or at least a sandbox, the build scripts have complete unadulterated access to your user account, which is pretty much game over if they turn out to be malicious (see: https://xkcd.com/1200). Hopefully most FOSS software is not malicious, but it’s still a risk.
If you “install” the software on your system, it also becomes difficult to uninstall or update, because those files are no longer managed from any centralized location.
I recommend using a source-based package manager and packaging your software with it (typically no harder than just building from source) to mitigate all of those, since source-based PMs will typically sandbox the build and keep track of the installed files for you.
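As a hedged sketch of the sandboxing idea (the gcc:13 image and the ./configure && make recipe are assumptions, not a specific recommendation), a rootless container build could look like:

```shell
# Helper that builds the sources in the current directory inside a throwaway
# rootless container, so the build scripts never see your home directory.
build_in_container() {
    podman run --rm \
        -v "$PWD:/src" -w /src \
        docker.io/library/gcc:13 \
        sh -c './configure && make'
}
```

Run `build_in_container` from the unpacked source directory; build artifacts land back in it through the bind mount, but nothing outside it gets touched.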
All x86_64 CPUs support a certain “base” set of instructions. But most of them also support some additional instruction sets: SIMD (single instruction multiple data - operations on vectors and matrices), crypto (encryption/hashing), virtualization (for running VMs), etc. Each of those instructions replaces dozens or hundreds of “base” instructions, speeding certain specific operations dramatically.
When compiling source code into binary form (which is basically a bunch of CPU instructions plus extra fluff), you have to choose which instructions to use for certain operations. E.g. if you want to multiply a vector by a matrix (which is a very common operation in like a dozen branches of computer science), you can either do the multiplication one operation at a time (almost as you would when doing it by hand), or just call a single instruction which “just does it” in hardware.
The problem is choosing which instruction sets to use. If you use none, the resulting binary will be dogshit slow (by modern standards). If you use all of them, it will likely not run at all on most CPUs, because few CPUs support every exotic instruction set. There are workarounds; the main one is shipping two versions of your code, one which uses the extensions and one which doesn’t, and choosing between them at runtime by detecting whether the CPU supports the extension. This roughly doubles your binary size and has other drawbacks too.
So, in most cases, it falls on whoever packages the software for your distro to choose which instruction sets to use. Typically the packager will be conservative so that the binary runs on most CPUs, at the expense of some slowdown. But when you, the user, compile the source code yourself, you can just tell the compiler to use whatever instruction sets your CPU supports, to get the fastest possible binary (which might not run on other computers).
In the past this all was very important because many SIMD extensions weren’t as common as they are today, and most distros didn’t enable them when compiling. But nowadays the instruction sets on most CPUs are mostly similar with minor exceptions, and so distro packagers enable most of them, and the benefits you get when compiling yourself are minor. Expect a speed improvement in the range of 0%-5%, with 0% being the most common outcome for most software.
TL;DR it used to matter a lot in the past, today it’s not worth bothering unless you are compiling everything anyways for other reasons.
Well, I still hear it a lot. Would be great to replace it with some other greeting that made more literal sense.
Personally I’m more partial to nom. Serde is quite verbose and complex for a parser.
I’ll be real with you. What you need to do is, whenever you’re faced with a task that sounds like it needs the CLI, search Stack Overflow for that task. There will probably be something at least slightly relevant to what you need; take the commands from that answer and read their manuals.
In the old days, your OS would come with a paper manual describing all the commands in great detail. Nowadays the OS is so complex that you can’t be expected (and don’t really need to) know all the commands that are there. But getting one of those old UNIX/early Linux manuals and reading through it would be a great start.
aerc