

They had the “Steam Machine”, but effectively nobody bought it. Maybe now with the Deck people would be more open to it, who knows.


Yes, thank you! I knew there was something like it on the *nix side, but the only thing that was coming to mind was overlayfs, which ain’t it.


Yeah, junctions would be most similar to a mount point. Though you can also mount one directory under another, so in that case it’s more like a directory hardlink.
And symlinks were actually introduced in Vista, but for some reason you needed to be an Admin to create one. Win10 relaxed that, but still gates unprivileged symlink creation behind “developer mode”, which is strange.
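For what it’s worth, Node’s fs API exposes these through one call, which makes the distinction easy to poke at. A quick sketch (made-up example paths; the “dir” case will throw EPERM unless you’re elevated or have developer mode on):

```ts
// Sketch of the Windows link types via Node's fs.symlinkSync (made-up paths).
import { symlinkSync } from "node:fs";

// Junction: directory-only, local volumes only, no special privilege needed.
symlinkSync("C:\\Data\\Projects", "C:\\Users\\me\\Projects", "junction");

// Directory symlink: can also point at relative paths or network shares,
// but creating it needs Admin rights or Win10+ developer mode.
symlinkSync("..\\Data\\Projects", "C:\\Users\\me\\ProjectsLink", "dir");
```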


It was Apple. Or rather, regulators and partnering companies leaning on Apple to manage the content on their app store better, including the content that you could find via those apps.
You could say something about how the app stores are a monopoly power, about the chilling effect these wide-ranging and heavy-handed content policies have, and about why the open web (and web apps) is a better option. But we handed the web over to Google anyway, so it’s not that much better.


We’re as close to quantum computers as we are to ChatGPT becoming sentient.


They do use stuff like that though, things like avalanche diodes warmed by the core’s heat to make the output even more unpredictable.
But sometimes things don’t work the way they’re supposed to.


There must be a half dozen cheap ways to generate true random numbers.
The problem isn’t generating random data, it’s ensuring it’s “high quality” (it’s all statistical checks; you can’t know ahead of time what random numbers should look like, otherwise they wouldn’t be random).
That’s the problem the AMD chips seem to have: that check is failing and letting through low-quality data it should otherwise reject.
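As a rough illustration of what those checks look like in spirit, here’s a toy “monobit” frequency test; real hardware RNGs run continuous batteries of tests along these lines (this is nowhere near the full SP 800-90B treatment, and the threshold is made up):

```ts
// Toy "monobit" health test: count ones vs. zeros in a sample and reject it if
// the balance drifts too far from 50/50. A stuck or heavily biased source fails
// instantly; genuinely random data almost never does at this sample size.
function monobitLooksRandom(sample: Uint8Array, maxBias = 0.01): boolean {
  let ones = 0;
  for (const byte of sample) {
    let b = byte;
    while (b) { ones += b & 1; b >>= 1; } // popcount of one byte
  }
  const bias = Math.abs(ones / (sample.length * 8) - 0.5);
  return bias <= maxBias;
}

// A source stuck outputting 0xFF fails; a decent source passes.
console.log(monobitLooksRandom(new Uint8Array(4096).fill(0xff)));                           // false
console.log(monobitLooksRandom(new Uint8Array(4096).map(() => (Math.random() * 256) | 0))); // true (almost always)
```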


Because it’s not about the files anymore, it’s the free space on the disk you care about (or rather, the filesystem metadata describing it: the free-space bitmap, in the case of exFAT).
If the files are highly fragmented and spread out, then the empty space around them is also broken up and spread around, which makes it harder for the filesystem to store new data efficiently, since it now has to break new files up and pack the pieces into the gaps.
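A toy version of what the allocator is up against: the same amount of free space is far less useful when it’s scattered (this is just an illustrative bitmap scan, not exFAT’s actual on-disk layout):

```ts
// Toy free-space bitmap: true = free cluster, false = in use. An allocator
// wanting to place a file contiguously cares about the longest run of free
// clusters, not the total count.
function longestFreeRun(bitmap: boolean[]): number {
  let best = 0;
  let current = 0;
  for (const free of bitmap) {
    current = free ? current + 1 : 0;
    best = Math.max(best, current);
  }
  return best;
}

const tidy       = [false, false, true, true, true, true, true, true, false, false];
const fragmented = [true, false, true, false, true, false, true, false, true, true];

// Both bitmaps have 6 free clusters, but only one can hold a 6-cluster file in one piece.
console.log(longestFreeRun(tidy));       // 6
console.log(longestFreeRun(fragmented)); // 2
```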


The calculator leaked 32GB of RAM because the system has 32GB of RAM. Memory leaks are uncontrollable and expand to take whatever space they’re given; if the system had 16MB of RAM, then that’s all it’d be able to take before crashing.
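The shape of it in miniature (a deliberately silly sketch, not the calculator’s actual bug):

```ts
// The classic leak shape: state that only ever grows and is never read again or
// released. Nothing bounds it except the machine's RAM, so it balloons to 32GB
// on a 32GB box and would simply crash much earlier on a smaller one.
const history: string[] = [];

setInterval(() => {
  // Imagine "log every UI event forever" with no cap and no eviction.
  history.push(`event at ${Date.now()} `.repeat(1000));
}, 1);
```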
Abstractions can be super powerful, but you need an understanding of why you’re using the abstraction vs. what it’s abstracting. It feels like a lot of them are being used simply to check off a list of buzzwords.


And here, they are donating to a project by DHH because they like the project.
Said project is an Arch installer with some extra packages thrown in by default, not exactly groundbreaking stuff.


Ehh, bots have always presented nonsense UAs to servers. And since modern browsers hard-code the OS version in the UA string, pretending to be an old browser on an old OS could be a (probably ineffectual) way to bypass fingerprinting.


Anything that polls location data can record it and sell it; there are probably more apps that sell it than don’t.


Funny thing is, it was actually the device they connected that was faulty; the build of Windows they were using just didn’t handle that failure condition at the time.
MS at least learnt that lesson (for the most part): actually test things first.


JXL is two separate image formats stuck together: an improved version of JPEG that can also losslessly and reversibly recode most existing JPEG images at a smaller size, and a PNG-like format (evolved from FLIF/FUIF) that can do lossless or lossy encoding.
“VarDCT” (the improved JPEG) turns out to be good enough that “Modular” mode (the FLIF/FUIF-like one) isn’t needed much outside of lossless encoding. One neat feature of Modular mode, though, is that it progressively encodes the image at different sizes: if you decode the stream as you read in bytes, you start with a small version of the image and get progressively larger output until you reach the original.
Why is that useful? Well, you can encode a single high-DPI image (e.g. 2x scale), and clients on 1x-scale monitors can just stop decoding at a certain point and get a half-sized image out of it. You don’t need separate per-DPI variants.
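The “just stop early” logic, as a toy sketch (this isn’t libjxl’s API, just the shape of the idea, with made-up pass sizes):

```ts
// Toy model of resolution-progressive decoding (not real libjxl code). Treat the
// stream as passes that roughly double the output width each time; a client
// stops reading once the decoded size covers what its display actually needs.
function passesToDecode(passWidths: number[], neededWidth: number): number {
  for (let i = 0; i < passWidths.length; i++) {
    if (passWidths[i] >= neededWidth) {
      return i + 1; // stop here; the remaining bytes never even need downloading
    }
  }
  return passWidths.length; // full resolution needs the whole stream
}

// A 3000px-wide "2x" asset, viewed by a client that only needs ~1500px:
console.log(passesToDecode([375, 750, 1500, 3000], 1500)); // 3 of 4 passes
console.log(passesToDecode([375, 750, 1500, 3000], 3000)); // all 4 passes
```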


iirc the main reason for QOI was to have a simple format, because “complexity is slow”; by stripping out things the author didn’t consider important, the resulting format was supposed to be quicker and smaller than something like PNG or WebP.
Not sure how well that held up in practice: a lot of that complexity is actually necessary for a lot of use cases (e.g. you need colour profiles unless you’re only ever dealing with sRGB), and I remember a bunch of low-hanging-fruit optimisations for PNG encoders at the time that improved encoding speed by quite a bit.
AVIF is funny because they kept the worst aspects of WebP (lossy video-based encoding) while removing the best (lossless mode). There was an attempt at WebP2, using AV1 and a proper lossless mode, but Google killed that off as well.
But hey, now that they’re releasing AV2 soon, we’ll eventually have an incompatible AVIF2 to deal with. Good thing they didn’t support JPEG-XL, it’d just be too confusing to have to deal with multiple formats.


Lossless is fine, lossy is worse than JPEG.


That’d just be overall worse: it’d never be smaller than a comparable JPEG image, and it wouldn’t allow for any compression/quality benefits.


Yep, their frontend used a shared caller that would return the parsed JSON response if the request was successful, and an error otherwise. And then the code that called it would use the returned object directly.
So I assume most of the backend did actually surface error codes via the HTTP layer, and it was just this one endpoint that didn’t (which then broke the client-side code when it tried to access non-existent properties of the response object); otherwise basic testing would have caught it.
That’s also another reason to use the HTTP codes: by storing the error in the response body, you now need extra code between the function doing the API call and the function handling a successful result, just to examine the body and see whether there was actually an error, all based on an ad-hoc per-endpoint format.
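The pattern looks roughly like this (a fetch-based sketch with made-up endpoint and type names, standing in for whatever their shared caller actually was):

```ts
// Shared caller that trusts the HTTP status: every endpoint that reports errors
// via status codes gets handled in this one place.
async function callApi<T>(path: string): Promise<T> {
  const resp = await fetch(path);
  if (!resp.ok) {
    throw new Error(`API error ${resp.status} for ${path}`);
  }
  return resp.json() as Promise<T>;
}

// The one endpoint that returns 200 plus { "error": "..." } sails straight past
// the check above, and the caller then reads properties that don't exist:
interface Widget { id: string; name: string }            // made-up shape

const widget = await callApi<Widget>("/api/widget/42");  // made-up endpoint
console.log(widget.name.toUpperCase());                  // TypeError at runtime
```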


Funny thing is that it does (winget), but it’s a terminal app. Windows users who look down on Linux users for “needing” to use a terminal don’t want to bring it up, so Linux users also aren’t aware of it and never point to it as a counter example.