  • Yep, their frontend used a shared caller that would return the parsed JSON response if the request was successful, and raise an error otherwise. And then the code that called it would use the returned object directly.

    So I assume that most of the backend did actually surface error codes via the HTTP layer, and it was just this one endpoint that didn’t (which then broke the client-side code when it tried to access non-existent properties of the response object), because otherwise basic testing would have caught it.

    That’s also another reason to use the HTTP codes: by storing the error in the response body, you now need extra code between the function doing the API call and the function handling a successful result, just to examine the body and see whether there was actually an error, all based on an ad-hoc per-endpoint format.
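
    For illustration, a minimal sketch of that kind of shared caller done the status-code way (fetch-based; the endpoint and type names are made up): the transport-level check lives in one place, and callers either get parsed data or a thrown error, never a maybe-error object.

    ```typescript
    // Hypothetical shared caller: success vs failure is decided in one place,
    // purely from the HTTP status code.
    async function apiCall<T>(url: string): Promise<T> {
      const response = await fetch(url);
      if (!response.ok) {
        // Covers every 4xx/5xx uniformly; no per-endpoint body format to inspect.
        throw new Error(`${response.status} ${response.statusText} for ${url}`);
      }
      return (await response.json()) as T;
    }

    // Callers then use the result directly, like their frontend did:
    interface Showtime { film: string; starts: string }
    async function loadShowtimes(): Promise<void> {
      const showtimes = await apiCall<Showtime[]>("/api/showtimes");
      console.log(showtimes.map((s) => s.film));
    }
    ```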


  • Ehh, that really feels like “but other people do it wrong too” to me; half the 4xx error codes are application-layer errors, for example (404 ain’t a transport-layer error, and neither are 403, 415, 422 or 451).

    It also complicates actually processing the request, as you’ve got to duplicate error handling between “request failed” and “request succeeded but actually failed”. My local cinema actually hits that bug: their web frontend expects the backend to return errors, but the backend lies and says everything was successful, and then certain things break in the UI.
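
    Sketching that duplication (the endpoint and field names are made up), every call site grows two distinct failure branches:

    ```typescript
    // With errors smuggled inside 200 responses, each caller needs two checks:
    async function loadBookings(): Promise<unknown> {
      const response = await fetch("/api/bookings");
      if (!response.ok) {
        // 1. "request failed": the ordinary HTTP error path.
        throw new Error(`HTTP ${response.status}`);
      }
      const body = await response.json();
      if (body.error) {
        // 2. "request succeeded but actually failed": an ad-hoc, per-endpoint
        // convention. Skip this check (as the cinema's frontend seems to) and
        // you end up reading properties that don't exist.
        throw new Error(body.error.message);
      }
      return body;
    }
    ```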


  • Well no, the HTTP error codes are about the entire request, not just whether the header part was received and processed correctly.

    Take HTTP 403: HTTP only has a basic form of authentication built in; anything else needs the server to handle it externally (e.g. via session cookies). It wouldn’t make sense to send “HTTP 200” in response to trying to access a resource without being logged in just because the request was well formed.
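
    A bare-bones sketch of that (hypothetical cookie and session names, using Node’s built-in http module): the authentication itself lives outside HTTP, but the outcome is still reported through the status code.

    ```typescript
    import * as http from "node:http";

    // Hypothetical session store; real code would back this with a DB or cache.
    const validSessions = new Set(["s3cr3t-session-id"]);

    http.createServer((req, res) => {
      // Crude parse of a session id out of the Cookie header.
      const session = /session=([^;]+)/.exec(req.headers.cookie ?? "")?.[1];
      if (!session || !validSessions.has(session)) {
        // The request was perfectly well formed; the *user* isn't allowed.
        res.writeHead(403, { "Content-Type": "text/plain" });
        res.end("Forbidden");
        return;
      }
      res.writeHead(200, { "Content-Type": "text/plain" });
      res.end("Your account page");
    }).listen(8080);
    ```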


  • > One of the main things, I think, is that how memory is laid out is different somehow? So every memory access needs extra clock cycles to accomplish in standard arm64.

    It’s down to “memory ordering”: as different cores interact with RAM, there are rules governing how each core sees changes made by the others. ARM’s model is “weak”, so it relies on developers to be explicit about the sharing (with barriers and atomic operations), while x86’s “Total Store Order” is considered “strong” and relies on the hardware to disentangle it all, so software can make assumptions and play fast and loose.

    You can do software emulation of strong memory ordering on a weak system, but it’s slow. What Apple did was provide a hardware implementation of strong ordering in their ARM chips, and Rosetta enables that when running x86 code, so users don’t encounter that slowdown.
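
    The classic way to see the difference is the message-passing litmus test. A minimal sketch with Node worker threads (assuming CommonJS output so `__filename` resolves): one thread publishes data and then raises a flag, the other waits on the flag and reads the data.

    ```typescript
    import { Worker, isMainThread, workerData } from "node:worker_threads";

    if (isMainThread) {
      const sab = new SharedArrayBuffer(8);
      const mem = new Int32Array(sab); // mem[0] = data, mem[1] = ready flag
      new Worker(__filename, { workerData: sab });
      // The "weak system" discipline: be explicit. Atomics guarantees the
      // data store becomes visible before the flag store.
      Atomics.store(mem, 0, 42);
      Atomics.store(mem, 1, 1);
      // The plain-store version (`mem[0] = 42; mem[1] = 1;`) is what x86's
      // TSO keeps in order for free, but a weakly ordered CPU may not.
    } else {
      const mem = new Int32Array(workerData as SharedArrayBuffer);
      while (Atomics.load(mem, 1) === 0) {} // spin until the flag is raised
      // With the Atomics stores above this always prints 42. With plain
      // stores and loads, ARM is allowed to make the flag visible before
      // the data, i.e. print 0.
      console.log(Atomics.load(mem, 0));
    }
    ```

    Rosetta gets to skip inserting those barriers into every translated load and store; it just flips the core into the TSO mode instead.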


  • It was an issue for a long time that browsers just ignored the caching headers on content delivered over HTTPS, a baked-in assumption that it must be private, per-user content. That’s not the case now, so sites have to explicitly mark those pages as uncacheable (I think Steam got hit by something like this not that long ago: a proxy was serving up other people’s user pages it had cached).

    But for something like Google Fonts, the whole point was that one site could embed a large font family, and then every other site that also used it would simply share that first cached copy, saving the bandwidth and amortizing the initial cost across all the sharing sites. Except that no longer holds: browsers now partition their caches by top-level site (precisely so a shared cache can’t be used to track you across sites), so instead of dividing the download cost by the number of sites using a font, every site’s visitors fetch their own copy. So while a CDN might put the content physically closer to the users, it doesn’t actually save any bandwidth (and depending on how it’s configured, it can actually slow page loads down).
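
    To make the first point concrete, that behaviour is driven by per-response headers (a sketch using Node’s built-in http module, with made-up routes): user pages get explicitly marked uncacheable, while shared assets opt in to long-lived caching that the partitioned browser cache now only reuses within a single top-level site.

    ```typescript
    import * as http from "node:http";

    http.createServer((req, res) => {
      if (req.url?.startsWith("/account")) {
        // Per-user content: forbid any cache or proxy from storing it, so no
        // Steam-style serving of someone else's cached account page.
        res.setHeader("Cache-Control", "private, no-store");
        res.end("Your account page");
      } else if (req.url?.startsWith("/fonts/")) {
        // Shared static asset: cache as long as possible. But with the cache
        // partitioned per top-level site, this copy is only reused by *this*
        // site's pages, not by every other site embedding the same font.
        res.setHeader("Cache-Control", "public, max-age=31536000, immutable");
        res.end("...font bytes...");
      } else {
        res.end("ok");
      }
    }).listen(8080);
    ```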