• rottingleaf@lemmy.world

    Fucking simple concept which major businesses are economically compelled to gaslight you out of.

    So the problem is in economics.

    Each such business provides all of its own infrastructure, which is expensive, good, and well-maintained (Google even has its own Internet cables), and that infrastructure is not separated from its application services.

    So one infrastructure provider (in the wide sense, solving all the problems) usually serves both many users of its own application and many application providers (I'm inventing terms here) that have no infrastructure of their own.

    Meanwhile, a user of an application generally can't switch infrastructure providers at will. Doing so is technically fine and normal (there are NTP server pools, and in the olden days one could search many FTP servers for a needed file, and so on), but it doesn't happen IRL, because there's no standard way to pool resources and track them, and there are no applications using such a thing.

    So, the data model: cryptographic global person identities, plus posts globally identified by some derived hash (a post is, say, datetime, author, some tags, content, a hash of it all, and signatures, I dunno). Creating a group, voting, changing privileges, or a moderation action can be a post too. To form a representation for the user, a group is "replayed" in the right order, so you know which user had the privilege to, say, moderate posts at any given point; the group owner identity can also generate group snapshots from time to time while replaying, to make this faster. This data model is orthogonal to the service model. That's important so that it would also fit alternative service models, like sneakernet, an offline-enabled mesh, anything delay-tolerant, or at least a p2p Kademlia DHT-based service model.
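    Here is a rough Python sketch of what a hash-identified post and a group "replay" could look like. All field names, the action names, and the placeholder signature handling are my own assumptions for illustration, not an existing protocol:

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass
class Post:
    """A self-contained object identified globally by its hash. Every group
    mutation (a message, a vote, a privilege grant, a moderation action)
    is just another post."""
    author: str           # the author's cryptographic identity, e.g. a public key
    created_at: str       # ISO 8601 datetime
    tags: list
    content: dict         # arbitrary payload: message text, "grant_moderator", ...
    signature: str = ""   # placeholder; a real system would sign the hash below

    def object_hash(self) -> str:
        # The global identifier is derived from a canonical serialization.
        payload = json.dumps(
            {"author": self.author, "created_at": self.created_at,
             "tags": self.tags, "content": self.content},
            sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

def replay_group(posts: list, owner: str) -> dict:
    """Replay a group's posts in creation order to derive the current state:
    who may moderate, which posts are hidden, and so on."""
    state = {"moderators": {owner}, "hidden": set()}
    for post in sorted(posts, key=lambda p: p.created_at):
        action = post.content.get("action")
        if action == "grant_moderator" and post.author in state["moderators"]:
            state["moderators"].add(post.content["subject"])
        elif action == "hide_post" and post.author in state["moderators"]:
            state["hidden"].add(post.content["target_hash"])
    # A snapshot post signed by the group owner could stand in for everything
    # replayed before it, which is the speed-up mentioned above.
    return state
```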

    The service model: the core of it all is a tracker service. It works like a tracker in BitTorrent (or maybe Hotline, but that's old), except with signed announces, and it tracks search, storage, relay, and maybe even computation services (which announce themselves to it). A search service gets storage services from trackers and indexes their contents to allow searching by tags (one could even announce objects to a search service directly, similarly to trackers, which might be better). A storage service just stores objects and yields them. A relay service is harder: you, the user, must somehow announce (to trackers too?) which relay service you are registered on at the moment, a bit like SIP, or like SMTP but very temporary, so that messages addressed to you reach you through that relay.
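    A minimal sketch of the tracker side, assuming a simple dict-based announce format; the service kinds and fields are illustrative, and the signature check is a stub where real public-key verification would go:

```python
import time

def verify_announce(announce: dict) -> bool:
    # Stub: stands in for verifying announce["signature"] against the
    # announcing identity's public key.
    return "identity" in announce and "signature" in announce

class Tracker:
    """Registry of services that have announced themselves, analogous to a
    BitTorrent tracker, but with signed announces and more service kinds
    (storage, search, relay, maybe computation)."""
    def __init__(self):
        self.services = {}    # kind -> list of accepted announces

    def announce(self, announce: dict) -> None:
        if not verify_announce(announce):
            return
        self.services.setdefault(announce["kind"], []).append(
            {**announce, "seen_at": time.time()})

    def lookup(self, kind: str) -> list:
        """Return the currently known services of one kind, e.g. 'storage'."""
        return self.services.get(kind, [])

# Example: a storage service announces itself, a client later looks it up.
tracker = Tracker()
tracker.announce({"kind": "storage", "endpoint": "https://storage.example/objects",
                  "identity": "pubkey...", "signature": "sig..."})
print(tracker.lookup("storage"))
```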

    The client would just ask a bunch of trackers for everything it needs: it searches for services, then requests those services and merges their results. Forming a group representation is "searching for stuff" too, followed by fetching the objects referenced in the index service responses from a bunch of storage services. To notify another user that you've sent them a message, you can use a relay service.
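    Continuing the sketches above (reusing the Tracker and replay_group stand-ins), the client side could look roughly like this; query_search and fetch_object are hypothetical placeholders for the actual service calls:

```python
def discover(trackers: list, kind: str) -> list:
    """Ask every known tracker for services of one kind and merge the results,
    deduplicating by endpoint."""
    found = {}
    for tracker in trackers:
        for service in tracker.lookup(kind):
            found[service["endpoint"]] = service
    return list(found.values())

def query_search(search_service: dict, tag: str) -> list:
    # Placeholder for an HTTP/RPC call to a search (index) service;
    # returns the hashes of matching objects.
    return search_service.get("fake_index", {}).get(tag, [])

def fetch_object(storage_services: list, object_hash: str):
    # Placeholder: try each storage service until one yields the object.
    for storage in storage_services:
        obj = storage.get("fake_store", {}).get(object_hash)
        if obj is not None:
            return obj
    return None

def load_group(trackers: list, group_tag: str, owner: str) -> dict:
    """Form a group representation: search for the group's posts, fetch the
    referenced objects from storage services, then replay them."""
    search_services = discover(trackers, "search")
    storage_services = discover(trackers, "storage")
    hashes = set()
    for search in search_services:
        hashes.update(query_search(search, group_tag))
    posts = [fetch_object(storage_services, h) for h in hashes]
    return replay_group([p for p in posts if p is not None], owner)
```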

    I think it's easy to see that this is all kinda primitive, other than requiring proper cryptography. And it would be a global system working over the Internet (except, well, it doesn't exist). It's similar to NOSTR, but I think better, due to the separation of the data model from the service model.

    The advantages of this: one can still build any kind of application on such common infrastructure, but it might hurt the resource-based feudalism we have now; it could survive that, similar to how BitTorrent keeps working despite quite a few people not liking it.

    The disadvantages: well, stuff will get lost. And there are paid BT trackers but no paid BT peers, while in such a system paid storage and other paid services would be a thing (still much better than Facebook).