• interdimensionalmeme@lemmy.ml · 7 months ago

    We are in the age of the toy internet, and it is all about to crumple like a house of cards bought on cheap credit and unviable business models. YouTube is not long for this world, and nobody will miss it. The only question is how much of it Archive Team can save before it goes up in flames. The good parts are easy, but can we save the garbage too? I’m not sure. Take any channel on YouTube: its creator could easily serve its entire catalog out of an obsolete chromebox with two USB sticks on the side. Even as small as a terabyte would still be mostly empty space. YouTube was built defective by design, using 1970s ideology; it is immensely wasteful.

    • Schmeckinger@lemmy.world · edited · 6 months ago

      I want to see how you can serve thousands or millions of people with a Chromebook in your closet. And if you say P2P: that doesn’t deal with spikes in demand, a lot of old content will vanish even more easily than on YouTube, and it relies on people being willing to seed.

      • interdimensionalmeme@lemmy.ml · 6 months ago

        The main limitation is the 1-gigabit network link. Once protocol overhead is taken out of the raw gigabit, it can push out about 260 3-megabit streams or 50 15-megabit streams at most.

        That’s already an enormous number of concurrent viewers, and it covers 99% of the content on YouTube.
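        Back-of-the-envelope, those ceilings work out roughly like this (the ~78% usable-throughput factor is an assumption to match the comment’s figures, not a measured number):

```python
# Stream-count ceilings for a nominal 1 Gbit/s uplink.
LINK_MBPS = 1000        # line rate
EFFICIENCY = 0.78       # assumed usable fraction after TCP/HTTP overhead

for stream_mbps in (3, 15):
    raw = LINK_MBPS // stream_mbps
    usable = int(LINK_MBPS * EFFICIENCY) // stream_mbps
    print(f"{stream_mbps} Mbit/s: {raw} streams at line rate, ~{usable} usable")
```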

        To achieve this, you can’t waste processing power anywhere: a straight copy to the network from pre-encoded files, no live transcoding.
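        That “straight copy” is just sendfile-style streaming. A minimal sketch in Python (the function name and chunk size are illustrative, not from the comment):

```python
import shutil

def serve_preencoded(src, dst, chunk_bytes=64 * 1024):
    """Copy a pre-encoded video file verbatim to a socket-like stream.

    No transcoding, no scripting: the file on disk is already at its
    final bitrate, so serving it is nothing more than a buffered copy.
    """
    shutil.copyfileobj(src, dst, length=chunk_bytes)
```

        In production this is what nginx’s `sendfile on` or a kernel `sendfile(2)` call does, without even copying through userspace.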

        No scripting, and no encryption either. If you really need those, which you almost certainly don’t, install a reverse proxy on your OpenWrt router.
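        If you do want TLS, terminating it on the router keeps the crypto off the file server. A minimal nginx reverse-proxy fragment (the hostname, certificate paths, and LAN address are illustrative):

```nginx
server {
    listen 443 ssl;
    server_name example.invalid;            # illustrative hostname

    ssl_certificate     /etc/nginx/cert.pem;
    ssl_certificate_key /etc/nginx/key.pem;

    location / {
        proxy_pass http://192.168.1.10:80;  # the file server on the LAN
    }
}
```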

        Now, if you want to scale, which almost no video really needs, then you’ll send the client a script. The clients are a source of infinite scaling, compute, and bandwidth.

        Each client just needs to rebroadcast two copies of the stream.

        As excess clients connect, you tell them to get the stream from a STUN/TURN server. This punches through the NAT on both sides and puts two clients in communication. The first client sends its copies of the received stream chunks, with preference for the beginning of the file. A client can get the stream from multiple other clients, and once it has a few stream chunks in its cache it can serve them to new clients.
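        The chunk-selection rule described above (pull the earliest missing chunk, draw from several peers, serve anything already cached) can be sketched like this; all names are illustrative:

```python
def next_chunk_to_request(have, peers):
    """Pick the earliest missing chunk that some peer can supply.

    have:  set of chunk indices this client already holds
    peers: dict mapping peer id -> set of chunk indices that peer offers
    Returns (peer, index), or None if nothing new is available.
    """
    offered = set().union(*peers.values()) if peers else set()
    for index in sorted(offered - have):      # preference: start of file first
        for peer, chunks in peers.items():
            if index in chunks:
                return peer, index
    return None

def can_serve(have, index):
    """A client serves any chunk already in its cache to newer clients."""
    return index in have
```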

        It doesn’t take many doublings before you have more bandwidth than the whole internet. All the logic for organisation, hash checking, stream block ordering and so on is a small text file from the server, signed with the server’s certificate. It runs entirely inside the client’s browser.
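        The doubling claim is just exponential fan-out: if every client rebroadcasts to two new peers, total capacity doubles each generation. A quick sanity check of the growth rate (the “whole internet” comparison is the comment’s hyperbole; this only shows how fast the doublings accumulate):

```python
def generations_to_reach(viewers):
    """Doubling generations needed before total clients >= viewers."""
    total, generation = 1, 0
    while total < viewers:
        total *= 2
        generation += 1
    return generation

print(generations_to_reach(1_000_000))      # 20 doublings cover a million viewers
print(generations_to_reach(1_000_000_000))  # 30 cover a billion
```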