I’ve never known cable providers to fail at broadcasting live TV. M*A*S*H (not live), among many others, drew 70-100+ million viewers, and many shows had 80%+ of the entire nation watching the same network without issue. I’ve never seen buffering during a Super Bowl broadcast.

Why do streaming services struggle compared to cable television when too many people watch at the same time? What is the technical difficulty for a network that has improved over time but still can’t keep up with audience numbers from decades ago for live television?

I hate ad-based cable television, but I never had issues with it growing up. Why can’t current ‘tech’ meet the same needs we seemingly solved long ago?

Just curious about what changed in data transmission that made it more difficult for the majority of people to watch the same thing at the same time.

  • jqubed@lemmy.world · 50 points · 5 days ago

    Broadcasting and the Internet work in fundamentally different ways. With broadcasting, a transmitter sends radio waves out into the world (and beyond). It doesn’t care how many receivers there are; there could be millions or there could be none, literally broadcasting into the void. The transmitter doesn’t need to know anything about the receivers; its transmission is entirely independent of them. A receiver can tune in or not, much like a boat raising its sails to catch a passing wind, and one receiver generally won’t impact another, just as many boats can sail on the same wind.

    This is a really efficient way to get a lot of identical data to many people at once, and it became especially apparent when we switched to digital television. ATSC 1.0, the standard the US switched to about 15 years ago, can carry about 19.3 Mbps of data over a normal TV channel. Because the system was designed in the 1990s, this is MPEG-2 video (the same codec used on DVDs), but it still works pretty well for 1080i or 720p. In fact, as encoders improved, we could usually fit two HD streams in there at 6-8 Mbps each that looked pretty decent and still have room for one or two more SD streams.
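
    As a back-of-the-envelope sketch of that channel budget (the per-stream bitrates here are illustrative assumptions, not any station’s actual lineup):

    ```python
    # Rough ATSC 1.0 multiplex budget; per-stream numbers are assumptions
    CHANNEL_CAPACITY_MBPS = 19.3  # usable payload of one ATSC 1.0 channel

    streams_mbps = {
        "HD stream 1 (720p, MPEG-2)": 7.0,
        "HD stream 2 (1080i, MPEG-2)": 8.0,
        "SD subchannel": 2.5,
    }

    used = sum(streams_mbps.values())
    print(f"used {used:.1f} Mbps, {CHANNEL_CAPACITY_MBPS - used:.1f} Mbps left over")
    # -> used 17.5 Mbps, 1.8 Mbps left over
    ```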

    At the same time, we were able to significantly reduce the power of the transmitters. I think the last station I worked at ran something like 125 kilowatts out of the transmitter in the analog days, but with the switch to digital we were at 28 or 40 kilowatts (it’s been about a decade since I left television engineering). In the analog days, a huge percentage of the power was used to keep an adequate picture at the very fringes of our broadcast license, which effectively meant an increasingly crummy picture was pushed well beyond our license area (this was factored into how the system was designed). With digital, you either get enough signal to produce the picture or you don’t; there’s not really an in-between (other than a picture that keeps freezing up). This means a weak signal far from the transmitter, one that would have produced a marginal picture in the analog days, can produce a picture that looks just as good as it does much closer to the transmitter.

    All of this means that with just 19.3 Mbps of data coming from the transmitter, potentially millions of people can see the same video in real time. Satellite is basically the same thing, except instead of an antenna on top of a tower that’s 3,000 feet tall and can cover an area maybe 150-200 miles in diameter, the antenna is placed 22,000 miles high and can cover an entire continent. Cable works pretty similarly, except instead of transmitting through the air, the coaxial cable carries the entire spectrum protected from outside interference. It pushes all the signals out of its “transmitter” (called the head end) down a cable, then splits that cable and amplifies the signal (and splits and amplifies again and again) as needed until it reaches all the customers. There can be some complications with digital cable, but that’s the basic concept.

    In contrast, the Internet is very much designed for one-to-one communication. This works fine for everyday use, but if a lot of people want to see the same thing, each of them has to make their own connection to the server. Even if the video stream is only 5 Mbps, if 100 different people want that same stream at the same time, you now need 500 Mbps of bandwidth to handle all those connections, plus a computer that can handle them all simultaneously. If thousands, hundreds of thousands, or millions of people are trying to stream the same video at once, you can see how much of a problem this becomes.

    It’s one thing if the video is already recorded, like a movie; you can just distribute it to many servers in advance. But if it’s a live event that’s ultimately coming from one source, you have to set up multiple servers to connect to the source and forward it, perhaps to other servers that forward to other servers, until you have enough servers and bandwidth for all the end customers to connect to. If a million people are trying to watch your 5 Mbps video, you might think you need 5 million Mbps of bandwidth, but you actually need even more to connect all those servers back to the source, plus the many servers themselves. This is a hugely intensive use of resources. Streaming companies will try to set up in advance for the number of viewers they expect, but if they guess too low they’ll have to scramble to increase capacity. I suspect this is more challenging for companies like Netflix that rarely do live video, as opposed to companies that do it every day like YouTube or Twitch.
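
    To put rough numbers on that fan-out (the per-server capacity here is an assumption, just to show the shape of the math):

    ```python
    # Rough unicast fan-out math for a live stream; numbers are assumptions
    stream_mbps = 5          # bitrate one viewer receives
    viewers = 1_000_000      # concurrent viewers
    fanout = 200             # streams a single server is assumed to sustain

    edge_bw_mbps = stream_mbps * viewers   # bandwidth straight to viewers
    servers = -(-viewers // fanout)        # ceiling division: edge servers

    # Each tier of servers also needs its own copy fed from the tier above,
    # all the way back to the single live source.
    relay_bw_mbps = 0
    nodes = servers
    while nodes > 1:
        relay_bw_mbps += stream_mbps * nodes
        nodes = -(-nodes // fanout)
        servers += nodes

    print(f"{edge_bw_mbps / 1e6:.1f} Tbps to viewers, "
          f"{relay_bw_mbps} Mbps between tiers, ~{servers} servers")
    # -> 5.0 Tbps to viewers, 25125 Mbps between tiers, ~5026 servers
    ```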

    This isn’t even getting into complexities like TCP vs. UDP for the protocol. At the end of the day, the way the Internet is designed, each client needs to be sent its own personal stream of data. It just can’t compete with the efficiency of everybody sharing the same stream of data that comes with broadcasting. In that sense, for big, shared experiences, it’s kind of a shame that broadcasting is dwindling away. How many people do you know who can still get a TV signal from an antenna or cable/satellite?

    • tal@lemmy.today · 12 points · 5 days ago

      > In contrast, the Internet is very much designed for one-to-one communication.

      It’s not widely used today the way broadcast TV was, but there is multicast. Twenty years ago, I was watching NASA TV streamed over the Mbone.

      There, the routers are set up to handle sending packets to multiple specific destinations, one-to-many, so it is pretty efficient.
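
      A minimal IP multicast sender in Python looks something like this (the group address and port are arbitrary examples from the administratively scoped 239.0.0.0/8 range):

      ```python
      import socket

      GROUP, PORT = "239.1.2.3", 5004  # example multicast group and port

      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      # TTL controls how many router hops the multicast packets may cross
      sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)

      # One sendto; the network delivers a copy to every subscribed receiver
      sock.sendto(b"one packet, many receivers", (GROUP, PORT))
      ```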

      • DontNoodles@discuss.tchncs.de · 1 point · 4 days ago

        I’ve wondered for a long time whether it is possible to use WiFi as broadcast/multicast. I mean, I understand that it won’t work out of the box, but if one were to write code from the ground up (i.e., a different protocol in place of TCP), could it be made possible to, say, transmit a video lecture from one WiFi router and have multiple mobile phones receive and view it without individually being connected to the network? Kind of like how everyone is able to see the SSID of a WiFi node without being connected.

        Or is it a hardware-level problem that I can’t wrap my head around? I have wanted to understand this for a long time, but I don’t have a background in this subject and don’t know the right questions to ask. Even the LLM-based search tools haven’t been much help.

        • tal@lemmy.today · 2 points · 4 days ago

          You can broadcast to everyone connected to a WiFi network. That’s just an Ethernet network, and there’s a broadcast address on Ethernet.

          Typically, WiFi routers aren’t set up to route broadcasts elsewhere, but with the right software, like OpenWRT, a very small Linux distribution, you can bridge them to other Ethernet networks if you want.

          Internet Protocol also has its own broadcast address, and in theory you can try to send a packet to everyone on the Internet (255.255.255.255), but nobody will have their IP routers set up to forward that packet very far, because there’s no good reason for it and someone would go and try to abuse it to flood the network. But if everyone wanted to, they could.
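
          For the local-network case, this is only a few lines of Python (the port number is an arbitrary example):

          ```python
          import socket

          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)  # permit broadcast

          # 255.255.255.255 reaches every host on the local segment;
          # routers won't forward it any further than that.
          sock.sendto(b"hello, everyone on this network", ("255.255.255.255", 9999))
          ```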

          I don’t know if it’s what you’re thinking of, but there are some projects to link together multiple WiFi access points over wireless, called a wireless mesh network. It’s generally not as good as linking the access points with cable, but as long as all the nodes can see each other, any device on the network can talk to the others with no physical wires. I would assume that on those, Ethernet broadcasts and IP broadcast packets are probably set up to be forwarded to all devices. So in theory, sure.

          The real issue with broadcast on the Internet isn’t that it’s impossible to do. It’s just that, unlike with TV, there’s no reason to send a packet to everyone over a wide area. Nobody wants that traffic, and it floods devices that don’t care about it. So normally, the most you’ll see is some kind of multicast, where devices subscribe to packets from a given sender and the network hardware handles the one-to-many delivery in a sort of tree architecture.

          You can also do multicast at the IP level today, just as long as the devices are set up for it.
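
          The receiving side is the interesting part: joining the group makes the OS send an IGMP report so the network starts forwarding that stream to you. A minimal sketch, using the same example group and port as the sender sketch earlier in the thread:

          ```python
          import socket
          import struct

          GROUP, PORT = "239.1.2.3", 5004  # example group/port, matching the sender

          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
          sock.bind(("", PORT))

          # Join the multicast group on the default interface (0.0.0.0)
          mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
          sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

          data, sender = sock.recvfrom(2048)  # every joined host gets its own copy
          ```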

          If there were very great demand for that today, say, something like Twitch TV or another live-streaming system being 70% of Internet traffic the way BitTorrent was at one point, I expect that network operators would look into multicast options again. But as it is, I think that the real problem is that the gains just aren’t worth bothering with versus unicast.

          kagis

          Today, it looks like video is something like that share of Internet traffic, but it’s stuff like Netflix and YouTube, which is pretty much all video on demand, not live streams. People aren’t all watching the same thing at the same time, so there’s no call for broadcast or multicast there.

          If you could find something that a very high proportion of devices wanted at about the same time, like an operating system update (if a high proportion of devices used the same OS), you could maybe multicast that. You’d add some redundant information using forward error correction, so that devices that miss a packet or two can still reconstruct the update, and ones that still need more data could use unicast to fetch the remainder. But as it stands, there’s just not enough data being pushed in that form to be worth bothering with.
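
          As a toy illustration of that forward-error-correction idea: one XOR parity packet lets a receiver rebuild any single lost packet without ever asking the sender for a resend.

          ```python
          # Three equal-length data packets plus one XOR parity packet
          packets = [b"chunk-A!", b"chunk-B!", b"chunk-C!"]
          parity = bytes(a ^ b ^ c for a, b, c in zip(*packets))

          # Suppose packet 1 never arrives: XOR the survivors with the parity
          recovered = bytes(x ^ y ^ z for x, y, z in zip(packets[0], packets[2], parity))
          assert recovered == packets[1]  # the lost packet is restored locally
          ```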

          • DontNoodles@discuss.tchncs.de · 2 points · 4 days ago

            I’m honoured that you took the time to type all this out, but it looks like I’ve failed yet again at conveying what I meant to ask, so I’ll try to rephrase:

            What you have been explaining is broadcasting when all devices are connected to the same network. I want to understand whether it is possible to use WiFi just like a radio to broadcast data, without actually connecting. A device could transmit/broadcast, say, a video over EM waves using some kind of modulation, like they do in FM. The receiving devices, like our phones, already have the hardware to receive these waves and process them to extract the information. Like I said, this already happens in a small way: the SSID of a WiFi transmitter/router is seen by all devices without actually connecting.

            And before you say anything, yes I’m aware that it is a very small amount of data being ‘transmitted’ at a very low bitrate. But what is the limiting factor? Why can’t much more data be transmitted this way?

            I’m really sorry if there is a silly answer to this, as I’m sure there must be. But, like I said, I could never find it in my searches.

            Thanks and cheers!

            • tal@lemmy.today · 1 point · 4 days ago

              > I want to understand whether it is possible to use WiFi just like a radio to broadcast data, without actually connecting.

              Yes, at least some WiFi adapters can. Software used to attack WiFi connections, like aircrack, does this by listening and logging (encrypted) packets without authenticating to the access point, and then attempting to determine an encryption key. You can just send unencrypted traffic the way you do today, and software could theoretically receive it.
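
              For a concrete taste, here’s a hedged sketch of passively logging beacon frames (the SSID advertisements) with scapy. It assumes a Linux machine, root, and an adapter already placed in monitor mode; the interface name “wlan0mon” is just an example:

              ```python
              from scapy.all import Dot11Beacon, Dot11Elt, sniff

              def show_beacon(pkt):
                  # Beacon frames are sent in the clear; no connection or key needed
                  if pkt.haslayer(Dot11Beacon):
                      ssid = pkt[Dot11Elt].info.decode(errors="replace")
                      print(f"beacon from {pkt.addr2}: SSID {ssid!r}")

              sniff(iface="wlan0mon", prn=show_beacon, store=False)
              ```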

              However, this probably won’t provide any great benefit. That is, as far as I know, just being connected to a WiFi access point shouldn’t generate much traffic, so you could have a very large number of computers authenticated to the WiFi access point – just set it not to use a password – without any real drawback relative to having the same machines snooping on unencrypted traffic.

              WiFi adapters cannot listen to multiple frequencies concurrently (well, unless things have changed recently), so this won’t let you easily receive data from multiple access points at once, if you’re thinking of having them all send data simultaneously.

              • DontNoodles@discuss.tchncs.de · 1 point · 4 days ago

                Typical home routers don’t support more than 20-25 simultaneous connections, last I checked. I’m sure there are professional devices that allow thousands of connections, like the ones used in public WiFi spots, but I’m also sure they’d be much pricier.

                At this point, it’s just a pursuit of understanding how these things work and whether what I want could actually be made possible as an alternative use case of WiFi, especially given how ubiquitous it is.

                Thank you for indulging me nonetheless.

    • betabob@lemmy.dbzer0.com (OP) · 4 points · 5 days ago

      These are all really good answers, but yours really nailed it for me. Such a fascinating development and change in infrastructure. Thank you for such a well thought out and informed reply.

    • count_dongulus@lemmy.world · 3 points · edited · 5 days ago

      At least with UDP we can avoid inflating the stream’s bandwidth cost even further, since it won’t expect acks and possible retransmissions. Great explanation!

      • stoy@lemmy.zip · 3 points · 5 days ago

        Using TCP for live video streaming would be horrible at best, and probably outright unusable.

        That’s because TCP retransmits lost packets, and everything received after a lost packet has to wait until the retransmission arrives, which stalls or breaks up the stream.

        The same goes for TCP and multiplayer games.
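
        A minimal sketch of how a UDP receiver can just shrug off loss (the port and timing are arbitrary assumptions, not any real player’s code):

        ```python
        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", 5004))    # example port
        sock.settimeout(0.040)   # roughly one frame interval at 25 fps

        while True:
            try:
                frame, _ = sock.recvfrom(65536)
                # ...decode and display the frame...
            except socket.timeout:
                # Packet lost or late: keep showing the last frame and move on,
                # instead of stalling the way a TCP retransmission would.
                pass
        ```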

    • conciselyverbose@sh.itjust.works · 2 points · 5 days ago

      Your general explanation is all good, but it never seems like the platforms actually built for live events have much trouble delivering content. I don’t think the issue is that streaming live broadcasts is insurmountable so much as that Netflix specifically doesn’t have its architecture set up in a way that works well for big live events. It leans heavily on having content cached close to the end users and doesn’t have a lot of experience with real time.

  • palitu@aussie.zone · 11 up, 1 down · 5 days ago

    Broadcast versus on demand.

    Cable sends the same data to everyone at the same time. It’s something like: read from the hard drive once, send it out once. Everywhere it goes, the same thing is replicated to each and every receiver, with no changes, just copy and paste.

    Streaming is different. Every piece of information sent is basically unique. You need to read from the hard drive thousands of times, because everyone is watching something different, and you need to deliver each piece of information to the right location, perfectly, in order, and at the right time. If anything goes wrong, you get buffering.

    Cable and broadcast: no buffering, but no choice.

    Streaming: choice, but with buffering.

  • TootSweet@lemmy.world · 12 points · 5 days ago

    Short version: cable is optimized for sending everyone the same content at the same time. (All users connected to cable get all channels all the time, even if they’re only watching one or two at a time.) The Internet is made for each user getting what they ask for, when they ask for it.

    Either technology can be used for either use case, but they were originally built for different purposes and so are optimized differently.

    Just like a subway train would make a pretty crappy private one-person vehicle for commuting to work and the grocery store, a fleet of private cars would be crappy for public mass transit.

  • NeoNachtwaechter@lemmy.world · 5 points · 5 days ago

    The answer goes deep into networking technology. I’ll try to explain a few major points:

    Streaming is built on IP networking, which in turn can run over different cable technologies. IP (with TCP on top) transmits every piece of data reliably, but asynchronously; the sender cannot know how long delivery will take. This is good for things like the web: everybody gets their individual request fulfilled sooner or later.

    “Cable” is built directly on one cable technology. It transmits everything synchronously: you know exactly how long the transmission takes, and the amount of data is the same at all times. This is good for one long movie without any disturbances, but it doesn’t give much flexibility when many different users have many different needs.

  • Max-P@lemmy.max-p.me · 5 points · 5 days ago

    Cable infrastructure is built differently, with multicast streams. All the channels are broadcast on the network at all times, to relays all the way up to your neighbourhood, if not to your cable box. It’s got dedicated, guaranteed bandwidth; it can’t get overloaded.

    With streaming, each user is one connection using one stream’s worth of bandwidth, so it doesn’t scale well to millions of viewers. Technically there’s multicast, but it only works locally, and I’m sure all those cable companies that are also ISPs aren’t all that interested in making it work either. So for now we have thousands of identical streams crossing the country at the same time, hogging bandwidth and competing with everything else on the network.