• vane@lemmy.world · 1 day ago

    So network bandwidth became cheaper than CPU? Clearly CPUs are stagnating.

  • Justin@lemmy.jlh.name · 2 days ago

    Pretty neat. Right now 1 Gbps downloads can often be bottlenecked by the CPU, so a faster-to-decompress algorithm like zstd will probably speed up downloads.
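
    A rough way to sanity-check that on your own machine (assuming zstd is installed; the file name below is just a placeholder) is zstd's built-in benchmark mode, which prints compression and decompression throughput you can compare against your line speed:

    # benchmark zstd levels 1 through 9 on a sample file; the reported MB/s figures
    # can be compared against ~125 MB/s, roughly what a 1 Gbps link delivers
    zstd -b1 -e9 some_game_file.bin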

  • DaCrazyJamez@sh.itjust.works · 2 days ago

    So if I’m reading this correctly, they are trading slightly larger downloads for considerably faster overall install speeds.

    Makes a lot of sense as most internet connections nowadays can handle the added bandwidth.

  • inzen@lemmy.world · 2 days ago

    I don’t know much about compression algorithms. What are the benefits of doing this?

    • Malix@sopuli.xyz · 2 days ago

      zstd is generally stupidly fast and quite efficient.

      probably not exactly how Steam does it, or even close, but as a quick & dirty comparison: I compressed and decompressed a random CD.iso (~375 MB) I had lying around, using zstd and lzma with a 1 MB dictionary:

      test system: an Arch Linux (btw, as is customary) laptop with an AMD Ryzen 7 PRO 7840U CPU.

      commands used & results:

      Zstd:

      # compress (--maxdict 1048576 sets the compression dictionary size to 1 MB):
      % time zstd --maxdict 1048576 < DISC.ISO > DISC.zstd
      zstd --maxdict 1048576 < DISC.ISO > DISC.zstd  1,83s user 0,42s system 120% cpu 1,873 total
      
      # decompress:
      % time zstd -d < DISC.zstd > /dev/null
      zstd -d < DISC.zstd > /dev/null  0,36s user 0,08s system 121% cpu 0,362 total
      
      • resulting archive was 229 MB, ~61% of original.
      • ~1.9s to compress
      • ~0.4s to decompress

      So, pretty quick all around.

      Lzma:

      # compress (the -1e argument selects a preset that uses a 1 MB dictionary):
      % time lzma -1e < DISC.ISO > DISC.lzma
      lzma -1e < DISC.ISO > DISC.lzma  172,65s user 0,91s system 98% cpu 2:56,16 total
      
      # decompress:
      % time lzma -d < DISC.lzma > /dev/null
      lzma -d < DISC.lzma > /dev/null  4,37s user 0,08s system 98% cpu 4,493 total
      
      • ~179 MB archive, ~48% of original.
      • ~3min to compress
      • ~4.5s to decompress

      This one felt like forever to compress.

      So, my takeaway here is that lzma’s compression time cost is big enough to justify spending a bit of extra disk space for the sake of speed.

      and lastly, just because I was curious, I ran zstd at a higher compression level (-9) too:

      % time zstd --maxdict 1048576 -9 < DISC.ISO > DISC.2.zstd
      zstd --maxdict 1048576 -9 < DISC.ISO > DISC.2.zstd  10,98s user 0,40s system 102% cpu 11,129 total
      
      % time zstd -d < DISC.2.zstd > /dev/null 
      zstd -d < DISC.2.zstd > /dev/null  0,47s user 0,07s system 111% cpu 0,488 total
      

      ~11s compression time, ~0.5s decompression, archive size was ~211 MB.

      I deemed it wasn’t necessary to spend the time compressing the archive with lzma’s max settings.

      Now I’ll be taking notes when people start correcting me & explaining why these “benchmarks” are wrong :P

      edit:

      goofed a bit with the -9 run; added the same dictionary size.

      edit 2: one of the reasons for the change might be syncing files between their servers. IIRC zstd output can be made “rsync friendly”, allowing partial file syncs instead of re-syncing the entire file, which saves bandwidth. Not sure if lzma does the same.
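
      For reference, that’s zstd’s --rsyncable flag; a minimal sketch (file names made up) would be something like:

      # --rsyncable makes zstd periodically reset its compression state, so a small change
      # in the input only alters nearby compressed blocks and rsync-style delta transfers
      # can skip the unchanged parts
      zstd --rsyncable depot_chunk -o depot_chunk.zst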

      • empireOfLove2@lemmy.dbzer0.com · 2 days ago

        Especially since lzma currently bottlenecks on the CPU during decompression for most computers on fast internet connections. Zstd uses the CPU much more efficiently.
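
        If you want to see that bottleneck on your own machine, a rough check (assuming pv is installed; file names are made up) is to pipe each decompressor through pv and watch the throughput:

        # decompress to /dev/null through pv, which displays the sustained MB/s;
        # compare it against what your connection can deliver (~125 MB/s at 1 Gbps)
        xz -dc game_depot.tar.xz | pv > /dev/null
        zstd -dc game_depot.tar.zst | pv > /dev/null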

      • vaguerant@fedia.io · 2 days ago

        “Better” doesn’t always mean “smaller”, especially in this example. LZMA’s strength is that it compresses to very small sizes, but its weakness is that it’s extremely CPU-intensive to decompress. Switching to ZSTD will actually result in larger downloads, but the massively reduced CPU load of decompressing ZSTD means it’s faster for most users. Instead of just counting the time it takes for the data to transfer, this factors in download time + decompression time. Even though ZSTD is somewhat less efficient in terms of compression ratio, it’s far more efficient computationally.
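
        As a back-of-the-envelope illustration (every figure below is an assumption for illustration, not Valve’s actual numbers): 10 GB of installed data over a 1 Gbps link (~125 MB/s), with LZMA assumed to compress to ~60% and decompress at ~80 MB/s, and zstd assumed to compress to ~65% and decompress at ~1000 MB/s:

        # prints the assumed download and decompression times for each codec
        awk 'BEGIN {
          link = 125; game = 10 * 1024
          printf "lzma: %.0f s download + %.0f s decompress\n", game*0.60/link, game/80
          printf "zstd: %.0f s download + %.0f s decompress\n", game*0.65/link, game/1000
        }'

        Under those guesses the zstd download takes a few seconds longer, but the whole install finishes in roughly a third of the time.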

        • merthyr1831@lemmy.ml · 2 days ago

          Bet that’ll save Valve on some server costs too. Storage is much cheaper than compute (though I imagine they’ll probably keep LZMA around for clients on slow connections).

          • anguo@lemmy.ca · 2 days ago

            Doesn’t decompression only happen client-side? I don’t imagine them compressing the files multiple times.

            • merthyr1831@lemmy.ml · 2 days ago

              Hmm, true. I was thinking that Steam has a lot of games and respective builds it has to compress, even if the decompression benefits are client-side only.

              Each new game update would also have to be compressed. I have no idea how Steam works out which files need replacing on their end, though, which might involve decompressing the files to analyse them.