• AbouBenAdhem@lemmy.world

    Skimming through the linked paper, I noticed this:

    > Scaling beyond a certain point will deteriorate the compression performance since the model parameters need to be accounted for in the compressed output.

    So it sounds like the model parameters needed to decompress the file are included in the file itself.
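    To put rough numbers on that trade-off (all byte counts below are made up, purely for illustration): the effective ratio is raw size divided by payload *plus* parameters, so past some model size the ratio worsens even as the payload keeps shrinking.

    ```python
    # Hypothetical numbers, only to illustrate the trade-off the paper describes:
    # the model's parameters count against the compressed size, so a bigger model
    # helps only while its payload savings exceed its own weight.

    def effective_ratio(raw_bytes, payload_bytes, model_bytes):
        """Compression ratio once the model needed to decompress is counted."""
        return raw_bytes / (payload_bytes + model_bytes)

    raw = 1_000_000_000  # 1 GB of raw data (assumed)

    # (model size, compressed payload) pairs -- made-up values
    for model, payload in [
        (10_000_000, 400_000_000),      # small model, weaker compression
        (100_000_000, 250_000_000),     # bigger model, better compression
        (10_000_000_000, 200_000_000),  # huge model: payload shrinks a little,
    ]:                                  # but the parameters dwarf the savings
        print(f"model={model:>14,} B  ratio={effective_ratio(raw, payload, model):.2f}")
    ```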

    • redcalcium@lemmy.institute

      So, you’ll have to use the same LLM to decompress the data? For example, if your friend sends you an archive compressed with this LLM, you won’t be able to decompress it without downloading the same LLM?
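      To make the question concrete, here is a toy rank-coding sketch (my own illustration, not the paper’s actual coder). Decoding works by re-running the identical model, so both sides need bit-identical weights:

      ```python
      # Toy scheme: encode each symbol as its rank in the model's prediction
      # list; decode by re-running the exact same model. Swap in a different
      # model and the ranks point at different symbols, i.e. garbage out.

      def toy_model(context):
          """Stand-in for an LLM: returns symbols ordered most- to least-likely.
          A real system would sort tokens by the network's predicted logits."""
          prefs = {"a": "bac", "b": "abc", "c": "cab"}
          return prefs.get(context[-1:] or "a", "abc")

      def encode(text):
          ranks, context = [], ""
          for ch in text:
              ranks.append(toy_model(context).index(ch))  # likely symbol = small rank
              context += ch
          return ranks  # a real coder would then entropy-code these ranks

      def decode(ranks):
          context = ""
          for r in ranks:
              context += toy_model(context)[r]  # same model => same prediction order
          return context

      msg = "abacab"
      assert decode(encode(msg)) == msg
      print(encode(msg))  # [1, 0, 0, 2, 1, 0]
      ```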

      • snargledorf@lemm.ee

        This is not dissimilar to regular compression algorithms. If I compress a folder using the 7-Zip format (.7z), the end user needs 7-Zip (or another tool that implements the format) to decompress it. (I know Windows 11 is getting 7zip support.)

        • redcalcium@lemmy.institute

          Except LLMs tend to be very big compared to standard decompression programs, and often require a GPU with adequate VRAM to run reasonably fast. This is a very big usability issue IMO. If decompression could be done with a smaller, faster program (maybe also generated by the LLM?), it could be very useful and see pretty wide adoption (e.g. for future game devs who want to reduce their game size from 150 GB to 130 GB).
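          As a back-of-the-envelope check on that game example (the 150 GB to 130 GB figures are from above; the 30 GB model size is assumed), a shared model only pays off once the summed savings beat the one-time download:

          ```python
          model_download = 30e9    # assumed one-time cost: a ~30 GB model
          saving_per_game = 20e9   # 150 GB -> 130 GB, per the example above

          # Break-even: games that must share the model before the download
          # cost is recovered.
          print(f"break-even after {model_download / saving_per_game:.1f} games")  # 1.5
          ```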

          • andruid@lemmy.ml

            Training tends to be more compute-intensive, while inference is more likely to be able to run on a smaller hardware footprint.

            The neater idea would be a standard model or set of models, so that one ~30 GB program could cover ~80% of target cases; games and video seem like good candidates for this.
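            A hedged sketch of how that could look: a hypothetical archive header (format and field names invented here) that references a well-known standard model by ID and hash instead of embedding the weights, so one ~30 GB download serves many archives:

            ```python
            import hashlib, json

            def make_header(model_id, model_bytes, payload_len):
                """Archive header that points at a shared standard model."""
                return json.dumps({
                    "format": "llmz-v0",          # made-up format name
                    "model_id": model_id,         # ID in some shared model registry
                    "model_sha256": hashlib.sha256(model_bytes).hexdigest(),
                    "payload_len": payload_len,   # decoder verifies its local copy
                })                                # of the model against the hash

            fake_weights = b"\x00" * 1024  # stand-in for a multi-GB weights file
            print(make_header("std-compress-30b", fake_weights, 130_000_000_000))
            ```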