Abacus.ai:

We recently released Smaug-72B-v0.1, which has taken first place on the Open LLM Leaderboard by HuggingFace. It is the first open-source model with an average score above 80.

  • @simple@lemm.ee
    41 points · 5 months ago

    I’m afraid to even ask about the minimum specs on this thing; open-source models have gotten so big lately.

    • TheChurn
      44 points · 5 months ago

      Every billion parameters needs about 2 GB of VRAM if using the bfloat16 representation: 16 bits per parameter at 8 bits per byte -> 2 bytes per parameter.

      1 billion parameters ≈ 2 billion bytes ≈ 2 GB.

      From the name, this model has 72 billion parameters, so ~144 GB of VRAM.
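
      As a quick sanity check in Python (weights only; activations, KV cache, and framework overhead are ignored here):

      ```python
      # Rough VRAM needed just to hold the weights in bfloat16.
      params = 72e9        # 72B parameters, from the model name
      bytes_per_param = 2  # bfloat16: 16 bits / 8 bits per byte
      print(f"~{params * bytes_per_param / 1e9:.0f} GB")  # ~144 GB
      ```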

            • @Rai@lemmy.dbzer0.com
              4 points · 5 months ago

              My 83 was ganked by some kid I knew so my folks bought me a silver. He denied it. I learned that day to write my name in secret spots.

              • 𝕸𝖔𝖘𝖘
                2 points · 5 months ago

                That kid you knew was a dick. At least he taught you a valuable lesson, I guess.

                • @Rai@lemmy.dbzer0.com
                  2 points · 5 months ago

                  He absolutely was a dick. I stopped being mates with him after that. My school was like “yeah the cameras didn’t work that day actually”

      • FaceDeer
        8 points · 5 months ago

        It’s been discovered that you can reduce the bits per parameter down to 4 or 5 and still get good results. I just saw a paper this morning describing a technique to get down to 2.5 bits per parameter, even, and apparently it’s fine. We’ll see if that works out in practice, I guess.
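
        Extending the arithmetic from the comment above, here’s roughly what those bit widths would mean for a 72B model (lower bounds; real quantized files also store scales and other metadata):

        ```python
        # Weight-only size estimates at various quantization bit widths.
        params = 72e9
        for bits in (16, 8, 5, 4, 2.5):
            print(f"{bits:>4} bits/param -> ~{params * bits / 8 / 1e9:.1f} GB")
        # 16 -> 144.0, 8 -> 72.0, 5 -> 45.0, 4 -> 36.0, 2.5 -> 22.5
        ```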

        • @Corngood@lemmy.ml
          2 points · edited · 5 months ago

          I’m more experienced with graphics than ML, but wouldn’t that cause a significant increase in computation time, since those aren’t native types for arithmetic? Maybe that’s not a big problem?

          If you have a link for the paper, I’d like to check it out.

          • FaceDeer
            12 points · 5 months ago

            My understanding is that the bottleneck for the GPU is moving data into and out of it, not processing the data once it’s in there. So if you can cram the whole model into VRAM, it’s still faster even if you have to do some extra work unpacking and repacking it during processing (sketch below).

            The paper was posted on /r/localLLaMA.
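
            A minimal sketch of what that unpacking work looks like, assuming the simplest possible scheme (two 4-bit weights per byte with a single scale; real formats like GPTQ or GGUF block quants are more elaborate):

            ```python
            import numpy as np

            # Dequantize int4 weights packed two-per-byte, using one scale.
            # Real schemes use per-group scales and offsets; illustrative only.
            def unpack_int4(packed: np.ndarray, scale: float) -> np.ndarray:
                lo = packed & 0x0F           # low nibble of each byte
                hi = (packed >> 4) & 0x0F    # high nibble of each byte
                nibbles = np.stack([lo, hi], axis=-1).reshape(-1).astype(np.int8)
                return (nibbles - 8) * scale  # recenter to [-8, 7], then rescale

            packed = np.array([0x2F, 0x80], dtype=np.uint8)
            print(unpack_int4(packed, scale=0.1))  # [ 0.7 -0.6 -0.8  0. ]
            ```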

          • @L_Acacia@lemmy.one
            4 points · 5 months ago

            You can take a look at the exllama and llama.cpp source code on GitHub if you want to see how it’s implemented.

      • @rs137@lemmy.world
        1 point · 5 months ago

        Llama 2 70B with 8-bit quantization takes around 80 GB of VRAM, if I remember correctly. I tested it a while ago.

    • @General_Effort@lemmy.world
      16 points · 5 months ago

      CUDA 11.4 and above are recommended (this is for GPU users, flash-attention users, etc.). To run Qwen-72B-Chat in bf16/fp16, at least 144GB GPU memory is required (e.g., 2xA100-80G or 5xV100-32G). To run it in int4, at least 48GB GPU memory is required (e.g., 1xA100-80G or 2xV100-32G).

      It’s derived from Qwen-72B, so it has the same specs. Q2 clocks it in at only ~30 GB.

  • Miss Brainfarts
    9 points · 5 months ago

    That’s nice and all, but what are some FOSS models I can run on a GPU with only 4 GB?

    I’ve tried Deepseek Coder, and it’s pretty nice for what I use it for. Then there’s TinyLlama, which… well it’s fast, but I need to be veeeery exact in how I prompt it.

    • @Fisch@lemmy.ml
      6 points · edited · 5 months ago

      Unfortunately, LLMs need a lot of VRAM. You could try koboldcpp: it runs on the CPU but lets you offload layers onto the GPU. That way you might be able to stay within those 4 GB even with larger models.

      Edit: I forgot to mention there’s a fork of koboldcpp with ROCm support for AMD cards, which is about twice as fast if I remember correctly. Only relevant if you have an AMD card tho.

      Edit 2: This is the model I use btw
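
      If you’d rather script it than use a UI, the same layer-offloading idea works through llama-cpp-python; a sketch, with the model path as a placeholder and n_gpu_layers picked for a small card:

      ```python
      from llama_cpp import Llama

      # Hypothetical GGUF path; n_gpu_layers controls how many transformer
      # layers go to VRAM (the rest stay on the CPU). -1 would offload all.
      llm = Llama(
          model_path="./models/some-model.Q4_K_M.gguf",
          n_gpu_layers=15,
          n_ctx=2048,
      )
      out = llm("Q: Name one planet. A:", max_tokens=16)
      print(out["choices"][0]["text"])
      ```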

      • Miss Brainfarts
        2 points · 5 months ago

        I’m currently playing around with the Jan client, which uses the Nitro engine. I think I need to read up on it more, because when I set the ngl value to 15 in order to offload 50% to the GPU like the Jan guide says, nothing happens. Though that could be an issue specific to Jan.

        • @Fisch@lemmy.ml
          2 points · 5 months ago

          Maybe 50% GPU is already using too much VRAM and it crashes. You could try to set it to 0% GPU and see if that works.

          • Miss Brainfarts
            1 point · 5 months ago

              I may need to lower it a bit more, yeah. Though when I try to use offloading, I can see that VRAM usage doesn’t increase at all.

              When I leave the setting at its default value of 100, on the other hand, I see VRAM usage climb until it stops because there isn’t enough of it.

            So I guess not all models support offloading?

            • @General_Effort@lemmy.world
              4 points · 5 months ago

              Most formats don’t support it. It has to be the GGUF format, afaik. You can usually find a conversion on Hugging Face. Prefer offerings by TheBloke for the detailed documentation, if nothing else.
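
              If you’re unsure what a downloaded file actually is, GGUF files start with the four-byte magic "GGUF", so a quick check is easy (filename is hypothetical):

              ```python
              # GGUF files begin with the ASCII magic "GGUF".
              def is_gguf(path: str) -> bool:
                  with open(path, "rb") as f:
                      return f.read(4) == b"GGUF"

              print(is_gguf("model.Q4_K_M.gguf"))  # hypothetical filename
              ```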

            • @Fisch@lemmy.ml
              2 points · 5 months ago

              The models you have should be .gguf files, right? I think those are the only ones where that’s supported.

    • Toes♀
      6 points · 5 months ago

      4 GB is practically nothing in this space. Ideally you want at least 10 GB of dedicated VRAM, if not more. Keep in mind you’re also probably sharing that VRAM with your operating system, so it’s more like ~3 GB before you’ve even started.

      Koboldcpp is capable of using your GPU and CPU together (via a feature called layers); you might wanna consider that. There’s a trade-off between the memory available, the quality of the output, and the speed of the calculation (rough arithmetic below).

      The model mentioned in this post can be run on the CPU with enough system RAM or swap.

      If you wanna keep it all on the GPU, check out 4-bit models. There’s also been a lot of work on running these on the Raspberry Pi, and I suspect that work could help you out here as well.
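
      Rough arithmetic for picking a layer split, assuming weights are spread evenly across layers (real layers vary in size, and the KV cache eats VRAM too):

      ```python
      # Estimate how many layers fit in a VRAM budget, assuming an even split.
      model_gb = 36        # e.g. a 70B-class model at 4-bit quantization
      n_layers = 80        # typical layer count for a 70B-class model
      vram_budget_gb = 3   # the ~3 GB left over on a 4 GB card
      gb_per_layer = model_gb / n_layers
      print(f"~{int(vram_budget_gb // gb_per_layer)} layers fit on the GPU")  # ~6
      ```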

    • @General_Effort@lemmy.world
      2 points · 5 months ago

      Depends on your needs. Best look around in !localllama@sh.itjust.works or similar. (I don’t wanna say Reddit, but r/localLlama is much larger.)

      If you’re more into creative writing, maybe look for places that discuss SillyTavern (r/SillyTavernAI is an option). It’s software for role-play chats, which may not be what you want. But the community is (relatively) large and likely to have good tips for non-coding/less technical applications.

    • DarkThoughts
      3 points · 5 months ago

      Since I had an okay experience with EasyDiffusion, I tried running text generation locally through oobabooga, but no matter which model I load, it just crashes whenever it tries to generate anything, regardless of whether it runs through the UI’s chat or SillyTavern. No error in the terminal either; it just stops and throws me back to the command line.

      • FaceDeer
        0 points · 5 months ago

        And at 72 billion parameters, it’s something you can run on a beefy but not special-purpose graphics card.

        • @glimse@lemmy.world
          6 points · 5 months ago

          Based on the other comments, it seems like this needs 4x as much RAM as any consumer card has.

          • FaceDeer
            4 points · 5 months ago

            It hasn’t been quantized, then. I’ve run 70B models on my consumer graphics card at a reasonably good tokens-per-second rate.

          • DarkThoughts
            2 points · 5 months ago

            I’m curious how local generation will go with potentially dedicated AI extensions using stuff like tensor cores and their own memory, instead of hijacking parts of consumer GPUs for this.