Nvidia reveals new A.I. chip, says costs of running LLMs will ‘drop significantly’

Currently, Nvidia dominates the market for AI chips, with over 80% market share, according to some estimates.

  • BobKerman3999@feddit.it · 1 year ago

    So yeah, Nvidia is mask off now: it’s 100% an AI hardware company with a side business in graphics chips.

      • BobKerman3999@feddit.it · 1 year ago

        Hmm, define “always”, because at the time of the Riva and TNT they were definitely a graphics-card-only company.

        • Even_Adder@lemmy.dbzer0.com · 1 year ago

          Yeah, I misspoke there, but for most of recent memory they’ve been doing big things besides consumer graphics cards. Nvidia launched its professionally oriented Quadro graphics product line in 2000. They launched the CUDA architecture in 2006, which opened up the parallel-processing capabilities of GPUs for use in science and research. They entered the data center and cloud computing market in the early 2010s, and in 2015 they launched the DRIVE product line.

          • BobKerman3999@feddit.it · 1 year ago (edited)

            Unfortunately I’ve known Nvidia since the big graphics-card wars with Matrox, 3dfx, ATI, and the guys that built the Kyro…

          • abhibeckert@lemmy.world · 1 year ago

            Companies definitely don’t care where their money comes from.

            They definitely do, and I think NVIDIA doesn’t want their money to come from customers who DIY-build PC towers.

            • knotthatone@lemmy.world · 1 year ago

              I don’t think NVIDIA minds the money they get from the DIY-builder market, but they get a lot more money from OEMs. They shouldn’t neglect the DIY market, though. If the enthusiasts stop recommending their GPUs, then the big OEMs will eventually drop them too.

              AI is just in a gold rush right now. Companies are throwing around piles of money to develop it.

    • Tatters@feddit.uk · 1 year ago

      And before that it was a bitcoin-mining company, with a sideline in gaming graphics hardware.

    • Haha@lemmy.world · 1 year ago

      It’s always been that way, whether it’s AI or something else. Nothing wrong with that.

    • Not_mikey@lemmy.world · 1 year ago

      It’s not all bad. Compared to crypto, the vector transformations done for ML are relatively similar to those done in graphics processing, so any innovations on the ML front will probably yield improvements in graphics.
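
      As a rough sketch of that similarity (mine, not the commenter’s, using NumPy as an assumed dependency): both a graphics vertex transform and a neural-network layer spend essentially all their time in the same primitive, a large matrix multiply.

```python
import numpy as np

# Graphics: transform a batch of 3D vertices by a 4x4 model-view matrix.
vertices = np.random.rand(1000, 4)        # homogeneous coordinates
model_view = np.random.rand(4, 4)
transformed = vertices @ model_view.T     # one large matrix multiply

# ML: a single dense layer applied to a batch of feature vectors.
inputs = np.random.rand(1000, 512)
weights = np.random.rand(512, 256)
activations = np.maximum(inputs @ weights, 0.0)   # matmul + ReLU

# Same core operation, so hardware built to speed up one
# tends to speed up the other.
```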

  • Zerfallen@lemmy.world · 1 year ago

    I’m sure the cost to the consumer will remain exactly the same, or somehow increase.

    • GenderNeutralBro@lemmy.sdf.org · 1 year ago

      I’m not worried about that. There will be open competition, because most of this stuff is open-source. Cheaper hardware will open the door for anyone like you or me to set up our own services. Anyone can set up a server with their own hardware (or rent it from Amazon or wherever) and run their own chatbot (with blackjack! and hookers!) instead of using ChatGPT.

      This is already possible on consumer hardware, just not with the biggest and best networks. Right now, if I wanted to run, say, BLOOM (an open-source LLM), I’d need to spend close to $100K on hardware. Obviously, that’s out of reach for a hobbyist, so I’m limited to using smaller, less advanced networks like LLaMA or GPT-J. Cheaper hardware will help break the hold that the big players currently have over the industry.
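
      As a purely illustrative sketch of the smaller-network route, here is roughly what running GPT-J locally can look like, assuming the Hugging Face transformers, accelerate, and torch packages and the EleutherAI/gpt-j-6b checkpoint (about 12 GB of fp16 weights), none of which the comment names beyond GPT-J itself:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model id; GPT-J is one of the smaller open models mentioned above.
model_id = "EleutherAI/gpt-j-6b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # halves memory versus fp32
    device_map="auto",          # spread layers across available GPU/CPU (needs accelerate)
)

prompt = "The main cost of running a large language model is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```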

      • abhibeckert@lemmy.world · 1 year ago (edited)

        if I wanted to run, say, BLOOM (an open-source LLM), I’d need to spend close to $100K on hardware

        Doesn’t that require dozens of nodes with over a terabyte of RAM each? And state-of-the-art networking?

        Sounds closer to $100M than $100K.

        • GenderNeutralBro@lemmy.sdf.org · 1 year ago

          If you want to train your own network like they did, you’d want something like that, yeah, but to run the trained network you “only” need ~360GB of memory.

          For context, even if you wanted to run this on CPU, there are currently no AM5 motherboards (Ryzen 7000 series) that support more than 192GB of memory. You literally can’t even run it on high-end consumer hardware.
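
          Rough arithmetic behind the ~360GB figure (my numbers, assuming fp16 weights and BLOOM’s published 176B parameter count):

```python
# Back-of-the-envelope memory estimate for serving BLOOM-176B.
params = 176e9            # published parameter count
bytes_per_param = 2       # fp16/bf16 weights
weights_gb = params * bytes_per_param / 1e9
print(f"weights alone: ~{weights_gb:.0f} GB")  # ~352 GB, before activations and the KV cache
```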

    • Random Dent@lemmy.ml · 1 year ago

      I’m still liking AMD. They’re not perfect, of course, but they seem to have far less fuckery going on than Intel and Nvidia, and they have open-source drivers that play nice with Linux.

      • dinckel@lemmy.world · 1 year ago

        I always have this thought in the back of my mind too, but the issue is that while AMD’s raw performance is a bit better than Nvidia’s counterparts, Nvidia still offers more features for the money, and I don’t always have money to throw away. Typically I’d upgrade my GPU once every 5 years or so.

    • leonardo_arachoo@lemm.ee · 1 year ago

      AI might not survive the next decade? I already use it every day at work. The productivity gains are enormous and far from saturated. I think it’s more likely that AI will survive and consumers (humans) will not survive.

      • Toribor@corndog.social · 1 year ago

        I think people simultaneously overestimate the capability of current machine learning models while underestimating their long-term impact. These models are going to be in everything. They are very resource-hungry and will absolutely be a driver of hardware innovation for the next decade and probably longer.

    • phillaholic@lemm.ee · 1 year ago

      How are they killing their consumer market? If they change their mind and put out a better GPU, people will buy it.

      • dinckel@lemmy.world · 1 year ago

        You’ve answered your own question. They used to release upgraded hardware with a reasonable generational boost almost yearly. Now the gap has widened, and they’re iterating on old hardware by giving it more juice and a larger cooler. Not to mention the astronomical prices: today’s mid-range cards cost more than previous top-end cards did.

        • phillaholic@lemm.ee · 1 year ago

          Not really; that’s not the right way to state it. They aren’t concerned with making money from the consumer market right now. “Killing it” implies it’s never coming back.