Four weeks ago, GPT-4 remained the undisputed champion: consistently at the top of every key benchmark, but more importantly the clear winner in terms of “vibes”. Almost everyone investing serious time exploring LLMs agreed that it was the most capable default model for the majority of tasks—and had been for more than a year.

Today that barrier has finally been smashed. We have four new models, all released to the public in the last four weeks, that are benchmarking near or even above GPT-4. And the all-important vibes are good, too!

Those models come from four different vendors.

  • @GBU_28@lemm.ee · 4 months ago

    There are many free LLMs and platforms for accessing them. You can download and permanently possess the actual model files and weights.

    There are open-source frameworks for running and interacting with these models fully locally.

      • @Gabu@lemmy.ml · 4 months ago

        The problem is that open models are nowhere close to something like GPT-4.

        Of course not, you’d need the same class of hardware running 24/7 to get similar results, and ain’t nobody paying for that.

      • @slacktoid@lemmy.ml · 4 months ago

        Agreed, but it's still a good tool that's available. You can use it to summarize large documents. Sure, it'll probably never be as capable as what elite money buys, but it's still worth playing with and learning how to use, IMHO.
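Summarizing a large document with a local model usually means splitting it to fit the model's context window, summarizing each piece, then summarizing the summaries. A minimal sketch of just the splitting step (the model call itself depends on whichever framework you run, so it is left out):

```python
# Split a long document into overlapping word-window chunks so each
# piece fits a small local model's context window. Overlap preserves
# a little continuity between adjacent chunks.

def chunk_text(text: str, max_words: int = 400, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks of at most max_words words."""
    words = text.split()
    if not words:
        return []
    chunks = []
    step = max_words - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
    return chunks

doc = "word " * 1000  # stand-in for a large document
chunks = chunk_text(doc)
print(len(chunks), "chunks")
```

Each chunk would then be fed to the local model in turn, with the per-chunk summaries concatenated for a final pass.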

      • @GBU_28@lemm.ee · 4 months ago

        I'll acknowledge that right now, to get model conclusions on par with GPT-4, you're going to need a custom pipeline with multiple adversarial models, RAG, and more. But it could all be built by an eager hobbyist with a strong gaming PC.

        To be clear, this approach will not benchmark the same as GPT-4, but it can indeed generate useful content.
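The RAG half of such a pipeline can be prototyped with plain term-overlap retrieval before swapping in real embeddings. A toy sketch of the control flow (the corpus and prompt format are illustrative, not from any particular framework):

```python
# Toy retrieval-augmented generation step: score each document by term
# overlap with the question, then stuff the best match into the prompt
# sent to the local model. Real pipelines use embedding similarity,
# but the retrieve-then-prompt flow is the same.
from collections import Counter

def retrieve(question: str, docs: list[str]) -> str:
    """Return the document sharing the most terms with the question."""
    q_terms = Counter(question.lower().split())
    def score(doc: str) -> int:
        return sum((q_terms & Counter(doc.lower().split())).values())
    return max(docs, key=score)

def build_prompt(question: str, docs: list[str]) -> str:
    """Assemble a context-stuffed prompt for a local model."""
    context = retrieve(question, docs)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

docs = [
    "llama.cpp runs GGUF model weights on consumer CPUs and GPUs",
    "GPT-4 is served only through a hosted API",
]
print(build_prompt("what runs GGUF weights locally", docs))
```

Swapping `retrieve` for an embedding-based nearest-neighbour search, and adding a second model to critique the first's answers, is where the "adversarial" part of the hobbyist pipeline would come in.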