I’ve recently been playing with the idea of self-hosting an LLM. I am aware that it will not reach GPT-4 levels, but being free to put confidential data into prompts without restraint is a very nice tool for me to have.

Has anyone got experience with this? Any recommendations? I have downloaded the full Reddit dataset, so I could retrain the model on it, since selected communities provide immense value and knowledge (hehe, this is exactly what Reddit, Twitter etc. are trying to prevent…).

  • supert@lemmy.fmhy.ml · 1 year ago

    Check out the localllama community. Lots of info there.

    I use oobabooga + exllama.

    Things are a bit budget-dependent. If you can afford an RTX 3090 off eBay, you can run some decent models (30B) at very good speed. I ended up with a 3090 + 4090. You can use system RAM with GGML, but it’s slow. A Mac M1 is not bad for this either.
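    To see why a 30B model fits on a 24 GB 3090, here’s a rough back-of-envelope sketch (my own rule of thumb, not from the thread): quantized weights take roughly params × bits / 8 bytes, and the 1.2× overhead factor for KV cache and activations is an assumption that varies with context length.

    ```python
    def est_vram_gb(n_params_b: float, bits: int = 4, overhead: float = 1.2) -> float:
        """Rough VRAM estimate in GB for an n_params_b-billion-parameter model.

        Weights only (params * bits / 8), scaled by an assumed overhead
        factor for KV cache and activations.
        """
        return n_params_b * bits / 8 * overhead

    print(round(est_vram_gb(30), 1))       # ~18 GB at 4-bit: fits a 24 GB 3090
    print(round(est_vram_gb(30, bits=16)))  # ~72 GB at fp16: needs multiple GPUs
    ```

    By this estimate, a 4-bit-quantized 30B model fits a single 3090 with headroom, while unquantized fp16 would not come close.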

    Where did you get the reddit dataset?