Hi,

I’m looking for something that can generate code and provide technical assistance at a level similar to GPT-4, or at least GPT-3.5. I’m generally satisfied with ChatGPT, but for privacy and security reasons I can’t share some details and code listings with OpenAI. Hence, I’m looking for a self-hosted alternative.

Any recommendations? If nothing specific comes to mind, what parameters should I look at in my search? I’ve never worked with LLMs before, and there are so many of them. I just know that I could use oobabooga/text-generation-webui to access a model in a friendly way.

Thanks in advance.

  • @yacgta · 1 year ago

    Specifically on what LLM to use, I’ve been meaning to try StarCoder, but can’t vouch for how good it is. In general I’ve found Vicuna-13B pretty good at generating code.

    As for general recommendations, I’d say the main determinant will be whether you can afford the hardware requirements for local hosting. The rule of thumb: at 16-bit precision a model takes 2 bytes per parameter, so you need roughly 2x the parameter count (in billions) in GB of VRAM (e.g. a 7B-parameter model needs about 14 GB). Quantization to 8 bits halves that requirement, and the more extreme 4-bit quantization halves it again (at the expense of generation quality).
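    The rule of thumb above is easy to sketch as a quick back-of-the-envelope calculator (weights only; actual usage is somewhat higher because of activations and context):

    ```python
    # Rough VRAM estimate for loading an LLM's weights locally.
    # fp16 stores 2 bytes per parameter, so a model needs ~2x its
    # parameter count (in billions) in GB of VRAM; quantizing to
    # 8 or 4 bits shrinks that proportionally. Real usage is a bit
    # higher due to activations and KV cache, which this ignores.

    def vram_gb(params_billion: float, bits: int = 16) -> float:
        """Approximate GB of VRAM to hold the weights at the given precision."""
        bytes_per_param = bits / 8
        return params_billion * bytes_per_param  # 1e9 params * bytes/param ≈ GB

    for bits in (16, 8, 4):
        print(f"7B model at {bits}-bit: ~{vram_gb(7, bits):.1f} GB")
    # 16-bit: ~14.0 GB, 8-bit: ~7.0 GB, 4-bit: ~3.5 GB
    ```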

    And if you don’t have enough VRAM, there’s always llama.cpp for CPU (or partially GPU-offloaded) inference - I think that list of supported models is outdated, and it supports far more than those.

    On the “what software to use for self-hosting” question, I’ve quite liked FastChat - they even have a way to run an OpenAI-API-compatible server, which is useful if your tools expect the OpenAI API.
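    The nice part of an OpenAI-compatible server is that existing clients only need the base URL changed. A minimal sketch with only the standard library - the port, endpoint path, and model name below are assumptions, so substitute whatever your server actually reports:

    ```python
    # Sketch of calling a local OpenAI-API-compatible endpoint
    # (e.g. FastChat's API server). BASE_URL and the model name are
    # assumptions - check your server's docs/startup logs.
    import json
    import urllib.request

    BASE_URL = "http://localhost:8000/v1"  # assumed local server address

    def chat_request(prompt: str, model: str = "vicuna-13b") -> dict:
        """Build the JSON body for a chat/completions call."""
        return {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.2,  # low temperature suits code generation
        }

    def send(body: dict) -> dict:
        """POST the body to the chat/completions endpoint and parse the reply."""
        req = urllib.request.Request(
            f"{BASE_URL}/chat/completions",
            data=json.dumps(body).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # With a server running, the response mirrors OpenAI's schema:
    #   send(chat_request("Write a hello-world in Python"))["choices"][0]["message"]
    ```

    Because the wire format matches OpenAI’s, tools that let you override the API base URL should work against it unchanged.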

    Hope this is helpful!