How to run the Vigogne model locally

Vigogne is an LLM based on Llama 2, fine-tuned on French data. Since I work mostly in French, it could be useful: the models I can currently run locally are English-only.

The information I found online is scarce and not easy to follow, so here is a step-by-step tutorial you can follow. I'm using pipenv almost everywhere now, it's so easy :-)

llm install -U llm-llama-cpp
llm llama-cpp add-model vigogne-2-7b-chat.Q4_K_M.gguf -a vigogne
llm models default vigogne
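With Vigogne set as the default model, you can try it straight from the command line. A quick sketch (this assumes you have already downloaded the vigogne-2-7b-chat.Q4_K_M.gguf file, for example from Hugging Face, before registering it with add-model):

```shell
# Send a one-off prompt; no -m flag needed since vigogne is the default model
llm "Quelle est la capitale de la France ?"

# Or open an interactive chat session with the model
llm chat -m vigogne
```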

#llm - Posted in the code category