Master's thesis (Masterarbeit) notes on running local models (LMs / LLMs)
TL;DR: when I get to it, I should look into exposing a port on Rancher and using that as my poor man’s OpenAI endpoint.
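A minimal sketch of what that could look like, assuming the exposed port serves an OpenAI-compatible API (llama.cpp’s built-in server does); the URL, key, and model name below are placeholders, not a real deployment:

```python
# Sketch: point an OpenAI-compatible client at a locally exposed port.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # the port exposed on Rancher would go here
    api_key="not-needed",                 # local servers typically ignore the key
)

resp = client.chat.completions.create(
    model="local-model",  # llama.cpp's server accepts an arbitrary model string
    messages=[{"role": "user", "content": "Say hello."}],
)
print(resp.choices[0].message.content)
```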
- Run LLMs locally | 🦜️🔗 Langchain has a good overview in general (rough sketch after this list).
- ggerganov/llama.cpp: Port of Facebook’s LLaMA model in C/C++ seems to be the one compatible with the most models + HF infrastructure
- simonw/llm-llama-cpp: LLM plugin for running models using llama.cpp is the `llm` plugin for this (Python-API sketch after this list)
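A rough sketch of the LangChain + llama.cpp route from the overview above, assuming `pip install langchain-community llama-cpp-python` and an already-downloaded GGUF file; the model path is a placeholder, and import paths shift between LangChain versions:

```python
# Sketch: run a local GGUF model through LangChain's llama.cpp wrapper.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./models/llama-2-7b.Q4_K_M.gguf",  # placeholder path to a GGUF file
    n_ctx=2048,       # context window size
    temperature=0.7,
)
print(llm.invoke("What is a GGUF file?"))
```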
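And the `llm` route through its Python API; this assumes the plugin is installed (`llm install llm-llama-cpp`) and a model has already been registered under an alias, here the hypothetical "llama2-chat":

```python
# Sketch of simonw's llm Python API; the alias "llama2-chat" is an assumption,
# created when downloading/registering a model via the llm-llama-cpp plugin.
import llm

model = llm.get_model("llama2-chat")  # hypothetical alias
response = model.prompt("Name three local LLM runtimes.")
print(response.text())
```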
UI
- Ask HN: Local LLM’s | Hacker News
- TheBloke (Tom Jobbins)’s HF profile is often mentioned as a place to get models (example download below)
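For illustration, pulling one of those quantized models with `huggingface_hub` could look like this; the repo and filename follow TheBloke’s usual naming pattern but aren’t verified here:

```python
# Sketch: download a quantized GGUF model from TheBloke's HF profile.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/Llama-2-7B-Chat-GGUF",   # example repo, unverified
    filename="llama-2-7b-chat.Q4_K_M.gguf",    # pick a quantization level
)
print(path)  # local cache path; usable as model_path for llama.cpp
```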
General
- RAG using local models | 🦜️🔗 Langchain has more info about downloading and installing local models etc. (condensed sketch below)
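A condensed sketch of that local-RAG pipeline, assuming `faiss-cpu`, `sentence-transformers`, and `llama-cpp-python` on top of LangChain; model names and paths are placeholders:

```python
# Sketch: local embeddings + in-memory vector store + local llama.cpp model.
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_community.llms import LlamaCpp

docs = [
    "llama.cpp runs GGUF models on CPU.",
    "LangChain wraps many local backends.",
]

# Embed the documents locally (the sentence-transformers model downloads from HF).
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
store = FAISS.from_texts(docs, embeddings)

# Retrieve context for a question, then ask the local model to answer from it.
question = "What runs GGUF models?"
context = "\n".join(d.page_content for d in store.similarity_search(question, k=1))

llm = LlamaCpp(model_path="./models/llama-2-7b.Q4_K_M.gguf")  # placeholder path
print(llm.invoke(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"))
```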
Models