The Llama.cpp Provider integrates locally running Llama.cpp models into Keep, supporting prompt-based queries against a local model. Before using it, make sure a Llama.cpp server is running locally with your desired model loaded.
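As a quick sanity check before wiring the provider into Keep, you can query the llama.cpp server's built-in HTTP API directly. The sketch below assumes the server is listening on llama.cpp's default address (`http://localhost:8080`) and uses its `/completion` endpoint; the example prompt and function name are illustrative, not part of Keep's API.

```python
# Minimal sketch: query a local llama.cpp server directly.
# Assumes the server was started with something like:
#   llama-server -m ./models/your-model.gguf --port 8080
import requests

LLAMA_CPP_URL = "http://localhost:8080"  # assumed default host/port


def query_llama_cpp(prompt: str, max_tokens: int = 128) -> str:
    """Send a prompt to llama.cpp's /completion endpoint and return the generated text."""
    response = requests.post(
        f"{LLAMA_CPP_URL}/completion",
        json={"prompt": prompt, "n_predict": max_tokens},
        timeout=60,
    )
    response.raise_for_status()
    # The server returns the generated text in the "content" field.
    return response.json()["content"]


if __name__ == "__main__":
    # Hypothetical example prompt.
    print(query_llama_cpp("Summarize: disk usage on host db-1 exceeded 90%."))
```

If this returns a completion, the server is reachable and Keep's Llama.cpp Provider can be pointed at the same address.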