Private Generative AI
Obtain models. For Huggingface models, we recommend:
Running larger models in 4-bit mode to save GPU memory (see the first sketch below)
Running GGUF format models with our HuggingfaceGenerativeLlamaCpp provider (see the second sketch below)
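A minimal sketch of 4-bit loading with the transformers and bitsandbytes libraries, assuming a CUDA GPU is available; the model id used here is only an example and any causal LM from the Huggingface Hub can be substituted.

```python
# Sketch: load a causal LM in 4-bit to reduce GPU memory use.
# Assumes `transformers`, `accelerate`, and `bitsandbytes` are installed;
# the model id below is an arbitrary example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example model id

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # quantize weights to 4 bits
    bnb_4bit_compute_dtype=torch.float16,  # compute in fp16 for speed
    bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",                     # place layers on available GPUs
)

inputs = tokenizer("Explain 4-bit quantization in one sentence:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```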
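The HuggingfaceGenerativeLlamaCpp provider builds on llama.cpp. Since its exact constructor is not shown here, the sketch below calls llama-cpp-python directly to run a GGUF file; the model path and generation parameters are placeholder assumptions.

```python
# Sketch: run a GGUF-format model with llama-cpp-python.
# The model path is a placeholder; download any GGUF file
# from the Huggingface Hub first.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to GPU; set 0 for CPU-only
)

result = llm(
    "Q: What is a GGUF file? A:",
    max_tokens=128,
    temperature=0.2,
    stop=["Q:"],       # stop before the model starts a new question
)
print(result["choices"][0]["text"])
```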
Troubleshooting