Description
Is your feature request related to a problem? Please describe.
Currently, the Knowledge Base only supports OpenAI for embeddings, which requires external API calls. This prevents fully offline/private RAG workflows, even though Sim already supports local LLMs via Ollama.
Describe the solution you'd like
Add embedding provider selection to Knowledge Base settings (similar to LLM provider selection):
- OpenAI (current default)
- Ollama (nomic-embed-text, mxbai-embed-large, etc.)
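To illustrate the idea, here is a minimal sketch of what provider-aware embedding requests could look like. The `EmbeddingProvider` type and `buildEmbeddingRequest` helper are hypothetical names for this issue, not part of Sim's codebase; only the endpoint/payload shapes (Ollama's `/api/embeddings` with `model`/`prompt`, OpenAI's `/v1/embeddings` with `model`/`input`) follow the respective documented APIs.

```typescript
// Hypothetical sketch of embedding provider selection.
type EmbeddingProvider = "openai" | "ollama";

interface EmbeddingRequest {
  url: string;
  body: Record<string, unknown>;
}

function buildEmbeddingRequest(
  provider: EmbeddingProvider,
  model: string,
  text: string,
  ollamaBaseUrl = "http://localhost:11434",
): EmbeddingRequest {
  if (provider === "ollama") {
    // Ollama's embeddings endpoint; the response is { "embedding": number[] }.
    return {
      url: `${ollamaBaseUrl}/api/embeddings`,
      body: { model, prompt: text },
    };
  }
  // OpenAI's embeddings endpoint; the response is
  // { "data": [{ "embedding": number[] }, ...] }.
  return {
    url: "https://api.openai.com/v1/embeddings",
    body: { model, input: text },
  };
}

// A fully local request, no external API involved:
const req = buildEmbeddingRequest("ollama", "nomic-embed-text", "hello world");
console.log(req.url); // http://localhost:11434/api/embeddings
```

Because both providers reduce to a URL plus a JSON body, the existing OpenAI path could stay the default while Ollama is selected per knowledge base, mirroring how LLM providers are chosen today.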
Describe alternatives you've considered
- Adding custom function blocks to the workflow: this works, but it requires custom coding for each knowledge base.
- Keeping the current OpenAI-only setup: it works, but it rules out offline/private deployments.
Additional context
Follows the existing architecture pattern where users bring their own models (Ollama/vLLM are already supported for LLMs)
Enables 100% offline RAG (local LLM + local embeddings)
Privacy-conscious users and enterprises need this for compliance