Hey Jenkins Community,
I’ve been contributing to the resources AI chatbot plugin for a while now and really enjoying it, and I had a thought: right now the chatbot is tied to a specific LLM setup. Would it make sense at some point to support local models through something like Ollama? The idea is that developers could run the chatbot entirely on their machine without needing cloud API keys, which might make it much easier to contribute to or demo the project locally. Before going further, though, there are a few things I don’t know:
- Is this something that’s already been discussed?
- Would a provider abstraction (so it’s easy to swap between Ollama, OpenAI, etc.) be useful? It’s a pattern I often use in my own projects.
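To make the second point concrete, here’s a rough sketch of what I mean by a provider abstraction. All names here are hypothetical and illustrative, not from the plugin’s actual codebase; a real Ollama provider would POST to Ollama’s local `/api/generate` endpoint (default `http://localhost:11434`), stubbed out below so the sketch stays self-contained:

```java
// Hypothetical sketch of a provider abstraction -- none of these class or
// method names come from the actual plugin; they're illustrative only.

interface ChatModelProvider {
    String complete(String prompt);
}

// A real implementation would POST to Ollama's local /api/generate
// endpoint (default http://localhost:11434); stubbed here for illustration.
class OllamaProvider implements ChatModelProvider {
    public String complete(String prompt) {
        return "[ollama] " + prompt;
    }
}

// Likewise, this would call the OpenAI chat completions API in practice.
class OpenAiProvider implements ChatModelProvider {
    public String complete(String prompt) {
        return "[openai] " + prompt;
    }
}

public class ProviderDemo {
    public static void main(String[] args) {
        // Provider selection could be driven by plugin configuration,
        // letting contributors default to a local model with no API key.
        ChatModelProvider provider = new OllamaProvider();
        System.out.println(provider.complete("hello"));
    }
}
```

The point is just that the rest of the chatbot would only ever see `ChatModelProvider`, so swapping backends becomes a configuration change rather than a code change.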
I don’t have strong opinions here at all, but I’m curious whether this direction aligns with where the project is headed. Happy to hear any thoughts!