Turn incoming chat messages into fast, helpful replies using a mix of cloud and local AI. Great for help desks or internal teams that need a responsive assistant that remembers recent context.
A chat event starts the flow. Messages can be handled by a simple LLM chain powered by a local Ollama model, or sent to DeepSeek using direct HTTP calls. There is also an AI Agent option with a memory window, so the bot can keep track of the last few messages in a conversation. The setup shows both JSON and raw-body calls to DeepSeek Chat V3 and the Reasoner model, plus a system message that guides tone and role. You can choose between local processing for cost control and cloud calls for higher capacity.
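For reference, the direct HTTP calls to DeepSeek follow its OpenAI-compatible chat completions format. The sketch below shows roughly what the workflow's HTTP Request node sends; the helper name, API key placeholder, and system prompt text are assumptions for illustration, not the template's exact values.

```python
import json

# DeepSeek's OpenAI-compatible chat completions endpoint.
DEEPSEEK_URL = "https://api.deepseek.com/chat/completions"

def build_deepseek_request(user_message, model="deepseek-chat", api_key="YOUR_API_KEY"):
    """Return the URL, headers, and JSON body for one chat completion call.

    model is "deepseek-chat" for Chat V3 or "deepseek-reasoner" for the
    Reasoner model. The system message mirrors the template's idea of a
    prompt that guides tone and role (exact wording is an assumption).
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a concise, friendly help-desk assistant."},
            {"role": "user", "content": user_message},
        ],
        "stream": False,  # one complete reply rather than a token stream
    }
    return DEEPSEEK_URL, headers, json.dumps(body)
```

In n8n you would paste the same JSON body into the HTTP Request node and keep the API key in a credential rather than in the body.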
You will need a DeepSeek API key and a running Ollama instance with the deepseek-r1 model pulled. Expect faster first replies, fewer repeated questions for agents, and more consistent answers. Useful for website chat, internal IT Q&A, and triage of common tickets. Follow the steps below to connect credentials, set model names, and run a quick end-to-end test.
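A quick end-to-end test of the local path can be done against Ollama's REST API. The sketch below assumes Ollama is listening on its default port (11434) and that the model has been pulled with `ollama pull deepseek-r1`; the function names are illustrative, not part of the template.

```python
import json
import urllib.request

# Ollama's chat endpoint on its default local port.
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_ollama_request(user_message, model="deepseek-r1"):
    """Build the JSON body Ollama's /api/chat endpoint expects."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": False,  # wait for one complete reply
    })

def quick_smoke_test():
    """Send one message and return the reply text, or None if Ollama is down."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_ollama_request("Say hello in one sentence.").encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=120) as resp:
            return json.loads(resp.read())["message"]["content"]
    except OSError:
        return None  # Ollama not running; start it and pull the model first
```

If `quick_smoke_test()` returns a reply, the local leg of the workflow is ready and you can move on to wiring the DeepSeek credential for the cloud leg.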
Ask in the Free Futurise Community.
These templates were sourced from publicly available materials across the web, including n8n’s official website, YouTube, and public GitHub repositories. We have consolidated and categorized them for easy search and filtering, and supplemented them with links to integrations, step-by-step setup instructions, and personalized support in the Futurise community.

Content in this library is provided for education, evaluation, and internal use. Users are responsible for checking and complying with the template author's license terms before commercial use or redistribution.

Where an original author was identified, attribution has been provided. Some templates did not include author information. If you know who created this template, please let us know so we can add the appropriate credit and reference link. If you are the author and would like this template removed from the library, email us at info@futurise.com and we will remove it promptly.