Turn your chat inbox into a smart help desk. Messages are captured and answered by AI so customers get fast, clear replies. Ideal for teams that want responsive support without complex tools.
A chat message starts the flow. The input can be routed to a simple LLM Chain that uses a local Ollama DeepSeek model with a large context window, or to an AI Agent with a fixed system message and a window buffer memory, powered by DeepSeek's OpenAI-compatible API. Two HTTP Request nodes show direct calls to the DeepSeek endpoint, one with a JSON body and one with a raw body. You can switch between local and cloud models to balance speed, privacy, and cost.
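To make the HTTP Request nodes concrete, here is a minimal sketch of the request they would send to DeepSeek's OpenAI-compatible chat-completions endpoint. The URL, model name, and system prompt below are assumptions for illustration; match them to your own DeepSeek account and node configuration.

```python
import json
import os

# Assumed endpoint and model name for DeepSeek's OpenAI-compatible API;
# verify both against your DeepSeek account documentation.
DEEPSEEK_URL = "https://api.deepseek.com/chat/completions"

def build_request(user_message, model="deepseek-chat"):
    """Build the URL, headers, and JSON body an n8n HTTP Request node would send."""
    headers = {
        "Content-Type": "application/json",
        # The API key comes from an environment variable here; in n8n it
        # would come from a stored credential instead.
        "Authorization": f"Bearer {os.environ.get('DEEPSEEK_API_KEY', '')}",
    }
    body = {
        "model": model,
        "messages": [
            # Hypothetical system prompt standing in for the agent's fixed one.
            {"role": "system", "content": "You are a helpful support agent."},
            {"role": "user", "content": user_message},
        ],
    }
    return DEEPSEEK_URL, headers, json.dumps(body)

url, headers, payload = build_request("How do I reset my password?")
print(payload)
```

The same body works for both HTTP nodes; the "raw body" variant simply sends the serialized JSON string instead of letting the node build it from fields.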
You will need a DeepSeek API key or a running Ollama server with the deepseek-r1 model. Set credentials in n8n, choose the model in each node, and test a message in the chat UI. Expect faster replies, lower costs for common questions, and more consistent answers because the window memory keeps context across turns. Use it for FAQs, triage before handoff, or after-hours self-service.
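For the local path, the LLM Chain talks to Ollama's REST API instead. A minimal sketch of that request follows, assuming Ollama's default port (11434) and the `deepseek-r1` model pulled locally; the `num_ctx` value is an illustrative way to request a larger context window and should be tuned to your hardware.

```python
import json

# Assumed default Ollama address; change it if your server runs elsewhere.
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_ollama_request(user_message, model="deepseek-r1"):
    """Build the JSON body for a non-streaming Ollama chat request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": False,  # return one complete reply instead of chunks
        "options": {"num_ctx": 8192},  # enlarged context window (assumption)
    }

body = build_ollama_request("Summarize our refund policy.")
print(json.dumps(body))

# To actually send it (requires a running Ollama server):
# import urllib.request
# req = urllib.request.Request(OLLAMA_URL, data=json.dumps(body).encode(),
#                              headers={"Content-Type": "application/json"})
# reply = json.loads(urllib.request.urlopen(req).read())["message"]["content"]
```

Because Ollama runs on your own machine, no API key is needed; the trade-off against the cloud endpoint is local hardware cost versus per-token pricing and data leaving your network.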
Ask in the Free Futurise Community.
These templates were sourced from publicly available materials across the web, including n8n's official website, YouTube, and public GitHub repositories. We have consolidated and categorized them for easy search and filtering, and supplemented them with links to integrations, step-by-step setup instructions, and personalized support in the Futurise community. Content in this library is provided for education, evaluation, and internal use. Users are responsible for checking and complying with the license terms with the author of the templates before commercial use or redistribution. Where an original author was identified, attribution has been provided. Some templates did not include author information. If you know who created this template, please let us know so we can add the appropriate credit and reference link. If you are the author and would like this template removed from the library, email us at info@futurise.com and we will remove it promptly.