Give your team a simple AI helper inside Slack. Users type a slash command and get an answer in the same channel within seconds. This is useful for internal support, FAQs, and quick guidance without leaving Slack.
A Webhook node receives the slash-command payload and immediately returns an empty 2xx acknowledgment (such as 204 No Content) so Slack does not time out — Slack expects a response within three seconds. A Switch node routes requests by command, so you can handle different intents such as ask or help on separate paths. The Basic LLM Chain sends the user's text to an OpenAI chat model and composes a clear reply. The Slack node then posts the response back to the channel that issued the command, keeping the conversation in one place.
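As a rough illustration of what the webhook and Switch steps do outside of n8n, the sketch below parses Slack's form-encoded slash-command payload and routes by command. The field names (command, text, channel_id) come from Slack's slash-command payload; the path labels and example values are hypothetical.

```python
from urllib.parse import parse_qs

def parse_slash_payload(body: str) -> dict:
    """Pull out the fields this workflow uses from Slack's
    form-encoded slash-command payload."""
    fields = parse_qs(body)
    return {key: fields.get(key, [""])[0] for key in ("command", "text", "channel_id")}

def route(payload: dict) -> str:
    """Mimic the Switch node: pick a handling path per command,
    with a fallback for unknown commands."""
    paths = {"/ask": "llm_chain", "/help": "static_help"}
    return paths.get(payload["command"], "fallback")

# Example body as Slack would POST it (values are made up).
body = "command=%2Fask&text=What+is+our+VPN+policy%3F&channel_id=C012AB3CD"
payload = parse_slash_payload(body)
print(route(payload))  # → llm_chain
```

In the real workflow this logic lives in the Webhook and Switch nodes; the sketch only shows the shape of the data flowing between them.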
Setup is straightforward. Create a Slack app with the commands and chat:write scopes, install it in your workspace, and point the slash command's request URL at the n8n webhook URL. Connect your OpenAI API key in n8n and pick the model. Teams typically cut answer time from minutes to seconds and reduce context switching. Ideal uses include support triage, policy lookups, and drafting quick responses to common questions.
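Once configured, the final step is equivalent to a chat.postMessage call authorized with the bot token that the chat:write scope grants. The sketch below assembles that request without sending it; the token and reply text are placeholders, while the endpoint and the channel/text parameters are Slack's documented API.

```python
import json

SLACK_POST_MESSAGE_URL = "https://slack.com/api/chat.postMessage"

def build_post_message(bot_token: str, channel_id: str, reply: str) -> dict:
    """Assemble the HTTP request the Slack node effectively makes:
    post the model's reply back to the originating channel."""
    return {
        "url": SLACK_POST_MESSAGE_URL,
        "headers": {
            "Authorization": f"Bearer {bot_token}",
            "Content-Type": "application/json; charset=utf-8",
        },
        "body": json.dumps({"channel": channel_id, "text": reply}),
    }

# Placeholder token and reply, just to show the request shape.
request = build_post_message("xoxb-example-token", "C012AB3CD", "Here is the policy summary...")
```

Because the message targets the channel_id captured from the original payload, the answer lands exactly where the question was asked.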
Ask in the Free Futurise Community.
Credits: YouTube video. These templates were sourced from publicly available materials across the web, including n8n's official website, YouTube, and public GitHub repositories. We have consolidated and categorized them for easy search and filtering, and supplemented them with links to integrations, step-by-step setup instructions, and personalized support in the Futurise community. Content in this library is provided for education, evaluation, and internal use. Users are responsible for reviewing and complying with each template author's license terms before commercial use or redistribution. If you are the author and would like this template removed from the template library, email us at info@futurise.com and we will remove it promptly.