What Is n8n and Why Version Updates Matter
If you run business automations -- lead nurture sequences, customer support bots, content pipelines -- and something breaks silently at 2 AM, you understand why platform stability matters. n8n is an open-source workflow automation platform that lets you connect APIs, AI models, databases, and SaaS tools through a visual node-based editor. Think of it as the plumbing behind your automated operations.
Version updates in n8n are not just feature announcements. They are operational events. A bug fix in how chat memory resolves expressions can be the difference between your AI chatbot remembering a customer's name and starting every conversation from scratch. That is why we track every release closely at ORBWEVA, where n8n powers our entire AER automation system.
n8n 2.9.0, released on February 16, 2026, landed with 29 bug fixes, 29 new features, and a performance improvement. Here is what actually matters for people building production workflows.
AI Builder Improvements: What Changed and Why It Matters
The AI Builder is n8n's visual tool for constructing AI agent workflows -- chains of LLM calls, tool usage, memory retrieval, and output formatting. In 2.9.0, it received several targeted fixes and new capabilities.
Code-Builder Evaluation Fixes (PR #25726)
The code-based workflow builder had issues with how it evaluated generated code. This fix addresses edge cases where the builder would produce syntactically valid but logically incorrect node configurations. If you have been using the AI Builder to generate workflows from natural language descriptions, this makes the output more reliable.
Single-Node AI Agent Execution Now Runs Tools (PR #25709)
Previously, if you executed a single AI Agent node in isolation (useful for testing), it would not actually invoke any connected tools. This was a significant testing gap -- you had to run the entire workflow to verify that your agent correctly called tools like calculators, API lookups, or database queries. Now single-node execution respects tool connections, which makes iterative development much faster.
LangChain Prompt Template Escaping (PR #25821)
Curly braces in LangChain prompt templates were causing parsing errors. If your prompts included JSON examples or template literals with curly braces, they would be misinterpreted as LangChain variables. The fix escapes these properly, which eliminates a frustrating class of "my prompt works in the playground but breaks in n8n" bugs.
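To see why this bites, here is a minimal sketch of the escaping idea in JavaScript. This is not n8n's actual implementation (a real implementation must skip braces that are intended template variables); it only shows why literal braces in a prompt must be doubled before LangChain parses the template.

```javascript
// LangChain treats {name} as a template variable, so literal braces in
// JSON examples must be doubled ({{ and }}) to survive template parsing.
// Naive sketch: escapes ALL braces, including ones you might want to keep
// as variables -- n8n's fix has to be smarter than this.
function escapeLangChainBraces(text) {
  return text.replace(/{/g, '{{').replace(/}/g, '}}');
}

const jsonExample = 'Respond in this shape: {"intent": "qualify", "score": 0.9}';
const escaped = escapeLangChainBraces(jsonExample);
// The braces are now literal text, not template variables:
// 'Respond in this shape: {{"intent": "qualify", "score": 0.9}}'
```

If your prompts embed JSON output schemas, this doubling is what keeps them from being misread as missing template variables.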
OpenAI 429 Quota Error Handling (PR #25825)
When your OpenAI account hits rate limits or quota caps, the AI Builder now handles the 429 response gracefully instead of throwing an unhandled error. This is particularly important for production workflows where a temporary rate limit should trigger a retry, not crash the entire execution.
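The underlying pattern is worth knowing even outside the AI Builder. Here is an illustrative retry-with-backoff sketch in JavaScript -- not n8n's internal code -- showing the behavior you want on a 429: retry the transient rate limit, fail fast on everything else.

```javascript
// Retry a function on 429 (rate limit) errors with exponential backoff.
// Any other error, or exhausting the retry budget, rethrows immediately.
async function callWithRetry(fn, maxRetries = 3, baseDelayMs = 1000) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const isRateLimit = err.status === 429;
      if (!isRateLimit || attempt >= maxRetries) throw err;
      // Exponential backoff: 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}
```

You can apply the same pattern in your own Code nodes when calling external APIs that rate-limit.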
Introspection Diagnostic Tool (PR #25172)
A new diagnostic tool has been added specifically for the AI workflow builder. This lets you inspect what the builder "sees" -- which nodes are available, what context it has, and how it plans to structure a workflow. Useful for debugging when the builder produces unexpected results.
Focused Nodes Context (PR #25617, #25452)
The new Focused Nodes feature lets you highlight specific nodes and pass that context to the planner agent. Instead of the AI builder considering your entire workflow, you can direct its attention to a specific section. This improves the quality of AI-generated modifications on large, complex workflows.
Chat Memory Enhancements: The Technical Details
The headline change here is deceptively simple but has significant implications.
Per-Item Expression Resolution (PR #25570)
The Chat Memory Manager Node previously resolved sub-node expressions using only item 0, regardless of which item was being processed. In practical terms: if your workflow processed multiple chat sessions in a batch (common in queue-based architectures), every session would get the memory context from the first session in the batch.
This fix makes the Chat Memory Manager resolve expressions per item. Each chat session now correctly retrieves its own memory context. If you are running a support bot that handles multiple concurrent conversations through a webhook queue, this is the fix that prevents Customer A from receiving responses based on Customer B's conversation history.
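A simplified illustration of the before/after behavior -- this is not n8n source code, just the resolution logic reduced to a map over items:

```javascript
// Two chat sessions arriving in one batch, n8n-style items.
const items = [
  { json: { sessionId: 'customer-a', message: 'Where is my order?' } },
  { json: { sessionId: 'customer-b', message: 'Cancel my plan.' } },
];

// Stand-in for resolving a session-key expression like {{ $json.sessionId }}.
const resolveSessionKey = (item) => item.json.sessionId;

// Old behavior: every item resolved against item 0.
const before = items.map(() => resolveSessionKey(items[0]));
// -> ['customer-a', 'customer-a']  (customer B reads customer A's memory)

// 2.9.0 behavior: each item resolves against itself.
const after = items.map((item) => resolveSessionKey(item));
// -> ['customer-a', 'customer-b']
```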
Chat Hub Deadlock Fix (PR #25654)
For self-hosted n8n instances running on Postgres with a connection pool size of 1 (common in small deployments), the Chat hub could deadlock. The fix ensures proper connection handling, which matters for anyone running n8n on modest infrastructure.
Broader Tool Support on Chat Hub (PR #25571)
The Chat hub now supports most tools, not just a limited subset. This means you can build more capable conversational workflows that leverage a wider range of n8n's tool ecosystem directly within chat interactions.
How We Use n8n at ORBWEVA
Our entire client automation stack -- what we call the AER system (Acquire, Engage, Retain) -- runs on self-hosted n8n. Here is what that looks like in practice.
Acquire: Lead Capture and Nurture
Our ARC bot is built as an n8n webhook workflow. When a visitor interacts with the chat widget on a client's website, the webhook triggers an AI Agent node connected to GPT-4o with a system prompt tailored to the client's business. The conversation is logged to a Google Sheet, and qualified leads are automatically enrolled in a Brevo email nurture sequence -- all without human intervention.
The chat memory fix in 2.9.0 directly impacts this. We process multiple chat sessions through batch webhooks, and the per-item expression resolution ensures each visitor gets contextually accurate responses.
Engage: Content Pipelines
Our blog content pipeline is a multi-workflow system. A scheduler workflow selects topics from a Supabase queue, triggers a content generation workflow (EN, JA, KO -- one per language), generates hero images via AI, and commits the results to our Next.js codebase. The LangChain prompt escaping fix matters here because our content generation prompts include JSON structure examples with curly braces.
Retain: Client Health Monitoring
A daily health score calculator workflow checks the status of all active client workflows, aggregates metrics, and flags anomalies. The OpenAI quota handling improvement in 2.9.0 is relevant because our health scoring uses GPT-4o for anomaly classification, and a rate limit error should not crash the entire monitoring pipeline.
Setting Up a Chat Workflow in n8n
Here is a practical walkthrough for building a conversational AI workflow in n8n, incorporating the 2.9.0 improvements.
Step 1: Create the Trigger
Start with a Chat Trigger node. This creates a webhook endpoint and provides the built-in chat widget you can embed on any page. Configure it with:
- Authentication: None for testing, webhook header auth for production
- Initial messages: Set a greeting that establishes context
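If you call the chat webhook from your own frontend rather than the built-in widget, the request shape looks roughly like this. The field names (sessionId, chatInput) follow the Chat Trigger's typical payload, but treat the URL and header name as placeholders for your own configuration:

```javascript
// Build the fetch request a custom frontend would send to a Chat Trigger
// webhook. The auth header name is whatever you configured on the trigger.
function buildChatRequest(webhookUrl, sessionId, message, authToken) {
  return {
    url: webhookUrl,
    options: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        // Include the shared secret only when header auth is enabled.
        ...(authToken ? { 'X-Webhook-Auth': authToken } : {}),
      },
      body: JSON.stringify({ sessionId, chatInput: message }),
    },
  };
}

// Usage: const { url, options } = buildChatRequest(...); await fetch(url, options);
```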
Step 2: Add Memory
Connect a Window Buffer Memory node (or Postgres Chat Memory for persistence). This is where the 2.9.0 per-item fix applies. Configure:
- Session ID: Use an expression like {{ $json.sessionId }} to isolate conversations
- Context window: 10-20 messages is usually sufficient; more increases token costs without proportional quality gains
Step 3: Configure the AI Agent
Add an AI Agent node and connect it to your memory node. Set up:
- Model: Connect an OpenAI or Anthropic chat model sub-node
- System prompt: Be specific. "You are a helpful assistant" produces generic responses. Instead: "You are a product specialist for [Company]. You help visitors understand [specific products]. When a visitor expresses interest, collect their email and company name."
- Tools: Connect relevant tool sub-nodes (Calculator, HTTP Request for API lookups, or custom Code nodes)
Step 4: Add Post-Processing
After the AI Agent, add nodes to:
- Log the conversation -- a Google Sheets or Supabase node to record session data
- Detect intent -- a Code node that checks for lead qualification signals in the response
- Trigger follow-up -- an IF node that routes qualified leads to your CRM or email platform
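The intent-detection step can be a few lines in a Code node. Here is a hedged sketch: the signal phrases are examples, not a fixed n8n feature, and it assumes the agent's reply sits in a field named response (adjust to your actual output field). In n8n Code nodes, incoming data is available as items, and each item's payload lives under json.

```javascript
// Keyword heuristic for lead qualification. Signals: pricing or demo
// requests, asking for sales, or an email address in the text.
const QUALIFICATION_SIGNALS = [
  /pricing/i,
  /demo/i,
  /talk to (sales|someone)/i,
  /@[\w.-]+\.\w+/, // crude email-address pattern
];

function detectQualification(items) {
  return items.map((item) => {
    const text = item.json.response ?? '';
    const qualified = QUALIFICATION_SIGNALS.some((re) => re.test(text));
    // Keep the original payload, add the flag for the IF node downstream.
    return { json: { ...item.json, qualified } };
  });
}

// In an n8n Code node you would end with: return detectQualification(items);
```

The IF node in the next step then routes on the qualified flag.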
Step 5: Test with Focused Nodes
Use the new Focused Nodes feature to test sections in isolation. Highlight your AI Agent and memory nodes, then run a single-node execution to verify tool calling works correctly (now possible thanks to PR #25709).
n8n vs Competitors: The AI Feature Comparison
The automation platform market has consolidated around three main players for AI workflows. Here is how they compare as of early 2026.
n8n: Open-Source, Self-Hostable, Full Control
n8n's core advantage is that you can self-host it. Your data, your prompts, your API keys -- all stay on your infrastructure. For businesses handling sensitive customer data or operating under regulations like GDPR, this is not optional; it is a requirement.
The AI Builder in n8n gives you direct access to LangChain primitives. You can build agents with custom tool chains, configure memory strategies, and chain multiple LLM calls with full visibility into what is happening at each step. The 2.9.0 release pushes this further with the introspection diagnostic tool and focused nodes.
Trade-off: Higher setup complexity. You need to manage your own infrastructure, updates, and backups.
Zapier: Largest Integration Library, Limited AI Depth
Zapier has the broadest integration catalog (8,000+ apps) and recently launched AI Agents. Their approach prioritizes ease of use -- you can build basic AI workflows without understanding LangChain or prompt engineering. But the abstraction comes at a cost: limited control over memory strategies, no self-hosting option, and per-task pricing that scales poorly for high-volume workflows.
Make: Visual Builder, Mid-Range Flexibility
Make (formerly Integromat) offers a strong visual builder and better pricing than Zapier for complex scenarios. Their AI integration is growing but still trails n8n in terms of LangChain-native features. Make is cloud-only, so the data sovereignty question applies here too.
The Bottom Line
If you need maximum control over AI workflows and handle sensitive data, n8n is the clear choice. If you need breadth of integrations and minimal setup time, Zapier wins. Make sits in between, offering good visual building at reasonable pricing. We detailed this comparison in depth in our n8n vs Zapier analysis.
What to Watch in Future n8n Releases
Based on the trajectory of recent releases and the n8n roadmap, here are developments worth tracking.
MCP (Model Context Protocol) Integration
n8n 2.9.0 includes a fix for the MCP toggle in workflow settings (PR #25630), signaling active development on MCP support. MCP is Anthropic's open protocol for connecting AI models to external tools and data sources. Native MCP support in n8n would let you expose any workflow as an MCP tool, making your automations accessible to any MCP-compatible AI agent.
External Secrets Store Maturity
Five separate PRs in 2.9.0 relate to external secrets management -- exposing store configuration, enabling deletion, UX improvements, and event logging. This indicates a push toward enterprise-grade credential management, which matters for teams running n8n across multiple environments.
Node.js 24 as Default (PR #25707)
The update to Node.js 24 as the default runtime brings performance improvements and access to newer JavaScript APIs in Code nodes. If you self-host, plan your upgrade path accordingly.
Code-Based Workflow Builder Convergence
The merge of "Ask" and "Build" functionality (PR #25681) suggests n8n is moving toward a unified AI assistant that can both answer questions about your workflow and modify it. This could significantly reduce the learning curve for new users while making experienced builders more productive.
FAQ
How do I upgrade to n8n 2.9.0 on a self-hosted instance?
If you are running n8n via Docker (the most common self-hosted setup), update your docker-compose.yml to reference the n8nio/n8n:2.9.0 image tag and run docker compose pull && docker compose up -d. Always back up your database before upgrading. If you use the n8n desktop app, it will prompt you to update automatically. Check the n8n release notes for any breaking changes specific to your node versions.
Does the chat memory fix in 2.9.0 affect existing workflows, or only new ones?
The per-item expression resolution fix (PR #25570) applies to all workflows using the Chat Memory Manager Node, including existing ones. You do not need to modify your workflows to benefit from it. However, if you previously implemented workarounds for the item-0-only behavior (such as processing chat sessions one at a time instead of in batches), you can now remove those workarounds and process sessions concurrently.
Can I use n8n's AI Builder with models other than OpenAI?
Yes. n8n's AI nodes support multiple LLM providers through dedicated sub-nodes: OpenAI, Anthropic (Claude), Google Gemini, Ollama (for local models), Azure OpenAI, Mistral, and others. The AI Builder itself uses an LLM to generate workflows, but the workflows it generates can use any supported model. The OpenRouter Chat Model Node, which also received a fix in 2.9.0 (PR #25731), gives you access to dozens of additional models through a single integration point.
