How to Build an AI-Powered Intercom Support Agent with n8n (Free Template)
Customer support teams drown in repetitive questions while urgent issues wait in queue. This n8n workflow solves that by automatically classifying incoming Intercom conversations, generating intelligent responses using AI, and knowing exactly when to hand off to human agents. You'll learn how to build a complete support automation system that handles routine inquiries while escalating complex cases.

The Problem: Support Teams Can't Scale Without Losing Quality

Current challenges:

  • Support agents spend 60-70% of their time answering the same basic questions
  • Complex issues get buried in high-volume ticket queues
  • Response times suffer during peak hours or off-hours
  • Manual ticket classification creates inconsistent routing
  • No systematic way to identify when AI should step back

Business impact:

  • Time spent: 15-25 hours per week on repetitive questions per agent
  • First response time: 2-4 hours for routine inquiries that could be instant
  • Customer satisfaction drops when simple questions take hours to answer
  • Agent burnout from repetitive work instead of solving challenging problems

The Solution Overview

This n8n workflow creates an intelligent support layer between customers and your team. When a new conversation starts in Intercom, the workflow captures the message, uses AI to classify the inquiry type and sentiment, generates a contextual response, and decides whether to send it automatically or flag for human review. The system handles account questions, billing inquiries, and technical issues differently based on complexity and customer sentiment. It integrates OpenAI for natural language processing, Intercom's API for conversation management, and custom logic nodes to enforce business rules about when automation should step back.

What You'll Build

| Component | Technology | Purpose |
|---|---|---|
| Trigger System | Intercom Webhook | Captures new conversations in real time |
| Classification Engine | OpenAI GPT-4 | Categorizes inquiry type and urgency |
| Sentiment Analysis | OpenAI API | Detects frustrated or angry customers |
| Response Generator | OpenAI with Custom Prompts | Creates contextual draft replies |
| Decision Logic | n8n Function Nodes | Determines auto-send vs human review |
| Response Delivery | Intercom API | Posts replies or assigns to agents |
| Escalation System | Intercom Assignment | Routes complex cases to specialists |

Key capabilities:

  • Automatic classification of support inquiries into 8+ categories
  • Sentiment detection to catch frustrated customers before they escalate
  • Context-aware response generation using conversation history
  • Intelligent handoff rules based on complexity, sentiment, and topic
  • Human-in-the-loop for edge cases and sensitive issues
  • Automatic tagging and routing to specialized teams

Prerequisites

Before starting, ensure you have:

  • n8n instance (cloud or self-hosted version 1.0+)
  • Intercom workspace with Admin access
  • OpenAI API account with GPT-4 access
  • Intercom API key with conversation read/write permissions
  • Basic understanding of webhook configurations
  • JavaScript knowledge for custom function nodes

Step 1: Configure Intercom Webhook Trigger

This phase establishes the connection between Intercom and your n8n workflow, ensuring every new conversation triggers your automation.

Set up the webhook receiver:

  1. Add a Webhook node as your workflow trigger
  2. Set HTTP Method to POST
  3. Copy the production webhook URL from n8n
  4. Configure response mode to "Respond Immediately" with 200 status

Configure Intercom to send events:

  1. Navigate to Intercom Settings → Developers → Webhooks
  2. Create new webhook subscription
  3. Subscribe to "conversation.user.created" event
  4. Paste your n8n webhook URL
  5. Add "conversation.user.replied" for follow-up messages

Node configuration:

{
  "httpMethod": "POST",
  "path": "intercom-support",
  "responseMode": "onReceived",
  "options": {}
}

Why this works:
Intercom fires webhooks within 1-2 seconds of conversation creation. The immediate response prevents timeout errors while your workflow processes in the background. This architecture handles 100+ simultaneous conversations without blocking.

Step 2: Extract and Validate Conversation Data

The webhook payload contains nested conversation data. You need to parse it and validate that you have everything required for AI processing.

Parse the incoming data:

  1. Add a Function node after the webhook
  2. Extract conversation ID, customer message, and metadata
  3. Validate required fields exist
  4. Format data structure for downstream nodes

Function node code:

const payload = $input.item.json.body;
const item = payload.data.item;

// The initial customer message may arrive in item.source.body rather than
// conversation_parts, depending on the event type — handle both.
const parts = item.conversation_parts?.conversation_parts || [];

return {
  conversationId: item.id,
  customerId: item.user.id,
  customerEmail: item.user.email,
  message: parts[0]?.body || item.source?.body || '',
  conversationUrl: item.links.conversation_web,
  timestamp: item.created_at
};

Validation checks:

  • Conversation ID exists and is numeric
  • Message body is not empty
  • Customer ID is valid
  • Timestamp is within last 60 seconds (prevents replay attacks)
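The checks above can be wrapped in a small helper inside the same Function node. This is a minimal sketch assuming the flat field names produced by the extraction code (conversationId, message, customerId, and timestamp as a Unix-seconds value) — adjust to your actual payload:

```javascript
// Validation helper mirroring the four checks above. Returns a result
// object so downstream nodes can log why a webhook was rejected.
function validateConversation(data) {
  const errors = [];
  if (!data.conversationId || !/^\d+$/.test(String(data.conversationId))) {
    errors.push('conversationId missing or not numeric');
  }
  if (!data.message || String(data.message).trim() === '') {
    errors.push('message body is empty');
  }
  if (!data.customerId) {
    errors.push('customerId missing');
  }
  // created_at is a Unix timestamp in seconds; reject events older than 60s
  const ageSeconds = Date.now() / 1000 - Number(data.timestamp);
  if (!Number.isFinite(ageSeconds) || ageSeconds > 60) {
    errors.push('timestamp too old (possible replay)');
  }
  return { valid: errors.length === 0, errors };
}
```

On a failed check, route the item to an error branch rather than throwing, so malformed webhooks are logged instead of silently dropped.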

Why this approach:
Intercom's webhook payload structure changes between conversation types. Extracting to a flat structure now prevents node failures later. The validation catches malformed webhooks before you waste API calls.

Step 3: Classify Inquiry with AI

Use OpenAI to categorize the support request and detect sentiment. This determines routing and response strategy.

Configure OpenAI classification:

  1. Add OpenAI node set to Chat model
  2. Use GPT-4 for accuracy (GPT-3.5-turbo for cost savings)
  3. Set temperature to 0.1 for consistent classifications
  4. Structure output as JSON for reliable parsing

Classification prompt:

Analyze this customer support message and return JSON with these fields:

category: One of [account_access, billing, technical_issue, feature_request, general_question, complaint, refund_request, integration_help]
urgency: One of [low, medium, high, critical]
sentiment: One of [positive, neutral, negative, angry]
confidence: Number 0-100 indicating classification confidence
requires_human: Boolean - true if issue is complex or customer is frustrated

Customer message: {{$json.message}}

Node configuration:

{
  "model": "gpt-4",
  "temperature": 0.1,
  "maxTokens": 200,
  "options": {
    "responseFormat": "json_object"
  }
}

Why this works:
Temperature of 0.1 ensures consistent categorization across similar messages. JSON output format eliminates parsing errors from free-form text. The confidence score lets you flag uncertain classifications for human review.

Step 4: Generate Contextual Response

Based on the classification, create an appropriate draft response using AI with category-specific instructions.

Set up response generation:

  1. Add a Switch node to route by category
  2. Create separate OpenAI nodes for each category type
  3. Use category-specific system prompts
  4. Include conversation context and customer history

Example prompt for billing inquiries:

You are a helpful support agent for [Company]. Generate a professional response to this billing question.

Guidelines:
- Be specific about billing cycles and payment methods
- Include links to billing documentation
- Offer to escalate to billing team if needed
- Keep response under 150 words
- Use friendly but professional tone

Customer question: {{$json.message}}
Category: {{$json.category}}
Customer sentiment: {{$json.sentiment}}

Critical configuration:

  • Max tokens: 300 (prevents overly long responses)
  • Temperature: 0.7 (balances consistency with natural language)
  • Stop sequences: None (let model complete thoughts)

Variables to customize:

  • Company name and tone guidelines
  • Documentation links specific to your product
  • Escalation criteria based on your support structure
  • Response length based on channel (shorter for chat, longer for email)
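As an alternative to separate OpenAI nodes per category, a single Function node can select the system prompt before one shared OpenAI call. A sketch assuming the classification fields are already on the item — the prompt texts, the SYSTEM_PROMPTS map, and the [Company] placeholder are illustrative, not part of the template:

```javascript
// Pick a category-specific system prompt; anything without a dedicated
// prompt falls back to a generic support persona.
const SYSTEM_PROMPTS = {
  billing: 'You are a billing support agent for [Company]. Be specific about billing cycles and payment methods. Keep replies under 150 words.',
  technical_issue: 'You are a technical support agent for [Company]. Ask for reproduction steps when details are missing.',
};
const DEFAULT_PROMPT = 'You are a helpful support agent for [Company]. Answer clearly and professionally.';

function buildPrompt(category, message, sentiment) {
  const system = SYSTEM_PROMPTS[category] || DEFAULT_PROMPT;
  return {
    system,
    user: `Customer question: ${message}\nCategory: ${category}\nCustomer sentiment: ${sentiment}`,
  };
}
```

This trades per-category node wiring for a single map you can edit in one place, at the cost of slightly less visual clarity in the canvas.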

Step 5: Implement Human Handoff Logic

Decide whether to send the AI response automatically or route to a human agent based on complexity and sentiment.

Create decision function:

  1. Add Function node after response generation
  2. Evaluate multiple handoff criteria
  3. Set action flag: "auto_send" or "assign_human"
  4. Include reasoning for audit trail

Decision logic:

const classification = $('OpenAI_Classification').item.json;

// Automatic handoff conditions
const requiresHuman = 
  classification.sentiment === 'angry' ||
  classification.urgency === 'critical' ||
  classification.confidence < 75 ||
  classification.requires_human === true ||
  classification.category === 'refund_request' ||
  classification.category === 'complaint';

return {
  action: requiresHuman ? 'assign_human' : 'auto_send',
  reason: requiresHuman ? 'Requires human attention' : 'Suitable for automation',
  assignTo: classification.category === 'billing' ? 'billing_team' : 'general_support'
};

Handoff criteria table:

| Condition | Action | Reason |
|---|---|---|
| Sentiment = angry | Assign human | Prevent escalation |
| Confidence < 75% | Assign human | Uncertain classification |
| Category = refund | Assign human | Financial decision required |
| Urgency = critical | Assign human | Immediate attention needed |
| All else | Auto-send | Safe for automation |

Why this approach:
Multiple criteria create safety nets. A frustrated customer with a simple question still gets human attention. High-confidence routine questions get instant responses. The reasoning field creates an audit trail for improving handoff rules over time.

Step 6: Send Response or Assign to Agent

Execute the decision by either posting the AI response to Intercom or assigning the conversation to a human agent.

Configure Intercom response node:

  1. Add Switch node to route by action flag
  2. For "auto_send": Use Intercom Reply node
  3. For "assign_human": Use Intercom Assign node
  4. Add tags for tracking automation decisions

Auto-send configuration:

{
  "conversationId": "={{$json.conversationId}}",
  "messageType": "comment",
  "body": "={{$('OpenAI_Response').item.json.response}}"
}

Human assignment configuration:

{
  "conversationId": "={{$json.conversationId}}",
  "assigneeId": "={{$json.assignTo}}",
  "adminId": "0"
}

Add tracking tags:

  • "ai_handled" for auto-sent responses
  • "ai_escalated" for human assignments
  • Category tag (e.g., "billing", "technical")
  • Sentiment tag for filtering

Workflow Architecture Overview

This workflow consists of 12 nodes organized into 4 main sections:

  1. Data ingestion (Nodes 1-3): Webhook trigger captures Intercom events, Function node parses payload, validation checks ensure data quality
  2. AI classification (Nodes 4-5): OpenAI classifies inquiry type and sentiment, confidence scoring identifies edge cases
  3. Response generation (Nodes 6-8): Switch routes by category, category-specific OpenAI nodes generate responses, Function node applies handoff logic
  4. Action execution (Nodes 9-12): Switch routes by action decision, Intercom nodes send responses or assign agents, tagging nodes track outcomes

Execution flow:

  • Trigger: Intercom webhook fires on new conversation
  • Average run time: 3-5 seconds end-to-end
  • Key dependencies: OpenAI API, Intercom API with conversation permissions

Critical nodes:

  • OpenAI Classification: Determines entire workflow path based on category and sentiment
  • Handoff Logic Function: Enforces business rules about automation boundaries
  • Switch (Action Router): Prevents AI responses from reaching customers when human review is needed

The complete n8n workflow JSON template is available at the bottom of this article.

Key Configuration Details

OpenAI Integration

Required fields:

  • API Key: Your OpenAI API key with GPT-4 access
  • Model: gpt-4 (or gpt-3.5-turbo for 10x cost savings with 15% accuracy drop)
  • Temperature: 0.1 for classification, 0.7 for response generation

Common issues:

  • Using wrong temperature → Inconsistent classifications across similar messages
  • Not setting response format to JSON → Parsing failures on 20-30% of responses
  • Exceeding rate limits → Add retry logic with exponential backoff
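The retry-with-exponential-backoff pattern mentioned above can be sketched in a Code node like this; callApi is a placeholder for your actual HTTP request helper, and the delays are example values:

```javascript
// Retry an async call up to maxAttempts times, doubling the delay each
// attempt (500ms, 1000ms, 2000ms, ...). Rethrows the last error if all
// attempts fail, so the workflow's error branch still fires.
async function withRetry(callApi, maxAttempts = 3, baseDelayMs = 500) {
  let lastError;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await callApi();
    } catch (err) {
      lastError = err;
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

In production, consider retrying only on 429/5xx responses and honoring any Retry-After header the API returns.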

Intercom API Configuration

Authentication:

  • Use Access Token authentication, not Basic Auth
  • Token needs "Read conversations" and "Write conversations" permissions
  • Test token with a manual API call before deploying workflow

Webhook security:

  • Validate webhook signatures to prevent spoofing
  • Check timestamp to reject replayed requests
  • Use HTTPS endpoint only (n8n cloud handles this automatically)

Variables to customize:

  • confidence_threshold: Lower to 70 for more automation, raise to 85 for more human review
  • category_list: Add industry-specific categories like "trading_question" or "kyc_verification"
  • handoff_rules: Adjust based on your team's capacity and expertise areas

Testing & Validation

Component testing:

  1. Test webhook: Send test event from Intercom webhook settings, verify n8n receives it within 2 seconds
  2. Test classification: Run workflow with 10 sample messages across categories, check accuracy against expected results
  3. Test sentiment detection: Use messages with obvious frustration, verify "angry" classification triggers human assignment
  4. Test response quality: Review 20 generated responses for tone, accuracy, and helpfulness

Input/output validation:

  • Check OpenAI classification returns all required fields (category, urgency, sentiment, confidence)
  • Verify response generation doesn't exceed 300 tokens
  • Confirm Intercom API returns 200 status on reply posting
  • Test error handling by temporarily disabling OpenAI API key

Common issues:

| Problem | Symptom | Solution |
|---|---|---|
| Webhook timeouts | n8n shows 504 errors | Switch to "Respond Immediately" mode |
| Inconsistent classifications | Same message gets different categories | Lower temperature to 0.05 |
| API rate limits | Workflow fails during high volume | Add Queue node to throttle requests |
| Missing conversation context | Responses lack relevant details | Fetch last 3 messages from Intercom API |

Deployment Considerations

Production Deployment Checklist

| Area | Requirement | Why It Matters |
|---|---|---|
| Error Handling | Retry logic with 3 attempts, exponential backoff | Prevents data loss on temporary API failures |
| Monitoring | Workflow execution logs, error alerts to Slack | Detect failures within 5 minutes vs discovering days later |
| Rate Limiting | Queue node limiting to 10 requests/second | Prevents OpenAI API throttling during traffic spikes |
| Fallback Logic | Default to human assignment if AI fails | Ensures no customer message goes unanswered |
| Documentation | Node-by-node comments explaining logic | Reduces modification time by 2-4 hours for future changes |

Scaling considerations:

  • Under 100 conversations/day: This workflow handles it without modification
  • 100-500/day: Add caching for common questions to reduce API costs by 40%
  • 500+ conversations/day: Implement batch processing and consider dedicated AI infrastructure

Cost optimization:

  • Use GPT-3.5-turbo for classification (saves 90% vs GPT-4) if accuracy remains above 85%
  • Cache responses for identical questions within 24-hour window
  • Implement smart routing to only use AI for categories where it performs well
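The 24-hour cache for identical questions can be prototyped in-memory before reaching for Redis. A sketch — note that a plain Map lives only as long as the n8n process, so in production you would back this with workflow static data or Redis:

```javascript
// Cache AI responses keyed on a normalized question, expiring after 24h.
const CACHE_TTL_MS = 24 * 60 * 60 * 1000;
const cache = new Map();

// Normalize so trivially different phrasings of the same question match.
function normalize(question) {
  return question.toLowerCase().replace(/\s+/g, ' ').trim();
}

function getCached(question) {
  const key = normalize(question);
  const entry = cache.get(key);
  if (!entry) return null;
  if (Date.now() - entry.storedAt > CACHE_TTL_MS) {
    cache.delete(key); // expired — evict and treat as a miss
    return null;
  }
  return entry.response;
}

function setCached(question, response) {
  cache.set(normalize(question), { response, storedAt: Date.now() });
}
```

Exact-match normalization only catches identical questions; for "same meaning, different words" you would need embedding similarity, which is where the knowledge-base extension below comes in.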

Real-World Use Cases

Use Case 1: SaaS Product Support

  • Industry: B2B SaaS platform
  • Scale: 300 conversations/day
  • Modifications needed: Add integration with knowledge base API to include relevant documentation links in responses, connect to user database to personalize responses with account details

Use Case 2: E-commerce Customer Service

  • Industry: Online retail
  • Scale: 800 conversations/day during peak season
  • Modifications needed: Add order lookup via Shopify API, include shipping status in responses, create separate flow for returns/exchanges with photo upload handling

Use Case 3: Fintech Trading Platform

  • Industry: Cryptocurrency exchange
  • Scale: 500 conversations/day
  • Modifications needed: Add KYC verification status checks, integrate with trading API to answer balance inquiries, implement stricter handoff rules for regulatory questions (100% human review)

Use Case 4: Healthcare Appointment Scheduling

  • Industry: Medical practice
  • Scale: 150 conversations/day
  • Modifications needed: Connect to appointment booking system, add HIPAA-compliant logging, implement strict PII handling in AI prompts, route all medical questions to licensed staff

Customizations & Extensions

Alternative Integrations

Instead of Intercom:

  • Zendesk: Requires 8 node changes to adapt to ticket-based model vs conversations, use Zendesk webhook trigger and ticket API
  • Front: Better for email-heavy support, swap Intercom nodes for Front API calls, add email parsing logic
  • HubSpot Service Hub: Use when you need CRM integration, requires additional nodes to sync contact data

Workflow Extensions

Add automated follow-up:

  • Add Wait node to pause 24 hours after auto-response
  • Check if customer replied or marked conversation as solved
  • Send satisfaction survey if resolved, escalate to human if no response
  • Nodes needed: +4 (Wait, Intercom Get Conversation, IF, Intercom Reply)

Implement knowledge base integration:

  • Add vector database (Pinecone, Weaviate) to store documentation
  • Query knowledge base before generating response
  • Include relevant article links in AI response
  • Accuracy improvement: 30% better responses with specific documentation references

Add voice support integration:

  • Connect Vapi or Retell AI for phone call handling
  • Transcribe calls and feed into same classification workflow
  • Generate callback scripts for agents
  • Complexity: Medium (12 additional nodes for voice API integration)

Integration possibilities:

| Add This | To Get This | Complexity |
|---|---|---|
| Slack notifications | Real-time alerts when AI escalates cases | Easy (2 nodes) |
| Google Sheets logging | Track classification accuracy over time | Easy (3 nodes) |
| Airtable CRM sync | Link conversations to customer profiles | Medium (6 nodes) |
| Stripe integration | Auto-answer billing questions with invoice data | Medium (8 nodes) |
| Custom analytics dashboard | Visualize automation performance metrics | Hard (15+ nodes) |

Performance optimization:

  • Run the classification and sentiment OpenAI calls in parallel branches instead of chaining them sequentially (reduces latency by roughly 40%)
  • Implement response caching with Redis for frequently asked questions
  • Add A/B testing framework to compare AI response quality across models
  • Create feedback loop where agents rate AI responses to improve prompts over time

Get Started Today

Ready to automate your customer support workflow?

  1. Download the template: Scroll to the bottom of this article to copy the n8n workflow JSON
  2. Import to n8n: Go to Workflows → Import from URL or File, paste the JSON
  3. Configure your services: Add your OpenAI API key and Intercom access token in credential settings
  4. Customize prompts: Edit the OpenAI node prompts to match your product and tone
  5. Test with sample data: Create a test conversation in Intercom and verify the workflow executes correctly
  6. Adjust handoff rules: Modify the Function node logic based on your team's capacity and expertise
  7. Deploy to production: Activate the webhook in Intercom settings and monitor the first 50 conversations closely

Need help customizing this workflow for your specific support needs or integrating with your existing tools? Schedule an intro call with Atherial.

Complete n8n Workflow Template

Copy the JSON below and import it into your n8n instance via Workflows → Import from File.

{
  "name": "AI-Powered Customer Support Automation",
  "nodes": [
    {
      "id": "webhook-trigger",
      "name": "Intercom Webhook Trigger",
      "type": "n8n-nodes-base.webhook",
      "position": [
        50,
        300
      ],
      "parameters": {
        "path": "intercom-webhook",
        "httpMethod": "POST",
        "responseMode": "onReceived"
      },
      "typeVersion": 2
    },
    {
      "id": "extract-message-data",
      "name": "Extract Message Data",
      "type": "n8n-nodes-base.set",
      "position": [
        250,
        300
      ],
      "parameters": {
        "mode": "manual",
        "assignments": {
          "assignments": [
            {
              "name": "message",
              "value": "{{ $json.data.item.conversation_parts.conversation_parts[0].body }}"
            },
            {
              "name": "conversation_id",
              "value": "{{ $json.data.item.id }}"
            },
            {
              "name": "customer_id",
              "value": "{{ $json.data.item.customer_id }}"
            },
            {
              "name": "source",
              "value": "{{ $json.data.item.source }}"
            }
          ]
        }
      },
      "typeVersion": 3
    },
    {
      "id": "classify-ticket",
      "name": "Classify Ticket with AI",
      "type": "n8n-nodes-base.httpRequest",
      "position": [
        450,
        200
      ],
      "parameters": {
        "url": "=https://api.openai.com/v1/chat/completions",
        "body": {
          "raw": "{\n  \"model\": \"gpt-4\",\n  \"messages\": [\n    {\n      \"role\": \"system\",\n      \"content\": \"You are a customer support specialist. Classify the following customer message into one of these categories: 'billing', 'technical', 'account', 'general', 'complaint', 'feature_request'. Return only the category name.\"\n    },\n    {\n      \"role\": \"user\",\n      \"content\": \"{{ $json.message }}\"\n    }\n  ],\n  \"temperature\": 0.3\n}"
        },
        "method": "POST",
        "sendBody": true,
        "contentType": "json",
        "genericAuth": {
          "credentials": "{{ env.OPENAI_API_KEY }}"
        },
        "authentication": "genericCredentialType"
      },
      "typeVersion": 4
    },
    {
      "id": "sentiment-analysis",
      "name": "Sentiment Analysis",
      "type": "@n8n/n8n-nodes-langchain.openAi",
      "position": [
        450,
        400
      ],
      "parameters": {
        "prompt": "Analyze the sentiment of this customer support message. Return a JSON object with 'sentiment' (positive/neutral/negative) and 'urgency' (low/medium/high) based on language intensity and emotional indicators.\\n\\nMessage: {{ $json.message }}",
        "modelId": "gpt-4",
        "options": {},
        "resource": "text"
      },
      "typeVersion": 2
    },
    {
      "id": "check-escalation-needed",
      "name": "Check if Escalation Needed",
      "type": "n8n-nodes-base.if",
      "position": [
        650,
        300
      ],
      "parameters": {
        "conditions": {
          "combinator": "or",
          "conditions": [
            {
              "id": "1",
              "value1": "{{ $json.sentiment }}",
              "value2": "negative",
              "operator": {
                "name": "filter",
                "type": "string",
                "operation": "contains",
                "singleValue": true
              }
            }
          ]
        }
      },
      "typeVersion": 2
    },
    {
      "id": "generate-response",
      "name": "Generate AI Response",
      "type": "n8n-nodes-base.httpRequest",
      "position": [
        850,
        150
      ],
      "parameters": {
        "url": "=https://api.openai.com/v1/chat/completions",
        "body": {
          "raw": "{\n  \"model\": \"gpt-4\",\n  \"messages\": [\n    {\n      \"role\": \"system\",\n      \"content\": \"You are a professional customer support agent. Generate a helpful, empathetic, and professional response to this customer inquiry. Keep response under 200 words and maintain a helpful tone.\"\n    },\n    {\n      \"role\": \"user\",\n      \"content\": \"Message Category: {{ $json.classification }}\\nCustomer Message: {{ $json.message }}\"\n    }\n  ],\n  \"temperature\": 0.7\n}"
        },
        "method": "POST",
        "sendBody": true,
        "contentType": "json",
        "authentication": "genericCredentialType"
      },
      "typeVersion": 4
    },
    {
      "id": "post-to-intercom-auto",
      "name": "Post AI Response to Intercom",
      "type": "n8n-nodes-base.intercom",
      "position": [
        1050,
        150
      ],
      "parameters": {
        "metadata": {
          "sentiment": "{{ $json.sentiment }}",
          "ai_drafted": true,
          "classification": "{{ $json.classification }}",
          "escalation_flag": false
        },
        "resource": "conversation",
        "operation": "update",
        "conversationId": "{{ $json.conversation_id }}"
      },
      "typeVersion": 1
    },
    {
      "id": "escalate-to-slack",
      "name": "Escalate to Support Team (Slack)",
      "type": "n8n-nodes-base.slack",
      "position": [
        850,
        450
      ],
      "parameters": {
        "text": "🚨 *Support Escalation Alert*\\n\\nConversation ID: {{ $json.conversation_id }}\\nSentiment: {{ $json.sentiment }}\\nCategory: {{ $json.classification }}\\n\\n*Customer Message:*\\n{{ $json.message }}\\n\\nThis requires immediate human review and attention."
      },
      "typeVersion": 2
    },
    {
      "id": "flag-escalation",
      "name": "Flag for Human Review in Intercom",
      "type": "n8n-nodes-base.intercom",
      "position": [
        1050,
        450
      ],
      "parameters": {
        "metadata": {
          "escalated": true,
          "sentiment": "{{ $json.sentiment }}",
          "timestamp": "{{ now }}",
          "classification": "{{ $json.classification }}",
          "escalation_reason": "Negative sentiment detected"
        },
        "resource": "conversation",
        "operation": "update",
        "conversationId": "{{ $json.conversation_id }}"
      },
      "typeVersion": 1
    },
    {
      "id": "determine-flow",
      "name": "Determine Response Flow",
      "type": "n8n-nodes-base.code",
      "position": [
        650,
        150
      ],
      "parameters": {
        "jsCode": "// Parse classification from AI response\nconst classification = $json.classification || 'general';\nconst sentiment = $json.sentiment || 'neutral';\n\n// Determine if auto-response is appropriate\nconst canAutoRespond = !['complaint', 'urgent'].includes(classification) && sentiment !== 'negative';\n\nreturn {\n  canAutoRespond,\n  classification,\n  sentiment,\n  requiresEscalation: !canAutoRespond\n};",
        "language": "javaScript"
      },
      "typeVersion": 2
    },
    {
      "id": "extract-classification",
      "name": "Extract Classification Result",
      "type": "n8n-nodes-base.set",
      "position": [
        550,
        200
      ],
      "parameters": {
        "mode": "manual",
        "assignments": {
          "assignments": [
            {
              "name": "classification",
              "value": "{{ $json.choices[0].message.content }}"
            }
          ]
        },
        "includeOtherFields": true
      },
      "typeVersion": 3
    },
    {
      "id": "extract-response",
      "name": "Extract Generated Response",
      "type": "n8n-nodes-base.set",
      "position": [
        950,
        150
      ],
      "parameters": {
        "mode": "manual",
        "assignments": {
          "assignments": [
            {
              "name": "ai_response",
              "value": "{{ $json.choices[0].message.content }}"
            }
          ]
        },
        "includeOtherFields": true
      },
      "typeVersion": 3
    },
    {
      "id": "extract-sentiment",
      "name": "Extract Sentiment Result",
      "type": "n8n-nodes-base.set",
      "position": [
        550,
        400
      ],
      "parameters": {
        "mode": "manual",
        "assignments": {
          "assignments": [
            {
              "name": "ai_analysis",
              "value": "{{ JSON.parse($json.result) }}"
            }
          ]
        },
        "includeOtherFields": true
      },
      "typeVersion": 3
    }
  ],
  "connections": {
    "Sentiment Analysis": {
      "main": [
        [
          {
            "node": "Extract Sentiment Result",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Extract Message Data": {
      "main": [
        [
          {
            "node": "Classify Ticket with AI",
            "type": "main",
            "index": 0
          },
          {
            "node": "Sentiment Analysis",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Generate AI Response": {
      "main": [
        [
          {
            "node": "Extract Generated Response",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Classify Ticket with AI": {
      "main": [
        [
          {
            "node": "Extract Classification Result",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Determine Response Flow": {
      "main": [
        [
          {
            "node": "Generate AI Response",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "Escalate to Support Team (Slack)",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Extract Sentiment Result": {
      "main": [
        [
          {
            "node": "Check if Escalation Needed",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Intercom Webhook Trigger": {
      "main": [
        [
          {
            "node": "Extract Message Data",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Check if Escalation Needed": {
      "main": [
        [
          {
            "node": "Escalate to Support Team (Slack)",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "Generate AI Response",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Extract Generated Response": {
      "main": [
        [
          {
            "node": "Post AI Response to Intercom",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Extract Classification Result": {
      "main": [
        [
          {
            "node": "Determine Response Flow",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Escalate to Support Team (Slack)": {
      "main": [
        [
          {
            "node": "Flag for Human Review in Intercom",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  }
}