How to Build a Discount Content Automation Engine with n8n (Free Template)

Creating fresh discount content manually is a time sink. You find deals, scrape product details, format everything, and write descriptions—only to repeat the process tomorrow. This n8n workflow eliminates that grind by automating the entire pipeline from URL list to published content. You'll learn how to build a self-sustaining content engine that scrapes discount URLs, stores structured data in Supabase, and leverages AI agents to produce ready-to-publish content. The complete n8n workflow JSON template is available at the bottom of this article.

The Problem: Manual Discount Content Creation Doesn't Scale

E-commerce sites, deal aggregators, and affiliate marketers face the same bottleneck. Finding discount opportunities is easy—turning them into compelling content is not.

Current challenges:

  • Manually visiting each discount URL to extract product details, pricing, and descriptions
  • Copying data into spreadsheets or databases without standardization
  • Writing unique content for each deal while maintaining brand voice
  • Repeating this process daily as new deals emerge

Business impact:

  • Time spent: 15-20 hours per week on content production for 50-100 deals
  • Opportunity cost: Missing time-sensitive deals because manual processing is too slow
  • Consistency issues: Content quality varies based on writer availability and fatigue

The real problem isn't finding deals. It's transforming raw URLs into structured, publishable content fast enough to capitalize on limited-time offers.

The Solution Overview

This n8n workflow creates a three-stage automation pipeline. First, it ingests a list of discount URLs. Second, it uses self-hosted Firecrawl to scrape product data from those URLs and stores everything in Supabase. Third, it retrieves stored data and feeds it to AI agents that generate optimized discount content.

The workflow leverages SearXNG for additional research, Supabase for persistent storage, and AI models to produce content that matches your brand voice. You maintain full control over data quality through validation nodes and can customize content templates for different product categories.

What You'll Build

This automation handles the complete discount content lifecycle with zero manual intervention after initial setup.

| Component | Technology | Purpose |
| --- | --- | --- |
| URL Ingestion | Manual Trigger/Webhook | Accept lists of discount URLs |
| Web Scraping | Self-hosted Firecrawl | Extract product data, pricing, images |
| Data Storage | Supabase (PostgreSQL) | Store structured discount data |
| Search Enhancement | SearXNG | Gather additional product context |
| Content Generation | AI Agents (OpenAI/Claude) | Produce unique discount descriptions |
| Output Delivery | Database/API | Store finished content for publishing |

Key capabilities:

  • Batch process 50-100 URLs per execution
  • Extract product titles, prices, discount percentages, descriptions, and images
  • Store normalized data with timestamps and metadata
  • Query stored data to avoid duplicate processing
  • Generate SEO-optimized content with AI agents
  • Handle errors gracefully with retry logic

Prerequisites

Before starting, ensure you have:

  • n8n instance (self-hosted recommended for Firecrawl integration)
  • Supabase account with PostgreSQL database configured
  • Self-hosted Firecrawl instance with API access
  • SearXNG instance (self-hosted or public endpoint)
  • OpenAI or Anthropic API key for content generation
  • Basic SQL knowledge for database schema design
  • JavaScript familiarity for Function nodes

Step 1: Set Up Supabase Database Schema

Your database structure determines how efficiently you can query and reference discount data later.

Create the discounts table:

CREATE TABLE discounts (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  url TEXT UNIQUE NOT NULL,
  product_title TEXT,
  original_price DECIMAL(10,2),
  discount_price DECIMAL(10,2),
  discount_percentage INTEGER,
  description TEXT,
  image_url TEXT,
  scraped_at TIMESTAMP DEFAULT NOW(),
  content_generated BOOLEAN DEFAULT FALSE,
  generated_content TEXT,
  metadata JSONB
);

CREATE INDEX idx_url ON discounts(url);
CREATE INDEX idx_content_generated ON discounts(content_generated);
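
Note: uuid_generate_v4() comes from the uuid-ossp extension, which is not enabled on every PostgreSQL instance. If the CREATE TABLE statement fails on the DEFAULT clause, enable the extension first, or switch to the built-in generator:

-- Enable the extension that provides uuid_generate_v4()
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

-- Or use the built-in generator available in PostgreSQL 13+ (the Supabase default)
-- id UUID PRIMARY KEY DEFAULT gen_random_uuid()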

Why this schema works:

  • url with UNIQUE constraint prevents duplicate scraping
  • content_generated flag tracks which records need AI processing
  • metadata JSONB field stores flexible additional data (category, merchant, expiration)
  • Indexes on url and content_generated speed up lookups and batch queries

Configure Supabase credentials in n8n:

  1. Add Supabase credential in n8n
  2. Enter your project URL: https://[project-id].supabase.co
  3. Add service role API key (not anon key—you need write access)
  4. Test connection with a simple SELECT query (example below)
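
For step 4, any lightweight read against the new table works as a smoke test, for example:

-- Confirms the credential can reach the project and the table exists
SELECT id, url, scraped_at FROM discounts LIMIT 1;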

Step 2: Configure Firecrawl for Web Scraping

Firecrawl handles JavaScript-heavy sites better than basic HTTP requests. Self-hosting gives you control over rate limits and costs.

Set up Firecrawl HTTP Request node:

{
  "method": "POST",
  "url": "http://your-firecrawl-instance:3000/scrape",
  "headers": {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_FIRECRAWL_API_KEY"
  },
  "body": {
    "url": "{{$json.discount_url}}",
    "formats": ["markdown", "html"],
    "onlyMainContent": true,
    "waitFor": 2000
  }
}

Critical settings:

  • waitFor: 2000 ensures JavaScript-rendered prices load
  • onlyMainContent: true filters out navigation and footer noise
  • Request both markdown and HTML formats for flexible parsing

Common scraping issues:

  • Rate limiting → Add 2-3 second delays between requests with Wait node
  • Dynamic pricing → Increase waitFor to 3000-5000ms for slow sites
  • Anti-bot measures → Rotate user agents in HTTP Request headers (see the sketch below)
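
For the last two points, a small Code node placed before the Firecrawl request can pause between items and pick a random user agent. This is a minimal sketch; the user-agent strings and the 2-second pause are illustrative values, not requirements:

// Runs once per item: wait briefly, then attach a random user agent
const userAgents = [
  'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36',
  'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36',
  'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36'
];

// Simple 2-second delay to stay under rate limits
await new Promise(resolve => setTimeout(resolve, 2000));

return {
  json: {
    ...$input.item.json,
    // Reference this as {{$json.user_agent}} in the HTTP Request node's headers
    user_agent: userAgents[Math.floor(Math.random() * userAgents.length)]
  }
};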

Step 3: Extract and Normalize Product Data

Raw HTML from Firecrawl needs transformation into structured database records.

Add Function node for data extraction:

// Extract product data from Firecrawl response
const html = $input.item.json.html;
const markdown = $input.item.json.markdown;

// Price extraction with regex
const priceRegex = /\$(\d+\.?\d*)/g;
const prices = markdown.match(priceRegex);

// Calculate discount
const originalPrice = prices && prices[0] ? parseFloat(prices[0].replace('$', '')) : null;
const discountPrice = prices && prices[1] ? parseFloat(prices[1].replace('$', '')) : null;
const discountPercentage = originalPrice && discountPrice 
  ? Math.round(((originalPrice - discountPrice) / originalPrice) * 100)
  : null;

// Extract title (usually first H1 in markdown)
const titleMatch = markdown.match(/^#\s+(.+)$/m);
const productTitle = titleMatch ? titleMatch[1] : null;

// Image extraction
const imageMatch = html.match(/<img[^>]+src="([^">]+)"/);
const imageUrl = imageMatch ? imageMatch[1] : null;

return {
  json: {
    url: $input.item.json.original_url,
    product_title: productTitle,
    original_price: originalPrice,
    discount_price: discountPrice,
    discount_percentage: discountPercentage,
    description: markdown.substring(0, 500), // First 500 chars
    image_url: imageUrl,
    metadata: {
      scraped_html_length: html.length,
      markdown_length: markdown.length
    }
  }
};

Why this approach:
Think of this node as a data translator. Firecrawl gives you a messy pile of HTML and markdown—like dumping a toolbox on the floor. This function sorts everything into labeled drawers. It hunts for dollar signs to find prices, grabs the biggest heading for the product name, and calculates discount percentages automatically. The regex patterns act like magnets, pulling specific data types from the chaos.

Variables to customize:

  • priceRegex: Adjust for international currencies (€, £, ¥); see the sketch below
  • description length: Change 500 to match your content requirements
  • Add category detection based on URL patterns or keywords
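
For the currency customization, a broader pattern that also matches euro, pound, and yen amounts could look like the sketch below; treat the normalization step as an assumption and adjust it for the locales you actually scrape:

// Match $, €, £, or ¥ followed by a number, e.g. "$19.99", "£1,299.00", "¥1200"
const priceRegex = /[$€£¥]\s?\d[\d.,]*/g;
const prices = markdown.match(priceRegex) || [];

// Strip the symbol and thousands-separator commas before converting to a number;
// locales that use a decimal comma (e.g. "24,99 €") need extra normalization
const numericPrices = prices.map(p => parseFloat(p.replace(/[$€£¥\s,]/g, '')));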

Step 4: Store Data in Supabase with Upsert Logic

Prevent duplicate entries while updating existing records when URLs are re-scraped.

Configure Supabase node (Upsert operation):

{
  "operation": "upsert",
  "table": "discounts",
  "conflictColumns": ["url"],
  "updateColumns": [
    "product_title",
    "original_price", 
    "discount_price",
    "discount_percentage",
    "description",
    "image_url",
    "scraped_at",
    "metadata"
  ]
}

Why upsert matters:
If you scrape the same URL twice, you want to update the price (it might have changed), not create a duplicate row. The conflictColumns: ["url"] tells Supabase "if this URL exists, update it; if not, insert it." This keeps your database clean and ensures you always have the latest discount data.
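
Under the hood this maps onto PostgreSQL's ON CONFLICT clause. If your n8n version's Supabase node doesn't expose an upsert operation directly, the same behavior can be reproduced with a raw query (a sketch with the column list shortened):

INSERT INTO discounts (url, product_title, original_price, discount_price, discount_percentage, scraped_at)
VALUES ('https://example.com/deal', 'Sample Product', 99.99, 59.99, 40, NOW())
ON CONFLICT (url) DO UPDATE SET
  product_title = EXCLUDED.product_title,
  original_price = EXCLUDED.original_price,
  discount_price = EXCLUDED.discount_price,
  discount_percentage = EXCLUDED.discount_percentage,
  scraped_at = EXCLUDED.scraped_at;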

Error handling:
Add an Error Trigger node after Supabase to catch failed inserts. Common failures include null values in NOT NULL columns or data type mismatches. Log errors to a separate table for debugging.
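
A minimal sketch of such an error table; the scrape_errors name and its columns are illustrative rather than part of the template:

CREATE TABLE scrape_errors (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  url TEXT,
  node_name TEXT,
  error_message TEXT,
  occurred_at TIMESTAMP DEFAULT NOW()
);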

Step 5: Enhance Data with SearXNG Research

AI agents produce better content when they have context beyond the product page.

Configure SearXNG HTTP Request:

{
  "method": "GET",
  "url": "http://your-searxng-instance:8080/search",
  "qs": {
    "q": "{{$json.product_title}} reviews",
    "format": "json",
    "engines": "google,duckduckgo",
    "safesearch": 1
  }
}

Extract relevant snippets:

// Function node to process SearXNG results
const results = $input.item.json.results;
const topSnippets = results
  .slice(0, 5)
  .map(r => r.content)
  .join('\n\n');

return {
  json: {
    product_title: $input.item.json.product_title,
    research_context: topSnippets
  }
};

When to use SearXNG:

  • Product categories where reviews matter (electronics, appliances)
  • Comparing similar products to highlight unique value
  • Finding trending keywords for SEO optimization

Skip SearXNG for time-sensitive flash deals where speed matters more than depth.

Step 6: Generate Content with AI Agents

This is where stored data transforms into publishable content.

Query Supabase for unprocessed discounts:

SELECT * FROM discounts 
WHERE content_generated = FALSE 
LIMIT 10;

Configure OpenAI/Claude node:

{
  "model": "gpt-4",
  "messages": [
    {
      "role": "system",
      "content": "You are a discount content writer. Create compelling, SEO-optimized product descriptions that highlight savings and value. Use an enthusiastic but trustworthy tone. Include the discount percentage prominently."
    },
    {
      "role": "user",
      "content": `Product: {{$json.product_title}}
Original Price: ${{$json.original_price}}
Discount Price: ${{$json.discount_price}}
Discount: {{$json.discount_percentage}}% off
Description: {{$json.description}}
Research Context: {{$json.research_context}}

Write a 150-word discount content piece that emphasizes the value and urgency.`
    }
  ],
  "temperature": 0.7,
  "max_tokens": 300
}

Update Supabase with generated content:

UPDATE discounts 
SET generated_content = '{{$json.choices[0].message.content}}',
    content_generated = TRUE
WHERE id = '{{$json.id}}';

Content quality controls:

  • Set temperature: 0.7 for creative but consistent output
  • Use max_tokens: 300 to control length (150 words ≈ 200-250 tokens)
  • Add a Function node to validate output length before storing (see the sketch below)
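
A minimal sketch of that validation node, assuming the AI response arrives in the OpenAI-style choices array shown above:

// Check the generated content length before writing it back to Supabase
const content = $input.item.json.choices?.[0]?.message?.content || '';
const wordCount = content.trim().split(/\s+/).filter(Boolean).length;

// Reject output far outside the 150-word target so it can be regenerated
if (wordCount < 100 || wordCount > 250) {
  throw new Error(`Generated content is ${wordCount} words; expected roughly 150`);
}

return {
  json: {
    ...$input.item.json,
    generated_content: content,
    word_count: wordCount
  }
};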

Workflow Architecture Overview

This workflow consists of 12-15 nodes organized into 3 main sections:

  1. Data ingestion and scraping (Nodes 1-5): Manual trigger accepts URL list, Firecrawl scrapes each URL, Function node extracts structured data
  2. Storage and enhancement (Nodes 6-9): Supabase upsert stores data, SearXNG adds research context, data merges for AI processing
  3. Content generation (Nodes 10-15): Query unprocessed records, AI generates content, update database with results

Execution flow:

  • Trigger: Manual execution with URL array or scheduled daily run
  • Average run time: 45-90 seconds for 10 URLs (depends on Firecrawl response time)
  • Key dependencies: Firecrawl must be running, Supabase credentials valid, AI API key active

Critical nodes:

  • HTTP Request (Firecrawl): Handles all web scraping with retry logic
  • Function (Data Extraction): Normalizes scraped HTML into database schema
  • Supabase (Upsert): Prevents duplicates while updating changed data
  • OpenAI/Claude: Generates final content from stored data

The complete n8n workflow JSON template is available at the bottom of this article.

Critical Configuration Settings

Firecrawl Integration

Required fields:

  • API Endpoint: http://your-firecrawl-instance:3000/scrape
  • Authorization: Bearer token from Firecrawl setup
  • Timeout: 30 seconds (increase to 60 for slow-loading sites)

Common issues:

  • Using public Firecrawl endpoints → Rate limits hit quickly with batch processing
  • Not setting waitFor → JavaScript-rendered prices missing from scraped data
  • Always use self-hosted Firecrawl for production workflows with >100 URLs/day

Supabase Connection

Variables to customize:

  • batch_size: Process 10-50 URLs per execution (balance speed vs. API limits)
  • content_generated flag: Add additional statuses like "pending_review" or "published"
  • Database indexes: Add indexes on discount_percentage or scraped_at for complex queries (examples below)
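
The extra indexes follow the same pattern as the ones created in Step 1:

CREATE INDEX idx_discount_percentage ON discounts(discount_percentage);
CREATE INDEX idx_scraped_at ON discounts(scraped_at);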

Testing & Validation

Test each component independently:

  1. Firecrawl scraping: Run with 3-5 test URLs, verify HTML and markdown outputs contain prices
  2. Data extraction: Check Function node output—all fields should have values (null is acceptable for missing data)
  3. Supabase storage: Query database directly to confirm records inserted with correct data types
  4. AI generation: Review 5-10 generated content pieces for tone, accuracy, and length

Common troubleshooting:

| Issue | Cause | Fix |
| --- | --- | --- |
| Prices not extracted | Regex doesn't match site's format | Update priceRegex to match currency symbols and decimal formats |
| Duplicate database entries | Upsert not configured | Verify conflictColumns: ["url"] in Supabase node |
| AI content too short/long | Token limits misconfigured | Adjust max_tokens and validate with word count Function node |
| Workflow times out | Too many URLs processed at once | Reduce batch size or add Split In Batches node |

Deployment Considerations

Production Deployment Checklist

| Area | Requirement | Why It Matters |
| --- | --- | --- |
| Error Handling | Retry logic with 3 attempts, 5-second delays | Firecrawl occasionally times out; retries prevent data loss |
| Monitoring | Webhook to Slack/Discord on workflow failure | Detect scraping failures within minutes vs. discovering stale data days later |
| Rate Limiting | 2-second delay between Firecrawl requests | Prevents IP bans and respects server resources |
| Data Validation | Function node checks for null prices before storage | Catches scraping failures early, prevents bad data in database |
| Backup Strategy | Daily Supabase backups via pg_dump | Protects against accidental data deletion or corruption |

Scaling considerations:

For 500+ URLs per day, implement these optimizations:

  • Split workflow into separate scraping and content generation workflows
  • Use Supabase's batch insert API (insert 100 records at once)
  • Add Redis caching layer for frequently accessed product data
  • Deploy multiple Firecrawl instances behind a load balancer

Real-World Use Cases

Use Case 1: Affiliate Deal Site

  • Industry: E-commerce affiliate marketing
  • Scale: 200 new deals per day across 10 product categories
  • Modifications needed: Add category classification Function node, create separate AI prompts per category, integrate with WordPress API for auto-publishing

Use Case 2: Price Comparison Platform

  • Industry: Consumer electronics
  • Scale: 50 products tracked continuously for price changes
  • Modifications needed: Schedule workflow to run every 6 hours, add price change detection logic (sketched below), send alerts when discounts exceed 20%
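
A minimal sketch of the price-change check for this use case, assuming an earlier Supabase lookup has placed the previously stored row on a previous field (a hypothetical field name):

// Compare the freshly scraped price with the last stored value
const newPrice = $input.item.json.discount_price;
const oldPrice = $input.item.json.previous?.discount_price ?? null;

const priceChanged = oldPrice !== null && newPrice !== null && newPrice !== oldPrice;
const dropPercentage = priceChanged && oldPrice > 0
  ? Math.round(((oldPrice - newPrice) / oldPrice) * 100)
  : 0;

return {
  json: {
    ...$input.item.json,
    price_changed: priceChanged,
    price_drop_percentage: dropPercentage,
    // A downstream IF node can alert when the drop exceeds 20%
    should_alert: dropPercentage >= 20
  }
};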

Use Case 3: Email Newsletter Automation

  • Industry: Daily deals newsletter
  • Scale: 30-50 curated deals sent to 10,000 subscribers
  • Modifications needed: Add filtering logic for minimum discount percentage (>15%), integrate with Mailchimp API, generate HTML email templates with AI

Customizing This Workflow

Alternative Integrations

Instead of Firecrawl:

  • Apify: Best for sites with complex anti-bot measures—requires swapping HTTP Request node with Apify API calls
  • Puppeteer (via n8n Execute Command): Better control over browser automation—add 8-10 nodes for full implementation
  • Browserless: Use when you need headless Chrome at scale—similar API to Firecrawl but different response format

Workflow Extensions

Add automated content publishing:

  • Connect to WordPress REST API with HTTP Request node (request body sketched below)
  • Map generated content to post title, body, featured image
  • Set post status to "draft" for manual review or "publish" for full automation
  • Nodes needed: +3 (HTTP Request for auth, HTTP Request for post creation, Set node for field mapping)
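
A hedged sketch of the post-creation request against the WordPress core REST API; the featured image expects a media ID uploaded in a separate step, so it is omitted here:

{
  "method": "POST",
  "url": "https://your-site.com/wp-json/wp/v2/posts",
  "authentication": "basicAuth",
  "body": {
    "title": "{{$json.discount_percentage}}% Off: {{$json.product_title}}",
    "content": "{{$json.generated_content}}",
    "status": "draft"
  }
}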

Implement content quality scoring:

  • Add Function node after AI generation to analyze readability, keyword density, sentiment
  • Use TextRazor or similar NLP API for advanced scoring
  • Store quality scores in Supabase for performance tracking
  • Reject low-scoring content and regenerate with adjusted prompts
  • Nodes needed: +5 (HTTP Request to NLP API, Function for scoring logic, IF node for quality gate, Loop back to AI node)

Integration possibilities:

| Add This | To Get This | Complexity |
| --- | --- | --- |
| Airtable sync | Visual content calendar with approval workflow | Easy (3 nodes) |
| Google Sheets export | Daily deal reports for non-technical team members | Easy (2 nodes) |
| Telegram bot | Real-time deal alerts to mobile devices | Medium (6 nodes) |
| Shopify integration | Auto-create discount products in your store | Medium (8 nodes) |
| Instagram API | Auto-post deals as Instagram stories | Hard (12+ nodes, requires Meta approval) |

Scale to handle more data:

  • Replace manual trigger with Webhook node for continuous URL ingestion
  • Implement queue system using Redis or RabbitMQ for URL processing
  • Add Split In Batches node to process 1000 URLs in chunks of 50
  • Performance improvement: 20x faster for >1000 URLs with parallel processing

Get Started Today

Ready to automate your discount content production?

  1. Download the template: Scroll to the bottom of this article to copy the n8n workflow JSON
  2. Import to n8n: Open a new workflow, then use Import from File (or paste the JSON directly onto the canvas)
  3. Configure your services: Add credentials for Supabase, Firecrawl, SearXNG, and OpenAI
  4. Set up database: Run the SQL schema creation script in your Supabase dashboard
  5. Test with sample data: Start with 5-10 URLs to verify scraping and content generation
  6. Deploy to production: Schedule the workflow or set up webhook triggers for continuous processing

Need help customizing this workflow for your specific discount content needs? Schedule an intro call with Atherial.

Complete n8n Workflow Template

Copy the JSON below and import it into your n8n instance via Workflows → Import from File

{
  "name": "Web Scrape to AI Content Automation",
  "nodes": [
    {
      "id": "webhook-trigger",
      "name": "Webhook - URL Input",
      "type": "n8n-nodes-base.webhook",
      "position": [
        100,
        300
      ],
      "parameters": {
        "path": "scrape-urls",
        "httpMethod": "POST",
        "responseData": "firstEntryJson",
        "responseMode": "lastNode"
      },
      "typeVersion": 2
    },
    {
      "id": "parse-urls",
      "name": "Parse URLs",
      "type": "n8n-nodes-base.code",
      "position": [
        300,
        300
      ],
      "parameters": {
        "mode": "runOnceForAllItems",
        "jsCode": "// Parse incoming URLs from webhook\nconst data = $input.first().json;\nconst urls = Array.isArray(data.urls) ? data.urls : [data.url];\n\nreturn urls.map((url, index) => ({\n  url: url.trim(),\n  id: index,\n  timestamp: new Date().toISOString(),\n  status: 'pending'\n}));"
      },
      "typeVersion": 2
    },
    {
      "id": "scrape-content",
      "name": "HTTP Request - Scrape URL",
      "type": "n8n-nodes-base.httpRequest",
      "position": [
        500,
        300
      ],
      "parameters": {
        "url": "={{ $json.url }}",
        "method": "GET",
        "options": {},
        "responseFormat": "text"
      },
      "typeVersion": 4
    },
    {
      "id": "extract-content",
      "name": "Extract Content Info",
      "type": "n8n-nodes-base.code",
      "position": [
        700,
        300
      ],
      "parameters": {
        "mode": "runOnceForEachItem",
        "jsCode": "// Extract key information from HTML/text content\nconst content = $json.body || '';\nconst url = $json.url || '';\n\n// Basic HTML stripping and text extraction\nconst text = content\n  .replace(/<[^>]*>/g, ' ')\n  .replace(/&nbsp;/g, ' ')\n  .replace(/\\s+/g, ' ')\n  .trim()\n  .substring(0, 2000);\n\n// Extract potential discounts\nconst discountPatterns = /(\\d+%\\s*(?:off|discount|sale))|(save\\s*\\$?\\d+)|(free\\s+shipping)|(buy\\s+one\\s+get\\s+one)|(50%\\s*off)/gi;\nconst discounts = text.match(discountPatterns) || [];\n\n// Extract pricing if present\nconst pricePattern = /\\$?\\d+(?:,\\d{3})*(?:\\.\\d{2})?/g;\nconst prices = text.match(pricePattern) || [];\n\nreturn {\n  url: url,\n  extracted_text: text.substring(0, 500),\n  full_text: text,\n  discounts_found: Array.from(new Set(discounts)),\n  potential_prices: prices.slice(0, 5),\n  extraction_timestamp: new Date().toISOString(),\n  word_count: text.split(/\\s+/).length,\n  is_valid: text.length > 100\n};"
      },
      "typeVersion": 2
    },
    {
      "id": "store-in-supabase",
      "name": "Store to Supabase",
      "type": "n8n-nodes-base.supabase",
      "position": [
        900,
        300
      ],
      "parameters": {
        "columns": {
          "url": "={{ $json.url }}",
          "status": "extracted",
          "is_valid": "={{ $json.is_valid }}",
          "full_text": "={{ $json.full_text }}",
          "word_count": "={{ $json.word_count }}",
          "extracted_text": "={{ $json.extracted_text }}",
          "discounts_found": "={{ JSON.stringify($json.discounts_found) }}",
          "potential_prices": "={{ JSON.stringify($json.potential_prices) }}",
          "extraction_timestamp": "={{ $json.extraction_timestamp }}"
        },
        "tableId": "scraped_content",
        "resource": "row",
        "operation": "create",
        "useCustomSchema": false
      },
      "typeVersion": 1
    },
    {
      "id": "filter-valid-content",
      "name": "Filter Valid Content",
      "type": "n8n-nodes-base.code",
      "position": [
        1100,
        300
      ],
      "parameters": {
        "mode": "runOnceForEachItem",
        "jsCode": "// Filter valid content for AI processing\nreturn $json.is_valid === true ? $json : null;"
      },
      "typeVersion": 2
    },
    {
      "id": "generate-ai-content",
      "name": "Generate Content with AI",
      "type": "@n8n/n8n-nodes-langchain.chainLlm",
      "position": [
        1300,
        300
      ],
      "parameters": {
        "messages": {
          "messageValues": [
            {
              "type": "SystemMessagePromptTemplate",
              "message": "You are a marketing copywriter specializing in discount and promotional content. Create compelling, SEO-optimized marketing content focused on discounts, deals, and special offers. Keep the tone engaging and customer-focused."
            },
            {
              "type": "HumanMessagePromptTemplate",
              "message": "Create a discount-focused marketing article from this content:\n\nURL: {{ $json.url }}\nSource Text: {{ $json.extracted_text }}\nFound Discounts: {{ $json.discounts_found.join(', ') }}\nPrices: {{ $json.potential_prices.join(', ') }}\n\nGenerate a compelling promotional article (300-500 words) highlighting the discounts and benefits. Include:\n1. An engaging headline\n2. Key discount details\n3. Why customers should act now\n4. Call-to-action"
            }
          ]
        },
        "resource": "text",
        "operation": "response"
      },
      "typeVersion": 1
    },
    {
      "id": "prepare-for-storage",
      "name": "Prepare Content for Storage",
      "type": "n8n-nodes-base.code",
      "position": [
        1500,
        300
      ],
      "parameters": {
        "mode": "runOnceForEachItem",
        "jsCode": "// Structure the AI-generated content for storage\nreturn {\n  url: $json.url,\n  original_extracted_text: $json.extracted_text,\n  discounts_found: $json.discounts_found,\n  ai_generated_content: $json.text,\n  generation_timestamp: new Date().toISOString(),\n  content_source: 'ai-generated',\n  source_id: $json.id\n};"
      },
      "typeVersion": 2
    },
    {
      "id": "store-generated-content",
      "name": "Store Generated Content",
      "type": "n8n-nodes-base.supabase",
      "position": [
        1700,
        300
      ],
      "parameters": {
        "columns": {
          "url": "={{ $json.url }}",
          "status": "completed",
          "discounts": "={{ JSON.stringify($json.discounts_found) }}",
          "content_type": "promotional",
          "original_text": "={{ $json.original_extracted_text }}",
          "generated_content": "={{ $json.ai_generated_content }}",
          "generation_timestamp": "={{ $json.generation_timestamp }}"
        },
        "tableId": "generated_content",
        "resource": "row",
        "operation": "create",
        "useCustomSchema": false
      },
      "typeVersion": 1
    },
    {
      "id": "prepare-response",
      "name": "Prepare Final Response",
      "type": "n8n-nodes-base.code",
      "position": [
        1900,
        300
      ],
      "parameters": {
        "mode": "runOnceForAllItems",
        "jsCode": "// Prepare final response\nconst items = $input.all().json;\nconst successCount = items.length;\nconst totalDiscounts = items.reduce((acc, item) => acc + (item.discounts?.length || 0), 0);\n\nreturn {\n  status: 'success',\n  processed_urls: successCount,\n  total_discounts_found: totalDiscounts,\n  timestamp: new Date().toISOString(),\n  message: `Successfully processed ${successCount} URLs and generated promotional content for ${successCount} items`,\n  items: items.map(item => ({\n    url: item.url,\n    content_preview: item.ai_generated_content?.substring(0, 200) + '...',\n    discounts: item.discounts\n  }))\n};"
      },
      "typeVersion": 2
    }
  ],
  "connections": {
    "Parse URLs": {
      "main": [
        [
          {
            "node": "HTTP Request - Scrape URL",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Store to Supabase": {
      "main": [
        [
          {
            "node": "Filter Valid Content",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Webhook - URL Input": {
      "main": [
        [
          {
            "node": "Parse URLs",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Extract Content Info": {
      "main": [
        [
          {
            "node": "Store to Supabase",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Filter Valid Content": {
      "main": [
        [
          {
            "node": "Generate Content with AI",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Store Generated Content": {
      "main": [
        [
          {
            "node": "Prepare Final Response",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Generate Content with AI": {
      "main": [
        [
          {
            "node": "Prepare Content for Storage",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "HTTP Request - Scrape URL": {
      "main": [
        [
          {
            "node": "Extract Content Info",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Prepare Content for Storage": {
      "main": [
        [
          {
            "node": "Store Generated Content",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  }
}