How to Build a Discount Content Automation Agent with n8n (Free Template)

Creating fresh discount content manually is a time drain. You're copying URLs, scraping product details, organizing data, and writing descriptions—all tasks that eat hours every week. This n8n workflow automates the entire pipeline: it scrapes discount URLs, stores structured data in Supabase, and uses AI agents to generate polished content automatically. You'll learn how to build this system yourself, with the complete JSON template provided at the end.

The Problem: Manual Discount Content Creation Kills Productivity

Publishing discount content at scale requires constant data collection and content production. You're manually visiting URLs, extracting product information, organizing it in spreadsheets, and writing descriptions for each deal.

Current challenges:

  • Scraping product data from dozens of URLs takes 2-3 hours daily
  • Manually organizing discount information in databases creates data inconsistencies
  • Writing unique content for each deal requires another 3-4 hours per day
  • Keeping discount data current means repeating this process continuously

Business impact:

  • Time spent: 25-30 hours per week on repetitive data tasks
  • Delayed publishing: Content goes live days after deals launch
  • Inconsistent quality: Manual writing produces variable results under time pressure

The Solution Overview

This n8n workflow creates an automated content production pipeline. It takes a list of discount URLs, scrapes each page using self-hosted Firecrawl, stores structured data in Supabase/PostgreSQL, and triggers AI agents to generate content by referencing that stored data. The system handles data ingestion, storage, retrieval, and AI-powered content generation in one continuous workflow. You control the URL input, the scraping logic, the database schema, and the AI prompts—making it fully customizable for your discount content needs.

What You'll Build

This automation delivers a complete discount content production system with data persistence and AI generation capabilities.

| Component | Technology | Purpose |
| --- | --- | --- |
| URL Input | Manual Trigger/Webhook | Submit discount URLs for processing |
| Web Scraping | Self-hosted Firecrawl | Extract product data, prices, descriptions |
| Data Storage | Supabase/PostgreSQL | Store scraped discount information |
| Search Integration | SearXNG | Supplement data with additional context |
| AI Content Generation | AI Agent Nodes | Create unique content from stored data |
| Data Retrieval | Supabase Query Nodes | Reference stored data for content production |

Key capabilities:

  • Batch process multiple discount URLs simultaneously
  • Store structured discount data with timestamps and metadata
  • Query stored data to avoid re-scraping
  • Generate unique content variations using AI agents
  • Scale from 10 to 1000+ URLs without workflow changes

Prerequisites

Before starting, ensure you have:

  • n8n instance (cloud or self-hosted)
  • Supabase account with PostgreSQL database configured
  • Self-hosted Firecrawl instance with API access
  • SearXNG instance (self-hosted or public)
  • AI service credentials (OpenAI, Anthropic, or local LLM)
  • Basic understanding of SQL for database queries
  • JSON knowledge for data structure manipulation

Step 1: Set Up URL Input and Workflow Trigger

The workflow starts by accepting discount URLs for processing. You need a reliable trigger that handles both single URLs and batch submissions.

Configure the Manual Trigger

  1. Add a Manual Trigger node as your workflow entry point
  2. Configure it to accept a JSON array of URLs
  3. Set up the input schema to validate URL format

Input data structure:

{
  "urls": [
    "https://example.com/deal-1",
    "https://example.com/deal-2"
  ]
}

Why this works:
The Manual Trigger gives you control over when processing starts. You can submit URLs individually during testing or batch hundreds for production runs. The JSON array format makes it easy to integrate with external systems later—webhook triggers, scheduled imports from Google Sheets, or API endpoints.

Alternative trigger options:

  • Webhook node: For external system integration
  • Schedule node: For recurring URL list processing
  • HTTP Request node: To pull URLs from external sources

Step 2: Scrape Discount Data with Firecrawl

Firecrawl extracts structured data from discount pages. Self-hosting gives you control over rate limits and scraping behavior.

Configure Firecrawl Integration

  1. Add an HTTP Request node after your trigger
  2. Point it to your Firecrawl instance endpoint
  3. Configure the request to process each URL from the input array

Node configuration:

{
  "method": "POST",
  "url": "http://your-firecrawl-instance:3000/scrape",
  "authentication": "headerAuth",
  "sendBody": true,
  "bodyParameters": {
    "url": "={{$json.url}}",
    "formats": ["markdown", "html"],
    "onlyMainContent": true
  }
}

Why this approach:
Self-hosted Firecrawl eliminates API costs and rate limits. You control the scraping rules, timeout settings, and data extraction logic. The onlyMainContent parameter strips navigation and ads, giving you clean discount data. Requesting both markdown and HTML formats provides flexibility—markdown for AI processing, HTML for structured data extraction.
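
The Supabase insert in the next step expects flat fields like markdown and metadata.title, while Firecrawl typically nests its results (for example { success, data: { markdown, html, metadata } }, depending on version). A small Code node between the two can flatten the response and carry the source URL forward. This is a sketch under those assumptions; it also assumes the node feeding Firecrawl is named Parse URLs, as in the template at the end of this article.

// Code node (Run Once for Each Item): flatten the Firecrawl response
// Assumes a response shaped like { success, data: { markdown, html, metadata } }
const response = $json;
const data = response.data || response;

return {
  url: $('Parse URLs').item.json.url,   // pull the URL back from the upstream node
  markdown: data.markdown || '',
  html: data.html || '',
  metadata: data.metadata || {},
  scrape_success: response.success !== false
};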

Variables to customize:

  • timeout: Increase to 60 seconds for slow-loading pages
  • waitFor: Add CSS selectors to wait for dynamic content
  • formats: Remove HTML if you only need markdown for AI agents

Common issues:

  • Firecrawl returns 504 errors → Increase timeout and add retry logic
  • Missing product prices → Inspect page structure and add custom selectors
  • Rate limiting from target sites → Add delays between requests using Wait nodes

Step 3: Store Scraped Data in Supabase

Supabase provides PostgreSQL storage with a REST API. You'll create a table schema that captures discount data and supports efficient querying.

Database schema setup

Create a discounts table in Supabase:

CREATE TABLE discounts (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  url TEXT UNIQUE NOT NULL,
  title TEXT,
  description TEXT,
  price DECIMAL(10,2),
  original_price DECIMAL(10,2),
  discount_percentage INTEGER,
  scraped_content TEXT,
  scraped_at TIMESTAMP DEFAULT NOW(),
  generated_content TEXT,
  generated_at TIMESTAMP,
  content_generated BOOLEAN DEFAULT FALSE
);

Configure Supabase Insert Node

  1. Add a Supabase node after Firecrawl
  2. Select "Insert" operation
  3. Map scraped data to database columns

Node configuration:

{
  "operation": "insert",
  "table": "discounts",
  "options": {
    "upsert": true,
    "onConflict": "url"
  },
  "columns": {
    "url": "={{$json.url}}",
    "title": "={{$json.metadata.title}}",
    "scraped_content": "={{$json.markdown}}",
    "scraped_at": "={{$now}}"
  }
}

Why this works:
The upsert option with onConflict: url prevents duplicate entries. If you re-scrape a URL, the workflow updates existing data instead of creating duplicates. This is critical for keeping discount information current—prices change, deals expire, and you need the latest data without cluttering your database.

Data extraction tips:

  • Use Function nodes to parse prices from scraped markdown (a sketch follows this list)
  • Extract discount percentages with a regex such as /(\d+)%\s*off/i
  • Store raw scraped content for future AI processing variations
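
Here is a sketch of that parsing step as a Code node, assuming the scraped markdown is available on the item as markdown or scraped_content (field names depend on how you mapped the Firecrawl output).

// Code node (Run Once for Each Item): extract price and discount fields from markdown
const text = $json.markdown || $json.scraped_content || '';

// Discount percentage, e.g. "25% off"
const discountMatch = text.match(/(\d+)%\s*off/i);
const discountPercentage = discountMatch ? parseInt(discountMatch[1], 10) : null;

// Collect prices like "$19.99"; treat the lowest as the sale price, the highest as the original
const prices = (text.match(/\$\d+(?:,\d{3})*(?:\.\d{2})?/g) || [])
  .map(p => parseFloat(p.replace(/[$,]/g, '')))
  .filter(p => !Number.isNaN(p));

return {
  ...$json,
  discount_percentage: discountPercentage,
  price: prices.length ? Math.min(...prices) : null,
  original_price: prices.length > 1 ? Math.max(...prices) : null
};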

Step 4: Enhance Data with SearXNG

SearXNG supplements scraped data with additional context. This improves AI-generated content quality by providing market context, competitor pricing, and product reviews.

Configure SearXNG Integration

  1. Add an HTTP Request node pointing to your SearXNG instance
  2. Query for product name + "review" or "price comparison"
  3. Extract top 3-5 results for context

Node configuration:

{
  "method": "GET",
  "url": "http://your-searxng-instance:8080/search",
  "qs": {
    "q": "={{$json.title}} review price",
    "format": "json",
    "engines": "google,bing"
  }
}

Why this approach:
Self-hosted SearXNG avoids Google API costs and rate limits. You control which search engines to query and how results are filtered. The additional context helps AI agents write more informed content—mentioning competitor prices, highlighting unique features, or referencing user reviews.
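
SearXNG's JSON output includes a results array with title, url, and content fields. A Code node after the HTTP Request can condense the top results into a single search_results string for the AI prompt in Step 5. A sketch, assuming that default response shape:

// Code node (Run Once for Each Item): condense SearXNG results into prompt context
const results = $json.results || [];

const searchContext = results
  .slice(0, 5)
  .map((r, i) => `${i + 1}. ${r.title} - ${r.content || ''} (${r.url})`)
  .join('\n');

return {
  // earlier fields can be read back with $('Node Name') references or merged here
  search_results: searchContext
};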

When to skip this step:

  • You have complete product data from scraping
  • Content generation speed matters more than depth
  • Your discount pages already include comprehensive information

Step 5: Generate Content with AI Agents

AI agents transform stored discount data into polished content. You'll configure prompts that reference Supabase data and produce consistent output.

Configure AI Agent Node

  1. Add an OpenAI/Anthropic node (or local LLM)
  2. Create a prompt template that pulls from stored data
  3. Set up output formatting for your content needs

Prompt template:

You are a discount content writer. Using the following product data, write a compelling 150-word description:

Product: {{$json.title}}
Price: ${{$json.price}} (was ${{$json.original_price}})
Discount: {{$json.discount_percentage}}% off
Details: {{$json.scraped_content}}
Market Context: {{$json.search_results}}

Requirements:
- Lead with the discount percentage
- Highlight key product features
- Include a clear call-to-action
- Use an enthusiastic but professional tone

Node configuration:

{
  "model": "gpt-4-turbo",
  "temperature": 0.7,
  "maxTokens": 300,
  "systemMessage": "You are an expert discount content writer focused on conversion-optimized product descriptions."
}

Why this works:
Temperature 0.7 balances creativity with consistency. The structured prompt ensures every generated description follows the same format—discount first, features second, CTA last. Referencing stored data means you can regenerate content anytime without re-scraping.

Variables to customize:

  • temperature: Lower to 0.3 for more consistent output, raise to 0.9 for creative variations
  • maxTokens: Adjust based on target word count (roughly 4 characters per token; a 150-word description is around 750 characters, or about 200 tokens, so 300 leaves headroom)
  • model: Use GPT-3.5 for cost savings, GPT-4 for quality, or local LLMs for privacy

Step 6: Update Database with Generated Content

Store AI-generated content back in Supabase for retrieval and publishing.

Configure Supabase Update Node

  1. Add a Supabase node after AI generation
  2. Select "Update" operation
  3. Match records by URL and store generated content

Node configuration:

{
  "operation": "update",
  "table": "discounts",
  "filterType": "manual",
  "filters": {
    "url": "={{$json.url}}"
  },
  "columns": {
    "generated_content": "={{$json.ai_output}}",
    "content_generated": true,
    "generated_at": "={{$now}}"
  }
}

Why this approach:
Storing generated content separately from scraped data lets you regenerate with different prompts without losing original information. The content_generated boolean flag makes it easy to query which discounts have finished content and which need processing.
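
When you later pull rows back for content generation, query on that flag. If you fetch rows with the Supabase node's Get Many operation, a small Code node can keep only the unprocessed ones; a sketch, assuming each incoming item is a row from the discounts table:

// Code node (Run Once for All Items): keep only discounts that still need content
return $input.all().filter(item => item.json.content_generated !== true);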

Workflow Architecture Overview

This workflow consists of 12-15 nodes organized into 4 main sections:

  1. Data ingestion (Nodes 1-4): Manual trigger accepts URLs, Firecrawl scrapes content, Function node parses structured data
  2. Storage layer (Nodes 5-7): Supabase insert stores scraped data, SearXNG adds context, data validation checks completeness
  3. AI processing (Nodes 8-11): Query Supabase for unprocessed discounts, AI agent generates content, quality check validates output
  4. Output delivery (Nodes 12-15): Supabase update stores generated content, optional webhook notifies publishing system

Execution flow:

  • Trigger: Manual submission of URL array or scheduled batch processing
  • Average run time: 45-60 seconds for 10 URLs (roughly 5-6 seconds per URL including AI generation)
  • Key dependencies: Firecrawl must be running, Supabase credentials configured, AI service API key active

Critical nodes:

  • HTTP Request (Firecrawl): Handles web scraping with retry logic for failed requests
  • Supabase Insert: Stores raw discount data with upsert to prevent duplicates
  • AI Agent: Generates content using stored data and search context
  • Supabase Update: Persists generated content with timestamp tracking

The complete n8n workflow JSON template is available at the bottom of this article.

Critical Configuration Settings

Firecrawl Integration

Required fields:

  • API Endpoint: http://your-firecrawl-instance:3000/scrape
  • Authentication: Header auth with X-API-Key
  • Timeout: 45 seconds (increase to 90 for JavaScript-heavy pages)

Common issues:

  • Using public Firecrawl instances → Rate limits kill batch processing
  • Scraping without waitFor selectors → Dynamic content missing from results
  • Skipping onlyMainContent: true → Navigation and ads clutter the scraped content

Supabase Configuration

Required credentials:

  • Project URL: https://your-project.supabase.co
  • API Key: Service role key (not anon key) for full database access
  • Table permissions: Enable RLS policies for production security

Variables to customize:

  • upsert behavior: Change onConflict column based on your unique identifier
  • batch_size: Process 50 URLs at a time to avoid memory issues
  • retry_failed: Add error workflow to reprocess failed scrapes

Testing & Validation

Test each component independently:

  1. Scraping accuracy: Run Firecrawl on 5 test URLs, verify all product data extracts correctly
  2. Database storage: Check Supabase table for correct data types and no null values in required fields
  3. AI generation: Review 10 generated descriptions for tone, accuracy, and formatting consistency
  4. End-to-end flow: Submit 3 URLs and verify complete pipeline from scraping to stored content

Input validation:

  • Add a Function node to validate URL format before scraping
  • Check for duplicate URLs in the input array
  • Verify Firecrawl returns valid JSON before storing

Output quality checks:

  • Set a minimum word count for AI-generated content (reject outputs under 100 words); a quality-gate sketch follows this list
  • Validate that generated content includes required elements (price, discount, CTA)
  • Flag content with placeholder text or incomplete information
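
A minimal quality-gate sketch covering all three checks, assuming the generated text is on the item as generated_content (rename the field to match your workflow):

// Code node (Run Once for All Items): basic quality gate for AI output
const requiredCues = [/\$\d/, /%/, /\b(shop|buy|get|grab|claim)\b/i]; // price, discount, CTA

return $input.all().map(item => {
  const content = item.json.generated_content || '';
  const wordCount = content.split(/\s+/).filter(Boolean).length;
  const hasRequiredElements = requiredCues.every(pattern => pattern.test(content));
  const hasPlaceholders = /\[[^\]]*\]|lorem ipsum|TBD/i.test(content);

  item.json.quality_passed = wordCount >= 100 && hasRequiredElements && !hasPlaceholders;
  item.json.quality_word_count = wordCount;
  return item;
});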

Common failure points:

  • Firecrawl timeout on slow sites → Add retry logic with exponential backoff
  • AI hallucination (inventing features) → Strengthen prompt with "only use provided data"
  • Database connection drops → Implement connection pooling and retry on failure

Deployment Considerations

Production Deployment Checklist

| Area | Requirement | Why It Matters |
| --- | --- | --- |
| Error Handling | Retry logic with 3 attempts, 30-second delays | Prevents data loss when Firecrawl or AI services have temporary failures |
| Monitoring | Supabase function to track processing status | Identifies stuck workflows within 10 minutes instead of discovering failures days later |
| Rate Limiting | 5-second delay between Firecrawl requests | Prevents IP bans from target websites |
| Data Validation | Function node to verify required fields exist | Catches incomplete scrapes before AI generation wastes API credits |
| Logging | Store error messages in a separate Supabase table | Enables debugging without accessing n8n execution logs |

Scaling considerations:

  • Split large URL batches into sub-workflows (process 50 URLs per workflow instance); a chunking sketch follows this list
  • Use Supabase queue table to manage processing order
  • Implement parallel processing for AI generation (run 5 concurrent AI requests)
  • Add caching layer to avoid re-scraping unchanged URLs
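
A chunking sketch for the first point: a Code node that splits the incoming URL items into batches of 50, each of which can then be handed to a sub-workflow via an Execute Workflow node. It assumes each incoming item carries a url field.

// Code node (Run Once for All Items): split URL items into batches of 50
const batchSize = 50;
const items = $input.all();
const batches = [];

for (let i = 0; i < items.length; i += batchSize) {
  batches.push({
    json: {
      batch_number: batches.length + 1,
      urls: items.slice(i, i + batchSize).map(item => item.json.url)
    }
  });
}

return batches;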

Security hardening:

  • Use Supabase RLS policies to restrict database access
  • Rotate API keys monthly
  • Store credentials in n8n environment variables, never in workflow JSON
  • Enable webhook signature verification if using external triggers

Real-World Use Cases

Use Case 1: E-commerce Deal Aggregator

  • Industry: Affiliate marketing, deal sites
  • Scale: 500 discount URLs per day
  • Modifications needed: Add product categorization node, integrate with WordPress API for auto-publishing, implement duplicate deal detection

Use Case 2: Price Monitoring for Retailers

  • Industry: Retail, competitive intelligence
  • Scale: 200 competitor URLs checked daily
  • Modifications needed: Replace AI content generation with price comparison logic, add email alerts for price drops, store historical pricing data

Use Case 3: Automated Newsletter Content

  • Industry: Email marketing, content curation
  • Scale: 50 hand-picked deals weekly
  • Modifications needed: Add scheduling node for Friday processing, integrate with Mailchimp API, include product images in scraping

Use Case 4: Social Media Discount Posts

  • Industry: Social media management
  • Scale: 100 deals per week across platforms
  • Modifications needed: Generate platform-specific content (Twitter character limits, Instagram hashtags), add image scraping, schedule posts via Buffer/Hootsuite API

Customizing This Workflow

Alternative Integrations

Instead of Firecrawl:

  • Apify: Best for JavaScript-heavy sites - requires HTTP Request node changes to Apify API
  • Puppeteer (self-hosted): Better control over browser automation - swap Firecrawl node with Execute Command node running Puppeteer script
  • Browserless: Cloud-based browser automation - use when self-hosting isn't an option

Instead of Supabase:

  • PostgreSQL (direct): Lower latency for high-volume processing - replace Supabase nodes with Postgres nodes
  • Airtable: Better for non-technical team collaboration - use Airtable nodes with same data structure
  • Google Sheets: Simplest setup for small-scale testing - limited to 1000 rows before performance degrades

Workflow Extensions

Add automated publishing:

  • Connect to WordPress REST API (a payload sketch follows this list)
  • Auto-create posts with generated content
  • Schedule publication times based on discount expiration
  • Nodes needed: +4 (HTTP Request for WordPress, Function for formatting, Schedule for timing, IF for conditional publishing)
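
A payload sketch for the WordPress step: a Code node that shapes a processed discount row into the body for POST /wp-json/wp/v2/posts (standard WordPress REST fields). The field names read from $json (title, discount_percentage, generated_content, publish_at) are assumptions based on the schema used in this article; adjust them to your table.

// Code node (Run Once for Each Item): build the WordPress post payload
return {
  json: {
    title: `${$json.discount_percentage}% off: ${$json.title}`,
    content: $json.generated_content,
    status: 'future',                          // schedule instead of publishing immediately
    date: $json.publish_at || new Date().toISOString(),
    categories: []                             // fill in your WordPress category IDs
  }
};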

Scale to handle more data:

  • Implement queue system with Supabase table
  • Add batch processing (process 100 URLs at a time)
  • Use n8n sub-workflows for parallel AI generation
  • Performance improvement: 5x faster for 500+ URLs

Add quality scoring:

  • Implement content quality checks with additional AI agent
  • Score generated content on readability, accuracy, persuasiveness
  • Auto-regenerate low-scoring content with adjusted prompts
  • Nodes needed: +6 (AI agent for scoring, IF for threshold check, Loop for regeneration)

Integration possibilities:

| Add This | To Get This | Complexity |
| --- | --- | --- |
| Slack integration | Real-time alerts when processing completes | Easy (2 nodes) |
| Image scraping | Product images stored alongside discount data | Medium (4 nodes) |
| Shopify API | Auto-create discount codes in your store | Medium (6 nodes) |
| Google Analytics | Track which generated content drives clicks | Advanced (8 nodes) |

Advanced customization ideas:

  • Multi-language content generation (add translation node after AI generation)
  • A/B testing different AI prompts (store multiple content versions, track performance)
  • Sentiment analysis on scraped reviews (add NLP node to extract product sentiment)
  • Automated fact-checking (cross-reference AI output with scraped data to catch hallucinations)

Get Started Today

Ready to automate your discount content production?

  1. Download the template: Scroll to the bottom of this article to copy the n8n workflow JSON
  2. Import to n8n: Go to Workflows → Import from URL or File, paste the JSON
  3. Configure your services: Add credentials for Supabase, Firecrawl, SearXNG, and your AI provider
  4. Set up your database: Run the SQL schema creation script in your Supabase project
  5. Test with sample data: Submit 3-5 discount URLs and verify the complete pipeline
  6. Deploy to production: Configure error handling, set up monitoring, and activate the workflow

Customization support:
This workflow is a starting point. Your discount content needs are unique—different data sources, specific content formats, custom publishing workflows.

Need help customizing this workflow for your specific needs? Schedule an intro call with Atherial.

Complete N8N Workflow Template

Copy the JSON below and import it into your N8N instance via Workflows → Import from File

{
  "name": "Web Scrape to AI Content Automation",
  "nodes": [
    {
      "id": "webhook-trigger",
      "name": "Webhook - URL Input",
      "type": "n8n-nodes-base.webhook",
      "position": [
        100,
        300
      ],
      "parameters": {
        "path": "scrape-urls",
        "httpMethod": "POST",
        "responseData": "firstEntryJson",
        "responseMode": "lastNode"
      },
      "typeVersion": 2
    },
    {
      "id": "parse-urls",
      "name": "Parse URLs",
      "type": "n8n-nodes-base.code",
      "position": [
        300,
        300
      ],
      "parameters": {
        "mode": "runOnceForAllItems",
        "jsCode": "// Parse incoming URLs from webhook\nconst data = $input.first().json;\nconst urls = Array.isArray(data.urls) ? data.urls : [data.url];\n\nreturn urls.map((url, index) => ({\n  url: url.trim(),\n  id: index,\n  timestamp: new Date().toISOString(),\n  status: 'pending'\n}));"
      },
      "typeVersion": 2
    },
    {
      "id": "scrape-content",
      "name": "HTTP Request - Scrape URL",
      "type": "n8n-nodes-base.httpRequest",
      "position": [
        500,
        300
      ],
      "parameters": {
        "url": "={{ $json.url }}",
        "method": "GET",
        "options": {},
        "responseFormat": "text"
      },
      "typeVersion": 4
    },
    {
      "id": "extract-content",
      "name": "Extract Content Info",
      "type": "n8n-nodes-base.code",
      "position": [
        700,
        300
      ],
      "parameters": {
        "mode": "runOnceForEachItem",
        "jsCode": "// Extract key information from HTML/text content\nconst content = $json.body || '';\nconst url = $json.url || '';\n\n// Basic HTML stripping and text extraction\nconst text = content\n  .replace(/<[^>]*>/g, ' ')\n  .replace(/&nbsp;/g, ' ')\n  .replace(/\\s+/g, ' ')\n  .trim()\n  .substring(0, 2000);\n\n// Extract potential discounts\nconst discountPatterns = /(\\d+%\\s*(?:off|discount|sale))|(save\\s*\\$?\\d+)|(free\\s+shipping)|(buy\\s+one\\s+get\\s+one)|(50%\\s*off)/gi;\nconst discounts = text.match(discountPatterns) || [];\n\n// Extract pricing if present\nconst pricePattern = /\\$?\\d+(?:,\\d{3})*(?:\\.\\d{2})?/g;\nconst prices = text.match(pricePattern) || [];\n\nreturn {\n  url: url,\n  extracted_text: text.substring(0, 500),\n  full_text: text,\n  discounts_found: Array.from(new Set(discounts)),\n  potential_prices: prices.slice(0, 5),\n  extraction_timestamp: new Date().toISOString(),\n  word_count: text.split(/\\s+/).length,\n  is_valid: text.length > 100\n};"
      },
      "typeVersion": 2
    },
    {
      "id": "store-in-supabase",
      "name": "Store to Supabase",
      "type": "n8n-nodes-base.supabase",
      "position": [
        900,
        300
      ],
      "parameters": {
        "columns": {
          "url": "={{ $json.url }}",
          "status": "extracted",
          "is_valid": "={{ $json.is_valid }}",
          "full_text": "={{ $json.full_text }}",
          "word_count": "={{ $json.word_count }}",
          "extracted_text": "={{ $json.extracted_text }}",
          "discounts_found": "={{ JSON.stringify($json.discounts_found) }}",
          "potential_prices": "={{ JSON.stringify($json.potential_prices) }}",
          "extraction_timestamp": "={{ $json.extraction_timestamp }}"
        },
        "tableId": "scraped_content",
        "resource": "row",
        "operation": "create",
        "useCustomSchema": false
      },
      "typeVersion": 1
    },
    {
      "id": "filter-valid-content",
      "name": "Filter Valid Content",
      "type": "n8n-nodes-base.code",
      "position": [
        1100,
        300
      ],
      "parameters": {
        "mode": "runOnceForEachItem",
        "jsCode": "// Filter valid content for AI processing\nreturn $json.is_valid === true ? $json : null;"
      },
      "typeVersion": 2
    },
    {
      "id": "generate-ai-content",
      "name": "Generate Content with AI",
      "type": "@n8n/n8n-nodes-langchain.chainLlm",
      "position": [
        1300,
        300
      ],
      "parameters": {
        "messages": {
          "messageValues": [
            {
              "type": "SystemMessagePromptTemplate",
              "message": "You are a marketing copywriter specializing in discount and promotional content. Create compelling, SEO-optimized marketing content focused on discounts, deals, and special offers. Keep the tone engaging and customer-focused."
            },
            {
              "type": "HumanMessagePromptTemplate",
              "message": "Create a discount-focused marketing article from this content:\n\nURL: {{ $json.url }}\nSource Text: {{ $json.extracted_text }}\nFound Discounts: {{ $json.discounts_found.join(', ') }}\nPrices: {{ $json.potential_prices.join(', ') }}\n\nGenerate a compelling promotional article (300-500 words) highlighting the discounts and benefits. Include:\n1. An engaging headline\n2. Key discount details\n3. Why customers should act now\n4. Call-to-action"
            }
          ]
        },
        "resource": "text",
        "operation": "response"
      },
      "typeVersion": 1
    },
    {
      "id": "prepare-for-storage",
      "name": "Prepare Content for Storage",
      "type": "n8n-nodes-base.code",
      "position": [
        1500,
        300
      ],
      "parameters": {
        "mode": "runOnceForEachItem",
        "jsCode": "// Structure the AI-generated content for storage\nreturn {\n  url: $json.url,\n  original_extracted_text: $json.extracted_text,\n  discounts_found: $json.discounts_found,\n  ai_generated_content: $json.text,\n  generation_timestamp: new Date().toISOString(),\n  content_source: 'ai-generated',\n  source_id: $json.id\n};"
      },
      "typeVersion": 2
    },
    {
      "id": "store-generated-content",
      "name": "Store Generated Content",
      "type": "n8n-nodes-base.supabase",
      "position": [
        1700,
        300
      ],
      "parameters": {
        "columns": {
          "url": "={{ $json.url }}",
          "status": "completed",
          "discounts": "={{ JSON.stringify($json.discounts_found) }}",
          "content_type": "promotional",
          "original_text": "={{ $json.original_extracted_text }}",
          "generated_content": "={{ $json.ai_generated_content }}",
          "generation_timestamp": "={{ $json.generation_timestamp }}"
        },
        "tableId": "generated_content",
        "resource": "row",
        "operation": "create",
        "useCustomSchema": false
      },
      "typeVersion": 1
    },
    {
      "id": "prepare-response",
      "name": "Prepare Final Response",
      "type": "n8n-nodes-base.code",
      "position": [
        1900,
        300
      ],
      "parameters": {
        "mode": "runOnceForAllItems",
        "jsCode": "// Prepare final response\nconst items = $input.all().json;\nconst successCount = items.length;\nconst totalDiscounts = items.reduce((acc, item) => acc + (item.discounts?.length || 0), 0);\n\nreturn {\n  status: 'success',\n  processed_urls: successCount,\n  total_discounts_found: totalDiscounts,\n  timestamp: new Date().toISOString(),\n  message: `Successfully processed ${successCount} URLs and generated promotional content for ${successCount} items`,\n  items: items.map(item => ({\n    url: item.url,\n    content_preview: item.ai_generated_content?.substring(0, 200) + '...',\n    discounts: item.discounts\n  }))\n};"
      },
      "typeVersion": 2
    }
  ],
  "connections": {
    "Parse URLs": {
      "main": [
        [
          {
            "node": "HTTP Request - Scrape URL",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Store to Supabase": {
      "main": [
        [
          {
            "node": "Filter Valid Content",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Webhook - URL Input": {
      "main": [
        [
          {
            "node": "Parse URLs",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Extract Content Info": {
      "main": [
        [
          {
            "node": "Store to Supabase",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Filter Valid Content": {
      "main": [
        [
          {
            "node": "Generate Content with AI",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Store Generated Content": {
      "main": [
        [
          {
            "node": "Prepare Final Response",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Generate Content with AI": {
      "main": [
        [
          {
            "node": "Prepare Content for Storage",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "HTTP Request - Scrape URL": {
      "main": [
        [
          {
            "node": "Extract Content Info",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Prepare Content for Storage": {
      "main": [
        [
          {
            "node": "Store Generated Content",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  }
}