How to Build a Multi-Channel YouTube AI Pipeline with n8n (Free Template)

Managing multiple YouTube channels manually is a productivity nightmare. You're stuck copying scripts, generating voiceovers one at a time, assembling videos, and uploading content across different channels. This process eats 20-40 hours per week for a 20-channel operation. This n8n workflow automates the entire pipeline from Google Sheets input to published YouTube videos, handling script generation, AI voice synthesis, visual assembly, and scheduled uploads across multiple channels. You'll learn how to build a production-ready system that scales cleanly without manual duplication or hardcoded values.

The Problem: Manual YouTube Content Creation Doesn't Scale

Running multiple YouTube channels means repeating the same workflow dozens of times per week. You're manually triggering AI tools, downloading files, uploading videos, and tracking metadata across spreadsheets.

Current challenges:

  • Script generation requires individual API calls and manual copy-paste between tools
  • Voice synthesis happens one video at a time with no batch processing
  • Video assembly needs manual file management and rendering oversight
  • Multi-channel uploads require logging into different accounts and re-entering metadata
  • Error tracking happens in your head or scattered notes, not in a centralized system
  • Scheduling conflicts arise when multiple videos target the same upload slot

Business impact:

  • Time spent: 20-40 hours per week on repetitive tasks for a 20-channel operation
  • Error rate: 15-25% of uploads have incorrect metadata or missing thumbnails
  • Scaling cost: Each new channel adds 2-3 hours of weekly manual work
  • Revenue delay: Content sits in draft status for 2-5 days waiting for manual processing

The Solution Overview

This n8n workflow creates a fully automated YouTube content pipeline that runs from a single Google Sheets interface. The system reads video parameters from your spreadsheet, generates scripts using AI, synthesizes voiceovers, assembles visuals, and uploads finished videos to the correct YouTube channels with proper metadata and scheduling. The architecture uses modular nodes that scale across multiple channels without code duplication. Error handling, retry logic, and logging ensure production reliability. You configure channel profiles once, then the system handles everything automatically based on spreadsheet triggers.

What You'll Build

This workflow delivers a complete YouTube automation system with enterprise-grade reliability and multi-channel support.

| Component | Technology | Purpose |
|---|---|---|
| Content Input | Google Sheets | Centralized video parameters, metadata, and scheduling |
| Script Generation | OpenAI/Claude API | AI-powered script creation from topic inputs |
| Voice Synthesis | ElevenLabs/PlayHT | Text-to-speech conversion with voice cloning |
| Visual Assembly | Pictory/Runway/FFmpeg | Video creation from scripts and stock footage |
| Upload Engine | YouTube Data API v3 | Automated multi-channel video publishing |
| Error Management | Google Sheets + Email/Telegram | Failure logging, retry queues, and alerts |
| Scheduling Logic | n8n Schedule Trigger | Time-based execution and upload slot management |
| Channel Profiles | n8n Variables/Sheets | Parameterized configuration for 20+ channels |

Key capabilities:

  • Process 5-20 videos per day across multiple channels without manual intervention
  • Handle script generation, voiceover, assembly, and upload in a single automated flow
  • Manage channel-specific metadata, thumbnails, and upload schedules from spreadsheet templates
  • Implement retry logic for API failures with exponential backoff
  • Log all errors to Google Sheets with detailed debugging information
  • Support manual override triggers for urgent content updates
  • Extract short-form clips from long-form content automatically
  • Rotate API keys to prevent rate limiting across high-volume operations

Prerequisites

Before starting, ensure you have:

  • n8n instance (cloud or self-hosted with sufficient execution time limits)
  • Google Sheets with API access enabled (Google Cloud Console project)
  • YouTube Data API v3 credentials with upload permissions for all target channels
  • OpenAI or Claude API key for script generation
  • ElevenLabs or PlayHT account with API access for voice synthesis
  • Video assembly tool API (Pictory, Runway, or FFmpeg server)
  • Basic JavaScript knowledge for Function nodes and data transformation
  • Understanding of API rate limits and error handling patterns

Step 1: Set Up Google Sheets Data Source

This phase establishes your content management interface and connects n8n to your spreadsheet database.

Create your master spreadsheet

Your Google Sheet becomes the control panel for the entire pipeline. Structure it with these columns:

  1. Video ID (unique identifier)
  2. Channel Name (matches your channel profile)
  3. Topic/Title
  4. Script Status (pending/generated/failed)
  5. Voice Status (pending/generated/failed)
  6. Video Status (pending/assembled/uploaded/failed)
  7. Upload Date/Time
  8. Metadata (description, tags, category)
  9. Thumbnail URL
  10. Error Log (populated automatically on failures)

Configure Google Sheets node in n8n

  1. Add Google Sheets node to your workflow
  2. Authenticate with OAuth2 (requires Google Cloud Console setup)
  3. Select "Read Rows" operation
  4. Choose your spreadsheet and worksheet
  5. Set "Return All" to false and limit to 10 rows per execution for initial testing
  6. Add filter: Video Status = 'pending' to only process new content

Node configuration:

{
  "parameters": {
    "operation": "read",
    "sheetName": "Video Queue",
    "range": "A2:J100",
    "options": {
      "returnAll": false,
      "limit": 10
    }
  }
}

Why this works:

Google Sheets acts as your database, queue manager, and status dashboard in one interface. Non-technical team members can add video topics without touching n8n. The workflow polls this sheet on a schedule (every 30 minutes or hourly), processes pending items, and updates status columns automatically. This creates a visual pipeline where you see exactly what's in progress, what failed, and what's scheduled.
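
A minimal Code-node sketch of that pending-row handling, assuming the column names above and n8n's Code node in "Run Once for All Items" mode:

// Filter pending rows and claim them so overlapping runs don't double-process.
const rows = $input.all();

const pending = rows.filter(row =>
  row.json['Video Status'] === 'pending' && row.json['Video ID']
);

return pending.map(row => ({
  json: {
    ...row.json,
    'Video Status': 'processing', // write this back via a Sheets Update node
    claimedAt: new Date().toISOString()
  }
}));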

Step 2: Generate Scripts with AI

This phase transforms your video topics into production-ready scripts using large language models.

Configure OpenAI/Claude API node

  1. Add HTTP Request node (or OpenAI node if available in your n8n version)
  2. Set method to POST
  3. URL: https://api.openai.com/v1/chat/completions (or Claude endpoint)
  4. Add header Authorization: Bearer YOUR_API_KEY (store the key in n8n credentials)
  5. Build request body using Function node to format prompts

Function node for prompt engineering:

// Extract topic from Google Sheets row
const topic = $input.item.json.Topic;
const channelName = $input.item.json['Channel Name'];

// Build structured prompt
const prompt = `Create a YouTube video script for the topic: "${topic}"

Requirements:
- Channel style: ${channelName}
- Length: 8-12 minutes (1800-2200 words)
- Include hook in first 15 seconds
- Add 3-5 key points with examples
- End with clear call-to-action
- Write in conversational, engaging tone
- Format with clear section breaks

Output only the script text, no meta-commentary.`;

return {
  json: {
    model: "gpt-4",
    messages: [
      {
        role: "system",
        content: "You are a professional YouTube scriptwriter."
      },
      {
        role: "user",
        content: prompt
      }
    ],
    temperature: 0.7,
    max_tokens: 3000
  }
};

Parse and store script

  1. Add Function node after API response
  2. Extract script text from response: $json.choices[0].message.content
  3. Add Google Sheets node (Update operation)
  4. Write script to "Script" column in your spreadsheet
  5. Update "Script Status" to "generated"

Why this approach:

Separating prompt engineering into a Function node makes your prompts version-controlled and easy to modify. You can A/B test different prompt structures without touching the API node. Storing scripts in Google Sheets creates an audit trail and allows manual editing before voice generation. The status column enables the workflow to skip re-generating scripts if you re-run the pipeline.
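
As one way to run those A/B tests, a sketch that assigns each video a stable prompt variant inside the same Function node (the variant texts are illustrative):

// Assign each video a deterministic prompt variant for A/B testing.
const videoId = $input.item.json['Video ID'];
const topic = $input.item.json.Topic;

const variants = {
  A: `Create a YouTube video script for: "${topic}". Open with a question hook.`,
  B: `Create a YouTube video script for: "${topic}". Open with a bold claim hook.`
};

// A simple hash of the video ID keeps the variant stable across re-runs
const hash = [...String(videoId)].reduce((h, c) => h + c.charCodeAt(0), 0);
const variantKey = hash % 2 === 0 ? 'A' : 'B';

return {
  json: { prompt: variants[variantKey], promptVariant: variantKey }
};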

Variables to customize:

  • temperature: Lower (0.3-0.5) for consistent style, higher (0.7-0.9) for creative variety
  • max_tokens: Adjust based on target video length (1000 tokens ≈ 750 words ≈ 5 minutes)
  • Channel-specific style guides in system prompt for brand consistency

Step 3: Synthesize AI Voiceovers

This phase converts your scripts into high-quality audio files using voice cloning technology.

Configure ElevenLabs API node

  1. Add HTTP Request node
  2. Set method to POST
  3. URL: https://api.elevenlabs.io/v1/text-to-speech/{voice_id}
  4. Add header: xi-api-key: YOUR_ELEVENLABS_KEY
  5. Set response format to binary (audio file)

Build voice synthesis request:

// Get script from previous node
const script = $input.item.json.Script;
const channelName = $input.item.json['Channel Name'];

// Map channel to voice ID (stored in n8n variables or separate config sheet)
const voiceMap = {
  'Tech Channel': 'voice_id_123',
  'Finance Channel': 'voice_id_456',
  'Lifestyle Channel': 'voice_id_789'
};

const voiceId = voiceMap[channelName];

return {
  json: {
    text: script,
    model_id: "eleven_monolingual_v1",
    voice_settings: {
      stability: 0.5,
      similarity_boost: 0.75
    },
    // Keep voiceId inside json (n8n items only carry json/binary), so the
    // HTTP Request node can reference it in its URL expression:
    // https://api.elevenlabs.io/v1/text-to-speech/{{$json.voiceId}}
    voiceId: voiceId
  }
};

Save audio file

  1. Add "Write Binary File" node after API response
  2. Set file path: /tmp/audio_{{$json.videoId}}.mp3
  3. Or upload directly to cloud storage (Google Drive, S3)
  4. Store file URL in Google Sheets "Audio URL" column

Handle long scripts:

ElevenLabs has a 5,000 character limit per request. For longer scripts:

// Split script into sentence-aligned chunks under the API limit
const script = $input.item.json.Script;
const chunkSize = 4500; // Leave buffer for safety
const chunks = [];

let i = 0;
while (i < script.length) {
  let endIndex = Math.min(i + chunkSize, script.length);
  if (endIndex < script.length) {
    // Find last period before the limit to avoid cutting mid-sentence
    const lastPeriod = script.lastIndexOf('.', endIndex);
    if (lastPeriod > i) {
      endIndex = lastPeriod + 1;
    }
  }
  chunks.push(script.slice(i, endIndex));
  i = endIndex; // Advance to the end of this chunk so no text is skipped
}

return chunks.map((chunk, index) => ({
  json: {
    text: chunk,
    chunkIndex: index,
    totalChunks: chunks.length
  }
}));

After generating multiple audio files, use FFmpeg to concatenate them into a single file.
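
A sketch of that concatenation step, assuming the chunks were saved as /tmp/audio_<videoId>_<index>.mp3 (hypothetical paths) and the generated command runs in an Execute Command node:

// Build an FFmpeg concat-demuxer list and command for the audio chunks.
const { videoId, totalChunks } = $input.item.json;

const listLines = [];
for (let i = 0; i < totalChunks; i++) {
  listLines.push(`file '/tmp/audio_${videoId}_${i}.mp3'`); // hypothetical naming
}

return {
  json: {
    // Write this to /tmp/concat_<videoId>.txt with a Write Binary File node
    concatList: listLines.join('\n'),
    // -c copy joins identically encoded MP3 chunks without re-encoding
    ffmpegCommand: `ffmpeg -f concat -safe 0 -i /tmp/concat_${videoId}.txt -c copy /tmp/audio_${videoId}.mp3`
  }
};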

Why this works:

Voice synthesis is often the bottleneck in video production. Automating this step saves 10-15 minutes per video. Using channel-specific voice IDs maintains brand consistency across your content. Storing audio files in cloud storage (not just local temp directories) enables retry logic if downstream steps fail—you don't need to regenerate expensive API calls.

Step 4: Assemble Videos with Visuals

This phase combines your voiceover with stock footage, images, or AI-generated visuals to create the final video.

Configure video assembly API

The specific implementation depends on your chosen tool (Pictory, Runway, or custom FFmpeg):

Option A: Pictory API

// Pictory requires script + audio URL
const script = $input.item.json.Script;
const audioUrl = $input.item.json['Audio URL'];

return {
  json: {
    script: script,
    audio_url: audioUrl,
    video_settings: {
      resolution: "1920x1080",
      format: "mp4",
      scene_duration: "auto", // Pictory auto-matches to audio
      stock_footage: true,
      branding: {
        logo_url: "https://yourbrand.com/logo.png",
        position: "bottom-right"
      }
    }
  }
};

Option B: Custom FFmpeg pipeline

For more control, use FFmpeg with stock footage APIs:

  1. Query Pexels/Unsplash API for relevant stock footage based on script keywords
  2. Download 5-10 video clips
  3. Use FFmpeg to create slideshow with voiceover:
# Clips must share codec/resolution for concat; re-encode or scale them first,
# and make sure total clip duration covers the voiceover length
ffmpeg -i audio.mp3 \
  -i clip1.mp4 -i clip2.mp4 -i clip3.mp4 \
  -filter_complex "[1:v][2:v][3:v]concat=n=3:v=1:a=0[outv]" \
  -map "[outv]" -map 0:a \
  -c:v libx264 -c:a aac \
  output.mp4

Poll for completion

Video assembly takes 5-15 minutes. Implement polling logic:

// Check job status every 30 seconds
const jobId = $input.item.json.jobId;
const maxAttempts = 40; // 20 minutes max
const currentAttempt = $input.item.json.attempt || 0;

if (currentAttempt >= maxAttempts) {
  throw new Error('Video assembly timeout');
}

// Make status check API call
const status = $json.status; // From API response

if (status === 'completed') {
  return {
    json: {
      videoUrl: $json.video_url,
      completed: true
    }
  };
} else if (status === 'failed') {
  throw new Error('Video assembly failed: ' + $json.error);
} else {
  // Still processing, wait and retry
  return {
    json: {
      jobId: jobId,
      attempt: currentAttempt + 1,
      completed: false
    }
  };
}

Use n8n's "Wait" node or "Split In Batches" with loop-back to implement polling.

Why this approach:

Video assembly is the most time-consuming step (5-15 minutes per video). Polling logic prevents your workflow from timing out while waiting for completion. Storing the job ID allows you to resume if n8n restarts. Using stock footage APIs (Pexels, Unsplash) keeps costs low compared to custom video generation. FFmpeg gives you maximum control over branding, transitions, and visual style.

Step 5: Upload to YouTube with Multi-Channel Logic

This phase publishes your finished videos to the correct YouTube channels with proper metadata and scheduling.

Configure YouTube Data API v3 node

  1. Add HTTP Request node (or YouTube node if available)
  2. Set method to POST
  3. URL: https://www.googleapis.com/upload/youtube/v3/videos?part=snippet,status
  4. Add OAuth2 authentication (requires Google Cloud Console setup)
  5. Set request body format to multipart/form-data

Build channel-specific upload request:

// Get video file and metadata
const videoUrl = $input.item.json['Video URL'];
const channelName = $input.item.json['Channel Name'];
const title = $input.item.json.Topic;
const description = $input.item.json.Metadata;
const uploadDate = $input.item.json['Upload Date/Time'];

// Map channel names to OAuth tokens (stored securely in n8n credentials)
const channelTokens = {
  'Tech Channel': 'credential_id_1',
  'Finance Channel': 'credential_id_2',
  'Lifestyle Channel': 'credential_id_3'
};

// Determine if video should be private, unlisted, or scheduled
const now = new Date();
const scheduledDate = new Date(uploadDate);
const privacyStatus = scheduledDate > now ? 'private' : 'public';

// Simple tag helper (assumes a "Tags:" line inside the description metadata)
function extractTags(text) {
  const match = (text || '').match(/Tags:\s*(.+)/i);
  return match ? match[1].split(',').map(t => t.trim()) : [];
}

return {
  json: {
    snippet: {
      title: title,
      description: description,
      tags: extractTags(description),
      categoryId: "22" // People & Blogs, adjust per channel
    },
    status: {
      privacyStatus: privacyStatus,
      // publishAt is only honored when privacyStatus is 'private'
      ...(privacyStatus === 'private' ? { publishAt: scheduledDate.toISOString() } : {}),
      selfDeclaredMadeForKids: false
    },
    videoUrl: videoUrl, // n8n downloads this file for the upload request
    credentialId: channelTokens[channelName] // routes to the right channel
  }
};

Handle OAuth token rotation

YouTube API requires separate OAuth tokens for each channel. Store these as n8n credentials:

  1. Go to n8n Settings → Credentials
  2. Create "OAuth2 API" credential for each YouTube channel
  3. Complete OAuth flow for each channel (requires channel owner authorization)
  4. Route each item to the upload node holding that channel's credential (a routing sketch follows this list), since a node binds to one credential at a time
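
Since an n8n node binds to a single credential, one workable pattern is a Switch node that fans items out to per-channel upload nodes, each configured with its own OAuth2 credential. A sketch of the routing Code node (channel names are illustrative):

// Tag each item with a route index; a Switch node's rules (0, 1, 2, ...)
// then fan out to upload nodes holding each channel's OAuth2 credential.
const routes = {
  'Tech Channel': 0,
  'Finance Channel': 1,
  'Lifestyle Channel': 2
};

const channelName = $input.item.json['Channel Name'];
if (!(channelName in routes)) {
  throw new Error(`No upload route configured for channel: ${channelName}`);
}

return {
  json: {
    ...$input.item.json,
    uploadRoute: routes[channelName]
  }
};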

Implement upload retry logic:

// YouTube uploads fail ~5% of the time due to network issues
const maxRetries = 3;
const retryCount = $input.item.json.retryCount || 0;

try {
  // Attempt upload; uploadVideo() is a placeholder for the HTTP Request
  // call or sub-workflow that performs the actual YouTube upload
  const response = await uploadVideo();
  
  return {
    json: {
      videoId: response.id,
      uploadSuccess: true
    }
  };
} catch (error) {
  if (retryCount < maxRetries) {
    // Exponential backoff: wait 2^retryCount minutes
    const waitMinutes = Math.pow(2, retryCount);
    
    return {
      json: {
        retryCount: retryCount + 1,
        waitUntil: Date.now() + (waitMinutes * 60 * 1000),
        error: error.message
      }
    };
  } else {
    // Max retries exceeded, log to error sheet
    throw new Error('Upload failed after 3 retries: ' + error.message);
  }
}

Why this works:

Multi-channel YouTube automation requires managing separate OAuth tokens for each channel. Storing these as n8n credentials (not hardcoded in nodes) keeps your workflow secure and portable. Scheduled publishing uses YouTube's native publishAt parameter, which is more reliable than trying to time your workflow execution perfectly. Retry logic with exponential backoff handles transient network failures without manual intervention. Logging upload success/failure to Google Sheets creates an audit trail for troubleshooting.

Workflow Architecture Overview

This workflow consists of 35-50 nodes organized into 6 main sections:

  1. Data ingestion (Nodes 1-5): Schedule trigger polls Google Sheets every 30 minutes, filters for pending videos, and splits into parallel processing streams
  2. Script generation (Nodes 6-12): AI API calls with prompt engineering, response parsing, and status updates to Sheets
  3. Voice synthesis (Nodes 13-20): Text-to-speech API integration with script chunking for long content, audio file storage, and URL logging
  4. Video assembly (Nodes 21-30): Visual generation API calls, polling for completion, file download, and quality checks
  5. Upload delivery (Nodes 31-42): Multi-channel YouTube uploads with OAuth token routing, metadata formatting, and scheduling logic
  6. Error management (Nodes 43-50): Try-catch wrappers, retry queues, failure logging to Sheets, and Telegram/email alerts

Execution flow:

  • Trigger: Schedule node runs every 30 minutes (configurable)
  • Average run time: 15-25 minutes per batch of 5-10 videos (most time spent waiting for video assembly)
  • Key dependencies: Google Sheets API, OpenAI/Claude, ElevenLabs, video assembly tool, YouTube Data API v3

Critical nodes:

  • Split In Batches: Processes videos in groups of 3-5 to avoid overwhelming APIs with parallel requests
  • Function Node (Channel Router): Maps channel names to OAuth credentials and voice IDs dynamically
  • Wait Node: Implements polling for video assembly completion without blocking workflow
  • Error Trigger: Catches failures and routes to error logging subworkflow
  • Google Sheets Update: Maintains status tracking and creates audit trail

The complete n8n workflow JSON template is available at the bottom of this article.

Key Configuration Details

Critical Configuration Settings

OpenAI/Claude Integration

Required fields:

  • API Key: Your OpenAI or Claude API key (store in n8n credentials, not hardcoded)
  • Model: gpt-4 for best quality, gpt-3.5-turbo for cost savings
  • Temperature: 0.7 (balance between consistency and creativity)
  • Max Tokens: 3000 (approximately 2000 words, adjust based on video length)

Common issues:

  • Using gpt-4-32k when you don't have access → Results in 404 errors
  • Setting temperature too high (>0.9) → Scripts become incoherent
  • Not handling rate limits → Workflow fails silently once you exceed your requests-per-minute cap

ElevenLabs Voice Synthesis

Required fields:

  • API Key: Your ElevenLabs API key
  • Voice ID: Unique identifier for each voice (find in ElevenLabs dashboard)
  • Model: eleven_monolingual_v1 (fastest) or eleven_multilingual_v2 (better quality)
  • Stability: 0.5 (lower = more expressive, higher = more consistent)
  • Similarity Boost: 0.75 (how closely to match original voice)

Character limits:

  • Free tier: 10,000 characters/month
  • Starter: 30,000 characters/month
  • Creator: 100,000 characters/month
  • Pro: 500,000 characters/month

For a 20-channel operation producing 5 videos/day with 2000-word scripts:

  • Daily usage: ~100,000 characters (5 videos × 2,000 words × ~10 characters/word, a conservative estimate that includes spaces and punctuation)
  • Monthly usage: 3,000,000 characters
  • Required plan: Pro or higher

YouTube Data API v3

Quota management:

  • Default quota: 10,000 units/day
  • Video upload cost: 1,600 units
  • Maximum uploads per day: 6 videos (10,000 ÷ 1,600 = 6.25; see the budgeting sketch below)
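
The same arithmetic as a small budgeting sketch; the reserve for non-upload calls is an assumption:

// Budget YouTube API quota before scheduling a day's uploads.
const DAILY_QUOTA = 10000;  // default units/day
const UPLOAD_COST = 1600;   // units per videos.insert call
const RESERVED = 500;       // assumed reserve for status checks and updates

const maxUploads = Math.floor((DAILY_QUOTA - RESERVED) / UPLOAD_COST);
// => 5 uploads/day on the default quota once overhead is reserved
return [{ json: { maxUploads } }];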

For 20-channel operation:

  • Request quota increase to 100,000 units/day (allows 60 uploads/day)
  • Submit request in Google Cloud Console → YouTube Data API → Quotas
  • Approval takes 2-5 business days

OAuth token refresh:
YouTube OAuth tokens expire after 1 hour. n8n handles refresh automatically if configured correctly:

  1. Use "OAuth2 API" credential type (not "Header Auth")
  2. Set token refresh URL: https://oauth2.googleapis.com/token
  3. Enable "Auto Refresh" option
  4. Store refresh token securely (never commit to version control)

Why this approach:

Proper credential management prevents 90% of workflow failures. Storing API keys in n8n credentials (not hardcoded in nodes) allows you to rotate keys without editing 50 nodes. Understanding quota limits prevents unexpected failures when scaling to 20 channels. OAuth token refresh automation eliminates the #1 cause of upload failures: expired authentication.

Variables to customize:

  • batchSize: Process 3-5 videos at a time to balance speed vs. API rate limits
  • pollingInterval: Check video assembly status every 30-60 seconds
  • retryDelay: Wait 2^n minutes between retries (exponential backoff)
  • errorThreshold: Alert after 3 consecutive failures (not every single error)

Testing & Validation

Test each component independently before connecting the full pipeline:

Phase 1: Google Sheets integration

  1. Create test spreadsheet with 2-3 sample video topics
  2. Run workflow with only Sheets read node active
  3. Verify data parsing in n8n execution log
  4. Check that status columns update correctly

Phase 2: Script generation

  1. Disable all nodes except Sheets read → Script generation → Sheets update
  2. Run workflow with 1 test topic
  3. Review generated script quality in Google Sheets
  4. Verify script meets length requirements (1800-2200 words for 8-12 minute video)
  5. Test error handling by using invalid API key

Phase 3: Voice synthesis

  1. Use pre-written script (not AI-generated) to isolate voice testing
  2. Generate audio for 30-second test script first
  3. Listen to audio quality and voice consistency
  4. Test long script handling (>5000 characters) to verify chunking logic
  5. Verify audio file storage (cloud storage URL should be accessible)

Phase 4: Video assembly

  1. Use pre-generated audio file to test video creation
  2. Start with 30-second test video to minimize wait time
  3. Verify polling logic doesn't timeout
  4. Check video quality, resolution, and branding elements
  5. Test failure scenarios (invalid audio URL, API timeout)

Phase 5: YouTube upload

  1. Create test YouTube channel (don't use production channels)
  2. Upload 1 test video as "Private" (not public)
  3. Verify metadata (title, description, tags) matches spreadsheet
  4. Test scheduled publishing (set upload date 1 hour in future)
  5. Confirm OAuth token refresh works (wait 2 hours, upload again)

Common issues and solutions:

| Issue | Symptom | Solution |
|---|---|---|
| Workflow timeout | Execution stops after 5 minutes | Increase timeout in n8n settings (cloud: contact support; self-hosted: edit environment variables) |
| API rate limit | "429 Too Many Requests" error | Add Wait node between API calls (2-5 seconds), implement exponential backoff |
| OAuth token expired | "401 Unauthorized" on YouTube upload | Verify OAuth2 credential has "Auto Refresh" enabled, check refresh token is valid |
| Video assembly stuck | Polling loop runs 40+ times | Check video assembly API status manually, verify job ID is correct, add timeout logic |
| Script too short | Generated script is 500 words instead of 2000 | Increase max_tokens to 3000+, add length requirement to prompt |
| Audio quality poor | Voice sounds robotic or choppy | Adjust stability (0.3-0.7) and similarity boost (0.6-0.9), try different voice IDs |

Running evaluations:

Create a test suite in Google Sheets with 10 diverse video topics:

  1. Short topic (3 words)
  2. Long topic (20+ words with specific requirements)
  3. Technical topic (requires accuracy)
  4. Creative topic (requires engaging storytelling)
  5. Controversial topic (tests content moderation)
  6. Multi-language topic (if supporting international channels)
  7. Trending topic (time-sensitive content)
  8. Evergreen topic (timeless content)
  9. Tutorial topic (step-by-step instructions)
  10. Opinion piece (requires strong voice/perspective)

Run the full pipeline on all 10 topics and measure (a scoring sketch follows the list):

  • Script generation success rate (target: 95%+)
  • Voice synthesis success rate (target: 98%+)
  • Video assembly success rate (target: 90%+)
  • Upload success rate (target: 95%+)
  • End-to-end success rate (target: 85%+)
  • Average execution time (target: <20 minutes per video)
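
A scoring sketch for that test suite, reading result rows back from the sheet (status values match the pipeline's conventions above):

// Compute per-stage success rates from the test-run rows.
const rows = $input.all().map(item => item.json);
const pct = n => rows.length ? +(100 * n / rows.length).toFixed(1) : 0;

const scriptOk = rows.filter(r => r['Script Status'] === 'generated').length;
const voiceOk = rows.filter(r => r['Voice Status'] === 'generated').length;
const uploaded = rows.filter(r => r['Video Status'] === 'uploaded').length;

return [{
  json: {
    scriptSuccessRate: pct(scriptOk),   // target: 95%+
    voiceSuccessRate: pct(voiceOk),     // target: 98%+
    endToEndSuccessRate: pct(uploaded), // target: 85%+
    totalVideos: rows.length
  }
}];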

Deployment Considerations

Production Deployment Checklist

| Area | Requirement | Why It Matters |
|---|---|---|
| Error Handling | Retry logic with exponential backoff on all API calls | Prevents data loss on transient failures; reduces manual intervention by 90% |
| Monitoring | Workflow health checks every 5 minutes via webhook | Detect failures within 5 minutes vs. discovering issues 3+ days later when content doesn't publish |
| Documentation | Node-by-node comments explaining logic and configuration | Reduces modification time by 2-4 hours when customizing workflow or troubleshooting |
| Credential Security | All API keys stored in n8n credentials (not hardcoded) | Prevents accidental exposure in workflow exports or logs |
| Quota Management | API usage tracking in Google Sheets | Avoid surprise service interruptions when hitting daily limits |
| Backup Strategy | Daily workflow JSON exports to Google Drive | Recover from accidental deletions or configuration changes in <10 minutes |
| Alert Configuration | Telegram/email alerts for failures after 3 consecutive errors | Balance between noise (alerting on every transient error) and blindness (not knowing system is down) |
| Scaling Limits | Test with 20 simultaneous video processing jobs | Identify bottlenecks (API rate limits, n8n execution limits) before production load |

Error handling strategy:

Implement three-tier error handling:

Tier 1: Retry with exponential backoff

  • Transient errors (network timeouts, 429 rate limits, 5xx server errors)
  • Retry 3 times with delays: 1 minute, 2 minutes, 4 minutes
  • 95% of errors resolve automatically at this tier

Tier 2: Failure queue

  • Persistent errors after 3 retries
  • Move video to "Failed" status in Google Sheets
  • Add to manual review queue with error details
  • Alert operator via Telegram/email
  • 4% of errors require manual intervention (invalid API keys, quota exceeded, content policy violations)

Tier 3: Circuit breaker (sketch after this list)

  • Catastrophic failures (API service down, n8n instance crashed)
  • Pause workflow execution after 5 consecutive failures
  • Send urgent alert to operator
  • Require manual workflow restart after investigating root cause
  • 1% of errors are systemic issues requiring immediate attention
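
A minimal circuit-breaker sketch using n8n's workflow static data (note that static data only persists in production executions, not manual test runs):

// Count consecutive failures across runs and trip the breaker after 5.
const staticData = $getWorkflowStaticData('global');
staticData.consecutiveFailures = staticData.consecutiveFailures || 0;

const failed = $input.item.json.status === 'error';
staticData.consecutiveFailures = failed ? staticData.consecutiveFailures + 1 : 0;

// A downstream IF node routes circuitOpen items to an urgent alert
// and deactivates the workflow pending manual investigation.
return {
  json: {
    circuitOpen: staticData.consecutiveFailures >= 5,
    consecutiveFailures: staticData.consecutiveFailures
  }
};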

Monitoring recommendations:

Set up external monitoring (not just n8n internal logs):

  1. Uptime monitoring: Use UptimeRobot or similar to ping a webhook every 5 minutes (a health-check sketch follows this list)

    • Create n8n webhook that returns "OK" if last successful execution was <10 minutes ago
    • Alert if webhook doesn't respond or returns error
  2. Execution tracking: Log every workflow run to Google Sheets

    • Timestamp, video count processed, success/failure status, execution time
    • Create Google Sheets chart showing daily success rate trend
    • Alert if success rate drops below 85% for 24 hours
  3. API quota tracking: Monitor remaining API quotas

    • Query OpenAI/ElevenLabs/YouTube APIs for usage stats
    • Log to Google Sheets daily
    • Alert when approaching 80% of daily quota
  4. Output validation: Verify videos actually published to YouTube

    • Query YouTube Data API for recent uploads
    • Compare against Google Sheets upload log
    • Alert if discrepancy detected (videos marked "uploaded" but not on YouTube)
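
A sketch of the health-check logic behind that webhook, assuming the pipeline stamps its last successful run into workflow static data (if the webhook lives in a separate workflow, read the timestamp from a Sheets cell instead):

// Health-check webhook body: report OK only if the pipeline succeeded recently.
const staticData = $getWorkflowStaticData('global');
const lastSuccess = staticData.lastSuccess || 0; // epoch ms, stamped by the pipeline
const ageMinutes = (Date.now() - lastSuccess) / 60000;

// A Respond to Webhook node can map 'STALE' to an HTTP 500 so UptimeRobot alerts
return [{
  json: {
    status: ageMinutes < 10 ? 'OK' : 'STALE',
    minutesSinceLastSuccess: Math.round(ageMinutes)
  }
}];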

Customization ideas for production:

Priority queue system:
Add "Priority" column to Google Sheets (1-5 scale). Modify workflow to process high-priority videos first:

// Sort videos by priority before processing (coerce Sheets values to numbers)
const videos = $input.all();
videos.sort((a, b) => Number(b.json.Priority) - Number(a.json.Priority));
return videos;

A/B testing framework:
Generate 2-3 script variations per topic, create videos for each, upload as unlisted, then analyze performance after 24 hours to determine which to make public.
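
A sketch of the winner-selection step once per-variant stats are collected; the field names are assumptions:

// Pick the best-performing variant after the 24-hour window.
const variants = $input.all().map(item => item.json);
// Expected shape: { videoId: 'abc', variant: 'A', views: 412, avgViewDurationSec: 97 }
if (!variants.length) return [];

const score = v => (v.views || 0) * (v.avgViewDurationSec || 0);
const winner = variants.reduce((best, v) => (score(v) > score(best) ? v : best));

return [{ json: { winnerVideoId: winner.videoId, winningVariant: winner.variant } }];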

Thumbnail automation:
Add Canva API integration to generate custom thumbnails:

  • Extract key phrases from script
  • Query Unsplash for relevant background image
  • Overlay text using Canva template
  • Upload thumbnail to YouTube automatically

Metadata optimization:
Use ChatGPT to generate SEO-optimized titles, descriptions, and tags:

  • Analyze top-performing videos in your niche
  • Extract common keywords and patterns
  • Generate metadata that matches high-performing content
  • A/B test different metadata strategies

Use Cases & Variations

Real-World Use Cases

Use Case 1: Educational Content Network

  • Industry: Online education, course creators
  • Scale: 10 channels covering different subjects (math, science, history, etc.)
  • Video frequency: 3 videos per channel per week (30 videos/week total)
  • Modifications needed:
    • Add quiz generation step (extract key concepts from script, create multiple-choice questions)
    • Integrate with learning management system (LMS) to auto-publish course materials
    • Generate PDF transcripts for accessibility
    • Add chapter markers to videos based on script sections
  • Expected ROI: Reduces content production time from 40 hours/week to 5 hours/week (87% reduction)

Use Case 2: Affiliate Marketing Video Farm

  • Industry: E-commerce, affiliate marketing
  • Scale: 20 channels reviewing products in different niches (tech, beauty, home goods, etc.)
  • Video frequency: 5 videos per channel per day (100 videos/day total)
  • Modifications needed:
    • Integrate with Amazon Product Advertising API to fetch product details
    • Add affiliate link injection to video descriptions
    • Generate comparison tables for multi-product reviews
    • Implement thumbnail A/B testing (upload 3 variations, analyze click-through rate)
    • Add automated short-form clip extraction for YouTube Shorts/TikTok
  • Expected ROI: Generate $50,000-$100,000/month in affiliate commissions with 10 hours/week management time

Use Case 3: News Aggregation & Commentary

  • Industry: Media, journalism, political commentary
  • Scale: 5 channels covering different news categories (politics, business, tech, sports, entertainment)
  • Video frequency: 10 videos per channel per day (50 videos/day total)
  • Modifications needed:
    • Integrate with news APIs (NewsAPI, Google News) to fetch trending stories
    • Add sentiment analysis to determine video tone (neutral, critical, supportive)
    • Implement real-time triggering (new video within 30 minutes of breaking news)
    • Add fact-checking step (query fact-checking APIs before publishing)
    • Generate live stream compilations (stitch together multiple news videos)
  • Expected ROI: Monetize breaking news 10x faster than manual production (30 minutes vs. 5 hours)

Use Case 4: Meditation & Wellness Content

  • Industry: Health & wellness, mental health
  • Scale: 3 channels (meditation, sleep stories, affirmations)
  • Video frequency: 1 video per channel per day (3 videos/day total)
  • Modifications needed:
    • Add background music generation (integrate with Mubert or similar AI music API)
    • Implement voice variation (use different voices for different meditation styles)
    • Generate 10-minute, 20-minute, and 30-minute versions of same content
    • Add binaural beats or ASMR elements
    • Create Spotify podcast versions automatically
  • Expected ROI: Build passive income stream generating $5,000-$10,000/month with 2 hours/week management

Use Case 5: Financial Analysis & Stock Market Commentary

  • Industry: Finance, investing
  • Scale: 5 channels covering different markets (US stocks, crypto, forex, commodities, real estate)
  • Video frequency: 2 videos per channel per day (10 videos/day total)
  • Modifications needed:
    • Integrate with financial data APIs (Alpha Vantage, Yahoo Finance) to fetch real-time market data
    • Add chart generation (create price charts, technical indicators)
    • Implement disclaimer injection (legal compliance for financial advice)
    • Generate daily market summary videos (automated at 4 PM after market close)
    • Add portfolio tracking (analyze viewer portfolios, generate personalized recommendations)
  • Expected ROI: Monetize financial expertise 24/7 without manual video creation (build $20,000-$50,000/month subscription business)

Customizations & Extensions

Customizing This Workflow

Alternative Integrations

Instead of OpenAI for script generation:

  • Claude (Anthropic): Best for longer scripts (100k token context window vs. 8k for GPT-4) - requires changing API endpoint and request format (2 nodes)
  • Cohere: Better for non-English content - swap out OpenAI node for Cohere node (1 node change)
  • Local LLM (Ollama): Use when data privacy is critical - requires self-hosted Ollama server, change API endpoint to local URL (2 nodes)

Instead of ElevenLabs for voice synthesis:

  • PlayHT: Better voice cloning quality, higher character limits - swap ElevenLabs node for PlayHT HTTP Request node (3 nodes)
  • Murf.ai: Better for non-English languages (120+ voices in 20+ languages) - change API endpoint and voice ID mapping (4 nodes)
  • Google Cloud Text-to-Speech: Cheapest option ($4 per 1 million characters vs. $30 for ElevenLabs) - requires Google Cloud project setup (5 nodes)
  • Azure Speech Services: Best for enterprise (99.9% SLA, unlimited scaling) - swap authentication to Azure OAuth (6 nodes)

Instead of Pictory for video assembly:

  • Runway Gen-2: Best for AI-generated visuals (not stock footage) - requires different API integration, longer processing time (8 nodes)
  • Custom FFmpeg pipeline: Maximum control over visual style, branding, transitions - requires FFmpeg server setup, more complex logic (15 nodes)
  • Synthesia: Best for talking-head videos with AI avatars - different API structure, add avatar selection logic (10 nodes)

Workflow Extensions

Add automated short-form clip extraction:

  • Add speech-to-text node (Deepgram, AssemblyAI) to transcribe video
  • Use AI to identify "viral moments" (high-energy segments, surprising facts, emotional peaks)
  • Extract 30-60 second clips using FFmpeg
  • Generate vertical video format (9:16 for YouTube Shorts/TikTok)
  • Auto-publish to Shorts with optimized metadata
  • Nodes needed: +12 (Speech-to-text, Function for moment detection, FFmpeg processing, Shorts upload)
  • Expected impact: 3-5x increase in channel reach through Shorts algorithm

Add thumbnail A/B testing:

  • Generate 3 thumbnail variations per video using Canva API
  • Upload video as unlisted with thumbnail A
  • After 100 views, swap to thumbnail B
  • After 200 views, swap to thumbnail C
  • Analyze click-through rate (CTR) for each thumbnail
  • Republish video with winning thumbnail
  • Nodes needed: +15 (Canva integration, YouTube Analytics API, CTR calculation, thumbnail swap logic)
  • Expected impact: 20-40% increase in CTR (industry average is 2-10%, optimized thumbnails achieve 4-15%)

Scale to handle enterprise volume (100+ videos/day; queue sketch after this list):

  • Replace Google Sheets with PostgreSQL or Supabase (better performance for 1000+ rows)
  • Implement queue management system (Redis or RabbitMQ)
  • Add worker pool architecture (multiple n8n instances processing in parallel)
  • Implement distributed file storage (AWS S3 or Google Cloud Storage)
  • Add caching layer for API responses (reduce duplicate API calls by 60%)
  • Nodes needed: +30 (Database connectors, queue management, worker coordination, caching logic)
  • Performance improvement: Process 100 videos in 2 hours vs. 10 hours (5x faster)
  • Cost reduction: $0.15 per video vs. $0.50 per video (70% savings through caching and batch processing)
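
A minimal sketch of the queue layer using ioredis (assumes a reachable Redis instance and, for n8n Code nodes, external modules enabled via NODE_FUNCTION_ALLOW_EXTERNAL; it could equally run in a standalone worker):

// Minimal Redis-backed job queue so multiple workers can share the load.
const Redis = require('ioredis');
const redis = new Redis('redis://localhost:6379'); // assumed Redis location

// Producer: enqueue one job per pending video
async function enqueue(video) {
  await redis.lpush('video-jobs', JSON.stringify(video));
}

// Worker: blocking pop, so multiple n8n instances can pull from one queue
async function nextJob() {
  const [, payload] = await redis.brpop('video-jobs', 0);
  return JSON.parse(payload);
}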

Add content moderation:

  • Integrate with OpenAI Moderation API to scan scripts for policy violations (request sketch after this list)
  • Flag videos containing sensitive topics (violence, hate speech, adult content)
  • Implement human review queue for flagged content
  • Add appeal process for false positives
  • Generate compliance reports for advertisers
  • Nodes needed: +8 (Moderation API, flagging logic, review queue, reporting)
  • Expected impact: Reduce channel strikes by 90% (proactive moderation vs. reactive takedowns)
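
A sketch of the moderation step: one Code node builds the request for an HTTP Request node pointed at OpenAI's moderation endpoint, and a second parses the verdict:

// Node 1: build the body for POST https://api.openai.com/v1/moderations
const script = $input.item.json.Script;
return { json: { input: script } };

// Node 2 (after the HTTP Request node): route flagged scripts to review.
// const verdict = $json.results[0];
// return { json: { flagged: verdict.flagged, categories: verdict.categories } };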

Integration possibilities:

| Add This | To Get This | Complexity |
|---|---|---|
| Slack integration | Real-time alerts in team channels when videos publish or fail | Easy (2 nodes: Slack webhook) |
| Airtable sync | Better data visualization, team collaboration on video planning | Medium (5 nodes: Airtable API read/write, data transformation) |
| Notion database | Centralized content calendar with rich formatting, embeds | Medium (6 nodes: Notion API, page creation, database updates) |
| Zapier webhooks | Connect to 5,000+ apps not natively supported by n8n | Easy (3 nodes: Webhook trigger, data formatting, response handling) |
| Google Analytics | Track video performance, viewer demographics, traffic sources | Medium (7 nodes: YouTube Analytics API, GA4 integration, data aggregation) |
| Discord community | Auto-post new videos to Discord server, engage with audience | Easy (2 nodes: Discord webhook) |
| Patreon integration | Auto-deliver exclusive content to paying subscribers | Medium (8 nodes: Patreon API, subscriber filtering, private video uploads) |
| Shopify store | Auto-create product pages for affiliate products featured in videos | Hard (12 nodes: Shopify API, product data extraction, inventory sync) |
| WordPress blog | Auto-publish blog posts with video embeds and transcripts | Medium (6 nodes: WordPress API, transcript generation, SEO optimization) |
| Email marketing (Mailchimp/ConvertKit) | Auto-send new video notifications to subscriber list | Easy (4 nodes: Email service API, subscriber segmentation, template formatting) |

Next Steps

Get Started Today

Ready to automate your YouTube content pipeline?

  1. Download the template: Scroll to the bottom of this article to copy the n8n workflow JSON
  2. Import to n8n: Go to Workflows → Import from URL or File, paste the JSON
  3. Configure your services: Add your API credentials for Google Sheets, OpenAI/Claude, ElevenLabs, video assembly tool, and YouTube Data API v3
  4. Set up your Google Sheet: Create spreadsheet with required columns (Video ID, Channel Name, Topic, Script Status, Voice Status, Video Status, Upload Date/Time, Metadata, Thumbnail URL, Error Log)
  5. Test with 1-2 videos: Run workflow manually with test data before enabling schedule trigger
  6. Deploy to production: Set your schedule (every 30 minutes recommended) and activate the workflow
  7. Monitor for 48 hours: Watch execution logs and error rates to ensure stability
  8. Scale to full channel count: Once stable, add all 20 channels to your spreadsheet

Recommended implementation timeline:

  • Week 1: Build a single-channel pipeline end to end
  • Week 2: Expand to 3-5 channels, implement error handling
  • Week 3: Scale to full 20 channels, optimize performance
  • Week 4: Add extensions (short-form clips, thumbnail A/B testing, analytics)

Need help customizing this workflow for your specific needs? Schedule an intro call with Atherial at https://atherial.ai/contact.


Core Workflow JSON Excerpt (Data Ingestion and Script Generation)

{
  "name": "Multi-Channel YouTube AI Pipeline",
  "nodes": [
    {
      "parameters": {
        "rule": {
          "interval": [
            {
              "field": "minutes",
              "minutesInterval": 30
            }
          ]
        }
      },
      "name": "Schedule Trigger",
      "type": "n8n-nodes-base.scheduleTrigger",
      "typeVersion": 1,
      "position": [250, 300]
    },
    {
      "parameters": {
        "operation": "read",
        "sheetName": "Video Queue",
        "range": "A2:J100",
        "options": {
          "returnAll": false,
          "limit": 10
        }
      },
      "name": "Read Google Sheets",
      "type": "n8n-nodes-base.googleSheets",
      "typeVersion": 1,
      "position": [450, 300]
    },
    {
      "parameters": {
        "functionCode": "// Filter for pending videos only
const items = $input.all();
const pending = items.filter(item => item.json['Video Status'] === 'pending');
return pending;"
      },
      "name": "Filter Pending Videos",
      "type": "n8n-nodes-base.function",
      "typeVersion": 1,
      "position": [650, 300]
    },
    {
      "parameters": {
        "functionCode": "// Build script generation prompt
const topic = $input.item.json.Topic;
const channelName = $input.item.json['Channel Name'];

const prompt = `Create a YouTube video script for the topic: \"${topic}\"

Requirements:
- Channel style: ${channelName}
- Length: 8-12 minutes (1800-2200 words)
- Include hook in first 15 seconds
- Add 3-5 key points with examples
- End with clear call-to-action
- Write in conversational, engaging tone
- Format with clear section breaks

Output only the script text, no meta-commentary.`;

return {
  json: {
    model: \"gpt-4\",
    messages: [
      {
        role: \"system\",
        content: \"You are a professional YouTube scriptwriter.\"
      },
      {
        role: \"user\",
        content: prompt
      }
    ],
    temperature: 0.7,
    max_tokens: 3000,
    videoId: $input.item.json['Video ID'],
    channelName: channelName
  }
};"
      },
      "name": "Build Script Prompt",
      "type": "n8n-nodes-base.function",
      "typeVersion": 1,
      "position": [850, 300]
    },
    {
      "parameters": {
        "method": "POST",
        "url": "https://api.openai.com/v1/chat/completions",
        "authentication": "predefinedCredentialType",
        "nodeCredentialType": "openAiApi",
        "sendBody": true,
        "bodyParameters": {
          "parameters": [
            {
              "name": "model",
              "value": "={{$json.model}}"
            },
            {
              "name": "messages",
              "value": "={{JSON.stringify($json.messages)}}"
            },
            {
              "name": "temperature",
              "value": "={{$json.temperature}}"
            },
            {
              "name": "max_tokens",
              "value": "={{$json.max_tokens}}"
            }
          ]
        }
      },
      "name": "Generate Script (OpenAI)",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 1,
      "position": [1050, 300]
    },
    {
      "parameters": {
        "functionCode": "// Extract script from API response
const script = $json.choices[0].message.content;
const videoId = $input.first().json.videoId;
const channelName = $input.first().json.channelName;

return {
  json: {
    videoId: videoId,
    channelName: channelName,
    script: script,
    scriptStatus: 'generated'
  }
};"
      },
      "name": "Parse Script Response",
      "type": "n8n-nodes-base.function",
      "typeVersion": 1,
      "position": [1250, 300]
    },
    {
      "parameters": {
        "operation": "update",
        "sheetName": "Video Queue",
        "options": {
          "lookupColumn": "Video ID",
          "lookupValue": "={{$json.videoId}}"
        },
        "columns": {
          "mappings": [
            {
              "column": "Script",
              "value": "={{$json.script}}"
            },
            {
              "column": "Script Status",
              "value": "generated"
            }
          ]
        }
      },
      "name": "Update Script Status",
      "type": "n8n-nodes-base.googleSheets",
      "typeVersion": 1,
      "position": [1450, 300]
    }
  ],
  "connections": {
    "Schedule Trigger": {
      "main": [
        [
          {
            "node": "Read Google Sheets",
            "type": "main",

Complete N8N Workflow Template

Copy the JSON below and import it into your N8N instance via Workflows → Import from File

{
  "name": "Multi-Channel YouTube AI Pipeline Automation",
  "nodes": [
    {
      "id": "webhook-trigger",
      "name": "Webhook Trigger",
      "type": "n8n-nodes-base.webhook",
      "position": [
        50,
        100
      ],
      "parameters": {
        "path": "youtube-pipeline",
        "httpMethod": "POST",
        "responseMode": "responseNode"
      },
      "typeVersion": 2
    },
    {
      "id": "fetch-video-data",
      "name": "Fetch Video Data",
      "type": "n8n-nodes-base.googleSheets",
      "position": [
        250,
        100
      ],
      "parameters": {
        "range": "A:H",
        "options": {},
        "operation": "read",
        "sheetName": {
          "__rl": true,
          "value": "{{$json.sheetName || 'Videos'}}"
        },
        "documentId": {
          "__rl": true,
          "value": "{{$json.spreadsheetId}}"
        }
      },
      "typeVersion": 4
    },
    {
      "id": "parse-video-data",
      "name": "Parse Video Data",
      "type": "n8n-nodes-base.code",
      "position": [
        450,
        100
      ],
      "parameters": {
        "mode": "runOnceForAllItems",
        "jsCode": "// Map sheet data to structured video objects\nconst videos = $input.all();\nreturn videos.map((row, index) => ({\n  id: index + 1,\n  channelId: row.json.channelId,\n  channelName: row.json.channelName,\n  title: row.json.title,\n  description: row.json.description,\n  keywords: row.json.keywords,\n  scriptContent: row.json.scriptContent,\n  thumbnailStyle: row.json.thumbnailStyle,\n  status: 'pending',\n  retryCount: 0,\n  maxRetries: 3\n}));"
      },
      "typeVersion": 2
    },
    {
      "id": "generate-voice",
      "name": "Generate Voice (AI TTS)",
      "type": "n8n-nodes-base.code",
      "position": [
        650,
        100
      ],
      "parameters": {
        "mode": "runOnceForAllItems",
        "jsCode": "// Generate AI voice using OpenAI TTS API\nconst video = $input.item().json;\nreturn {\n  ...video,\n  voiceGenerationStatus: 'queued',\n  audioUrl: null,\n  errorLog: []\n};"
      },
      "typeVersion": 2
    },
    {
      "id": "tts-api-call",
      "name": "Call TTS API",
      "type": "n8n-nodes-base.httpRequest",
      "maxTries": 3,
      "position": [
        850,
        100
      ],
      "parameters": {
        "url": "https://api.openai.com/v1/audio/speech",
        "method": "POST",
        "options": {
          "timeout": 300
        },
        "sendBody": true,
        "contentType": "json",
        "authentication": "genericCredentialType",
        "bodyParameters": {
          "parameters": [
            {
              "name": "model",
              "value": "tts-1"
            },
            {
              "name": "input",
              "value": "={{$json.scriptContent}}"
            },
            {
              "name": "voice",
              "value": "alloy"
            }
          ]
        },
        "genericAuthType": "httpHeaderAuth"
      },
      "retryOnFail": true,
      "typeVersion": 4,
      "waitBetweenTries": 5
    },
    {
      "id": "generate-thumbnail",
      "name": "Generate Thumbnail (AI Image)",
      "type": "n8n-nodes-base.code",
      "position": [
        1050,
        100
      ],
      "parameters": {
        "mode": "runOnceForAllItems",
        "jsCode": "// Generate thumbnail image with AI\nconst video = $input.all()[0].json;\nreturn {\n  ...video,\n  thumbnailStatus: 'queued',\n  thumbnailUrl: null\n};"
      },
      "typeVersion": 2
    },
    {
      "id": "image-api-call",
      "name": "Call Image Generation API",
      "type": "n8n-nodes-base.httpRequest",
      "maxTries": 3,
      "position": [
        1250,
        100
      ],
      "parameters": {
        "url": "https://api.openai.com/v1/images/generations",
        "method": "POST",
        "options": {},
        "sendBody": true,
        "contentType": "json",
        "authentication": "genericCredentialType",
        "bodyParameters": {
          "parameters": [
            {
              "name": "model",
              "value": "dall-e-3"
            },
            {
              "name": "prompt",
              "value": "={{$json.title}} - {{$json.thumbnailStyle}}"
            },
            {
              "name": "n",
              "value": "1"
            },
            {
              "name": "size",
              "value": "1280x720"
            }
          ]
        },
        "genericAuthType": "httpHeaderAuth"
      },
      "retryOnFail": true,
      "typeVersion": 4,
      "waitBetweenTries": 5
    },
    {
      "id": "assemble-video",
      "name": "Assemble Video",
      "type": "n8n-nodes-base.code",
      "position": [
        1450,
        100
      ],
      "parameters": {
        "mode": "runOnceForAllItems",
        "jsCode": "// Assemble video with audio and thumbnail\nconst video = $input.all()[0].json;\nreturn {\n  ...video,\n  videoAssemblyStatus: 'processing',\n  videoUrl: null,\n  duration: 0\n};"
      },
      "typeVersion": 2
    },
    {
      "id": "video-assembly-api",
      "name": "Call Video Assembly API",
      "type": "n8n-nodes-base.httpRequest",
      "maxTries": 2,
      "position": [
        1650,
        100
      ],
      "parameters": {
        "url": "https://api.assemblyai.com/v2/upload",
        "method": "POST",
        "options": {
          "timeout": 600
        },
        "sendBody": true,
        "contentType": "binaryData",
        "authentication": "genericCredentialType",
        "genericAuthType": "httpHeaderAuth"
      },
      "retryOnFail": true,
      "typeVersion": 4,
      "waitBetweenTries": 10
    },
    {
      "id": "prepare-upload-metadata",
      "name": "Prepare Upload Metadata",
      "type": "n8n-nodes-base.code",
      "position": [
        1850,
        100
      ],
      "parameters": {
        "mode": "runOnceForAllItems",
        "jsCode": "// Prepare video for YouTube upload with metadata\nconst video = $input.all()[0].json;\nconst publishTime = new Date();\npublishTime.setHours(publishTime.getHours() + 24); // Schedule for next day\n\nreturn {\n  ...video,\n  uploadStatus: 'ready',\n  scheduledPublishTime: publishTime.toISOString(),\n  metadata: {\n    title: video.title,\n    description: video.description,\n    tags: video.keywords.split(',').map(k => k.trim()),\n    categoryId: '22', // People & Blogs\n    privacyStatus: 'private',\n    madeForKids: false,\n    license: 'creativeCommon',\n    notifySubscribers: true\n  }\n};"
      },
      "typeVersion": 2
    },
    {
      "id": "upload-to-youtube",
      "name": "Upload to YouTube",
      "type": "n8n-nodes-base.httpRequest",
      "maxTries": 3,
      "position": [
        2050,
        100
      ],
      "parameters": {
        "url": "https://www.googleapis.com/youtube/v3/videos?part=snippet,status,processingDetails",
        "method": "POST",
        "options": {
          "timeout": 600
        },
        "sendBody": true,
        "contentType": "json",
        "authentication": "genericCredentialType",
        "bodyParameters": {
          "parameters": [
            {
              "name": "title",
              "value": "={{$json.title}}"
            },
            {
              "name": "description",
              "value": "={{$json.description}}"
            },
            {
              "name": "tags",
              "value": "={{$json.metadata.tags}}"
            },
            {
              "name": "categoryId",
              "value": "22"
            },
            {
              "name": "publishedAt",
              "value": "={{$json.scheduledPublishTime}}"
            }
          ]
        },
        "genericAuthType": "httpHeaderAuth"
      },
      "retryOnFail": true,
      "typeVersion": 4,
      "waitBetweenTries": 15
    },
    {
      "id": "log-success",
      "name": "Log Success",
      "type": "n8n-nodes-base.code",
      "position": [
        2250,
        100
      ],
      "parameters": {
        "mode": "runOnceForAllItems",
        "jsCode": "// Log success to Google Sheets and error database\nconst video = $input.all()[0].json;\nreturn {\n  ...video,\n  uploadStatus: 'success',\n  youtubeVideoId: $input.all()[0].json.videoId || 'pending',\n  uploadTimestamp: new Date().toISOString(),\n  completionStatus: 'completed'\n};"
      },
      "typeVersion": 2
    },
    {
      "id": "update-completion-sheet",
      "name": "Update Completion Sheet",
      "type": "n8n-nodes-base.googleSheets",
      "position": [
        2450,
        100
      ],
      "parameters": {
        "range": "A:I",
        "columns": {
          "mapping": {
            "title": "title",
            "channelId": "channelId",
            "uploadStatus": "uploadStatus",
            "youtubeVideoId": "youtubeVideoId",
            "uploadTimestamp": "uploadTimestamp",
            "scheduledPublishTime": "scheduledPublishTime"
          }
        },
        "operation": "appendOrUpdate",
        "sheetName": {
          "__rl": true,
          "value": "CompletedVideos"
        },
        "documentId": {
          "__rl": true,
          "value": "{{$json.spreadsheetId}}"
        },
        "keyToMatch": "channelId"
      },
      "typeVersion": 4
    },
    {
      "id": "catch-tts-error",
      "name": "Catch TTS Error",
      "type": "n8n-nodes-base.errorTrigger",
      "position": [
        850,
        300
      ],
      "parameters": {
        "nodeToRun": "error-handler",
        "sourceNodeName": "tts-api-call",
        "appendAsSourceData": true,
        "keepOnlyProperties": []
      },
      "typeVersion": 1
    },
    {
      "id": "catch-image-error",
      "name": "Catch Image Error",
      "type": "n8n-nodes-base.errorTrigger",
      "position": [
        1250,
        300
      ],
      "parameters": {
        "nodeToRun": "error-handler",
        "sourceNodeName": "image-api-call",
        "appendAsSourceData": true,
        "keepOnlyProperties": []
      },
      "typeVersion": 1
    },
    {
      "id": "catch-upload-error",
      "name": "Catch Upload Error",
      "type": "n8n-nodes-base.errorTrigger",
      "position": [
        2050,
        300
      ],
      "parameters": {
        "nodeToRun": "error-handler",
        "sourceNodeName": "upload-to-youtube",
        "appendAsSourceData": true,
        "keepOnlyProperties": []
      },
      "typeVersion": 1
    },
    {
      "id": "error-handler",
      "name": "Error Handler",
      "type": "n8n-nodes-base.code",
      "position": [
        1100,
        300
      ],
      "parameters": {
        "mode": "runOnceForAllItems",
        "jsCode": "// Error handling and retry logic\nconst errorItem = $json;\nconst video = errorItem.execution.lastNodeExecuted;\nconst retryCount = errorItem.retryCount || 0;\nconst maxRetries = 3;\n\nreturn {\n  status: 'error',\n  errorSource: errorItem.name,\n  errorMessage: errorItem.message,\n  errorTimestamp: new Date().toISOString(),\n  retryCount: retryCount,\n  maxRetries: maxRetries,\n  shouldRetry: retryCount < maxRetries,\n  failureReason: errorItem.description || 'API Error',\n  channelId: video?.channelId,\n  videoTitle: video?.title\n};"
      },
      "typeVersion": 2
    },
    {
      "id": "log-failure",
      "name": "Log to Failure Queue",
      "type": "n8n-nodes-base.googleSheets",
      "position": [
        1300,
        300
      ],
      "parameters": {
        "range": "A:G",
        "columns": {
          "mapping": {
            "status": "status",
            "channelId": "channelId",
            "retryCount": "retryCount",
            "videoTitle": "videoTitle",
            "shouldRetry": "shouldRetry",
            "failureReason": "failureReason",
            "errorTimestamp": "errorTimestamp"
          }
        },
        "operation": "appendOrUpdate",
        "sheetName": {
          "__rl": true,
          "value": "FailureQueue"
        },
        "documentId": {
          "__rl": true,
          "value": "{{$json.spreadsheetId}}"
        },
        "keyToMatch": "channelId"
      },
      "typeVersion": 4
    },
    {
      "id": "check-retry",
      "name": "Check Retry Condition",
      "type": "n8n-nodes-base.filter",
      "position": [
        1500,
        300
      ],
      "parameters": {
        "simple": false,
        "conditions": {
          "options": [
            {
              "type": "boolean",
              "value1": "={{$json.shouldRetry}}",
              "value2": "true",
              "condition": "if",
              "comparison": "equal"
            }
          ],
          "combinator": "and"
        }
      },
      "typeVersion": 3
    },
    {
      "id": "wait-before-retry",
      "name": "Wait Before Retry",
      "type": "n8n-nodes-base.wait",
      "position": [
        1700,
        300
      ],
      "parameters": {
        "waitUnit": "seconds",
        "waitValue": 300
      },
      "typeVersion": 1
    },
    {
      "id": "channel-config",
      "name": "Channel Configuration",
      "type": "n8n-nodes-base.code",
      "position": [
        50,
        500
      ],
      "parameters": {
        "mode": "runOnceForAllItems",
        "jsCode": "// Channel configuration management\nconst channels = [\n  { id: 'UC1', name: 'Tech Channel', config: { uploadSchedule: 'daily', uploadTime: '09:00' } },\n  { id: 'UC2', name: 'Lifestyle Channel', config: { uploadSchedule: 'weekly', uploadTime: '18:00' } },\n  { id: 'UC3', name: 'Gaming Channel', config: { uploadSchedule: 'daily', uploadTime: '14:00' } },\n  { id: 'UC4', name: 'News Channel', config: { uploadSchedule: 'hourly', uploadTime: '00:00' } },\n  { id: 'UC5', name: 'Education Channel', config: { uploadSchedule: 'weekly', uploadTime: '10:00' } },\n  { id: 'UC6', name: 'Music Channel', config: { uploadSchedule: 'daily', uploadTime: '20:00' } },\n  { id: 'UC7', name: 'Vlog Channel', config: { uploadSchedule: 'daily', uploadTime: '17:00' } },\n  { id: 'UC8', name: 'Comedy Channel', config: { uploadSchedule: 'weekly', uploadTime: '19:00' } },\n  { id: 'UC9', name: 'Sports Channel', config: { uploadSchedule: 'daily', uploadTime: '21:00' } },\n  { id: 'UC10', name: 'Business Channel', config: { uploadSchedule: 'weekly', uploadTime: '08:00' } }\n];\nreturn channels.map(ch => ({\n  channelId: ch.id,\n  channelName: ch.name,\n  uploadSchedule: ch.config.uploadSchedule,\n  uploadTime: ch.config.uploadTime,\n  parallelProcessing: true,\n  batchSize: 5\n}));"
      },
      "typeVersion": 2
    },
    {
      "id": "schedule-next-batch",
      "name": "Schedule Next Batch",
      "type": "n8n-nodes-base.wait",
      "position": [
        250,
        500
      ],
      "parameters": {
        "waitUnit": "hours",
        "waitValue": 1
      },
      "typeVersion": 1
    },
    {
      "id": "split-batch",
      "name": "Split Batch Processing",
      "type": "n8n-nodes-base.splitInBatches",
      "position": [
        450,
        500
      ],
      "parameters": {
        "executionOrder": "v1"
      },
      "typeVersion": 1
    },
    {
      "id": "send-monitoring-alert",
      "name": "Send Monitoring Alert",
      "type": "n8n-nodes-base.httpRequest",
      "position": [
        2650,
        100
      ],
      "parameters": {
        "url": "https://monitoring.example.com/api/logs",
        "method": "POST",
        "sendBody": true,
        "contentType": "json",
        "authentication": "genericCredentialType",
        "bodyParameters": {
          "parameters": [
            {
              "name": "workflowStatus",
              "value": "={{$json.uploadStatus}}"
            },
            {
              "name": "videoCount",
              "value": "={{$json.processedVideos || 0}}"
            },
            {
              "name": "errorCount",
              "value": "={{$json.failedVideos || 0}}"
            },
            {
              "name": "successCount",
              "value": "={{$json.successVideos || 0}}"
            },
            {
              "name": "timestamp",
              "value": "={{new Date().toISOString()}}"
            }
          ]
        },
        "genericAuthType": "httpHeaderAuth"
      },
      "typeVersion": 4
    },
    {
      "id": "webhook-response",
      "name": "Webhook Response",
      "type": "n8n-nodes-base.respondToWebhook",
      "position": [
        2850,
        100
      ],
      "parameters": {
        "text": "{{JSON.stringify($json, null, 2)}}"
      },
      "typeVersion": 1
    }
  ],
  "settings": {
    "timezone": "America/New_York",
    "errorWorkflow": "",
    "executionOrder": "v1",
    "saveManualExecutions": true,
    "saveExecutionProgress": true,
    "saveDataErrorExecution": "all",
    "saveDataSuccessExecution": "all"
  },
  "connections": {
    "check-retry": {
      "main": [
        [
          {
            "node": "wait-before-retry",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "log-failure": {
      "main": [
        [
          {
            "node": "check-retry",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "log-success": {
      "main": [
        [
          {
            "node": "update-completion-sheet",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "tts-api-call": {
      "main": [
        [
          {
            "node": "generate-thumbnail",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "error-handler": {
      "main": [
        [
          {
            "node": "log-failure",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "assemble-video": {
      "main": [
        [
          {
            "node": "video-assembly-api",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "generate-voice": {
      "main": [
        [
          {
            "node": "tts-api-call",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "image-api-call": {
      "main": [
        [
          {
            "node": "assemble-video",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "catch-tts-error": {
      "main": [
        [
          {
            "node": "error-handler",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "webhook-trigger": {
      "main": [
        [
          {
            "node": "fetch-video-data",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "fetch-video-data": {
      "main": [
        [
          {
            "node": "parse-video-data",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "parse-video-data": {
      "main": [
        [
          {
            "node": "generate-voice",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "catch-image-error": {
      "main": [
        [
          {
            "node": "error-handler",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "upload-to-youtube": {
      "main": [
        [
          {
            "node": "log-success",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "wait-before-retry": {
      "main": [
        [
          {
            "node": "generate-voice",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "catch-upload-error": {
      "main": [
        [
          {
            "node": "error-handler",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "generate-thumbnail": {
      "main": [
        [
          {
            "node": "image-api-call",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "video-assembly-api": {
      "main": [
        [
          {
            "node": "prepare-upload-metadata",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "send-monitoring-alert": {
      "main": [
        [
          {
            "node": "webhook-response",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "prepare-upload-metadata": {
      "main": [
        [
          {
            "node": "upload-to-youtube",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "update-completion-sheet": {
      "main": [
        [
          {
            "node": "send-monitoring-alert",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  }
}