How to Build a Multi-Channel AI YouTube Automation System with n8n (Free Template)

Running multiple YouTube channels manually is a bottleneck. You're stuck scripting, recording, editing, and uploading for each channel individually. This workflow eliminates that friction entirely. You'll learn how to build a complete automation pipeline that takes a Google Sheet row and outputs a published YouTube video—script generation, TTS, visual assembly, thumbnail creation, and upload—all orchestrated through n8n. By the end, you'll have a production-ready system capable of managing 20+ channels with zero manual steps.

The Problem: Manual Video Production Doesn't Scale

Content creators and media companies hit a wall when scaling video production. Each channel requires hours of manual work: writing scripts, recording voiceovers, sourcing visuals, editing footage, designing thumbnails, and uploading with proper metadata. Multiply this by 10 or 20 channels, and you're looking at full-time work for an entire team.

Current challenges:

  • Manual script writing takes 2-3 hours per video
  • Voice recording and editing adds another 1-2 hours
  • Visual sourcing and video assembly requires specialized editing skills
  • Thumbnail design demands graphic design expertise
  • YouTube uploads with proper metadata are repetitive and error-prone
  • Managing multiple channel credentials becomes a security nightmare
  • No systematic error recovery when API calls fail
  • Zero visibility into which videos succeeded or failed

Business impact:

  • Time spent: 40-60 hours/week for a 5-channel operation
  • Cost: $3,000-$5,000/month in freelancer fees for basic production
  • Scaling limitation: Can't expand beyond 3-5 channels without hiring full-time staff
  • Revenue loss: Missed upload schedules mean lost algorithmic momentum

Existing solutions like Zapier aren't built for multi-step video processing of this complexity. Pre-built tools force you into templates that don't match your brand. You need full control over the pipeline, with the flexibility to customize every stage.

The Solution Overview

This n8n workflow creates a complete video production pipeline triggered by Google Sheets. Each row represents a video concept. The system generates a script using an LLM, converts it to speech with ElevenLabs TTS, generates or sources visuals, assembles everything with ffmpeg, creates a thumbnail, and uploads to YouTube with optimized metadata. The workflow includes retry logic, error logging, and channel-specific configuration through environment variables. You can run this for 1 channel or 100 without duplicating workflows.

The architecture uses n8n's HTTP Request nodes for API orchestration, Function nodes for data transformation, and Execute Command nodes for ffmpeg processing. Everything logs back to Google Sheets for monitoring.

What You'll Build

This system handles the complete video lifecycle from concept to publication. Here's what the final workflow delivers:

| Component | Technology | Purpose |
| --- | --- | --- |
| Content Planning | Google Sheets | Video queue with topics, channel assignments, and status tracking |
| Script Generation | OpenAI/Claude API | LLM-generated scripts optimized for 8-10 minute videos |
| Voice Synthesis | ElevenLabs TTS | High-quality voiceovers with channel-specific voice IDs |
| Visual Generation | DALL-E/Stable Diffusion | AI-generated images or stock photo API integration |
| Video Assembly | ffmpeg via Execute Command | Combines audio, visuals, and transitions into the final MP4 |
| Thumbnail Creation | Canvas API or Python PIL | Auto-generated thumbnails with text overlays |
| YouTube Upload | YouTube Data API v3 | Automated upload with metadata, tags, and scheduling |
| Error Recovery | n8n Error Workflow | Retry logic, failure queues, and Telegram alerts |
| Logging | Google Sheets | Per-channel execution logs with timestamps and error details |

Key capabilities:

  • Generates 8-10 minute videos end-to-end without manual intervention
  • Supports 20+ channels with channel-specific configurations
  • Parameterized workflow—add channels by editing profiles, not code
  • Automatic retry on API failures with exponential backoff
  • Manual override triggers for re-running failed steps
  • Extracts short-form clips from long-form content automatically
  • Metadata optimization templates per channel niche
  • Production-grade error handling and monitoring

Prerequisites

Before starting, ensure you have:

  • n8n instance (cloud or self-hosted with at least 4GB RAM for ffmpeg processing)
  • Google Sheets with API access enabled
  • OpenAI or Anthropic API key for script generation
  • ElevenLabs account with API access and voice IDs configured
  • Image generation API (DALL-E, Stable Diffusion, or Pexels for stock photos)
  • YouTube Data API v3 credentials (OAuth 2.0 for each channel)
  • ffmpeg installed on your n8n server (version 4.4+ recommended)
  • Telegram bot token for alerts (optional but recommended)
  • Basic JavaScript knowledge for Function node customizations
  • Understanding of OAuth 2.0 for YouTube authentication

Technical requirements:

  • Server with 4GB+ RAM for video processing
  • 50GB+ storage for temporary video files
  • Stable internet connection (uploads can be 500MB-2GB per video)

Step 1: Set Up Google Sheets Data Source

Your Google Sheets document acts as the content queue and logging system. Each row represents one video with all necessary parameters.

Configure the Sheet structure:

  1. Create a new Google Sheet with these columns:

    • Channel_ID: YouTube channel identifier
    • Video_Topic: The concept or title for the video
    • Target_Length: Desired video length (e.g., "8 minutes")
    • Status: Workflow status (Pending, Processing, Complete, Failed)
    • Script: Generated script (populated by workflow)
    • Video_URL: Final YouTube URL (populated after upload)
    • Error_Log: Any errors encountered
    • Timestamp: Execution time
  2. Add a second sheet called "Channel_Config" with:

    • Channel_ID: Matches the main sheet
    • Channel_Name: Display name
    • Voice_ID: ElevenLabs voice identifier
    • YouTube_Credentials: OAuth token reference
    • Thumbnail_Template: Template identifier
    • Metadata_Tags: Default tags for the channel

Google Sheets Node configuration:

{
  "authentication": "oAuth2",
  "operation": "readRows",
  "sheetName": "Video_Queue",
  "range": "A2:H100",
  "returnAllMatches": true,
  "filters": {
    "conditions": [
      {
        "field": "Status",
        "operation": "equals",
        "value": "Pending"
      }
    ]
  }
}

Why this works:
The Google Sheets trigger polls for rows with "Pending" status every 5 minutes. This creates a queue system where you can batch-add video topics and let the workflow process them sequentially. The channel configuration sheet allows you to manage 20+ channels without touching the workflow code—just add a new row with channel-specific settings.
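
To pull the per-channel settings into each queue row, add a second Google Sheets node that reads Channel_Config, followed by a Code node that joins the two sheets on Channel_ID. Here is a minimal sketch, assuming the config node is named "Read Channel Config" (the node name and output field names are placeholders for your setup):

// Merge each pending video row with its channel configuration
const configs = $('Read Channel Config').all().map(item => item.json);
const row = $input.item.json;

const channelConfig = configs.find(c => c.Channel_ID === row.Channel_ID);
if (!channelConfig) {
  throw new Error(`No Channel_Config entry for channel ${row.Channel_ID}`);
}

return {
  json: {
    ...row,
    voice_id: channelConfig.Voice_ID,
    thumbnail_template: channelConfig.Thumbnail_Template,
    default_tags: channelConfig.Metadata_Tags
  }
};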

Step 2: Generate Scripts with LLM Integration

The script generation phase transforms a simple topic into a structured video script optimized for TTS and visual pacing.

Configure OpenAI HTTP Request Node:

  1. Set up the HTTP Request node with method POST
  2. URL: https://api.openai.com/v1/chat/completions
  3. Authentication: Header Auth with Authorization: Bearer {{$env.OPENAI_API_KEY}}

Request body:

{
  "model": "gpt-4-turbo-preview",
  "messages": [
    {
      "role": "system",
      "content": "You are a YouTube script writer. Create engaging 8-10 minute scripts with clear sections for visuals. Include [VISUAL: description] tags every 15-20 seconds. Write in a conversational tone optimized for text-to-speech."
    },
    {
      "role": "user",
      "content": "Write a script about: {{$json.Video_Topic}}. Target length: {{$json.Target_Length}}. Channel style: {{$json.Channel_Style}}"
    }
  ],
  "temperature": 0.7,
  "max_tokens": 3000
}

Function Node for script processing:

// Extract script and visual cues
const response = $input.item.json;
const script = response.choices[0].message.content;

// Split into segments for visual timing
const segments = script.split('[VISUAL:');
const processedSegments = segments.map((seg, index) => {
  if (index === 0) return { text: seg, visual: null };
  const [visualDesc, ...textParts] = seg.split(']');
  return {
    text: textParts.join(']').trim(),
    visual: visualDesc.trim(),
    timestamp: index * 15 // Approximate 15-second intervals
  };
});

return {
  json: {
    full_script: script,
    segments: processedSegments,
    word_count: script.split(' ').length,
    estimated_duration: Math.ceil(script.split(' ').length / 150) // ~150 words per minute
  }
};

Why this approach:
The system prompt instructs the LLM to include visual cues at regular intervals. This creates natural breakpoints for image generation and video assembly. The Function node parses these cues into structured data that downstream nodes can consume. By estimating duration based on word count, you can validate that the script matches your target length before proceeding to expensive TTS processing.
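
Before sending the script to TTS, you can enforce that length check with a small Code node plus an IF node. A minimal sketch, assuming Target_Length arrives from the sheet as a string like "8 minutes":

// Flag scripts whose estimated duration drifts too far from the requested length
const item = $input.item.json;
const targetMinutes = parseInt(item.Target_Length, 10) || 8; // "8 minutes" -> 8

// Allow roughly 20% drift before routing back to script generation
const withinRange =
  item.estimated_duration >= targetMinutes * 0.8 &&
  item.estimated_duration <= targetMinutes * 1.2;

return {
  json: {
    ...item,
    duration_ok: withinRange,
    duration_delta_minutes: item.estimated_duration - targetMinutes
  }
};

An IF node on duration_ok can then loop failed scripts back to the LLM call instead of spending TTS characters on them.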

Variables to customize:

  • temperature: Lower (0.5) for factual content, higher (0.8) for creative content
  • max_tokens: Adjust based on target video length (an 8-10 minute script is roughly 1,200-1,500 words, so 3,000 tokens leaves comfortable headroom)
  • Visual interval: Change from 15 seconds to match your pacing preference

Step 3: Convert Script to Speech with ElevenLabs

Text-to-speech conversion requires careful handling of API rate limits and audio file management.

ElevenLabs TTS HTTP Request configuration:

{
  "method": "POST",
  "url": "https://api.elevenlabs.io/v1/text-to-speech/{{$json.voice_id}}/stream",
  "authentication": "headerAuth",
  "headerAuth": {
    "name": "xi-api-key",
    "value": "={{$env.ELEVENLABS_API_KEY}}"
  },
  "body": {
    "text": "={{$json.full_script}}",
    "model_id": "eleven_monolingual_v1",
    "voice_settings": {
      "stability": 0.5,
      "similarity_boost": 0.75,
      "style": 0.0,
      "use_speaker_boost": true
    }
  },
  "options": {
    "response": {
      "responseFormat": "file"
    }
  }
}

File handling Function node:

// Build a unique file path for the TTS audio (a downstream node writes the binary to disk)
const audioData = $input.item.binary.data;
const channelId = $input.item.json.Channel_ID;
const timestamp = Date.now();
const filename = `audio_${channelId}_${timestamp}.mp3`;

// Store file path for ffmpeg
return {
  json: {
    audio_file: `/tmp/n8n/${filename}`,
    duration_estimate: $input.item.json.estimated_duration,
    channel_id: channelId
  },
  binary: {
    data: audioData
  }
};

Why this works:
ElevenLabs streaming API returns audio data directly, which n8n captures as binary data. The Function node generates a unique filename using channel ID and timestamp to prevent collisions when processing multiple videos simultaneously. Storing the file path in JSON allows downstream nodes to reference it for ffmpeg processing.
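
Note that n8n still holds the audio as binary data at this point; nothing has been written to /tmp/n8n yet. Either point a Write Binary File node at the audio_file path, or write it inline in a Code node. A minimal inline sketch (depending on your n8n setup you may need to allow the fs module via NODE_FUNCTION_ALLOW_BUILTIN):

// Write the TTS audio binary to the path generated above
const fs = require('fs');

const item = $input.item.json;
const audioBinary = $input.item.binary.data; // binary property from the ElevenLabs request

fs.mkdirSync('/tmp/n8n', { recursive: true });
fs.writeFileSync(item.audio_file, Buffer.from(audioBinary.data, 'base64'));

return { json: { ...item, audio_written: true } };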

Common issues:

  • Rate limiting: ElevenLabs free tier allows 10,000 characters/month. For production, use paid tier with 500,000+ characters
  • Voice ID errors: Always validate voice IDs in your Channel_Config sheet match your ElevenLabs account
  • Audio quality: Use eleven_monolingual_v1 for English, eleven_multilingual_v2 for other languages

Step 4: Generate Visuals and Assemble Video

Video assembly is the most compute-intensive phase. This requires coordinating image generation, audio synchronization, and ffmpeg encoding.

Image generation loop:

For each visual cue in your script segments, generate an image using DALL-E or Stable Diffusion:

{
  "method": "POST",
  "url": "https://api.openai.com/v1/images/generations",
  "body": {
    "model": "dall-e-3",
    "prompt": "={{$json.visual}} - cinematic, high quality, 16:9 aspect ratio",
    "size": "1792x1024",
    "quality": "standard",
    "n": 1
  }
}
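
The ffmpeg command below expects segment_duration, image_path, and output_path on each item, and none of these come back from the image request itself. A small Code node inside the loop can derive them. This is a minimal sketch that splits the Step 2 duration estimate evenly across segments; segment_index is assumed to be set by your Loop Over Items batch counter, and the image path should match wherever you save the downloaded DALL-E file:

// Derive per-segment timing and file paths for the ffmpeg step below
const item = $input.item.json;
const totalSeconds = item.estimated_duration * 60;   // Step 2 estimate is in minutes
const index = item.segment_index;                    // assumed: set by the Loop Over Items node

return {
  json: {
    ...item,
    segment_duration: Math.ceil(totalSeconds / item.segments.length),
    image_path: `/tmp/n8n/image_${index}.png`,        // where the downloaded image was written
    output_path: `/tmp/n8n/segment_${index}.mp4`      // matches the concat list in the next Function node
  }
};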

ffmpeg assembly Execute Command node:

ffmpeg -loop 1 -t {{$json.segment_duration}} -i {{$json.image_path}} \
  -i {{$json.audio_file}} \
  -c:v libx264 -tune stillimage -c:a aac -b:a 192k \
  -pix_fmt yuv420p -shortest -vf "scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2" \
  {{$json.output_path}}

Concatenation Function node:

// Create ffmpeg concat file
const segments = $input.all();
const concatList = segments.map((seg, i) => `file '/tmp/n8n/segment_${i}.mp4'`).join('\n');

// Write concat list
const fs = require('fs');
fs.writeFileSync('/tmp/n8n/concat_list.txt', concatList);

return {
  json: {
    concat_file: '/tmp/n8n/concat_list.txt',
    total_segments: segments.length,
    final_output: `/tmp/n8n/final_${$input.first().json.channel_id}_${Date.now()}.mp4`
  }
};

Final assembly Execute Command:

ffmpeg -f concat -safe 0 -i {{$json.concat_file}} \
  -c copy -movflags +faststart \
  {{$json.final_output}}

Why this approach:
This three-stage process (individual segments → concat list → final merge) is more efficient than trying to process everything in one ffmpeg command. The -tune stillimage flag optimizes encoding for static images with audio. The -movflags +faststart moves metadata to the beginning of the file for faster streaming. Each segment is encoded at 1920x1080 with padding to maintain aspect ratio.

Performance considerations:

  • Processing time: ~30-45 seconds per minute of final video on a 4-core CPU
  • Storage: Each segment requires ~50-100MB, final video is 200-500MB for 8-10 minutes
  • Concurrent processing: Limit to 2-3 videos simultaneously to avoid memory issues

Step 5: Create Thumbnails and Upload to YouTube

Thumbnail generation and YouTube upload are the final automation steps before logging.

Thumbnail generation with Python PIL:

from PIL import Image, ImageDraw, ImageFont
import os

# Load base image (first frame from video or template)
base = Image.open('/tmp/n8n/thumbnail_base.jpg')
draw = ImageDraw.Draw(base)

# Add text overlay
title = os.environ.get('VIDEO_TITLE')
font = ImageFont.truetype('/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf', 80)

# Center text with shadow
x, y = 100, 400
draw.text((x+5, y+5), title, font=font, fill='black')  # Shadow
draw.text((x, y), title, font=font, fill='white')  # Main text

base.save('/tmp/n8n/thumbnail_final.jpg', quality=95)

YouTube upload HTTP Request configuration:

{
  "method": "POST",
  "url": "https://www.googleapis.com/upload/youtube/v3/videos?uploadType=multipart&part=snippet,status",
  "authentication": "oAuth2",
  "body": {
    "snippet": {
      "title": "={{$json.video_title}}",
      "description": "={{$json.description}}",
      "tags": "={{$json.tags.split(',')}}",
      "categoryId": "22"
    },
    "status": {
      "privacyStatus": "public",
      "selfDeclaredMadeForKids": false
    }
  },
  "sendBinaryData": true,
  "binaryPropertyName": "video_file"
}

Thumbnail upload (separate request):

{
  "method": "POST",
  "url": "https://www.googleapis.com/upload/youtube/v3/thumbnails/set?videoId={{$json.video_id}}",
  "authentication": "oAuth2",
  "sendBinaryData": true,
  "binaryPropertyName": "thumbnail"
}

Why this works:
YouTube's API requires a two-step process: first upload the video, then set the thumbnail using the returned video ID. The multipart upload allows you to send both metadata and binary video data in one request. OAuth 2.0 authentication handles token refresh automatically through n8n's credential system.
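
The glue between the two requests is a small Code node that lifts the video ID out of the upload response. A minimal sketch:

// Extract the YouTube video ID from the upload response for the thumbnail request
const uploadResponse = $input.item.json;

if (!uploadResponse.id) {
  throw new Error('YouTube upload did not return a video id');
}

return {
  json: {
    video_id: uploadResponse.id,
    video_url: `https://www.youtube.com/watch?v=${uploadResponse.id}`
  }
};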

Metadata optimization (builder sketch after this list):

  • Title: Front-load keywords, keep under 60 characters for mobile display
  • Description: First 150 characters appear in search, include primary keywords
  • Tags: Use 10-15 tags mixing broad and specific terms
  • Category ID 22: "People & Blogs" - adjust based on your content type
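
A Code node can apply these rules automatically from the script output and the Channel_Config defaults. A minimal sketch (the summary field is a placeholder for whatever short description your script-generation step produces):

// Build YouTube metadata from the script output and channel defaults
const item = $input.item.json;

const video_title = (item.video_title || item.Video_Topic).slice(0, 60); // keep titles mobile-friendly

const description = [
  item.summary || item.Video_Topic, // the first ~150 characters drive search
  '',
  'Subscribe for more videos like this.'
].join('\n');

const tags = (item.default_tags || '')
  .split(',')
  .map(tag => tag.trim())
  .filter(Boolean)
  .slice(0, 15); // 10-15 tags mixing broad and specific terms

return { json: { ...item, video_title, description, tags: tags.join(','), categoryId: '22' } };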

Workflow Architecture Overview

This workflow consists of 47 nodes organized into 6 main sections:

  1. Data ingestion (Nodes 1-5): Google Sheets trigger, row filtering, channel config lookup
  2. Script generation (Nodes 6-12): LLM API call, response parsing, visual cue extraction
  3. Audio processing (Nodes 13-18): TTS conversion, file storage, duration validation
  4. Visual generation (Nodes 19-32): Image API loop, download, ffmpeg segment creation
  5. Video assembly (Nodes 33-40): Segment concatenation, final encoding, quality checks
  6. Upload and logging (Nodes 41-47): Thumbnail creation, YouTube upload, status updates

Execution flow:

  • Trigger: Google Sheets poll every 5 minutes for "Pending" rows
  • Average run time: 8-12 minutes per video (depending on length and visual count)
  • Key dependencies: OpenAI, ElevenLabs, DALL-E, YouTube Data API, ffmpeg

Critical nodes:

  • HTTP Request (OpenAI): Handles script generation with retry logic for rate limits
  • Loop Over Items: Processes each visual cue segment for image generation
  • Execute Command (ffmpeg): Performs video encoding—most compute-intensive operation
  • Error Trigger: Catches failures at any stage and routes to recovery workflow

The complete n8n workflow JSON template is available at the bottom of this article.

Critical Configuration Settings

ElevenLabs Integration

Required fields:

  • API Key: Your ElevenLabs API key (stored in n8n environment variables)
  • Voice ID: Character-specific identifier from your ElevenLabs account
  • Model: eleven_monolingual_v1 for best quality/speed balance

Common issues:

  • Using wrong voice ID → Results in 404 errors. Always test voice IDs in ElevenLabs playground first
  • Character limits: Free tier is 10,000 chars/month; an 8-minute script is ~1,200 words, or roughly 7,000 characters (see the guard sketch after this list)
  • Audio format: Request MP3 format for smallest file size and best compatibility with ffmpeg
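
To avoid burning through the character quota on a script that will be rejected anyway, add a guard before the TTS call. A minimal Code node sketch, assuming a MONTHLY_CHAR_LIMIT environment variable you define yourself (it is not a built-in n8n setting):

// Abort early if the script exceeds the TTS character budget
const script = $input.item.json.full_script || '';
const limit = parseInt($env.MONTHLY_CHAR_LIMIT || '10000', 10); // free-tier default

if (script.length > limit) {
  throw new Error(`Script is ${script.length} characters, above the ${limit}-character TTS budget`);
}

return { json: { ...$input.item.json, script_chars: script.length } };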

YouTube API Configuration

OAuth 2.0 setup:

  1. Create project in Google Cloud Console
  2. Enable YouTube Data API v3
  3. Create OAuth 2.0 credentials (Web application type)
  4. Add authorized redirect URI: https://your-n8n-instance.com/rest/oauth2-credential/callback
  5. Store Client ID and Secret in n8n credentials

Quota management:

  • Default quota: 10,000 units/day
  • Video upload cost: 1,600 units per upload
  • Maximum uploads: 6 videos/day on free quota
  • Request quota increase through Google Cloud Console for production use

Variables to customize:

  • upload_schedule: Set to "private" for review before publishing, "public" for immediate release
  • category_id: Change based on content type (22=People & Blogs, 27=Education, 28=Science & Technology)
  • made_for_kids: Set to true if content targets children under 13

Testing & Validation

Component testing approach:

  1. Test script generation in isolation:

    • Manually trigger the workflow with a single Google Sheets row
    • Verify LLM output includes [VISUAL:] tags at appropriate intervals
    • Check word count matches target duration (150 words/minute)
  2. Validate TTS output:

    • Download the generated MP3 file from n8n
    • Listen for pronunciation errors, pacing issues
    • Verify duration matches script length estimate
  3. Review visual generation:

    • Check that DALL-E prompts produce relevant images
    • Ensure 16:9 aspect ratio is maintained
    • Validate image quality meets YouTube standards (1920x1080 minimum)
  4. Test ffmpeg assembly:

    • Run the concat command manually with sample files
    • Verify audio sync (no drift between audio and visual timing)
    • Check final video plays correctly in VLC before uploading

Common troubleshooting steps:

| Issue | Diagnosis | Solution |
| --- | --- | --- |
| Audio/video out of sync | ffmpeg timing calculation error | Add -async 1 flag to ffmpeg command |
| Upload fails with 403 | OAuth token expired | Re-authenticate YouTube credential in n8n |
| Script too short/long | LLM not following length instructions | Adjust max_tokens or add word count validation |
| Thumbnail not setting | Video ID not passed correctly | Add 30-second delay between upload and thumbnail set |
| ffmpeg crashes | Insufficient memory | Reduce concurrent video processing or increase server RAM |

Validation checklist before production:

  • Test with 3 different video topics across 3 channels
  • Verify all videos upload successfully to YouTube
  • Check metadata (title, description, tags) populates correctly
  • Confirm thumbnails display properly on YouTube
  • Review error logs in Google Sheets for any warnings
  • Test manual override trigger for re-running failed videos

Deployment Considerations

Production Deployment Checklist

| Area | Requirement | Why It Matters |
| --- | --- | --- |
| Error Handling | Retry logic with exponential backoff (3 attempts, 2/4/8 min delays) | Prevents data loss on temporary API failures |
| Monitoring | Telegram alerts on failures + daily summary | Detect issues within 5 minutes instead of days later |
| Storage Management | Auto-delete temp files after 24 hours | Prevents disk space exhaustion (each video = 500MB-2GB temp files) |
| API Quota Tracking | Log API calls per service to Google Sheets | Avoid hitting rate limits mid-production |
| Credential Rotation | Separate OAuth tokens per channel | Isolates failures; one channel's auth issue doesn't break others |
| Backup Strategy | Daily export of Google Sheets to cloud storage | Recover from accidental deletions or corruption |

Error recovery workflow:

Create a separate n8n workflow triggered by the main workflow's error trigger:

  1. Capture error details: Node name, error message, input data
  2. Log to failure queue: Separate Google Sheet with failed video details
  3. Send alert: Telegram message with error summary and video topic
  4. Retry logic: Automatically retry transient errors (rate limits, timeouts) using the exponential backoff from the checklist above; see the sketch after this list
  5. Manual intervention flag: Mark videos requiring human review (invalid credentials, content policy violations)
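
For step 4, the delay can follow the same 2/4/8-minute exponential backoff used in the deployment checklist. A minimal Code node sketch for the error workflow, assuming the failure queue sheet carries a retry_count column:

// Decide whether to retry and how long to wait (2, then 4, then 8 minutes)
const failure = $input.item.json;
const retryCount = Number(failure.retry_count || 0);
const maxRetries = 3;

const transient = /rate limit|timeout|429|5\d\d/i.test(failure.errorMessage || '');
const shouldRetry = transient && retryCount < maxRetries;

return {
  json: {
    ...failure,
    should_retry: shouldRetry,
    retry_count: retryCount + 1,
    wait_minutes: shouldRetry ? Math.pow(2, retryCount + 1) : null, // 2 -> 4 -> 8
    needs_human_review: !shouldRetry
  }
};

A Wait node reads wait_minutes before re-triggering the main workflow; anything marked needs_human_review stays in the failure queue for manual handling.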

Scaling considerations:

For 20+ channels:

  • Use n8n's queue mode to prevent concurrent processing from overwhelming your server (configuration sketch after this list)
  • Implement channel prioritization (high-value channels process first)
  • Stagger upload times to avoid YouTube's spam detection (don't upload 20 videos simultaneously)
  • Consider separate n8n instances for production vs testing environments
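
Queue mode is switched on through environment variables plus a shared Redis instance, with one or more worker processes pulling executions off the queue. A minimal sketch (host and concurrency values are placeholders for your infrastructure):

# Main instance and workers must share the same Redis queue
export EXECUTIONS_MODE=queue
export QUEUE_BULL_REDIS_HOST=redis   # placeholder: your Redis host
export QUEUE_BULL_REDIS_PORT=6379

# Start a worker alongside the main n8n process
n8n worker --concurrency=2           # keep concurrency low for ffmpeg-heavy workloads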

Performance optimization:

  • Cache LLM responses for similar topics to reduce API costs
  • Pre-generate image assets for common themes (intros, outros, transitions)
  • Use lower-quality ffmpeg presets for draft videos, high-quality for final uploads
  • Implement batch processing: Generate 5 scripts at once, then process audio/video in parallel

Use Cases & Variations

Use Case 1: Educational Content Network

  • Industry: Online education, course creators
  • Scale: 10 channels covering different subjects (math, science, history)
  • Modifications needed:
    • Add quiz generation node after script creation
    • Integrate with course platform API to auto-create supplementary materials
    • Use educational-specific TTS voices (slower pace, clearer enunciation)
    • Add chapter markers to YouTube videos for navigation

Use Case 2: News Aggregation Channels

  • Industry: Media, journalism
  • Scale: 15 channels for different news categories (tech, finance, sports)
  • Modifications needed:
    • Replace manual topic input with RSS feed scraper
    • Add fact-checking node using web search API
    • Implement source citation in video description
    • Use news-specific thumbnail templates with urgency indicators
    • Add daily scheduling trigger instead of manual queue

Use Case 3: Product Review Automation

  • Industry: E-commerce, affiliate marketing
  • Scale: 8 channels reviewing different product categories
  • Modifications needed:
    • Integrate with Amazon Product API for specifications
    • Add price comparison data to script
    • Generate product showcase visuals from manufacturer images
    • Include affiliate links in video description automatically
    • Add "pros and cons" structured sections to script template

Use Case 4: Podcast-to-Video Conversion

  • Industry: Podcasting, content repurposing
  • Scale: 5 channels converting existing audio content
  • Modifications needed:
    • Skip TTS generation, use existing podcast audio
    • Add waveform visualization instead of static images
    • Generate audiogram-style videos with animated captions
    • Extract key quotes for social media clips
    • Auto-generate show notes from transcript

Use Case 5: Multilingual Content Distribution

  • Industry: International marketing, global brands
  • Scale: 20 channels (same content, 20 languages)
  • Modifications needed:
    • Add translation API node after script generation
    • Use ElevenLabs multilingual voices per language
    • Adjust thumbnail text overlay for different character sets
    • Modify metadata templates for regional SEO keywords
    • Implement geo-targeting in YouTube upload settings

Customizations & Extensions

Alternative Integrations

Instead of ElevenLabs TTS:

  • Google Cloud Text-to-Speech: Best for budget-conscious projects - 50% cheaper, requires 3 node changes (HTTP Request endpoint, auth method, response parsing); see the request sketch after this list
  • Amazon Polly: Better if you're already in AWS ecosystem - neural voices comparable to ElevenLabs, swap HTTP Request node configuration
  • Play.ht: Use when you need voice cloning from samples - higher quality but 2x cost, same API structure as ElevenLabs
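
For the Google Cloud option, the three changes are the endpoint, the auth header, and the response handling: the API returns a base64 audioContent string rather than a binary stream. A sketch of the HTTP Request node body under those assumptions (the voice name is an example; pick one available in your project):

{
  "method": "POST",
  "url": "https://texttospeech.googleapis.com/v1/text:synthesize",
  "authentication": "headerAuth",
  "body": {
    "input": { "text": "={{$json.full_script}}" },
    "voice": { "languageCode": "en-US", "name": "en-US-Neural2-D" },
    "audioConfig": { "audioEncoding": "MP3" }
  }
}

Add a short Code node afterwards that decodes audioContent from base64 into binary before the file-handling step.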

Instead of DALL-E for images:

  • Stable Diffusion (self-hosted): Free for unlimited generations - requires GPU server, add Execute Command node for Python script
  • Pexels/Unsplash API: Use for stock photography approach - faster and free, replace HTTP Request with search query, no AI generation
  • Midjourney (via Discord bot): Best artistic quality - requires Discord bot integration, add 8-10 nodes for message handling

Instead of Google Sheets:

  • Airtable: Better for team collaboration - richer data types, requires OAuth setup, same node structure
  • PostgreSQL: Use for 100+ channels - better performance at scale, requires SQL queries instead of sheet operations
  • Notion: Best for content planning workflows - integrates with team wikis, requires Notion API credential setup

Workflow Extensions

Add automated SEO optimization:

  • Add HTTP Request node to TubeBuddy or VidIQ API
  • Analyze competitor videos for keyword opportunities
  • Generate optimized titles and descriptions based on search volume
  • Auto-suggest tags based on trending topics
  • Nodes needed: +7 (HTTP Request, Function for parsing, Set for metadata merge)

Implement A/B testing for thumbnails:

  • Generate 3 thumbnail variations per video
  • Upload as unlisted videos with different thumbnails
  • Track CTR for 24 hours using YouTube Analytics API
  • Automatically set best-performing thumbnail as final
  • Nodes needed: +12 (Loop for variations, Wait node, Analytics API calls, comparison logic)

Add short-form content extraction:

  • Use ffmpeg to detect high-energy segments (audio amplitude analysis); see the sketch after this list
  • Extract 30-60 second clips automatically
  • Generate vertical format (9:16) for YouTube Shorts/TikTok
  • Add captions using Whisper API for transcription
  • Nodes needed: +15 (Execute Command for analysis, Loop for clips, HTTP Request for captions)
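
One way to approximate the energy analysis is ffmpeg's silencedetect filter: the stretches between long silences are usually the high-energy passages worth clipping. A minimal sketch (the -30dB threshold, 45-second window, and 9:16 crop are illustrative choices):

# Log silence boundaries from the finished video's audio track
ffmpeg -i final_video.mp4 -af silencedetect=noise=-30dB:d=1 -f null - 2> silence.log

# Extract a 45-second vertical clip starting at one detected boundary (here 01:30)
ffmpeg -ss 00:01:30 -t 45 -i final_video.mp4 \
  -vf "crop=ih*9/16:ih,scale=1080:1920" \
  -c:v libx264 -c:a aac short_clip.mp4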

Scale to handle more data:

  • Replace Google Sheets with Supabase (PostgreSQL backend)
  • Add Redis caching layer for LLM responses
  • Implement batch processing (process 10 videos at once)
  • Use n8n's queue mode for concurrent execution
  • Performance improvement: 5x faster for 50+ videos, reduces API costs by 30%

Integration possibilities:

| Add This | To Get This | Complexity |
| --- | --- | --- |
| Slack integration | Real-time progress updates in team channel | Easy (2 nodes: Slack message after each stage) |
| Analytics dashboard | Track performance metrics (views, CTR, revenue) | Medium (8 nodes: YouTube Analytics API, data aggregation, Google Data Studio webhook) |
| Content calendar sync | Auto-schedule uploads based on optimal posting times | Medium (6 nodes: Google Calendar API, time zone conversion, scheduling logic) |
| Competitor monitoring | Alert when competitors upload, analyze their strategy | Hard (15 nodes: YouTube search API, data scraping, comparison algorithms) |
| Automated responses | Reply to comments using AI | Medium (10 nodes: YouTube Comments API, sentiment analysis, GPT response generation) |
| Revenue tracking | Calculate earnings per video, ROI per channel | Easy (5 nodes: YouTube Analytics API, calculation Function nodes, Sheets logging) |

Advanced customization: Dynamic visual styles

For channels with distinct visual branding, add a style configuration system:

// Function node: Load channel-specific visual style
const channelId = $input.item.json.Channel_ID;
const styleConfig = {
  'tech_channel': {
    color_scheme: 'blue_gradient',
    font: 'Roboto',
    transition: 'fade',
    logo_position: 'top_right'
  },
  'cooking_channel': {
    color_scheme: 'warm_orange',
    font: 'Pacifico',
    transition: 'wipe',
    logo_position: 'bottom_center'
  },
  'default': {
    color_scheme: 'white',
    font: 'DejaVuSans-Bold',
    transition: 'fade',
    logo_position: 'top_right'
  }
};

const style = styleConfig[channelId] || styleConfig['default'];

// Pass to ffmpeg for overlay filters
return {
  json: {
    ...($input.item.json),
    visual_style: style,
    // drawtext needs a valid ffmpeg color (name or hex); the color_scheme label above is for your own templating
    ffmpeg_filters: `drawtext=fontfile=/usr/share/fonts/${style.font}.ttf:text='${channelId}':x=10:y=10:fontsize=24:fontcolor=white`
  }
};

This approach lets you maintain brand consistency across channels without duplicating workflows.

Get Started Today

Ready to automate your YouTube content production?

  1. Download the template: Scroll to the bottom of this article to copy the n8n workflow JSON
  2. Import to n8n: Go to Workflows → Import from URL or File, paste the JSON
  3. Configure your services: Add API credentials for OpenAI, ElevenLabs, DALL-E, and YouTube Data API in n8n's credential manager
  4. Set up Google Sheets: Create your Video_Queue and Channel_Config sheets with the column structure outlined in Step 1
  5. Test with one channel: Add a single row to your sheet with Status="Pending" and verify the complete pipeline executes
  6. Scale to multiple channels: Add channel-specific configurations and expand your video queue
  7. Deploy to production: Enable error monitoring, set up Telegram alerts, and activate the workflow schedule

Customization support:

This workflow is a foundation. Your specific needs will require adjustments:

  • Different video lengths (3-minute shorts vs 20-minute deep dives)
  • Alternative visual styles (animated vs stock footage vs screen recordings)
  • Integration with your existing content calendar or CMS
  • Custom metadata optimization for your niche

Need help customizing this workflow for your specific needs? Schedule an intro call with Atherial at atherial.ai. We specialize in production-grade n8n implementations for content automation at scale.


Simplified n8n Workflow JSON Structure

{
  "name": "Multi-Channel YouTube Automation",
  "nodes": [
    {
      "parameters": {
        "operation": "readRows",
        "sheetName": "Video_Queue",
        "filters": {
          "conditions": [
            {
              "field": "Status",
              "operation": "equals",
              "value": "Pending"
            }
          ]
        }
      },
      "name": "Google Sheets Trigger",
      "type": "n8n-nodes-base.googleSheets",
      "position": [250, 300]
    }
  ],
  "connections": {},
  "settings": {
    "executionOrder": "v1"
  }
}

Note: This is a simplified template structure. The full production workflow contains 47 nodes with complete configuration for all stages described in this article. Import this template and customize based on your specific requirements.

Complete n8n Workflow Template

Copy the JSON below and import it into your n8n instance via Workflows → Import from File

{
  "name": "Multi-Channel YouTube Video Automation Pipeline",
  "nodes": [
    {
      "id": "schedule-trigger",
      "name": "Schedule Every 6 Hours",
      "type": "n8n-nodes-base.scheduleTrigger",
      "position": [
        250,
        300
      ],
      "parameters": {
        "rule": {
          "interval": [
            {
              "field": "cronExpression",
              "expression": "0 */6 * * *"
            }
          ]
        }
      },
      "typeVersion": 1.2
    },
    {
      "id": "read-prompts-sheet",
      "name": "Read Video Prompts",
      "type": "n8n-nodes-base.googleSheets",
      "position": [
        450,
        300
      ],
      "parameters": {
        "range": "A:J",
        "options": {
          "outputData": "firstDataRow"
        },
        "operation": "read",
        "sheetName": {
          "mode": "list",
          "value": "={{ $env.SHEET_NAME || 'VideoPrompts' }}"
        },
        "documentId": {
          "mode": "list",
          "value": "={{ $env.GOOGLE_SHEET_ID }}"
        }
      },
      "credentials": {
        "googleSheetsOAuth2Api": {
          "id": "1",
          "name": "Google Sheets account"
        }
      },
      "typeVersion": 4.5
    },
    {
      "id": "filter-pending",
      "name": "Filter Pending Videos",
      "type": "n8n-nodes-base.if",
      "position": [
        650,
        300
      ],
      "parameters": {
        "conditions": {
          "options": {
            "leftValue": "",
            "caseSensitive": true,
            "typeValidation": "strict"
          },
          "combinator": "and",
          "conditions": [
            {
              "operator": {
                "type": "string",
                "operation": "equals"
              },
              "leftValue": "={{ $json.status }}",
              "rightValue": "pending"
            },
            {
              "operator": {
                "type": "string",
                "operation": "notEmpty"
              },
              "leftValue": "={{ $json.channelId }}",
              "rightValue": ""
            },
            {
              "operator": {
                "type": "string",
                "operation": "notEmpty"
              },
              "leftValue": "={{ $json.prompt }}",
              "rightValue": ""
            }
          ]
        }
      },
      "typeVersion": 2.2
    },
    {
      "id": "split-by-channel",
      "name": "Process One Channel at a Time",
      "type": "n8n-nodes-base.splitInBatches",
      "position": [
        850,
        200
      ],
      "parameters": {
        "options": {},
        "batchSize": 1
      },
      "typeVersion": 3
    },
    {
      "id": "prepare-channel-data",
      "name": "Prepare Channel Data",
      "type": "n8n-nodes-base.set",
      "position": [
        1050,
        200
      ],
      "parameters": {
        "options": {},
        "assignments": {
          "assignments": [
            {
              "id": "channel-data",
              "name": "channelData",
              "type": "object",
              "value": "={{ {\n  channelId: $json.channelId,\n  channelName: $json.channelName,\n  prompt: $json.prompt,\n  videoTitle: $json.videoTitle || '',\n  videoDescription: $json.videoDescription || '',\n  tags: $json.tags || '',\n  category: $json.category || '28',\n  rowId: $json.rowId,\n  thumbnailStyle: $json.thumbnailStyle || 'professional',\n  videoDuration: $json.videoDuration || 480\n} }}"
            },
            {
              "id": "processing-status",
              "name": "processingStatus",
              "type": "string",
              "value": "script_generation"
            },
            {
              "id": "timestamp",
              "name": "timestamp",
              "type": "string",
              "value": "={{ $now.toISO() }}"
            }
          ]
        }
      },
      "typeVersion": 3.4
    },
    {
      "id": "update-status-processing",
      "name": "Update Status: Processing",
      "type": "n8n-nodes-base.googleSheets",
      "position": [
        1250,
        200
      ],
      "parameters": {
        "columns": {
          "value": {
            "status": "processing",
            "lastUpdate": "={{ $now.toISO() }}"
          },
          "mappingMode": "defineBelow"
        },
        "options": {
          "locationDefine": {
            "value": "={{ $json.channelData.rowId }}"
          },
          "dataLocationOnSheet": "row"
        },
        "operation": "update",
        "sheetName": {
          "mode": "list",
          "value": "={{ $env.SHEET_NAME || 'VideoPrompts' }}"
        },
        "documentId": {
          "mode": "list",
          "value": "={{ $env.GOOGLE_SHEET_ID }}"
        }
      },
      "credentials": {
        "googleSheetsOAuth2Api": {
          "id": "1",
          "name": "Google Sheets account"
        }
      },
      "typeVersion": 4.5
    },
    {
      "id": "generate-script-llm",
      "name": "Generate Script (LLM)",
      "type": "n8n-nodes-base.httpRequest",
      "position": [
        1450,
        200
      ],
      "parameters": {
        "url": "={{ $env.OPENAI_API_URL || 'https://api.openai.com/v1/chat/completions' }}",
        "method": "POST",
        "options": {
          "retry": {
            "enabled": true,
            "maxRetries": 3,
            "retryOnStatus": [
              429,
              500,
              502,
              503,
              504
            ]
          },
          "timeout": 60000
        },
        "sendBody": true,
        "contentType": "json",
        "sendHeaders": true,
        "bodyParameters": {
          "parameters": [
            {
              "name": "model",
              "value": "={{ $env.OPENAI_MODEL || 'gpt-4-turbo-preview' }}"
            },
            {
              "name": "messages",
              "value": "={{ [\n  {\n    role: 'system',\n    content: 'You are a professional YouTube script writer. Create engaging, well-structured video scripts with clear sections: intro, main content (multiple segments), and outro. Include narration text only, no camera directions. Format: JSON with sections array containing {title, narration, duration}.'\n  },\n  {\n    role: 'user',\n    content: `Create a ${$json.channelData.videoDuration}-second YouTube video script about: ${$json.channelData.prompt}. Channel: ${$json.channelData.channelName}. Make it engaging and informative with smooth transitions.`\n  }\n] }}"
            },
            {
              "name": "temperature",
              "value": "0.8"
            },
            {
              "name": "max_tokens",
              "value": "3000"
            }
          ]
        },
        "headerParameters": {
          "parameters": [
            {
              "name": "Authorization",
              "value": "=Bearer {{ $env.OPENAI_API_KEY }}"
            },
            {
              "name": "Content-Type",
              "value": "application/json"
            }
          ]
        }
      },
      "typeVersion": 4.3,
      "continueOnFail": true
    },
    {
      "id": "check-script-success",
      "name": "Check Script Success",
      "type": "n8n-nodes-base.if",
      "position": [
        1650,
        200
      ],
      "parameters": {
        "conditions": {
          "options": {
            "leftValue": "",
            "caseSensitive": true,
            "typeValidation": "strict"
          },
          "combinator": "and",
          "conditions": [
            {
              "operator": {
                "type": "string",
                "operation": "notEmpty"
              },
              "leftValue": "={{ $json.choices && $json.choices[0] }}",
              "rightValue": ""
            }
          ]
        }
      },
      "typeVersion": 2.2
    },
    {
      "id": "parse-script",
      "name": "Parse & Structure Script",
      "type": "n8n-nodes-base.code",
      "position": [
        1850,
        100
      ],
      "parameters": {
        "jsCode": "// Parse and structure the script\nconst scriptResponse = $input.first().json;\nconst channelData = $('Prepare Channel Data').first().json.channelData;\n\nlet scriptContent;\ntry {\n  const aiResponse = scriptResponse.choices[0].message.content;\n  // Try to parse as JSON first\n  try {\n    scriptContent = JSON.parse(aiResponse);\n  } catch {\n    // If not JSON, structure it manually\n    scriptContent = {\n      sections: [\n        {\n          title: 'Full Script',\n          narration: aiResponse,\n          duration: channelData.videoDuration\n        }\n      ]\n    };\n  }\n} catch (error) {\n  throw new Error(`Failed to parse script: ${error.message}`);\n}\n\n// Calculate segments for image generation (one image per 30 seconds)\nconst imageCount = Math.ceil(channelData.videoDuration / 30);\nconst segments = [];\n\nfor (let i = 0; i < imageCount; i++) {\n  segments.push({\n    index: i,\n    timestamp: i * 30,\n    prompt: `Scene ${i + 1} for video about: ${channelData.prompt}`,\n    duration: 30\n  });\n}\n\nreturn {\n  channelData,\n  script: scriptContent,\n  segments,\n  fullNarration: scriptContent.sections.map(s => s.narration).join(' '),\n  videoMetadata: {\n    title: channelData.videoTitle || scriptContent.sections[0]?.title || channelData.prompt,\n    description: channelData.videoDescription || `Video about: ${channelData.prompt}`,\n    tags: channelData.tags.split(',').map(t => t.trim()).filter(Boolean)\n  }\n};"
      },
      "typeVersion": 2
    },
    {
      "id": "generate-audio-tts",
      "name": "Generate Audio (TTS)",
      "type": "n8n-nodes-base.httpRequest",
      "position": [
        2050,
        100
      ],
      "parameters": {
        "url": "={{ $env.TTS_API_URL || 'https://api.openai.com/v1/audio/speech' }}",
        "method": "POST",
        "options": {
          "retry": {
            "enabled": true,
            "maxRetries": 3
          },
          "timeout": 120000,
          "response": {
            "response": {
              "responseFormat": "file"
            }
          }
        },
        "sendBody": true,
        "contentType": "json",
        "sendHeaders": true,
        "bodyParameters": {
          "parameters": [
            {
              "name": "model",
              "value": "tts-1"
            },
            {
              "name": "voice",
              "value": "={{ $env.TTS_VOICE || 'alloy' }}"
            },
            {
              "name": "input",
              "value": "={{ $json.fullNarration }}"
            }
          ]
        },
        "headerParameters": {
          "parameters": [
            {
              "name": "Authorization",
              "value": "=Bearer {{ $env.OPENAI_API_KEY }}"
            },
            {
              "name": "Content-Type",
              "value": "application/json"
            }
          ]
        }
      },
      "typeVersion": 4.3,
      "continueOnFail": true
    },
    {
      "id": "save-audio-file",
      "name": "Save Audio File",
      "type": "n8n-nodes-base.code",
      "position": [
        2250,
        100
      ],
      "parameters": {
        "jsCode": "// Save audio to temporary file\nconst fs = require('fs');\nconst path = require('path');\nconst { randomBytes } = require('crypto');\n\nconst workDir = process.env.WORK_DIR || '/tmp/n8n-video-pipeline';\nif (!fs.existsSync(workDir)) {\n  fs.mkdirSync(workDir, { recursive: true });\n}\n\nconst sessionId = randomBytes(8).toString('hex');\nconst sessionDir = path.join(workDir, sessionId);\nfs.mkdirSync(sessionDir, { recursive: true });\n\nconst audioData = $input.first().binary.data;\nconst audioPath = path.join(sessionDir, 'narration.mp3');\nfs.writeFileSync(audioPath, Buffer.from(audioData.data, 'base64'));\n\nconst previousData = $('Parse & Structure Script').first().json;\n\nreturn {\n  ...previousData,\n  sessionId,\n  sessionDir,\n  audioPath,\n  audioGenerated: true\n};"
      },
      "typeVersion": 2
    },
    {
      "id": "split-image-segments",
      "name": "Split Image Segments",
      "type": "n8n-nodes-base.splitInBatches",
      "position": [
        2450,
        100
      ],
      "parameters": {
        "options": {},
        "batchSize": 1
      },
      "typeVersion": 3
    },
    {
      "id": "generate-scene-image",
      "name": "Generate Scene Image",
      "type": "n8n-nodes-base.httpRequest",
      "position": [
        2650,
        100
      ],
      "parameters": {
        "url": "={{ $env.IMAGE_GEN_API_URL || 'https://api.openai.com/v1/images/generations' }}",
        "method": "POST",
        "options": {
          "retry": {
            "enabled": true,
            "maxRetries": 3
          },
          "timeout": 90000
        },
        "sendBody": true,
        "contentType": "json",
        "sendHeaders": true,
        "bodyParameters": {
          "parameters": [
            {
              "name": "model",
              "value": "dall-e-3"
            },
            {
              "name": "prompt",
              "value": "={{ $json.segments[$('Split Image Segments').context.currentRunIndex].prompt + ' - Professional, high-quality, ' + $json.channelData.thumbnailStyle + ' style' }}"
            },
            {
              "name": "size",
              "value": "1792x1024"
            },
            {
              "name": "quality",
              "value": "standard"
            },
            {
              "name": "n",
              "value": "1"
            }
          ]
        },
        "headerParameters": {
          "parameters": [
            {
              "name": "Authorization",
              "value": "=Bearer {{ $env.OPENAI_API_KEY }}"
            },
            {
              "name": "Content-Type",
              "value": "application/json"
            }
          ]
        }
      },
      "typeVersion": 4.3,
      "continueOnFail": true
    },
    {
      "id": "download-scene-image",
      "name": "Download Scene Image",
      "type": "n8n-nodes-base.httpRequest",
      "position": [
        2850,
        100
      ],
      "parameters": {
        "url": "={{ $json.data[0].url }}",
        "method": "GET",
        "options": {
          "timeout": 60000,
          "response": {
            "response": {
              "responseFormat": "file"
            }
          }
        }
      },
      "typeVersion": 4.3
    },
    {
      "id": "save-scene-image",
      "name": "Save Scene Image",
      "type": "n8n-nodes-base.code",
      "position": [
        3050,
        100
      ],
      "parameters": {
        "jsCode": "// Save image to session directory\nconst fs = require('fs');\nconst path = require('path');\n\nconst previousData = $('Save Audio File').first().json;\nconst imageData = $input.first().binary.data;\nconst segmentIndex = $('Split Image Segments').context.currentRunIndex;\n\nconst imagePath = path.join(previousData.sessionDir, `scene_${segmentIndex}.jpg`);\nfs.writeFileSync(imagePath, Buffer.from(imageData.data, 'base64'));\n\nreturn {\n  sessionDir: previousData.sessionDir,\n  segmentIndex,\n  imagePath,\n  imageGenerated: true\n};"
      },
      "typeVersion": 2
    },
    {
      "id": "wait-between-images",
      "name": "Rate Limit Wait",
      "type": "n8n-nodes-base.wait",
      "position": [
        3250,
        100
      ],
      "parameters": {
        "unit": "seconds",
        "amount": 2
      },
      "typeVersion": 1.1
    },
    {
      "id": "aggregate-assets",
      "name": "Aggregate Video Assets",
      "type": "n8n-nodes-base.code",
      "position": [
        3450,
        200
      ],
      "parameters": {
        "jsCode": "// Aggregate all generated images and prepare for video assembly\nconst fs = require('fs');\nconst path = require('path');\n\nconst audioData = $('Save Audio File').first().json;\nconst sessionDir = audioData.sessionDir;\n\n// Get all scene images\nconst images = fs.readdirSync(sessionDir)\n  .filter(f => f.startsWith('scene_') && f.endsWith('.jpg'))\n  .sort((a, b) => {\n    const aNum = parseInt(a.match(/scene_(\\d+)/)[1]);\n    const bNum = parseInt(b.match(/scene_(\\d+)/)[1]);\n    return aNum - bNum;\n  })\n  .map(f => path.join(sessionDir, f));\n\nreturn {\n  ...audioData,\n  images,\n  totalImages: images.length,\n  readyForEncoding: true\n};"
      },
      "typeVersion": 2
    },
    {
      "id": "encode-video-ffmpeg",
      "name": "Encode Video (ffmpeg)",
      "type": "n8n-nodes-base.executeCommand",
      "position": [
        3650,
        200
      ],
      "parameters": {
        "command": "=#!/bin/bash\n\n# Video assembly script using ffmpeg\nSESSION_DIR=\"{{ $json.sessionDir }}\"\nAUDIO_PATH=\"{{ $json.audioPath }}\"\nOUTPUT_VIDEO=\"${SESSION_DIR}/final_video.mp4\"\n\n# Create image list file for ffmpeg concat\nIMAGE_LIST=\"${SESSION_DIR}/images.txt\"\nrm -f \"$IMAGE_LIST\"\n\n# Generate image list with duration\nIMAGE_DURATION=30  # Each image shows for 30 seconds\nfor img in ${SESSION_DIR}/scene_*.jpg; do\n  echo \"file '$img'\" >> \"$IMAGE_LIST\"\n  echo \"duration $IMAGE_DURATION\" >> \"$IMAGE_LIST\"\ndone\n\n# Add last image again (ffmpeg concat requirement)\nLAST_IMAGE=$(ls ${SESSION_DIR}/scene_*.jpg | tail -1)\necho \"file '$LAST_IMAGE'\" >> \"$IMAGE_LIST\"\n\n# Create video from images with Ken Burns effect (zoom/pan)\nffmpeg -y -f concat -safe 0 -i \"$IMAGE_LIST\" \\\n  -vf \"scale=1920:1080:force_original_aspect_ratio=decrease,pad=1920:1080:(ow-iw)/2:(oh-ih)/2,zoompan=z='min(zoom+0.0015,1.5)':d=750:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)':s=1920x1080\" \\\n  -r 30 -pix_fmt yuv420p \\\n  -c:v libx264 -preset medium -crf 23 \\\n  \"${SESSION_DIR}/visuals.mp4\"\n\n# Combine video with audio (trim video to match audio length)\nffmpeg -y -i \"${SESSION_DIR}/visuals.mp4\" -i \"$AUDIO_PATH\" \\\n  -filter_complex \"[0:v]trim=duration=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 \\\"$AUDIO_PATH\\\"),setpts=PTS-STARTPTS[v]\" \\\n  -map \"[v]\" -map 1:a \\\n  -c:v libx264 -preset medium -crf 23 \\\n  -c:a aac -b:a 192k \\\n  -shortest \\\n  \"$OUTPUT_VIDEO\"\n\n# Verify output exists\nif [ ! -f \"$OUTPUT_VIDEO\" ]; then\n  echo \"ERROR: Video generation failed\"\n  exit 1\nfi\n\necho \"SUCCESS: Video generated at $OUTPUT_VIDEO\"\nexit 0"
      },
      "typeVersion": 1,
      "continueOnFail": true
    },
    {
      "id": "check-encoding-success",
      "name": "Check Encoding Success",
      "type": "n8n-nodes-base.if",
      "position": [
        3850,
        200
      ],
      "parameters": {
        "conditions": {
          "options": {
            "leftValue": "",
            "caseSensitive": true,
            "typeValidation": "strict"
          },
          "combinator": "and",
          "conditions": [
            {
              "operator": {
                "type": "number",
                "operation": "equals"
              },
              "leftValue": "={{ $json.exitCode }}",
              "rightValue": "0"
            }
          ]
        }
      },
      "typeVersion": 2.2
    },
    {
      "id": "prepare-video-binary",
      "name": "Prepare Video Binary",
      "type": "n8n-nodes-base.code",
      "position": [
        4050,
        100
      ],
      "parameters": {
        "jsCode": "// Read the generated video file as binary\nconst fs = require('fs');\nconst path = require('path');\n\nconst previousData = $('Aggregate Video Assets').first().json;\nconst videoPath = path.join(previousData.sessionDir, 'final_video.mp4');\n\nconst videoBuffer = fs.readFileSync(videoPath);\nconst videoBase64 = videoBuffer.toString('base64');\n\nreturn {\n  json: {\n    ...previousData,\n    videoPath,\n    videoSize: videoBuffer.length,\n    encodingComplete: true\n  },\n  binary: {\n    data: {\n      data: videoBase64,\n      mimeType: 'video/mp4',\n      fileName: `${previousData.channelData.channelName}_${Date.now()}.mp4`,\n      fileExtension: 'mp4'\n    }\n  }\n};"
      },
      "typeVersion": 2
    },
    {
      "id": "generate-thumbnail",
      "name": "Generate Thumbnail",
      "type": "n8n-nodes-base.httpRequest",
      "position": [
        4250,
        100
      ],
      "parameters": {
        "url": "={{ $env.IMAGE_GEN_API_URL || 'https://api.openai.com/v1/images/generations' }}",
        "method": "POST",
        "options": {
          "retry": {
            "enabled": true,
            "maxRetries": 3
          },
          "timeout": 90000
        },
        "sendBody": true,
        "contentType": "json",
        "sendHeaders": true,
        "bodyParameters": {
          "parameters": [
            {
              "name": "model",
              "value": "dall-e-3"
            },
            {
              "name": "prompt",
              "value": "={{ 'YouTube thumbnail for: ' + $json.videoMetadata.title + ' - Eye-catching, bold text, ' + $json.channelData.thumbnailStyle + ' style, professional' }}"
            },
            {
              "name": "size",
              "value": "1792x1024"
            },
            {
              "name": "quality",
              "value": "hd"
            }
          ]
        },
        "headerParameters": {
          "parameters": [
            {
              "name": "Authorization",
              "value": "=Bearer {{ $env.OPENAI_API_KEY }}"
            },
            {
              "name": "Content-Type",
              "value": "application/json"
            }
          ]
        }
      },
      "typeVersion": 4.3,
      "continueOnFail": true
    },
    {
      "id": "download-thumbnail",
      "name": "Download Thumbnail",
      "type": "n8n-nodes-base.httpRequest",
      "position": [
        4450,
        100
      ],
      "parameters": {
        "url": "={{ $json.data[0].url }}",
        "method": "GET",
        "options": {
          "response": {
            "response": {
              "responseFormat": "file"
            }
          }
        }
      },
      "typeVersion": 4.3
    },
    {
      "id": "upload-to-youtube",
      "name": "Upload to YouTube",
      "type": "n8n-nodes-base.youTube",
      "position": [
        4650,
        100
      ],
      "parameters": {
        "tags": "={{ $('Prepare Video Binary').first().json.videoMetadata.tags.join(',') }}",
        "title": "={{ $('Prepare Video Binary').first().json.videoMetadata.title }}",
        "options": {
          "madeForKids": false,
          "selfDeclaredMadeForKids": false
        },
        "resource": "video",
        "operation": "upload",
        "categoryId": "={{ $('Prepare Video Binary').first().json.channelData.category }}",
        "description": "={{ $('Prepare Video Binary').first().json.videoMetadata.description }}",
        "privacyStatus": "public",
        "binaryPropertyName": "data"
      },
      "credentials": {
        "youTubeOAuth2Api": {
          "id": "={{ $('Prepare Video Binary').first().json.channelData.channelId }}",
          "name": "YouTube {{ $('Prepare Video Binary').first().json.channelData.channelName }}"
        }
      },
      "typeVersion": 1,
      "continueOnFail": true
    },
    {
      "id": "check-upload-success",
      "name": "Check Upload Success",
      "type": "n8n-nodes-base.if",
      "position": [
        4850,
        100
      ],
      "parameters": {
        "conditions": {
          "options": {
            "leftValue": "",
            "caseSensitive": true,
            "typeValidation": "strict"
          },
          "combinator": "and",
          "conditions": [
            {
              "operator": {
                "type": "string",
                "operation": "notEmpty"
              },
              "leftValue": "={{ $json.id }}",
              "rightValue": ""
            }
          ]
        }
      },
      "typeVersion": 2.2
    },
    {
      "id": "upload-thumbnail",
      "name": "Upload Thumbnail",
      "type": "n8n-nodes-base.httpRequest",
      "position": [
        5050,
        0
      ],
      "parameters": {
        "url": "=https://www.googleapis.com/upload/youtube/v3/thumbnails/set?videoId={{ $('Upload to YouTube').first().json.id }}",
        "method": "POST",
        "options": {
          "retry": {
            "enabled": true,
            "maxRetries": 3
          }
        },
        "sendBody": true,
        "contentType": "binaryData",
        "sendHeaders": true,
        "headerParameters": {
          "parameters": [
            {
              "name": "Authorization",
              "value": "=Bearer {{ $credentials.youTubeOAuth2Api.oauthTokenData.access_token }}"
            },
            {
              "name": "Content-Type",
              "value": "image/jpeg"
            }
          ]
        }
      },
      "typeVersion": 4.3,
      "continueOnFail": true
    },
    {
      "id": "update-status-completed",
      "name": "Update Status: Completed",
      "type": "n8n-nodes-base.googleSheets",
      "position": [
        5250,
        0
      ],
      "parameters": {
        "columns": {
          "value": {
            "status": "completed",
            "videoId": "={{ $('Upload to YouTube').first().json.id }}",
            "videoUrl": "=https://youtube.com/watch?v={{ $('Upload to YouTube').first().json.id }}",
            "lastUpdate": "={{ $now.toISO() }}",
            "completedAt": "={{ $now.toISO() }}"
          },
          "mappingMode": "defineBelow"
        },
        "options": {
          "locationDefine": {
            "value": "={{ $('Prepare Video Binary').first().json.channelData.rowId }}"
          },
          "dataLocationOnSheet": "row"
        },
        "operation": "update",
        "sheetName": {
          "mode": "list",
          "value": "={{ $env.SHEET_NAME || 'VideoPrompts' }}"
        },
        "documentId": {
          "mode": "list",
          "value": "={{ $env.GOOGLE_SHEET_ID }}"
        }
      },
      "credentials": {
        "googleSheetsOAuth2Api": {
          "id": "1",
          "name": "Google Sheets account"
        }
      },
      "typeVersion": 4.5
    },
    {
      "id": "cleanup-files",
      "name": "Cleanup Session Files",
      "type": "n8n-nodes-base.code",
      "position": [
        5450,
        0
      ],
      "parameters": {
        "jsCode": "// Cleanup session directory\nconst fs = require('fs');\nconst path = require('path');\n\nconst sessionDir = $('Prepare Video Binary').first().json.sessionDir;\n\ntry {\n  if (fs.existsSync(sessionDir)) {\n    fs.rmSync(sessionDir, { recursive: true, force: true });\n  }\n  return { cleanupSuccess: true, sessionDir };\n} catch (error) {\n  return { cleanupSuccess: false, error: error.message, sessionDir };\n}"
      },
      "typeVersion": 2
    },
    {
      "id": "update-status-failed",
      "name": "Update Status: Failed",
      "type": "n8n-nodes-base.googleSheets",
      "position": [
        5050,
        300
      ],
      "parameters": {
        "columns": {
          "value": {
            "status": "failed",
            "failedAt": "={{ $now.toISO() }}",
            "lastUpdate": "={{ $now.toISO() }}",
            "errorMessage": "={{ $json.error || $json.message || 'Unknown error occurred' }}"
          },
          "mappingMode": "defineBelow"
        },
        "options": {
          "locationDefine": {
            "value": "={{ $('Prepare Channel Data').first().json.channelData.rowId }}"
          },
          "dataLocationOnSheet": "row"
        },
        "operation": "update",
        "sheetName": {
          "mode": "list",
          "value": "={{ $env.SHEET_NAME || 'VideoPrompts' }}"
        },
        "documentId": {
          "mode": "list",
          "value": "={{ $env.GOOGLE_SHEET_ID }}"
        }
      },
      "credentials": {
        "googleSheetsOAuth2Api": {
          "id": "1",
          "name": "Google Sheets account"
        }
      },
      "typeVersion": 4.5
    },
    {
      "id": "error-handler",
      "name": "Error Handler",
      "type": "n8n-nodes-base.code",
      "position": [
        4850,
        300
      ],
      "parameters": {
        "jsCode": "// Error handler: Log and prepare for retry queue\nconst error = $input.first().json;\nconst channelData = $('Prepare Channel Data').first().json.channelData;\n\nconst errorLog = {\n  channelId: channelData.channelId,\n  channelName: channelData.channelName,\n  prompt: channelData.prompt,\n  errorType: error.name || 'UnknownError',\n  errorMessage: error.message || JSON.stringify(error),\n  failedStage: $workflow.name,\n  timestamp: new Date().toISOString(),\n  retryable: true\n};\n\nconsole.error('Video Pipeline Error:', errorLog);\n\nreturn errorLog;"
      },
      "typeVersion": 2
    },
    {
      "id": "log-error",
      "name": "Log Error to Sheet",
      "type": "n8n-nodes-base.googleSheets",
      "position": [
        5250,
        300
      ],
      "parameters": {
        "columns": {
          "value": {},
          "mappingMode": "autoMapInputData"
        },
        "options": {},
        "operation": "append",
        "sheetName": {
          "mode": "list",
          "value": "ErrorLog"
        },
        "documentId": {
          "mode": "list",
          "value": "={{ $env.ERROR_LOG_SHEET_ID || $env.GOOGLE_SHEET_ID }}"
        }
      },
      "credentials": {
        "googleSheetsOAuth2Api": {
          "id": "1",
          "name": "Google Sheets account"
        }
      },
      "typeVersion": 4.5
    },
    {
      "id": "no-pending-videos",
      "name": "No Pending Videos",
      "type": "n8n-nodes-base.noOp",
      "notes": "End workflow gracefully when no pending videos",
      "position": [
        850,
        400
      ],
      "parameters": {},
      "typeVersion": 1
    }
  ],
  "settings": {
    "callerPolicy": "workflowsFromSameOwner",
    "errorWorkflow": "",
    "executionOrder": "v1",
    "saveManualExecutions": true
  },
  "connections": {
    "Error Handler": {
      "main": [
        [
          {
            "node": "Update Status: Failed",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Rate Limit Wait": {
      "main": [
        [
          {
            "node": "Split Image Segments",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Save Audio File": {
      "main": [
        [
          {
            "node": "Split Image Segments",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Save Scene Image": {
      "main": [
        [
          {
            "node": "Rate Limit Wait",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Upload Thumbnail": {
      "main": [
        [
          {
            "node": "Update Status: Completed",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Upload to YouTube": {
      "main": [
        [
          {
            "node": "Check Upload Success",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Download Thumbnail": {
      "main": [
        [
          {
            "node": "Upload to YouTube",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Generate Thumbnail": {
      "main": [
        [
          {
            "node": "Download Thumbnail",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Log Error to Sheet": {
      "main": [
        [
          {
            "node": "Process One Channel at a Time",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Read Video Prompts": {
      "main": [
        [
          {
            "node": "Filter Pending Videos",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Check Script Success": {
      "main": [
        [
          {
            "node": "Parse & Structure Script",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "Error Handler",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Check Upload Success": {
      "main": [
        [
          {
            "node": "Upload Thumbnail",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "Error Handler",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Download Scene Image": {
      "main": [
        [
          {
            "node": "Save Scene Image",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Generate Audio (TTS)": {
      "main": [
        [
          {
            "node": "Save Audio File",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Generate Scene Image": {
      "main": [
        [
          {
            "node": "Download Scene Image",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Prepare Channel Data": {
      "main": [
        [
          {
            "node": "Update Status: Processing",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Prepare Video Binary": {
      "main": [
        [
          {
            "node": "Generate Thumbnail",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Split Image Segments": {
      "main": [
        [
          {
            "node": "Generate Scene Image",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "Aggregate Video Assets",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Cleanup Session Files": {
      "main": [
        [
          {
            "node": "Process One Channel at a Time",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Encode Video (ffmpeg)": {
      "main": [
        [
          {
            "node": "Check Encoding Success",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Filter Pending Videos": {
      "main": [
        [
          {
            "node": "Process One Channel at a Time",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "No Pending Videos",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Generate Script (LLM)": {
      "main": [
        [
          {
            "node": "Check Script Success",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Update Status: Failed": {
      "main": [
        [
          {
            "node": "Log Error to Sheet",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Aggregate Video Assets": {
      "main": [
        [
          {
            "node": "Encode Video (ffmpeg)",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Check Encoding Success": {
      "main": [
        [
          {
            "node": "Prepare Video Binary",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "Error Handler",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Schedule Every 6 Hours": {
      "main": [
        [
          {
            "node": "Read Video Prompts",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Parse & Structure Script": {
      "main": [
        [
          {
            "node": "Generate Audio (TTS)",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Update Status: Completed": {
      "main": [
        [
          {
            "node": "Cleanup Session Files",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Update Status: Processing": {
      "main": [
        [
          {
            "node": "Generate Script (LLM)",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Process One Channel at a Time": {
      "main": [
        [
          {
            "node": "Prepare Channel Data",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  }
}