How to Build a Podcast-to-AI Intelligence Pipeline with n8n (Free Template)

Podcasts contain valuable insights, but extracting actionable intelligence from audio is manual and time-consuming. This n8n workflow automates the entire pipeline: RSS monitoring, MP3 transcription via Whisper, GPT-4 content extraction, and semantic embedding generation. You'll have a searchable knowledge base that automatically processes new episodes within hours of publication.

The Problem: Podcast Content Locked in Audio Format

Podcast content represents hours of expert knowledge, but it's trapped in an unsearchable format. Manual transcription costs $1-3 per audio minute. Extracting frameworks, lessons, and quotes requires additional hours of human analysis.

Current challenges:

  • RSS feeds require constant monitoring for new episodes
  • Audio transcription is expensive and slow (24-48 hour turnaround)
  • Content analysis requires listening to entire episodes
  • No way to search across episodes for specific concepts
  • Valuable insights remain buried in audio files

Business impact:

  • Time spent: 4-6 hours per episode for manual processing
  • Cost: $150-300 per episode for transcription and analysis
  • Delay: 2-3 days from publication to actionable insights
  • Lost opportunities: 80% of podcast content never gets analyzed

The Solution Overview

This n8n automation pipeline monitors podcast RSS feeds, downloads new episodes, transcribes audio using OpenAI's Whisper, extracts structured insights with GPT-4, and generates semantic embeddings for intelligent search. The system runs continuously, processing episodes within 2-3 hours of publication. All data flows into Xano for structured storage and API access. This approach eliminates manual work while creating a searchable intelligence layer over your podcast library.

What You'll Build

This automation delivers a complete podcast intelligence system with four interconnected workflows that handle everything from RSS monitoring to semantic search capabilities.

| Component | Technology | Purpose |
|---|---|---|
| RSS Monitor | n8n Schedule Trigger + HTTP Request | Check feeds every 6 hours for new episodes |
| Audio Processing | OpenAI Whisper API | Convert MP3 to accurate text transcripts |
| Intelligence Extraction | GPT-4 via OpenAI API | Extract lessons, frameworks, examples, quotes |
| Semantic Layer | OpenAI Embeddings (text-embedding-3-small) | Enable concept-based search across episodes |
| Data Storage | Xano Database | Store episodes, transcripts, summaries, embeddings |
| Error Handling | n8n Error Triggers + Status Flags | Retry logic and failure tracking |

Key capabilities:

  • Automatic detection of new podcast episodes
  • High-accuracy transcription (Whisper's word-error rate: 3-5%)
  • Structured extraction of actionable insights
  • Vector search for finding similar concepts across episodes
  • API endpoints for integrating with other tools
  • Fail-safe processing with automatic retries

Prerequisites

Before starting, ensure you have:

  • n8n instance (cloud or self-hosted version 1.0+)
  • OpenAI API account with credits ($5 minimum recommended)
  • Xano account (free tier works for testing)
  • Target podcast RSS feed URL
  • Basic understanding of JSON data structures
  • Familiarity with API authentication concepts

Estimated setup time: 4-5 hours for complete implementation

Step 1: Set Up Episode Ingestion

This workflow monitors your podcast RSS feed and creates database records for new episodes. It runs every 6 hours, checking for episodes you haven't processed yet.

Configure the RSS Monitor

  1. Add a Schedule Trigger node set to run every 6 hours
  2. Add an HTTP Request node pointing to your podcast RSS feed
  3. Add an XML node to parse the RSS response into JSON
  4. Add a Function node to extract episode metadata (title, URL, publish date, description)

Node configuration:

// Function node: Extract Episode Data
const items = $input.all();
const episodes = [];

for (const item of items) {
  // channel.item is a single object for a one-episode feed,
  // but an array for the usual multi-episode feed
  const rssItems = item.json.rss.channel.item;
  const list = Array.isArray(rssItems) ? rssItems : [rssItems];

  for (const rssItem of list) {
    episodes.push({
      title: rssItem.title,
      mp3_url: rssItem.enclosure.url,
      publish_date: rssItem.pubDate,
      description: rssItem.description,
      guid: rssItem.guid, // Unique identifier (some XML parsers return this as an object)
      processed: false
    });
  }
}

return episodes.map(ep => ({ json: ep }));

Xano Integration

  1. Create a Xano table called episodes with fields: title (text), mp3_url (text), publish_date (datetime), description (text), guid (text, unique), processed (boolean), transcription_status (text)
  2. Add an HTTP Request node to check if episode already exists: GET /api:xano/episodes?guid={guid}
  3. Add an IF node to filter only new episodes (where Xano returns empty)
  4. Add an HTTP Request node to create new episode records: POST /api:xano/episodes

Why this works:
The GUID field ensures you never process the same episode twice. The processed flag tracks workflow completion. By checking existence before creating, you avoid duplicate database entries even if the workflow runs multiple times.
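The existence check and IF-node filter boil down to simple set membership. This is an illustrative sketch, not part of the template: `filterNewEpisodes` is a made-up helper, and `existingGuids` stands in for the result of the Xano GET lookup in step 2.

```javascript
// Keep only episodes whose GUID is not already stored in Xano.
function filterNewEpisodes(fetchedEpisodes, existingGuids) {
  const seen = new Set(existingGuids);
  return fetchedEpisodes.filter(ep => {
    if (seen.has(ep.guid)) return false; // already stored, skip
    seen.add(ep.guid); // also dedupe within this batch
    return true;
  });
}

// Example: one episode is already in the database
const fetched = [
  { guid: 'ep-001', title: 'Episode 1' },
  { guid: 'ep-002', title: 'Episode 2' },
];
console.log(filterNewEpisodes(fetched, ['ep-001']).map(e => e.guid)); // [ 'ep-002' ]
```

The same logic holds whether the workflow runs once or ten times: a GUID already in the set never produces a second database record.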

Step 2: Transcribe Audio with Whisper

This workflow downloads MP3 files and sends them to OpenAI's Whisper for transcription. It processes episodes marked as unprocessed in your database.

Configure Audio Download

  1. Add a Schedule Trigger (runs every hour)
  2. Query Xano for episodes where processed = false and transcription_status = null
  3. Add an HTTP Request node to download the MP3 file (set response format to "File")
  4. Add a Binary Data node to prepare the audio for Whisper

Whisper Transcription Setup

// HTTP Request node configuration for Whisper
Method: POST
URL: https://api.openai.com/v1/audio/transcriptions
Authentication: Header Auth
Header Name: Authorization
Header Value: Bearer {{$env.OPENAI_API_KEY}}

Body (Form-Data):
- file: {{$binary.data}}
- model: whisper-1
- response_format: verbose_json
- language: en

Update Episode Status

  1. Add a Function node to extract the transcript text and timestamps
  2. Create a Xano table called transcripts with fields: episode_id (relation to episodes), full_text (text), segments (JSON), duration (integer)
  3. Add HTTP Request to create transcript record: POST /api:xano/transcripts
  4. Update episode status: PATCH /api:xano/episodes/{id} setting transcription_status = "completed"

Why this approach:
Whisper's verbose_json format provides timestamps for each segment, enabling future features like "jump to timestamp" in playback. Storing transcripts in a separate table keeps your data normalized and allows multiple transcript versions (useful for re-processing with improved models).
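As a small illustration of what the stored segment timestamps enable, a display helper like this (hypothetical, not part of the workflow) turns a segment's start time in seconds into a clickable "jump to" label:

```javascript
// Convert seconds (as returned in Whisper segment start/end fields)
// into an h:mm:ss or m:ss display string.
function formatTimestamp(seconds) {
  const h = Math.floor(seconds / 3600);
  const m = Math.floor((seconds % 3600) / 60);
  const s = Math.floor(seconds % 60);
  const mm = String(m).padStart(2, '0');
  const ss = String(s).padStart(2, '0');
  return h > 0 ? `${h}:${mm}:${ss}` : `${m}:${ss}`;
}

console.log(formatTimestamp(3725.4)); // 1:02:05
console.log(formatTimestamp(75));     // 1:15
```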

Critical settings:

  • Set HTTP Request timeout to 300 seconds (5 minutes) for long episodes
  • Use whisper-1 model (most cost-effective at $0.006/minute)
  • Always include language: en for English-language shows; skipping auto-detection improves both speed and transcription accuracy
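The Function node in step 1 above could reshape Whisper's verbose_json response into the transcripts table fields roughly like this. The field names `text`, `segments`, and `duration` match Whisper's documented verbose_json output; `buildTranscriptRecord` and the sample response are illustrative.

```javascript
// Map a Whisper verbose_json response onto the transcripts table schema.
function buildTranscriptRecord(whisperResponse, episodeId) {
  return {
    episode_id: episodeId,
    full_text: whisperResponse.text,
    segments: whisperResponse.segments.map(s => ({
      start: s.start, // seconds from episode start
      end: s.end,
      text: s.text,
    })),
    duration: Math.round(whisperResponse.duration), // integer seconds
  };
}

// Tiny sample response, shaped like Whisper's verbose_json output
const sample = {
  text: 'Welcome to the show.',
  duration: 4.2,
  segments: [{ id: 0, start: 0.0, end: 4.2, text: 'Welcome to the show.' }],
};
const record = buildTranscriptRecord(sample, 42);
console.log(record.duration); // 4
```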

Step 3: Extract Intelligence with GPT-4

This workflow analyzes transcripts and extracts structured insights: key lessons, frameworks, practical examples, and memorable quotes.

Configure GPT-4 Processing

  1. Add a Schedule Trigger (runs every 2 hours)
  2. Query Xano for episodes where transcription_status = "completed" and analysis_status = null
  3. Fetch the related transcript from the transcripts table
  4. Add an OpenAI node configured for GPT-4

Prompt Template

// Function node: Build Analysis Prompt
const transcript = $input.item.json.transcript_text;

const prompt = `Analyze this podcast transcript and extract structured insights.

TRANSCRIPT:
${transcript}

Extract the following in valid JSON format:

{
  "key_lessons": [
    {"lesson": "specific lesson", "context": "why it matters", "timestamp": "approximate time"}
  ],
  "frameworks": [
    {"name": "framework name", "description": "how it works", "steps": ["step 1", "step 2"]}
  ],
  "examples": [
    {"example": "concrete example", "application": "how to apply", "outcome": "expected result"}
  ],
  "memorable_quotes": [
    {"quote": "exact quote", "speaker": "who said it", "significance": "why it matters"}
  ],
  "summary": "2-3 sentence episode summary"
}

Return ONLY valid JSON. No markdown formatting.`;

return [{ json: { prompt } }];

OpenAI Node Configuration

Model: gpt-4-turbo-preview
Temperature: 0.3 (lower = more consistent)
Max Tokens: 4000
Response Format: JSON Object

Store Extracted Data

  1. Create Xano table summaries with fields: episode_id (relation), key_lessons (JSON), frameworks (JSON), examples (JSON), quotes (JSON), summary (text)
  2. Add a Function node to parse and validate the GPT response
  3. Add HTTP Request to create summary record: POST /api:xano/summaries
  4. Update episode: PATCH /api:xano/episodes/{id} setting analysis_status = "completed"

Why this works:
GPT-4's JSON mode ensures consistent output structure. Temperature 0.3 balances creativity with reliability. Storing structured data as JSON in Xano allows flexible querying (e.g., "find all episodes discussing X framework").
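The parse-and-validate Function node from step 2 above could look like this sketch. The required key names come from the prompt template; `parseSummary` is an illustrative name, and the fence-stripping line is a defensive extra for when GPT wraps its reply in markdown anyway.

```javascript
// Required top-level keys, matching the prompt template above.
const REQUIRED_KEYS = ['key_lessons', 'frameworks', 'examples', 'memorable_quotes', 'summary'];

function parseSummary(raw) {
  // Strip accidental markdown code fences before parsing
  const cleaned = raw.replace(/^```(json)?|```$/g, '').trim();
  const data = JSON.parse(cleaned); // throws on invalid JSON
  for (const key of REQUIRED_KEYS) {
    if (!(key in data)) throw new Error(`Missing field: ${key}`);
  }
  return data;
}

const good = parseSummary(
  '{"key_lessons":[],"frameworks":[],"examples":[],"memorable_quotes":[],"summary":"ok"}'
);
console.log(good.summary); // ok
```

Failing loudly here keeps malformed analyses out of Xano; the Error Trigger path can then flag the episode for reprocessing.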

Common issues:

  • GPT returns markdown instead of JSON → Add "Return ONLY valid JSON" to prompt
  • Token limit exceeded → Split long transcripts into chunks and process separately
  • Inconsistent field names → Validate JSON structure in Function node before saving
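One way to handle the token-limit case is to split on sentence boundaries into chunks of roughly `maxChars` characters (about 4 characters per token is a common rule of thumb). This is a sketch, not part of the template; each chunk would be analyzed separately and the results merged.

```javascript
// Split a long transcript into sentence-aligned chunks of ~maxChars characters.
function chunkTranscript(text, maxChars = 40000) {
  const sentences = text.split(/(?<=[.!?])\s+/);
  const chunks = [];
  let current = '';
  for (const sentence of sentences) {
    if (current.length + sentence.length + 1 > maxChars && current) {
      chunks.push(current.trim()); // close the current chunk at a sentence boundary
      current = '';
    }
    current += sentence + ' ';
  }
  if (current.trim()) chunks.push(current.trim());
  return chunks;
}

const chunks = chunkTranscript('First point. Second point. Third point.', 15);
console.log(chunks.length); // 3
```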

Step 4: Generate Semantic Embeddings

This workflow creates vector embeddings for intelligent search across your podcast library. It enables queries like "find episodes about leadership under pressure."

Configure Embedding Generation

  1. Add a Schedule Trigger (runs every 3 hours)
  2. Query Xano for episodes where analysis_status = "completed" and embedding_status = null
  3. Fetch the summary and key lessons from summaries table
  4. Add a Function node to create embedding input text

Prepare Embedding Text

// Function node: Create Embedding Input
const summary = $input.item.json.summary;
const lessons = $input.item.json.key_lessons;
const frameworks = $input.item.json.frameworks;

// Combine most semantically rich content
const lessonText = lessons.map(l => l.lesson).join('. ');
const frameworkText = frameworks.map(f => `${f.name}: ${f.description}`).join('. ');

const embeddingInput = `${summary} ${lessonText} ${frameworkText}`;

return [{ json: { text: embeddingInput, episode_id: $input.item.json.episode_id } }];

OpenAI Embeddings API

// HTTP Request node configuration
Method: POST
URL: https://api.openai.com/v1/embeddings
Authentication: Header Auth
Header Name: Authorization
Header Value: Bearer {{$env.OPENAI_API_KEY}}

Body (JSON):
{
  "input": "{{$json.text}}",
  "model": "text-embedding-3-small"
}

Store Embeddings

  1. Create Xano table embeddings with fields: episode_id (relation), vector (JSON array), model (text), created_at (datetime)
  2. Extract the embedding vector from API response (1536-dimensional array)
  3. Add HTTP Request: POST /api:xano/embeddings
  4. Update episode: PATCH /api:xano/episodes/{id} setting embedding_status = "completed"

Why this approach:
The text-embedding-3-small model provides excellent quality at $0.02 per 1M tokens (5x cheaper than ada-002). Combining summary, lessons, and frameworks creates semantically rich vectors. Storing embeddings separately allows re-generation if better models emerge.
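On the search side, ranking stored vectors against a query embedding is just cosine similarity. In production you would fetch vectors from Xano (or a vector database); the 3-dimensional vectors below are stand-ins for the real 1536-dimensional ones, and both function names are illustrative.

```javascript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Score every episode against the query vector, best match first.
function rankEpisodes(queryVector, episodes) {
  return episodes
    .map(ep => ({ ...ep, score: cosineSimilarity(queryVector, ep.vector) }))
    .sort((x, y) => y.score - x.score);
}

const ranked = rankEpisodes([1, 0, 0], [
  { episode_id: 1, vector: [0, 1, 0] },
  { episode_id: 2, vector: [0.9, 0.1, 0] },
]);
console.log(ranked[0].episode_id); // 2
```

A linear scan like this is fine for hundreds of episodes; the Pinecone/Weaviate option discussed later is for when the library grows past that.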

Workflow Architecture Overview

This system consists of 4 interconnected workflows totaling approximately 45 nodes organized into distinct processing stages:

  1. Episode ingestion (Nodes 1-8): RSS monitoring, episode detection, database record creation
  2. Transcription pipeline (Nodes 9-18): MP3 download, Whisper API calls, transcript storage
  3. Intelligence extraction (Nodes 19-32): GPT-4 analysis, structured data extraction, summary creation
  4. Semantic layer (Nodes 33-45): Embedding generation, vector storage, search enablement

Execution flow:

  • Trigger: Schedule-based (staggered to prevent API rate limits)
  • Average run time: 15-20 minutes per episode (varies by length)
  • Key dependencies: OpenAI API, Xano database, stable internet connection

Critical nodes:

  • HTTP Request (Whisper): Handles large file uploads, requires 5-minute timeout
  • OpenAI Chat (GPT-4): Processes transcripts up to 128k tokens, needs JSON mode enabled
  • Function nodes: Parse RSS, validate JSON, prepare embedding text
  • Error Trigger nodes: Catch failures and update status flags for retry logic

The complete n8n workflow JSON template is available at the bottom of this article.

Critical Configuration Settings

OpenAI API Integration

Required fields:

  • API Key: Your OpenAI API key (starts with sk-)
  • Organization ID: Optional but recommended for tracking
  • Timeout: 300 seconds for Whisper, 120 seconds for GPT-4
  • Retry attempts: 3 with exponential backoff (2s, 4s, 8s)
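If you implement retries in a Code node instead of relying on the HTTP Request node's built-in retry settings, the 2s/4s/8s schedule above is a simple power of two. A minimal, illustrative helper:

```javascript
// Exponential backoff: attempt 1 -> 2000ms, attempt 2 -> 4000ms, attempt 3 -> 8000ms.
function backoffDelayMs(attempt, baseMs = 2000) {
  return baseMs * 2 ** (attempt - 1);
}

console.log([1, 2, 3].map(a => backoffDelayMs(a))); // [ 2000, 4000, 8000 ]
```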

Common issues:

  • Rate limit errors → Add 2-second delay between API calls using Wait node
  • Token limit exceeded → Chunk transcripts over 100k tokens into segments
  • Whisper file size limit → Compress MP3 files over 25MB before upload

Xano Database Configuration

Table relationships:

episodes (1) → (many) transcripts
episodes (1) → (1) summaries  
episodes (1) → (1) embeddings

Required indexes:

  • episodes.guid (unique) - Fast duplicate detection
  • episodes.processed (boolean) - Query optimization
  • episodes.publish_date (datetime) - Chronological sorting

API endpoint structure:

  • GET /api:xano/episodes?processed=false - Fetch unprocessed episodes
  • POST /api:xano/transcripts - Create transcript record
  • PATCH /api:xano/episodes/{id} - Update processing status

Authentication:
Use Xano's API token authentication. Store token in n8n credentials, never hardcode in nodes.

Testing & Validation

Component testing sequence:

  1. RSS Ingestion: Run workflow manually, verify episode records created in Xano with correct metadata
  2. Transcription: Test with a 5-minute episode first, check transcript accuracy and timestamp alignment
  3. GPT Analysis: Validate JSON structure, ensure all required fields present, verify content quality
  4. Embeddings: Confirm 1536-dimensional vector stored, test similarity search with known queries

Input/output validation:

Add Function nodes after each API call to check response structure:

// Validation example
const response = $input.item.json;

if (!response.text || response.text.length < 100) {
  throw new Error('Transcript too short - possible API failure');
}

if (!Array.isArray(response.segments)) {
  throw new Error('Missing timestamp segments');
}

return [$input.item];

Common troubleshooting:

| Issue | Cause | Solution |
|---|---|---|
| Whisper timeout or rejection | Large file (Whisper rejects files over 25MB) | Add MP3 compression node before upload |
| GPT returns markdown | Prompt not explicit | Add "Return ONLY valid JSON. No markdown." |
| Duplicate episodes | GUID check failing | Verify Xano unique constraint on guid field |
| Missing embeddings | API rate limit | Add 3-second Wait node between calls |

Deployment Considerations

Production Deployment Checklist

| Area | Requirement | Why It Matters |
|---|---|---|
| Error Handling | Error Trigger nodes on all workflows | Prevents silent failures, enables automatic retry |
| Monitoring | Webhook to Slack/email on failure | Detect issues within minutes vs discovering days later |
| Rate Limiting | Wait nodes between API calls (2-3 seconds) | Avoid OpenAI rate limit errors (3 requests/min on free tier) |
| Data Validation | Function nodes checking response structure | Catch malformed data before database insertion |
| Backup Strategy | Daily Xano database exports | Protect against accidental data loss |
| Cost Tracking | OpenAI usage monitoring dashboard | Prevent surprise bills (estimate: $2-5 per episode) |
| Credential Security | Use n8n credential store, never hardcode | Prevent API key exposure in workflow exports |

Scaling considerations:

For processing 10+ episodes per day:

  • Upgrade to OpenAI pay-as-you-go (removes rate limits)
  • Implement batch processing (group episodes by publish date)
  • Add queue management using Xano status fields
  • Consider self-hosted Whisper for cost reduction at scale

Customization ideas:

  • Add Slack notifications when new episodes are fully processed
  • Create weekly digest emails with top insights across episodes
  • Build a frontend search interface using Xano's API
  • Integrate with Notion to create automatic episode notes
  • Add speaker diarization to identify different voices

Use Cases & Variations

Use Case 1: Competitive Intelligence

  • Industry: SaaS, Marketing Agencies
  • Scale: Monitor 5-10 competitor podcasts, ~50 episodes/month
  • Modifications needed: Add sentiment analysis to GPT prompt, create comparison tables across shows, set up alerts for specific keywords (pricing, features, strategy)

Use Case 2: Internal Knowledge Management

  • Industry: Consulting, Professional Services
  • Scale: Process internal meeting recordings, 20-30 hours/week
  • Modifications needed: Replace RSS with Google Drive folder monitoring, add speaker identification, create searchable knowledge base with Algolia integration

Use Case 3: Content Repurposing

  • Industry: Content Creators, Media Companies
  • Scale: 3-5 episodes/week, need social media content
  • Modifications needed: Add GPT prompt for Twitter threads, LinkedIn posts, blog outlines; integrate with Buffer/Hootsuite for automatic posting; extract 30-second clip suggestions with timestamps

Use Case 4: Research & Academia

  • Industry: Universities, Think Tanks
  • Scale: Archive 100+ interviews, need citation-ready transcripts
  • Modifications needed: Add citation formatting (APA/MLA), create topic taxonomy with hierarchical tags, implement advanced semantic search with filters by date/speaker/topic

Customizations & Extensions

Alternative Integrations

Instead of Xano:

  • Supabase: Better for developers comfortable with PostgreSQL - requires 8 node changes (HTTP Request → Supabase nodes)
  • Airtable: Easier visual interface, better for non-technical teams - swap Xano HTTP nodes with Airtable nodes (same structure)
  • PostgreSQL + pgvector: Best for self-hosted, advanced vector search - requires custom SQL queries in Function nodes

Instead of Whisper:

  • AssemblyAI: Better speaker diarization, costs $0.00025/second - similar API structure, minimal changes
  • Deepgram: Faster processing (2x speed), costs $0.0043/minute - requires different API endpoint format
  • Rev.ai: Human-in-the-loop option for critical accuracy - higher cost ($1.50/minute) but 99%+ accuracy

Workflow Extensions

Add automated reporting:

  • Add a Schedule node to run weekly on Mondays at 9 AM
  • Query Xano for all episodes processed in past 7 days
  • Use GPT-4 to generate executive summary across episodes
  • Connect to Google Slides API to create presentation deck
  • Email stakeholders with attached report
  • Nodes needed: +7 (Schedule, HTTP Request, OpenAI, Google Slides, Gmail)

Scale to handle more data:

  • Replace episode-by-episode processing with batch mode (process 10 at once)
  • Add Redis caching layer for frequently accessed transcripts
  • Implement parallel processing using n8n's Split In Batches node
  • Add CDN-backed object storage for MP3 file caching (Cloudflare R2)
  • Performance improvement: 5x faster for 50+ episodes, 70% cost reduction on API calls

Add advanced search capabilities:

  • Integrate Pinecone or Weaviate for production-grade vector search
  • Build hybrid search (keyword + semantic) using Xano full-text + embeddings
  • Add search filters (date range, speaker, topic tags)
  • Create "similar episodes" recommendation engine
  • Nodes needed: +12 (vector database integration, search API, ranking logic)
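The hybrid search idea above reduces to blending two scores per episode. This is a sketch under assumed inputs: `keywordScore` would come from Xano full-text search and `semanticScore` from embedding similarity, both normalized to 0-1, with `alpha` a tunable weight.

```javascript
// Blend keyword relevance and semantic similarity into one ranking score.
function hybridScore(keywordScore, semanticScore, alpha = 0.5) {
  // alpha weights keyword relevance; (1 - alpha) weights semantic similarity
  return alpha * keywordScore + (1 - alpha) * semanticScore;
}

console.log(hybridScore(0.5, 1)); // 0.75
```

Tuning `alpha` toward 1 favors exact terminology matches; toward 0 it favors conceptual similarity, which suits queries like "leadership under pressure."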

Integration possibilities:

Add This To Get This Complexity
Slack integration Real-time alerts when episodes processed Easy (2 nodes)
Notion sync Automatic episode notes in workspace Medium (5 nodes)
Zapier webhook Connect to 5000+ apps Easy (1 node)
Custom frontend Searchable web interface for team Advanced (separate project)
Analytics dashboard Track processing metrics, costs, usage Medium (8 nodes + Grafana)

Get Started Today

Ready to automate your podcast intelligence pipeline?

  1. Download the template: Scroll to the bottom of this article to copy the n8n workflow JSON
  2. Import to n8n: Go to Workflows → Import from File, paste the JSON
  3. Configure your services: Add credentials for OpenAI API and Xano database
  4. Set up Xano tables: Create the four tables (episodes, transcripts, summaries, embeddings) with specified fields
  5. Test with one episode: Run the ingestion workflow manually with a single podcast RSS feed
  6. Verify each stage: Check that transcription, analysis, and embeddings complete successfully
  7. Deploy to production: Activate all four workflows and let them run on schedule

Estimated setup time: 4-5 hours for complete implementation with testing.

Need help customizing this workflow for your specific podcast intelligence needs? Schedule an intro call with Atherial.


Starter n8n Workflow JSON Template

{
  "name": "Podcast Intelligence Pipeline",
  "nodes": [
    {
      "parameters": {
        "rule": {
          "interval": [
            {
              "field": "hours",
              "hoursInterval": 6
            }
          ]
        }
      },
      "name": "Schedule RSS Check",
      "type": "n8n-nodes-base.scheduleTrigger",
      "typeVersion": 1,
      "position": [250, 300]
    }
  ],
  "connections": {},
  "settings": {
    "executionOrder": "v1"
  }
}

Note: This snippet is a starter showing only the trigger configuration. A fuller importable template follows below; use it as a foundation and build out the four processing stages following the step-by-step instructions above.

Complete n8n Workflow Template

Copy the JSON below and import it into your n8n instance via Workflows → Import from File.

{
  "name": "Podcast AI Intelligence Pipeline",
  "nodes": [
    {
      "id": "1",
      "name": "Fetch Podcast RSS Feed",
      "type": "n8n-nodes-base.rssFeedRead",
      "position": [
        100,
        100
      ],
      "parameters": {
        "url": "={{ $env.PODCAST_RSS_URL }}"
      },
      "typeVersion": 1.2
    },
    {
      "id": "2",
      "name": "Process Episodes in Batches",
      "type": "n8n-nodes-base.splitInBatches",
      "position": [
        300,
        100
      ],
      "parameters": {
        "batchSize": 1
      },
      "typeVersion": 3
    },
    {
      "id": "3",
      "name": "Extract Episode Metadata",
      "type": "n8n-nodes-base.set",
      "position": [
        500,
        100
      ],
      "parameters": {
        "mode": "manual",
        "assignments": {
          "assignments": [
            {
              "name": "title",
              "value": "={{ $json.title }}"
            },
            {
              "name": "description",
              "value": "={{ $json.description }}"
            },
            {
              "name": "pubDate",
              "value": "={{ $json.pubDate }}"
            },
            {
              "name": "audioUrl",
              "value": "={{ $json.enclosure.url }}"
            },
            {
              "name": "guid",
              "value": "={{ $json.guid }}"
            },
            {
              "name": "link",
              "value": "={{ $json.link }}"
            }
          ]
        },
        "includeOtherFields": false
      },
      "typeVersion": 3.4
    },
    {
      "id": "4",
      "name": "Download Podcast Audio",
      "type": "n8n-nodes-base.httpRequest",
      "onError": "continueRegularOutput",
      "maxTries": 3,
      "position": [
        700,
        100
      ],
      "parameters": {
        "url": "={{ $json.audioUrl }}",
        "method": "GET",
        "responseFormat": "binaryData"
      },
      "retryOnFail": true,
      "typeVersion": 4.3,
      "waitBetweenTries": 15000
    },
    {
      "id": "5",
      "name": "Transcribe Audio with Whisper",
      "type": "@n8n/n8n-nodes-langchain.openAi",
      "onError": "continueRegularOutput",
      "maxTries": 2,
      "position": [
        900,
        100
      ],
      "parameters": {
        "resource": "audio",
        "operation": "transcribe"
      },
      "retryOnFail": true,
      "typeVersion": 2,
      "waitBetweenTries": 10000
    },
    {
      "id": "6",
      "name": "Process Transcript",
      "type": "n8n-nodes-base.code",
      "position": [
        1050,
        100
      ],
      "parameters": {
        "mode": "runOnceForEachItem",
        "jsCode": "// Validate transcription was successful\nif (!$json.text) {\n  return $json;\n}\n// Pass through the transcript with metadata\nreturn {\n  ...items[itemIndex],\n  transcript: $json.text,\n  transcriptLength: $json.text.length\n};",
        "language": "javaScript"
      },
      "typeVersion": 2
    },
    {
      "id": "7",
      "name": "Extract AI Insights with GPT-4",
      "type": "@n8n/n8n-nodes-langchain.openAi",
      "onError": "continueRegularOutput",
      "maxTries": 2,
      "position": [
        1200,
        100
      ],
      "parameters": {
        "prompt": "You are a podcast analyst. Extract structured insights from this transcript:\n\n1. keyTopics: Main topics discussed (5 max)\n2. mainThemes: Overarching themes (5 max)\n3. quotableQuotes: Notable direct quotes (5 max)\n4. actionableInsights: Actionable takeaways (5 max)\n5. learningOutcomes: Key learnings (5 max)\n\nReturn ONLY valid JSON with these exact keys.\n\nTranscript: {{ $json.transcript }}",
        "modelId": "gpt-4-turbo",
        "resource": "text",
        "operation": "response"
      },
      "retryOnFail": true,
      "typeVersion": 2,
      "waitBetweenTries": 5000
    },
    {
      "id": "8",
      "name": "Generate Semantic Embeddings",
      "type": "@n8n/n8n-nodes-langchain.embeddingsOpenAi",
      "onError": "continueRegularOutput",
      "maxTries": 2,
      "position": [
        1350,
        100
      ],
      "parameters": {
        "text": "={{ $json.transcript }}"
      },
      "retryOnFail": true,
      "typeVersion": 1.2
    },
    {
      "id": "9",
      "name": "Prepare Database Payload",
      "type": "n8n-nodes-base.set",
      "position": [
        1500,
        100
      ],
      "parameters": {
        "mode": "manual",
        "assignments": {
          "assignments": [
            {
              "name": "episodeTitle",
              "value": "={{ $json.title }}"
            },
            {
              "name": "description",
              "value": "={{ $json.description }}"
            },
            {
              "name": "publishDate",
              "value": "={{ $json.pubDate }}"
            },
            {
              "name": "episodeGuid",
              "value": "={{ $json.guid }}"
            },
            {
              "name": "episodeLink",
              "value": "={{ $json.link }}"
            },
            {
              "name": "transcript",
              "value": "={{ $json.transcript }}"
            },
            {
              "name": "insights",
              "value": "={{ $json.text }}"
            },
            {
              "name": "embeddings",
              "value": "={{ JSON.stringify($json.embedding) }}"
            },
            {
              "name": "processedAt",
              "value": "={{ new Date().toISOString() }}"
            },
            {
              "name": "status",
              "value": "processed"
            }
          ]
        },
        "includeOtherFields": false
      },
      "typeVersion": 3.4
    },
    {
      "id": "10",
      "name": "Store in Xano Database",
      "type": "n8n-nodes-base.httpRequest",
      "onError": "continueRegularOutput",
      "maxTries": 3,
      "position": [
        1650,
        100
      ],
      "parameters": {
        "url": "={{ $env.XANO_API_URL }}/podcast_episodes",
        "method": "POST",
        "sendBody": true,
        "contentType": "json",
        "sendHeaders": true,
        "authentication": "genericCredentialType",
        "headerParameters": {
          "parameters": [
            {
              "name": "Authorization",
              "value": "=Bearer {{ $env.XANO_API_KEY }}"
            },
            {
              "name": "Content-Type",
              "value": "application/json"
            }
          ]
        }
      },
      "retryOnFail": true,
      "typeVersion": 4.3,
      "waitBetweenTries": 10000
    },
    {
      "id": "11",
      "name": "Log Success",
      "type": "n8n-nodes-base.set",
      "position": [
        1800,
        100
      ],
      "parameters": {
        "mode": "manual",
        "assignments": {
          "assignments": [
            {
              "name": "episodeId",
              "value": "={{ $json.id }}"
            },
            {
              "name": "title",
              "value": "={{ $json.episodeTitle }}"
            },
            {
              "name": "status",
              "value": "indexed"
            },
            {
              "name": "indexedAt",
              "value": "={{ new Date().toISOString() }}"
            }
          ]
        },
        "includeOtherFields": false
      },
      "typeVersion": 3.4
    }
  ],
  "settings": {
    "timezone": "UTC",
    "executionOrder": "v1",
    "saveManualExecutions": true,
    "saveExecutionProgress": true,
    "saveDataErrorExecution": "all",
    "saveDataSuccessExecution": "all"
  },
  "connections": {
    "Process Transcript": {
      "main": [
        [
          {
            "node": "Extract AI Insights with GPT-4",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Download Podcast Audio": {
      "main": [
        [
          {
            "node": "Transcribe Audio with Whisper",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Fetch Podcast RSS Feed": {
      "main": [
        [
          {
            "node": "Process Episodes in Batches",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Store in Xano Database": {
      "main": [
        [
          {
            "node": "Log Success",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Extract Episode Metadata": {
      "main": [
        [
          {
            "node": "Download Podcast Audio",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Prepare Database Payload": {
      "main": [
        [
          {
            "node": "Store in Xano Database",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Process Episodes in Batches": {
      "main": [
        [
          {
            "node": "Extract Episode Metadata",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Generate Semantic Embeddings": {
      "main": [
        [
          {
            "node": "Prepare Database Payload",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Transcribe Audio with Whisper": {
      "main": [
        [
          {
            "node": "Process Transcript",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Extract AI Insights with GPT-4": {
      "main": [
        [
          {
            "node": "Generate Semantic Embeddings",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  }
}