How to Build an AI Sports Stats Narrative Generator with n8n (Free Template)

Sports content creators face a daily grind: gathering fresh statistics, analyzing trends, and crafting compelling narratives for multiple games across multiple leagues. Manual research eats 3-4 hours daily and produces inconsistent quality. This n8n workflow automates the entire process, using AI to transform raw sports data into ready-to-publish bullet points for 5-10 games every day. You'll learn to build a production-ready system that delivers trend-based analysis, not data dumps.

The Problem: Manual Sports Analysis Doesn't Scale

Content creators, sports analysts, and media teams need fresh insights for multiple games daily across NFL, NBA, NHL, MLB, college basketball, and college football. Gathering statistics manually means visiting multiple websites, copying data, analyzing trends, and writing narratives—all before publication deadlines.

Current challenges:

  • Researching 5-10 games takes 3-4 hours of manual work daily
  • Inconsistent narrative quality when rushing to meet deadlines
  • Raw statistics don't engage audiences without context
  • Compliance risks when scraping data from sports websites
  • Stale analysis when yesterday's data isn't incorporated

Business impact:

  • Time spent: 15-20 hours per week on research alone
  • Missed revenue: Late content loses 40-60% of potential engagement
  • Quality issues: Data dumps without narratives reduce audience retention by 35%

The Solution Overview

This n8n workflow automates sports statistics gathering and AI-powered narrative generation. The system retrieves up-to-date data from compliant sources, processes it through OpenAI's GPT models, and outputs 4-5 compelling bullet points per game. The workflow handles multiple leagues simultaneously, ensuring fresh analysis from yesterday's games reaches your audience every morning. Key components include scheduled triggers, API integrations for sports data, OpenAI function nodes for narrative generation, and structured output formatting.

What You'll Build

This automation delivers a complete sports analysis pipeline that runs daily without manual intervention. The system handles data retrieval, AI processing, and formatted output generation.

| Component | Technology | Purpose |
| --- | --- | --- |
| Trigger | Schedule Node | Runs daily at a specified time for fresh analysis |
| Data Source | Sports API (SportsData.io, ESPN API, or similar) | Retrieves yesterday's game statistics |
| AI Engine | OpenAI GPT-4 | Transforms raw stats into narrative bullet points |
| Logic Processing | Function Nodes | Filters games, structures prompts, formats output |
| Output | Google Sheets/Airtable | Stores daily narratives for content distribution |
| Notification | Slack/Email | Alerts team when new analysis is ready |

Key capabilities:

  • Processes 5-10 games daily across NFL, NBA, NHL, MLB, college basketball, college football
  • Generates 4-5 narrative bullet points per game
  • Incorporates trend analysis (player streaks, team momentum, historical matchups)
  • Ensures compliance by using official API sources
  • Delivers same-day analysis from previous day's games

Prerequisites

Before starting, ensure you have:

  • n8n instance (cloud or self-hosted version 1.0+)
  • OpenAI API account with GPT-4 access
  • Sports data API account (SportsData.io, ESPN API, or The Odds API)
  • Google Sheets or Airtable for output storage
  • Slack workspace (optional, for notifications)
  • Basic understanding of API authentication and JSON data structures

Step 1: Set Up Sports Data Retrieval

This phase configures the data source that feeds your AI analysis engine. You'll connect to a sports API that provides compliant, up-to-date statistics.

Configure the Schedule Trigger

  1. Add a Schedule Trigger node set to run daily at 6:00 AM (adjust for your timezone)
  2. Set mode to "Every Day" with execution time 06:00
  3. Enable "Execute Once" to prevent duplicate runs

Connect Your Sports Data API

  1. Add an HTTP Request node named "Fetch Yesterday's Games"
  2. Set Method to GET
  3. Configure URL: https://api.sportsdata.io/v3/[sport]/scores/json/GamesByDate/[yesterday]
  4. Add Authentication: Header Auth with key Ocp-Apim-Subscription-Key
  5. Set timeout to 30 seconds for large datasets

Node configuration:

{
  "method": "GET",
  "url": "=https://api.sportsdata.io/v3/nfl/scores/json/GamesByDate/{{ $now.minus({days: 1}).toFormat('yyyy-MM-dd') }}",
  "authentication": "headerAuth",
  "headerAuth": {
    "name": "Ocp-Apim-Subscription-Key",
    "value": "={{ $credentials.sportsDataApiKey }}"
  },
  "options": {
    "timeout": 30000
  }
}

Why this works:

This approach uses official API endpoints that comply with terms of service, eliminating scraping risks. The dynamic date expression ensures you always pull yesterday's completed games, not today's scheduled matchups. Setting a 30-second timeout prevents workflow failures when APIs experience high traffic during peak sports seasons.
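The date expression above uses Luxon, which n8n exposes inside `{{ }}` expressions. Outside n8n, the same "yesterday" string can be computed with plain JavaScript, a minimal sketch:

```javascript
// Compute yesterday's date as 'yyyy-MM-dd', mirroring the n8n Luxon
// expression $now.minus({days: 1}).toFormat('yyyy-MM-dd').
function yesterdayISO(now = new Date()) {
  const d = new Date(now);
  d.setDate(d.getDate() - 1);
  // toISOString() is UTC; slice off the time portion to keep just the date
  return d.toISOString().slice(0, 10);
}

// Example: build the full request URL for the NFL endpoint
const url = `https://api.sportsdata.io/v3/nfl/scores/json/GamesByDate/${yesterdayISO()}`;
console.log(url);
```

Note that `toISOString()` works in UTC; if your publication timezone differs significantly, adjust the offset before formatting.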

Step 2: Filter and Structure Game Data

Raw API responses contain 50+ fields per game. This phase extracts only the statistics needed for compelling narratives.

Add Function Node for Data Filtering

  1. Create a Function node named "Extract Key Stats"
  2. Configure to process all items from the API response
  3. Map essential fields: teams, final scores, key player stats, game context

Function code:

const games = [];

for (const item of $input.all()) {
  const gameData = item.json;
  
  // Only process completed games
  if (gameData.Status !== 'Final') continue;
  
  games.push({
    league: 'NFL',
    homeTeam: gameData.HomeTeam,
    awayTeam: gameData.AwayTeam,
    homeScore: gameData.HomeScore,
    awayScore: gameData.AwayScore,
    date: gameData.Date,
    // Guard against missing fields: not every endpoint includes player stats
    topPerformers: (gameData.PlayerStats || []).slice(0, 3),
    teamStats: {
      homeTotalYards: gameData.HomeTeamStats?.TotalYards,
      awayTotalYards: gameData.AwayTeamStats?.TotalYards,
      homeTurnovers: gameData.HomeTeamStats?.Turnovers,
      awayTurnovers: gameData.AwayTeamStats?.Turnovers
    }
  });
}

// n8n expects returned items wrapped in { json: ... }
return games.slice(0, 10).map(g => ({ json: g })); // Limit to 10 games daily

Why this approach:

Filtering data before sending to OpenAI reduces token costs by 60-70%. Extracting only completed games prevents analysis of postponed or cancelled matchups. Limiting to 10 games ensures consistent daily output volume and prevents API rate limits.

Variables to customize:

  • slice(0, 10): Adjust for more/fewer daily games
  • Status !== 'Final': Modify to include live games if needed
  • topPerformers.slice(0, 3): Change number of featured players
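To sanity-check the filtering logic outside n8n, the same loop can be run standalone against mocked items. The sample input below imitates SportsData.io-style field names and is purely illustrative:

```javascript
// Standalone sketch of the "Extract Key Stats" filter, runnable outside
// n8n. Input items mimic SportsData.io field names used in this step.
const items = [
  { json: { Status: 'Final', HomeTeam: 'KC', AwayTeam: 'BUF',
            HomeScore: 27, AwayScore: 24, Date: '2024-01-21',
            PlayerStats: [{ Name: 'P. Mahomes' }, { Name: 'J. Allen' }],
            HomeTeamStats: { TotalYards: 358, Turnovers: 1 },
            AwayTeamStats: { TotalYards: 387, Turnovers: 2 } } },
  { json: { Status: 'Scheduled', HomeTeam: 'SF', AwayTeam: 'DET' } },
];

const games = [];
for (const item of items) {
  const g = item.json;
  if (g.Status !== 'Final') continue;            // completed games only
  games.push({
    league: 'NFL',
    homeTeam: g.HomeTeam, awayTeam: g.AwayTeam,
    homeScore: g.HomeScore, awayScore: g.AwayScore,
    date: g.Date,
    topPerformers: (g.PlayerStats || []).slice(0, 3),
    teamStats: {
      homeTotalYards: g.HomeTeamStats?.TotalYards,
      awayTotalYards: g.AwayTeamStats?.TotalYards,
      homeTurnovers: g.HomeTeamStats?.Turnovers,
      awayTurnovers: g.AwayTeamStats?.Turnovers,
    },
  });
}
const output = games.slice(0, 10);               // cap daily volume
console.log(output.length); // 1 (the scheduled game is skipped)
```

The scheduled game is filtered out, leaving a single structured record ready for the OpenAI step.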

Step 3: Generate AI-Powered Narratives

This phase transforms structured statistics into compelling bullet points using OpenAI's language models.

Configure OpenAI Node

  1. Add an OpenAI node set to "Message a Model"
  2. Select model: GPT-4 (not GPT-3.5—narrative quality matters)
  3. Set temperature to 0.7 for creative but consistent output
  4. Configure max tokens: 500 per game

Prompt engineering:

{
  "model": "gpt-4",
  "temperature": 0.7,
  "max_tokens": 500,
  "messages": [
    {
      "role": "system",
      "content": "You are a sports analyst who creates compelling narratives from statistics. Focus on trends, momentum, and storylines—not just raw numbers. Each bullet point should tell a story."
    },
    {
      "role": "user",
      "content": "=Analyze this game and create exactly 4-5 narrative bullet points:

Game: {{ $json.awayTeam }} @ {{ $json.homeTeam }}
Final Score: {{ $json.awayScore }} - {{ $json.homeScore }}

Key Stats:
- Total Yards: {{ $json.teamStats.awayTotalYards }} (away) vs {{ $json.teamStats.homeTotalYards }} (home)
- Turnovers: {{ $json.teamStats.awayTurnovers }} (away) vs {{ $json.teamStats.homeTurnovers }} (home)
- Top Performers: {{ $json.topPerformers }}

Provide trend-based analysis, not data dumps. Focus on what this means for team momentum, player streaks, and upcoming matchups."
    }
  ]
}

Why this works:

Setting temperature to 0.7 balances creativity with consistency—0.5 produces repetitive phrasing, 0.9 generates unreliable facts. The system prompt establishes narrative voice across all games. Limiting to 4-5 bullet points prevents overwhelming readers while providing substantive analysis. Explicitly requesting "trend-based analysis" in the user prompt prevents GPT-4 from simply restating statistics.
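Assembling the request body per game can be sketched as a small helper. This builds the same payload shape shown above from the structured game object produced in Step 2; `buildRequestBody` is an illustrative name and no API call is made here:

```javascript
// Sketch: assemble the chat-completion request body for one game.
// Field names match the structured game object from Step 2.
function buildRequestBody(game) {
  return {
    model: 'gpt-4',
    temperature: 0.7,
    max_tokens: 500,
    messages: [
      { role: 'system',
        content: 'You are a sports analyst who creates compelling narratives from statistics. Focus on trends, momentum, and storylines, not just raw numbers.' },
      { role: 'user',
        content: `Analyze this game and create exactly 4-5 narrative bullet points:\n\n` +
                 `Game: ${game.awayTeam} @ ${game.homeTeam}\n` +
                 `Final Score: ${game.awayScore} - ${game.homeScore}\n` +
                 `Total Yards: ${game.teamStats.awayTotalYards} (away) vs ${game.teamStats.homeTotalYards} (home)\n` +
                 `Provide trend-based analysis, not data dumps.` },
    ],
  };
}

const body = buildRequestBody({
  awayTeam: 'BUF', homeTeam: 'KC', awayScore: 24, homeScore: 27,
  teamStats: { awayTotalYards: 387, homeTotalYards: 358 },
});
console.log(body.messages[1].content.includes('BUF @ KC')); // true
```

Keeping prompt assembly in one place makes it easy to A/B test system prompts without touching the rest of the workflow.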

Step 4: Format and Store Output

This phase structures AI-generated narratives for easy distribution to content channels.

Add Set Node for Output Formatting

  1. Create a Set node named "Format for Distribution"
  2. Map fields: league, matchup, date, bullet points, metadata
  3. Add timestamp for tracking

Set node configuration:

{
  "values": {
    "league": "={{ $json.league }}",
    "matchup": "={{ $json.awayTeam }} @ {{ $json.homeTeam }}",
    "final_score": "={{ $json.awayScore }} - {{ $json.homeScore }}",
    "analysis_date": "={{ $now.toFormat('yyyy-MM-dd') }}",
    "narrative_bullets": "={{ $json.openai_response.choices[0].message.content }}",
    "word_count": "={{ $json.openai_response.choices[0].message.content.split(' ').length }}",
    "generated_at": "={{ $now.toISO() }}"
  }
}

Connect to Google Sheets

  1. Add Google Sheets node set to "Append" mode
  2. Select your output spreadsheet
  3. Map columns to formatted data fields
  4. Enable "Always Output Data" to track failures

Why this structure:

Separating formatting from storage allows you to change output destinations (Airtable, Notion, CMS) without rebuilding logic. Including word count helps monitor AI output consistency—sudden changes indicate prompt issues. Timestamps enable performance tracking: how long does each game take to process?
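The word-count consistency check mentioned above can be made explicit. The thresholds here are illustrative assumptions, not values from the workflow:

```javascript
// Sketch: word-count monitoring for AI output consistency. A sudden
// drop below ~60 words or spike above ~250 usually signals a prompt
// or truncation issue (thresholds are illustrative).
function wordCount(text) {
  return text.trim().split(/\s+/).filter(Boolean).length;
}

function flagInconsistent(narrative, min = 60, max = 250) {
  const n = wordCount(narrative);
  return n < min || n > max;
}

console.log(wordCount('- KC rallied late\n- Mahomes led a 12-play drive')); // 10
```

Logging this value alongside each row makes drift visible in the spreadsheet itself.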

Workflow Architecture Overview

This workflow consists of 8 core nodes organized into 4 main sections:

  1. Data ingestion (Nodes 1-3): Schedule trigger fires daily, HTTP Request fetches yesterday's games, Function node filters to completed games only
  2. AI processing (Nodes 4-5): OpenAI node generates narratives, Set node structures output
  3. Storage (Node 6): Google Sheets appends formatted analysis
  4. Notifications (Nodes 7-8): Slack alert confirms completion, error handler catches failures

Execution flow:

  • Trigger: Daily at 6:00 AM (configurable)
  • Average run time: 45-90 seconds for 10 games
  • Key dependencies: Sports API must return data by 5:00 AM, OpenAI API must be responsive

Critical nodes:

  • Function (Extract Key Stats): Reduces API response from 50+ fields to 8 essential data points
  • OpenAI (Generate Narratives): Processes structured stats into 4-5 bullet points per game
  • Google Sheets (Store Output): Creates permanent record for content distribution

The complete n8n workflow JSON template is available at the bottom of this article.

Key Configuration Details

OpenAI Integration

Required fields:

  • API Key: Your OpenAI platform API key (not ChatGPT Plus subscription)
  • Model: gpt-4 (GPT-3.5-turbo produces lower-quality narratives)
  • Temperature: 0.7 (balance between creativity and consistency)
  • Max Tokens: 500 per game (4-5 bullets = 300-400 tokens typically)

Common issues:

  • Using GPT-3.5 → Results in data dumps instead of narratives
  • Temperature above 0.8 → Generates unreliable statistics or fabricated trends
  • Insufficient max tokens → Cuts off mid-sentence, incomplete bullet points

Sports API Configuration

Critical settings:

  • Date format: yyyy-MM-dd (ISO 8601 standard)
  • Status filter: Final only (excludes postponed/cancelled games)
  • Rate limits: Most APIs allow 10 requests/minute on free tiers

Why this approach:

Using $now.minus({days: 1}) ensures you always pull completed games, not scheduled matchups. Sports APIs update final statistics 30-60 minutes after game completion, so a 6:00 AM trigger guarantees data availability. Filtering by "Final" status prevents analyzing incomplete data from suspended games.

Testing & Validation

Test with sample data:

  1. Manually trigger the workflow with a known game date
  2. Review OpenAI output: Are bullet points narratives or data dumps?
  3. Check Google Sheets: Do all fields populate correctly?
  4. Verify compliance: Are you using official API endpoints?

Common troubleshooting:

| Issue | Cause | Solution |
| --- | --- | --- |
| No games returned | API date format mismatch | Verify yyyy-MM-dd format in HTTP Request |
| Generic bullet points | Temperature too low | Increase to 0.7-0.8 |
| Fabricated statistics | Temperature too high | Decrease to 0.6-0.7 |
| Incomplete narratives | Insufficient max tokens | Increase to 600-700 |

Validation checklist:

  • All 10 games generate exactly 4-5 bullet points
  • Narratives mention trends, not just final scores
  • No duplicate games in output
  • Timestamps show execution completed in under 2 minutes
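The "exactly 4-5 bullet points" check from the list above can be automated with a line-based counter, a minimal sketch:

```javascript
// Sketch: validate that a narrative contains 4-5 bullet points by
// counting lines that start with a bullet marker (-, •, *, or "1.").
function countBullets(narrative) {
  return narrative
    .split('\n')
    .filter(line => /^([-•*]|\d+\.)\s+/.test(line.trim())).length;
}

function passesChecklist(narrative) {
  const n = countBullets(narrative);
  return n >= 4 && n <= 5;
}

const sample = '- A\n- B\n- C\n- D\n- E';
console.log(passesChecklist(sample)); // true
```

Running this check in a Function node right after the OpenAI step lets you retry a game automatically instead of discovering a malformed narrative in the spreadsheet.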

Deployment Considerations

Production Deployment Checklist

| Area | Requirement | Why It Matters |
| --- | --- | --- |
| Error Handling | Retry logic with 3 attempts, 30-second delays | Sports APIs experience high traffic during playoffs; prevents data loss |
| Monitoring | Daily Slack notification with game count | Detects API failures within an hour instead of at publication time |
| Cost Management | Track OpenAI token usage weekly | GPT-4 costs $0.03/1K tokens; 10 games daily runs roughly $27/month |
| API Limits | Implement rate limiting (10 req/min) | Prevents account suspension during high-volume periods |
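Client-side rate limiting for the 10 requests/minute cap can be sketched as a simple call spacer; the limit value is the assumed free-tier cap described earlier:

```javascript
// Minimal sketch of client-side rate limiting (10 requests/minute).
// Spaces calls at least intervalMs apart rather than bursting.
function createRateLimiter(maxPerMinute = 10) {
  const intervalMs = 60000 / maxPerMinute;
  let nextAllowed = 0;
  return function waitTimeMs(now = Date.now()) {
    const wait = Math.max(0, nextAllowed - now);
    nextAllowed = Math.max(now, nextAllowed) + intervalMs;
    return wait; // caller sleeps this long before sending the request
  };
}

const limiter = createRateLimiter(10);
console.log(limiter(0));    // 0    (first call goes immediately)
console.log(limiter(1000)); // 5000 (next slot opens at t=6000)
```

In n8n itself, the Loop Over Items (Split In Batches) node combined with a Wait node achieves the same spacing without custom code.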

Error handling strategy:

Add an Error Trigger node that catches failures and sends detailed alerts. Include the failed game data, error message, and timestamp. This prevents silent failures where some games process but others don't.
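For custom Code nodes, the retry strategy from the checklist can be sketched as a wrapper; standard nodes can use n8n's built-in "Retry On Fail" setting instead. The 30-second delay is configurable so tests can pass 0:

```javascript
// Sketch: retry wrapper with 3 attempts and a configurable delay
// (30s in production, 0 in tests). Wrap the HTTP call in `fn`.
async function withRetry(fn, attempts = 3, delayMs = 30000) {
  let lastError;
  for (let i = 1; i <= attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts) await new Promise(r => setTimeout(r, delayMs));
    }
  }
  // Rethrow so the Error Trigger receives the failure with full context
  throw lastError;
}
```

On the final failure the error propagates to the Error Trigger node, where you attach the game data and timestamp to the alert.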

Real-World Use Cases

Use Case 1: Sports Media Outlet

  • Industry: Digital sports journalism
  • Scale: 20-30 games daily across all leagues
  • Modifications needed: Add league-specific Function nodes, increase OpenAI max tokens to 700, implement parallel processing for faster execution

Use Case 2: Fantasy Sports Platform

  • Industry: Fantasy sports content
  • Scale: 10-15 NFL/NBA games daily
  • Modifications needed: Add player prop data, include betting trends, connect to WordPress API for auto-publishing

Use Case 3: Social Media Sports Account

  • Industry: Sports content creation
  • Scale: 5-8 marquee games daily
  • Modifications needed: Shorten bullet points to 280 characters, add image generation nodes, connect to Twitter API

Customizing This Workflow

Alternative Integrations

Instead of Google Sheets:

  • Airtable: Best for visual content calendars—requires 2 node changes (swap Google Sheets for Airtable node)
  • Notion: Better if you manage content in Notion—use Notion API node with database integration
  • WordPress: Use when auto-publishing—add WordPress node to create draft posts directly

Workflow Extensions

Add automated social media posting:

  • Connect Twitter/X API node after formatting
  • Truncate bullet points to 280 characters
  • Add game highlights images via Cloudinary
  • Nodes needed: +4 (Function for truncation, HTTP Request for images, Twitter node, conditional logic)
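The truncation step above can be sketched as a helper that cuts at a word boundary. The 280-character limit is Twitter/X's standard cap; character weighting for special symbols may differ slightly on the platform side:

```javascript
// Sketch: truncate a bullet to 280 characters at a word boundary,
// appending an ellipsis when text was actually cut.
function truncateTo280(text, limit = 280) {
  if (text.length <= limit) return text;
  const cut = text.slice(0, limit - 1);        // reserve 1 char for the ellipsis
  const lastSpace = cut.lastIndexOf(' ');
  return (lastSpace > 0 ? cut.slice(0, lastSpace) : cut) + '…';
}

console.log(truncateTo280('short bullet'));          // unchanged
console.log(truncateTo280('x'.repeat(300)).length);  // 280
```

Run this in a Function node between formatting and the Twitter node so the stored narrative stays full-length.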

Scale to handle more leagues:

  • Duplicate HTTP Request nodes for each league API
  • Add Merge node to combine all game data
  • Implement league-specific prompts in OpenAI node
  • Performance improvement: Parallel processing reduces execution time from 90s to 35s for 20 games
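The parallel-processing idea behind that speedup can be sketched with `Promise.all`: fire all league requests concurrently so total time is the slowest single request, not the sum. The stub fetchers below stand in for the real HTTP Request nodes:

```javascript
// Sketch: fetch all league endpoints concurrently and tag each game
// with its league. Stub fetchers replace real HTTP calls here.
async function fetchAllLeagues(fetchers) {
  const results = await Promise.all(
    fetchers.map(async ({ league, fetch }) => ({ league, games: await fetch() }))
  );
  return results.flatMap(r => r.games.map(g => ({ ...g, league: r.league })));
}

const stubFetchers = [
  { league: 'NFL', fetch: async () => [{ homeTeam: 'KC' }] },
  { league: 'NBA', fetch: async () => [{ homeTeam: 'BOS' }] },
];
fetchAllLeagues(stubFetchers).then(games => console.log(games.length)); // 2
```

In the n8n canvas, the equivalent is wiring each league's HTTP Request node off the same trigger in parallel, then combining them with a Merge node.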

Integration possibilities:

| Add This | To Get This | Complexity |
| --- | --- | --- |
| Telegram bot | Instant notifications to subscribers | Easy (3 nodes) |
| Betting odds API | Include spread/over-under analysis | Medium (5 nodes) |
| Historical data | Multi-season trend analysis | Advanced (12 nodes, database) |
| Video highlights | Auto-generate recap videos | Advanced (15 nodes, external services) |

Customization ideas:

  • Add sentiment analysis to detect upset victories
  • Include injury report data for context
  • Generate league-wide power rankings weekly
  • Create automated podcast scripts from bullet points

Get Started Today

Ready to automate your sports analysis workflow?

  1. Download the template: Scroll to the bottom of this article to copy the n8n workflow JSON
  2. Import to n8n: Go to Workflows → Import from URL or File, paste the JSON
  3. Configure your services: Add API credentials for your sports data provider and OpenAI
  4. Test with sample data: Run manually with yesterday's date to verify output quality
  5. Deploy to production: Set your schedule trigger and activate the workflow

Need help customizing this workflow for your specific sports leagues or content requirements? Schedule an intro call with Atherial to discuss enterprise implementations, multi-league scaling, or custom AI prompt engineering for your brand voice.

Complete n8n Workflow Template

Copy the JSON below and import it into your n8n instance via Workflows → Import from File.

{
  "name": "Daily Sports Stats to Narratives",
  "nodes": [
    {
      "id": "Daily Schedule Trigger",
      "name": "Daily Schedule Trigger",
      "type": "n8n-nodes-base.cron",
      "position": [
        50,
        300
      ],
      "parameters": {
        "triggerTimes": {
          "item": [
            {
              "hour": 8,
              "mode": "every",
              "minute": 0
            }
          ]
        }
      },
      "typeVersion": 1
    },
    {
      "id": "Fetch NFL Stats",
      "name": "Fetch NFL Stats",
      "type": "n8n-nodes-base.httpRequest",
      "position": [
        250,
        150
      ],
      "parameters": {
        "url": "https://site.api.espn.com/apis/site/v2/sports/football/nfl/scoreboard",
        "method": "GET",
        "authentication": "none"
      },
      "typeVersion": 4
    },
    {
      "id": "Fetch NBA Stats",
      "name": "Fetch NBA Stats",
      "type": "n8n-nodes-base.httpRequest",
      "position": [
        250,
        300
      ],
      "parameters": {
        "url": "https://site.api.espn.com/apis/site/v2/sports/basketball/nba/scoreboard",
        "method": "GET",
        "authentication": "none"
      },
      "typeVersion": 4
    },
    {
      "id": "Fetch MLB Stats",
      "name": "Fetch MLB Stats",
      "type": "n8n-nodes-base.httpRequest",
      "position": [
        250,
        450
      ],
      "parameters": {
        "url": "https://site.api.espn.com/apis/site/v2/sports/baseball/mlb/scoreboard",
        "method": "GET",
        "authentication": "none"
      },
      "typeVersion": 4
    },
    {
      "id": "Merge and Filter Games",
      "name": "Merge and Filter Games",
      "type": "n8n-nodes-base.code",
      "position": [
        500,
        300
      ],
      "parameters": {
        "jsCode": "// Merge all game results from multiple sports into single array\nconst games = [];\n\nif ($input.all()[0]?.json?.events) {\n  games.push(...$input.all()[0].json.events.map(e => ({ ...e, sport: 'NFL' })));\n}\n\nif ($input.all()[1]?.json?.events) {\n  games.push(...$input.all()[1].json.events.map(e => ({ ...e, sport: 'NBA' })));\n}\n\nif ($input.all()[2]?.json?.events) {\n  games.push(...$input.all()[2].json.events.map(e => ({ ...e, sport: 'MLB' })));\n}\n\n// Filter to only completed games from previous day\nconst yesterday = new Date();\nyesterday.setDate(yesterday.getDate() - 1);\nconst yesterdayStr = yesterday.toISOString().split('T')[0];\n\nconst completedGames = games.filter(game => {\n  const gameDate = new Date(game.date).toISOString().split('T')[0];\n  return gameDate === yesterdayStr && game.status?.type?.completed;\n});\n\nreturn completedGames.map((game, index) => ({\n  id: index,\n  game: game,\n  sport: game.sport\n}));"
      },
      "typeVersion": 2
    },
    {
      "id": "Process Games in Batches",
      "name": "Process Games in Batches",
      "type": "n8n-nodes-base.splitInBatches",
      "position": [
        700,
        300
      ],
      "parameters": {
        "batchSize": 1
      },
      "typeVersion": 3
    },
    {
      "id": "Filter TOS Compliant Games",
      "name": "Filter TOS Compliant Games",
      "type": "n8n-nodes-base.filter",
      "position": [
        900,
        300
      ],
      "parameters": {
        "conditions": {
          "options": [
            {
              "value1": "={{ $json.game }}",
              "condition": "objectExists"
            },
            {
              "value1": "={{ $json.game.status?.type?.description }}",
              "value2": "^(Final|Completed|Finished)",
              "condition": "regex"
            }
          ]
        }
      },
      "typeVersion": 2
    },
    {
      "id": "Generate AI Narratives",
      "name": "Generate AI Narratives",
      "type": "@n8n/n8n-nodes-langchain.openAi",
      "position": [
        1100,
        300
      ],
      "parameters": {
        "modelId": {
          "__rl": true,
          "mode": "list",
          "value": "gpt-4o-mini"
        },
        "messages": {
          "values": [
            {
              "role": "system",
              "content": "You are a sports journalist specializing in creating engaging narrative summaries of game results. Transform raw game statistics into compelling bullet-point narratives that highlight key moments, standout performances, and pivotal statistics. Keep narratives engaging, concise (5-7 bullets), and suitable for sports content platforms."
            },
            {
              "role": "user",
              "content": "={{ `Generate a narrative summary for this ${$json.sport} game:\n\nHome Team: ${$json.game.competitions[0]?.competitors[0]?.team?.displayName || 'N/A'}\nHome Score: ${$json.game.competitions[0]?.competitors[0]?.score || 'N/A'}\nAway Team: ${$json.game.competitions[0]?.competitors[1]?.team?.displayName || 'N/A'}\nAway Score: ${$json.game.competitions[0]?.competitors[1]?.score || 'N/A'}\nGame Date: ${new Date($json.game.date).toLocaleDateString()}\nStatus: ${$json.game.status?.type?.description || 'N/A'}\n\nProvide 5-7 engaging bullet points that capture the essence of this game.` }}"
            }
          ]
        },
        "resource": "text",
        "operation": "response"
      },
      "typeVersion": 2
    },
    {
      "id": "Format for Storage",
      "name": "Format for Storage",
      "type": "n8n-nodes-base.code",
      "position": [
        1300,
        300
      ],
      "parameters": {
        "jsCode": "// Extract narrative text and format for storage\nconst narrative = $json.response?.text || '';\nconst game = $json.game;\n\nreturn {\n  date: new Date(game.date).toLocaleDateString(),\n  sport: $json.sport,\n  homeTeam: game.competitions?.[0]?.competitors?.[0]?.team?.displayName || 'N/A',\n  homeScore: game.competitions?.[0]?.competitors?.[0]?.score || 0,\n  awayTeam: game.competitions?.[0]?.competitors?.[1]?.team?.displayName || 'N/A',\n  awayScore: game.competitions?.[0]?.competitors?.[1]?.score || 0,\n  narrative: narrative,\n  gameUrl: game.links?.[0]?.href || '',\n  timestamp: new Date().toISOString()\n};"
      },
      "typeVersion": 2
    },
    {
      "id": "Save to Google Sheets",
      "name": "Save to Google Sheets",
      "type": "n8n-nodes-base.googleSheets",
      "position": [
        1500,
        300
      ],
      "parameters": {
        "columns": "date,sport,homeTeam,homeScore,awayTeam,awayScore,narrative,gameUrl,timestamp",
        "resource": "sheet",
        "operation": "appendOrUpdate",
        "documentId": "{{ $secrets.GOOGLE_SHEETS_ID }}",
        "sheetTitle": "Sports Narratives",
        "authentication": "oauth2",
        "matchingColumns": "date,sport,homeTeam,awayTeam",
        "dataLocationOnSheet": "A1"
      },
      "typeVersion": 5
    },
    {
      "id": "Loop Completion Handler",
      "name": "Loop Completion Handler",
      "type": "n8n-nodes-base.splitInBatches",
      "position": [
        1500,
        450
      ],
      "parameters": {
        "batchSize": 10
      },
      "typeVersion": 3
    }
  ],
  "connections": {
    "Fetch MLB Stats": {
      "main": [
        [
          {
            "node": "Merge and Filter Games",
            "type": "main",
            "index": 2
          }
        ]
      ]
    },
    "Fetch NBA Stats": {
      "main": [
        [
          {
            "node": "Merge and Filter Games",
            "type": "main",
            "index": 1
          }
        ]
      ]
    },
    "Fetch NFL Stats": {
      "main": [
        [
          {
            "node": "Merge and Filter Games",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Format for Storage": {
      "main": [
        [
          {
            "node": "Save to Google Sheets",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Save to Google Sheets": {
      "main": [
        [
          {
            "node": "Loop Completion Handler",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Daily Schedule Trigger": {
      "main": [
        [
          {
            "node": "Fetch NFL Stats",
            "type": "main",
            "index": 0
          },
          {
            "node": "Fetch NBA Stats",
            "type": "main",
            "index": 0
          },
          {
            "node": "Fetch MLB Stats",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Generate AI Narratives": {
      "main": [
        [
          {
            "node": "Format for Storage",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Merge and Filter Games": {
      "main": [
        [
          {
            "node": "Process Games in Batches",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Process Games in Batches": {
      "main": [
        [
          {
            "node": "Filter TOS Compliant Games",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Filter TOS Compliant Games": {
      "main": [
        [
          {
            "node": "Generate AI Narratives",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  }
}