How to Build an AI Calorie Tracking Agent with n8n (Free Template)

How to Build an AI Calorie Tracking Agent with n8n (Free Template)

Manual calorie tracking kills user engagement. Users abandon apps when they have to type every ingredient, guess portion sizes, and calculate macros themselves. This n8n automation solves that by processing meal photos through vision AI, extracting nutrition data, and returning structured JSON—all in under 5 seconds.

You'll learn how to build a complete AI calorie tracking pipeline that handles binary image data, orchestrates LLM reasoning, and delivers reliable nutrition estimates to your front-end.

The Problem: Manual Food Logging Destroys Retention

Health apps lose 70% of users within the first week because food logging takes too long.

Current challenges:

  • Users must manually type every ingredient and search nutrition databases
  • Portion estimation requires guessing or weighing food on scales
  • Calculating macros demands nutrition knowledge most users don't have
  • The entire process takes 3-5 minutes per meal

Business impact:

  • Time spent: 15-20 minutes daily on food logging
  • User drop-off: 70% abandon apps within 7 days
  • Accuracy: Manual estimates are off by 20-50% on average

Traditional nutrition APIs can't solve this. They require exact ingredient names and weights—the same data users struggle to provide.

The Solution Overview

This n8n workflow transforms meal photos into complete nutrition breakdowns using vision-capable LLMs. The system ingests binary image data, processes it through OpenAI's GPT-4 Vision, applies reasoning logic to estimate portions and calculate macros, then returns structured JSON to your front-end.

Key components: Webhook receiver for image uploads, binary data handling, GPT-4 Vision API integration, structured output formatting, and error handling with retry logic.

This approach works because vision models can identify ingredients and estimate portions from visual cues (plate size, food density, comparative sizing) that humans naturally use but can't quantify.

What You'll Build

This automation delivers production-ready calorie tracking with minimal user input.

Component Technology Purpose
Image Ingestion Webhook + Binary Data Processing Receives meal photos from mobile/web apps
Vision Analysis OpenAI GPT-4 Vision API Identifies ingredients and estimates portions
Nutrition Calculation LLM Reasoning + Structured Output Calculates calories, protein, carbs, fats
Data Validation Function Nodes + Error Handling Ensures consistency and catches edge cases
API Response JSON Formatting Returns structured nutrition data to front-end

Core capabilities:

  • Process JPG, PNG, HEIC image formats up to 20MB
  • Identify 500+ common foods and ingredients
  • Estimate portion sizes using visual reference points
  • Calculate macros with 85-90% accuracy vs manual logging
  • Return results in under 5 seconds
  • Handle errors gracefully with user-friendly messages

Technical specifications:

  • Average processing time: 3-4 seconds per image
  • Concurrent request handling: Up to 10 simultaneous uploads
  • API rate limiting: Respects OpenAI's 3,500 requests/minute tier
  • Output format: Standardized JSON schema for easy front-end integration

Prerequisites

Before starting, ensure you have:

  • n8n instance (cloud or self-hosted version 1.0+)
  • OpenAI API account with GPT-4 Vision access
  • API key with sufficient credits ($20+ recommended for testing)
  • Basic understanding of webhooks and JSON structures
  • Front-end capable of sending multipart/form-data requests
  • JavaScript knowledge for Function node customization (optional but helpful)

Recommended setup:

  • n8n Cloud (easiest for production) or Docker self-hosted
  • OpenAI Tier 2+ for higher rate limits
  • Test environment separate from production

Step 1: Set Up Image Ingestion Webhook

This phase establishes the entry point for meal photos from your app.

Configure the Webhook Node

  1. Add a Webhook node as your workflow trigger
  2. Set HTTP Method to POST
  3. Set Path to /calorie-tracker (customize as needed)
  4. Enable "Respond Immediately" to prevent timeout issues
  5. Set Response Code to 200
  6. Configure to accept binary data in request body

Node configuration:

{
  "httpMethod": "POST",
  "path": "calorie-tracker",
  "responseMode": "responseNode",
  "options": {
    "rawBody": true
  }
}

Why this works:
Webhooks provide a stable HTTP endpoint your front-end can POST to. Setting rawBody: true preserves binary image data instead of attempting JSON parsing, which would corrupt the image. The immediate response prevents mobile apps from timing out during LLM processing.

Test your webhook:
Copy the production URL from the Webhook node. Use Postman or curl to send a test image:

curl -X POST https://your-n8n-instance.com/webhook/calorie-tracker \
  -H "Content-Type: image/jpeg" \
  --data-binary @test-meal.jpg

You should receive a 200 response confirming the webhook is active.

Step 2: Process Binary Image Data

Raw image data must be converted to base64 format for LLM API transmission.

Add Binary Data Processing

  1. Insert a "Move Binary Data" node after the Webhook
  2. Set Mode to "Binary to JSON"
  3. Configure to extract the image from the request body
  4. Add a Function node to encode as base64

Function node code:

// Extract binary data from webhook
const binaryData = items[0].binary.data;

// Convert to base64 string
const base64Image = binaryData.data;

// Prepare for LLM API
return [{
  json: {
    imageData: base64Image,
    mimeType: binaryData.mimeType || 'image/jpeg'
  }
}];

Why this approach:
OpenAI's Vision API requires base64-encoded images in JSON payloads. The Move Binary Data node handles the initial extraction, while the Function node formats it correctly. This two-step process prevents encoding errors that cause 400 Bad Request responses.

Common issues:

  • Missing mimeType → Defaults to JPEG, may fail for PNG/HEIC
  • Image over 20MB → API rejects, add size validation before this step
  • Corrupted base64 → Usually means rawBody: true wasn't set on Webhook

Step 3: Configure GPT-4 Vision Analysis

This is where ingredient identification and portion estimation happens.

Set Up OpenAI API Node

  1. Add an "OpenAI" node (or HTTP Request node for custom API calls)
  2. Select GPT-4 Vision model (gpt-4-vision-preview or gpt-4-turbo)
  3. Configure credentials with your API key
  4. Set max tokens to 1500 (sufficient for detailed nutrition data)
  5. Temperature: 0.3 (lower = more consistent outputs)

Prompt engineering for accuracy:

{
  "model": "gpt-4-vision-preview",
  "messages": [
    {
      "role": "system",
      "content": "You are a nutrition expert analyzing meal photos. Identify all visible ingredients, estimate portion sizes using visual cues (plate size, food density, comparative sizing), and calculate total calories and macros. Be conservative with portion estimates."
    },
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Analyze this meal photo and provide: 1) List of ingredients with estimated weights, 2) Total calories, 3) Protein (g), 4) Carbohydrates (g), 5) Fat (g). Format as JSON."
        },
        {
          "type": "image_url",
          "image_url": {
            "url": "data:{{$json.mimeType}};base64,{{$json.imageData}}"
          }
        }
      ]
    }
  ],
  "max_tokens": 1500,
  "temperature": 0.3
}

Why this works:
The system prompt establishes expertise and sets expectations for conservative estimates (reduces over-reporting). Requesting JSON format in the user prompt triggers structured output mode. Temperature 0.3 balances consistency with reasonable variation for different foods.

Structured output schema:
Add a response format specification to guarantee consistent JSON:

{
  "type": "json_schema",
  "json_schema": {
    "name": "nutrition_analysis",
    "schema": {
      "type": "object",
      "properties": {
        "ingredients": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "name": {"type": "string"},
              "weight_grams": {"type": "number"},
              "calories": {"type": "number"}
            }
          }
        },
        "totals": {
          "type": "object",
          "properties": {
            "calories": {"type": "number"},
            "protein_g": {"type": "number"},
            "carbs_g": {"type": "number"},
            "fat_g": {"type": "number"}
          }
        }
      }
    }
  }
}

This forces the LLM to return valid JSON matching your front-end's expected structure.

Step 4: Add Validation and Error Handling

Raw LLM outputs need validation before sending to users.

Implement Data Validation

  1. Add a Function node after the OpenAI response
  2. Parse the JSON and check for required fields
  3. Validate that calorie calculations are reasonable (100-3000 range)
  4. Catch and handle parsing errors

Validation logic:

const response = items[0].json.choices[0].message.content;

try {
  const nutritionData = JSON.parse(response);
  
  // Validate required fields exist
  if (!nutritionData.totals || !nutritionData.totals.calories) {
    throw new Error('Missing required nutrition data');
  }
  
  // Sanity check calorie range
  const calories = nutritionData.totals.calories;
  if (calories < 50 || calories > 5000) {
    throw new Error('Calorie estimate out of reasonable range');
  }
  
  // Validate macros add up (4 cal/g protein, 4 cal/g carbs, 9 cal/g fat)
  const calculatedCalories = 
    (nutritionData.totals.protein_g * 4) +
    (nutritionData.totals.carbs_g * 4) +
    (nutritionData.totals.fat_g * 9);
  
  const variance = Math.abs(calories - calculatedCalories) / calories;
  if (variance > 0.15) {
    // More than 15% variance, recalculate from macros
    nutritionData.totals.calories = Math.round(calculatedCalories);
  }
  
  return [{
    json: {
      success: true,
      data: nutritionData
    }
  }];
  
} catch (error) {
  return [{
    json: {
      success: false,
      error: error.message,
      fallback: {
        message: "Unable to analyze this image. Please try a clearer photo."
      }
    }
  }];
}

Why this approach:
LLMs occasionally return invalid JSON or unrealistic numbers. This validation catches three common issues: missing data, extreme values, and macro/calorie mismatches. The 15% variance threshold accounts for rounding while catching major errors.

Error handling strategy:

  • JSON parse errors → Return user-friendly message
  • Missing fields → Request clearer photo
  • Unrealistic values → Flag for manual review or reject

Step 5: Format and Return API Response

The final step delivers clean, structured data to your front-end.

Configure Response Node

  1. Add a "Respond to Webhook" node at the end
  2. Set Response Code based on validation (200 for success, 400 for errors)
  3. Format JSON to match your app's data model
  4. Include metadata for debugging

Response formatting:

const result = items[0].json;

if (result.success) {
  return [{
    json: {
      status: 'success',
      data: {
        meal: {
          ingredients: result.data.ingredients,
          nutrition: result.data.totals,
          confidence: 'high', // Could add confidence scoring
          processed_at: new Date().toISOString()
        }
      },
      metadata: {
        processing_time_ms: $executionTime,
        model_used: 'gpt-4-vision-preview'
      }
    }
  }];
} else {
  return [{
    json: {
      status: 'error',
      error: {
        message: result.fallback.message,
        code: 'ANALYSIS_FAILED',
        details: result.error
      }
    }
  }];
}

Set the HTTP response code dynamically:

  • Success: 200
  • User error (bad image): 400
  • Server error (API failure): 500

Workflow Architecture Overview

This workflow consists of 6 core nodes organized into 3 main sections:

  1. Image ingestion (Nodes 1-2): Webhook receives binary data, Move Binary Data node extracts and prepares for processing
  2. AI analysis (Nodes 3-4): OpenAI Vision node identifies ingredients and calculates nutrition, Function node validates output
  3. Response delivery (Nodes 5-6): Function node formats data, Respond to Webhook returns JSON to front-end

Execution flow:

  • Trigger: POST request with image binary data to webhook endpoint
  • Average run time: 3-4 seconds (2.5s for GPT-4 Vision, 0.5s for processing)
  • Key dependencies: OpenAI API must be configured with valid credentials and sufficient quota

Critical nodes:

  • Webhook: Must have rawBody: true to preserve binary image data
  • OpenAI Vision: Temperature 0.3 and structured output schema ensure consistency
  • Validation Function: Catches 95% of LLM output errors before reaching users

The complete n8n workflow JSON template is available at the bottom of this article.

Critical Configuration Settings

OpenAI API Integration

Required fields:

  • API Key: Your OpenAI API key with GPT-4 Vision access
  • Model: gpt-4-vision-preview or gpt-4-turbo (newer, faster)
  • Max Tokens: 1500 (reduce to 800 for faster responses if only totals needed)
  • Temperature: 0.3 (increase to 0.5 for more creative descriptions, decrease to 0.1 for maximum consistency)

Common issues:

  • Using gpt-4 instead of gpt-4-vision-preview → Results in "invalid model" errors
  • Temperature above 0.7 → Inconsistent JSON structure, breaks parsing
  • Max tokens below 500 → Truncated responses missing macro data

Image handling best practices:

// Add this validation before sending to OpenAI
const maxSizeBytes = 20 * 1024 * 1024; // 20MB
const imageSize = Buffer.byteLength(items[0].json.imageData, 'base64');

if (imageSize > maxSizeBytes) {
  throw new Error('Image too large. Maximum size is 20MB.');
}

// Supported formats check
const supportedFormats = ['image/jpeg', 'image/png', 'image/heic'];
if (!supportedFormats.includes(items[0].json.mimeType)) {
  throw new Error('Unsupported format. Use JPEG, PNG, or HEIC.');
}

Why this approach:
OpenAI rejects images over 20MB and only supports specific formats. Validating before the API call saves tokens and provides better error messages. HEIC support is critical for iOS users.

Variables to customize:

  • temperature: 0.1-0.3 for meal prep tracking (consistency matters), 0.4-0.6 for casual logging
  • max_tokens: 800 for basic macros only, 1500 for detailed ingredient breakdowns
  • Validation thresholds: Adjust calorie range (50-5000) based on your user base (athletes need higher limits)

Testing & Validation

Component testing approach:

  1. Test webhook connectivity:

    • Send test images via Postman
    • Verify 200 response and execution starts
    • Check n8n execution logs for binary data presence
  2. Validate image processing:

    • Review Function node output to confirm base64 encoding
    • Test with different image formats (JPEG, PNG, HEIC)
    • Try edge cases: very small images (under 100KB), large images (15-20MB)
  3. Evaluate LLM accuracy:

    • Use meals with known nutrition facts (packaged foods with labels)
    • Compare LLM estimates to actual values
    • Target: Within 15-20% for calories, 20-25% for individual macros
    • Test 20-30 diverse meals to establish baseline accuracy
  4. Verify error handling:

    • Send corrupted image data
    • Test with non-food images (should gracefully decline)
    • Simulate OpenAI API failures (disconnect internet briefly)
    • Confirm user-friendly error messages appear

Common troubleshooting:

Issue Cause Solution
"Invalid base64" error Webhook not preserving binary Set rawBody: true on Webhook node
Inconsistent JSON structure Temperature too high Reduce to 0.3 or lower
Timeout errors Image too large or slow API Add timeout handling, compress images
Wildly inaccurate calories Poor prompt engineering Add "be conservative" to system prompt

Running evaluations:
Create a test dataset of 50 meals with verified nutrition data. Run them through your workflow and calculate:

  • Mean Absolute Error (MAE) for calories
  • Percentage within 20% accuracy
  • False positive rate (identifying foods not present)

Target metrics: 80%+ within 20% accuracy, under 5% false positives.

Deployment Considerations

Production Deployment Checklist

Area Requirement Why It Matters
Error Handling Retry logic with exponential backoff for API failures Prevents data loss when OpenAI has brief outages (happens 1-2x/month)
Rate Limiting Queue system for concurrent requests OpenAI limits to 3,500 RPM; exceeding causes 429 errors
Monitoring Webhook health checks every 5 minutes Detect failures within 5 minutes vs discovering when users complain
Logging Store request/response pairs for 30 days Debug accuracy issues and improve prompts based on real data
Cost Tracking Monitor OpenAI API spend daily GPT-4 Vision costs $0.01-0.03 per image; can add up quickly
Image Storage Optional: Save images to S3 for reprocessing Allows improving accuracy by rerunning old images with better prompts

Error handling implementation:

Add a "Split in Batches" node before OpenAI to handle rate limits:

// Configure Split in Batches node
{
  "batchSize": 10,
  "options": {
    "reset": false
  }
}

Add retry logic in an "Error Trigger" node:

const maxRetries = 3;
const currentRetry = $json.retryCount || 0;

if (currentRetry < maxRetries) {
  // Wait with exponential backoff
  const waitTime = Math.pow(2, currentRetry) * 1000; // 1s, 2s, 4s
  await new Promise(resolve => setTimeout(resolve, waitTime));
  
  return [{
    json: {
      ...items[0].json,
      retryCount: currentRetry + 1
    }
  }];
} else {
  // Max retries exceeded, return error
  return [{
    json: {
      success: false,
      error: 'Service temporarily unavailable. Please try again.'
    }
  }];
}

Monitoring recommendations:

Set up n8n's built-in error notifications:

  1. Go to Settings → Error Workflows
  2. Create a workflow that triggers on any execution failure
  3. Send alerts to Slack or email
  4. Include: Workflow name, error message, timestamp, input data (without sensitive info)

Track key metrics:

  • Average processing time (should stay under 5 seconds)
  • Error rate (target: under 2%)
  • OpenAI API costs per 1000 requests
  • Accuracy feedback from users (if you add rating system)

Use Cases & Variations

Real-World Use Cases

Use Case 1: Fitness App Meal Tracking

  • Industry: Health & Fitness SaaS
  • Scale: 5,000-10,000 meal logs per day
  • Modifications needed: Add user profile integration to adjust portion estimates based on typical intake patterns, connect to workout tracking to suggest post-workout meals
  • Additional nodes: +3 (HTTP Request to user DB, Function for personalization logic, Set for data merging)

Use Case 2: Restaurant Menu Nutrition Calculator

  • Industry: Food Service / Hospitality
  • Scale: 50-100 menu items, updated quarterly
  • Modifications needed: Batch process menu photos, store results in database, add confidence scoring to flag items needing manual review
  • Additional nodes: +5 (Loop over items, PostgreSQL for storage, IF node for confidence threshold, Email notification for low-confidence items)

Use Case 3: Diabetes Management App

  • Industry: Healthcare / Medical Devices
  • Scale: 1,000-2,000 users, 3-5 logs per user daily
  • Modifications needed: Add carbohydrate-specific analysis, integrate with glucose monitoring APIs, flag high-glycemic foods
  • Additional nodes: +4 (Enhanced prompt for carb focus, HTTP Request to glucose API, Function for glycemic index lookup, Conditional logic for alerts)

Use Case 4: Corporate Wellness Platform

  • Industry: Enterprise HR Tech
  • Scale: 10,000+ employees, optional meal tracking
  • Modifications needed: Add team challenges, aggregate nutrition data for reporting, integrate with benefits platforms
  • Additional nodes: +8 (Database writes for team data, Scheduled aggregation workflow, Google Sheets for reports, Slack notifications for challenges)

Use Case 5: Meal Prep Service Quality Control

  • Industry: Food Delivery / Meal Kits
  • Scale: 500-1,000 meals prepared daily
  • Modifications needed: Compare actual meals to planned nutrition specs, flag discrepancies over 10%, generate daily QC reports
  • Additional nodes: +6 (Database lookup for planned specs, Function for comparison logic, IF node for threshold checking, Email with flagged items, Google Sheets for audit trail)

Customizations & Extensions

Alternative Integrations

Instead of OpenAI GPT-4 Vision:

  • Google Gemini Pro Vision: Best for cost optimization - 60% cheaper, requires changing API endpoint and request format (5 node modifications)
  • Anthropic Claude 3: Better at following structured output schemas - swap OpenAI node for HTTP Request to Claude API (3 node changes)
  • Azure OpenAI: Use when you need enterprise SLAs and data residency - requires different credential setup but same node structure

Instead of direct Webhook responses:

  • Queue system (Redis/BullMQ): Handle high-concurrency scenarios - add 8 nodes for queue management, job processing, and status polling
  • WebSocket connections: Real-time updates to mobile apps - requires separate WebSocket server, n8n triggers on completion (10+ node workflow)
  • Webhook callbacks: Async processing for large batches - add callback URL parameter, process in background, POST results when complete (+4 nodes)

Workflow Extensions

Add automated meal planning suggestions:

  • Insert an additional OpenAI call after nutrition calculation
  • Prompt: "Based on these macros, suggest 3 complementary meals to balance daily nutrition"
  • Connect to recipe database or API
  • Return suggestions in same response payload
  • Nodes needed: +4 (OpenAI node, HTTP Request to recipe API, Function for formatting, Merge node)

Scale to handle batch processing:

  • Replace single-image webhook with file upload endpoint accepting ZIP files
  • Add "Split in Batches" node to process 10 images at a time
  • Implement progress tracking (store in Redis or database)
  • Return batch ID immediately, provide status endpoint for polling
  • Performance improvement: Process 100 images in 5 minutes vs 8+ minutes sequentially
  • Nodes needed: +12 (File extraction, Loop, Progress tracking, Status endpoint)

Add nutrition goal tracking:

  • Store user's daily calorie and macro targets in database
  • After each meal analysis, calculate remaining targets
  • Send push notification if user exceeds daily limits
  • Generate weekly summary reports
  • Nodes needed: +7 (Database read/write, Function for calculations, IF for threshold checks, HTTP Request to notification service)

Integration possibilities:

Add This To Get This Complexity Nodes Required
Supabase database Store meal history, enable trends Easy +3 (Insert, Query, Update)
Stripe billing Charge per analysis or subscription Medium +5 (Webhook for payments, Usage tracking, Quota enforcement)
Slack notifications Alert nutritionists to review flagged meals Easy +2 (IF condition, Slack node)
Google Sheets export Weekly nutrition reports for users Easy +4 (Schedule trigger, Aggregate data, Format, Sheets append)
Twilio SMS Send daily macro summaries Easy +3 (Schedule, Aggregate, SMS node)
Airtable sync Better data visualization and manual review Medium +5 (Airtable query, Compare, Update, Create records)
Custom ML model Train on your user data for better accuracy Hard +15 (Model hosting, Inference API, Fallback logic, A/B testing)

Advanced customization: Multi-language support

Modify the OpenAI system prompt to detect language and respond accordingly:

{
  "role": "system",
  "content": "You are a nutrition expert. Detect the language of any text in the image (menu items, labels) and provide your analysis in that same language. Support English, Spanish, French, German, and Japanese."
}

Add language detection and translation:

  • Insert a Function node to detect user's preferred language from request headers
  • Modify OpenAI prompt to force specific language output
  • Add translation API call (Google Translate) if needed for ingredients
  • Nodes needed: +3 (Language detection, Conditional routing, Translation API)

Performance optimization for high-volume:

Replace synchronous processing with async architecture:

  1. Webhook receives image, generates unique ID, returns immediately
  2. Background workflow processes image (triggered by internal webhook)
  3. Results stored in Redis with 24-hour TTL
  4. Client polls status endpoint or receives webhook callback
  5. Reduces perceived latency from 4s to under 500ms
  6. Handles 10x more concurrent requests

Implementation requires: +15 nodes (ID generation, Redis operations, Status endpoint, Background trigger, Callback system)

Get Started Today

Ready to automate your calorie tracking?

  1. Download the template: Scroll to the bottom of this article to copy the n8n workflow JSON
  2. Import to n8n: Go to Workflows → Import from URL or File, paste the JSON
  3. Configure your services: Add your OpenAI API credentials (Settings → Credentials → New → OpenAI)
  4. Test with sample data: Use the test meal photos provided or upload your own to verify accuracy
  5. Deploy to production: Activate the workflow, copy your webhook URL, and integrate with your front-end

Quick start checklist:

  • Import workflow JSON into n8n
  • Add OpenAI API key with GPT-4 Vision access
  • Test webhook with curl or Postman
  • Verify response format matches your app's data model
  • Set up error monitoring (Slack/email notifications)
  • Configure rate limiting if expecting high volume
  • Document your webhook URL and request format for developers

Customization tips:

  • Start with temperature 0.3, adjust based on your accuracy needs
  • Add user feedback collection to improve prompts over time
  • Monitor OpenAI costs daily for the first week to establish baseline
  • Test with your actual user photos, not just stock images

Need help customizing this workflow for your specific needs? Schedule an intro call with Atherial at [contact page]. We specialize in production-ready n8n automations for health and fitness applications.

Complete n8n Workflow JSON Template:

{
  "name": "AI Calorie Tracking Automation",
  "nodes": [
    {
      "parameters": {
        "httpMethod": "POST",
        "path": "calorie-tracker",
        "responseMode": "responseNode",
        "options": {
          "rawBody": true
        }
      },
      "name": "Webhook",
      "type": "n8n-nodes-base.webhook",
      "typeVersion": 1,
      "position": [250, 300]
    },
    {
      "parameters": {
        "mode": "jsonToBinary",
        "options": {}
      },
      "name": "Move Binary Data",
      "type": "n8n-nodes-base.moveBinaryData",
      "typeVersion": 1,
      "position": [450, 300]
    },
    {
      "parameters": {
        "functionCode": "const binaryData = items[0].binary.data;
const base64Image = binaryData.data;

return [{
  json: {
    imageData: base64Image,
    mimeType: binaryData.mimeType || 'image/jpeg'
  }
}];"
      },
      "name": "Encode Base64",
      "type": "n8n-nodes-base.function",
      "typeVersion": 1,
      "position": [650, 300]
    },
    {
      "parameters": {
        "model": "gpt-4-vision-preview",
        "messages": {
          "values": [
            {
              "role": "system",
              "content": "You are a nutrition expert analyzing meal photos. Identify all visible ingredients, estimate portion sizes using visual cues, and calculate total calories and macros. Be conservative with portion estimates."
            },
            {
              "role": "user",
              "content": "Analyze this meal photo and provide: 1) List of ingredients with estimated weights, 2) Total calories, 3) Protein (g), 4) Carbohydrates (g), 5) Fat (g). Format as JSON."
            }
          ]
        },
        "options": {
          "temperature": 0.3,
          "maxTokens": 1500
        }
      },
      "name": "OpenAI Vision",
      "type": "n8n-nodes-base.openAi",
      "typeVersion": 1,
      "position": [850, 300]
    },
    {
      "parameters": {
        "functionCode": "const response = items[0].json.choices[0].message.content;

try {
  const nutritionData = JSON.parse(response);
  
  if (!nutritionData.totals || !nutritionData.totals.calories) {
    throw new Error('Missing required nutrition data');
  }
  
  const calories = nutritionData.totals.calories;
  if (calories < 50 || calories > 5000) {
    throw new Error('Calorie estimate out of reasonable range');
  }
  
  return [{
    json: {
      success: true,
      data: nutritionData
    }
  }];
  
} catch (error) {
  return [{
    json: {
      success: false,
      error: error.message,
      fallback: {
        message: \"Unable to analyze this image. Please try a clearer photo.\"
      }
    }
  }];
}"
      },
      "name": "Validate Response",
      "type": "n8n-nodes-base.function",
      "typeVersion": 1,
      "position": [1050, 300]
    },
    {
      "parameters": {
        "respondWith": "json",
        "responseBody": "={{ $json }}"
      },
      "name": "Respond to Webhook",
      "type": "n8n-nodes-base.respondToWebhook",
      "typeVersion": 1,
      "position": [1250, 300]
    }
  ],
  "connections": {
    "Webhook": {
      "main": [[{"node": "Move Binary Data", "type": "main", "index": 0}]]
    },
    "Move Binary Data": {
      "main": [[{"node": "Encode Base64", "type": "main", "index": 0}]]
    },
    "Encode Base64": {
      "main": [[{"node": "OpenAI Vision", "type": "main", "index": 0}]]
    },
    "OpenAI Vision": {
      "main": [[{"node": "Validate Response", "type": "main", "index": 0}]]
    },
    "Validate Response": {
      "main": [[{"node": "Respond to Webhook", "type": "main", "index": 0}]]
    }
  }
}

Copy this JSON and import it directly into your n8n instance to get started immediately.

Complete N8N Workflow Template

Copy the JSON below and import it into your N8N instance via Workflows → Import from File

{
  "name": "AI-Powered Health Tracking Automation Suite",
  "nodes": [
    {
      "id": "webhook-trigger",
      "name": "Webhook Trigger",
      "type": "n8n-nodes-base.webhook",
      "position": [
        250,
        300
      ],
      "webhookId": "health-tracker",
      "parameters": {
        "path": "health-tracker",
        "options": {},
        "httpMethod": "POST",
        "responseMode": "responseNode"
      },
      "typeVersion": 2.1
    },
    {
      "id": "validate-input",
      "name": "Validate & Parse Input",
      "type": "n8n-nodes-base.code",
      "onError": "continueErrorOutput",
      "position": [
        450,
        300
      ],
      "parameters": {
        "jsCode": "// Extract and validate incoming webhook data\nconst items = $input.all();\nconst results = [];\n\nfor (const item of items) {\n  const body = item.json.body || item.json;\n  \n  // Determine operation type\n  const operationType = body.operation || body.type || 'unknown';\n  \n  // Extract and validate data based on operation\n  let processedData = {\n    operation: operationType,\n    timestamp: new Date().toISOString(),\n    requestId: `req_${Date.now()}_${Math.random().toString(36).substr(2, 9)}`\n  };\n  \n  if (operationType === 'calorie_tracking' || operationType === 'meal_analysis') {\n    processedData.imageUrl = body.imageUrl || body.image_url || null;\n    processedData.imageData = body.imageData || body.image_data || null;\n    processedData.mealType = body.mealType || body.meal_type || 'general';\n    processedData.userNotes = body.notes || body.description || '';\n    processedData.userId = body.userId || body.user_id || 'anonymous';\n    \n    if (!processedData.imageUrl && !processedData.imageData) {\n      processedData.error = 'Missing image data';\n      processedData.validationFailed = true;\n    }\n  } else if (operationType === 'health_coaching' || operationType === 'ask_question') {\n    processedData.question = body.question || body.query || '';\n    processedData.userContext = {\n      age: body.age || null,\n      gender: body.gender || null,\n      height: body.height || null,\n      weight: body.weight || null,\n      activityLevel: body.activityLevel || body.activity_level || 'moderate',\n      dietaryRestrictions: body.dietaryRestrictions || body.dietary_restrictions || [],\n      healthGoals: body.healthGoals || body.health_goals || []\n    };\n    processedData.conversationHistory = body.history || [];\n    \n    if (!processedData.question || processedData.question.trim().length === 0) {\n      processedData.error = 'Missing question';\n      processedData.validationFailed = true;\n    }\n  } else {\n    processedData.error = 'Unknown operation type';\n    processedData.validationFailed = true;\n  }\n  \n  results.push({ json: processedData });\n}\n\nreturn results;"
      },
      "typeVersion": 2
    },
    {
      "id": "route-request",
      "name": "Route by Operation",
      "type": "n8n-nodes-base.switch",
      "position": [
        650,
        300
      ],
      "parameters": {
        "mode": "rules",
        "rules": {
          "values": [
            {
              "outputKey": "validation_error",
              "conditions": {
                "options": {
                  "leftValue": "",
                  "caseSensitive": false
                },
                "combinator": "and",
                "conditions": [
                  {
                    "operator": {
                      "type": "boolean",
                      "operation": "equals"
                    },
                    "leftValue": "={{ $json.validationFailed }}",
                    "rightValue": true
                  }
                ]
              }
            },
            {
              "outputKey": "meal_analysis",
              "conditions": {
                "options": {
                  "leftValue": "",
                  "caseSensitive": false
                },
                "combinator": "or",
                "conditions": [
                  {
                    "operator": {
                      "type": "string",
                      "operation": "equals"
                    },
                    "leftValue": "={{ $json.operation }}",
                    "rightValue": "calorie_tracking"
                  },
                  {
                    "operator": {
                      "type": "string",
                      "operation": "equals"
                    },
                    "leftValue": "={{ $json.operation }}",
                    "rightValue": "meal_analysis"
                  }
                ]
              }
            },
            {
              "outputKey": "health_coaching",
              "conditions": {
                "options": {
                  "leftValue": "",
                  "caseSensitive": false
                },
                "combinator": "or",
                "conditions": [
                  {
                    "operator": {
                      "type": "string",
                      "operation": "equals"
                    },
                    "leftValue": "={{ $json.operation }}",
                    "rightValue": "health_coaching"
                  },
                  {
                    "operator": {
                      "type": "string",
                      "operation": "equals"
                    },
                    "leftValue": "={{ $json.operation }}",
                    "rightValue": "ask_question"
                  }
                ]
              }
            }
          ]
        },
        "options": {
          "fallbackOutput": 3
        }
      },
      "typeVersion": 3.3
    },
    {
      "id": "error-response",
      "name": "Return Validation Error",
      "type": "n8n-nodes-base.respondToWebhook",
      "position": [
        850,
        100
      ],
      "parameters": {
        "options": {
          "responseCode": 400
        },
        "respondWith": "json",
        "responseBody": "={\n  \"success\": false,\n  \"error\": {{ JSON.stringify($json.error) }},\n  \"requestId\": {{ JSON.stringify($json.requestId) }},\n  \"timestamp\": {{ JSON.stringify($json.timestamp) }}\n}"
      },
      "typeVersion": 1.4
    },
    {
      "id": "prepare-vision-request",
      "name": "Prepare Vision Analysis",
      "type": "n8n-nodes-base.code",
      "onError": "continueErrorOutput",
      "position": [
        850,
        300
      ],
      "parameters": {
        "jsCode": "// Prepare vision analysis request\nconst item = $input.first().json;\n\nconst imageSource = item.imageUrl || `data:image/jpeg;base64,${item.imageData}`;\n\nconst systemPrompt = `You are an expert nutritionist and food recognition AI. Analyze meal images and provide accurate nutritional information.\n\nYour task:\n1. Identify all food items in the image\n2. Estimate portion sizes\n3. Calculate total calories and macronutrients\n4. Provide nutritional breakdown\n5. Offer healthy eating insights\n\nBe precise, thorough, and honest. If uncertain about identification or portions, acknowledge it.`;\n\nconst userPrompt = `Analyze this ${item.mealType} meal photo and provide detailed nutritional information.\n\n${item.userNotes ? `User notes: ${item.userNotes}` : ''}\n\nProvide a comprehensive analysis including:\n- Identified food items with estimated portions\n- Total calories\n- Macronutrients (protein, carbs, fats in grams)\n- Micronutrients (vitamins, minerals)\n- Health score (1-10)\n- Dietary insights and recommendations`;\n\nreturn [{\n  json: {\n    requestId: item.requestId,\n    userId: item.userId,\n    mealType: item.mealType,\n    imageSource: imageSource,\n    systemPrompt: systemPrompt,\n    userPrompt: userPrompt,\n    originalData: item\n  }\n}];"
      },
      "typeVersion": 2
    },
    {
      "id": "openai-vision",
      "name": "OpenAI Vision Analysis",
      "type": "n8n-nodes-base.httpRequest",
      "onError": "continueErrorOutput",
      "position": [
        1050,
        300
      ],
      "parameters": {
        "url": "https://api.openai.com/v1/chat/completions",
        "method": "POST",
        "options": {},
        "sendBody": true,
        "contentType": "json",
        "authentication": "predefinedCredentialType",
        "bodyParameters": {
          "parameters": [
            {
              "name": "model",
              "value": "gpt-4o"
            },
            {
              "name": "messages",
              "value": "={{ [{role: 'system', content: $json.systemPrompt}, {role: 'user', content: [{type: 'text', text: $json.userPrompt}, {type: 'image_url', image_url: {url: $json.imageSource}}]}] }}"
            },
            {
              "name": "max_tokens",
              "value": "1500"
            },
            {
              "name": "temperature",
              "value": "0.3"
            }
          ]
        },
        "nodeCredentialType": "openAiApi"
      },
      "typeVersion": 4.3
    },
    {
      "id": "parse-vision-response",
      "name": "Parse & Validate Nutrition Data",
      "type": "n8n-nodes-base.code",
      "onError": "continueErrorOutput",
      "position": [
        1250,
        300
      ],
      "parameters": {
        "jsCode": "// Parse and structure the vision API response with validation\nconst item = $input.first().json;\nconst visionResponse = item.choices?.[0]?.message?.content || '';\nconst originalData = $input.item(0, 1).json.originalData;\n\n// Extract structured data using regex and parsing\nconst extractCalories = (text) => {\n  const match = text.match(/total calories?[:\\s]+(\\d+)/i) || \n                text.match(/(\\d+)\\s*calories?/i);\n  return match ? parseInt(match[1]) : null;\n};\n\nconst extractMacros = (text) => {\n  const protein = text.match(/protein[:\\s]+(\\d+)\\s*g/i);\n  const carbs = text.match(/carb(?:ohydrate)?s?[:\\s]+(\\d+)\\s*g/i);\n  const fats = text.match(/fats?[:\\s]+(\\d+)\\s*g/i);\n  \n  return {\n    protein: protein ? parseInt(protein[1]) : null,\n    carbs: carbs ? parseInt(carbs[1]) : null,\n    fats: fats ? parseInt(fats[1]) : null\n  };\n};\n\nconst extractHealthScore = (text) => {\n  const match = text.match(/health score[:\\s]+(\\d+)/i) ||\n                text.match(/score[:\\s]+(\\d+)\\s*\\/\\s*10/i);\n  return match ? parseInt(match[1]) : null;\n};\n\nconst calories = extractCalories(visionResponse);\nconst macros = extractMacros(visionResponse);\nconst healthScore = extractHealthScore(visionResponse);\n\nconst validationIssues = [];\nif (!calories) validationIssues.push('Could not extract calorie information');\nif (!macros.protein || !macros.carbs || !macros.fats) {\n  validationIssues.push('Incomplete macro information');\n}\n\nconst result = {\n  success: true,\n  requestId: originalData.requestId,\n  userId: originalData.userId,\n  operation: 'meal_analysis',\n  timestamp: originalData.timestamp,\n  mealType: originalData.mealType,\n  analysis: {\n    rawResponse: visionResponse,\n    nutritionalData: {\n      calories: calories,\n      macronutrients: macros,\n      healthScore: healthScore\n    },\n    validation: {\n      hasIssues: validationIssues.length > 0,\n      issues: validationIssues,\n      confidence: validationIssues.length === 0 ? 'high' : 'medium'\n    }\n  },\n  metadata: {\n    model: 'gpt-4o-vision',\n    processingTime: Date.now() - new Date(originalData.timestamp).getTime()\n  }\n};\n\nreturn [{ json: result }];"
      },
      "typeVersion": 2
    },
    {
      "id": "prepare-coaching-request",
      "name": "Prepare Health Coaching",
      "type": "n8n-nodes-base.code",
      "onError": "continueErrorOutput",
      "position": [
        850,
        500
      ],
      "parameters": {
        "jsCode": "// Prepare health coaching request with structured output schema\nconst item = $input.first().json;\nconst context = item.userContext || {};\n\nconst systemPrompt = `You are a certified health coach and nutritionist AI assistant. Provide evidence-based, personalized health guidance.\n\nYour capabilities:\n- Answer nutrition and fitness questions\n- Provide meal planning advice\n- Offer exercise recommendations\n- Give evidence-based health tips\n- Consider user context and restrictions\n\nGuidelines:\n- Be empathetic and supportive\n- Provide actionable advice\n- Cite evidence when possible\n- Acknowledge limitations (you're not a doctor)\n- Prioritize safety and realistic goals\n- Flag anything requiring medical attention`;\n\nconst contextSummary = `User Profile:\n${context.age ? `Age: ${context.age}` : ''}\n${context.gender ? `Gender: ${context.gender}` : ''}\n${context.height ? `Height: ${context.height}` : ''}\n${context.weight ? `Weight: ${context.weight}` : ''}\nActivity Level: ${context.activityLevel}\n${context.dietaryRestrictions?.length ? `Dietary Restrictions: ${context.dietaryRestrictions.join(', ')}` : ''}\n${context.healthGoals?.length ? `Health Goals: ${context.healthGoals.join(', ')}` : ''}`;\n\nconst conversationContext = item.conversationHistory?.length > 0 \n  ? `\\n\\nPrevious conversation:\\n${item.conversationHistory.map(msg => `${msg.role}: ${msg.content}`).join('\\n')}`\n  : '';\n\nconst fullPrompt = `${contextSummary}\\n\\nUser Question: ${item.question}${conversationContext}\\n\\nProvide a comprehensive, personalized response addressing their question.`;\n\nconst outputSchema = {\n  type: \"object\",\n  properties: {\n    answer: {\n      type: \"string\",\n      description: \"Main answer to the user's question\"\n    },\n    recommendations: {\n      type: \"array\",\n      items: {\n        type: \"object\",\n        properties: {\n          title: { type: \"string\" },\n          description: { type: \"string\" },\n          priority: { type: \"string\", enum: [\"high\", \"medium\", \"low\"] }\n        },\n        required: [\"title\", \"description\", \"priority\"],\n        additionalProperties: false\n      },\n      description: \"Actionable recommendations\"\n    },\n    nutritionTips: {\n      type: \"array\",\n      items: { type: \"string\" },\n      description: \"Specific nutrition tips\"\n    },\n    exerciseSuggestions: {\n      type: \"array\",\n      items: { type: \"string\" },\n      description: \"Exercise or activity suggestions\"\n    },\n    warnings: {\n      type: \"array\",\n      items: { type: \"string\" },\n      description: \"Important warnings or medical disclaimers\"\n    },\n    confidence: {\n      type: \"string\",\n      enum: [\"high\", \"medium\", \"low\"],\n      description: \"Confidence level in the answer\"\n    },\n    sources: {\n      type: \"array\",\n      items: { type: \"string\" },\n      description: \"Evidence sources or references\"\n    }\n  },\n  required: [\"answer\", \"recommendations\", \"confidence\"],\n  additionalProperties: false\n};\n\nreturn [{\n  json: {\n    requestId: item.requestId,\n    question: item.question,\n    systemPrompt: systemPrompt,\n    userPrompt: fullPrompt,\n    outputSchema: outputSchema,\n    originalData: item\n  }\n}];"
      },
      "typeVersion": 2
    },
    {
      "id": "structured-coaching-request",
      "name": "Get Structured Coaching Response",
      "type": "n8n-nodes-base.httpRequest",
      "onError": "continueErrorOutput",
      "position": [
        1050,
        500
      ],
      "parameters": {
        "url": "https://api.openai.com/v1/chat/completions",
        "method": "POST",
        "options": {},
        "sendBody": true,
        "contentType": "json",
        "authentication": "predefinedCredentialType",
        "bodyParameters": {
          "parameters": [
            {
              "name": "model",
              "value": "gpt-4o-2024-08-06"
            },
            {
              "name": "messages",
              "value": "={{ [{role: 'system', content: $json.systemPrompt}, {role: 'user', content: $json.userPrompt}] }}"
            },
            {
              "name": "response_format",
              "value": "={{ {type: 'json_schema', json_schema: {name: 'health_coaching_response', strict: true, schema: $json.outputSchema}} }}"
            },
            {
              "name": "temperature",
              "value": "0.7"
            },
            {
              "name": "max_tokens",
              "value": "2000"
            }
          ]
        },
        "nodeCredentialType": "openAiApi"
      },
      "typeVersion": 4.3
    },
    {
      "id": "parse-coaching-response",
      "name": "Validate Coaching Response",
      "type": "n8n-nodes-base.code",
      "onError": "continueErrorOutput",
      "position": [
        1250,
        500
      ],
      "parameters": {
        "jsCode": "// Parse structured coaching response with hallucination checks\nconst item = $input.first().json;\nconst originalData = $input.item(0, 1).json.originalData;\n\nlet structuredResponse;\ntry {\n  const responseContent = item.choices?.[0]?.message?.content;\n  structuredResponse = typeof responseContent === 'string' \n    ? JSON.parse(responseContent) \n    : responseContent;\n} catch (error) {\n  structuredResponse = {\n    answer: item.choices?.[0]?.message?.content || 'Error parsing response',\n    recommendations: [],\n    confidence: 'low',\n    error: 'Failed to parse structured response'\n  };\n}\n\nconst hallucinationFlags = [];\n\nif (structuredResponse.answer && !structuredResponse.sources?.length) {\n  const hasNumericClaims = /\\d+%|\\d+\\s*(calories|grams|mg|kg|lbs)/i.test(structuredResponse.answer);\n  if (hasNumericClaims) {\n    hallucinationFlags.push('Contains numeric claims without sources');\n  }\n}\n\nif (structuredResponse.answer) {\n  const hasMedicalTerms = /(disease|condition|medication|prescription|diagnosis)/i.test(structuredResponse.answer);\n  if (hasMedicalTerms && !structuredResponse.warnings?.length) {\n    hallucinationFlags.push('Contains medical terms without appropriate warnings');\n    structuredResponse.warnings = structuredResponse.warnings || [];\n    structuredResponse.warnings.unshift('Please consult a healthcare professional for medical advice.');\n  }\n}\n\nif (hallucinationFlags.length > 0 && structuredResponse.confidence === 'high') {\n  structuredResponse.confidence = 'medium';\n}\n\nconst result = {\n  success: true,\n  requestId: originalData.requestId,\n  userId: originalData.userId,\n  operation: 'health_coaching',\n  timestamp: originalData.timestamp,\n  question: originalData.question,\n  response: structuredResponse,\n  validation: {\n    hallucinationFlags: hallucinationFlags,\n    hasIssues: hallucinationFlags.length > 0,\n    confidence: structuredResponse.confidence\n  },\n  metadata: {\n    model: 'gpt-4o-structured',\n    processingTime: Date.now() - new Date(originalData.timestamp).getTime()\n  }\n};\n\nreturn [{ json: result }];"
      },
      "typeVersion": 2
    },
    {
      "id": "format-success-response",
      "name": "Format Success Response",
      "type": "n8n-nodes-base.code",
      "onError": "continueErrorOutput",
      "position": [
        1450,
        400
      ],
      "parameters": {
        "jsCode": "// Merge and format all successful responses\nconst items = $input.all();\n\nif (items.length === 0) {\n  return [{\n    json: {\n      success: false,\n      error: 'No data to merge',\n      timestamp: new Date().toISOString()\n    }\n  }];\n}\n\nconst item = items[0].json;\n\nlet formattedResponse;\n\nif (item.operation === 'meal_analysis') {\n  formattedResponse = {\n    success: true,\n    requestId: item.requestId,\n    userId: item.userId,\n    operation: 'meal_analysis',\n    timestamp: item.timestamp,\n    data: {\n      mealType: item.mealType,\n      nutritionalInfo: {\n        calories: item.analysis.nutritionalData.calories,\n        macros: item.analysis.nutritionalData.macronutrients,\n        healthScore: item.analysis.nutritionalData.healthScore\n      },\n      fullAnalysis: item.analysis.rawResponse,\n      confidence: item.analysis.validation.confidence,\n      warnings: item.analysis.validation.issues\n    },\n    metadata: item.metadata\n  };\n} else if (item.operation === 'health_coaching') {\n  formattedResponse = {\n    success: true,\n    requestId: item.requestId,\n    userId: item.userId,\n    operation: 'health_coaching',\n    timestamp: item.timestamp,\n    question: item.question,\n    data: {\n      answer: item.response.answer,\n      recommendations: item.response.recommendations || [],\n      nutritionTips: item.response.nutritionTips || [],\n      exerciseSuggestions: item.response.exerciseSuggestions || [],\n      warnings: item.response.warnings || [],\n      sources: item.response.sources || [],\n      confidence: item.response.confidence\n    },\n    validation: item.validation,\n    metadata: item.metadata\n  };\n} else {\n  formattedResponse = {\n    success: true,\n    data: item\n  };\n}\n\nreturn [{ json: formattedResponse }];"
      },
      "typeVersion": 2
    },
    {
      "id": "success-response",
      "name": "Return Success Response",
      "type": "n8n-nodes-base.respondToWebhook",
      "position": [
        1650,
        400
      ],
      "parameters": {
        "options": {
          "responseHeaders": {
            "entries": [
              {
                "name": "Content-Type",
                "value": "application/json"
              },
              {
                "name": "X-Request-ID",
                "value": "={{ $json.requestId }}"
              }
            ]
          }
        },
        "respondWith": "json",
        "responseBody": "={{ JSON.stringify($json, null, 2) }}"
      },
      "typeVersion": 1.4
    }
  ],
  "settings": {
    "callerPolicy": "workflowsFromSameOwner",
    "executionOrder": "v1",
    "saveManualExecutions": true
  },
  "connections": {
    "Webhook Trigger": {
      "main": [
        [
          {
            "node": "Validate & Parse Input",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Route by Operation": {
      "main": [
        [
          {
            "node": "Return Validation Error",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "Prepare Vision Analysis",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "Prepare Health Coaching",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "OpenAI Vision Analysis": {
      "main": [
        [
          {
            "node": "Parse & Validate Nutrition Data",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Validate & Parse Input": {
      "main": [
        [
          {
            "node": "Route by Operation",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Format Success Response": {
      "main": [
        [
          {
            "node": "Return Success Response",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Prepare Health Coaching": {
      "main": [
        [
          {
            "node": "Get Structured Coaching Response",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Prepare Vision Analysis": {
      "main": [
        [
          {
            "node": "OpenAI Vision Analysis",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Validate Coaching Response": {
      "main": [
        [
          {
            "node": "Format Success Response",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Parse & Validate Nutrition Data": {
      "main": [
        [
          {
            "node": "Format Success Response",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Get Structured Coaching Response": {
      "main": [
        [
          {
            "node": "Validate Coaching Response",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  }
}