How to Build an Automated FDIC Call Report Analysis System with n8n (Free Template)

Financial institutions need to monitor commercial real estate (CRE) exposure across the banking sector every quarter. Manually downloading FDIC Call Reports, parsing thousands of bank records, and calculating risk metrics takes days of analyst time. This n8n workflow automates the entire process—from data retrieval to ranked output—delivering actionable CRE pressure rankings in minutes. You'll learn how to build a system that processes quarterly FDIC datasets, calculates key ratios, and generates ranked bank lists automatically.

The Problem: Manual FDIC Data Analysis Costs Days Per Quarter

Financial analysts, regulators, and investors need quarterly insights into which U.S. banks face the highest commercial real estate exposure. The FDIC releases Call Report data every quarter, but extracting meaningful intelligence requires significant manual effort.

Current challenges:

  • Downloading multiple CSV/TXT files from FFIEC portals manually
  • Parsing datasets with 5,000+ banks and hundreds of fields per institution
  • Calculating CRE ratios, growth metrics, and pressure scores in spreadsheets
  • Reconciling data across multiple quarters for trend analysis
  • Generating ranked lists and reports for stakeholders

Business impact:

  • Time spent: 8-16 hours per quarter for manual processing
  • Error rate: High risk of formula mistakes in complex Excel calculations
  • Timeliness: 2-3 day delay between data release and actionable insights
  • Scalability: Cannot easily expand analysis to additional metrics or timeframes

The Solution Overview

This n8n workflow automates the complete FDIC Call Report analysis pipeline. It downloads quarterly datasets from FFIEC servers, extracts CRE-related fields for every U.S. bank, calculates exposure ratios and pressure scores, then outputs ranked CSV and Excel files. The system uses HTTP Request nodes for data retrieval, Function nodes for financial calculations, and PostgreSQL for data storage. A Schedule trigger runs the workflow automatically each quarter, while a Webhook provides manual execution capability.

What You'll Build

This workflow delivers a production-ready banking analytics system with comprehensive CRE monitoring capabilities.

| Component | Technology | Purpose |
| --- | --- | --- |
| Data Retrieval | HTTP Request nodes | Download quarterly FDIC Call Report CSV/TXT files from FFIEC |
| Data Parsing | Function nodes (JavaScript) | Extract CRE loans, capital, and nonaccrual data for 5,000+ banks |
| Ratio Calculation | Function nodes | Compute CRE % of loans, CRE % of capital, construction exposure, YoY growth |
| Scoring Engine | Function nodes | Calculate CRE Pressure Score using weighted risk factors |
| Data Storage | PostgreSQL nodes | Store processed data with quarterly snapshots |
| Output Generation | Spreadsheet File nodes | Export ranked CSV and Excel files |
| Automation | Schedule Trigger | Quarterly execution after FDIC releases |
| Manual Control | Webhook Trigger | On-demand workflow execution |
| Optional Dashboard | HTTP Request to BI tools | Push data to Metabase/PowerBI/Streamlit |

Key capabilities:

  • Processes the complete U.S. banking sector (5,000+ institutions) in roughly 10 minutes
  • Calculates 8+ CRE exposure metrics per bank
  • Generates ranked lists from highest to lowest CRE pressure
  • Maintains historical data for trend analysis
  • Exports analysis-ready files for stakeholders

Prerequisites

Before starting, ensure you have:

  • n8n instance (cloud or self-hosted with 4GB+ RAM for large dataset processing)
  • PostgreSQL database with admin access (local or cloud instance)
  • FFIEC/FDIC portal familiarity (understand Call Report structure)
  • JavaScript knowledge for Function node customization
  • 10GB+ storage for quarterly dataset archives
  • Optional: Metabase/PowerBI account for dashboard integration

Step 1: Configure Dual Trigger System

The workflow needs both scheduled and manual execution capabilities. Quarterly FDIC releases don't follow exact calendar dates, so you need flexibility.

Set up Schedule Trigger:

  1. Add Schedule Trigger node
  2. Set to quarterly: 0 0 1 1,4,7,10 * (runs first day of Jan, Apr, Jul, Oct)
  3. Configure timezone to match FDIC release schedule (typically EST)

Add Webhook Trigger for manual runs:

  1. Add Webhook node in parallel to Schedule
  2. Set method to POST
  3. Path: /fdic-cre-analysis
  4. Authentication: Header Auth with custom token

Node configuration:

{
  "schedule": {
    "cronExpression": "0 0 1 1,4,7,10 *",
    "timezone": "America/New_York"
  },
  "webhook": {
    "path": "fdic-cre-analysis",
    "method": "POST",
    "authentication": "headerAuth"
  }
}

Why this works:
The dual trigger approach handles both automated quarterly runs and ad-hoc analysis requests. FDIC sometimes delays releases by weeks, so manual triggering prevents waiting for the next scheduled run. The webhook authentication prevents unauthorized executions that could strain your database.
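
A hypothetical manual trigger call, for example from a Node.js script (the host, webhook path, and header name depend on your instance and Header Auth credential setup):

// Trigger the workflow on demand; adjust URL and auth header to your setup
const res = await fetch('https://your-n8n-host/webhook/fdic-cre-analysis', {
  method: 'POST',
  headers: { 'X-Workflow-Token': 'your-token' } // must match the Header Auth credential
});
console.log(res.status); // 200 once the workflow starts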

Step 2: Automate FDIC Dataset Download

FDIC Call Reports are published as bulk CSV files on FFIEC servers. You need to download the correct quarterly file and handle large dataset sizes.

Configure HTTP Request node:

  1. Add HTTP Request node
  2. Method: GET
  3. URL: https://cdr.ffiec.gov/public/PWS/DownloadBulkData.aspx?format=csv&quarter={{$now.minus({quarters: 1}).toFormat("yyyy'Q'q")}}
  4. Response format: File
  5. Download to: /tmp/fdic_call_reports_{{$now.minus({quarters: 1}).toFormat("yyyy'Q'q")}}.csv

Handle authentication and headers:

{
  "method": "GET",
  "url": "https://cdr.ffiec.gov/public/PWS/DownloadBulkData.aspx",
  "queryParameters": {
    "format": "csv",
    "quarter": "={{$now.format('YYYYQ')}}"
  },
  "options": {
    "timeout": 300000,
    "redirect": {
      "followRedirects": true,
      "maxRedirects": 5
    }
  }
}

Why this approach:
FFIEC servers can be slow with 200MB+ files, so the 5-minute timeout prevents premature failures. The dynamic quarter expression {{$now.minus({quarters: 1}).toFormat("yyyy'Q'q")}} requests the most recently completed quarter, since Call Report data for a quarter still in progress doesn't exist yet. Saving to /tmp with quarter-stamped filenames creates an audit trail of downloaded datasets.

Common issues:

  • FFIEC changes URL structure → Monitor FDIC announcements and update URL quarterly
  • File size exceeds n8n memory limits → Use streaming mode in HTTP Request node settings
  • Quarter format mismatch → FFIEC uses the 2024Q1 format; n8n's $now is a Luxon DateTime, so use toFormat with the Q quoted as a literal (not .format, which doesn't exist on Luxon); see the sketch below
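
If the inline expression gets unwieldy, you can build the quarter string in a Code node instead. A minimal sketch, assuming Luxon's DateTime global, which n8n exposes in Code nodes and expressions:

// Build the FFIEC quarter string for the most recently completed quarter
const prev = DateTime.now().minus({ quarters: 1 });
const quarter = prev.toFormat("yyyy'Q'q"); // e.g. "2024Q1"

return [{ json: { quarter } }];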

Step 3: Parse Call Report Data and Extract CRE Fields

Call Reports contain 2,000+ fields per bank. You need to extract only CRE-relevant data points efficiently.

Add Function node for parsing:

// Parse the Call Report CSV and extract CRE fields
// Binary file contents are retrieved as a Buffer (works in both memory
// and filesystem binary modes)
const csvBuffer = await this.helpers.getBinaryDataBuffer(0, 'data');
const csvData = csvBuffer.toString('utf8');

// Define required FFIEC field codes
// Verify codes against the current FFIEC data dictionary: the draft mapping
// reused RCON1415 twice, so the construction entry below is a placeholder
// you must replace with the verified code
const fieldMap = {
  'RSSD9001': 'bank_id',           // RSSD identifier
  'RSSD9010': 'bank_name',         // Legal Name
  'RCON1766': 'total_loans',       // Total Loans & Leases
  'RCON1415': 'total_cre',         // Total CRE Loans
  'RCON1420': 'nonowner_cre',      // Non-owner-occupied CRE
  'RCON1460': 'owner_cre',         // Owner-occupied CRE
  'RCON1480': 'multifamily',       // Multifamily loans
  'RCON____': 'construction',      // Construction & Land Dev: fill in verified code
  'RCON1227': 'nonaccrual_cre',    // Nonaccrual CRE
  'RCFD8274': 'tier1_capital',     // Tier 1 Capital
  'RCFD3792': 'total_capital'      // Total Risk-Based Capital
};

// Read the header row once, then map each data row to named fields
// Note: naive comma split; quoted fields containing commas need a CSV parser
const lines = csvData.split('\n').filter(l => l.trim());
const headers = lines[0].split(',');

const banks = lines.slice(1).map(row => {
  const fields = row.split(',');
  const bank = {};

  Object.entries(fieldMap).forEach(([code, name]) => {
    const index = headers.indexOf(code);
    bank[name] = index >= 0 ? (parseFloat(fields[index]) || 0) : 0;
  });

  return bank;
});

return banks.map(bank => ({ json: bank }));

Why this works:
FFIEC uses standardized field codes (RCON/RCFD prefixes) that remain consistent across quarters. Parsing only required fields reduces memory usage by 95% compared to loading the entire dataset. Converting strings to floats immediately prevents calculation errors downstream.

Variables to customize:

  • fieldMap: Add additional FFIEC codes for expanded analysis (e.g., RCON5369 for CRE charge-offs)
  • parseFloat handling: Adjust for null values based on your database schema requirements
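
A small sketch of both customizations; RCON5369 is the charge-off example from above, and like any code it should be verified against the FFIEC data dictionary before use:

// Add an extra FFIEC code to the parser's field map
fieldMap['RCON5369'] = 'cre_chargeoffs'; // verify against the data dictionary

// Alternative numeric handling: keep nulls instead of coercing to 0,
// if your database columns are nullable
const toNumber = raw => {
  const n = parseFloat(raw);
  return Number.isFinite(n) ? n : null;
};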

Step 4: Calculate CRE Exposure Ratios and Metrics

Financial ratios reveal CRE pressure better than absolute loan amounts. Small banks with 300% CRE-to-capital ratios face more risk than large banks with $10B CRE portfolios.

Add Function node for calculations:

const banks = $input.all();

// Prior-period records from the PostgreSQL query node (year-ago quarter)
const prior = $('PostgreSQL').all().map(i => i.json);

const calculated = banks.map(item => {
  const bank = item.json;

  // Guard against zero denominators before computing ratios
  const pct = (num, den) => den > 0 ? (num / den) * 100 : 0;

  // Calculate key ratios
  const ratios = {
    cre_pct_loans: pct(bank.total_cre, bank.total_loans),
    cre_pct_capital: pct(bank.total_cre, bank.total_capital),
    construction_pct_capital: pct(bank.construction, bank.total_capital),
    nonaccrual_rate: pct(bank.nonaccrual_cre, bank.total_cre)
  };

  // Year-over-year growth against the year-ago record for this bank;
  // falls back to 0% growth when no prior record exists
  const priorCRE = prior.find(p => p.bank_id === bank.bank_id)?.total_cre || bank.total_cre;
  ratios.cre_growth_yoy = pct(bank.total_cre - priorCRE, priorCRE);

  return {
    json: {
      ...bank,
      ...ratios
    }
  };
});

return calculated;

Critical thresholds:

  • CRE % of capital > 300%: Regulatory scrutiny threshold
  • Construction % of capital > 100%: High-risk concentration
  • Nonaccrual rate > 5%: Asset quality deterioration
  • YoY growth > 50%: Rapid expansion risk
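
These thresholds translate directly into boolean flags you can attach to each record; a minimal sketch for the same Function node:

// Flag regulatory threshold breaches alongside the raw ratios
function thresholdFlags(bank) {
  return {
    cre_concentration_flag: bank.cre_pct_capital > 300,      // regulatory scrutiny
    construction_flag: bank.construction_pct_capital > 100,  // high-risk concentration
    asset_quality_flag: bank.nonaccrual_rate > 5,            // quality deterioration
    rapid_growth_flag: bank.cre_growth_yoy > 50              // expansion risk
  };
}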

Why this approach:
Percentage-based metrics normalize for bank size, enabling apples-to-apples comparisons. Year-over-year growth requires joining current data with year-ago PostgreSQL records, retrieved here via $('PostgreSQL').all() from a preceding query node. The regulatory 300% CRE-to-capital threshold comes from FDIC guidance on CRE concentration risk.

Step 5: Compute CRE Pressure Score

A composite score ranks banks by overall CRE risk, weighting multiple factors.

Add Function node for scoring:

const banks = $input.all();

const scored = banks.map(item => {
  const bank = item.json;
  
  // Weighted scoring model
  const score = 
    (bank.cre_pct_capital * 0.35) +           // 35% weight: Capital concentration
    (bank.construction_pct_capital * 0.25) +  // 25% weight: Construction risk
    (bank.nonaccrual_rate * 0.20) +           // 20% weight: Asset quality
    (bank.cre_growth_yoy * 0.15) +            // 15% weight: Growth rate
    (bank.cre_pct_loans * 0.05);              // 5% weight: Loan mix
  
  return {
    json: {
      ...bank,
      cre_pressure_score: Math.round(score * 100) / 100
    }
  };
});

// Sort by score descending
return scored.sort((a, b) => b.json.cre_pressure_score - a.json.cre_pressure_score);

Scoring weights rationale:

  • Capital concentration (35%): Primary regulatory concern
  • Construction exposure (25%): Highest-risk CRE category
  • Nonaccrual rate (20%): Current credit quality indicator
  • Growth rate (15%): Forward-looking risk signal
  • Loan mix (5%): Portfolio diversification factor

Variables to customize:

  • Adjust weights based on your risk framework (e.g., increase nonaccrual weight during credit cycles)
  • Add additional factors like geographic concentration or loan-to-value ratios
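
One way to make the weights tunable, sketched under the assumption that it runs inside the scoring Function node where bank is already defined (the weight names are illustrative):

// Externalize scoring weights so they can be adjusted without editing formulas
const weights = {
  capital: 0.35,       // capital concentration
  construction: 0.25,  // construction risk
  assetQuality: 0.20,  // nonaccrual rate
  growth: 0.15,        // YoY growth
  loanMix: 0.05        // CRE share of loans
};

const score =
  bank.cre_pct_capital * weights.capital +
  bank.construction_pct_capital * weights.construction +
  bank.nonaccrual_rate * weights.assetQuality +
  bank.cre_growth_yoy * weights.growth +
  bank.cre_pct_loans * weights.loanMix;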

Step 6: Store Data in PostgreSQL

Historical data enables trend analysis and quarter-over-quarter comparisons.

Configure PostgreSQL node:

  1. Add PostgreSQL node
  2. Operation: Insert
  3. Table: fdic_cre_analysis
  4. Columns: Map all bank fields plus calculated ratios

Database schema:

CREATE TABLE fdic_cre_analysis (
  id SERIAL PRIMARY KEY,
  quarter VARCHAR(6) NOT NULL,
  bank_id VARCHAR(10) NOT NULL,
  bank_name VARCHAR(255),
  total_loans DECIMAL(15,2),
  total_cre DECIMAL(15,2),
  nonowner_cre DECIMAL(15,2),
  owner_cre DECIMAL(15,2),
  multifamily DECIMAL(15,2),
  construction DECIMAL(15,2),
  nonaccrual_cre DECIMAL(15,2),
  tier1_capital DECIMAL(15,2),
  total_capital DECIMAL(15,2),
  cre_pct_loans DECIMAL(5,2),
  cre_pct_capital DECIMAL(8,2),
  construction_pct_capital DECIMAL(8,2),
  nonaccrual_rate DECIMAL(5,2),
  cre_growth_yoy DECIMAL(8,2),
  cre_pressure_score DECIMAL(8,2),
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  UNIQUE(quarter, bank_id)
);

CREATE INDEX idx_quarter ON fdic_cre_analysis(quarter);
CREATE INDEX idx_pressure_score ON fdic_cre_analysis(cre_pressure_score DESC);

Why this works:
The UNIQUE(quarter, bank_id) constraint guards against duplicate rows, but note that a plain Insert will error on a rerun; use the PostgreSQL node's upsert operation (as the full template below does) or INSERT ... ON CONFLICT so reruns update in place. Indexes on quarter and cre_pressure_score accelerate queries for ranked lists and trend analysis. Ratio columns that can exceed 999.99 (capital concentration, YoY growth at thinly capitalized banks) use DECIMAL(8,2); the rest match FDIC reporting precision.

Step 7: Generate Ranked Output Files

Stakeholders need Excel and CSV exports with banks ranked by CRE pressure.

Add Spreadsheet File node for CSV:

  1. Add Spreadsheet File node
  2. Operation: Create
  3. File format: CSV
  4. File name: fdic_cre_rankings_{{$now.minus({quarters: 1}).toFormat("yyyy'_Q'q")}}.csv
  5. Include headers: True

Add second Spreadsheet File node for Excel:

  1. Configure for XLSX format with the same dynamic file name
  2. Keep the header row enabled
  3. Note: the Spreadsheet File node writes raw data only; highlighting scores above 500 in red or freezing the header row requires a post-processing step (see the sketch below) or manual formatting in Excel

Node configuration:

{
  "operation": "toFile",
  "fileFormat": "xlsx",
  "options": {
    "fileName": "fdic_cre_rankings_{{$now.minus({quarters: 1}).toFormat('yyyy')}}_Q{{$now.minus({quarters: 1}).toFormat('q')}}.xlsx",
    "headerRow": true
  }
}

Why this approach:
CSV provides universal compatibility for data imports. Excel with conditional formatting gives non-technical stakeholders visual risk indicators, at the cost of one post-processing step. Dynamic file naming with quarter timestamps creates organized archives.
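
A minimal post-processing sketch for the red highlighting and frozen header, assuming a self-hosted instance with the exceljs package installed and allowed via NODE_FUNCTION_ALLOW_EXTERNAL:

// Code node: write an XLSX with red fill on high pressure scores
// Assumes exceljs is installed and permitted via NODE_FUNCTION_ALLOW_EXTERNAL
const ExcelJS = require('exceljs');

const banks = $input.all().map(i => i.json);
const wb = new ExcelJS.Workbook();
const ws = wb.addWorksheet('CRE Rankings');
ws.addRow(['bank_name', 'cre_pressure_score']);

for (const b of banks) {
  const row = ws.addRow([b.bank_name, b.cre_pressure_score]);
  if (b.cre_pressure_score > 500) {
    // Solid red fill flags banks above the alert threshold
    row.getCell(2).fill = {
      type: 'pattern', pattern: 'solid', fgColor: { argb: 'FFFF0000' }
    };
  }
}

ws.views = [{ state: 'frozen', ySplit: 1 }]; // freeze the header row
const buffer = await wb.xlsx.writeBuffer();
return [{
  json: {},
  binary: { data: await this.helpers.prepareBinaryData(Buffer.from(buffer), 'rankings.xlsx') }
}];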

Workflow Architecture Overview

This workflow consists of 12 nodes organized into 4 main sections:

  1. Trigger & Download (Nodes 1-3): Schedule/Webhook triggers initiate HTTP Request to FFIEC servers
  2. Data Processing (Nodes 4-7): Function nodes parse CSV, extract fields, calculate ratios, compute scores
  3. Storage (Nodes 8-9): PostgreSQL insert for historical data, query for prior quarter comparisons
  4. Output Delivery (Nodes 10-12): Spreadsheet File nodes generate CSV/Excel, optional HTTP Request to BI dashboard

Execution flow:

  • Trigger: Quarterly schedule or manual webhook
  • Average run time: 8-12 minutes for 5,000+ banks
  • Key dependencies: PostgreSQL must contain prior quarter data for growth calculations

Critical nodes:

  • HTTP Request (Node 2): Downloads 200MB+ datasets with 5-minute timeout
  • Function (Node 5): Calculates CRE ratios using FDIC-standard formulas
  • Function (Node 6): Computes weighted pressure scores with customizable weights
  • PostgreSQL (Node 8): Stores quarterly snapshots with unique constraints

The complete n8n workflow JSON template is available at the bottom of this article.

Critical Configuration Settings

FFIEC Data Download

Required fields:

  • URL: https://cdr.ffiec.gov/public/PWS/DownloadBulkData.aspx
  • Query parameter quarter: Use ={{$now.minus({quarters: 1}).toFormat("yyyy'Q'q")}} to target the most recent completed quarter
  • Timeout: 300000ms (5 minutes minimum for large files)

Common issues:

  • FFIEC URL changes quarterly → Subscribe to FDIC developer updates
  • File encoding errors → Set response encoding to UTF-8 in HTTP Request options

PostgreSQL Connection

Required settings:

  • Host: Your PostgreSQL server address
  • Database: fdic_analytics (or your database name)
  • User: Account with INSERT and SELECT privileges
  • SSL: Enable for cloud databases (AWS RDS, Google Cloud SQL)

Performance optimization:

  • Batch insert 1000 records at a time to reduce transaction overhead
  • Use connection pooling (set in n8n PostgreSQL credentials)
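
n8n's Loop Over Items (Split In Batches) node can do the chunking, or a Code node can group rows first; a sketch of the latter:

// Chunk parsed banks into batches of 1,000 for downstream inserts
const items = $input.all();
const batchSize = 1000;
const batches = [];

for (let i = 0; i < items.length; i += batchSize) {
  batches.push(items.slice(i, i + batchSize));
}

// One output item per batch; the PostgreSQL node then inserts each
// batch in a single transaction instead of 5,000 individual rows
return batches.map(batch => ({ json: { rows: batch.map(b => b.json) } }));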

CRE Pressure Scoring

Variables to customize:

  • capitalWeight: Default 0.35 (increase during credit tightening cycles)
  • constructionWeight: Default 0.25 (increase for markets with construction booms)
  • assetQualityWeight: Default 0.20 (increase when monitoring credit deterioration)
  • growthWeight: Default 0.15 (increase to flag rapid expansion)

Why this approach:
The weighted scoring model mirrors regulatory CRE concentration guidance. Capital concentration receives highest weight because FDIC uses 300% CRE-to-capital as the primary supervisory threshold. Construction loans get secondary weight due to higher default rates historically. You can adjust weights quarterly based on macroeconomic conditions.

Testing & Validation

Test with sample data:

  1. Download one prior quarter's Call Report manually from FFIEC
  2. Import into PostgreSQL as baseline data
  3. Trigger workflow with webhook using test quarter parameter
  4. Verify output CSV contains expected bank count (should match FDIC institution count)
  5. Spot-check calculations for 5-10 banks against manual Excel formulas

Validation queries:

-- Verify record count matches FDIC total
SELECT quarter, COUNT(*) as bank_count 
FROM fdic_cre_analysis 
GROUP BY quarter;

-- Check for calculation errors (nulls or negatives where impossible)
SELECT * FROM fdic_cre_analysis 
WHERE cre_pct_capital < 0 OR cre_pressure_score IS NULL;

-- Identify top 10 highest pressure banks
SELECT bank_name, cre_pressure_score, cre_pct_capital 
FROM fdic_cre_analysis 
WHERE quarter = '2024Q1'
ORDER BY cre_pressure_score DESC 
LIMIT 10;

Common troubleshooting:

  • Missing banks in output → Check FFIEC field code mappings in parsing Function
  • Incorrect growth calculations → Verify prior quarter data exists in PostgreSQL
  • Score outliers → Review weight configuration and data quality for extreme values

Deployment Considerations

Production Deployment Checklist

| Area | Requirement | Why It Matters |
| --- | --- | --- |
| Error Handling | Retry logic with 3 attempts on HTTP failures | FFIEC servers can time out under peak load around release dates |
| Monitoring | Webhook notification to Slack on completion/failure | Detect processing failures within minutes instead of discovering them at quarter-end |
| Data Validation | Row count verification against FDIC published totals | Missing banks indicate parsing errors that corrupt rankings |
| Backup | PostgreSQL automated backups before each insert | Enables rollback if calculation logic errors corrupt data |
| Documentation | Inline comments in all Function nodes | Saves hours of rework when updating formulas |
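
For the Error Handling row: the HTTP Request node has a built-in Retry On Fail setting, and a code-based download can wrap its own retries. A sketch with linear backoff:

// Retry wrapper for code-based downloads (HTTP Request node users can
// simply enable its built-in Retry On Fail setting instead)
async function fetchWithRetry(url, attempts = 3) {
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      const res = await fetch(url, { signal: AbortSignal.timeout(300000) });
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      return res;
    } catch (err) {
      if (attempt === attempts) throw err;
      // Linear backoff: 30s, then 60s before the next try
      await new Promise(r => setTimeout(r, attempt * 30000));
    }
  }
}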

Scaling for larger datasets:

If analyzing credit unions (NCUA data) or international banks:

  • Increase n8n instance RAM to 8GB+ for datasets over 10,000 institutions
  • Implement batch processing in Function nodes (process 1000 banks per iteration)
  • Add database partitioning by quarter for faster historical queries

Use Cases & Variations

Use Case 1: Regional Bank Risk Monitoring

  • Industry: Regional bank holding companies
  • Scale: 50-200 subsidiary banks
  • Modifications needed: Add geographic filtering in PostgreSQL query, include peer comparison metrics

Use Case 2: Investment Research

  • Industry: Equity research, hedge funds
  • Scale: Monitoring 500+ publicly traded banks
  • Modifications needed: Join with stock price data, calculate CRE exposure correlation to market cap

Use Case 3: Regulatory Compliance

  • Industry: Banking regulators, examiners
  • Scale: Full U.S. banking sector (5,000+ institutions)
  • Modifications needed: Add CAMELS rating integration, flag banks exceeding supervisory thresholds

Use Case 4: Credit Risk Management

  • Industry: Commercial lenders, loan servicers
  • Scale: Counterparty monitoring for 100-500 bank relationships
  • Modifications needed: Add real-time alerts when counterparty scores exceed thresholds

Customizations & Extensions

Alternative Integrations

Instead of PostgreSQL:

  • Supabase: Best for teams wanting built-in API and real-time subscriptions - requires zero node changes (uses PostgreSQL connector)
  • Google BigQuery: Better if analyzing multi-year trends across 20+ quarters - swap PostgreSQL nodes for BigQuery nodes
  • Airtable: Use when non-technical teams need to manually review/annotate rankings - add Airtable node after calculations

Workflow Extensions

Add automated executive reporting:

  • Add Schedule node to run 2 days after quarterly trigger
  • Connect to Google Slides API to generate presentation decks
  • Include top 50 highest-pressure banks with trend charts
  • Nodes needed: +6 (HTTP Request for Slides API, Function for chart data, Set for formatting)

Scale to include credit union data:

  • Add parallel HTTP Request for NCUA Call Report downloads
  • Duplicate parsing Function with NCUA field code mappings
  • Merge datasets before scoring calculation
  • Performance improvement: Analyze 15,000+ institutions (banks + credit unions) in same workflow

Integration possibilities:

| Add This | To Get This | Complexity |
| --- | --- | --- |
| Slack notifications | Real-time alerts when banks exceed 500 pressure score | Easy (2 nodes) |
| Tableau connector | Interactive dashboards with drill-down by state/asset size | Medium (5 nodes) |
| Email distribution | Automated PDF reports to stakeholder list | Easy (3 nodes) |
| Machine learning model | Predictive CRE default probability scores | Advanced (15+ nodes, Python integration) |

Advanced customization: Geographic concentration analysis

Add these nodes after the scoring calculation:

  1. HTTP Request to FDIC institution API for branch location data
  2. Function node to aggregate CRE exposure by state/MSA
  3. PostgreSQL insert to geographic_concentration table
  4. Spreadsheet File node for state-level heatmap data

This extension identifies regional CRE bubbles (e.g., "Texas banks have 45% higher average CRE pressure scores than national median").
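
A sketch of the aggregation step (node 2 above), assuming each item carries a state field added by the hypothetical branch-location lookup:

// Aggregate CRE exposure and average pressure score by state
// Assumes a `state` field from the preceding branch-location lookup
const byState = {};

for (const item of $input.all()) {
  const { state, total_cre, cre_pressure_score } = item.json;
  if (!byState[state]) byState[state] = { state, total_cre: 0, scores: [] };
  byState[state].total_cre += total_cre;
  byState[state].scores.push(cre_pressure_score);
}

return Object.values(byState).map(s => ({
  json: {
    state: s.state,
    total_cre: s.total_cre,
    avg_pressure_score: s.scores.reduce((a, b) => a + b, 0) / s.scores.length
  }
}));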

Get Started Today

Ready to automate your FDIC Call Report analysis?

  1. Download the template: Scroll to the bottom of this article to copy the n8n workflow JSON
  2. Import to n8n: Go to Workflows → Import from URL or File, paste the JSON
  3. Configure your services: Add PostgreSQL credentials, verify FFIEC URL is current
  4. Create database schema: Run the provided SQL to create fdic_cre_analysis table
  5. Test with prior quarter: Download one historical dataset manually to establish baseline
  6. Deploy to production: Set quarterly schedule and activate the workflow

Need help customizing this workflow for multi-quarter trend analysis, peer group comparisons, or integration with your risk management platform? Schedule an intro call with Atherial.


N8N Workflow JSON Excerpt: Triggers and Download

This excerpt shows just the trigger and download wiring; the complete importable template follows in the next section.

{
  "name": "FDIC CRE Analysis Automation",
  "nodes": [
    {
      "parameters": {
        "rule": {
          "interval": [
            {
              "field": "cronExpression",
              "expression": "0 0 1 1,4,7,10 *"
            }
          ]
        }
      },
      "name": "Quarterly Schedule",
      "type": "n8n-nodes-base.scheduleTrigger",
      "typeVersion": 1,
      "position": [250, 300]
    },
    {
      "parameters": {
        "path": "fdic-cre-analysis",
        "method": "POST",
        "authentication": "headerAuth"
      },
      "name": "Manual Webhook",
      "type": "n8n-nodes-base.webhook",
      "typeVersion": 1,
      "position": [250, 500]
    },
    {
      "parameters": {
        "url": "https://cdr.ffiec.gov/public/PWS/DownloadBulkData.aspx",
        "queryParameters": {
          "parameters": [
            {
              "name": "format",
              "value": "csv"
            },
            {
              "name": "quarter",
              "value": "={{$now.format('YYYYQ')}}"
            }
          ]
        },
        "options": {
          "timeout": 300000,
          "response": {
            "response": {
              "responseFormat": "file"
            }
          }
        }
      },
      "name": "Download FDIC Data",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 3,
      "position": [450, 400]
    }
  ],
  "connections": {
    "Quarterly Schedule": {
      "main": [[{"node": "Download FDIC Data", "type": "main", "index": 0}]]
    },
    "Manual Webhook": {
      "main": [[{"node": "Download FDIC Data", "type": "main", "index": 0}]]
    }
  }
}

Complete N8N Workflow Template

Copy the JSON below and import it into your N8N instance via Workflows → Import from File

{
  "name": "FDIC Bank CRE Pressure Ranking Automation",
  "nodes": [
    {
      "id": "schedule-trigger",
      "name": "Quarterly Schedule Trigger",
      "type": "n8n-nodes-base.scheduleTrigger",
      "position": [
        50,
        100
      ],
      "parameters": {
        "rule": {
          "interval": [
            {
              "triggerAtHour": 0,
              "repeatInterval": 13,
              "triggerAtMinute": 0,
              "triggerAtWeekDay": 1
            }
          ]
        }
      },
      "typeVersion": 1.2
    },
    {
      "id": "fdic-api-request",
      "name": "Fetch FDIC Call Report Data",
      "type": "n8n-nodes-base.httpRequest",
      "position": [
        250,
        100
      ],
      "parameters": {
        "url": "https://www.ffiec.gov/api/callreport/data/download",
        "method": "GET",
        "authentication": "none"
      },
      "typeVersion": 4.3
    },
    {
      "id": "parse-csv",
      "name": "Parse CSV Data",
      "type": "n8n-nodes-base.spreadsheetFile",
      "position": [
        450,
        100
      ],
      "parameters": {
        "options": {
          "delimiter": ",",
          "headerRow": true
        },
        "operation": "fromFile",
        "fileFormat": "csv"
      },
      "typeVersion": 2
    },
    {
      "id": "calculate-ratios",
      "name": "Calculate CRE Ratios & Risk Scores",
      "type": "n8n-nodes-base.code",
      "position": [
        650,
        100
      ],
      "parameters": {
        "mode": "runOnceForEachItem",
        "jsCode": "// Extract and validate bank financial data\nconst bank = $json;\n\n// Parse numeric fields\nconst totalLoans = parseFloat(bank.LOAN || 0);\nconst creLoans = parseFloat(bank.LOAN_CRE || 0);\nconst totalAssets = parseFloat(bank.ASSET || 0);\nconst equity = parseFloat(bank.EQUITY || 0);\nconst previousCreLoans = parseFloat(bank.LOAN_CRE_PRIOR || 0);\n\n// Calculate CRE Pressure Ratios\nconst crePercentOfLoans = totalLoans > 0 ? (creLoans / totalLoans) * 100 : 0;\nconst crePercentOfCapital = equity > 0 ? (creLoans / equity) * 100 : 0;\nconst creGrowthRate = previousCreLoans > 0 ? ((creLoans - previousCreLoans) / previousCreLoans) * 100 : 0;\n\n// Calculate CRE Risk Score (0-100)\nlet riskScore = 0;\nif (crePercentOfLoans > 40) riskScore += 25;\nelse if (crePercentOfLoans > 30) riskScore += 15;\nelse if (crePercentOfLoans > 20) riskScore += 5;\n\nif (crePercentOfCapital > 150) riskScore += 25;\nelse if (crePercentOfCapital > 100) riskScore += 15;\nelse if (crePercentOfCapital > 50) riskScore += 5;\n\nif (creGrowthRate > 15) riskScore += 25;\nelse if (creGrowthRate > 10) riskScore += 15;\nelse if (creGrowthRate > 5) riskScore += 5;\n\nif (bank.NPL_RATIO > 2) riskScore += 25;\nelse if (bank.NPL_RATIO > 1) riskScore += 15;\n\nreturn {\n  bankName: bank.BANK_NAME,\n  bankId: bank.FDIC_ID,\n  reportingPeriod: bank.REPORT_DATE,\n  totalLoans: totalLoans,\n  creLoans: creLoans,\n  totalAssets: totalAssets,\n  equity: equity,\n  crePercentOfLoans: parseFloat(crePercentOfLoans.toFixed(2)),\n  crePercentOfCapital: parseFloat(crePercentOfCapital.toFixed(2)),\n  creGrowthRate: parseFloat(creGrowthRate.toFixed(2)),\n  nplRatio: parseFloat(bank.NPL_RATIO || 0),\n  riskScore: Math.min(100, Math.max(0, riskScore)),\n  calculationDate: new Date().toISOString()\n};",
        "language": "javaScript"
      },
      "typeVersion": 2
    },
    {
      "id": "check-existing-data",
      "name": "Check Existing Bank Data",
      "type": "n8n-nodes-base.postgres",
      "position": [
        850,
        50
      ],
      "parameters": {
        "table": "bank_cre_metrics",
        "operation": "select",
        "returnAll": true
      },
      "typeVersion": 2.6
    },
    {
      "id": "store-metrics",
      "name": "Store Metrics in PostgreSQL",
      "type": "n8n-nodes-base.postgres",
      "position": [
        850,
        200
      ],
      "parameters": {
        "table": "bank_cre_metrics",
        "columns": "bank_name,bank_id,reporting_period,total_loans,cre_loans,total_assets,equity,cre_percent_loans,cre_percent_capital,cre_growth_rate,npl_ratio,risk_score,calculation_date,data_ingestion_date",
        "operation": "upsert",
        "updateKey": "bank_id"
      },
      "typeVersion": 2.6
    },
    {
      "id": "retrieve-all-banks",
      "name": "Retrieve All Banks for Ranking",
      "type": "n8n-nodes-base.postgres",
      "position": [
        1050,
        100
      ],
      "parameters": {
        "table": "bank_cre_metrics",
        "operation": "select",
        "returnAll": true
      },
      "typeVersion": 2.6
    },
    {
      "id": "rank-banks",
      "name": "Rank Banks by Risk Score",
      "type": "n8n-nodes-base.sort",
      "position": [
        1250,
        100
      ],
      "parameters": {
        "sortBy": {
          "fields": [
            {
              "name": "risk_score",
              "order": "descending"
            }
          ]
        }
      },
      "typeVersion": 1
    },
    {
      "id": "generate-report",
      "name": "Generate Risk-Ranked Report",
      "type": "n8n-nodes-base.code",
      "position": [
        1450,
        100
      ],
      "parameters": {
        "mode": "runOnceForAllItems",
        "jsCode": "// Generate ranked report with categorization\nconst banks = $json;\nconst rankedReport = {\n  generationDate: new Date().toISOString(),\n  totalBanksAnalyzed: banks.length,\n  highRisk: [],\n  mediumRisk: [],\n  lowRisk: [],\n  summary: {}\n};\n\n// Categorize banks\nbanks.forEach((bank, index) => {\n  const bankData = {\n    rank: index + 1,\n    ...bank\n  };\n  \n  if (bank.risk_score >= 75) {\n    rankedReport.highRisk.push(bankData);\n  } else if (bank.risk_score >= 40) {\n    rankedReport.mediumRisk.push(bankData);\n  } else {\n    rankedReport.lowRisk.push(bankData);\n  }\n});\n\n// Calculate summary statistics\nconst avgScore = (banks.reduce((sum, b) => sum + b.risk_score, 0) / banks.length).toFixed(2);\nconst avgCrePercent = (banks.reduce((sum, b) => sum + b.cre_percent_loans, 0) / banks.length).toFixed(2);\n\nrankedReport.summary = {\n  averageRiskScore: parseFloat(avgScore),\n  averageCREPercentOfLoans: parseFloat(avgCrePercent),\n  highRiskBankCount: rankedReport.highRisk.length,\n  mediumRiskBankCount: rankedReport.mediumRisk.length,\n  lowRiskBankCount: rankedReport.lowRisk.length\n};\n\nreturn rankedReport;",
        "language": "javaScript"
      },
      "typeVersion": 2
    },
    {
      "id": "export-xlsx",
      "name": "Export Report to Excel",
      "type": "n8n-nodes-base.spreadsheetFile",
      "position": [
        1650,
        50
      ],
      "parameters": {
        "options": {
          "fileName": "FDIC_Bank_CRE_Ranking_Report_{{$now.toFormat('yyyy-MM-dd')}}.xlsx",
          "headerRow": true
        },
        "operation": "toFile",
        "fileFormat": "xlsx"
      },
      "typeVersion": 2
    },
    {
      "id": "archive-report",
      "name": "Archive Report in Database",
      "type": "n8n-nodes-base.postgres",
      "position": [
        1650,
        200
      ],
      "parameters": {
        "table": "cre_ranking_reports",
        "columns": "report_date,total_banks,high_risk_count,medium_risk_count,low_risk_count,average_risk_score,report_data",
        "operation": "insert"
      },
      "typeVersion": 2.6
    },
    {
      "id": "manual-trigger",
      "name": "Manual Override Webhook",
      "type": "n8n-nodes-base.webhook",
      "position": [
        50,
        250
      ],
      "parameters": {
        "httpMethod": "GET"
      },
      "typeVersion": 2
    },
    {
      "id": "log-execution",
      "name": "Log Execution Details",
      "type": "n8n-nodes-base.code",
      "position": [
        1850,
        100
      ],
      "parameters": {
        "mode": "runOnceForAllItems",
        "jsCode": "// Log workflow execution details\nconst timestamp = new Date().toISOString();\nconst executionLog = {\n  timestamp: timestamp,\n  workflowName: 'FDIC Bank CRE Pressure Ranking Automation',\n  totalRecordsProcessed: $json.length || 1,\n  status: 'completed',\n  executionId: $execution.id\n};\n\nconsole.log(JSON.stringify(executionLog));\nreturn executionLog;",
        "language": "javaScript"
      },
      "typeVersion": 2
    }
  ],
  "connections": {
    "Parse CSV Data": {
      "main": [
        [
          {
            "node": "Calculate CRE Ratios & Risk Scores",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Export Report to Excel": {
      "main": [
        [
          {
            "node": "Log Execution Details",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Manual Override Webhook": {
      "main": [
        [
          {
            "node": "Fetch FDIC Call Report Data",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Rank Banks by Risk Score": {
      "main": [
        [
          {
            "node": "Generate Risk-Ranked Report",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Archive Report in Database": {
      "main": [
        [
          {
            "node": "Log Execution Details",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Quarterly Schedule Trigger": {
      "main": [
        [
          {
            "node": "Fetch FDIC Call Report Data",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Fetch FDIC Call Report Data": {
      "main": [
        [
          {
            "node": "Parse CSV Data",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Generate Risk-Ranked Report": {
      "main": [
        [
          {
            "node": "Export Report to Excel",
            "type": "main",
            "index": 0
          },
          {
            "node": "Archive Report in Database",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Store Metrics in PostgreSQL": {
      "main": [
        [
          {
            "node": "Retrieve All Banks for Ranking",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Retrieve All Banks for Ranking": {
      "main": [
        [
          {
            "node": "Rank Banks by Risk Score",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Calculate CRE Ratios & Risk Scores": {
      "main": [
        [
          {
            "node": "Store Metrics in PostgreSQL",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  }
}