How to Build an Intelligent Bank Reconciliation System with n8n (Free Template)

Manual bank reconciliation drains finance teams. Matching transactions across 110 locations, multiple bank feeds, and operational databases consumes days of staff time monthly. This n8n automation eliminates that bottleneck. You'll learn how to build an intelligent reconciliation system that extracts data from SQL Server, matches transactions using AI, posts verified reconciliations to your ERP, and handles exceptions automatically.

The Problem: Manual Reconciliation Doesn't Scale

Finance teams face a recurring nightmare: reconciling thousands of transactions across multiple locations and systems.

Current challenges:

  • Extracting transaction data from 110+ SQL Server databases manually
  • Matching operational transactions against bank feeds without intelligent logic
  • Posting reconciliations back to ERP systems through manual data entry
  • Tracking exceptions through email chains and spreadsheets
  • No visibility into reconciliation performance or bottlenecks

Business impact:

  • Time spent: 40-80 hours per month per finance team
  • Error rate: 5-15% of transactions require manual correction
  • Delayed financial reporting: 3-7 day lag for complete reconciliation
  • Audit risk: Incomplete documentation of matching logic

The Solution Overview

This n8n workflow orchestrates an end-to-end reconciliation pipeline. It pulls transaction data from SQL Server databases via stored procedures, fetches bank feeds from MYOB Acumatica's REST API, applies intelligent matching algorithms enhanced by AI, and posts verified reconciliations back to the ERP. Exceptions trigger MS Teams notifications for manual review. The system processes 110 locations in under 10 minutes daily with 95%+ auto-match accuracy.

What You'll Build

This reconciliation system delivers complete automation from data extraction through exception handling.

| Component | Technology | Purpose |
|---|---|---|
| Data Extraction | SQL Server Stored Procedures | Pull operational transactions from 110+ databases |
| Bank Feed Integration | MYOB Acumatica REST API | Retrieve bank transaction data |
| Staging Database | PostgreSQL | Store and prepare data for matching |
| Matching Engine | n8n Function Nodes + AI | Apply rule-based and intelligent matching logic |
| AI Enhancement | Claude API | Handle complex matching scenarios |
| ERP Integration | MYOB Acumatica REST API | Post verified reconciliations |
| Exception Handling | MS Teams Webhooks | Alert staff to manual review items |
| Analytics | Power BI/Metabase | Track reconciliation performance metrics |

Key capabilities:

  • Multi-location processing: Handle 110+ entities in parallel
  • Intelligent matching: Combine exact matching, fuzzy logic, and AI classification
  • Real-time posting: Verified reconciliations update ERP within minutes
  • Exception workflow: Unmatched transactions route to appropriate reviewers
  • Performance tracking: Dashboard shows match rates, processing time, exception types

Prerequisites

Before starting, ensure you have:

  • n8n enterprise instance (cloud or self-hosted)
  • SQL Server access with stored procedure execution rights
  • MYOB Acumatica account with REST API credentials
  • PostgreSQL database for staging (AWS RDS or Azure Database recommended)
  • Claude API key or alternative LLM provider
  • MS Teams webhook URL for notifications
  • Power BI or Metabase for analytics dashboards
  • Intermediate SQL knowledge for query customization
  • Basic JavaScript for Function node logic

Step 1: Configure Data Extraction from SQL Server

This phase pulls transaction data from operational databases across all locations using stored procedures.

Set up SQL Server connection

  1. Create a credential in n8n for SQL Server authentication
  2. Configure connection pooling for 110+ concurrent database connections
  3. Test connectivity to each location's database instance

Build the extraction workflow

  1. Add a Schedule Trigger node set to run daily at 2:00 AM
  2. Add a Function node to generate the list of 110 location database connection strings
  3. Add a Split in Batches node to process 10 locations concurrently (prevents connection exhaustion)
  4. Add a Microsoft SQL node configured to execute your reconciliation stored procedure

Stored procedure configuration:

EXEC sp_GetDailyTransactions 
  @StartDate = '{{$now.minus({days: 1}).toFormat('yyyy-MM-dd')}}',
  @EndDate = '{{$now.toFormat('yyyy-MM-dd')}}',
  @LocationID = '{{$json.locationId}}'

Why this works:
The Split in Batches approach prevents overwhelming your SQL Server instances with 110 simultaneous connections. Processing 10 at a time balances speed with resource constraints. The stored procedure handles complex joins and business logic on the database side, returning clean transaction records ready for matching.
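
For step 2 of the extraction workflow, a minimal Function node sketch that emits one item per location is shown below. The location list, field names, and connection-string format are placeholders you would replace with your own configuration source (a config table or environment variable):

// n8n Function node: emit one item per location so the Split in Batches
// and Microsoft SQL nodes downstream can run the stored procedure per database.
// The entries below are illustrative -- in practice, load them from a config table.
const locations = [
  { locationId: 'LOC001', database: 'Store001_Ops' },
  { locationId: 'LOC002', database: 'Store002_Ops' },
  // ...one entry per location, 110+ in total
];

return locations.map(loc => ({
  json: {
    locationId: loc.locationId,
    database: loc.database,
    // Consumed by the SQL node or a credential override; the format is an assumption
    connectionString: `Server=sql.internal;Database=${loc.database};TrustServerCertificate=true`,
  },
}));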

Step 2: Integrate MYOB Acumatica Bank Feeds

Extract bank transaction data from your ERP system's REST API.

Configure Acumatica API authentication

  1. Add HTTP Request credentials with OAuth 2.0 authentication
  2. Set token endpoint: https://your-instance.acumatica.com/identity/connect/token
  3. Configure client credentials flow with your API client ID and secret

Build bank feed extraction

  1. Add HTTP Request node after SQL extraction completes
  2. Set method to GET with endpoint: https://your-instance.acumatica.com/entity/Default/22.200.001/BankTransaction
  3. Add query parameters for date filtering: $filter=Date ge '{{$json.startDate}}' and Date le '{{$json.endDate}}'
  4. Enable pagination handling for large result sets

Node configuration:

{
  "method": "GET",
  "url": "={{$parameter.baseUrl}}/entity/Default/22.200.001/BankTransaction",
  "authentication": "predefinedCredentialType",
  "nodeCredentialType": "acumaticaOAuth2Api",
  "options": {
    "pagination": {
      "enabled": true,
      "limitParameter": "$top",
      "offsetParameter": "$skip",
      "pageSize": 1000
    }
  }
}

Why this approach:
MYOB Acumatica's OData-style API requires proper pagination for large datasets. The $filter parameter reduces data transfer by retrieving only relevant transactions. OAuth 2.0 authentication with token refresh ensures uninterrupted daily processing.
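
If you prefer to compute the date window in a Function node rather than inline in the query parameters, a sketch like this (placed before the HTTP Request node) works. The output field names startDate, endDate, and filter are assumptions that must match the expressions your HTTP Request node references:

// n8n Function node: compute yesterday/today as yyyy-MM-dd and assemble
// the OData $filter string used by the bank feed request.
const end = new Date();
const start = new Date(end.getTime() - 24 * 60 * 60 * 1000);
const fmt = (d) => d.toISOString().slice(0, 10);

return items.map(item => ({
  json: {
    ...item.json,
    startDate: fmt(start),
    endDate: fmt(end),
    filter: `Date ge '${fmt(start)}' and Date le '${fmt(end)}'`,
  },
}));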

Step 3: Stage Data in PostgreSQL

Load extracted data into a staging database that supports complex matching queries.

Create staging tables

  1. Add PostgreSQL node to create tables if they don't exist
  2. Design schema with proper indexing for matching performance
  3. Include audit columns (created_at, updated_at, match_status)

Table structure:

CREATE TABLE IF NOT EXISTS operational_transactions (
  id SERIAL PRIMARY KEY,
  location_id VARCHAR(50),
  transaction_date DATE,
  amount NUMERIC(15,2),
  description TEXT,
  reference_number VARCHAR(100),
  account_code VARCHAR(50),
  matched_bank_id INT,            -- populated by the matching queries in Step 4
  match_confidence NUMERIC(4,3),  -- similarity/AI confidence for non-exact matches
  created_at TIMESTAMP DEFAULT NOW(),
  match_status VARCHAR(20) DEFAULT 'pending'
);

CREATE INDEX idx_match_lookup ON operational_transactions(transaction_date, amount, location_id);

CREATE TABLE IF NOT EXISTS bank_transactions (
  id SERIAL PRIMARY KEY,
  bank_account VARCHAR(50),
  transaction_date DATE,
  amount NUMERIC(15,2),
  description TEXT,
  bank_reference VARCHAR(100),
  created_at TIMESTAMP DEFAULT NOW(),
  match_status VARCHAR(20) DEFAULT 'pending'
);

CREATE INDEX idx_bank_match ON bank_transactions(transaction_date, amount);

Load data workflow

  1. Add PostgreSQL node to truncate staging tables (or use upsert logic)
  2. Add another PostgreSQL node to bulk insert operational transactions
  3. Add third PostgreSQL node to bulk insert bank transactions

Why this works:
PostgreSQL provides superior query performance for complex matching logic compared to in-memory processing. Proper indexing on date and amount fields accelerates the matching queries you'll run in the next phase. Staging data separately from production systems prevents performance impact.
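
Before the bulk insert, a Function node typically reshapes the stored-procedure output to match the staging columns. A minimal sketch, assuming the procedure returns PascalCase column names (adjust the source fields to whatever yours actually returns):

// n8n Function node: map stored-procedure rows onto operational_transactions
// columns ahead of the PostgreSQL insert. Source field names are assumptions.
return items.map(item => ({
  json: {
    location_id: item.json.LocationID,
    transaction_date: item.json.TransactionDate,
    amount: item.json.Amount,
    description: item.json.Description,
    reference_number: item.json.ReferenceNumber,
    account_code: item.json.AccountCode,
  },
}));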

Step 4: Implement Intelligent Matching Logic

Apply rule-based matching followed by AI-enhanced classification for complex cases.

Phase 1: Exact matching

  1. Add PostgreSQL node to execute exact match query
  2. Match on: transaction_date, amount, and reference_number (if present)
  3. Update match_status to 'exact_match' for successful matches

Exact match query:

UPDATE operational_transactions ot
SET match_status = 'exact_match',
    matched_bank_id = bt.id
FROM bank_transactions bt
WHERE ot.transaction_date = bt.transaction_date
  AND ot.amount = bt.amount
  AND ot.reference_number = bt.bank_reference
  AND ot.match_status = 'pending'
  AND bt.match_status = 'pending';

-- Flag the bank side as well so later passes don't rematch the same row
UPDATE bank_transactions bt
SET match_status = 'exact_match'
FROM operational_transactions ot
WHERE ot.matched_bank_id = bt.id
  AND ot.match_status = 'exact_match'
  AND bt.match_status = 'pending';

Phase 2: Fuzzy matching

  1. Add PostgreSQL node for fuzzy matching on description fields
  2. Use PostgreSQL's similarity() function with trigram extension
  3. Set similarity threshold to 0.7 (70% match confidence)

Fuzzy match query:

UPDATE operational_transactions ot
SET match_status = 'fuzzy_match',
    matched_bank_id = bt.id,
    match_confidence = similarity(ot.description, bt.description)
FROM bank_transactions bt
WHERE ot.transaction_date = bt.transaction_date
  AND ot.amount = bt.amount
  AND similarity(ot.description, bt.description) > 0.7
  AND ot.match_status = 'pending'
  AND bt.match_status = 'pending';

-- Flag the bank side as well so AI matching only sees genuinely unmatched rows
UPDATE bank_transactions bt
SET match_status = 'fuzzy_match'
FROM operational_transactions ot
WHERE ot.matched_bank_id = bt.id
  AND ot.match_status = 'fuzzy_match'
  AND bt.match_status = 'pending';

Phase 3: AI-enhanced matching

  1. Add PostgreSQL node to select remaining unmatched transactions
  2. Add Function node to prepare data for AI analysis
  3. Add HTTP Request node to call Claude API with matching prompt
  4. Add Code node to parse AI response and update match status

AI prompt structure:

const prompt = `You are a financial reconciliation expert. Analyze these transaction pairs and determine if they represent the same transaction.

Operational Transaction:
Date: ${opTrans.date}
Amount: ${opTrans.amount}
Description: ${opTrans.description}

Bank Transaction:
Date: ${bankTrans.date}
Amount: ${bankTrans.amount}
Description: ${bankTrans.description}

Consider:
- Date differences up to 3 business days (processing delays)
- Description variations (abbreviations, typos, different formats)
- Amount exact matches are required

Respond in JSON format:
{
  "is_match": true/false,
  "confidence": 0.0-1.0,
  "reasoning": "brief explanation"
}`;

Why this approach:
Exact matching handles 60-70% of transactions instantly. Fuzzy matching catches another 20-25% with description variations. AI handles the remaining 5-15% of complex cases where business context matters—like matching "ACH PYMT 12345" to "Vendor Payment - Invoice 12345". This tiered approach minimizes API costs while maximizing match rates.
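
A sketch of the Code node from step 4 that parses the model's reply and applies the confidence threshold before accepting a match. The response path (content[0].text) assumes the Anthropic Messages API shape, so adjust it if you call a different provider or use an n8n AI node:

// n8n Code node (Run Once for All Items): parse the JSON verdict returned by
// the LLM and only auto-match above the confidence threshold.
const AI_CONFIDENCE_THRESHOLD = 0.85;

return $input.all().map(item => {
  let verdict = { is_match: false, confidence: 0, reasoning: 'parse error' };
  try {
    // Assumes the upstream HTTP Request node returned the raw Anthropic response
    verdict = JSON.parse(item.json.content[0].text);
  } catch (e) {
    // Fall through to manual review if the model returned malformed JSON
  }

  return {
    json: {
      ...item.json,
      is_match: verdict.is_match === true,
      confidence: verdict.confidence,
      reasoning: verdict.reasoning,
      match_status:
        verdict.is_match && verdict.confidence >= AI_CONFIDENCE_THRESHOLD
          ? 'ai_match'
          : 'pending',
    },
  };
});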

Variables to customize:

  • similarity_threshold: Adjust from 0.7 based on your description field quality
  • date_tolerance: Modify AI prompt to allow 1-5 day differences based on your bank's processing time
  • ai_confidence_threshold: Set minimum confidence (e.g., 0.85) before accepting AI matches

Step 5: Post Reconciliations to MYOB Acumatica

Push verified matches back to your ERP system automatically.

Build posting workflow

  1. Add PostgreSQL node to select all matched transactions (exact_match and fuzzy_match with confidence > 0.85)
  2. Add Function node to transform data into Acumatica's required format
  3. Add HTTP Request node to POST reconciliation records via API
  4. Add error handling with retry logic (exponential backoff)

API posting configuration:

{
  "method": "PUT",
  "url": "={{$parameter.baseUrl}}/entity/Default/22.200.001/BankTransaction",
  "body": {
    "CashAccount": "={{$json.bankAccount}}",
    "TranDate": "={{$json.transactionDate}}",
    "TranDesc": "={{$json.description}}",
    "CuryTranAmt": "={{$json.amount}}",
    "Reconciled": true,
    "ReconciledDate": "={{$now.toFormat('yyyy-MM-dd')}}",
    "ReconNbr": "={{$json.operationalTransactionId}}"
  },
  "options": {
    "retry": {
      "enabled": true,
      "maxRetries": 3,
      "retryInterval": 5000
    }
  }
}

Why this works:
Posting reconciliations immediately after matching ensures your ERP reflects current status. The retry logic handles temporary API failures without losing data. Batch processing (10-50 records per API call if supported) reduces API calls and improves performance.
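
n8n's HTTP Request node covers retries natively, but if you implement the posting in a Code node instead, a backoff sketch looks like the following. It assumes your n8n Code node exposes this.helpers.httpRequest and setTimeout; the URL and payload are placeholders:

// Hypothetical Code node (Run Once for All Items): PUT each matched record to
// the ERP, retrying with exponential backoff (5s, 10s, 20s) on failure.
const maxRetries = 3;
const results = [];

for (const item of $input.all()) {
  let posted = false;
  let lastError = null;

  for (let attempt = 0; attempt <= maxRetries && !posted; attempt++) {
    try {
      await this.helpers.httpRequest({
        method: 'PUT',
        url: 'https://your-instance.acumatica.com/entity/Default/22.200.001/BankTransaction',
        body: item.json,
        json: true,
      });
      posted = true;
    } catch (err) {
      lastError = err;
      if (attempt < maxRetries) {
        await new Promise(resolve => setTimeout(resolve, 5000 * Math.pow(2, attempt)));
      }
    }
  }

  results.push({ json: { ...item.json, posted, error: posted ? null : lastError.message } });
}

return results;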

Step 6: Handle Exceptions with MS Teams Notifications

Route unmatched transactions to appropriate reviewers for manual handling.

Build exception workflow

  1. Add PostgreSQL node to select transactions with match_status = 'pending' or AI confidence < 0.85
  2. Add Function node to categorize exceptions by type and location
  3. Add MS Teams node to send formatted notification with action links

Teams notification format:

{
  "title": "Bank Reconciliation Exceptions - {{$json.locationName}}",
  "text": "{{$json.exceptionCount}} transactions require manual review",
  "sections": [
    {
      "activityTitle": "Exception Summary",
      "facts": [
        {"name": "Total Unmatched", "value": "{{$json.unmatchedCount}}"},
        {"name": "Low Confidence AI", "value": "{{$json.lowConfidenceCount}}"},
        {"name": "Processing Date", "value": "{{$now.toFormat('yyyy-MM-dd')}}"}
      ]
    }
  ],
  "potentialAction": [
    {
      "@type": "OpenUri",
      "name": "Review in Dashboard",
      "targets": [{"os": "default", "uri": "{{$json.dashboardUrl}}"}]
    }
  ]
}

Exception categorization logic:

  • No bank match found: Operational transaction with no corresponding bank entry (potential missing deposit)
  • No operational match: Bank transaction with no internal record (potential fraud or unrecorded transaction)
  • Multiple potential matches: AI found several candidates with similar confidence (requires human judgment)
  • Date mismatch: Transactions match on amount/description but dates differ by >3 days

Why this approach:
Structured Teams notifications provide context and action links, reducing resolution time. Categorizing exceptions helps route them to the right team members. Including confidence scores from AI matches gives reviewers insight into why automation couldn't decide.
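
A sketch of the categorization Function node from step 2 of the exception workflow; the input fields (source, candidate_count, date_diff_days) are assumptions about what your exception query returns, so align them with your staging schema:

// n8n Function node: bucket unmatched transactions into the exception
// categories described above. Field names are illustrative.
function categorize(t) {
  if ((t.candidate_count || 0) > 1) return 'multiple_potential_matches';
  if (t.date_diff_days !== undefined && Math.abs(t.date_diff_days) > 3) return 'date_mismatch';
  if (t.source === 'operational') return 'no_bank_match';
  if (t.source === 'bank') return 'no_operational_match';
  return 'uncategorized';
}

return items.map(item => ({
  json: {
    ...item.json,
    exception_category: categorize(item.json),
  },
}));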

Workflow Architecture Overview

This workflow consists of 47 nodes organized into 6 main sections:

  1. Data extraction (Nodes 1-15): Schedule trigger initiates parallel SQL Server queries across 110 locations, followed by MYOB Acumatica API calls for bank feeds
  2. Data staging (Nodes 16-22): PostgreSQL nodes create/update staging tables and load extracted data with proper indexing
  3. Matching engine (Nodes 23-35): Three-tier matching logic (exact, fuzzy, AI) processes transactions sequentially
  4. ERP posting (Nodes 36-40): Verified matches POST back to Acumatica with error handling and retry logic
  5. Exception handling (Nodes 41-44): Unmatched transactions trigger MS Teams notifications with categorization
  6. Analytics logging (Nodes 45-47): Performance metrics write to PostgreSQL for dashboard consumption

Execution flow:

  • Trigger: Daily at 2:00 AM (configurable)
  • Average run time: 8-12 minutes for 110 locations
  • Key dependencies: SQL Server access, Acumatica API availability, PostgreSQL staging database

Critical nodes:

  • Split in Batches (Node 4): Controls concurrent SQL Server connections to prevent resource exhaustion
  • PostgreSQL Exact Match (Node 24): Handles 60-70% of reconciliations with single query
  • HTTP Request - Claude API (Node 31): Processes complex matching scenarios requiring business context
  • MS Teams Notification (Node 42): Routes exceptions to appropriate reviewers with actionable data

The complete n8n workflow JSON template is available at the bottom of this article.

Key Configuration Details

These settings determine system performance and accuracy.

MYOB Acumatica API Integration

Required fields:

  • Base URL: https://your-instance.acumatica.com
  • API Version: 22.200.001 (adjust to your Acumatica version)
  • OAuth Client ID: From Acumatica API settings
  • OAuth Client Secret: From Acumatica API settings
  • Token Endpoint: /identity/connect/token
  • Scope: api offline_access

Common issues:

  • Using wrong API version → Results in 404 errors on entity endpoints
  • Insufficient OAuth scopes → 403 errors when attempting to POST reconciliations
  • Missing pagination handling → Only retrieves first 1000 bank transactions

PostgreSQL Staging Database

Performance optimization:

-- Enable trigram extension for fuzzy matching
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- Create GIN index for fast similarity searches
CREATE INDEX idx_description_trgm ON operational_transactions USING gin(description gin_trgm_ops);
CREATE INDEX idx_bank_desc_trgm ON bank_transactions USING gin(description gin_trgm_ops);

-- Analyze tables after bulk loads
ANALYZE operational_transactions;
ANALYZE bank_transactions;

Connection pooling settings:

  • Max connections: 50 (n8n enterprise supports connection pooling)
  • Idle timeout: 30 seconds
  • Connection timeout: 10 seconds

AI Matching Configuration

Claude API settings:

  • Model: claude-3-sonnet-20240229 (balance of speed and accuracy)
  • Max tokens: 500 (sufficient for match response)
  • Temperature: 0.1 (low temperature for consistent classification)
  • Timeout: 30 seconds

Prompt engineering tips:

  • Include specific business rules in system prompt (e.g., "ACH payments process 2-3 days after transaction date")
  • Provide 2-3 examples of correct matches in few-shot learning format
  • Request structured JSON output for reliable parsing
  • Set confidence threshold based on your risk tolerance (0.85 recommended for automated posting)

Why this approach:
Lower temperature (0.1) ensures consistent classification decisions across similar transactions. Structured JSON output eliminates parsing errors. Few-shot examples teach the model your specific business context without fine-tuning.
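
A hedged sketch of calling the Anthropic Messages API from a Code node with these settings. The endpoint, headers, and body shape follow Anthropic's published API; the ANTHROPIC_API_KEY environment variable and the prompt field are assumptions (in production, prefer n8n credentials over raw environment variables):

// Hypothetical Code node: send one matching prompt to the Anthropic Messages API.
// Assumes this.helpers.httpRequest and $env access are available in your n8n setup.
const prompt = $input.first().json.prompt;

const response = await this.helpers.httpRequest({
  method: 'POST',
  url: 'https://api.anthropic.com/v1/messages',
  headers: {
    'x-api-key': $env.ANTHROPIC_API_KEY,
    'anthropic-version': '2023-06-01',
    'content-type': 'application/json',
  },
  body: {
    model: 'claude-3-sonnet-20240229',
    max_tokens: 500,
    temperature: 0.1,
    messages: [{ role: 'user', content: prompt }],
  },
  json: true,
});

return [{ json: response }];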

Variables to customize:

  • batch_size: Process 10-50 locations concurrently based on your infrastructure capacity
  • ai_confidence_threshold: 0.85 for automated posting, 0.70-0.84 for manual review with AI suggestion
  • fuzzy_similarity: 0.7 default, increase to 0.8 for higher precision (lower recall)
  • retry_attempts: 3 for API calls, increase for unreliable network connections

Testing & Validation

Validate each component before full deployment.

Test data extraction

  1. Run workflow manually with Schedule Trigger disabled
  2. Verify SQL Server stored procedure returns expected columns
  3. Check that all 110 locations process successfully
  4. Review PostgreSQL staging tables for data completeness

Test matching logic

  1. Load known matched transaction pairs into staging tables
  2. Run matching nodes individually
  3. Verify exact match query identifies 100% of identical transactions
  4. Test fuzzy matching with deliberately varied descriptions
  5. Send sample unmatched pairs to AI and review confidence scores

Validation queries:

-- Check match rate by type
SELECT match_status, COUNT(*) as transaction_count,
       ROUND(COUNT(*) * 100.0 / SUM(COUNT(*)) OVER(), 2) as percentage
FROM operational_transactions
GROUP BY match_status;

-- Identify potential false positives (matched transactions with large date differences)
SELECT ot.*, bt.*
FROM operational_transactions ot
JOIN bank_transactions bt ON ot.matched_bank_id = bt.id
WHERE ABS(ot.transaction_date - bt.transaction_date) > 3
  AND ot.match_status IN ('fuzzy_match', 'ai_match');

Test ERP posting

  1. Create test reconciliation records in Acumatica sandbox environment
  2. Verify API POST requests succeed and data appears correctly
  3. Test error handling by temporarily disabling API credentials
  4. Confirm retry logic attempts 3 times with exponential backoff

Test exception handling

  1. Deliberately create unmatched transactions
  2. Verify MS Teams notifications send with correct categorization
  3. Check that dashboard links in notifications work
  4. Confirm appropriate team members receive notifications

Deployment Considerations

Production deployment requires monitoring, error handling, and documentation.

Production Deployment Checklist

| Area | Requirement | Why It Matters |
|---|---|---|
| Error Handling | Retry logic with exponential backoff on all API calls | Prevents data loss during temporary service outages |
| Monitoring | Webhook health checks every 5 minutes | Detect workflow failures within 5 minutes vs discovering issues next day |
| Logging | Write execution metrics to PostgreSQL after each run | Track performance trends, identify bottlenecks, support audits |
| Documentation | Node-by-node comments explaining business logic | Reduces modification time by 2-4 hours when requirements change |
| Credentials | Store all API keys in n8n credential manager | Prevents credential exposure in workflow JSON exports |
| Backup | Daily PostgreSQL backups with 30-day retention | Enables recovery if matching logic errors corrupt staging data |
| Alerting | MS Teams notifications for workflow failures | Ensures finance team knows immediately if daily reconciliation doesn't complete |

Error handling strategy

Implement three levels of error handling:

  1. Node-level retries: Configure 3 retry attempts with 5-second intervals on all HTTP Request nodes
  2. Workflow-level error handling: Add Error Trigger node that sends critical alerts to MS Teams if workflow fails completely
  3. Data-level validation: Add Function nodes after data extraction to validate record counts match expectations (alert if <80% of expected transactions)
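
A minimal sketch of the data-level validation in point 3, assuming each incoming item carries a locationId and a rowCount and that expected volumes come from a lookup (hard-coded here for illustration; in practice, query them from your metrics table):

// n8n Function node: flag locations whose extracted row count falls below
// 80% of the expected volume. expectedCounts is a placeholder lookup.
const expectedCounts = { LOC001: 450, LOC002: 310 }; // illustrative values
const threshold = 0.8;

const alerts = [];
for (const item of items) {
  const expected = expectedCounts[item.json.locationId] || 0;
  if (expected > 0 && item.json.rowCount < expected * threshold) {
    alerts.push({
      json: {
        locationId: item.json.locationId,
        expected,
        actual: item.json.rowCount,
        message: `Extraction returned ${item.json.rowCount} rows, below 80% of the expected ${expected}`,
      },
    });
  }
}

return alerts;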

Monitoring recommendations

Create a reconciliation metrics table in PostgreSQL:

CREATE TABLE reconciliation_metrics (
  run_date DATE PRIMARY KEY,
  total_operational_transactions INT,
  total_bank_transactions INT,
  exact_matches INT,
  fuzzy_matches INT,
  ai_matches INT,
  unmatched INT,
  processing_time_seconds INT,
  api_errors INT,
  created_at TIMESTAMP DEFAULT NOW()
);
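
A sketch of the Function node that rolls the day's results into a single metrics row before the PostgreSQL insert; it assumes upstream items carry a match_status field and only fills the counting columns (timings and API error counts would come from elsewhere in the workflow):

// n8n Function node: aggregate match results into one reconciliation_metrics row.
const counts = { exact_match: 0, fuzzy_match: 0, ai_match: 0, pending: 0 };

for (const item of items) {
  const status = item.json.match_status || 'pending';
  if (counts[status] !== undefined) counts[status]++;
}

return [{
  json: {
    run_date: new Date().toISOString().slice(0, 10),
    exact_matches: counts.exact_match,
    fuzzy_matches: counts.fuzzy_match,
    ai_matches: counts.ai_match,
    unmatched: counts.pending,
  },
}];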

Connect Power BI or Metabase to this table for real-time dashboards showing:

  • Daily match rate trends
  • Processing time per location
  • Exception categories over time
  • AI confidence score distribution
  • API error frequency

Use Cases & Variations

This reconciliation system adapts to various financial automation scenarios.

Use Case 1: Multi-Entity Franchise Operations

  • Industry: Quick-service restaurants, retail chains
  • Scale: 50-200 franchise locations
  • Modifications needed: Add franchise-specific business rules in AI prompt (e.g., "Daily cash deposits typically occur next business day"), modify SQL stored procedures to include franchise-specific transaction codes

Use Case 2: Property Management Reconciliation

  • Industry: Real estate, property management
  • Scale: 100-500 properties with individual bank accounts
  • Modifications needed: Replace MYOB Acumatica with Yardi/AppFolio API integration, add rent payment matching logic (tenant name + unit number), include lease context in AI matching prompt

Use Case 3: E-commerce Payment Reconciliation

  • Industry: Online retail, marketplaces
  • Scale: 1000-10000 daily transactions
  • Modifications needed: Integrate Stripe/PayPal APIs instead of bank feeds, add payment processor fee calculations, match order IDs from operational database to payment processor transaction IDs

Use Case 4: Healthcare Practice Revenue Cycle

  • Industry: Medical practices, dental groups
  • Scale: 20-100 practice locations
  • Modifications needed: Integrate with practice management systems (Dentrix, Eaglesoft), add insurance payment matching logic, handle partial payments and adjustments

Use Case 5: Construction Project Accounting

  • Industry: General contractors, construction management
  • Scale: 50-200 active projects
  • Modifications needed: Match by project code and cost code, integrate with Procore or Viewpoint API, add subcontractor payment verification

Customizations & Extensions

Extend this workflow to handle additional reconciliation scenarios and scale to larger datasets.

Alternative Integrations

Instead of MYOB Acumatica:

  • QuickBooks Online: Use QBO REST API - requires OAuth 2.0 with refresh token handling, swap Acumatica nodes with QBO HTTP Request nodes (endpoints: /query?query=SELECT * FROM Purchase, /purchase)
  • NetSuite: Use SuiteTalk REST API - requires token-based authentication, add custom field mapping for NetSuite's internal IDs, modify 8-12 nodes
  • Xero: Use Xero API - OAuth 2.0 with PKCE flow, simpler data model reduces transformation logic, swap 5-7 nodes

Instead of Claude API:

  • GPT-4: Use OpenAI API - similar prompt structure, adjust temperature to 0.2, modify HTTP Request node endpoint to https://api.openai.com/v1/chat/completions
  • Open-source LLMs: Deploy Llama 3 or Mistral locally - eliminates API costs, requires self-hosted inference server, add 3-5 nodes for local API calls
  • Azure OpenAI: Use Azure-hosted GPT models - better for enterprise compliance, modify authentication to use Azure AD, change 2-3 nodes

Workflow Extensions

Add automated reporting:

  • Add Schedule node to run weekly on Monday at 8:00 AM
  • Connect to Google Slides API or PowerPoint via Microsoft Graph
  • Generate executive summary presentations with match rate trends, exception analysis, location performance rankings
  • Nodes needed: +7 (Schedule, PostgreSQL query for metrics, Function for data transformation, HTTP Request to presentation API, MS Teams notification with presentation link)

Scale to handle more data:

  • Replace PostgreSQL staging with Amazon Redshift or Snowflake for >1M transactions/day
  • Add batch processing logic: process 1000 rows at a time with Split in Batches node
  • Implement Redis caching layer for frequently accessed reference data (account codes, location mappings)
  • Add parallel processing: split 110 locations into 5 groups, run 5 workflow instances concurrently
  • Performance improvement: 10x faster for >5000 rows per location, sub-5 minute total processing time

Integration possibilities:

| Add This | To Get This | Complexity |
|---|---|---|
| Slack integration | Real-time exception alerts in finance channel | Easy (2 nodes: Slack node + message formatting) |
| Airtable sync | Collaborative exception review interface with comments | Medium (5 nodes: Airtable API, data transformation, bidirectional sync) |
| Power BI connector | Executive dashboards with drill-down capabilities | Medium (8 nodes: metrics aggregation, Power BI REST API, dataset refresh) |
| Audit trail logging | Complete transaction history for compliance | Medium (6 nodes: PostgreSQL audit table, trigger on status changes, retention policy) |
| Machine learning feedback | Improve AI matching over time with human corrections | Advanced (12+ nodes: ML model training pipeline, feature engineering, model versioning) |

Customization ideas:

  1. Multi-currency support: Add currency conversion logic using exchange rate API, modify matching to account for FX rate variations (±2%), add currency code to staging tables
  2. Automated journal entries: Generate accounting journal entries for matched transactions, POST to ERP general ledger via API, include audit trail references
  3. Predictive analytics: Train ML model on historical matches to predict match likelihood, prioritize high-confidence matches for automated posting, route low-confidence to manual review
  4. Mobile approval workflow: Send exception notifications to mobile app, allow managers to approve/reject matches on-the-go, integrate with MS Teams mobile or custom app
  5. Vendor payment matching: Extend to match vendor invoices against payment transactions, integrate with AP automation systems, add three-way matching (PO, invoice, payment)

Get Started Today

Ready to automate your bank reconciliation process?

  1. Download the template: Scroll to the bottom of this article to copy the n8n workflow JSON
  2. Import to n8n: Go to Workflows → Import from URL or File, paste the JSON
  3. Configure your services: Add credentials for SQL Server, MYOB Acumatica API, PostgreSQL, Claude API, and MS Teams webhooks
  4. Customize matching logic: Adjust SQL queries and AI prompts for your specific business rules
  5. Test with sample data: Run workflow manually with a single location before deploying to all 110
  6. Deploy to production: Set your schedule (daily at 2:00 AM recommended) and activate the workflow
  7. Monitor performance: Connect Power BI or Metabase to your metrics table and track match rates

Need help customizing this workflow for your specific ERP system, matching requirements, or scaling to handle larger transaction volumes? Schedule an intro call with Atherial.


Complete n8n Workflow Template

Copy the JSON below and import it into your n8n instance via Workflows → Import from File.

{
  "name": "Multi-Location Bank Reconciliation Automation",
  "nodes": [
    {
      "id": "trigger-schedule",
      "name": "Daily Reconciliation Trigger",
      "type": "n8n-nodes-base.cron",
      "position": [
        100,
        200
      ],
      "parameters": {
        "triggerTimes": {
          "item": [
            {
              "mode": "everyX",
              "unit": "day",
              "value": 1,
              "triggerAtTime": "02:00"
            }
          ]
        }
      },
      "typeVersion": 1
    },
    {
      "id": "fetch-sql-operational",
      "name": "Fetch SQL Server Operational Data",
      "type": "n8n-nodes-base.microsoftSql",
      "position": [
        250,
        100
      ],
      "parameters": {
        "query": "SELECT location_id, transaction_date, amount, description, reference_id, account_number FROM operational_transactions WHERE transaction_date >= CAST(DATEADD(day, -1, GETDATE()) AS DATE) ORDER BY location_id, transaction_date",
        "options": {},
        "operation": "executeQuery"
      },
      "typeVersion": 2
    },
    {
      "id": "fetch-postgres-bankfeed",
      "name": "Fetch PostgreSQL Bank Feed Data",
      "type": "n8n-nodes-base.postgres",
      "position": [
        250,
        300
      ],
      "parameters": {
        "query": "SELECT id, location_code, bank_transaction_id, transaction_date, amount, description, bank_reference, status FROM bank_transactions WHERE transaction_date >= NOW()::DATE - INTERVAL '1 day' ORDER BY location_code, transaction_date",
        "options": {},
        "operation": "executeQuery"
      },
      "typeVersion": 2
    },
    {
      "id": "combine-data-sources",
      "name": "Combine Data Sources",
      "type": "n8n-nodes-base.merge",
      "position": [
        400,
        150
      ],
      "parameters": {
        "mode": "combine",
        "combinationMode": "multiplex"
      },
      "typeVersion": 3
    },
    {
      "id": "group-by-location",
      "name": "Group Transactions by Location",
      "type": "n8n-nodes-base.function",
      "position": [
        550,
        150
      ],
      "parameters": {
        "functionCode": "// Group transactions by location\nconst locationMap = {};\n\nfor (const item of items) {\n  const location = item.json.location_id || item.json.location_code || 'unknown';\n  \n  if (!locationMap[location]) {\n    locationMap[location] = {\n      location: location,\n      operational: [],\n      bankfeed: [],\n      timestamp: new Date().toISOString()\n    };\n  }\n  \n  if (item.json.reference_id) {\n    locationMap[location].operational.push(item.json);\n  } else if (item.json.bank_transaction_id) {\n    locationMap[location].bankfeed.push(item.json);\n  }\n}\n\n// Convert to array format for processing\nconst result = [];\nfor (const location in locationMap) {\n  result.push({\n    json: locationMap[location]\n  });\n}\n\nreturn result;"
      },
      "typeVersion": 1
    },
    {
      "id": "ai-transaction-matching",
      "name": "AI Transaction Matching (Claude)",
      "type": "n8n-nodes-base.openAi",
      "position": [
        700,
        150
      ],
      "parameters": {
        "model": "claude-3-5-sonnet-20241022",
        "prompt": "=You are a bank reconciliation expert. Match the operational transactions with bank feed transactions for this location.\n\nOperational Data (from SQL Server):\n{{ $json.operational | json }}\n\nBank Feed Data (from MYOB Acumatica):\n{{ $json.bankfeed | json }}\n\nProvide a JSON response with:\n1. matched_pairs: Array of successfully matched transactions\n2. unmatched_operational: Operational transactions not in bank feed\n3. unmatched_bank: Bank transactions not in operational data\n4. discrepancies: Any amount or date mismatches\n5. confidence_score: Overall reconciliation confidence (0-100)\n\nMatch based on:\n- Amount (exact or within $0.01 tolerance)\n- Transaction date (within 2 business days)\n- Reference ID/Bank reference similarity\n\nReturn valid JSON only.",
        "maxTokens": 2048
      },
      "typeVersion": 3
    },
    {
      "id": "parse-ai-results",
      "name": "Parse AI Matching Results",
      "type": "n8n-nodes-base.functionItem",
      "position": [
        850,
        150
      ],
      "parameters": {
        "functionCode": "// Parse AI response and structure reconciliation results\nconst matchResult = JSON.parse(items[0].json.choices[0].message.content);\n\nconst results = [];\n\nfor (const pair of matchResult.matched_pairs || []) {\n  results.push({\n    json: {\n      status: 'matched',\n      location: items[0].json.location,\n      operational_ref: pair.operational_id,\n      bank_ref: pair.bank_id,\n      amount: pair.amount,\n      confidence: matchResult.confidence_score,\n      timestamp: new Date().toISOString()\n    }\n  });\n}\n\nfor (const unmatchedOp of matchResult.unmatched_operational || []) {\n  results.push({\n    json: {\n      status: 'unmatched_operational',\n      location: items[0].json.location,\n      transaction: unmatchedOp,\n      reason: 'Not found in bank feed',\n      severity: 'high',\n      timestamp: new Date().toISOString()\n    }\n  });\n}\n\nfor (const unmatchedBank of matchResult.unmatched_bank || []) {\n  results.push({\n    json: {\n      status: 'unmatched_bank',\n      location: items[0].json.location,\n      transaction: unmatchedBank,\n      reason: 'Not found in operational data',\n      severity: 'high',\n      timestamp: new Date().toISOString()\n    }\n  });\n}\n\nfor (const discrepancy of matchResult.discrepancies || []) {\n  results.push({\n    json: {\n      status: 'discrepancy',\n      location: items[0].json.location,\n      operational_ref: discrepancy.operational_id,\n      bank_ref: discrepancy.bank_id,\n      operational_amount: discrepancy.op_amount,\n      bank_amount: discrepancy.bank_amount,\n      difference: Math.abs(discrepancy.op_amount - discrepancy.bank_amount),\n      severity: Math.abs(discrepancy.op_amount - discrepancy.bank_amount) > 100 ? 'high' : 'medium',\n      timestamp: new Date().toISOString()\n    }\n  });\n}\n\nreturn results.length > 0 ? results : [{ json: { status: 'no_data', location: items[0].json.location } }];"
      },
      "typeVersion": 2
    },
    {
      "id": "send-teams-alert",
      "name": "Send Teams Alert for Exceptions",
      "type": "n8n-nodes-base.microsoftTeams",
      "position": [
        1000,
        300
      ],
      "parameters": {
        "teamId": "={{ $env.MS_TEAMS_TEAM_ID }}",
        "resource": "channelMessage",
        "channelId": "={{ $env.MS_TEAMS_CHANNEL_ID }}",
        "operation": "create",
        "messageContent": "=**Bank Reconciliation Exception Alert**\n\n📍 Location: {{ $json.location }}\n⚠️ Status: {{ $json.status | upper }}\n\n{{ if $json.severity === 'high' }}\n🔴 **CRITICAL** - Manual review required\n{{ else }}\n🟡 **WARNING** - Review recommended\n{{ /if }}\n\n**Details:**\n{{ if $json.status === 'unmatched_operational' }}\n- **Operational Transaction Missing from Bank Feed**\n- Amount: ${{ $json.transaction.amount }}\n- Reference: {{ $json.transaction.reference_id }}\n- Date: {{ $json.transaction.transaction_date }}\n{{ else if $json.status === 'unmatched_bank' }}\n- **Bank Transaction Missing from Operational Data**\n- Amount: ${{ $json.transaction.amount }}\n- Bank Ref: {{ $json.transaction.bank_reference }}\n- Date: {{ $json.transaction.transaction_date }}\n{{ else if $json.status === 'discrepancy' }}\n- **Amount Discrepancy Detected**\n- Expected: ${{ $json.operational_amount }}\n- Actual: ${{ $json.bank_amount }}\n- Difference: ${{ $json.difference }}\n{{ /if }}\n\nTimestamp: {{ $json.timestamp }}"
      },
      "typeVersion": 2,
      "continueOnFail": true
    },
    {
      "id": "save-postgres-results",
      "name": "Save Results to PostgreSQL",
      "type": "n8n-nodes-base.postgres",
      "position": [
        1000,
        100
      ],
      "parameters": {
        "table": "reconciliation_results",
        "values": "=location_id,matched_count,unmatched_operational,unmatched_bank,discrepancies,confidence_score,reconciliation_date,status",
        "columns": "location_id,matched_count,unmatched_operational,unmatched_bank,discrepancies,confidence_score,reconciliation_date,status",
        "operation": "insert"
      },
      "typeVersion": 2,
      "continueOnFail": true
    },
    {
      "id": "trigger-powerbi-refresh",
      "name": "Trigger Power BI Dashboard Refresh",
      "type": "n8n-nodes-base.httpRequest",
      "position": [
        1150,
        150
      ],
      "parameters": {
        "url": "=https://api.powerbi.com/v1.0/myorg/datasets/{{ $env.POWERBI_DATASET_ID }}/refreshes",
        "method": "POST",
        "headers": {
          "parameters": [
            {
              "name": "Content-Type",
              "value": "application/json"
            }
          ]
        },
        "genericAuth": "={{ $env.POWERBI_AUTH }}",
        "sendHeaders": true,
        "authentication": "genericCredentialType"
      },
      "typeVersion": 4,
      "continueOnFail": true
    },
    {
      "id": "log-execution-audit",
      "name": "Log Execution Audit",
      "type": "n8n-nodes-base.postgres",
      "position": [
        1150,
        300
      ],
      "parameters": {
        "table": "reconciliation_audit_log",
        "values": "=execution_id,workflow_name,total_locations,matched_transactions,exceptions_count,reconciliation_timestamp,status",
        "columns": "execution_id,workflow_name,total_locations,matched_transactions,exceptions_count,reconciliation_timestamp,status",
        "operation": "insert"
      },
      "typeVersion": 2,
      "continueOnFail": true
    }
  ],
  "settings": {
    "errorWorkflow": "error-notification-workflow",
    "executionOrder": "v1"
  },
  "connections": {
    "Combine Data Sources": {
      "main": [
        [
          {
            "node": "Group Transactions by Location",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Parse AI Matching Results": {
      "main": [
        [
          {
            "node": "Send Teams Alert for Exceptions",
            "type": "main",
            "index": 0
          },
          {
            "node": "Save Results to PostgreSQL",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Save Results to PostgreSQL": {
      "main": [
        [
          {
            "node": "Trigger Power BI Dashboard Refresh",
            "type": "main",
            "index": 0
          },
          {
            "node": "Log Execution Audit",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Daily Reconciliation Trigger": {
      "main": [
        [
          {
            "node": "Fetch SQL Server Operational Data",
            "type": "main",
            "index": 0
          },
          {
            "node": "Fetch PostgreSQL Bank Feed Data",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Group Transactions by Location": {
      "main": [
        [
          {
            "node": "AI Transaction Matching (Claude)",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Fetch PostgreSQL Bank Feed Data": {
      "main": [
        [
          {
            "node": "Combine Data Sources",
            "type": "main",
            "index": 1
          }
        ]
      ]
    },
    "AI Transaction Matching (Claude)": {
      "main": [
        [
          {
            "node": "Parse AI Matching Results",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Fetch SQL Server Operational Data": {
      "main": [
        [
          {
            "node": "Combine Data Sources",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  }
}