How to Build a Semantic FAQ Search API with n8n (Free Template)

Keyword matching fails when customers phrase questions differently than your FAQ database. Your support automation returns wrong answers, frustrating customers and wasting agent time. This article teaches you how to build a semantic FAQ search system using embeddings and vector databases that understands meaning, not just words—integrated directly into your n8n workflow.

The Problem: Keyword Matching Breaks Customer Support Automation

Most automated support systems rely on keyword overlap to match customer questions with FAQ answers. A customer asks "Where's my package?" but your FAQ says "Track your shipment"—no keyword match, wrong answer delivered.

Current challenges:

  • Keyword matching fails when customers use different terminology
  • False positives return irrelevant FAQs that confuse customers
  • Manual FAQ selection wastes support team time reviewing automation failures
  • Growing FAQ databases make keyword matching progressively less accurate as overlapping terms multiply

Business impact:

  • Support teams spend 3-5 hours weekly fixing automation errors
  • Customer satisfaction drops when automated responses miss the mark
  • Growing FAQ libraries (50+ entries) make keyword matching unreliable
  • Wrong answers force customers to contact support again, doubling ticket volume

The Solution Overview

Replace keyword matching with semantic search using embeddings and vector databases. This approach converts both your FAQs and customer questions into numerical vectors that capture meaning. Vector similarity search finds the closest matches regardless of exact wording—"Where's my package?" correctly matches "Track your shipment" because the concepts are semantically similar.
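
Under the hood, "semantically similar" means the embedding vectors point in nearly the same direction. As a minimal illustration (not part of the workflow itself), this sketch computes cosine similarity in plain JavaScript, using toy 3-dimensional vectors in place of the real 1536-dimension embeddings:

// Cosine similarity: 1.0 = same direction (same meaning), ~0 = unrelated
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy vectors standing in for real embeddings of the two phrases
const wheresMyPackage = [0.9, 0.2, 0.1];
const trackYourShipment = [0.85, 0.25, 0.15];
console.log(cosineSimilarity(wheresMyPackage, trackYourShipment)); // ≈ 0.99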

The n8n workflow calls a custom API endpoint that embeds the customer's question, searches your vector database, and returns the top 1-3 most relevant FAQs. You maintain your existing Google Sheets FAQ source and n8n automation—just swap the matching logic for semantic search.

What You'll Build

This solution creates a lightweight FAQ retrieval service that integrates with your existing n8n customer support workflow.

| Component | Technology | Purpose |
|---|---|---|
| FAQ Source | Google Sheets | Centralized FAQ management (question, answer, links) |
| Embedding Engine | OpenAI API | Converts text to 1536-dimension vectors |
| Vector Database | Supabase/pgvector | Stores and searches FAQ embeddings |
| Search API | Node.js/Python microservice | /faq/search endpoint for n8n HTTP requests |
| n8n Integration | HTTP Request node | Calls search API and injects results into AI assistant |

Key capabilities:

  • Semantic matching that understands question intent, not just keywords
  • Returns top 1-3 FAQs ranked by relevance score
  • Handles FAQ updates by re-embedding changed rows
  • Sub-second response times for real-time support automation
  • Simple JSON API that works with any n8n workflow
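
To make that contract concrete, here is an illustrative request/response pair for the /faq/search endpoint built in Step 3 (the values are invented for the example):

POST /faq/search
{
  "query": "Where's my package?",
  "topK": 3
}

Response:

{
  "results": [
    {
      "id": 12,
      "question": "Track your shipment",
      "answer": "Use the tracking link in your confirmation email to see delivery status.",
      "links": "https://example.com/tracking",
      "similarity": 0.89
    }
  ]
}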

Prerequisites

Before starting, ensure you have:

  • n8n instance (cloud or self-hosted)
  • OpenAI API key with access to text-embedding-ada-002 model
  • Supabase account (free tier sufficient) or PostgreSQL with pgvector extension
  • Google Sheets with FAQ data (columns: question, answer, links)
  • Basic JavaScript or Python knowledge for API customization
  • Node.js 18+ or Python 3.9+ installed locally for development

Step 1: Set Up Your Vector Database

Your vector database stores FAQ embeddings and performs similarity searches. Supabase with pgvector offers the simplest setup—managed PostgreSQL with vector search built in.

Create Supabase project:

  1. Sign up at supabase.com and create a new project
  2. Navigate to SQL Editor and run this schema:
-- Enable pgvector extension
CREATE EXTENSION IF NOT EXISTS vector;

-- Create FAQs table with vector column
CREATE TABLE faqs (
  id SERIAL PRIMARY KEY,
  question TEXT NOT NULL,
  answer TEXT NOT NULL,
  links TEXT,
  embedding VECTOR(1536),
  created_at TIMESTAMP DEFAULT NOW(),
  updated_at TIMESTAMP DEFAULT NOW()
);

-- Create index for fast vector similarity search
CREATE INDEX ON faqs USING ivfflat (embedding vector_cosine_ops)
WITH (lists = 100);
  3. Save your Supabase URL and anon key from Project Settings → API

Why this works:

The VECTOR(1536) column matches the dimensionality of OpenAI's embeddings. The ivfflat index (an inverted file index over flat, uncompressed vectors) clusters similar vectors into lists and probes only the nearest clusters at query time, replacing a full table scan with a much faster approximate search as your FAQ database grows. The vector_cosine_ops operator class measures cosine similarity, which works better than Euclidean distance for text embeddings because it compares direction (meaning) rather than magnitude.

Alternative setup:

If you prefer Pinecone, create an index with dimension=1536 and metric=cosine. For self-hosted PostgreSQL, install the pgvector extension on the server, then run CREATE EXTENSION vector; in your database.

Step 2: Build the FAQ Embedding Pipeline

This component reads your Google Sheet, generates embeddings for each FAQ, and populates your vector database.

Configure Google Sheets integration:

  1. In n8n, add a Google Sheets node with "Read" operation
  2. Select your FAQ spreadsheet and sheet name
  3. Set "Range" to include all FAQ rows: A2:C (question, answer, links)
  4. Enable "RAW Data" to get clean JSON output
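
With RAW data enabled, each row arrives as a flat JSON item that downstream nodes can reference directly. An illustrative item (values invented for the example):

{
  "question": "Track your shipment",
  "answer": "Use the tracking link in your confirmation email to check delivery status.",
  "links": "https://example.com/tracking"
}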

Generate embeddings:

Add an HTTP Request node after Google Sheets:

{
  "method": "POST",
  "url": "https://api.openai.com/v1/embeddings",
  "authentication": "predefinedCredentialType",
  "nodeCredentialType": "openAiApi",
  "sendHeaders": true,
  "headerParameters": {
    "parameters": [
      {
        "name": "Content-Type",
        "value": "application/json"
      }
    ]
  },
  "sendBody": true,
  "bodyParameters": {
    "parameters": [
      {
        "name": "input",
        "value": "={{ $json.question }}"
      },
      {
        "name": "model",
        "value": "text-embedding-ada-002"
      }
    ]
  }
}
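
One wiring detail worth noting: the embeddings API nests the vector at data[0].embedding, while the Supabase insert below expects a flat embedding field alongside the original row. A small Function node between the two can bridge the gap—a sketch, assuming the sheet node is named "Google Sheets":

// Merge the OpenAI embedding back onto the original FAQ row
const row = $('Google Sheets').item.json;   // original sheet row (node name assumed)
const embedding = $json.data[0].embedding;  // vector from the embeddings response

return {
  json: {
    question: row.question,
    answer: row.answer,
    links: row.links,
    embedding
  }
};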

Store in vector database:

Add a Supabase node configured for "Insert" operation:

{
  "table": "faqs",
  "data": {
    "question": "={{ $json.question }}",
    "answer": "={{ $json.answer }}",
    "links": "={{ $json.links }}",
    "embedding": "={{ $json.embedding }}"
  }
}

Why this approach:

Processing FAQs one at a time keeps each request small and makes debugging easier. The text-embedding-ada-002 model costs $0.0001 per 1K tokens—embedding 100 FAQs costs less than $0.01. Store the raw embedding array directly in PostgreSQL; pgvector handles the vector type conversion automatically.

Variables to customize:

  • batchSize: Process multiple FAQs per API call (max 2048 tokens total)
  • updateFrequency: Run this workflow daily/weekly to catch FAQ changes
  • errorHandling: Add retry logic for OpenAI API timeouts
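
As a concrete example of the errorHandling item above, here is a minimal retry-with-exponential-backoff wrapper you could use in the search API or an n8n Code node—the attempt counts and delays are illustrative defaults, not tested values:

// Retry an async call with exponential backoff for transient API timeouts
async function withRetry(fn, maxAttempts = 3, baseDelayMs = 500) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === maxAttempts) throw err;
      // Wait 500ms, then 1000ms, then 2000ms, ...
      await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
}

// Usage: wrap the embeddings call
// const response = await withRetry(() =>
//   openai.embeddings.create({ model: 'text-embedding-ada-002', input: question })
// );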

Step 3: Create the Search API Endpoint

Build a lightweight microservice that accepts customer questions and returns relevant FAQs. This example uses Node.js with Express, but Python with FastAPI works identically.

API structure:

const express = require('express');
const { createClient } = require('@supabase/supabase-js');
const OpenAI = require('openai');

const app = express();
app.use(express.json());

const supabase = createClient(
  process.env.SUPABASE_URL,
  process.env.SUPABASE_KEY
);

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY
});

app.post('/faq/search', async (req, res) => {
  const { query, topK = 3 } = req.body;
  if (!query) return res.status(400).json({ error: 'query is required' });

  try {
    // Generate embedding for the customer question
    const embeddingResponse = await openai.embeddings.create({
      model: 'text-embedding-ada-002',
      input: query
    });

    const queryEmbedding = embeddingResponse.data[0].embedding;

    // Vector similarity search via the match_faqs function (defined below)
    const { data, error } = await supabase.rpc('match_faqs', {
      query_embedding: queryEmbedding,
      match_threshold: 0.7,
      match_count: topK
    });

    if (error) return res.status(500).json({ error: error.message });

    res.json({ results: data });
  } catch (err) {
    // Surface OpenAI/network failures instead of crashing on an unhandled rejection
    res.status(502).json({ error: err.message });
  }
});

app.listen(3000);

Create the search function in Supabase:

CREATE OR REPLACE FUNCTION match_faqs(
  query_embedding VECTOR(1536),
  match_threshold FLOAT,
  match_count INT
)
RETURNS TABLE (
  id INT,
  question TEXT,
  answer TEXT,
  links TEXT,
  similarity FLOAT
)
LANGUAGE plpgsql
AS $$
BEGIN
  RETURN QUERY
  SELECT
    faqs.id,
    faqs.question,
    faqs.answer,
    faqs.links,
    1 - (faqs.embedding <=> query_embedding) AS similarity
  FROM faqs
  WHERE 1 - (faqs.embedding <=> query_embedding) > match_threshold
  ORDER BY faqs.embedding <=> query_embedding
  LIMIT match_count;
END;
$$;

Why this works:

The <=> operator calculates cosine distance (1 - cosine similarity). Setting match_threshold: 0.7 filters out irrelevant results—only FAQs with 70%+ semantic similarity return. The function runs entirely in PostgreSQL, avoiding data transfer overhead. Typical response time: 50-150ms for databases with 500+ FAQs.
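
To sanity-check the endpoint end to end, here is a quick smoke test for Node.js 18+ (which ships with a global fetch)—the localhost URL assumes you are running the Express server above locally:

// Smoke-test the search endpoint (run as an ES module or inside an async function)
const res = await fetch('http://localhost:3000/faq/search', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: "Where's my package?", topK: 3 })
});

const { results } = await res.json();
results.forEach(faq =>
  console.log(`${(faq.similarity * 100).toFixed(1)}%  ${faq.question}`)
);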

Deploy options:

  • Railway/Render: Push to GitHub, auto-deploy with environment variables
  • Docker: FROM node:18-alpine, expose port 3000, deploy to any container host
  • Serverless: Adapt to AWS Lambda/Vercel Functions (cold start adds 200-500ms)

Step 4: Integrate Search API with n8n Workflow

Connect your n8n customer support workflow to the semantic search API. This replaces your existing keyword matching logic.

Add HTTP Request node:

Configure after your "Extract Customer Message" node:

{
  "method": "POST",
  "url": "https://your-api-domain.com/faq/search",
  "sendBody": true,
  "contentType": "application/json",
  "bodyParameters": {
    "parameters": [
      {
        "name": "query",
        "value": "={{ $json.customerMessage }}"
      },
      {
        "name": "topK",
        "value": 3
      }
    ]
  },
  "options": {
    "timeout": 10000
  }
}

Parse and inject results:

Add a Function node to format FAQ results for your AI assistant:

const results = $input.item.json.results || [];

// No FAQ cleared the similarity threshold—signal escalation downstream
if (results.length === 0) {
  return { json: { faqContext: 'No matching FAQs found.', topMatch: null, allMatches: [] } };
}

// Format top 3 FAQs for context injection
const faqContext = results.map((faq, index) =>
  `FAQ ${index + 1} (${(faq.similarity * 100).toFixed(1)}% match):
  Q: ${faq.question}
  A: ${faq.answer}
  Links: ${faq.links || 'None'}`
).join('\n\n');

return {
  json: {
    faqContext,
    topMatch: results[0],
    allMatches: results
  }
};

Update AI assistant prompt:

Modify your OpenAI/Claude node to include FAQ context:

You are a customer support assistant. Use these relevant FAQs to answer the customer's question:

{{ $json.faqContext }}

Customer question: {{ $json.customerMessage }}

Provide a helpful response based on the most relevant FAQ. If none match well, acknowledge you need to escalate to a human agent.

Common issues:

  • API timeout errors → Increase n8n HTTP node timeout to 15 seconds
  • Empty results → Lower match_threshold to 0.6 for broader matching
  • Wrong FAQ selected → Review similarity scores; <0.75 may need human review

Workflow Architecture Overview

This solution consists of two n8n workflows working together:

Workflow 1: FAQ Indexing (5 nodes, runs daily)

  1. Schedule Trigger: Runs at 2 AM daily to catch FAQ updates
  2. Google Sheets: Reads all FAQ rows
  3. HTTP Request (OpenAI): Generates embeddings for each question
  4. Supabase: Upserts FAQ data with embeddings
  5. Notification: Sends Slack alert on completion or errors

Workflow 2: Customer Support Automation (9 nodes, runs on webhook)

  1. Webhook Trigger: Receives customer message from support system
  2. Extract Message: Parses JSON payload for question text
  3. Intent Classification: Categorizes as refund/shipping/sales
  4. HTTP Request (Search API): Calls /faq/search with customer question
  5. Parse Results: Formats top 3 FAQs for AI context
  6. OpenAI Assistant: Generates response using FAQ context
  7. Quality Check: Validates response completeness
  8. Send Reply: Posts answer back to customer via API
  9. Log Interaction: Stores in database for analytics

Execution flow:

  • Trigger: Webhook from support platform (Zendesk, Intercom, custom)
  • Average run time: 2-4 seconds end-to-end
  • Key dependencies: OpenAI API, Search API microservice, Supabase

Critical nodes:

  • HTTP Request (Search API): Must return within 10 seconds or workflow times out
  • Parse Results: Handles empty results gracefully, triggers human escalation
  • OpenAI Assistant: Injects FAQ context to ground responses in accurate information

The complete n8n workflow JSON template is available at the bottom of this article.

Key Configuration Details

Search API Performance Tuning

Required environment variables:

SUPABASE_URL=https://your-project.supabase.co
SUPABASE_KEY=your-anon-key
OPENAI_API_KEY=sk-...
MATCH_THRESHOLD=0.7
TOP_K_RESULTS=3

Similarity threshold guidelines:

  • 0.85+: Extremely similar, almost identical phrasing
  • 0.75-0.84: Strong match, same topic and intent
  • 0.65-0.74: Moderate match, related but may need review
  • <0.65: Weak match, likely irrelevant
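
One way to act on these bands in an n8n Function node—a sketch with the cutoffs above hard-coded; tune them against your own test set:

// Map a similarity score to a handling decision (thresholds are illustrative)
function routeBySimilarity(score) {
  if (score >= 0.75) return 'auto_answer';        // strong or near-identical match
  if (score >= 0.65) return 'answer_with_review'; // moderate—flag for human QA
  return 'escalate_to_human';                     // weak match, likely irrelevant
}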

Common issues:

  • Search returns no results → Lower MATCH_THRESHOLD to 0.6
  • Too many irrelevant results → Raise threshold to 0.75
  • Slow response times → Add more pgvector index lists: WITH (lists = 200)

FAQ Update Strategy

Incremental updates:
Instead of re-embedding all FAQs daily, track changes:

// In your indexing workflow Function node
// Assumes a Merge node upstream supplies two inputs: item 0 with the rows
// already in Supabase, item 1 with the current Google Sheets rows, both
// sharing a stable id column
const existingFAQs = $input.all()[0].json; // From Supabase query
const sheetFAQs = $input.all()[1].json;    // From Google Sheets

const changed = sheetFAQs.filter(sheet => {
  const existing = existingFAQs.find(db => db.id === sheet.id);
  // Keep new rows, edited questions (need re-embedding), and edited answers
  return !existing ||
    existing.question !== sheet.question ||
    existing.answer !== sheet.answer;
});

return changed; // Only embed/upsert changed FAQs

Why this approach:
Re-embedding unchanged FAQs wastes API calls and processing time. Comparing question and answer text catches edits to either field. For 100 FAQs with 5% weekly changes, this reduces embedding costs from roughly $0.01 to $0.0005 per run.

Testing & Validation

Test semantic matching accuracy:

Create a test dataset with known question variations:

| Customer Question | Expected FAQ Match | Similarity Score |
|---|---|---|
| "Where's my order?" | "Track your shipment" | Should be >0.75 |
| "How do I get a refund?" | "Refund policy" | Should be >0.80 |
| "My package is late" | "Track your shipment" | Should be >0.70 |

Run these through your /faq/search endpoint and verify:

  1. Correct FAQ appears in top 3 results
  2. Similarity score meets your threshold
  3. Response time stays under 2 seconds

n8n workflow testing:

  1. Use n8n's "Execute Workflow" with manual JSON input
  2. Inject test customer messages that should match specific FAQs
  3. Review the "Parse Results" node output—verify FAQ context formatting
  4. Check AI assistant responses—ensure they reference the correct FAQ

Monitor production accuracy:

Add a feedback loop to your workflow:

// After sending response to customer
return {
  json: {
    timestamp: new Date().toISOString(),
    customerQuestion: $json.customerMessage,
    topFAQMatch: $json.topMatch.question,
    similarityScore: $json.topMatch.similarity,
    aiResponse: $json.finalResponse
  }
};

Store this in a Google Sheet or database. Review weekly for:

  • Questions with low similarity scores (<0.70) that still got answered
  • Repeated customer questions not matching any FAQ (add new FAQs)
  • FAQs that never get matched (remove or rewrite)

Deployment Considerations

Production Deployment Checklist

| Area | Requirement | Why It Matters |
|---|---|---|
| Error Handling | Retry logic with exponential backoff on API calls | Prevents workflow failures from transient network issues |
| Rate Limiting | Queue system for high-volume periods | OpenAI embeddings API limits: 3,000 requests/min |
| Monitoring | Health check endpoint (/health) returning 200 OK | Detect API downtime within 1 minute vs discovering during customer interaction |
| Caching | Redis cache for frequently asked questions | Reduces embedding API calls by 40-60% for common questions |
| Security | API key authentication on /faq/search endpoint | Prevents unauthorized access and API cost abuse |
| Logging | Structured JSON logs with request IDs | Trace customer question → FAQ match → AI response for debugging |

Scaling considerations:

For 1,000+ FAQs:

  • Increase pgvector index lists to 300-500 for faster search
  • Consider chunking long FAQ answers (>500 words) into separate entries
  • Implement FAQ categories to narrow search scope

For 10,000+ requests/day:

  • Deploy multiple API instances behind a load balancer
  • Use connection pooling for Supabase (max 15 connections per instance)
  • Cache embeddings for 24 hours using Redis with FAQ ID as key
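
A sketch of that caching layer using the node-redis client. Note it keys the cache by the normalized customer question (that is what saves repeat embedding calls on the query side); the openai client is carried over from the Step 3 service, and the TTL is an assumption to adapt:

const { createClient } = require('redis');
const redis = createClient({ url: process.env.REDIS_URL });
// await redis.connect(); // call once at service startup

// Return a cached embedding for repeat questions, or compute and cache one
async function getEmbedding(question) {
  const key = `emb:${question.trim().toLowerCase()}`;
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);

  const response = await openai.embeddings.create({
    model: 'text-embedding-ada-002',
    input: question
  });
  const embedding = response.data[0].embedding;
  await redis.set(key, JSON.stringify(embedding), { EX: 86400 }); // 24-hour TTL
  return embedding;
}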

Cost optimization:

| Component | Free Tier | Paid Tier Trigger | Monthly Cost at Scale |
|---|---|---|---|
| Supabase | 500MB database, 2GB bandwidth | >500MB or >2GB transfer | $25/month for 8GB database |
| OpenAI Embeddings | N/A (pay-per-use) | Immediate | $1-5/month for 10K questions |
| n8n Cloud | 5,000 workflow executions | >5,000 executions | $20/month for 30K executions |
| Hosting (Railway) | $5 free credit | After credit expires | $5-10/month for basic API |

Use Cases & Variations

Use Case 1: E-commerce Order Support

  • Industry: Online retail
  • Scale: 2,000 customer inquiries/day
  • Modifications needed: Add product SKU matching to FAQ search, integrate with order management system to inject real-time order status into AI context

Use Case 2: SaaS Product Documentation

  • Industry: B2B software
  • Scale: 500 support tickets/week
  • Modifications needed: Embed entire help docs (not just FAQs), add code snippet extraction, return relevant API documentation links alongside answers

Use Case 3: Healthcare Patient Portal

  • Industry: Medical services
  • Scale: 1,000 appointment/billing questions/day
  • Modifications needed: HIPAA-compliant hosting (AWS/Azure), patient ID verification before FAQ search, separate vector databases for appointment vs billing FAQs

Use Case 4: Financial Services Compliance

  • Industry: Banking/fintech
  • Scale: 300 regulatory questions/week from internal teams
  • Modifications needed: Version control for FAQ embeddings (track regulation changes), audit logging for all searches, human-in-the-loop approval before sending compliance answers

Customizations & Extensions

Alternative Integrations

Instead of Supabase:

  • Pinecone: Best for serverless deployments—no database management, auto-scales, costs $70/month for 100K vectors
  • Weaviate: Better if you need multi-language support—built-in translation, self-hosted or cloud, requires 2GB+ RAM
  • Qdrant: Use when you need on-premise deployment—Docker-based, fast performance, no external dependencies

Instead of OpenAI embeddings:

  • Cohere: Multilingual embeddings at competitive per-1K-token pricing—swap the API endpoint in the HTTP node
  • Sentence Transformers (self-hosted): Free, run on your own GPU, 200ms latency vs 500ms for API calls, requires Python service

Workflow Extensions

Add automated FAQ gap analysis:

  • Add a Function node after search results to detect low similarity scores (<0.65)—see the sketch after this list
  • Aggregate questions without good matches weekly
  • Send report to support team: "Top 10 customer questions not covered by FAQs"
  • Nodes needed: +4 (Function, Aggregate, Schedule, Email)
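
A minimal version of that detection Function node—field names follow the Parse Results output from Step 4, and the node reference uses the "Extract Customer Message" node named there:

// Flag questions where even the best FAQ match is weak
const topMatch = $json.topMatch;
const score = topMatch ? topMatch.similarity : 0;

return {
  json: {
    customerQuestion: $('Extract Customer Message').item.json.customerMessage,
    bestScore: score,
    faqGap: score < 0.65 // true → roll into the weekly gap report
  }
};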

Implement multi-language support:

  • Detect customer question language using HTTP Request to Google Translate API
  • Translate to English before embedding (keeps single vector database)
  • Translate FAQ answer back to customer's language before sending
  • Performance impact: +300ms per request, +$0.002 per translation
  • Nodes needed: +6 (2x HTTP Request for translation, 2x Function for formatting)

Scale to handle product-specific FAQs:

  • Add product category metadata to FAQ table
  • Filter vector search by category before similarity matching (sketched after this list)
  • Reduces search space by 70-90% for large catalogs
  • Example: "Shipping question about laptops" only searches laptop FAQs
  • Nodes needed: +2 (Function to extract product category, Supabase filter parameter)
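
On the API side, the category filter could look like the sketch below—match_faqs_by_category is a hypothetical variant of the Step 3 match_faqs function with an extra WHERE clause on a category column (both the function and the column are assumptions):

// Hypothetical category-scoped search; assumes a 'category' column on faqs
// and a match_faqs_by_category Postgres function mirroring match_faqs
const { data, error } = await supabase.rpc('match_faqs_by_category', {
  query_embedding: queryEmbedding,
  match_threshold: 0.7,
  match_count: topK,
  filter_category: 'laptops' // extracted upstream from the customer message
});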

Integration possibilities:

| Add This | To Get This | Complexity |
|---|---|---|
| Slack notifications | Alert team when FAQ confidence <0.70 | Easy (2 nodes: IF, Slack) |
| Analytics dashboard | Track most-asked questions, FAQ coverage gaps | Medium (5 nodes: Postgres, Aggregate, Google Sheets) |
| A/B testing framework | Compare semantic search vs keyword matching accuracy | Medium (8 nodes: Split, 2x search paths, Merge, Compare) |
| Voice integration | Handle phone support with speech-to-text → FAQ search | Hard (12 nodes: Twilio, Deepgram, search, text-to-speech) |

Get Started Today

Ready to eliminate keyword matching failures in your customer support automation?

  1. Download the template: Scroll to the bottom of this article to copy the n8n workflow JSON for both FAQ indexing and customer support automation
  2. Set up your vector database: Create a free Supabase account, run the SQL schema from Step 1, save your API credentials
  3. Deploy the search API: Clone the Node.js example, add environment variables, deploy to Railway or Render (5-minute setup)
  4. Import to n8n: Import the workflow JSON into your instance (Workflows → Import from File, or paste it from your clipboard), then configure your Google Sheets and API credentials
  5. Test with sample data: Run the FAQ indexing workflow first, then test the search API with example customer questions
  6. Connect to your support system: Replace the webhook trigger URL in your existing support platform

Expected results:

  • FAQ matching accuracy improves from 60-70% to 85-95%
  • Response generation time stays under 3 seconds
  • Support team spends 80% less time fixing automation errors

Need help customizing this workflow for your specific support system, scaling to handle thousands of FAQs, or integrating with your existing customer data platform? Schedule an intro call with Atherial at https://atherial.ai/contact—we'll review your setup and provide implementation guidance.

Complete n8n Workflow Template

Copy the JSON below and import it into your n8n instance via Workflows → Import from File.

{
  "name": "Semantic FAQ Retrieval with Vector Search",
  "nodes": [
    {
      "id": "webhook_faq",
      "name": "FAQ Search Webhook",
      "type": "n8n-nodes-base.webhook",
      "position": [
        100,
        200
      ],
      "parameters": {
        "path": "faq-search",
        "httpMethod": "POST",
        "responseData": "firstEntryJson",
        "responseMode": "lastNode"
      },
      "typeVersion": 2
    },
    {
      "id": "extract_question",
      "name": "Extract Question",
      "type": "n8n-nodes-base.function",
      "position": [
        300,
        200
      ],
      "parameters": {
        "jsonCode": "return { \"question\": $json.question || 'How can I help?' }"
      },
      "typeVersion": 1
    },
    {
      "id": "embeddings_openai",
      "name": "Create Question Embedding",
      "type": "@n8n/n8n-nodes-langchain.embeddingsOpenAi",
      "position": [
        500,
        200
      ],
      "parameters": {
        "model": "text-embedding-3-small"
      },
      "typeVersion": 1
    },
    {
      "id": "vector_search_pinecone",
      "name": "Search Pinecone Vector Store",
      "type": "@n8n/n8n-nodes-langchain.vectorStorePinecone",
      "position": [
        700,
        200
      ],
      "parameters": {
        "mode": "load",
        "topK": 3,
        "prompt": "{{ $json.question }}"
      },
      "typeVersion": 1
    },
    {
      "id": "read_faqs",
      "name": "Read FAQ Database",
      "type": "n8n-nodes-base.googleSheets",
      "position": [
        700,
        400
      ],
      "parameters": {
        "range": "A:E",
        "options": {},
        "operation": "read"
      },
      "typeVersion": 4
    },
    {
      "id": "format_results",
      "name": "Format Search Results",
      "type": "n8n-nodes-base.function",
      "position": [
        900,
        300
      ],
      "parameters": {
        "functionCode": "// Format vector search results with FAQ metadata\nconst searchResults = $json.search_results || [];\nconst faqData = $json.faq_data || [];\n\nconst enrichedResults = searchResults.map((result, index) => {\n  const matchingFaq = faqData.find(faq => faq.id === result.id);\n  return {\n    rank: index + 1,\n    question: result.question || matchingFaq?.question || 'N/A',\n    answer: result.answer || matchingFaq?.answer || 'N/A',\n    similarity_score: result.similarity_score || 0,\n    category: matchingFaq?.category || 'General',\n    metadata: {\n      vector_id: result.id,\n      last_updated: matchingFaq?.last_updated || 'Unknown'\n    }\n  };\n});\n\nreturn [{ json: { results: enrichedResults, total_results: enrichedResults.length } }];"
      },
      "typeVersion": 1
    },
    {
      "id": "log_faq_query",
      "name": "Log FAQ Query",
      "type": "n8n-nodes-base.httpRequest",
      "position": [
        1100,
        300
      ],
      "parameters": {
        "url": "{{ $env.FAQ_API_ENDPOINT || 'https://api.example.com/faq/log' }}",
        "method": "POST",
        "sendBody": true,
        "contentType": "json",
        "bodyParameters": {
          "parameters": [
            {
              "name": "question",
              "value": "{{ $json.question }}"
            },
            {
              "name": "results_returned",
              "value": "{{ $json.results.length }}"
            },
            {
              "name": "top_answer",
              "value": "{{ $json.results[0].answer }}"
            },
            {
              "name": "timestamp",
              "value": "{{ $now.toISO() }}"
            }
          ]
        }
      },
      "typeVersion": 4
    }
  ],
  "connections": {
    "Extract Question": {
      "main": [
        [
          {
            "node": "Create Question Embedding",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Read FAQ Database": {
      "main": [
        [
          {
            "node": "Format Search Results",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "FAQ Search Webhook": {
      "main": [
        [
          {
            "node": "Extract Question",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Format Search Results": {
      "main": [
        [
          {
            "node": "Log FAQ Query",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Create Question Embedding": {
      "main": [
        [
          {
            "node": "Search Pinecone Vector Store",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Search Pinecone Vector Store": {
      "main": [
        [
          {
            "node": "Format Search Results",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  }
}