n8n Workflow Tutorial: Build Your First Lead Enrichment System
Learn how to build a production-grade lead enrichment workflow with n8n, OpenAI, Supabase, and Slack. Real code examples, error handling strategies, and scaling tips from 200+ workflows I've deployed for clients.
Introduction: Why n8n Over Zapier or Make.com
I've built over 200 automation workflows across Zapier, Make.com (formerly Integromat), and n8n. While each platform has its strengths, n8n has become my go-to for production systems that need to scale beyond 10,000 operations per month. Here's why.
First, n8n is self-hosted, which means you own your data and pay for infrastructure instead of per-execution pricing. For high-volume workflows like lead enrichment, this saves thousands. A client I worked with at Vetcelerator was spending $800/month on Zapier for 50k tasks. We migrated to n8n on a $20/month Railway instance and cut costs by 96%.
Second, n8n gives you actual code access. Need to transform data with JavaScript? Write a Function node. Want to make a custom API call with retry logic? Use the HTTP Request node with full header and error handling control. Zapier's "Code by Zapier" caps execution at 10 seconds and doesn't support npm packages. n8n lets you run complex operations without those restrictions.
Third, the workflow visualization in n8n is superior for debugging. You can inspect every node's input and output in real-time, rerun individual nodes, and see exactly where errors occur. After fixing a bug, you don't need to wait for a new webhook trigger—just click "Execute Workflow" and test immediately.
In this n8n workflow tutorial, I'll show you how to build a lead enrichment system that receives webhook data, qualifies leads with OpenAI, stores them in Supabase, and sends Slack notifications. This is the exact pattern I use for clients processing 10,000+ leads per day with 99.7% uptime.
Prerequisites: Self-Hosted Setup or Cloud
Before we build the workflow, you need an n8n instance running. You have two options: self-hosted or n8n Cloud. I recommend self-hosted for production because it's cheaper and you control your data, but Cloud is fine for testing this tutorial.
Option 1: Self-Hosted n8n on Railway (Recommended)
Railway makes n8n deployment ridiculously easy. Here's how to get it running in 5 minutes:
- Go to railway.app and sign up with GitHub
- Click "New Project" and select "Deploy from Template"
- Search for "n8n" in the template gallery and select the official n8n template
- Set environment variables:
N8N_BASIC_AUTH_USER=your-username
N8N_BASIC_AUTH_PASSWORD=strong-password
N8N_ENCRYPTION_KEY=random-32-character-string (generate with openssl rand -hex 16)
WEBHOOK_URL=auto-generated by Railway
- Click "Deploy" and wait 2-3 minutes for provisioning
- Railway will give you a URL like your-app.up.railway.app
Cost: $5-20/month depending on usage. The Starter plan includes 500 execution hours, which is enough for 50k+ workflow runs.
Option 2: n8n Cloud (Easier but More Expensive)
If you don't want to deal with hosting, n8n Cloud is a managed solution. Go to n8n.cloud, sign up, and you'll have an instance running immediately. Free tier includes 5k executions/month. Pro tier is $20/month for 20k executions.
Required API Keys
For this tutorial, you'll also need:
- OpenAI API key — Get from platform.openai.com (GPT-4 access recommended)
- Supabase project — Free tier from supabase.com with API key and project URL
- Slack webhook URL — Create incoming webhook at api.slack.com/messaging/webhooks
Once you have n8n running and your API keys ready, we can start building the workflow.
Step 1: Webhook Node for Incoming Leads
The first node in our lead enrichment workflow is a Webhook trigger. This allows external systems (like your website form, CRM, or ad platform) to send lead data into n8n for processing.
Creating the Webhook Node
In your n8n editor, click the "+" button and search for "Webhook". Configure it as follows:
- HTTP Method: POST
- Path: lead-enrichment
- Authentication: Header Auth (add a secret token)
- Response Mode: "When Last Node Finishes" (so sender gets success confirmation)
After saving the node, n8n will generate a webhook URL like:
https://your-n8n.railway.app/webhook/lead-enrichment
This is the URL you'll give to your lead sources. When someone submits a form, your backend should POST to this webhook with JSON data like:
{
"email": "john@example.com",
"name": "John Smith",
"phone": "+1-555-0123",
"company": "Acme Corp",
"job_title": "VP of Marketing",
"message": "Interested in Hyros attribution setup",
"utm_source": "google",
"utm_campaign": "q1-leads"
}
Testing the Webhook
Before building the rest of the workflow, test the webhook. Click "Execute Workflow" in n8n (top right). The node will wait for incoming data. Then use curl or Postman to send a test payload:
curl -X POST https://your-n8n.railway.app/webhook/lead-enrichment \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_SECRET_TOKEN" \
-d '{
"email": "test@example.com",
"name": "Test Lead",
"company": "Test Corp"
}'
If configured correctly, you'll see the data appear in the Webhook node's output panel. This confirms your endpoint is working and ready to receive real leads.
Pro tip: Always use authentication on webhook endpoints. I've seen unsecured webhooks get spammed by bots, racking up thousands of unwanted executions. The header auth token should be a random 32+ character string stored in environment variables, not hardcoded.
Step 2: OpenAI Node for Lead Qualification
Now that we have lead data coming in, let's qualify it with AI. This is where n8n shines—you can call the OpenAI API directly without writing backend code. I use GPT-4 to analyze lead messages and assign a qualification score (A, B, C, D).
Setting Up OpenAI Credentials
First, add your OpenAI API key to n8n credentials:
- In n8n, go to Settings → Credentials
- Click "New Credential" and search for "OpenAI"
- Paste your API key from platform.openai.com/api-keys
- Save with a name like "OpenAI - Lead Qualification"
Building the OpenAI Node
Add an "OpenAI" node after the Webhook. Configure it with:
- Resource: Message
- Operation: Create a Completion
- Model: gpt-4 (or gpt-4-turbo for speed)
- Prompt: Custom qualification prompt (see below)
Here's the prompt I use for lead qualification. This is critical—prompt quality directly affects output accuracy:
You are a lead qualification assistant for VIXI LLC, an AI automation and Hyros attribution agency.
Analyze the following lead and assign a grade (A, B, C, or D):
- A: Perfect fit (mentions Hyros, n8n, AI agents, high budget signals)
- B: Good fit (marketing agency, needs automation, asks specific questions)
- C: Maybe (vague inquiry, no clear budget or authority)
- D: Poor fit (obvious spam, irrelevant services, no contact info)
Lead data:
Name: {{ $json.name }}
Email: {{ $json.email }}
Company: {{ $json.company }}
Job Title: {{ $json.job_title }}
Message: {{ $json.message }}
UTM Source: {{ $json.utm_source }}
Respond with JSON only:
{
"grade": "A/B/C/D",
"reasoning": "Brief explanation",
"priority": "high/medium/low",
"suggested_action": "What to do next"
}
Notice how I'm using n8n's expression syntax ({{ $json.field }}) to inject webhook data into the prompt. This dynamically customizes the qualification for each lead.
Parsing the OpenAI Response
After the OpenAI node, add a "Code" node (set to "Run Once for Each Item") to parse the JSON response:
// Parse the OpenAI response
const openaiOutput = $input.item.json.choices[0].message.content;
// Pull the original lead fields back in from the Webhook node
const lead = $('Webhook').item.json;
try {
  // Extract JSON from OpenAI's response (it sometimes wraps it in markdown)
  const jsonMatch = openaiOutput.match(/\{[\s\S]*\}/);
  if (!jsonMatch) throw new Error('No JSON object found in AI response');
  const qualification = JSON.parse(jsonMatch[0]);
  // Combine with original lead data
  return {
    json: {
      ...lead,
      qualification: {
        grade: qualification.grade,
        reasoning: qualification.reasoning,
        priority: qualification.priority,
        suggested_action: qualification.suggested_action
      },
      enriched_at: new Date().toISOString()
    }
  };
} catch (error) {
  // Fallback if parsing fails
  return {
    json: {
      ...lead,
      qualification: {
        grade: 'C',
        reasoning: 'Failed to parse AI response',
        priority: 'medium',
        suggested_action: 'Manual review required'
      },
      enriched_at: new Date().toISOString(),
      error: error.message
    }
  };
}
This code safely parses OpenAI's output and combines it with the original lead data. The try-catch ensures that even if the AI returns malformed JSON, the workflow doesn't crash—it just flags the lead for manual review.
Real-world impact: For Vetcelerator, this AI qualification reduced manual lead review time by 80%. Instead of scanning 500+ leads per week, their team now only reviews A and B grade leads (about 120), saving 15 hours per week.
Step 3: Supabase Upsert for CRM Sync
Now we have enriched, qualified lead data. Time to store it in a database. I use Supabase (Postgres) as the CRM backend for most n8n workflows because n8n has a native Supabase node, and Supabase offers row-level security and a generous free tier.
Setting Up Supabase Table
In your Supabase project, create a "leads" table with this schema:
CREATE TABLE leads (
  id UUID DEFAULT gen_random_uuid() PRIMARY KEY,
  email TEXT UNIQUE NOT NULL,
  name TEXT,
  phone TEXT,
  company TEXT,
  job_title TEXT,
  message TEXT,
  utm_source TEXT,
  utm_campaign TEXT,
  qualification_grade TEXT,
  qualification_reasoning TEXT,
  priority TEXT,
  suggested_action TEXT,
  enriched_at TIMESTAMP,
  created_at TIMESTAMP DEFAULT NOW(),
  updated_at TIMESTAMP DEFAULT NOW()
);

-- Index for fast email lookups (upsert key)
CREATE INDEX idx_leads_email ON leads(email);

-- Index for filtering by grade
CREATE INDEX idx_leads_grade ON leads(qualification_grade);
The UNIQUE constraint on email is critical—it allows us to upsert (update if exists, insert if new) without duplicate leads.
Configuring the Supabase Node
Add a "Supabase" node after your Function node. Set it up as:
- Resource: Row
- Operation: Create or Update (Upsert)
- Table: leads
- Conflict Target: email
- Return Fields: * (all fields)
In the "Columns" section, map fields from your Function node output:
email = {{ $json.email }}
name = {{ $json.name }}
phone = {{ $json.phone }}
company = {{ $json.company }}
job_title = {{ $json.job_title }}
message = {{ $json.message }}
utm_source = {{ $json.utm_source }}
utm_campaign = {{ $json.utm_campaign }}
qualification_grade = {{ $json.qualification.grade }}
qualification_reasoning = {{ $json.qualification.reasoning }}
priority = {{ $json.qualification.priority }}
suggested_action = {{ $json.qualification.suggested_action }}
enriched_at = {{ $json.enriched_at }}
updated_at = {{ $now }}
Why Upsert Instead of Insert?
Upsert is crucial for lead workflows because the same person might submit forms multiple times. Instead of creating duplicate records, upsert updates the existing lead with new information. This is how I handle returning visitors:
- First submission: Inserts new lead with grade C (vague inquiry)
- Second submission (3 days later): Updates same record to grade A (now mentions specific services)
- Benefit: You see the complete lead journey in one record, not fragmented across duplicates
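The upsert semantics above can be sketched in plain JavaScript, as an in-memory stand-in for what Postgres does with ON CONFLICT (email) DO UPDATE (illustrative only: upsertLead and the Map store are not n8n or Supabase APIs):

```javascript
// In-memory sketch of upsert-by-email: update the record if the unique
// email key already exists, insert a new record otherwise.
function upsertLead(store, lead) {
  const existing = store.get(lead.email);
  if (existing) {
    // Conflict on the unique email key: merge new fields into the record
    store.set(lead.email, { ...existing, ...lead, updated_at: new Date().toISOString() });
  } else {
    store.set(lead.email, { ...lead, created_at: new Date().toISOString() });
  }
  return store.get(lead.email);
}

const store = new Map();
upsertLead(store, { email: 'jane@acme.com', qualification_grade: 'C' });
upsertLead(store, { email: 'jane@acme.com', qualification_grade: 'A', company: 'Acme' });
// store holds ONE record for jane@acme.com, now grade A with the merged company
```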
For a CQ Marketing client, implementing upsert logic reduced duplicate leads from 38% to under 2%, dramatically improving CRM data quality and sales team efficiency.
Step 4: Slack and Email Notifications
The workflow is almost complete. We're capturing leads, enriching them with AI, and storing them in Supabase. Now let's notify your sales team in real-time so they can respond to hot leads immediately.
Setting Up Slack Webhooks
Go to api.slack.com/messaging/webhooks and create an incoming webhook for your desired channel (e.g., #leads). Copy the webhook URL—it looks like https://hooks.slack.com/services/T00/B00/XXX.
Conditional Notifications for A/B Grade Leads Only
You don't want Slack alerts for every lead—only high-priority ones. Add an "IF" node after Supabase:
- Condition: {{ $json.qualification_grade }} is one of: A, B
- True output: Connect to Slack node
- False output: End workflow (no notification for C/D leads)
Building the Slack Message
Add a "Slack" node on the True branch. Use this message format:
:fire: New *Grade {{ $json.qualification_grade }}* Lead!
*Name:* {{ $json.name }}
*Email:* {{ $json.email }}
*Company:* {{ $json.company }}
*Job Title:* {{ $json.job_title }}
*Message:*
{{ $json.message }}
*AI Analysis:*
Priority: {{ $json.priority }}
{{ $json.qualification_reasoning }}
*Suggested Action:*
{{ $json.suggested_action }}
*Lead Source:*
UTM Source: {{ $json.utm_source }}
UTM Campaign: {{ $json.utm_campaign }}
<https://your-crm.com/leads/{{ $json.id }}|View in CRM>
This rich notification gives your sales team everything they need to respond intelligently—no need to open the CRM for initial context.
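As a sanity check on the template, here is a hypothetical helper that assembles the same kind of message text in plain JavaScript (buildSlackMessage is an illustrative name only; in the workflow itself the Slack node renders the template for you):

```javascript
// Build the Slack notification text from an enriched lead record,
// using the same fields the Slack node template references.
function buildSlackMessage(lead) {
  return [
    `:fire: New *Grade ${lead.qualification_grade}* Lead!`,
    `*Name:* ${lead.name}`,
    `*Email:* ${lead.email}`,
    `*Company:* ${lead.company}`,
    `*AI Analysis:* ${lead.qualification_reasoning}`,
    `*Suggested Action:* ${lead.suggested_action}`,
  ].join('\n');
}

const msg = buildSlackMessage({
  qualification_grade: 'A',
  name: 'Jane Doe',
  email: 'jane@acme.com',
  company: 'Acme Corp',
  qualification_reasoning: 'Mentions Hyros and a concrete budget',
  suggested_action: 'Call within the hour',
});
```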
Optional: Email Notifications for Grade A Leads
For the absolute hottest leads (Grade A), I also send email alerts. Add another IF node checking for Grade A, then use the "Send Email" node with SMTP or a service like SendGrid. This ensures that even if someone misses the Slack message, high-value leads get immediate attention.
Real-world result: At Vetcelerator, implementing instant Slack notifications reduced average response time from 4.2 hours to 18 minutes. Their close rate on Grade A leads jumped from 12% to 31% just by responding faster.
Error Handling and Retry Logic
The workflow we built works great—until something breaks. OpenAI rate limits hit. Supabase goes down for maintenance. Slack webhooks timeout. In production, error handling is not optional. Here's how I bulletproof n8n workflows.
Strategy 1: Retry on Transient Failures
Most API failures are temporary: rate limits, network timeouts, temporary service outages. n8n has built-in retry logic that you should enable on every external API node (OpenAI, Supabase, Slack).
In each node's Settings tab, enable "Retry On Fail" and configure:
- Max Tries: 3 attempts
- Wait Between Tries: 5 seconds
- On Error: "Continue (using error output)", so downstream nodes can handle failures gracefully
This means if OpenAI returns a 429 rate-limit error, n8n waits 5 seconds and tries again, up to three times. Note that the built-in retry waits a fixed interval between tries; if you want true exponential backoff (5 seconds, then 10, then 20), implement the retry loop yourself in a Code node. Most transient failures resolve within these retries either way.
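A minimal sketch of such an exponential-backoff wrapper in plain Node.js, with the delay doubling on each attempt (withRetries and its parameters are illustrative names, not an n8n API):

```javascript
// Retry an async operation with exponential backoff: 5s, 10s, 20s by default.
async function withRetries(fn, { attempts = 3, baseDelayMs = 5000 } = {}) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      const delay = baseDelayMs * 2 ** i; // 5000, 10000, 20000...
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  // All attempts exhausted: surface the last error to the caller
  throw lastError;
}
```

In a Code node you would wrap the flaky API call (an HTTP request, for instance) in withRetries instead of calling it directly.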
Strategy 2: Error Catching with Error Trigger
For failures that persist after retries, you need to log them. Add an "Error Trigger" node at the bottom of your workflow canvas (disconnected from main flow). This special node catches any error in the workflow.
Connect the Error Trigger to a Code node that shapes the failure for logging:
// Log error details for debugging. The Error Trigger outputs a payload
// shaped like { workflow: {...}, execution: { error, lastNodeExecuted, url, ... } }
const failure = $input.item.json;
return {
  json: {
    workflow_name: failure.workflow.name,
    error_message: failure.execution.error.message,
    error_stack: failure.execution.error.stack,
    failed_node: failure.execution.lastNodeExecuted,
    execution_url: failure.execution.url,
    timestamp: new Date().toISOString()
  }
};
Connect this Code node to a Supabase "Insert" node pointing at a workflow_errors table. Now every failure is logged for debugging, and you can set up alerts when errors spike.
Strategy 3: Fallback to Queue for Later Processing
For critical operations like storing leads in the CRM, I never want to lose data even if Supabase is down. Solution: If the Supabase upsert fails after retries, dump the lead data to a Redis queue or a simple "failed_leads" Supabase table that a separate cleanup workflow processes hourly.
// Code node after the Supabase node (with "Continue on Fail" enabled):
// failed items carry an `error` field on their JSON
const result = $input.item.json;
if (result.error) {
  // Upsert failed: queue the lead in a fallback table for reprocessing
  return {
    json: {
      original_lead_data: $('Webhook').item.json,
      error_reason: result.error,
      retry_after: new Date(Date.now() + 3600000).toISOString() // 1 hour from now
    }
  };
}
return { json: result };
This pattern has saved countless leads during Supabase maintenance windows. Instead of losing data, it queues leads for reprocessing when the service recovers.
Real-World Error Stats
After implementing these three strategies across 50+ production workflows, my error recovery rate went from 78% to 99.3%. That means out of 10,000 workflow executions, only 70 require manual intervention instead of 2,200. This is the difference between a toy automation and a production system.
Scaling to 10,000 Leads Per Day
The workflow we built can handle 100-500 leads per day without modifications. But what if you're running high-volume lead gen and need to process 10,000+ leads daily? Here's how I scale n8n for enterprise volume.
Bottleneck 1: OpenAI Rate Limits
GPT-4 has rate limits around 10,000 tokens per minute on standard tier. At 500 tokens per lead qualification, that's 20 leads/minute max, or 28,800 leads/day theoretical maximum. In practice, you'll hit intermittent 429 errors above 15k/day.
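The arithmetic is worth making explicit (a throwaway calculation assuming the 10,000 tokens/minute and 500 tokens/lead figures above; maxLeadsPerDay is an illustrative name):

```javascript
// Theoretical daily throughput given a token-per-minute rate limit
// and an average token cost per lead qualification.
function maxLeadsPerDay(tokensPerMinute, tokensPerLead) {
  const leadsPerMinute = Math.floor(tokensPerMinute / tokensPerLead);
  return leadsPerMinute * 60 * 24;
}

maxLeadsPerDay(10000, 500); // 20 leads/minute, 28,800 leads/day
```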
Solution: Upgrade to OpenAI's tier 2 or 3 for higher limits, or switch to GPT-4 Turbo which is faster and has 4x the throughput. For even higher volume, I batch leads in groups of 10 and send a single prompt asking for qualification of all 10, reducing API calls by 90%.
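The batching step can be sketched as a simple chunking function (illustrative only; in n8n you would typically use the built-in "Loop Over Items" / "Split In Batches" node or a Code node like this before the OpenAI call):

```javascript
// Split an array of leads into batches of 10 so each OpenAI call
// qualifies 10 leads at once, cutting API calls by ~90%.
function chunkLeads(leads, size = 10) {
  const batches = [];
  for (let i = 0; i < leads.length; i += size) {
    batches.push(leads.slice(i, i + size));
  }
  return batches;
}

const batches = chunkLeads(Array.from({ length: 95 }, (_, i) => ({ id: i })));
// 10 batches: nine of 10 leads, one final batch of 5
```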
Bottleneck 2: Supabase Free Tier Limits
Supabase free tier supports 500MB database and 2GB bandwidth. At 10k leads/day with 2KB per row, you'll hit storage limits in 25 days and bandwidth limits much faster if you're frequently querying the data.
Solution: Upgrade to Supabase Pro ($25/month) which gives you 8GB database and 250GB bandwidth. This easily handles 10k leads/day with room for growth. Beyond that, consider partitioning your leads table by month or archiving old leads to cold storage.
Bottleneck 3: n8n Execution Concurrency
By default, n8n processes one workflow execution at a time. If you're getting 10k webhook triggers per day (one every 8 seconds), this is fine. But if you get bursts—like 1000 leads in an hour from a viral post—you need queue mode.
Solution: Enable n8n's queue mode by setting environment variable EXECUTIONS_MODE=queue and adding Redis. This allows n8n to process 10+ workflows simultaneously. On Railway, add a Redis service and connect it to your n8n instance. Cost: ~$5/month extra.
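For reference, a queue-mode configuration sketch (variable names follow n8n's documented queue-mode settings; the host and password values are placeholders for your own Redis service):

```shell
# Queue-mode environment variables for the main n8n instance
EXECUTIONS_MODE=queue
QUEUE_BULL_REDIS_HOST=your-redis-host
QUEUE_BULL_REDIS_PORT=6379
QUEUE_BULL_REDIS_PASSWORD=your-redis-password

# Worker processes are started separately against the same Redis, e.g.:
# n8n worker --concurrency=10
```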
Real-World Scaling Example: Vetcelerator Campaign
In January 2025, Vetcelerator ran a Super Bowl promo that generated 23,000 leads in 48 hours (480 leads/hour peak). Here's how the workflow handled it:
- n8n configuration: Queue mode with 8 concurrent workers on Railway Pro plan ($20/month)
- OpenAI setup: GPT-4 Turbo with tier 3 access (batch processing 5 leads per call)
- Supabase: Pro plan with read replicas for reporting queries
- Result: 99.4% success rate, average processing time 4.3 seconds per lead, zero downtime
- Total cost: $87 for the 48-hour period (vs. $1,840 estimated cost on Zapier for same volume)
The key lesson: n8n scales incredibly well when you upgrade infrastructure appropriately. Don't try to run 10k/day workflows on free tier services—you'll spend more time firefighting outages than the hosting costs you saved.
Conclusion: Real Results from 200+ n8n Workflows
The lead enrichment workflow we built in this n8n workflow tutorial is just the beginning. Once you master this pattern—webhook trigger, AI processing, database storage, conditional notifications—you can adapt it for dozens of use cases:
- E-commerce order processing: Webhook from Stripe → AI fraud detection → Supabase → Slack order alerts
- Content moderation: Social media webhook → Claude API for content safety → Flag for review → Notify moderators
- Support ticket routing: Zendesk webhook → OpenAI categorization → Route to specialist → Update CRM
- Hyros attribution sync: CRM deal closed → Send event to Hyros API → Update dashboard → Slack revenue alert
I've deployed over 200 n8n workflows for clients like Vetcelerator, CQ Marketing, and VIXI internal operations. The consistent result is 60-80% reduction in manual data entry and 3-5x faster response times. For a typical agency, that translates to 20-30 hours of labor savings per week—$20k-30k in annual value from a $20/month tool.
If you followed this tutorial, you now have a production-grade lead enrichment system that qualifies leads with AI, stores them reliably, and notifies your team instantly. The next step is customizing it for your specific business logic and scaling it as volume grows.
Need help implementing this for your agency? I offer n8n workflow automation services including architecture design, custom integrations, and ongoing optimization. I can build your complete lead enrichment stack in 3-5 days with proper error handling, monitoring, and documentation.
Want to see more automation tutorials? Check out my other articles on Hyros attribution integration with n8n and building AI voice agents with Retell. Or reach out directly if you have a specific automation challenge you need solved.
Ready to automate your workflows? Visit my portfolio to see case studies, or explore my full service offerings for n8n automation, AI agents, and Hyros implementation.
Carlos Aragon
n8n Specialist & AI Automation Expert | Allen, TX
Carlos has built 200+ n8n workflows for agencies and businesses, specializing in lead enrichment, CRM automation, and AI-powered qualification systems. As a Hyros OG member and founder of VIXI LLC, he helps marketing agencies scale through intelligent automation. Based in Allen, TX, serving clients nationally.