Tutorial

Abe Reyes
February 6, 2026 · 7 min read

Supabase Connection Pooling in Next.js Production: How to Handle 100+ Concurrent Users

You're running a Next.js app on Vercel. Traffic's picking up. Everything seems fine until you check your Supabase logs and see this:

Error: remaining connection slots are reserved for non-replication superuser connections

Your app just ran out of database connections. Users are seeing errors. Your carefully built product is down.

I've been there. Here's how to fix it.

The "Too Many Connections" Problem

Supabase's free tier gives you 60 concurrent connections. Paid plans go up to 200. Sounds like plenty, right?

Not when you're running serverless functions.

The Serverless Connection Leak

Every time a Next.js API route runs on Vercel:

  1. A new Lambda function instance spins up (cold start)
  2. Your code creates a new Supabase client
  3. That client opens a database connection
  4. The function finishes and goes idle
  5. The connection stays open

Now multiply that by 100 users hitting your app simultaneously. Each request creates a new connection. Within seconds, you've exhausted your connection pool.

What Makes This Worse

  • Cold starts: New function instances don't share connections
  • Connection leaks: Unclosed connections pile up
  • No cleanup: Functions stay warm for ~15 minutes, holding connections
  • Spiky traffic: A sudden burst can exhaust the pool in seconds

Your database can only handle so many connections at once. Once you hit the limit, new requests fail.

Solution 1: Singleton Supabase Client

The first fix is simple: create one Supabase client per function instance, not per request.

Bad Pattern (New Client Per Request)

// app/api/products/route.ts
import { createClient } from '@supabase/supabase-js';

export async function GET() {
  // Creates a new client (and connection) on every request
  const supabase = createClient(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.SUPABASE_SERVICE_ROLE_KEY!
  );

  const { data } = await supabase.from('products').select('*');
  return Response.json(data);
}

Every request opens a new connection. If 100 users hit this endpoint simultaneously, that's 100 connections.

Good Pattern (Singleton Client)

// lib/supabase.ts
import { createClient } from '@supabase/supabase-js';

let supabaseInstance: ReturnType<typeof createClient> | null = null;

export function getSupabaseClient() {
  if (!supabaseInstance) {
    supabaseInstance = createClient(
      process.env.NEXT_PUBLIC_SUPABASE_URL!,
      process.env.SUPABASE_SERVICE_ROLE_KEY!,
      {
        db: {
          schema: 'public',
        },
        auth: {
          persistSession: false, // Important for server-side
        },
      }
    );
  }
  return supabaseInstance;
}

// app/api/products/route.ts
import { getSupabaseClient } from '@/lib/supabase';

export async function GET() {
  const supabase = getSupabaseClient(); // Reuses same client

  const { data } = await supabase.from('products').select('*');
  return Response.json(data);
}

Now all requests within the same function instance share one connection. 100 requests might only use 5-10 connections (depending on how many Lambda instances are running).
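
If you want to verify this reuse, a quick diagnostic (this route is purely illustrative, not part of the app) is to put a marker in module scope; requests served by the same warm instance report the same ID:

// app/api/debug/route.ts (illustrative diagnostic)
import { randomUUID } from 'crypto';

// Module scope survives across requests within one warm function
// instance; the singleton client relies on the same mechanism.
const instanceId = randomUUID();
let requestCount = 0;

export async function GET() {
  requestCount += 1;
  // The same instanceId across responses means this module (and any
  // singleton created in it) is being reused.
  return Response.json({ instanceId, requestCount });
}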

Key Configuration

auth: {
  persistSession: false, // Don't store auth state server-side
}

Server-side clients shouldn't persist sessions. That's for client-side auth flows.
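
For contrast, a browser-side client keeps the defaults: sessions persist to localStorage and tokens auto-refresh, which is exactly what client-side auth flows rely on. A minimal sketch; the file name and anon-key variable follow common convention rather than anything from this post:

// lib/supabase-browser.ts (illustrative)
import { createClient } from '@supabase/supabase-js';

// Browser clients keep the default persistSession: true so auth state
// survives page reloads. Use the public anon key here, never the
// service role key.
export const supabaseBrowser = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);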

Solution 2: Supabase Connection Pooler

Even with a singleton client, you can still run out of connections if you have enough concurrent Lambda instances. This is where Supabase's built-in connection pooler comes in.

Two Pooler Modes

Supabase provides two connection strings:

  1. Transaction Mode (port 6543): [project-id].pooler.supabase.com:6543
  2. Session Mode (port 5432): [project-id].pooler.supabase.com:5432

Transaction Mode (Recommended for Serverless)

// lib/supabase.ts
import { createClient } from '@supabase/supabase-js';

const supabaseUrl = process.env.NEXT_PUBLIC_SUPABASE_URL!;
const supabaseKey = process.env.SUPABASE_SERVICE_ROLE_KEY!;

// supabase-js talks to the project's HTTP API, so it always takes the
// project URL as-is. This transaction-mode pooler string is for anything
// that opens a direct Postgres connection (ORMs, raw SQL clients):
export const poolerConnectionString =
  process.env.SUPABASE_POOLER_URL ??
  `postgresql://postgres.[project-id].pooler.supabase.com:6543/postgres`;

export function getSupabaseClient() {
  return createClient(supabaseUrl, supabaseKey, {
    db: {
      schema: 'public',
    },
    auth: {
      persistSession: false,
    },
  });
}

Transaction mode:

  • Each query gets a connection from the pool
  • Connection is returned immediately after the query completes
  • Perfect for short-lived serverless functions
  • Supports most read/write operations

Session mode:

  • Holds a connection for the entire session
  • Required for: prepared statements, LISTEN/NOTIFY, advisory locks
  • Use sparingly in serverless environments

When to Use Which

Use Case                | Pooler Mode
API routes (GET/POST)   | Transaction (6543)
Background jobs         | Transaction (6543)
Real-time subscriptions | Session (5432)
Prepared statements     | Session (5432)
Long-running operations | Session (5432)
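
These connection strings matter most for anything that opens a direct Postgres connection, since supabase-js itself goes through Supabase's HTTP API. A minimal sketch with node-postgres; SUPABASE_POOLER_URL and the countProducts helper are illustrative, not from the setup above:

// lib/pg-pool.ts (sketch, for direct Postgres access)
import { Pool } from 'pg';

// Keep the per-instance pool tiny; Supabase's pooler behind port 6543
// does the real multiplexing across all function instances.
const pool = new Pool({
  connectionString: process.env.SUPABASE_POOLER_URL, // transaction mode (6543)
  max: 1, // one connection per warm instance is usually plenty
});

export async function countProducts(): Promise<number> {
  const { rows } = await pool.query('SELECT count(*)::int AS n FROM products');
  return rows[0].n;
}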

Solution 3: Serverless-Specific Patterns

Beyond connection pooling, here are patterns that keep your connection usage low.

1. Connection Lifecycle in Vercel Functions

// app/api/orders/route.ts
import { getSupabaseClient } from '@/lib/supabase';

export async function POST(request: Request) {
  const supabase = getSupabaseClient();
  const body = await request.json();

  // Do all DB operations with the same client
  const { data: order } = await supabase
    .from('orders')
    .insert(body)
    .select()
    .single();

  // Link line items to the new order before inserting them
  const { data: items } = await supabase
    .from('order_items')
    .insert(body.items.map((item: any) => ({ ...item, order_id: order!.id })))
    .select();

  // Function ends, connection returns to pool (Transaction mode)
  // OR stays open for ~15 min (Session mode, singleton pattern)
  
  return Response.json({ order, items });
}

2. Avoid Opening Multiple Clients

// ❌ BAD - Opens 3 connections
const supabase1 = createClient(url, key);
const supabase2 = createClient(url, key);
const supabase3 = createClient(url, key);

// ✅ GOOD - Reuses 1 connection
const supabase = getSupabaseClient();
await supabase.from('users').select('*');
await supabase.from('orders').select('*');
await supabase.from('products').select('*');

3. Combine Pattern with Circuit Breaker

I wrote about circuit breakers for API reliability before. They work great with connection pooling:

// lib/supabase-with-breaker.ts
import type { SupabaseClient } from '@supabase/supabase-js';
import { getSupabaseClient } from './supabase';
import { executeWithCircuitBreaker } from './circuit-breaker';

export async function queryWithBreaker<T>(
  queryFn: (supabase: SupabaseClient) => Promise<T>
): Promise<T> {
  return executeWithCircuitBreaker(async () => {
    const supabase = getSupabaseClient();
    return queryFn(supabase);
  });
}

// Usage
const products = await queryWithBreaker(
  (supabase) => supabase.from('products').select('*')
);

Now if connections are exhausted, the circuit opens and fails fast instead of piling up more connection attempts.

Monitoring Connection Usage

Check your current connection usage with this query (run in Supabase SQL Editor):

SELECT
  count(*) AS total_connections,
  max_conn,
  max_conn - count(*) AS available_connections
FROM pg_stat_activity,
  (SELECT setting::int AS max_conn FROM pg_settings WHERE name = 'max_connections') AS limits
WHERE datname = current_database()
GROUP BY max_conn;

What to Watch For

  • Total connections near max: Time to enable pooler or upgrade tier
  • Idle connections piling up: Check for connection leaks (the query below shows where they come from)
  • Spiky connection usage: Normal for serverless, pooler helps smooth this out
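
To see where idle connections are coming from, a follow-up query using standard pg_stat_activity columns:

-- Idle connections grouped by application, oldest first
SELECT
  application_name,
  count(*) AS idle_connections,
  min(state_change) AS idle_since
FROM pg_stat_activity
WHERE datname = current_database()
  AND state = 'idle'
GROUP BY application_name
ORDER BY idle_since;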

Set Up Alerts

In Supabase dashboard (Database → Health):

  • Set alert at 70% connection usage
  • Monitor connection errors in logs
  • Watch for slow queries (might hold connections longer)

Production Deployment Guide

1. Environment Variables

# .env.local
NEXT_PUBLIC_SUPABASE_URL=https://[project-id].supabase.co
SUPABASE_SERVICE_ROLE_KEY=your-service-role-key

# Optional: Override with pooler URL for production
SUPABASE_POOLER_URL=postgresql://postgres.[project-id].pooler.supabase.com:6543/postgres

2. Pool Sizing

Transaction mode (6543) handles this automatically, but if you're using session mode:

Tier       | Max Connections | Recommended Pool Size
Free       | 60              | 40 (leave headroom)
Pro        | 200             | 150
Enterprise | 500+            | 400+
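
If you manage a session-mode pool yourself, cap it below the tier limit so the dashboard, migrations, and any other services keep headroom. A sketch with node-postgres; the env var name and the exact split are assumptions matching the free tier row above:

// Sketch: session-mode pool sized with headroom (free tier numbers)
import { Pool } from 'pg';

const TIER_MAX_CONNECTIONS = 60; // free tier limit
const RESERVED_HEADROOM = 20;    // dashboard, migrations, other services

export const sessionPool = new Pool({
  connectionString: process.env.SUPABASE_SESSION_URL, // session mode (5432)
  max: TIER_MAX_CONNECTIONS - RESERVED_HEADROOM,      // 40, per the table
  idleTimeoutMillis: 30_000, // drop idle connections promptly
});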

3. Singleton Pattern + Pooler

Use both for maximum efficiency:

// lib/supabase.ts
import { createClient } from '@supabase/supabase-js';

let supabaseInstance: ReturnType<typeof createClient> | null = null;

export function getSupabaseClient() {
  if (!supabaseInstance) {
    const url = process.env.NEXT_PUBLIC_SUPABASE_URL!;
    const key = process.env.SUPABASE_SERVICE_ROLE_KEY!;
    
    supabaseInstance = createClient(url, key, {
      db: { schema: 'public' },
      auth: { persistSession: false },
    });
  }
  return supabaseInstance;
}

Each function instance reuses one client, and any direct Postgres connections (ORMs, scripts, background jobs) go through the pooler via SUPABASE_POOLER_URL in production. supabase-js keeps pointing at the project URL; Supabase's API layer pools its own database connections behind the scenes.

4. Vercel-Specific Config

// vercel.json
{
  "functions": {
    "app/api/**/*.ts": {
      "maxDuration": 10,
      "memory": 1024
    }
  }
}

A shorter maxDuration caps how long any single invocation can run, so a slow or hung query can't hold a connection for minutes at a time.

Real-World Results

After implementing these patterns on NeedThisDone.com:

  • Before: 60 connections exhausted with ~30 concurrent users
  • After: 15-20 connections used with 100+ concurrent users
  • Downtime: Went from 2-3 connection errors per week to zero

The singleton pattern alone cut connection usage by 60%. Adding the pooler brought it down another 30%.

Need Help Scaling Your Supabase App?

Connection pooling is just one piece of production-ready architecture. There's also:

  • Error handling and retries
  • Rate limiting
  • Request deduplication
  • Real-time scaling with RAG-powered features

I've built all of this for NeedThisDone.com and clients like Acadio. If you're running into Supabase scaling issues, I can help.

See how I can help →

Or get in touch and we'll talk through your specific setup.
