Scale a Tech Startup: From 10 to 100 Users Without Exploding
Most startups crash somewhere between 50 and 200 users, usually because the infrastructure was never prepared for growth. Here is the step-by-step plan to scale from 10 to 100 users without breaking everything.
The 3 Critical Thresholds
Threshold 1: 10 → 50 users (MVP → Early traction)
Growth Symptoms:
- ⚠️ App slows down during peak hours
- ⚠️ Bugs reported daily
- ⚠️ Customer support takes 4h/day
- ⚠️ Deployments = stress (possible downtime)
Technical Problems:
- Unoptimized database queries (N+1)
- No cache
- Logs in console.log
- No monitoring
- Manual deployments
Tech Budget: 500-1K€/month
Setup Time: 2-3 weeks
Threshold 2: 50 → 100 users (Product-market fit)
Growth Symptoms:
- 🔥 Site down 2-3x/week
- 🔥 Frequent database timeouts
- 🔥 Features take 2x longer
- 🔥 New dev onboarding = 2 weeks
Technical Problems:
- Monolithic architecture
- No automated tests
- Technical debt = 30% dev time
- No CI/CD
- Database migrations = panic
Tech Budget: 2-5K€/month
Setup Time: 4-6 weeks
Threshold 3: 100+ users (Scale)
Growth Symptoms:
- 💥 Load balancing needed
- 💥 Multi-region considered
- 💥 Team 5+ devs
- 💥 Compliance (GDPR, SOC2)
Not covered here: this article focuses on 10 → 100 users.
Phase 1: 10 → 50 users (4 weeks)
Week 1: Setup monitoring (CRITICAL)
Goal: see problems before they break production
Tools to install:
| Tool | Usage | Price/month | Setup |
|---|---|---|---|
| Sentry | Error tracking | 0-26€ | 30min |
| Uptime Robot | Site monitoring | 0€ | 10min |
| Vercel Analytics | Performance | 0-20€ | 5min |
| PostgreSQL stats | DB slow queries | 0€ | 1h |
Sentry Configuration:
```js
// next.config.js
const { withSentryConfig } = require('@sentry/nextjs');

module.exports = withSentryConfig(
  {
    // Next.js config
  },
  {
    org: "your-org",
    project: "your-project",
    silent: true,
  }
);
```

```ts
// pages/_app.tsx
import * as Sentry from "@sentry/nextjs";

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  tracesSampleRate: 0.1, // trace 10% of requests
  environment: process.env.NODE_ENV,
});
```
Alerts to configure:
- Error rate >5% → Immediate email
- Site down >1min → SMS
- API latency >2s → Slack
- Database CPU >80% → Email
Budget: 0-50€/month
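The uptime and latency alerts above need something to poll. A minimal health-check function, sketched here as plain TypeScript you can wire to a route such as `/api/health` (the check names and route are illustrative, not a fixed convention):

```typescript
// Run each dependency probe; any failure (false or a thrown error) → 503.
// Point Uptime Robot at the route that calls this.
type Check = () => Promise<boolean>;

export async function healthCheck(
  checks: Record<string, Check>
): Promise<{ status: number; results: Record<string, boolean> }> {
  const results: Record<string, boolean> = {};
  for (const [name, check] of Object.entries(checks)) {
    try {
      results[name] = await check();
    } catch {
      results[name] = false; // a throwing probe counts as failed
    }
  }
  const healthy = Object.values(results).every(Boolean);
  return { status: healthy ? 200 : 503, results };
}
```

In a real handler you would pass probes like `db: () => prisma.$queryRaw\`SELECT 1\`.then(() => true)`, and return `results` in the body so the alert email tells you which dependency went down.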
Week 2: Optimize database (80% impact)
Audit slow queries:
```sql
-- PostgreSQL: top 10 slow queries (requires the pg_stat_statements extension)
SELECT
  query,
  mean_exec_time,
  calls
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```
Classic Problems:
1. N+1 queries
Before (50 queries):
```ts
// ❌ Bad: one posts query per user (N+1)
const users = await prisma.user.findMany();
for (const user of users) {
  user.posts = await prisma.post.findMany({
    where: { userId: user.id }
  });
}
```
After (2 queries):
```ts
// ✅ Good: Prisma batches the relation into a single extra query
const users = await prisma.user.findMany({
  include: { posts: true }
});
```
Gain: -96% queries, -80% latency
2. Missing indexes
Before (3s query):
```sql
-- Full table scan (100K rows)
SELECT * FROM posts WHERE user_id = 123;
```
After (20ms query):
```sql
-- Create the index once
CREATE INDEX idx_posts_user_id ON posts(user_id);

-- Same query, now an index scan: ~150x faster
SELECT * FROM posts WHERE user_id = 123;
```
Critical Indexes:
```sql
-- Foreign keys
CREATE INDEX idx_posts_user_id ON posts(user_id);
CREATE INDEX idx_comments_post_id ON comments(post_id);

-- Frequently filtered columns
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_posts_created_at ON posts(created_at);

-- Text search (requires: CREATE EXTENSION pg_trgm;)
CREATE INDEX idx_posts_title_trgm ON posts USING gin(title gin_trgm_ops);
```
3. No pagination
Before (10s, 50MB transferred):
```ts
// ❌ Loads all 10K posts
const posts = await prisma.post.findMany();
```
After (200ms, 500KB transferred):
```ts
// ✅ Loads 20 posts per page
const posts = await prisma.post.findMany({
  take: 20,
  skip: page * 20,
  orderBy: { createdAt: 'desc' }
});
```
Gain: -95% latency, -99% bandwidth
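One caveat: `skip` still scans all skipped rows, so page 500 is much slower than page 1. Cursor pagination (remember the last seen id instead of an offset) stays flat at any depth; Prisma exposes it via `cursor`/`take`. A runnable sketch of the logic, with an in-memory array standing in for the posts table:

```typescript
// Cursor pagination: the client sends the id of the last post it saw,
// and we return the next `take` posts after it (newest first).
type Post = { id: number; createdAt: number };

export function paginateByCursor(
  posts: Post[],
  cursor: number | null, // id of the last post from the previous page, or null for page 1
  take = 20
): { page: Post[]; nextCursor: number | null } {
  const sorted = [...posts].sort((a, b) => b.createdAt - a.createdAt);
  const start = cursor === null ? 0 : sorted.findIndex(p => p.id === cursor) + 1;
  const page = sorted.slice(start, start + take);
  // null nextCursor signals the last page
  const nextCursor = page.length === take ? page[page.length - 1].id : null;
  return { page, nextCursor };
}
```

In Prisma the equivalent is `findMany({ take: 20, cursor: { id: lastId }, skip: 1, orderBy: { createdAt: 'desc' } })`, which turns the scan into an index seek.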
Week 3: Add strategic cache
Caching levels:
```
Browser cache (3600s)
        ↓
CDN cache (Vercel, 86400s)
        ↓
App cache (Redis, 300s)
        ↓
Database
```
Redis setup (Upstash free):
```ts
// lib/redis.ts
import { Redis } from '@upstash/redis';

export const redis = new Redis({
  url: process.env.UPSTASH_REDIS_URL!,
  token: process.env.UPSTASH_REDIS_TOKEN!,
});

// Read-through wrapper with TTL
export async function getCached<T>(
  key: string,
  fetcher: () => Promise<T>,
  ttl = 300 // 5min default
): Promise<T> {
  // Check cache: get returns null on a miss, so don't treat cached falsy values as misses
  const cached = await redis.get<T>(key);
  if (cached !== null) return cached;

  // Fetch fresh and store with expiry
  const fresh = await fetcher();
  await redis.setex(key, ttl, fresh);
  return fresh;
}
```
Usage:
```ts
// Without cache: 500ms/request
const user = await prisma.user.findUnique({ where: { id } });

// With cache: 20ms/request
const user = await getCached(
  `user:${id}`,
  () => prisma.user.findUnique({ where: { id } }),
  3600 // 1h TTL
);
```
What to cache:
- ✅ User profiles (1h TTL)
- ✅ App config (24h TTL)
- ✅ Paginated lists (5min TTL)
- ❌ Real-time data (messages, notifications)
- ❌ User-specific data
Budget: 0€ (Upstash free tier = 10K reqs/day)
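One gap in the read-through pattern above: a cached user profile stays stale for up to an hour after an update. The fix is to delete the key on every write. A runnable sketch of the same pattern plus invalidation, with a `Map` standing in for Redis so it runs without a network:

```typescript
// Read-through cache + explicit invalidation on writes.
// The Map is a stand-in for Redis; the logic is the point.
const cache = new Map<string, { value: unknown; expiresAt: number }>();

export async function getCached<T>(
  key: string,
  fetcher: () => Promise<T>,
  ttl = 300 // seconds
): Promise<T> {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value as T;
  const fresh = await fetcher();
  cache.set(key, { value: fresh, expiresAt: Date.now() + ttl * 1000 });
  return fresh;
}

// Call after any write so the next read refetches fresh data
export function invalidate(key: string): void {
  cache.delete(key);
}
```

With the Upstash client, `invalidate` is simply `await redis.del(\`user:\${id}\`)` in your update handler.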
Week 4: Basic CI/CD
Goal: Deploy without stress
GitHub Actions setup:
```yaml
# .github/workflows/deploy.yml
name: Deploy

on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 20
      - run: npm ci
      - run: npm test
      - run: npm run lint

  deploy:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: amondnet/vercel-action@v20
        with:
          vercel-token: ${{ secrets.VERCEL_TOKEN }}
          vercel-org-id: ${{ secrets.ORG_ID }}
          vercel-project-id: ${{ secrets.PROJECT_ID }}
          vercel-args: '--prod'
```
Benefits:
- ✅ Auto tests before deployment
- ✅ 1-click deployment (git push)
- ✅ Easy rollback (revert commit)
- ✅ Preview branches (Vercel)
Budget: 0€ (GitHub Actions free <2000 min/month)
Phase 1 Result:
- ✅ Monitoring in place
- ✅ Database optimized (-80% latency)
- ✅ Strategic cache (-90% DB load)
- ✅ Automated deployment
- 💰 Budget: 50-100€/month
- ⏱️ Setup: 40-60h dev
Phase 2: 50 → 100 users (6 weeks)
Week 1-2: Modular architecture
Problem: 20K line monolith = impossible to maintain
Solution: Split into modules
Before:
```
src/
├── pages/
│   ├── api/
│   │   ├── users.ts (500 lines)
│   │   ├── posts.ts (800 lines)
│   │   └── comments.ts (300 lines)
```
After:
```
src/
├── modules/
│   ├── users/
│   │   ├── user.service.ts
│   │   ├── user.repository.ts
│   │   └── user.types.ts
│   └── posts/
│       ├── post.service.ts
│       ├── post.repository.ts
│       └── post.types.ts
├── pages/
│   └── api/
│       ├── users/[id].ts (50 lines)
│       └── posts/[id].ts (50 lines)
```
Pattern: Service → Repository → Database
```ts
// modules/users/user.repository.ts
import { prisma } from '@/lib/prisma'; // adjust to your Prisma client path

export class UserRepository {
  async findById(id: string) {
    return prisma.user.findUnique({ where: { id } });
  }
}
```

```ts
// modules/users/user.service.ts
import { getCached } from '@/lib/redis';
import { UserRepository } from './user.repository';
// NotFoundError: your app's 404 error class

export class UserService {
  constructor(private repo: UserRepository) {}

  async getUser(id: string) {
    const user = await getCached(
      `user:${id}`,
      () => this.repo.findById(id),
      3600
    );
    if (!user) throw new NotFoundError();
    return user;
  }
}
```

```ts
// pages/api/users/[id].ts (50 lines instead of 500)
import { UserRepository } from '@/modules/users/user.repository';
import { UserService } from '@/modules/users/user.service';

export default async function handler(req, res) {
  const service = new UserService(new UserRepository());
  const user = await service.getUser(String(req.query.id));
  res.json(user);
}
```
Benefits:
- ✅ Testable code (mock repository)
- ✅ Reusable (shared service)
- ✅ Maintainable (1 file = 1 responsibility)
Week 3: Automated tests (critical)
Goal: Deploy without fear of breaking everything
Target Coverage:
- 80% unit tests (business functions)
- 20% integration tests (API endpoints)
- 0% E2E (too slow for early stage)
Vitest Setup:
```ts
// vitest.config.ts
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      provider: 'v8',
      reporter: ['text', 'html'],
      thresholds: {
        lines: 80,
        functions: 80,
        branches: 70,
      }
    }
  }
});
```
Test Example:
```ts
// modules/users/user.service.test.ts
import { describe, it, expect, vi } from 'vitest';
import { UserService } from './user.service';

describe('UserService', () => {
  it('should return the user from the repository', async () => {
    const mockRepo = {
      findById: vi.fn().mockResolvedValue({ id: '1', name: 'John' })
    };
    const service = new UserService(mockRepo as any);

    const user = await service.getUser('1');

    expect(user.name).toBe('John');
    expect(mockRepo.findById).toHaveBeenCalledOnce();
  });

  it('should throw if user not found', async () => {
    const mockRepo = {
      findById: vi.fn().mockResolvedValue(null)
    };
    const service = new UserService(mockRepo as any);

    await expect(service.getUser('999')).rejects.toThrow();
  });
});
```
Run tests:
```bash
npm test                 # run everything once
npm test -- --watch      # watch mode
npm test -- --coverage   # coverage report
```
CI integration (GitHub Actions):
```yaml
- run: npm test -- --coverage
- uses: codecov/codecov-action@v3 # upload coverage
```
Budget: 0€
Week 4: Secure database migrations
Problem: ALTER TABLE = downtime
Solution: Zero-downtime migrations
Prisma migrations:
```bash
# Create a migration
npx prisma migrate dev --name add_user_role

# Review the generated SQL
cat prisma/migrations/XXX_add_user_role/migration.sql
```
Zero-downtime checklist:
- Additive only: Add columns (don't delete)
- Default values: Always a default
- Nullable first: Make nullable → populate → NOT NULL
- Online indexes: CONCURRENT (Postgres)
- Backward compatible: App v1 must work with schema v2
Safe migration example:
```sql
-- ❌ BAD: fails on existing rows (no default), or locks the table while it rewrites
ALTER TABLE users ADD COLUMN role TEXT NOT NULL;

-- ✅ GOOD (zero downtime)
-- Step 1: add the column nullable, with a default
ALTER TABLE users ADD COLUMN role TEXT DEFAULT 'user';

-- Step 2: populate existing rows (background job, batched)
UPDATE users SET role = 'admin' WHERE email LIKE '%@company.com';

-- Step 3: add NOT NULL once every row has a value (after deploy)
ALTER TABLE users ALTER COLUMN role SET NOT NULL;
```
Rollback strategy:
```sql
-- Always keep a DOWN migration next to the UP
-- migration_down.sql
ALTER TABLE users DROP COLUMN role;
```
Week 5: Performance budget
Goal: Guarantee performance
Core Web Vitals targets:
| Metric | Threshold | Current | Action |
|---|---|---|---|
| LCP (Largest Contentful Paint) | <2.5s | 3.2s | ❌ Lazy load images |
| FID (First Input Delay) | <100ms | 50ms | ✅ OK |
| CLS (Cumulative Layout Shift) | <0.1 | 0.15 | ❌ Reserve image space |
Lighthouse CI:
```yaml
# .github/workflows/lighthouse.yml (job step)
- uses: treosh/lighthouse-ci-action@v9
  with:
    urls: |
      https://preview-${{ github.sha }}.vercel.app
    uploadArtifacts: true
    temporaryPublicStorage: true
```
Performance budget (lighthouserc.json):
```json
{
  "ci": {
    "assert": {
      "assertions": {
        "first-contentful-paint": ["error", { "maxNumericValue": 2000 }],
        "interactive": ["error", { "maxNumericValue": 3000 }],
        "total-byte-weight": ["error", { "maxNumericValue": 500000 }]
      }
    }
  }
}
```
CI blocks if perf < threshold ✅
Week 6: Technical documentation
Goal: Onboard dev #2 in <2 days
Critical docs:
1. README.md
````markdown
# Project Name

## Quick start
```bash
git clone ...
npm install
cp .env.example .env
npm run dev
```

## Architecture
- Frontend: Next.js 15 + React 19
- Backend: tRPC + Prisma
- Database: PostgreSQL (Supabase)
- Cache: Redis (Upstash)

## Deploy
```bash
git push origin main # auto-deploys via Vercel
```
````
2. ARCHITECTURE.md
```markdown
## System design
[Diagram here]

## Data flow
1. User → Next.js (Vercel)
2. Next.js → tRPC API
3. tRPC → Service layer
4. Service → Repository
5. Repository → Prisma
6. Prisma → PostgreSQL

## Key decisions
- **Why Next.js?** SEO + React
- **Why tRPC?** Type-safe API end to end
- **Why Supabase?** Managed Postgres
```
3. CONTRIBUTING.md
```markdown
## Workflow
1. Create a branch `feat/feature-name`
2. Code + tests
3. Push → auto-preview on Vercel
4. PR → code review
5. Merge → auto-deploy to prod

## Standards
- ESLint + Prettier
- Conventional commits
- Test coverage >80%
```
Budget: 8-16h writing
Phase 2 Result:
- ✅ Modular architecture
- ✅ 80% test coverage
- ✅ Zero-downtime migrations
- ✅ Performance budget
- ✅ Complete documentation
- 💰 Budget: 100-200€/month
- ⏱️ Setup: 120-150h dev
Infrastructure: Real costs 10 → 100 users
Phase 1: 10-50 users
| Service | Usage | Price/month |
|---|---|---|
| Hosting (Vercel Pro) | Unlimited bandwidth | 20€ |
| Database (Supabase Free) | 500MB, 2GB bandwidth | 0€ |
| Cache (Upstash Free) | 10K requests/day | 0€ |
| Monitoring (Sentry) | 5K errors/month | 0€ |
| Analytics (Vercel) | Unlimited | 0€ |
| Email (Resend Free) | 3K emails/month | 0€ |
| TOTAL | | 20€ |
Phase 2: 50-100 users
| Service | Usage | Price/month |
|---|---|---|
| Hosting (Vercel Pro) | Unlimited | 20€ |
| Database (Supabase Pro) | 8GB, 50GB bandwidth | 25€ |
| Cache (Upstash Pay-as-you-go) | 100K req/day | 10€ |
| Monitoring (Sentry Team) | 50K errors/month | 26€ |
| Analytics (Vercel) | Unlimited | 0€ |
| Email (Resend) | 10K emails/month | 20€ |
| Storage (S3) | 10GB, 50GB transfer | 5€ |
| TOTAL | | 106€ |
Projection 100+ users: 200-500€/month
Scaling readiness checklist
Infrastructure ✅
- Monitoring setup (Sentry + Uptime)
- Database optimized (indexes, N+1 fixed)
- Cache layer (Redis)
- Automated CI/CD (GitHub Actions)
- Database backup (daily)
Code ✅
- Modular architecture (services + repos)
- Test coverage >80%
- Up-to-date technical documentation
- Performance budget configured
- Standardized error handling
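"Standardized error handling" means every route maps errors to one response shape instead of ad-hoc `res.status(...)` calls. A minimal sketch (the class and function names are illustrative, not from a specific library):

```typescript
// Typed error classes carry their HTTP status; one function maps anything to a response.
export class AppError extends Error {
  constructor(public status: number, message: string) {
    super(message);
    this.name = new.target.name;
  }
}

export class NotFoundError extends AppError {
  constructor(message = "Not found") { super(404, message); }
}

export class ValidationError extends AppError {
  constructor(message = "Invalid input") { super(400, message); }
}

// Every API handler funnels caught errors through this single function
export function toErrorResponse(err: unknown): { status: number; body: { error: string } } {
  if (err instanceof AppError) {
    return { status: err.status, body: { error: err.message } };
  }
  // Unknown errors: log server-side, never leak internals to the client
  return { status: 500, body: { error: "Internal server error" } };
}
```

Services throw `NotFoundError` or `ValidationError`; route handlers catch once, call `toErrorResponse`, and every endpoint returns the same JSON shape.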
Process ✅
- Zero-downtime migrations
- Rollback strategy
- Incident response plan (who to call?)
- Feature flags (deploy without activating)
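Feature flags are the one checklist item this guide shows no code for. A minimal in-process sketch with an allowlist and a percentage rollout (flag names and rules are illustrative; hosted services like LaunchDarkly or PostHog do this at scale):

```typescript
// In-process feature flags: deploy code dark, then enable per user or per percentage.
type FlagRule = { enabled: boolean; allowUsers?: string[]; rolloutPercent?: number };

const flags: Record<string, FlagRule> = {
  "new-dashboard": { enabled: true, rolloutPercent: 20 },                       // 20% of users
  "beta-export": { enabled: true, allowUsers: ["user-42"], rolloutPercent: 0 }, // allowlist only
};

// Deterministic hash so a given user always lands in the same bucket (0-99)
function bucket(userId: string): number {
  let h = 0;
  for (const c of userId) h = (h * 31 + c.charCodeAt(0)) >>> 0;
  return h % 100;
}

export function isEnabled(flag: string, userId: string): boolean {
  const rule = flags[flag];
  if (!rule || !rule.enabled) return false;                  // unknown or killed flag: off
  if (rule.allowUsers?.includes(userId)) return true;        // explicit allowlist wins
  if (rule.rolloutPercent !== undefined) return bucket(userId) < rule.rolloutPercent;
  return true;                                               // enabled, no rules: on for everyone
}
```

Setting `enabled: false` is your kill switch: a config change disables a broken feature without a redeploy.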
Conclusion
Scaling from 10 to 100 users takes roughly 10-12 weeks of technical work and 120-200€/month of infrastructure.
ROI: investing ~15K€ of dev time now avoids a 50-100K€ refactor at 500 users.
Infrastructure scaling audit: Evaluate your readiness and get an action plan.
About: Jérémy Marquer has supported 20+ startups through their scaling. Zero major crashes to date.
