# Polling vs. webhooks vs. SSE vs. WebSockets
How to choose between polling, webhooks, Server-Sent Events, and WebSockets: latency, infrastructure complexity, directionality, scalability, and which pattern fits each use case.
## TL;DR
| Pattern | Best for | Direction | Latency | Complexity |
|---|---|---|---|---|
| Polling | Simple read-heavy checks, low-frequency updates | Client → Server | Seconds to minutes | Low |
| Webhooks | Server-to-server event notifications | Server → Server | Near-instant | Medium |
| SSE | One-way live feeds to browser clients | Server → Client | Sub-second | Low-medium |
| WebSockets | Bidirectional real-time (chat, gaming, collaboration) | Bidirectional | Sub-100ms | High |
Default answer: SSE for server-to-client pushes. WebSockets only when the client needs to send data back. Webhooks for server-to-server. Polling only when you genuinely can't do better.
## The Framing
Your team ships a dashboard that shows order status. Version one polls the API every 5 seconds. It works. Then the product manager wants real-time updates. Then Finance notices the API bill: 90% of those polling requests return "no change." You're burning bandwidth, database queries, and money on empty responses.
This is the core tension: who initiates the communication, and how often?
Polling puts the client in control. Simple, but wasteful. Push-based approaches (webhooks, SSE, WebSockets) put the server in control. More efficient, but more infrastructure. The right choice depends on directionality, latency requirements, and what your client actually is: browser, mobile app, or another server.
## How Each Works
### Polling: Client Pulls on a Timer
The simplest approach. The client sends HTTP requests at a fixed interval and checks for new data.
```javascript
// Short polling: fixed interval
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

const poll = async () => {
  while (true) {
    const res = await fetch('/api/orders/123/status');
    const data = await res.json();
    updateUI(data);
    await sleep(5000); // 5-second interval
  }
};

// Long polling: server holds connection until data arrives
const longPoll = async () => {
  while (true) {
    // Server holds this request open up to 30 seconds
    const res = await fetch('/api/orders/123/status?wait=30');
    const data = await res.json();
    updateUI(data);
    // Immediately reconnect
  }
};
```
Short polling fires requests at a fixed interval (1s, 5s, 30s). Most responses are empty. At 5-second intervals with 10,000 clients, that's 2,000 requests/second, even when nothing changes.
Long polling is smarter. The server holds the connection until new data arrives (or a 30-second timeout). This simulates push over HTTP, but each "event" still requires a full request/response cycle. Both work through any HTTP infrastructure with zero special configuration.
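The server side of long polling is the part the client snippet glosses over: the handler must park each request until an event arrives or a timeout fires. A minimal in-process sketch (the `LongPollBroker` class and the Express route in the trailing comment are illustrative, not from any real API):

```javascript
// Minimal long-poll broker: holds each waiter's promise until an
// event is published, or resolves to null when the timeout fires.
class LongPollBroker {
  constructor() {
    this.waiters = [];
  }

  // Returns a promise resolving with the next event, or null on timeout
  wait(timeoutMs) {
    return new Promise((resolve) => {
      const waiter = { resolve, timer: null };
      waiter.timer = setTimeout(() => {
        this.waiters = this.waiters.filter((w) => w !== waiter);
        resolve(null); // timeout: client should reconnect immediately
      }, timeoutMs);
      this.waiters.push(waiter);
    });
  }

  // Deliver an event to every parked request and clear the queue
  publish(event) {
    for (const w of this.waiters) {
      clearTimeout(w.timer);
      w.resolve(event);
    }
    this.waiters = [];
  }
}

// Express usage (sketch):
// app.get('/api/orders/:id/status', async (req, res) => {
//   const event = await broker.wait(30_000);
//   res.json(event ?? { change: false });
// });
```

Note that every waiter is an open connection the server must hold, which is why long polling at scale starts to look like SSE without the benefits.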
For your interview: if someone asks about real-time and you mention polling, immediately follow with "but polling wastes bandwidth, so I'd prefer SSE or WebSockets depending on the direction of data flow."
### Webhooks: Server POSTs to Your URL
Webhooks flip the model: instead of the client asking for updates, the server sends an HTTP POST to a pre-registered URL when an event occurs.
```javascript
// Registration: tell the provider where to send events
await stripe.webhookEndpoints.create({
  url: 'https://api.myapp.com/webhooks/stripe',
  enabled_events: ['payment_intent.succeeded', 'charge.failed'],
});
```

```javascript
// Receiver: handle incoming events.
// Signature verification needs the raw request body, so mount this
// route with express.raw({ type: 'application/json' }).
app.post('/webhooks/stripe', async (req, res) => {
  // Step 1: Verify signature (CRITICAL); constructEvent throws on mismatch
  const sig = req.headers['stripe-signature'];
  let event;
  try {
    event = stripe.webhooks.constructEvent(
      req.body, sig, process.env.STRIPE_WEBHOOK_SECRET
    );
  } catch (err) {
    return res.status(400).send('Invalid signature');
  }
  // Step 2: Process idempotently (webhooks retry)
  await processEvent(event.id, event.type, event.data);
  // Step 3: Respond 200 quickly (provider times out at ~5-15s)
  res.status(200).json({ received: true });
});
```
The webhook receiver must be a publicly reachable server. This makes webhooks ideal for server-to-server integrations (Stripe, GitHub, Twilio) but useless for browser or mobile clients.
Delivery guarantees matter. Webhook providers retry failed deliveries (typically exponential backoff over 24-72 hours). Your receiver must be idempotent because you will receive the same event multiple times. Always deduplicate by event ID.
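Deduplication can be as simple as recording processed event IDs before handling. A sketch with an in-memory set (in production you'd back this with a database unique constraint or Redis so dedup survives restarts; `handleEvent` is a placeholder for your own handler):

```javascript
// In-memory dedup store; swap for a durable store in production
const processedEvents = new Set();

// Returns true if the event was handled, false if it was a duplicate delivery
function processOnce(eventId, handleEvent) {
  if (processedEvents.has(eventId)) {
    return false; // already seen: acknowledge to the provider, do nothing
  }
  processedEvents.add(eventId);
  handleEvent();
  return true;
}
```

Either way, the receiver still responds 200 to duplicates so the provider stops retrying.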
Security is non-negotiable. Every webhook payload must include an HMAC signature. Without verification, anyone can POST fake events to your endpoint. Stripe uses HMAC-SHA256 with a shared secret. GitHub uses the X-Hub-Signature-256 header.
### SSE: Server Streams Events Over HTTP
Server-Sent Events use a persistent HTTP connection where the server pushes text-based event frames to the client. One direction only: server to client.
```javascript
// Server (Node.js/Express)
app.get('/events', (req, res) => {
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');
  res.flushHeaders(); // send headers immediately so the stream opens

  const send = (data) => {
    res.write(`id: ${Date.now()}\n`);
    res.write(`data: ${JSON.stringify(data)}\n\n`);
  };
  orderService.on('statusChange', send);
  req.on('close', () => orderService.off('statusChange', send));
});
```

```javascript
// Client (browser, native API, no library needed)
const source = new EventSource('/events');
source.onmessage = (e) => {
  const data = JSON.parse(e.data);
  updateUI(data);
};
// Auto-reconnects on disconnect with Last-Event-ID header
```
The EventSource browser API handles reconnection automatically. If the connection drops, the browser reconnects and sends the Last-Event-ID header so the server can replay missed events. Built into the spec, no library needed.
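Replay on reconnect only works if the server keeps a short history to replay from. A sketch of a bounded event log keyed by monotonically increasing IDs (the `EventLog` class is illustrative; a real deployment might back this with Redis Streams or a database):

```javascript
// Bounded in-memory log of recent events, addressable by ID
class EventLog {
  constructor(limit = 1000) {
    this.limit = limit;
    this.nextId = 1;
    this.events = [];
  }

  append(data) {
    const event = { id: this.nextId++, data };
    this.events.push(event);
    if (this.events.length > this.limit) this.events.shift();
    return event;
  }

  // Everything the client missed since its Last-Event-ID
  since(lastId) {
    return this.events.filter((e) => e.id > lastId);
  }
}

// On reconnect (sketch): read the Last-Event-ID header and replay
// const lastId = Number(req.headers['last-event-id'] || 0);
// for (const e of log.since(lastId)) {
//   res.write(`id: ${e.id}\ndata: ${JSON.stringify(e.data)}\n\n`);
// }
```

The bound matters: a client that reconnects after the buffer has rolled over gets a gap, so long-lived feeds usually pair this with a "resync from the API" fallback.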
SSE works through standard HTTP infrastructure. With HTTP/2 multiplexing, multiple SSE streams share a single TCP connection, eliminating the old "6 connections per domain" browser limit.
My recommendation for most server-to-client real-time needs: SSE first, WebSockets only when you need client-to-server messaging.
### WebSockets: Full Duplex Over a Persistent Connection
WebSockets upgrade an HTTP connection to a persistent, full-duplex protocol. After the handshake, both sides send frames freely with minimal overhead (2-14 bytes per frame vs. hundreds of bytes per HTTP request).
```javascript
// Server (Node.js with ws library)
const WebSocket = require('ws');

const wss = new WebSocket.Server({ port: 8080 });
wss.on('connection', (ws) => {
  ws.on('message', (data) => {
    const msg = JSON.parse(data.toString());
    // Broadcast to every connected client
    wss.clients.forEach((client) => {
      if (client.readyState === WebSocket.OPEN) {
        client.send(JSON.stringify({
          user: msg.user, text: msg.text, ts: Date.now()
        }));
      }
    });
  });
});
```

```javascript
// Client (browser, native API)
const ws = new WebSocket('wss://chat.myapp.com');
ws.onopen = () => ws.send(JSON.stringify({ type: 'join', room: 'general' }));
ws.onmessage = (e) => updateChat(JSON.parse(e.data));
```
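Unlike SSE's EventSource, the browser WebSocket API does not reconnect on its own, so clients typically wrap it in a retry loop with exponential backoff and jitter. A minimal sketch (the wrapper below is illustrative, not from any specific library):

```javascript
// Exponential backoff with full jitter, capped at 30 seconds
function backoffDelay(attempt, baseMs = 500, capMs = 30_000) {
  const ceiling = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.random() * ceiling;
}

function connectWithRetry(url, onMessage, attempt = 0) {
  const ws = new WebSocket(url);
  ws.onopen = () => { attempt = 0; }; // reset backoff once connected
  ws.onmessage = onMessage;
  ws.onclose = () => {
    setTimeout(() => connectWithRetry(url, onMessage, attempt + 1),
               backoffDelay(attempt));
  };
  return ws;
}
```

The jitter prevents a thundering herd: if a server restart drops 50,000 connections at once, you don't want all 50,000 clients reconnecting at the same instant.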
After the HTTP upgrade handshake, the connection is no longer HTTP. Load balancers need WebSocket support, sticky sessions or a pub/sub layer (Redis), and connection-aware routing.
For multi-server deployments, every WebSocket server subscribes to a Redis Pub/Sub channel. When a message arrives on Server 1, it publishes to Redis, which fans it out to all servers. Without this layer, messages only reach clients connected to the same server.
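The fan-out pattern is easier to see in code. This sketch substitutes an in-process `Channel` for Redis Pub/Sub so it is self-contained; in production, each `publish`/`subscribe` call would go through a Redis client instead, and `ServerNode` stands in for one WebSocket server process:

```javascript
// Stand-in for a Redis Pub/Sub channel (production: redis.publish/subscribe)
class Channel {
  constructor() { this.subscribers = []; }
  subscribe(fn) { this.subscribers.push(fn); }
  publish(message) { this.subscribers.forEach((fn) => fn(message)); }
}

// One WebSocket server instance; every instance subscribes to the channel
class ServerNode {
  constructor(channel) {
    this.channel = channel;
    this.localClients = [];
    channel.subscribe((msg) => this.deliverLocal(msg));
  }
  connect(client) { this.localClients.push(client); }
  // A message from one of this node's clients goes to the shared channel...
  broadcast(message) { this.channel.publish(message); }
  // ...and every node then delivers it to its own connected clients
  deliverLocal(message) { this.localClients.forEach((c) => c.receive(message)); }
}
```

Because every node receives every published message, clients get the broadcast regardless of which server they happen to be connected to.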
## Head-to-Head Comparison
| Dimension | Polling | Webhooks | SSE | WebSockets |
|---|---|---|---|---|
| Direction | Client → Server | Server → Server | Server → Client | Bidirectional |
| Latency | Interval-dependent (1-60s) | Event-driven (~100ms) | Event-driven (~50ms) | Event-driven (~10ms) |
| Connection | Short-lived HTTP | Short-lived HTTP POST | Long-lived HTTP stream | Long-lived upgraded socket |
| Protocol | HTTP | HTTP | HTTP (text/event-stream) | WebSocket (ws://) |
| Browser support | Universal | N/A (server-to-server) | EventSource API (all modern) | WebSocket API (all modern) |
| Reconnect | Client timer | Provider retries (backoff) | Built-in (Last-Event-ID) | Manual (libraries help) |
| Load balancer | Standard HTTP | Standard HTTP | Standard HTTP | Needs WS support + sticky sessions |
| Scaling | Stateless, trivial | Stateless, trivial | Connection-per-client | Connection-per-client + pub/sub |
| Bandwidth | High (empty responses) | Low (event-driven only) | Low (event-driven only) | Lowest (2-14 byte frame overhead) |
| Security concerns | Standard HTTP auth | HMAC signature verification | Standard HTTP auth | Origin checking, auth tokens |
The fundamental tension is simplicity vs. efficiency vs. capability. Polling is the simplest but most wasteful. Webhooks are efficient but server-to-server only. SSE is efficient and browser-native but one-directional. WebSockets are the most capable but the most complex to operate.
## When Polling Wins
Polling is the right choice when simplicity matters more than efficiency:
- Low-frequency checks where latency in seconds or minutes is acceptable. Checking build status every 30 seconds, syncing configuration every 5 minutes, refreshing a dashboard that updates hourly.
- Unreliable network environments where persistent connections drop frequently. Each polling request is independent, so disconnections are a non-issue.
- Legacy systems with no event infrastructure. If the data source is a database table with a `last_updated` column and no change notification, polling is your only option.
- Rate-limited third-party APIs that don't offer webhooks or streaming. Poll at the rate limit and cache results.
## When Webhooks Win
Webhooks are the right choice for server-to-server event delivery:
- Third-party integrations where the provider controls the data (Stripe payment events, GitHub push notifications, Twilio delivery receipts). You don't control when data changes, so polling is wasteful and slow.
- Microservice event pipelines where Service A notifies Service B of state changes. Decouples services without requiring a message queue for simple cases.
- Infrequent but important events. A new user signs up, a payment succeeds, a deployment completes. Events that happen minutes apart but need prompt handling.
- Fan-out to multiple consumers. One event triggers 5 downstream services (email, analytics, fraud, billing, audit log).
## When SSE Wins
SSE is the right choice when the server needs to push updates to browser or mobile clients:
- Live feeds and dashboards. Stock tickers, activity streams, notification badges, live sports scores. The server pushes data as it happens.
- Order/delivery tracking. Customer opens the tracking page, SSE pushes status updates as the order progresses. No client-to-server data needed.
- Server-side notifications. "Your export is ready," "Your build passed," "Someone commented on your PR." One-way notifications that don't need a response.
- Standard HTTP infrastructure. SSE works through every load balancer, CDN, and proxy without special configuration.
## When WebSockets Win
WebSockets are the right choice when both sides need to send data in real time:
- Chat applications. Users send and receive messages. Bidirectional by nature. Slack, Discord, and WhatsApp Web all use WebSockets.
- Collaborative editing. Google Docs, Figma, Notion. Users type, changes propagate to everyone. Both directions, low latency, high frequency.
- Multiplayer games. Player inputs go to the server, game state comes back. Sub-50ms latency. Frame-level synchronization.
- Live trading platforms. Traders submit orders (client to server) and receive market data (server to client) simultaneously. The 2-byte frame overhead matters at 1,000+ updates per second.
## The Nuance
The real-world answer is often "more than one."
A production application might use polling for configuration refresh (every 5 minutes), webhooks for payment processing (Stripe events), SSE for user-facing notifications, and WebSockets for the chat feature. Each pattern fits its use case. The mistake is picking one and forcing it everywhere.
Long polling is the bridge pattern. When you need push semantics but can't deploy SSE or WebSocket infrastructure, long polling gives you event-driven behavior over plain HTTP. Not elegant, but it works everywhere. Early Slack used long polling before migrating to WebSockets.
Webhooks + SSE is a powerful combo. Your server receives events from Stripe via webhooks (server-to-server), then pushes them to the browser via SSE (server-to-client). The two patterns compose naturally at the system boundary.
## Real-World Examples
Slack: Started with long polling for the browser client. As they scaled beyond 10 million daily active users, they migrated to WebSockets for the main message stream, cutting server-side connection overhead by ~50% and reducing message delivery latency from seconds (long-poll timeout cycle) to under 100ms. They still use webhooks for their platform API (slash commands, event subscriptions).
GitHub: Uses WebSockets for Codespaces terminal and live collaboration. Uses SSE for the Actions workflow status page (you can see text/event-stream in the network tab). Uses webhooks for their entire platform integration API. For the notification bell, they found 60-second polling was sufficient because "under a minute" latency was acceptable. A clean example of matching each pattern to its requirements.
Stripe: Built their real-time dashboard on WebSockets for bidirectional communication (subscribe to filters, receive matching events). Their payment notifications to merchants use webhooks exclusively with exponential backoff retries for 72 hours, dead letter queues after that, and HMAC-SHA256 signature verification with timestamp-based replay protection (reject events older than 5 minutes).
## How This Shows Up in Interviews
Interview tip: lead with the decision matrix
When an interviewer asks "how would you implement real-time updates," don't jump to WebSockets. Say: "The choice depends on the direction of data flow. Server-to-client only? SSE. Bidirectional? WebSockets. Server-to-server events? Webhooks. If latency tolerance is above 5 seconds, I'd start with polling."
When to bring this up proactively:
- Any system with "real-time" in the requirements (chat, notifications, live feeds)
- Whenever you're integrating with external services (payment, CI/CD, messaging)
- When discussing scaling a feature that currently polls
Depth expected at senior/staff level:
- Explain why SSE and WebSockets have different scaling characteristics
- Know Redis Pub/Sub as the multi-server WebSocket backbone
- SSE auto-reconnects with Last-Event-ID; WebSocket reconnection is manual
- HTTP/2 multiplexing eliminates the SSE "6 connections per domain" limit
- Webhook security: HMAC verification, replay protection, idempotent processing
| Interviewer asks | Strong answer |
|---|---|
| "Why not use WebSockets for everything?" | "WebSockets require sticky sessions and a pub/sub layer for multi-server. SSE works through standard HTTP infra. For server-to-client, SSE is half the operational cost." |
| "How do you handle webhook failures?" | "Idempotent receivers keyed by event_id, async processing with immediate 200, exponential backoff retries from the provider, and a safety-net reconciliation job that polls the provider API." |
| "Your SSE server has 100K connections. How?" | "Edge fan-out. Origin servers produce events, publish to Redis Pub/Sub, SSE proxy nodes hold client connections. Each proxy handles 30-50K connections." |
| "How would you migrate from polling?" | "Start with SSE for server-to-client. It's HTTP-native, so existing infra works. Add Redis Pub/Sub behind the SSE endpoint. Only add WebSockets if client-to-server messaging is needed." |
| "What about mobile clients?" | "SSE works but persistent connections drain battery. For mobile, use push notifications (APNs/FCM) for critical events and polling with exponential backoff for background sync." |
## Quick Recap
- Polling sends repeated HTTP requests on a timer, simple but wastes 90%+ of requests on empty responses at typical intervals.
- Webhooks push HTTP POST events from server to server on occurrence, requiring HMAC verification, idempotent handlers, and retry logic.
- SSE streams server-to-client events over persistent HTTP with automatic reconnection via Last-Event-ID, working through standard infrastructure.
- WebSockets provide full-duplex communication after an HTTP upgrade handshake, requiring sticky sessions and pub/sub for multi-server deployments.
- Default to SSE for server-to-client real-time; reach for WebSockets only when the client needs to send data back. Webhooks and SSE compose naturally at the system boundary.
## Related Trade-offs
- Sync vs. async communication - The broader question of request-reply vs. event-driven messaging that underlies all four patterns.
- Push vs. pull - The general architectural tradeoff that polling and webhooks are specific instances of.
- Stateful vs. stateless services - WebSocket servers are inherently stateful; SSE can be made stateless with external pub/sub.