Price Alert
Design a price alert system: inverted-index matching against 50M alerts, a cooldown state machine that eliminates notification storms, and idempotent fan-out to email, push, and SMS.
What is a price alert system?
A price alert system watches prices across products and financial instruments and notifies users when a target threshold is crossed. The apparent simplicity hides a matching problem: when a price update arrives for one of millions of tracked items, which of the millions of active alerts does it trigger? A naive scan collapses immediately at scale. The interesting engineering is an inverted index that turns a price update into a list of triggered alerts in microseconds, plus idempotent multi-channel delivery that fires exactly once even when prices bounce around the threshold. It is a strong interview question because it combines an inverted-index data structure problem, event-driven pipeline design, idempotent notification delivery, and a nuanced alert state machine, testing four distinct skills together.
Functional Requirements
Core Requirements
- Users can create a price alert on any tracked product or stock ticker, specifying a target price and direction (price drops to or below target, or rises to or above target).
- Users receive a notification when the current price crosses their threshold.
- Users can choose which channels receive the notification: email, push notification, or SMS.
- Users can view, pause, and delete their active alerts.
Below the Line (out of scope)
- Order execution or automated trading once the alert fires.
- Complex multi-condition alerts (for example, "alert if price drops AND trading volume exceeds X").
- Price history charts, trend analysis, and historical price data.
- Social features such as sharing alerts or following other users' watchlists.
Order execution belongs to a trading system that sits downstream of the alert. Linking the two would couple notification delivery to financial settlement, which carries entirely different reliability and compliance requirements. To add it, the alert would publish an event to a trading engine asynchronously; the alert system itself stays stateless with respect to the trade.
Complex conditional alerts could be built by adding a rule evaluation layer in front of the current threshold check. Each price update would be run through a small expression evaluator. The data model would store a rule AST instead of a single target_price scalar. Deliberately deferred because it doesn't change the core ingestion and matching pipeline.
Price history is below the line because it doesn't change the ingestion or matching pipeline. To add it, capture every PriceUpdate event to a time-series DB (TimescaleDB or InfluxDB) on the write path. The alert matching pipeline reads only the latest price; the history reader would be a separate query path that doesn't touch the alert evaluation logic at all.
Social alert features don't interact with the evaluation pipeline. They'd live in a separate follow-graph service that aggregates public alert activity. The alert system would publish anonymized trigger events to a feed topic, and the social service would consume them independently. The matching and notification pipeline wouldn't change.
The hardest part in scope: When a price update for a single item arrives, the system must instantly identify which of potentially thousands of active alerts for that item to fire, then deliver each notification exactly once even if the price oscillates around the threshold for minutes.
Non-Functional Requirements
Core Requirements
- Notification latency: Alert fires within 60 seconds of a triggering price update. Most users accept near-real-time; sub-second is not required and would over-engineer the ingestion pipeline.
- Throughput: Ingest up to 50,000 price updates per second across all tracked items during peak trading hours, supported without batching at the ingestion layer.
- Scale: Support 50 million active alerts across 5 million tracked items.
- Availability: 99.9% uptime for alert creation and management. A brief spike that delays notifications by a few minutes is tolerable; silently missing a triggered alert is not.
- Exactly-once delivery: Each alert fires exactly once per trigger event — no duplicates, no silent drops. Duplicate notifications erode user trust faster than latency does.
Below the Line
- Sub-second notification delivery (would require a dedicated low-latency push path; unnecessary for this use case).
- Real-time price data fidelity for high-frequency trading (our 60-second SLA allows seconds of price lag from the feed).
Read/write ratio: Price updates arrive far more often than users create or modify alerts. For every alert creation, we see roughly 1,000 price updates across the platform. The evaluation pipeline is the hot path. This 1000:1 ratio means write throughput for alert management is modest; the system must be optimized for high-volume, low-latency price evaluation, not for alert CRUD. Every architectural decision downstream traces back to this skew.
Core Entities
- Alert: A user's price threshold for a specific item. Captures the target price, direction (lte or gte), preferred notification channels, and current firing state (active, triggered, paused).
- Item: A tracked product from a retailer catalog or a financial instrument (stock, ETF, crypto). The unit of price updates.
- PriceUpdate: The current price of an item at a point in time, sourced from a price feed. Ephemeral; only the latest price for each item needs to be retained outside the event log.
- Notification: A record of a dispatched notification message. Serves as the idempotency log; before firing, the system checks this table to prevent duplicate sends for the same alert trigger event.
- User: The account that owns alerts and holds channel credentials (email address, push token, phone number).
The primary relationship is User → Alert → Item. An item can have thousands of alerts from different users. Full schema and indexing decisions are deferred to the deep dives.
API Design
One endpoint per core functional requirement, grouped by the requirement it satisfies.
FR 1 and FR 3: Create a price alert with channel preferences:
POST /alerts
Authorization: Bearer {token}
Body: {
item_id: "AAPL",
target_price: 150.00,
direction: "lte",
channels: ["push", "email"]
}
Response 201: {
alert_id: "alrt_8fk2x",
item_id: "AAPL",
target_price: 150.00,
direction: "lte",
channels: ["push", "email"],
status: "active",
created_at: "2026-03-29T10:00:00Z"
}
direction: "lte" means "alert me when price falls to or below target". Using an explicit direction field (rather than inferring it from whether the current price is above or below target) makes the intent unambiguous and allows both "price drop" and "price recovery" alerts to coexist on the same item.
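The threshold check itself reduces to one comparison per direction. A minimal sketch, with is_triggered as an illustrative helper (not part of the API surface):

```python
def is_triggered(direction: str, target_price: float, current_price: float) -> bool:
    """Return True when an alert's threshold is satisfied by the current price."""
    if direction == "lte":   # fire when price drops to or below target
        return current_price <= target_price
    if direction == "gte":   # fire when price rises to or above target
        return current_price >= target_price
    raise ValueError(f"unknown direction: {direction}")

# Both a drop alert and a recovery alert can coexist on the same item:
assert is_triggered("lte", 150.00, 149.50)       # drop alert fires at $149.50
assert not is_triggered("gte", 155.00, 149.50)   # recovery alert does not
```

Because the direction is stored explicitly, the check never depends on what the price happened to be at creation time.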
FR 4: List alerts (paginated):
GET /alerts?status=active&cursor=alrt_8fk2x&limit=25
Response 200: {
alerts: [...],
next_cursor: "alrt_9gm3y"
}
Cursor-based (keyset) pagination on alert_id, ordered by created_at. Offset pagination breaks when rows are inserted between pages; keyset pagination stays stable.
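Keyset pagination can be sketched in a few lines. This is an in-memory stand-in for the SQL predicate `WHERE (created_at, alert_id) > (?, ?)`; the keyset_page helper is illustrative:

```python
def keyset_page(alerts, cursor=None, limit=25):
    """Return the page of alerts strictly after `cursor`.

    `alerts` must be sorted by (created_at, alert_id); `cursor` is the
    (created_at, alert_id) pair of the last row on the previous page --
    the in-memory analogue of WHERE (created_at, alert_id) > (?, ?).
    """
    if cursor is not None:
        alerts = [a for a in alerts if (a["created_at"], a["alert_id"]) > cursor]
    page = alerts[:limit]
    next_cursor = (page[-1]["created_at"], page[-1]["alert_id"]) if page else None
    return page, next_cursor
```

Because the predicate is on row values rather than a row offset, inserts before the cursor never shift later pages.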
FR 4: Update an alert (pause, resume, modify target):
PATCH /alerts/{alert_id}
Body: { status: "paused" } // or: { target_price: 145.00 }
Response 200: { alert_id, status, target_price, ... }
PATCH over PUT because users typically update one field at a time (pause/resume or adjust threshold). A full PUT would require the client to re-send all fields.
FR 4: Delete an alert:
DELETE /alerts/{alert_id}
Response 204: (empty)
Hard delete. The alert is removed from the matching index immediately so no further notifications are evaluated. Notification history is preserved under the Notification entity.
Internal (price ingestion, write-only, not user-facing):
POST /internal/prices
Body: {
updates: [
{ item_id: "AAPL", price: 149.50, timestamp: "2026-03-29T10:01:33Z", source: "nasdaq_feed" },
{ item_id: "NVDA", price: 800.00, timestamp: "2026-03-29T10:01:33Z", source: "nasdaq_feed" }
]
}
Response 202: { accepted: 2 }
202 Accepted is correct here: the system acknowledges receipt but alert evaluation is asynchronous. The source feed gets a fast acknowledgment without waiting for match evaluation and notification dispatch. In practice this endpoint is an internal Kafka publish, not an HTTP call; showing it as HTTP makes the contract explicit for the interview.
High-Level Design
1. Users can create and manage price alerts
The write path: a user submits an alert, it lands in persistent storage, and the system is ready to match incoming price updates against it.
Components:
- Client: Web or mobile app sending POST /alerts and PATCH/DELETE requests.
- Alert Service: Validates the request, persists the alert, and returns the alert_id.
- Alert DB (PostgreSQL): Primary store for alert records. Queried for management operations (list, update, delete).
Request walkthrough:
- Client sends POST /alerts with item_id, target_price, direction, and channels.
- Alert Service validates that item_id exists and target_price is a positive number.
- Alert Service inserts the alert row into Alert DB with status = active.
- Alert Service returns { alert_id, status } to the client.
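That walkthrough can be sketched with an in-memory dict standing in for Alert DB (create_alert and alert_db are illustrative names, not the real service code):

```python
import uuid

alert_db = {}   # stand-in for the PostgreSQL alerts table

def create_alert(user_id, item_id, target_price, direction, channels):
    """Validate, persist with status=active, and return the new alert_id."""
    if direction not in ("lte", "gte"):
        raise ValueError("direction must be 'lte' or 'gte'")
    if target_price <= 0:
        raise ValueError("target_price must be positive")
    alert_id = f"alrt_{uuid.uuid4().hex[:5]}"
    alert_db[alert_id] = {"user_id": user_id, "item_id": item_id,
                          "target_price": target_price, "direction": direction,
                          "channels": channels, "status": "active"}
    return alert_id
```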
This covers alert creation and management. Price evaluation hasn't started yet; the DB is purely a record store at this point.
2. When a price crosses a threshold, fire the matching alerts
This is the core matching problem. Price updates arrive for thousands of items per second, and the system needs to identify which active alerts to fire for each update. I'd spend the bulk of an interview on this section, because the naive approach fails in a very instructive way.
Phase 1: Naive scan
The simplest idea is to query the database on every price update: find all active alerts for this item where the threshold is breached.
Components:
- Price Ingestion Worker: Consumes a price feed and for each update runs a DB query.
- Alert DB: Scanned for matching alerts on each price change.
Request walkthrough:
- Price feed publishes a new price for item AAPL: $149.50.
- Ingestion Worker runs: SELECT * FROM alerts WHERE item_id = 'AAPL' AND status = 'active' AND direction = 'lte' AND target_price >= 149.50.
- For each matched alert, insert a notification job into a queue.
Why this breaks: At 50,000 price updates per second, this is 50,000 database queries per second. Each query hits a B-tree index on (item_id, direction, target_price), which is fast for a single lookup but not for 50K concurrent ones. More critically, if one item (say AAPL) has 200,000 active alerts, scanning and loading 200,000 rows on every tick is a full index range scan every few milliseconds. The database becomes the bottleneck for evaluation, not for storage.
Phase 2: Inverted index in Redis
The fix is to move the matching index out of PostgreSQL and into a structure purpose-built for this query pattern. The key insight is: the matching query for a "price drops below target" alert is equivalent to a range query on a sorted set. Redis sorted sets (ZRANGEBYSCORE) perform exactly this operation in O(log N + K) time, where K is the number of triggered alerts.
Components:
- Alert Service: On alert creation, writes to both Alert DB and the Redis sorted set for the item.
- Redis Sorted Sets (Inverted Index): Key per item and direction. Score = target_price. Member = alert_id.
- Price Ingestion Worker: On each price update, calls ZRANGEBYSCORE instead of a DB scan.
Redis key structure:
alerts:lte:{item_id} → sorted set { score: target_price, member: alert_id }
alerts:gte:{item_id} → sorted set { score: target_price, member: alert_id }
For an LTE alert (fire when price drops to or below target): fetch all alerts with target_price >= new_price.
# On a price update for item_id with new_price (redis-py client r; enqueue
# and notification_queue are the queue abstraction from the walkthrough):
# LTE alerts fire when new_price <= target_price, i.e. scores in [new_price, +inf)
triggered_lte = r.zrangebyscore(f"alerts:lte:{item_id}", new_price, "+inf")
# GTE alerts fire when new_price >= target_price, i.e. scores in (-inf, new_price]
triggered_gte = r.zrangebyscore(f"alerts:gte:{item_id}", "-inf", new_price)
for alert_id in set(triggered_lte) | set(triggered_gte):
    enqueue(notification_queue, {"alert_id": alert_id, "item_id": item_id,
                                 "triggered_price": new_price})
Redis sorted set membership is by score, not by key. ZRANGEBYSCORE key min max returns all members with scores between min and max in O(log N + K). For an item with 200,000 active LTE alerts and a price drop that triggers 5 alerts, this is roughly O(log 200,000 + 5) = ~18 operations instead of loading 200,000 rows.
Evolved request walkthrough:
- Price Ingestion Worker receives AAPL: $149.50 from the feed.
- Worker calls ZRANGEBYSCORE alerts:lte:AAPL 149.50 +inf on Redis: it returns the alert IDs whose target_price is 149.50 or higher (these fire on a drop to or below their target, and the new price is now at or below them).
- Worker enqueues each triggered alert_id to the Notification Queue.
- Alert Service marks those alerts as triggered in Alert DB (asynchronously).
Partitioning Kafka by item_id ensures all price updates for the same item go to the same partition and worker, eliminating race conditions in the sorted set lookups. I'd call out this partitioning choice explicitly in the interview.
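Kafka's default partitioner routes records by hashing the message key, so producing with key = item_id pins each item to exactly one partition and therefore one consumer. A stand-in sketch of that mapping (Kafka actually uses murmur2; md5 here is just a stable illustrative hash):

```python
import hashlib

NUM_PARTITIONS = 12   # assumed topic size, for illustration

def partition_for(item_id: str) -> int:
    """Stable key -> partition mapping, mimicking keyed Kafka production.
    Real Kafka uses murmur2 on the key; md5 is an illustrative stand-in."""
    digest = hashlib.md5(item_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

# Every AAPL update lands on the same partition, hence the same worker:
assert partition_for("AAPL") == partition_for("AAPL")
```

Because all of an item's updates are serialized through one worker, the Redis read-then-enqueue sequence for that item never races with itself.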
3. Deliver notifications across multiple channels
Decoupling the notification delivery from the matching pipeline is essential. If the email provider is slow or down, we don't want it stalling price evaluation. The alert-triggers Kafka topic acts as the buffer.
Components:
- Notification Dispatcher: Consumes alert-triggers, loads the full alert record from Alert DB (to get channel preferences and user contact info), and dispatches to each registered channel.
- Channel Adapters: Thin wrappers around SendGrid (email), APNs/FCM (push), and Twilio (SMS). Isolated behind a common interface so channels can be added independently.
- Notification Log (PostgreSQL): Idempotency table. Before dispatching, check whether a notification for this (alert_id, trigger_event_id) has already been sent.
Request walkthrough:
- Notification Dispatcher reads { alert_id: "alrt_8fk2x", triggered_price: 149.50 } from the alert-triggers topic.
- Dispatcher loads the alert from Alert DB: confirms status is triggered, reads channels: ["push", "email"] and user_id.
- Dispatcher loads user contact info: push token from device registry, email address from user profile.
- Dispatcher checks Notification Log: has a notification for this alert_id + trigger_event_id already been sent?
- If not, dispatches to push adapter and email adapter in parallel.
- Dispatcher writes to Notification Log on successful send.
Kafka consumer retries can replay the same alert trigger if the dispatcher crashes after dispatching but before committing the offset. Without an idempotency check, the user receives duplicate notifications. The Notification Log prevents this: the idempotency key is (alert_id, trigger_event_id) where trigger_event_id is derived from the Kafka partition + offset, providing a stable deduplication key across retries.
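The dedup check can be sketched with an in-memory set standing in for the Notification Log's unique constraint; in PostgreSQL this would be an INSERT ... ON CONFLICT DO NOTHING on (alert_id, trigger_event_id). All names here are illustrative:

```python
notification_log = set()   # stand-in for UNIQUE(alert_id, trigger_event_id)
sent = []                  # record of actual channel sends, for illustration

def dispatch_once(alert_id: str, partition: int, offset: int, channels: list) -> bool:
    """Send at most once per trigger event, keyed by (alert_id, partition:offset)."""
    trigger_event_id = f"{partition}:{offset}"
    key = (alert_id, trigger_event_id)
    if key in notification_log:            # replayed Kafka message: skip
        return False
    for channel in channels:
        sent.append((alert_id, channel))   # stand-in for SendGrid/APNs/Twilio calls
    notification_log.add(key)              # INSERT ... ON CONFLICT DO NOTHING
    return True
```

Note the residual window: a crash after the provider call but before the log write still duplicates on replay. Closing it fully means writing the log row first in a pending state (or using a transactional outbox), at the cost of extra bookkeeping.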
4. Users can view and manage their active alerts
The read path for alert management is straightforward: list all alerts for the authenticated user, paginated, filterable by status. This doesn't require any new components; it's a direct read from Alert DB.
Components:
- Alert Service (same as FR 1): Handles GET /alerts with cursor-based pagination and PATCH/DELETE for updates.
- Alert DB: Queried with a simple index on (user_id, created_at) for the list view.
Important sync requirement: When a user deletes or pauses an alert, the system must remove the corresponding entry from the Redis sorted set immediately, otherwise the matching engine continues to fire notifications for a deleted alert. Alert Service handles this in the same request: delete from Alert DB, commit, then ZREM from the Redis sorted set. Redis cannot participate in the PostgreSQL transaction, so a ZREM that fails after the commit leaves a stale index entry; the dispatcher's re-read of the alert row acts as a backstop, and a periodic reconciliation job cleans up any remaining drift.
Request walkthrough (list):
- Client sends GET /alerts?status=active&limit=25.
- Alert Service queries SELECT * FROM alerts WHERE user_id = ? AND status = 'active' ORDER BY created_at LIMIT 25.
- Returns the paginated alert list with next_cursor based on the last alert_id in the result.
Request walkthrough (delete):
- Client sends DELETE /alerts/{alert_id}.
- Alert Service deletes the row from Alert DB: DELETE FROM alerts WHERE alert_id = ? AND user_id = ?, and commits.
- After the commit succeeds, Alert Service calls ZREM alerts:lte:{item_id} alert_id (or gte) on Redis to remove the alert from the matching index.
- Returns HTTP 204.
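A minimal sketch of this delete path, using in-memory dicts as stand-ins for the Postgres table and the Redis sorted set (all names and the seeded data are illustrative):

```python
# Stand-ins: one existing alert, indexed for matching.
alert_db = {"alrt_8fk2x": {"user_id": "u1", "item_id": "AAPL", "direction": "lte"}}
redis_zsets = {"alerts:lte:AAPL": {"alrt_8fk2x": 150.00}}

def delete_alert(user_id: str, alert_id: str) -> bool:
    """Delete the DB row first, then best-effort ZREM from the matching index."""
    alert = alert_db.get(alert_id)
    if alert is None or alert["user_id"] != user_id:
        return False                            # not found or not the owner
    del alert_db[alert_id]                      # commit point in the real system
    key = f"alerts:{alert['direction']}:{alert['item_id']}"
    redis_zsets.get(key, {}).pop(alert_id, None)  # ZREM; reconciliation covers failures
    return True
```

Ordering matters: removing the DB row first means a crash between the two steps leaves only a stale index entry (a ghost that the dispatcher's DB re-read will drop), never a live alert missing from the index.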
Potential Deep Dives
1. How do we efficiently match alerts when prices update at scale?
The matching step is the hot path. At 50,000 price updates per second with 5 million tracked items, each update must be evaluated against up to hundreds of thousands of alerts per item in under a few milliseconds. There are three options of escalating sophistication.
2. What happens when a price bounces around the threshold? (One-shot vs. cooldown semantics)
This is the most underrated tricky part of this design. If AAPL's price oscillates between $149.40 and $150.20 every 30 seconds for an hour, and a user set a $150.00 LTE alert, they could receive 60+ notifications in an hour. This destroys trust. The question is: when should an alert re-arm itself after firing?
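The answer this design settles on (see the cheat sheet) is a cooldown state machine: fire once, enter cooling_down, leave the matching index, and rearm only after the window and only if the price has recovered. A sketch of those semantics under assumed names (COOLDOWN_SECONDS and the dict-based alert record are illustrative):

```python
COOLDOWN_SECONDS = 15 * 60   # assumed window; tune per product

def on_trigger(alert: dict, now: float) -> None:
    """Fire once, then enter cooldown and leave the matching index."""
    alert["status"] = "cooling_down"
    alert["cooldown_until"] = now + COOLDOWN_SECONDS
    # also: ZREM alerts:{direction}:{item_id} alert_id

def maybe_rearm(alert: dict, current_price: float, now: float) -> bool:
    """Rearm only after the window AND only if the threshold is no longer met;
    otherwise the alert refires the instant it rearms."""
    if alert["status"] != "cooling_down" or now < alert["cooldown_until"]:
        return False
    still_met = (current_price <= alert["target_price"]
                 if alert["direction"] == "lte"
                 else current_price >= alert["target_price"])
    if still_met:
        return False   # price never recovered; stay quiet
    alert["status"] = "active"   # also: ZADD back into the sorted set
    return True
```

Under this machine the $149.40/$150.20 oscillation produces exactly one notification per genuine crossing, not sixty.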
3. How do we scale the price ingestion pipeline beyond a single region?
At 50,000 price updates per second globally, a single-region Kafka cluster becomes both a throughput bottleneck and a single point of failure. Stock markets and e-commerce platforms have natural geographic price distribution.
Final Architecture
The separation of concerns is the core insight: the price ingestion pipeline and the alert management API never touch the same hot path. Ingestion workers are stateless, reading only from Kafka and Redis, with no write contention on the Alert DB during the matching phase. The Alert DB is updated asynchronously for status tracking, not for matching. This keeps the evaluation latency under 5ms even at 50K updates per second.
Interview Cheat Sheet
- State the scale upfront: 50 million active alerts, 50,000 price updates per second. These two numbers explain every downstream decision.
- The naive matching approach (SQL scan per price update) fails immediately: 50,000 queries per second, with hot items forcing index range scans over hundreds of thousands of rows on every tick.
- Use Redis sorted sets as an inverted index: key = alerts:{direction}:{item_id}, score = target_price, member = alert_id. A single ZRANGEBYSCORE returns all triggered alerts in O(log N + K).
- Partition Kafka by item_id so all updates for the same item go to the same worker, preventing race conditions in the Redis sorted set.
- Decouple matching from notification delivery with a Kafka alert-triggers topic. If email is slow or down, it doesn't stall price evaluation.
- Idempotency in notification dispatch: use (alert_id, kafka_partition + offset) as the deduplication key. Kafka consumer retries can replay triggers; without this, users get duplicate notifications.
- Cooldown state machine over one-shot: after firing, set alert status to cooling_down and remove it from the Redis sorted set. Rearm automatically if the price has recovered after the cooldown window. This prevents notification storms on volatile prices.
- The rearm check matters: only rearm if the current price no longer satisfies the threshold. Otherwise, the alert fires again immediately upon rearm.
- Sync Redis on every alert management operation: ZADD on create, ZREM on delete or pause. If Redis and PostgreSQL drift, ghost notifications fire for deleted alerts.
- For multi-region: deploy Redis sorted sets per region, route alerts to the region where the price originates (not where the user is). A US stock alert by a Tokyo user still lives in the US Redis cluster.
- State the OWASP concern: the internal /internal/prices endpoint must be network-segmented (not publicly reachable). A rogue price injection could trigger millions of alerts and cause an SMS bill explosion.