How Rumbliq Works: From Endpoint to Alert in Under a Minute
Rumbliq is built on a deceptively simple idea: continuously fetch your third-party API endpoints, extract their response schema, compare it against a stored baseline, and alert you the moment something changes.
Simple in concept. Non-trivial in execution.
Here's a technical walkthrough of exactly how it works — from the moment you add a monitor to the moment an alert lands in your Slack.
Step 1: Adding a Monitor
You give Rumbliq an endpoint to watch. At minimum, that's a URL and a name. Optionally, you configure:
- HTTP method and body — for POST/PUT endpoints that require a payload to return meaningful data
- Headers — for auth tokens, API version headers, or other required request headers
- Stored credentials — for endpoints that require OAuth tokens, API keys, or Basic auth (stored encrypted in Rumbliq's credential vault using AES-256-GCM with per-user keys derived via HKDF-SHA512)
- Check interval — how often Rumbliq should poll the endpoint (Free: 60 min minimum; Pro: 15 min; Team: 5 min; Enterprise: 1 min)
- Alert severity threshold — whether to alert on any structural change or only breaking ones
You can add monitors manually, or import them in bulk from OpenAPI specs, Swagger files, Postman collections, or cURL commands.
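Putting the options above together, a monitor definition might look like the following sketch. The field names here are illustrative assumptions, not Rumbliq's documented configuration format:

```typescript
// Illustrative monitor definition; field names are assumptions,
// not Rumbliq's documented configuration format.
const monitor = {
  name: "Orders API (create order)",
  url: "https://api.example.com/v1/orders",
  method: "POST",
  headers: { "X-Api-Version": "2024-06" },
  body: { dryRun: true },
  credentialId: "cred_payments_prod", // reference into the credential vault
  intervalMinutes: 15,                // subject to the plan's minimum
  alertOn: "breaking_only",           // or "any_change"
};
```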
Step 2: The Scheduler
Rumbliq uses BullMQ (a Redis-backed job queue) to manage its check schedule. When you create a monitor, Rumbliq registers a repeating cron job in BullMQ for that monitor's configured interval. Up to 10 workers run concurrently, ensuring checks fire on time even at scale.
When a check is due, BullMQ enqueues a job for that monitor. A worker picks it up and calls executeCheck().
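To make the scheduling concrete, here is a sketch of how a monitor's interval could be clamped to its plan minimum before being handed to BullMQ's repeatable-job options. The clamping logic is an assumption based on the plan minimums listed in Step 1, not Rumbliq's actual code:

```typescript
// Hypothetical sketch: derive BullMQ repeat options from a monitor's
// requested interval, clamped to the plan's minimum from Step 1.
type Plan = "free" | "pro" | "team" | "enterprise";

const MIN_INTERVAL_MS: Record<Plan, number> = {
  free: 60 * 60_000,       // 60 min
  pro: 15 * 60_000,        // 15 min
  team: 5 * 60_000,        // 5 min
  enterprise: 60_000,      // 1 min
};

function repeatOptionsFor(plan: Plan, requestedMinutes: number) {
  const every = Math.max(requestedMinutes * 60_000, MIN_INTERVAL_MS[plan]);
  // The result would be passed as queue.add("check", data, { repeat: { every } })
  return { every };
}
```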
Step 3: Fetching the Endpoint
The checker makes an authenticated HTTP request to your configured endpoint. A few things happen here worth noting:
SSRF protection is enforced on every request. Before any outbound HTTP call, Rumbliq validates the target URL against a blocklist of private IP ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16), loopback addresses, link-local and cloud metadata endpoints (169.254.169.254), and any admin-banned endpoints. This prevents the monitoring system itself from becoming a vector for server-side request forgery attacks.
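A minimal sketch of that kind of blocklist check is below. This is an illustration, not Rumbliq's implementation; a production guard must also resolve DNS and re-validate the resolved IP to defeat DNS rebinding:

```typescript
// Hypothetical SSRF guard: reject URLs whose host is a private,
// loopback, link-local, or cloud-metadata address.
function isBlockedHost(rawUrl: string): boolean {
  const host = new URL(rawUrl).hostname;
  if (host === "localhost") return true;
  const m = host.match(/^(\d+)\.(\d+)\.(\d+)\.(\d+)$/);
  if (!m) return false; // non-IP hostnames need DNS resolution before checking
  const a = Number(m[1]);
  const b = Number(m[2]);
  if (a === 127 || a === 10) return true;            // loopback, 10.0.0.0/8
  if (a === 172 && b >= 16 && b <= 31) return true;  // 172.16.0.0/12
  if (a === 192 && b === 168) return true;           // 192.168.0.0/16
  if (a === 169 && b === 254) return true;           // link-local, incl. 169.254.169.254
  return false;
}
```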
Credentials are injected at request time from the encrypted credential vault. Bearer tokens, API key headers, Basic auth, and custom headers are all supported. OAuth2 client credentials flows handle automatic token refresh.
Response size is capped at 5MB to prevent memory pressure from pathologically large responses.
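One way to enforce a cap like that is to stream the body and abort as soon as the limit is crossed, rather than buffering the whole response first. A minimal sketch, again not Rumbliq's actual code:

```typescript
// Hypothetical sketch: read a response body, aborting once it exceeds a byte cap.
const MAX_BYTES = 5 * 1024 * 1024; // 5MB

async function readCapped(res: Response, maxBytes = MAX_BYTES): Promise<string> {
  const reader = res.body!.getReader();
  const chunks: Uint8Array[] = [];
  let total = 0;
  for (;;) {
    const { done, value } = await reader.read();
    if (done || !value) break;
    total += value.length;
    if (total > maxBytes) {
      await reader.cancel(); // stop pulling bytes from the socket
      throw new Error(`response exceeded ${maxBytes} byte cap`);
    }
    chunks.push(value);
  }
  // Reassemble the chunks and decode as UTF-8
  const merged = new Uint8Array(total);
  let offset = 0;
  for (const c of chunks) {
    merged.set(c, offset);
    offset += c.length;
  }
  return new TextDecoder().decode(merged);
}
```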
Step 4: Schema Extraction
This is the core of Rumbliq's approach, and where it differs from simple status monitoring.
Rumbliq doesn't store the raw response body. It doesn't do semantic comparison. It extracts the structural schema of the JSON response using a recursive algorithm (extractSchema()):
{
"user": {
"id": "usr_abc123",
"name": "Alice",
"plan": "pro",
"createdAt": "2026-01-01T00:00:00Z"
},
"monitors": [
{ "id": "mon_xyz", "status": "active" }
]
}
Becomes:
{
type: "object",
properties: {
user: {
type: "object",
properties: {
id: { type: "string" },
name: { type: "string" },
plan: { type: "string" },
createdAt: { type: "string" }
}
},
monitors: {
type: "array",
items: {
type: "object",
properties: {
id: { type: "string" },
status: { type: "string" }
}
}
}
}
}
This schema extraction strips out actual values while preserving structure and types. The result is a stable fingerprint of the response's shape — not its content.
Why not store the full response? Because response values change constantly: timestamps, IDs, paginated results. You don't care that id is now "usr_xyz456" instead of "usr_abc123". You care if id disappears from the response entirely, or changes type from string to integer.
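A minimal sketch of that kind of recursive extraction is below. It assumes arrays are fingerprinted by their first element; Rumbliq's actual extractSchema() may handle details like heterogeneous arrays differently:

```typescript
// Minimal sketch of recursive schema extraction: values are dropped,
// structure and types are kept.
type Schema =
  | { type: "null" | "boolean" | "number" | "string" }
  | { type: "array"; items: Schema | null }
  | { type: "object"; properties: Record<string, Schema> };

function extractSchema(value: unknown): Schema {
  if (value === null) return { type: "null" };
  if (Array.isArray(value)) {
    // Assumption: the first element stands in for the array's item shape
    return { type: "array", items: value.length ? extractSchema(value[0]) : null };
  }
  if (typeof value === "object") {
    const properties: Record<string, Schema> = {};
    for (const [key, v] of Object.entries(value as object)) {
      properties[key] = extractSchema(v);
    }
    return { type: "object", properties };
  }
  return { type: typeof value as "boolean" | "number" | "string" };
}
```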
Step 5: Baseline Comparison
Every monitor has a stored baseline — the schema fingerprint from when the monitor was created (or last reset). The checker calls diffSchemas(), which recursively walks both schema trees and produces a structured diff:
Fields that were added — new keys that appear in the current response but weren't in the baseline.
Fields that were removed — keys present in the baseline but missing from the current response. These are typically the most dangerous, as your code is likely reading these fields.
Type changes — a field that was a string is now a number, or was an object and is now null. Often indicates a data model migration.
Structure changes — a field that was an object is now an array, or vice versa.
Each change is classified by severity:
- Breaking: removed fields, type changes, structural inversions
- Non-breaking: added fields, new optional properties
The diff is stored in the check record in the database, timestamped to the millisecond.
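The recursive walk can be sketched as follows. The change kinds and severity split mirror the classification above, but the shape of the diff records is an assumption for illustration, not Rumbliq's actual diffSchemas() output:

```typescript
// Hedged sketch of a recursive schema diff with severity classification.
type SchemaNode = { type: string; properties?: Record<string, SchemaNode>; items?: SchemaNode | null };
type Change = {
  path: string;
  kind: "added" | "removed" | "type_changed";
  severity: "breaking" | "non_breaking";
};

function diffSchemas(base: SchemaNode, cur: SchemaNode, path = "$"): Change[] {
  if (base.type !== cur.type) {
    // Covers type changes and structural inversions (object <-> array)
    return [{ path, kind: "type_changed", severity: "breaking" }];
  }
  const changes: Change[] = [];
  if (base.type === "object") {
    const bp = base.properties ?? {};
    const cp = cur.properties ?? {};
    for (const key of Object.keys(bp)) {
      if (!(key in cp)) changes.push({ path: `${path}.${key}`, kind: "removed", severity: "breaking" });
      else changes.push(...diffSchemas(bp[key], cp[key], `${path}.${key}`));
    }
    for (const key of Object.keys(cp)) {
      if (!(key in bp)) changes.push({ path: `${path}.${key}`, kind: "added", severity: "non_breaking" });
    }
  }
  if (base.type === "array" && base.items && cur.items) {
    changes.push(...diffSchemas(base.items, cur.items, `${path}[]`));
  }
  return changes;
}
```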
Step 6: Alerting
When drift is detected, fireAlerts() notifies all configured alert destinations for that monitor. Rumbliq supports:
- Webhook — POST a structured JSON payload to any URL. Includes the full diff, timestamp, monitor info, and a link to the check detail.
- Slack — Formatted Slack message with a summary of what changed, severity, and a direct link to the check in Rumbliq.
- Email — Human-readable alert via Resend, with the diff rendered clearly and a call-to-action link.
Alerts fire within seconds of a drift detection. On a 1-minute polling interval, you'll know about an API change within 1–2 minutes of it happening.
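For the webhook destination, the payload might look something like this. The key names and URL are illustrative assumptions, not Rumbliq's documented webhook schema:

```typescript
// Hypothetical webhook payload shape; key names are illustrative,
// not Rumbliq's documented schema.
const alertPayload = {
  event: "drift_detected",
  monitor: { id: "mon_xyz", name: "Orders API" },
  severity: "breaking",
  diff: [
    { path: "$.user.plan", kind: "removed", severity: "breaking" },
  ],
  detectedAt: "2026-01-01T12:34:56.789Z",
  checkUrl: "https://app.example.com/checks/chk_123", // illustrative link
};
```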
Step 7: The Check Record
Every check — whether it detected drift or not — is recorded in the database. The check record contains:
- Status: ok, drift_detected, or error
- HTTP response code and latency
- Extracted schema for this check
- The diff (if any drift was detected)
- Timestamp
This gives you a complete historical timeline of every API response you've ever monitored. You can see exactly when a field disappeared, what the diff looked like, and how your baseline has evolved over time.
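One plausible TypeScript shape for this record, with field names mirroring the list above (these names are assumptions, not Rumbliq's actual database schema):

```typescript
// Hypothetical shape for a check record; names mirror the fields listed
// above but are not Rumbliq's actual schema.
type CheckStatus = "ok" | "drift_detected" | "error";

interface CheckRecord {
  monitorId: string;
  status: CheckStatus;
  httpStatus: number | null; // null when the request itself failed
  latencyMs: number | null;
  schema: unknown;           // the extracted schema fingerprint for this check
  diff: unknown | null;      // present only when drift was detected
  checkedAt: string;         // ISO-8601 timestamp, millisecond precision
}

const example: CheckRecord = {
  monitorId: "mon_xyz",
  status: "ok",
  httpStatus: 200,
  latencyMs: 84,
  schema: { type: "object" },
  diff: null,
  checkedAt: "2026-01-01T12:34:56.789Z",
};
```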
The Seismograph
Rumbliq's dashboard includes a visual called the Seismograph — a timeline view that shows the check history for each monitor. Quiet periods (no drift) appear as a flat line. Drift events appear as spikes. You can see at a glance which APIs are stable and which ones are restless.
Resetting Your Baseline
Sometimes API changes are intentional — a provider releases a new version of their API and you've updated your code accordingly. In that case, you reset the baseline to accept the new schema as the new normal. The old baseline is archived; the new one becomes your reference point for future checks.
Import and Bootstrap
For teams with many endpoints to monitor, Rumbliq supports bulk import:
- OpenAPI / Swagger: Parses the spec and creates one monitor per endpoint, with correct method, headers, and parameter structure pre-filled.
- Postman collections: Imports all requests, preserving auth configs and example payloads.
- cURL commands: Paste a curl invocation and Rumbliq extracts the URL, method, headers, and body automatically.
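The cURL case can be sketched with a toy parser like the one below. Real curl parsing (shell quoting, --data variants, shorthand flags) is considerably messier, so treat this as an illustration of the idea rather than Rumbliq's importer:

```typescript
// Toy sketch: extract method, URL, headers, and body from a simple,
// well-formed curl invocation. Not a full curl grammar.
function parseCurl(cmd: string) {
  const tokens = cmd.match(/'[^']*'|"[^"]*"|\S+/g) ?? [];
  const unquote = (t: string) => t.replace(/^['"]|['"]$/g, "");
  let method = "GET";
  let url = "";
  let body: string | null = null;
  const headers: Record<string, string> = {};
  for (let i = 0; i < tokens.length; i++) {
    const t = tokens[i];
    if (t === "-X" || t === "--request") {
      method = unquote(tokens[++i]);
    } else if (t === "-H" || t === "--header") {
      const parts = unquote(tokens[++i]).split(":");
      headers[parts[0].trim()] = parts.slice(1).join(":").trim();
    } else if (t === "-d" || t === "--data") {
      body = unquote(tokens[++i]);
      if (method === "GET") method = "POST"; // curl's implicit method switch
    } else if (t.startsWith("http")) {
      url = unquote(t);
    }
  }
  return { method, url, headers, body };
}
```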
This is the full pipeline: schedule → fetch → extract → diff → alert → record. Running on intervals as short as 1 minute, across hundreds or thousands of monitors, continuously — so you know about API drift before your users do.
Beyond Single Endpoints: Sequences
For workflows that span multiple API calls — authenticate, fetch data, submit an order — Rumbliq supports multi-step API sequences. Chain HTTP requests together, pass variables between steps (like auth tokens), and verify that your entire API workflow works end-to-end. Each step can independently enable schema drift detection against its own baseline. If step 3 of your checkout flow breaks, you know immediately — and you know exactly which step failed and why.
Sequences are available on all plans (3 on free, scaling up with paid tiers).
Start monitoring your APIs free → 25 monitors, 3 sequences, no credit card required.