How to Detect Breaking API Changes Automatically
Your third-party API works perfectly on Monday. By Thursday, field names have changed, required parameters have been added, and your application is silently returning garbage data to users. Nobody told you. The API provider's changelog has a brief note buried three pages in.
This is the reality of depending on external APIs — and it's why automatic detection of breaking changes isn't optional for production systems.
What Counts as a Breaking API Change?
Not every API change is breaking. Adding a new optional field is generally safe. But the following changes will break your integration:
- Removed fields — your code references `response.user_id` and it no longer exists
- Renamed fields — `user_id` becomes `userId` (case change only, still breaking)
- Changed data types — a field that returned a number now returns a string
- Changed enum values — `status: "active"` becomes `status: "enabled"`
- New required request parameters — existing API calls now fail with 400
- Changed response status codes — what returned 200 now returns 201 or 404
- Modified nested structures — `response.address.city` becomes `response.location.city`
- Authentication changes — new token scopes required, different header formats
The dangerous ones are changes that don't cause HTTP errors. When a field is renamed, the endpoint still returns 200 OK. Basic uptime monitoring stays green. Your application silently processes incomplete data.
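To see why these silent failures matter, consider the renamed-field case in JavaScript: reading a missing property doesn't throw, it just yields `undefined`, so the bad data flows straight through (field names here follow the `user_id`/`userId` example above):

```javascript
// Response shape before the provider's change
const before = { user_id: 42, status: "active" };
// The same endpoint after a silent rename: still 200 OK, different key
const after = { userId: 42, status: "active" };

// Code written against the old schema keeps "working"
console.log(before.user_id); // 42
console.log(after.user_id);  // undefined, with no error thrown

// Downstream, that undefined quietly corrupts derived data
const record = { id: after.user_id, active: after.status === "active" };
console.log(record); // { id: undefined, active: true }
```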
Why Manual Detection Fails
The naive approach is to watch API changelogs and test after releases. This fails for several reasons:
You don't control the release schedule. API providers ship on their timeline, not yours. Changes can land at 3am on a Sunday.
Changelogs are incomplete. Many providers don't document schema changes because they're considered internal. The field was an "implementation detail" until you shipped code that depended on it.
You integrate with too many APIs. A single modern service might integrate with 10-20 third-party APIs. Manually monitoring changelogs for all of them isn't sustainable.
Staging environments don't help. Some providers don't have staging environments. Others roll out changes to production incrementally, meaning you only see them in production traffic.
The Automatic Detection Approach
Automatic detection works by establishing a baseline and comparing every new response against it. Here's how to build this into your workflow:
Step 1: Capture a Schema Baseline
The first time you call an API endpoint, record the response structure — not the values, but the schema. Field names, data types, whether fields are present or null, nesting structure.
```javascript
// Capture the schema of a live response (structure, not values)
async function captureSchema(endpoint, headers) {
  const response = await fetch(endpoint, { headers });
  const data = await response.json();
  return extractSchema(data);
}

// Flatten an object into dot-path -> type pairs
function extractSchema(obj, path = '') {
  const schema = {};
  for (const [key, value] of Object.entries(obj)) {
    const fullPath = path ? `${path}.${key}` : key;
    if (value === null) {
      schema[fullPath] = 'null';
    } else if (Array.isArray(value)) {
      schema[fullPath] = 'array';
      // Only recurse into the first element when it's an object;
      // recursing into a primitive would iterate its characters/keys
      if (value.length > 0 && typeof value[0] === 'object' && value[0] !== null) {
        Object.assign(schema, extractSchema(value[0], `${fullPath}[]`));
      }
    } else if (typeof value === 'object') {
      Object.assign(schema, extractSchema(value, fullPath));
    } else {
      schema[fullPath] = typeof value;
    }
  }
  return schema;
}
```
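As a quick sanity check, here is `extractSchema` run against a sample payload (the helper is repeated so the snippet runs on its own; the sample fields are illustrative):

```javascript
// Same extractSchema as above, repeated so this snippet is self-contained
function extractSchema(obj, path = '') {
  const schema = {};
  for (const [key, value] of Object.entries(obj)) {
    const fullPath = path ? `${path}.${key}` : key;
    if (value === null) {
      schema[fullPath] = 'null';
    } else if (Array.isArray(value)) {
      schema[fullPath] = 'array';
      if (value.length > 0 && typeof value[0] === 'object' && value[0] !== null) {
        Object.assign(schema, extractSchema(value[0], `${fullPath}[]`));
      }
    } else if (typeof value === 'object') {
      Object.assign(schema, extractSchema(value, fullPath));
    } else {
      schema[fullPath] = typeof value;
    }
  }
  return schema;
}

const sample = {
  user_id: 42,
  name: 'Ada',
  address: { city: 'London', zip: 'N1' },
  orders: [{ total: 9.99 }]
};

console.log(extractSchema(sample));
// {
//   user_id: 'number',
//   name: 'string',
//   'address.city': 'string',
//   'address.zip': 'string',
//   orders: 'array',
//   'orders[].total': 'number'
// }
```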
Step 2: Compare on Every Poll
On each subsequent poll, extract the schema from the new response and diff it against your baseline:
```javascript
function diffSchemas(baseline, current) {
  const changes = [];

  // Check for removed fields
  for (const field of Object.keys(baseline)) {
    if (!(field in current)) {
      changes.push({ type: 'REMOVED', field, was: baseline[field] });
    }
  }

  // Check for type changes
  for (const field of Object.keys(current)) {
    if (field in baseline && baseline[field] !== current[field]) {
      changes.push({
        type: 'TYPE_CHANGED',
        field,
        was: baseline[field],
        now: current[field]
      });
    }
  }

  // Check for new fields (informational, not always breaking)
  for (const field of Object.keys(current)) {
    if (!(field in baseline)) {
      changes.push({ type: 'ADDED', field, now: current[field] });
    }
  }

  return changes;
}
```
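Fed Monday's baseline and Thursday's schema, the diff reports a rename as a removal plus an addition (a compact restatement of `diffSchemas` is included so the snippet runs standalone; the schemas are illustrative):

```javascript
// Compact restatement of diffSchemas from Step 2, same behavior
function diffSchemas(baseline, current) {
  const changes = [];
  for (const field of Object.keys(baseline)) {
    if (!(field in current)) {
      changes.push({ type: 'REMOVED', field, was: baseline[field] });
    } else if (baseline[field] !== current[field]) {
      changes.push({ type: 'TYPE_CHANGED', field, was: baseline[field], now: current[field] });
    }
  }
  for (const field of Object.keys(current)) {
    if (!(field in baseline)) {
      changes.push({ type: 'ADDED', field, now: current[field] });
    }
  }
  return changes;
}

// Monday's baseline vs. Thursday's response: user_id was renamed,
// and the provider added a plan field
const baseline = { user_id: 'number', 'address.city': 'string', status: 'string' };
const current = { userId: 'number', 'address.city': 'string', status: 'string', plan: 'string' };

console.log(diffSchemas(baseline, current));
// [
//   { type: 'REMOVED', field: 'user_id', was: 'number' },
//   { type: 'ADDED', field: 'userId', now: 'number' },
//   { type: 'ADDED', field: 'plan', now: 'string' }
// ]
```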
Step 3: Alert on Breaking Changes
Not all detected changes need the same urgency. Classify and route accordingly:
```javascript
const BREAKING_TYPES = ['REMOVED', 'TYPE_CHANGED'];
const WARNING_TYPES = ['ADDED'];

function classifyChanges(changes) {
  return {
    breaking: changes.filter(c => BREAKING_TYPES.includes(c.type)),
    warnings: changes.filter(c => WARNING_TYPES.includes(c.type))
  };
}

async function alertOnBreakingChanges(endpoint, changes) {
  const { breaking } = classifyChanges(changes);
  if (breaking.length > 0) {
    await sendAlert({
      severity: 'critical',
      message: `Breaking changes detected in ${endpoint}`,
      changes: breaking
    });
  }
}
```
Polling Frequency Matters
How quickly you detect a breaking change depends on how frequently you poll. The math is simple: if you poll every 15 minutes and a breaking change lands at 2:01am, you won't know until 2:15am at the earliest — and that's only if the alert reaches someone immediately.
For production APIs:
- Critical integrations (payments, auth, core data): poll every 1-5 minutes
- Important integrations: poll every 5-15 minutes
- Low-traffic integrations: poll every 30-60 minutes
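The three steps can be wired into one polling loop. This is a sketch, not a production scheduler: the helpers are injected rather than hard-coded so the loop works with the `captureSchema`, `diffSchemas`, and `alertOnBreakingChanges` functions from the steps above, and the endpoint URL and interval are illustrative:

```javascript
// Minimal monitor: establish a baseline on first poll, then diff
// every subsequent poll against it and alert on changes.
function createMonitor({ captureSchema, diffSchemas, alertOnBreakingChanges }) {
  const baselines = new Map(); // endpoint -> schema baseline

  return async function poll(endpoint, headers) {
    const schema = await captureSchema(endpoint, headers);
    if (!baselines.has(endpoint)) {
      baselines.set(endpoint, schema); // first poll establishes the baseline
      return [];
    }
    const changes = diffSchemas(baselines.get(endpoint), schema);
    await alertOnBreakingChanges(endpoint, changes);
    return changes;
  };
}

// Wire it up for a critical integration (interval is illustrative):
// const poll = createMonitor({ captureSchema, diffSchemas, alertOnBreakingChanges });
// setInterval(() => poll('https://api.example.com/v1/users/me', {
//   Authorization: `Bearer ${process.env.API_KEY}`
// }).catch(console.error), 60_000);
```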
Using a Dedicated API Monitoring Service
Building schema diffing from scratch works but requires ongoing maintenance. Dedicated tools like Rumbliq handle this automatically:
- Point Rumbliq at your API endpoints — provide the endpoint URL, any required headers or auth tokens
- Rumbliq captures the baseline — it records the full response schema on first poll
- Continuous monitoring — Rumbliq polls on your configured interval
- Instant alerts — when a field is removed, type changes, or the response structure shifts, you get notified immediately — before users see errors
Rumbliq also handles authentication flows (OAuth, API key rotation), multi-step sequences where you need to authenticate before checking a protected endpoint, and SSL certificate monitoring as a bonus.
Integrating Detection into Your CI/CD Pipeline
Beyond runtime monitoring, you can add API contract tests to your CI pipeline to catch breaking changes during deployments:
```yaml
# .github/workflows/api-contracts.yml
name: API Contract Tests

on: [push, pull_request]

jobs:
  test-contracts:
    runs-on: ubuntu-latest
    env:
      API_ENDPOINT: ${{ vars.API_ENDPOINT }}
      API_KEY: ${{ secrets.API_KEY }}
    steps:
      - uses: actions/checkout@v4
      - name: Run API contract tests
        run: |
          # Fetch the current response
          curl -s "$API_ENDPOINT" \
            -H "Authorization: Bearer $API_KEY" \
            -o current_response.json
          # Compare against the committed schema baseline
          node scripts/validate-schema.js \
            --baseline schemas/api-baseline.json \
            --current current_response.json
```
This catches breaking changes from the upstream API before your code ships — though it doesn't catch changes that happen between deployments.
What to Do When You Detect a Breaking Change
Detection is only valuable if you have a response plan:
- Immediate triage — is this affecting users now? Check error rates and user-facing behavior.
- Isolate the affected code — which parts of your codebase consume this API field?
- Check provider communication — is this intentional? Is there a migration guide?
- Implement a defensive fallback — `response.userId ?? response.user_id` while you migrate
- Update your schema baseline — once you've adapted, update the baseline to the new schema
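The defensive-fallback step can be wrapped in a small normalizer so the rename is handled in one place while you migrate. A minimal sketch, using the `user_id`/`userId` rename from earlier as the example:

```javascript
// Tolerate both the old and the new field name during migration.
// Prefer the new name, fall back to the old one, and log when the
// fallback fires so you know when it's safe to delete.
function getUserId(response) {
  if (response.userId !== undefined) return response.userId;
  if (response.user_id !== undefined) {
    console.warn('getUserId: falling back to legacy user_id field');
    return response.user_id;
  }
  return null; // neither present: surface as missing, not undefined
}

console.log(getUserId({ userId: 42 }));  // 42
console.log(getUserId({ user_id: 42 })); // 42 (with a warning)
console.log(getUserId({}));              // null
```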
Start Catching Changes Before They Catch You
The first time your monitoring catches a breaking API change at 2am instead of your users catching it at 9am, the value of automatic detection becomes obvious.
Related Posts
- How to Detect Breaking API Changes
- API Schema Drift vs. Breaking Changes
- Third-Party API Breaking Changes Detection
Start monitoring your APIs with Rumbliq → the free plan includes schema drift detection, no credit card required.