Set Up Slack Alerts for API Breaking Changes
You shouldn't have to check a dashboard to find out if a critical API changed. When a breaking change happens, your team should know immediately — in the channel where they're already working.
This tutorial shows you how to connect Rumbliq to Slack so your team gets an instant notification the moment a monitored API changes its schema. We'll also cover how to use Rumbliq's outbound webhooks for custom integrations with PagerDuty, Discord, Microsoft Teams, or any endpoint that accepts HTTP POST requests.
Prerequisites
- A Rumbliq account with at least one monitor set up
- A Slack workspace where you want to receive alerts
- Slack admin permissions to create an incoming webhook (or ask your Slack admin)
Part 1: Configure Slack Alerts in Rumbliq
Step 1: Create a Slack Incoming Webhook
First, create the webhook in Slack:
- Go to api.slack.com/apps and click Create New App
- Choose From scratch, name it `Rumbliq Alerts`, and select your workspace
- In the left sidebar, click Incoming Webhooks
- Toggle Activate Incoming Webhooks to On
- Click Add New Webhook to Workspace
- Choose the channel where you want alerts (e.g., `#api-alerts` or `#engineering`)
- Click Allow
- Copy the webhook URL — it looks like: `https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX`

Tip: Create a dedicated `#api-drift-alerts` channel. This keeps API change notifications organized and searchable, separate from general engineering chat.
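Before moving on, you can optionally confirm the webhook works outside of Rumbliq. This minimal sketch posts a test message using Node 18+'s built-in fetch; `SLACK_WEBHOOK_URL` stands in for the URL you just copied (incoming webhooks accept a JSON body with a `text` field):

```typescript
// Sanity-check the new Slack incoming webhook by posting a test message.
// SLACK_WEBHOOK_URL is a placeholder for the URL copied in the last step.
const SLACK_WEBHOOK_URL = process.env.SLACK_WEBHOOK_URL ?? ''

// Incoming webhooks accept a JSON body with a `text` field.
const payload = { text: ':bell: Test message from the Rumbliq setup guide' }

if (SLACK_WEBHOOK_URL) {
  fetch(SLACK_WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  }).then(res => console.log(res.ok ? 'Delivered' : `Slack returned ${res.status}`))
}
```

If the message appears in your channel, the URL is correct and you can paste it into Rumbliq with confidence.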
Step 2: Add the Alert in Rumbliq
- Log in to rumbliq.com
- Click Alerts in the left sidebar
- Click New Alert
- Choose Slack as the destination type
- Paste your Slack webhook URL
- Give it a name, e.g. `Slack - #api-alerts`
Step 3: Configure Alert Severity
Choose which types of changes should trigger the alert:
| Severity | What triggers it |
|---|---|
| Breaking changes | Required fields removed, field types changed, endpoints returning errors |
| Non-breaking changes | New optional fields added, response structure additions |
| Monitor errors | Endpoint unreachable, auth failures, timeout |
For a high-traffic #api-alerts channel, we recommend enabling all three. For a narrower #incidents channel, enable only breaking changes and errors.
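In your own tooling, these tiers reduce to a simple predicate. A hypothetical sketch follows; the `AlertFilter` shape mirrors the three rows in the table but is not Rumbliq's actual config format:

```typescript
// Hypothetical severity filter: `AlertFilter` mirrors the three toggles in
// the table above, but is NOT Rumbliq's actual configuration format.
type Change = { field: string; changeType: string; breaking: boolean }

interface AlertFilter {
  breakingChanges: boolean
  nonBreakingChanges: boolean
  monitorErrors: boolean
}

// Decide whether a check result should fire a given alert destination.
function shouldNotify(filter: AlertFilter, changes: Change[], monitorError = false): boolean {
  if (monitorError) return filter.monitorErrors
  if (changes.some(c => c.breaking)) return filter.breakingChanges
  return changes.length > 0 && filter.nonBreakingChanges
}
```

With this shape, the narrower `#incidents` recommendation above is simply `{ breakingChanges: true, nonBreakingChanges: false, monitorErrors: true }`.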
Step 4: Link Monitors to the Alert
By default, a new alert applies to all your monitors. To restrict it:
- In the alert settings, under Monitors, uncheck All monitors
- Select only the monitors you want to route to this Slack channel
This is useful for larger teams — payment-related drift goes to #payments-eng, infrastructure-related drift goes to #infra.
Step 5: Test the Alert
Click Test Alert in the alert settings. Rumbliq sends a sample notification to your Slack channel:
🔔 Rumbliq Test Alert
Monitor: Stripe Payment Intents
This is a test notification from Rumbliq.
If you see it in Slack, your integration is working. If not, double-check the webhook URL — Slack webhook URLs are long and easy to truncate accidentally.
What a Real Alert Looks Like
When Rumbliq detects a breaking change, your Slack message will look something like this:
🚨 Breaking API Change Detected
Monitor: Stripe Payment Intents
URL: https://api.stripe.com/v1/payment_intents
Detected at: 2026-03-27 14:32:11 UTC
Changes:
• amount: type changed number → string (breaking)
• currency: changed from required → optional (breaking)
• amount_decimal: new field added (non-breaking)
View full diff → https://rumbliq.com/monitors/mon_abc123/checks/chk_xyz
For non-breaking changes:
ℹ️ API Schema Update (Non-Breaking)
Monitor: GitHub OAuth
URL: https://api.github.com/user
Detected at: 2026-03-27 09:15:44 UTC
Changes:
• notification_email: new field added (non-breaking)
• two_factor_authentication: new field added (non-breaking)
View full diff → https://rumbliq.com/monitors/mon_ghi789/checks/chk_abc
The direct link takes your team straight to the Rumbliq diff view so they can understand the full change in context.
Part 2: Use the Webhook Alert Type for Custom Integrations
Rumbliq's Webhook alert type sends a structured HTTP POST to any URL when drift is detected. This lets you build custom integrations with any platform.
Webhook Payload Format
When Rumbliq fires a webhook alert, it sends:
{
"event": "drift_detected",
"monitor": {
"id": "mon_abc123",
"name": "Stripe Payment Intents",
"url": "https://api.stripe.com/v1/payment_intents"
},
"check": {
"id": "chk_xyz789",
"timestamp": "2026-03-27T14:32:11Z",
"hasBreakingChanges": true,
"driftDetected": true
},
"changes": [
{
"field": "amount",
"changeType": "type_change",
"before": "number",
"after": "string",
"breaking": true
},
{
"field": "amount_decimal",
"changeType": "field_added",
"breaking": false
}
]
}
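If you consume this payload in TypeScript, it can be typed roughly as follows. The field names come from the example above; which fields are optional is an assumption (`before`/`after` appear only on type changes in the sample):

```typescript
// Approximate TypeScript shape for the webhook payload shown above.
interface RumbliqChange {
  field: string
  changeType: string // e.g. 'type_change', 'field_added'
  before?: string // present for type changes in the example
  after?: string
  breaking: boolean
}

interface RumbliqWebhookPayload {
  event: 'drift_detected'
  monitor: { id: string; name: string; url: string }
  check: {
    id: string
    timestamp: string // ISO 8601
    hasBreakingChanges: boolean
    driftDetected: boolean
  }
  changes: RumbliqChange[]
}

// Narrow an unknown request body before trusting its fields.
function isDriftEvent(body: any): body is RumbliqWebhookPayload {
  return body?.event === 'drift_detected' && Array.isArray(body?.changes)
}
```

A guard like `isDriftEvent` keeps the rest of your handler fully typed without casting `req.body` blindly.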
Create a Webhook Alert
- In Rumbliq, go to Alerts → New Alert
- Choose Webhook as the destination type
- Paste your target URL
- Optionally add custom headers (e.g., `Authorization: Bearer your_secret_token`) to authenticate inbound requests on your side
Example: Receive Webhooks in an Express Server
Here's a minimal Node.js/Express handler that receives Rumbliq alerts and logs breaking changes:
import express from 'express'
const app = express()
app.use(express.json())
app.post('/webhooks/rumbliq', (req, res) => {
const { event, monitor, check, changes } = req.body
if (event !== 'drift_detected') {
return res.status(200).send('ok')
}
const breakingChanges = changes.filter((c: any) => c.breaking)
if (breakingChanges.length > 0) {
console.error(`🚨 Breaking changes on ${monitor.name}:`)
breakingChanges.forEach((change: any) => {
console.error(` - ${change.field}: ${change.changeType}`)
})
// Trigger your incident workflow here
// e.g., create a PagerDuty incident, post to your ops channel, etc.
}
res.status(200).send('ok')
})
app.listen(3000, () => console.log('Webhook receiver running on port 3000'))
Always return 200 OK promptly — Rumbliq considers any non-2xx response a delivery failure and will retry.
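If you added a custom `Authorization` header in the alert settings, the handler above should also reject requests that don't carry it. A minimal sketch of the check; the header name and token are whatever you configured:

```typescript
// Shared-secret check for the custom header configured in Rumbliq's alert
// settings. `authHeader` is req.headers.authorization in the handler above;
// the token is whatever you pasted into the alert's custom headers.
function isAuthorized(authHeader: string | undefined, secret: string): boolean {
  return authHeader === `Bearer ${secret}`
}

// In the Express handler, before touching req.body:
//   if (!isAuthorized(req.headers.authorization, process.env.RUMBLIQ_WEBHOOK_SECRET!)) {
//     return res.status(401).send('unauthorized')
//   }
```

Note that a 401 is a non-2xx response, so Rumbliq will retry the delivery; that is acceptable here, since an unauthenticated request should never be processed anyway.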
Integrate with PagerDuty
For critical APIs where breaking changes require an immediate on-call page, route Rumbliq webhooks to PagerDuty using their Events API v2:
import express from 'express'
const app = express()
app.use(express.json())
const PAGERDUTY_ROUTING_KEY = process.env.PAGERDUTY_ROUTING_KEY!
app.post('/webhooks/rumbliq', async (req, res) => {
// Respond immediately to avoid delivery failure
res.status(200).send('ok')
const { monitor, check, changes } = req.body
const breakingChanges = changes.filter((c: any) => c.breaking)
if (!check.hasBreakingChanges || breakingChanges.length === 0) return
// Create PagerDuty incident
await fetch('https://events.pagerduty.com/v2/enqueue', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
routing_key: PAGERDUTY_ROUTING_KEY,
event_action: 'trigger',
dedup_key: `rumbliq-${monitor.id}-${check.id}`,
payload: {
summary: `Breaking API change: ${monitor.name}`,
severity: 'critical',
source: 'Rumbliq',
custom_details: {
monitor_url: monitor.url,
breaking_changes: breakingChanges.map((c: any) =>
`${c.field}: ${c.changeType}`
).join(', '),
rumbliq_check_url: `https://rumbliq.com/monitors/${monitor.id}/checks/${check.id}`
}
}
})
})
})
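Because the `dedup_key` above is deterministic (monitor id plus check id), you can also close the incident programmatically with PagerDuty's `resolve` action. When to call this is up to your workflow; the Rumbliq payload in this tutorial only covers `drift_detected`, so any "recovery" signal is something you'd supply yourself:

```typescript
// Sketch: build a `resolve` event for PagerDuty's Events API v2, reusing the
// deterministic dedup_key from the trigger so PagerDuty matches the open
// incident.
function buildResolveEvent(routingKey: string, monitorId: string, checkId: string) {
  return {
    routing_key: routingKey,
    event_action: 'resolve' as const,
    dedup_key: `rumbliq-${monitorId}-${checkId}`,
  }
}

// POST it exactly like the trigger:
// await fetch('https://events.pagerduty.com/v2/enqueue', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(buildResolveEvent(key, 'mon_abc123', 'chk_xyz789')),
// })
```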
Integrate with Microsoft Teams
Microsoft Teams uses its own incoming webhook format. Here's how to forward Rumbliq alerts to a Teams channel:
app.post('/webhooks/rumbliq', async (req, res) => {
res.status(200).send('ok')
const { monitor, check, changes } = req.body
if (!check.driftDetected) return
const TEAMS_WEBHOOK_URL = process.env.TEAMS_WEBHOOK_URL!
const breakingChanges = changes.filter((c: any) => c.breaking)
const emoji = breakingChanges.length > 0 ? '🚨' : 'ℹ️'
const severity = breakingChanges.length > 0 ? 'Breaking Change' : 'Non-Breaking Update'
await fetch(TEAMS_WEBHOOK_URL, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
'@type': 'MessageCard',
'@context': 'http://schema.org/extensions',
themeColor: breakingChanges.length > 0 ? 'FF0000' : 'FFA500',
summary: `${emoji} API ${severity}: ${monitor.name}`,
sections: [{
activityTitle: `${emoji} ${severity}: ${monitor.name}`,
activitySubtitle: monitor.url,
facts: changes.map((c: any) => ({
name: c.field,
value: `${c.changeType}${c.breaking ? ' ⚠️ breaking' : ''}`
})),
potentialAction: [{
'@type': 'OpenUri',
name: 'View in Rumbliq',
targets: [{
os: 'default',
uri: `https://rumbliq.com/monitors/${monitor.id}`
}]
}]
}]
})
})
})
Part 3: Routing Alerts to the Right Channels
For larger engineering teams, routing all alerts to a single channel creates noise. Here's a strategy for routing by API criticality:
| Channel | Alert type | Monitors |
|---|---|---|
| #incidents | Breaking changes only | Payment APIs, auth APIs |
| #api-drift | All changes | All monitors |
| #infra-alerts | Errors only | Infrastructure APIs |
| PagerDuty | Breaking changes | Payment APIs, auth APIs |
Set this up in Rumbliq by creating multiple alert destinations, each with:
- Different Slack webhook URLs (pointing to different channels)
- Different severity filters
- Different monitor selections
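If you'd rather centralize this logic in your own webhook receiver than maintain several Rumbliq destinations, the same routing table can live in code. A sketch in which the monitor ids, env-var names, and `Destination` shape are all hypothetical:

```typescript
// Sketch of doing the routing yourself in a single webhook receiver.
// Monitor ids, env-var names, and the Destination shape are hypothetical.
type Severity = 'breaking' | 'non_breaking' | 'error'

interface Destination {
  name: string
  url: string // Slack incoming webhook URL for that channel
  severities: Severity[]
  monitors: string[] | 'all'
}

const destinations: Destination[] = [
  {
    name: '#incidents',
    url: process.env.INCIDENTS_WEBHOOK_URL ?? '',
    severities: ['breaking', 'error'],
    monitors: ['mon_payments', 'mon_auth'],
  },
  {
    name: '#api-drift',
    url: process.env.API_DRIFT_WEBHOOK_URL ?? '',
    severities: ['breaking', 'non_breaking', 'error'],
    monitors: 'all',
  },
]

// All destinations that should receive an event for this monitor + severity.
function route(monitorId: string, severity: Severity): Destination[] {
  return destinations.filter(d =>
    d.severities.includes(severity) &&
    (d.monitors === 'all' || d.monitors.includes(monitorId))
  )
}
```

The tradeoff: routing in code gives you one place to audit, while multiple Rumbliq destinations need no server at all.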
Verifying Your Setup End-to-End
Before relying on these alerts in production, verify the full pipeline works:
1. Trigger a test alert: In Rumbliq, go to your alert and click Test Alert. Confirm it arrives in Slack.
2. Simulate drift detection: If you have a test API endpoint under your control, change its response structure and wait for Rumbliq to detect it. Verify the full alert arrives with the correct diff.
3. Verify webhook delivery: If you use the Webhook alert type, check your server logs to confirm Rumbliq reached your endpoint successfully.
4. Test alert routing: If you have multiple alert destinations, send a test alert from each and verify it lands in the right channel.
Troubleshooting
No alert in Slack after test:
- Check that the webhook URL wasn't truncated when pasting
- Verify the Slack app is still installed in your workspace (Settings → Integrations)
- Check if the target channel was deleted or archived
Webhook deliveries failing:
- Ensure your endpoint returns `200 OK` within 5 seconds
- Check that your firewall doesn't block requests from Rumbliq's IP ranges
- Verify any authentication headers are correct
Getting too many alerts (noise):
- Switch non-critical monitors to non-breaking-only or error-only alerts
- Create separate alert destinations with different severity filters
- Use monitor grouping to route alerts by team ownership
Summary
In under 15 minutes, you've set up:
- A Slack channel dedicated to API drift alerts
- Rumbliq wired to send immediate notifications on breaking changes
- Optionally: a custom webhook receiver for PagerDuty, Teams, or any integration
Your team will know about breaking API changes the moment Rumbliq detects them — without anyone having to check a dashboard.