You already know when something breaks. Observability, monitoring, and paging tools handle that. Alerts, automations, and runbooks assemble the team and start the investigation. Then you start opening browser tabs. You spend the next few hours searching pull requests (PRs), Slack threads, application performance monitoring (APM) data, and Jira tickets for answers: what changed, when did it happen, who knew about it, how big is the blast radius, and will the fix hold?
Today we’re releasing integrations with Datadog and Sentry to close that gap. Incidents and issues now flow into the Unblocked context engine alongside your code, conversations, tickets, and docs, and the context engine can also query logs and APM data as needed. Rather than gathering context from multiple sources yourself, you can ask Unblocked to assemble the relevant details and provide a summary of what happened, along with recommendations for remediation.
Why connect Datadog or Sentry to Unblocked?
Most engineering teams run heavy automation around their incident process. But reconstruction, actually connecting the dots, is still largely manual and dependent on domain expertise. This is where the minutes in your mean time to recovery (MTTR) stack up. Because Unblocked can now search incident-related context alongside incident data, you start from a synthesized summary rather than a blinking cursor.
Bringing observability data into the Unblocked platform surfaces the conversation where someone flagged a concern, the review comment where it was overridden, and the design doc that explains the constraint. The result is higher-quality root cause analysis (RCA) after incidents, because context is assembled automatically rather than depending on domain experts to remember or find it.
How to use Unblocked during an incident
Here’s a typical scenario during an incident:
Sentry reports an issue: the error event count is spiking in the payment processing service. It isn’t transient. The issue was first seen two hours ago, and customer support is reporting an increase in inbound volume.
You look in GitHub for a cause. A PR was merged three days ago: "Refactor payment processor abstraction." The description looks routine; no red flags. In the review comments someone asked, "What if processor initialization fails? Do we still attempt the refund?" The author replied: "The factory handles that; it always returns a valid processor." Just as you’re about to look at the factory, you get a Slack message: "Is this related to the payment processor deprecation?" You didn’t know about the deprecation; that thread was in #platform-updates eleven days ago. The Slack message links to a Jira ticket, which links to a design doc, which explains that the old processor is being sunset and the new one behaves differently during the migration window. This is the root cause.
An hour in, the fix turns out to be two lines. The bulk of the time to recovery was spent on reconstruction.
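To make the failure mode concrete, here is a minimal sketch of the broken assumption. All names (ProcessorFactory, issue_refund, and so on) are invented for illustration; the post doesn’t show the actual code.

```python
# Hypothetical sketch of the incident above; every name here is
# invented for illustration and is not Unblocked's or anyone's real code.

class ProcessorUnavailable(Exception):
    """Raised when no payment processor can be initialized."""

class ProcessorFactory:
    def __init__(self, legacy_sunset: bool):
        # During the migration window, the legacy processor is gone.
        self.legacy_sunset = legacy_sunset

    def get_processor(self):
        # The review comment assumed this "always returns a valid
        # processor." After the deprecation, it can return None.
        return None if self.legacy_sunset else object()

def issue_refund(factory: ProcessorFactory, amount: int) -> str:
    processor = factory.get_processor()
    # The two-line fix: check the assumption instead of trusting it.
    if processor is None:
        raise ProcessorUnavailable("no payment processor during migration")
    return f"refunded {amount}"
```

The bug itself is trivial; the expensive part is discovering that the deprecation thread, the Jira ticket, and the design doc are all relevant.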
Now let’s try this with Unblocked:
Ask @Unblocked, "What's causing the PaymentService refund errors?"
Unblocked connects the dots by searching across sources simultaneously: Sentry for the error and event details, GitHub for related PRs, the review comment that flagged the exact risk, the Slack thread discussing the processor deprecation, and the Jira ticket tracking the migration. It returns a synthesized answer: the refactored code assumes the payment processor will always initialize, but that assumption no longer holds.
What used to require manual context-switching across multiple tools now flows as a single, structured investigation.
Get started
The Datadog and Sentry integrations are available now. Connect them from the Unblocked admin panel to expedite investigations, build more durable fixes, and improve the quality of your post-incident reports. If you’re not yet using Unblocked, we’d love to show you how this works. Reach out to get a demo.