Launched this week

Struct
AI agent that root-causes engineering alerts
689 followers
Struct is an AI agent that root-causes engineering alerts using logs, metrics, traces, and code. Resolve incidents faster with a composable, customizable system that deploys in minutes and works with your existing DevOps workflows.

As someone who runs a production SaaS on Render with Sentry for error tracking, the alert-to-root-cause gap is real. I've spent more time correlating Sentry exceptions with Render logs and Supabase query patterns than I'd like to admit.
The fact that it drafts incident reports with timelines and commit histories is the part that caught my attention. That's usually the thing that gets skipped because everyone's too relieved the fire is out.
Does it work with Sentry as an observability source, or is it primarily geared toward Datadog/Grafana-style platforms?
Struct
Hey Hunters!
We're Deepan and Nimesh, co-founders of Struct. Today we're excited to launch the on-call agent every team deserves -- for free!
If you've been on-call, you know the drill: alert fires, you open Datadog (or Grafana, or whatever), hunt for spikes, grep through logs and code, loop in a senior engineer...rinse & repeat. Meanwhile, noisy alerts never get tuned and customer issues slip through.
Struct gets you from alert → root cause before you even open your laptop.
Within minutes of an alert firing, Struct:
✅ Pulls relevant metrics, logs, traces, monitors, and code
✅ Does a regression analysis and correlates anomalies and spikes
✅ Replies with a root cause, impact summary, and pattern analysis
✅ Drafts a full incident report with dynamically generated charts, timelines, and commit histories
Dive deeper in Slack or our app. Or hand off the full context to your favorite coding agent to ship a fix in one click.
We built Struct for lean teams without an SRE, and orgs going all-in on AI dev workflows — companies like FERMAT and Arcana already use Struct to auto-investigate thousands of alerts monthly and give every engineer the context to handle incidents on their own.
Five-minute setup; integrates with every leading observability platform plus Slack, GitHub, Linear, and Claude Code. Fully SOC 2 Type II and HIPAA compliant.
Get started free at struct.ai — no credit card required.
Questions? Hit us in the comments - we'll be around all day. Or shoot us an email at founders@struct.ai.
And as a special thanks to the Product Hunt community, if you upgrade to a paid plan, use promo code HUNTSTRUCT for 20% off for the next 3 months! 🔥
@deepan_m Spent way too long last month chasing a latency spike that turned out to be a downstream service silently retrying on timeout. Three people, two hours, just to realize the alert was pointing at a symptom. The Slack-native workflow makes sense — curious about the pattern memory though. Does it pick up on recurring issues automatically or do you need to label past incidents?
@deepan_m Congrats on the launch of Struct, the idea of AI investigating alerts automatically is compelling.
One thing I found myself thinking about while exploring the site was how new users interpret the value when they first land.
Curious which workflow tends to pull people in first.
Alert fatigue is the thing nobody warns you about until you're waking up at 4 am to a page that turns out to be nothing. If this can automatically trace an alert back to its root cause, that saves hours of digging through logs and dashboards. I've had incidents where the alert fired on a symptom three layers removed from the actual problem. How does it handle cases where the root cause is outside your codebase, like a third-party API degradation or a DNS issue?
Struct
@aitubespark thanks for the question. the short answer is that it works surprisingly well!
because it's able to autonomously do web research, it can actually, for example, pull up status pages for third party services. it can often also identify flakiness vs. more serious regressions in third party APIs by examining patterns of failed and successful calls. last week, it actually identified a serious degradation in slack's web_mention webhook hours before they updated their status page.
the caveat is that it's limited by the context that it has access to.
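For anyone curious what that kind of status-page check can look like mechanically, here's a minimal sketch. It assumes the vendor hosts their status page on Atlassian Statuspage, which exposes a machine-readable `/api/v2/status.json` endpoint; the base URL is illustrative, and this is not Struct's actual implementation:

```python
import json
import urllib.request

def parse_indicator(payload: str) -> str:
    """Extract the overall indicator ("none", "minor", "major", or
    "critical") from a Statuspage /api/v2/status.json payload."""
    return json.loads(payload)["status"]["indicator"]

def check_status(base_url: str) -> str:
    """Fetch and parse a vendor status page hosted on Statuspage."""
    url = f"{base_url}/api/v2/status.json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return parse_indicator(resp.read().decode())

# Usage (base URL is hypothetical -- substitute the vendor's real one):
# if check_status("https://status.example.com") != "none":
#     print("third-party degradation reported")
```

Cross-referencing an indicator like this against your own failed-call patterns is one way to separate vendor-side degradation from flakiness in your own code.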
Struct
@aitubespark Could not agree more. Our agent operates iteratively, coming up with hypotheses, challenging them, and looking for evidence to validate or invalidate them at every layer. When it identifies degradation of third party services, it validates against authoritative sources, like status pages, to confirm outages.
The gap between "alert fires" and "engineer understands what actually broke" is where most incident response time gets wasted — correlating metrics, logs, and traces across services is exactly the kind of tedious cross-referencing that AI should handle. The one-click handoff to a coding agent to ship the fix is a compelling end-to-end vision — how well does that work today for non-trivial root causes that span multiple services?
Struct
@svyat_dvoretski Great question! Multiple services is exactly where this becomes so powerful. Struct is able to string together logs across different services from different observability providers using correlation techniques (e.g. querying by correlation IDs, searching for known log signatures, sifting through a time range) that are ordinarily tedious to apply by hand. It constructs a timeline of the issue and iteratively goes deeper to establish a definitive root cause. It memorizes successful debugging techniques for each customer's unique architecture, so it gets even better over time. Our customers operating at large scale with many services are already reporting an 80% reduction in triage time.
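To make the correlation-ID stitching concrete, here's a minimal sketch of the general technique (illustrative names and data, not Struct's implementation): gather log lines from several services that share a correlation ID, then merge them into one chronological timeline.

```python
from datetime import datetime

def build_timeline(log_sources, correlation_id):
    """Collect log lines from multiple services that share a
    correlation ID and sort them into a single timeline.

    log_sources maps service name -> list of dicts with
    "ts" (ISO 8601), "correlation_id", and "msg" keys.
    """
    events = []
    for service, lines in log_sources.items():
        for line in lines:
            if line.get("correlation_id") == correlation_id:
                events.append({
                    "ts": datetime.fromisoformat(line["ts"]),
                    "service": service,
                    "msg": line["msg"],
                })
    # Chronological order across all services
    return sorted(events, key=lambda e: e["ts"])

# Example data (hypothetical):
logs = {
    "api-gateway": [
        {"ts": "2024-05-01T12:00:01", "correlation_id": "req-42",
         "msg": "POST /checkout received"},
    ],
    "payments": [
        {"ts": "2024-05-01T12:00:03", "correlation_id": "req-42",
         "msg": "card charge timed out"},
        {"ts": "2024-05-01T12:00:02", "correlation_id": "req-99",
         "msg": "unrelated request"},
    ],
}
timeline = build_timeline(logs, "req-42")
```

The tedious part in real incidents is that each observability provider has its own query syntax and retention window, so doing this by hand means re-running roughly the same search in several UIs.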
@svyat_dvoretski @nimeshmc Struct pulling logs, metrics, traces, and code into a single root cause analysis is the hard part. Most teams settle for manual grep workflows and siloed dashboards. The regression analysis plus correlation across telemetry sources is where on-call time gets reclaimed. The pattern memory angle, learning from past incidents to improve future investigations, compounds value as your architecture gets messier. The edge case to watch: root causes spanning uninstrumented service boundaries where Struct can't pull telemetry.
Struct
@svyat_dvoretski @piroune_balachandran Absolutely. Struct can provide a hypothesis with next steps to confirm evidence from sources it doesn't have access to, which is useful in itself as guidance for where engineers should look. That is often the hard part for really messy issues; the fix itself is usually simple.
Riveter (YC F24)
This is awesome! It would be great to manage my Sentry anxiety and take work off my plate. Are you all mostly for enterprise or does it work for small teams, too?
Struct
@abbygrills absolutely! we built our self-service tier so small teams can get up and running in <10 minutes for free. if you have any issues getting set up, just reach out.
Cekura
Great team!! I still remember the all-nighters we pulled dealing with both sides of the problem: noisy alerts and critical ones getting missed and escalated by customers. Using Struct has been a game changer!
Would love to hear more on the roadmap ahead though @nimeshmc @deepan_m
Struct
@deepan_m @kabra_sidhant So glad we've been able to save you guys engineering time. We've got a crazy roadmap coming up. To name a few:
Autonomous awareness of past & ongoing incidents
Reducing alert noise
Incident triage for agents
Congrats on your launch guys! Can you share any creative ways you've seen teams get value from Struct, or ways that surprised you?
Struct
@adam_suskin Thanks Adam! Our customers are pretty awesome: some of them are using it to debug why their agents are going off the rails by pointing us to their agents’ traces as logs and asking Struct to analyze them!