Launched this week

Radar
The missing open-source Kubernetes UI
684 followers
Radar brings your Kubernetes workflows into one fast, open-source UI: real-time topology, resources, events, Helm, GitOps, live traffic flows, security & best-practice checks, image filesystem inspection, and MCP for AI agents. Run it locally as a single binary or self-host it in-cluster with RBAC + OIDC — no account, agents, or cloud required.

Radar
Hey PH 👋 Eyal, Roy, and Nadav here - the team behind Radar. We also build Skyhook, YC W23.
We've wanted a better Kubernetes UI for a long time. kubectl is powerful, but day-to-day cluster work still ends up split across terminals, dashboards, Helm, Argo/Flux, cloud consoles, and log tools.
The existing options all have tradeoffs. Lens lost the OSS trust that made people love it. FreeLens is a welcome fork, but still carries the same heavy Electron desktop model. Headlamp is useful, but shallow once you want deeper operations - Helm, GitOps, traffic, audits. k9s is excellent if you live in the terminal, but not everyone does. And the SaaS tools often price by node and ask for a work email before they let you look at your own cluster.
So we built the Kubernetes UI we wanted: fast, local-first, open source, and not locked behind an account. We quietly shipped it a couple of months ago. The community took it past 1.4k GitHub stars and gave us way more feedback than we expected, so we kept shipping. Today is the proper launch.
What's in it:
- Topology with real ownership chains, not force-directed spaghetti
- Live event stream across all resources using Kubernetes watches, not polling
- Helm release management with diff/rollback + native Argo CD / Flux sync
- Live traffic flows via Hubble/Cilium, Caretta, or Istio
- Cost insights via OpenCost - auto-detected per namespace, workload, and node
- Cluster audit - 31 checks across security, reliability, and efficiency
- Image filesystem viewer - read container files in the UI, no exec, no pull
- Built-in MCP server - point Cursor or Claude at your cluster
Plus first-class integrations for 20+ popular K8s tools - Argo Rollouts, Karpenter, KEDA, cert-manager, Trivy, Kyverno, Velero, Knative, and more.
Single Go binary. Apache 2.0. No account required. No usage tracking. No cloud dependency.
Site: https://radarhq.io
Repo: https://github.com/skyhook-io/radar
Discord: https://radarhq.io/community/chat
Also yes, we cared about making it beautiful. K8s tools don't have to look like punishment.
Would love your feedback - what's missing, what breaks, what we got wrong. We're here all day.
@nadav_erell Finally! k9s is still my go-to, but as you said, most people may want a real GUI. Nice product!
Radar
@sammy_anagolum Thanks!
Would love to hear what you think compared to k9s once you've had a chance to test it out more :D
RiteKit Company Logo API
@nadav_erell This is a thoughtful breakdown of the UI/UX gap in Kubernetes tooling. The ownership chains visualization sounds particularly useful for teams managing complex clusters. Curious how you're thinking about the monetization angle long-term if you're committed to keeping the core open source and account-free.
Radar
Thanks @osakasaul , Roy from the Radar team here. Really glad you like it!
It's a fair question. To fund the project long term, we're experimenting with a hosted version tailored for multi-cluster orgs (offering things like cross-cluster fleet views, long-term retention, and SAML/SCIM).
However, we've documented strict community commitments so folks can hold us accountable:
- Strictly Apache 2.0: no relicensing to restrictive "open-ish" models.
- No artificial crippling: if a feature works in a single cluster, it ships in OSS.
Our bet is that the open-source Apache 2.0 version has to be genuinely great to earn trust for the hosted tier.
a cluster audit with 31 checks across security and reliability is a massive value-add for day-to-day ops. we usually run separate trivy or kyverno reports, so having that integrated into the primary ui workflow is a huge time-saver. does the audit allow for custom check injection? @nadav_erell
Radar
@vikramp7470 Thanks! Not yet, but definitely something we can consider.
If there are specific checks you think are missing, let us know - especially ones that would be fast to add. We tried to keep the set balanced rather than going overboard with too many checks, so real findings don't get lost in the noise.
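For anyone curious what custom check injection could eventually look like, here's one hypothetical shape. Everything here is invented for illustration - the interface, names, and finding format are not Radar's actual API:

```go
package main

import "fmt"

// Check is a hypothetical audit-plugin interface: each check inspects
// one resource and reports zero or more findings.
type Check interface {
	Name() string
	Run(resource map[string]any) []string
}

// runAsNonRoot flags workloads that don't explicitly drop root - the
// kind of security check an audit like this typically ships.
type runAsNonRoot struct{}

func (runAsNonRoot) Name() string { return "security/run-as-non-root" }

func (runAsNonRoot) Run(resource map[string]any) []string {
	ctx, _ := resource["securityContext"].(map[string]any)
	if v, ok := ctx["runAsNonRoot"].(bool); !ok || !v {
		return []string{"container may run as root; set runAsNonRoot: true"}
	}
	return nil
}

// audit runs every check against every resource, grouping findings by
// check name so the UI can render them per category.
func audit(resources []map[string]any, checks []Check) map[string][]string {
	findings := map[string][]string{}
	for _, c := range checks {
		for _, r := range resources {
			findings[c.Name()] = append(findings[c.Name()], c.Run(r)...)
		}
	}
	return findings
}

func main() {
	pods := []map[string]any{
		{"securityContext": map[string]any{"runAsNonRoot": true}},
		{"securityContext": map[string]any{}},
	}
	out := audit(pods, []Check{runAsNonRoot{}})
	fmt.Println(len(out["security/run-as-non-root"])) // 1
}
```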
@nadav_erell Makes sense 👍 Custom checks later would be super useful for advanced workflows.
The MCP-for-AI-agents piece is the most interesting bit and the existing comments haven't poked at it. What can an agent actually do via Radar's MCP — read-only stuff (describe pods, fetch events, summarise an outage), or destructive ops (kubectl delete, rollout restart)? That boundary decides whether anyone runs this against a prod cluster.
Radar
@sounak_bhattacharya Hi, Roy from the Radar team here. Good question!
We had exactly these considerations in mind when building it. The boundary is "non-destructive by design": no delete, no force-uninstall, no --cascade=orphan.
Reads: dashboard, resources, topology, events, pod / workload logs, changes timeline, Helm releases. Outputs are minified and secret-scrubbed (Secret .data never returned, env values redacted, logs scrubbed for token shapes).
Writes are a curated, non-destructive set: restart / scale / rollback workloads, trigger / suspend cronjobs, sync / reconcile GitOps, cordon / drain nodes, and apply_resource with dry_run. Nothing that deletes.
The other half is RBAC. Calls go through K8s with the user's identity (impersonation in OIDC / proxy mode, the agent's ServiceAccount otherwise), so the agent inherits exactly your perms - a 403 from K8s is a 403 from MCP. And in OSS, MCP listens on loopback only.
Full breakdown at /features/mcp.
Would love to hear your feedback on this approach - and if you've got ideas for where the boundary should sit differently, very open to it.
Local-first with zero cluster-side installation is the right call. When I was scaling an engineering org from 15 to 120, the biggest friction with K8s tooling was always the chicken-and-egg problem: you need cluster access to install the tool that helps you understand the cluster. Having this run as a single binary using existing kubeconfig means any engineer can get visibility without needing platform team approval first. That alone probably saves a week of onboarding time per new infrastructure hire.
Radar
@avrisimon 🎯
Radar
@jalcantara You deserve better than shoddiness :)
We basically built this because none of the existing options really did what we wanted well enough. Surprising that this is the case for k8s in 2026.
ApyHub : The All in one API Platform
nice - gave it a star! good luck!
Radar
@nikolas_dimitroulakis thank you, much appreciated!
Argonaut
Looks very useful, sharing it with my team
Radar
@surya_oruganti1 thank you! Would love to get their feedback