Garry Tan

Weave - Engineering metrics for the AI era

Weave is an engineering intelligence platform that combines LLMs and ML to tell you exactly how well you're using AI and shows you how to improve.

Den Taylor

Finally, a tool that not only helps teams adopt AI but actually shows how well it's being used. The combination of setup optimization (like Cursor rules linting from GitHub) and smart suggestions based on actual usage is incredibly thoughtful.

Brennan Lupyrypa

@den_taylor We agree! ty

Mu Joe

Whoa, this is truly cool! I'm always looking for ways to better utilize AI in my workflow, and the idea of an intelligence platform that actually *shows* you how to improve — not just tells you — is kinda genius imo. So excited to see how the LLM/ML combo helps optimize my processes. How customizable is it for different AI models?

Jeremy Yan

Love the pivot story! Going from general productivity to AI-specific analytics makes sense. tbh though, how do you get devs to actually adopt another monitoring tool? Most teams I know are already drowning in metrics and dashboards...

Nitesh Padghan

Really like how Weave goes beyond “AI is cool” and actually measures if it’s working. The ROI + usage suggestions make it feel way more grounded than just tracking commit counts.

Artem Anikeev

Cool! Congratulations to the team on the launch!

William Jin

I want to measure real AI impact on my engineering projects. How granular are the insights for individual developer performance?

Steven

@william_jin We track AI performance per tool and per engineer (except for GitHub Copilot, because they don't allow per-user data collection🙄).

Richard Gu

Congrats on the launch! The perfect tool for engineering teams in 2025

Brennan Lupyrypa

@richard_gu Thank you :)

Deelaka Alawathugoda

Congratulations! Strong PLG surface.

I think the PQL here is:
1. GitHub/GitLab + Linear connected
2. First AI-coded PR merged
3. ROI view opened by a team lead

Yeah?

If you don't mind, I have a few suggestions; let me know what you think.

Add a "simulate on sample repo" function to show value instantly without touching customer code.

Maybe trigger a human touch from the team when a few integrations are added, leadership logins increase, or usage spikes.

Also, a read-only audit mode that exports a lightweight report could help shorten enterprise evaluations.

Maximiliano Redigonda

Congrats on the launch! As an engineer, I'm curious about having a platform measure my own personal impact and seeing it increase over time as I learn. Do you already support this? Is this a path you're thinking of pursuing?

Thanks!

Amir Agassi

Awesome job on the launch!!