Mo Ashique Kuthini

Aura - Semantic version control for AI coding agents on top of Git

Legacy Git tracks text; Aura tracks mathematical logic. By hashing your AST instead of lines, Aura provides flawless traceability for AI-generated code. Block undocumented AI commits, surgically rewind broken functions with the Amnesia Protocol, and orchestrate massive code generation—all while saving 95% on LLM tokens. 100% local. Apache 2.0 Open Source.

Mo Ashique Kuthini
Hey Product Hunt! 👋 I'm Mo, CEO of Naridon (naridon.com), and today we’re open-sourcing Aura.

At Naridon, our main business is building complex AI Search Optimization (AIO) infrastructure for e-commerce brands. We spend our days working deeply with LLMs to optimize how models like ChatGPT and Perplexity index and recommend products.

Because we build AI products, we rely heavily on autonomous AI agents (like Cursor and Claude) to write our code. But over the last year we hit a massive bottleneck: Git was built for humans typing linearly, not for AI agents generating 4,000 lines of non-linear code per minute. When our agents hallucinated, standard text diffs produced chaotic, unresolvable merge conflicts that brought our sprints to a halt.

We had to build Aura for our own team's sanity. It is a "Semantic Time Machine" that stops AI agents from breaking our production environments, and today we’re sharing it with the world for the benefit of the agentic coding future. Instead of tracking text lines, Aura natively parses your codebase into an Abstract Syntax Tree (AST) locally (supporting Rust, Python, TypeScript, and JavaScript).

🚀 What Aura gives you for free (Apache 2.0):

* The Semantic Scalpel (`aura rewind`): Revert a single broken function or class the AI wrote without losing the rest of the good code in the file.
* The Amnesia Protocol (`--amnesia`): Surgically wipe an AI's chat memory of a specific coding hallucination so it doesn't get stuck in a recursive failure loop.
* The Gatekeeper (`aura capture-context`): A parasitic Git hook that hard-blocks `git commit` if the AI's natural-language intent doesn't mathematically match the AST nodes it modified.
* Native GSD Orchestration (`aura plan`): We integrated the "Get Shit Done" methodology directly into the Rust core. It X-rays your AST Merkle-Graph and builds mathematically sound execution waves before the AI writes a single line of code.
* The Sovereign Allowlist (`aura request-access`): Securely whitelist specific logic nodes (like auth headers) to bypass the Gatekeeper, allowing for precise secrets management.
* Semantic Audit (`aura audit`): Scans your Git history to catch any rogue, undocumented code an AI agent snuck in using `--no-verify`.
* Token Efficiency (`aura handover`): Compresses your entire architectural context into dense XML, saving you up to 95% on LLM API token costs when switching agents.

Aura operates as a meta-layer directly on top of Git. It runs 100% locally on your machine; we never see your code. We’ve released the core engine today under the Apache 2.0 license. This isn't our core commercial product; it's the foundational tool we had to build to survive the AI era, and we wanted the community to have it.

Would love your feedback! Try it out with a single curl command on macOS/Linux:

curl -fsSL https://auravcs.com/install.sh | bash

Question for the community: what's the worst merge conflict an AI agent has caused you recently? Let me know below! 👇
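To make the "hash the AST, not the lines" idea concrete, here is a minimal sketch. Aura's actual engine is Rust on tree-sitter; this analogue uses Python's stdlib `ast` module, and the `semantic_hash` helper is illustrative only, not Aura's API. The point it demonstrates: a purely cosmetic edit leaves the semantic hash unchanged, while a real logic change produces a new one.

```python
import ast
import hashlib

def semantic_hash(source: str) -> str:
    """Hash the AST structure of Python source.

    ast.dump() serializes structure only, so comments and
    whitespace never reach the hash. (Illustrative helper,
    not part of Aura.)
    """
    tree = ast.parse(source)
    return hashlib.sha256(ast.dump(tree).encode()).hexdigest()[:12]

v1 = "def total(xs):\n    return sum(xs)\n"
v2 = "def total(xs):  # reformatted, same logic\n\n    return sum(xs)\n"
v3 = "def total(xs):\n    return sum(xs) + 1\n"  # genuine logic change

assert semantic_hash(v1) == semantic_hash(v2)  # cosmetic diff: same hash
assert semantic_hash(v1) != semantic_hash(v3)  # semantic diff: new hash
```

A line-based diff would flag `v1` vs `v2` as a change; an AST hash does not, which is what lets a semantic layer ignore formatting churn and react only to logic.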
Łukasz Sągol

@mhdashiquek congrats, very inspiring idea. Can you share any results you have seen in your own teams already? Did it fully replace the manual code reviews by your engineers or operate on a different level?

Kirolus Ghattas

@mhdashiquek  This is awesome - congrats. Would be great to also understand if this works for non-technical team members too who want to understand the quality of what their tech team is producing!

Mo Ashique Kuthini

@kirolus_ghattas 

Yes. Aura is built for the world where humans lead the Intent and AI handles the implementation.

Because Aura tracks the 'Why' in plain English rather than just text diffs, non-technical members can use `aura dashboard` to visualize logic quality and `aura prove` to get a mathematical 'Yes/No' on feature completion, without ever reading a line of code.

It's an internal tool we built and decided to open source. Please try it and let us know if any improvements are needed, or better yet, contribute.

Mo Ashique Kuthini

@lukaszsagol 

Thanks Łukasz!

No, Aura didn’t replace code reviews for us at Naridon, but it changed what we review. Before, engineers spent most of their time just understanding what the AI did. Now Aura blocks undocumented or misaligned AI changes before review. It’s pretty brutal; our AI agents genuinely hate it 😅. After one session I asked Claude Code about it, and it literally said Aura blocks it so much it's "frustratingly annoying". But that’s the point.

When something breaks, we don’t revert whole PRs anymore. We rewind just the one function that caused the issue and move on.

One unexpected win: by using AST-based context (instead of dumping files or chat logs), we saw anywhere from ~80% to 93%+ reduction in LLM context size when handing work between agents. Way fewer tokens, way less noise.

So humans still review code, Aura just removes the AI archaeology and keeps things sane.

Kimberly Ross

@mhdashiquek Hi Muhammed. Can Aura handle multi‑language repos and frameworks with consistent reliability? What kinds of visualisation or tooling does Aura offer to help developers understand semantic diffs?

Mo Ashique Kuthini

@kimberly_ross 

Hi Kimberly!

Yes, Aura uses tree-sitter under the hood, which means it parses code down to an Abstract Syntax Tree (AST) rather than reading text. This makes it framework-agnostic. It currently has native, highly reliable support for TypeScript/JS (including React/JSX), Python, and Rust.


For visualization, Aura moves away from traditional red/green text diffs. We offer two main tools:

1. `aura dashboard`: A local web UI that provides a 'Semantic Feed', summarizing the actual architectural impact (e.g., 'Mathematically verified 34 logic nodes') and tracks the AI's progress against the active project plan.
2. `aura map`: Generates a visual Merkle-Graph of your system's logic dependencies so you can see exactly how functions connect before and after an AI refactor.
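The Merkle-Graph idea above can be sketched in a few lines. This is not Aura's implementation (which is Rust + tree-sitter); it is a stdlib-`ast` analogue with made-up helper names, showing the key property a Merkle structure gives a semantic diff tool: each function gets its own leaf hash, a file gets a root hash rolled up from the leaves, and editing one function changes its leaf and the root while every other leaf stays stable.

```python
import ast
import hashlib

def _h(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()[:12]

def leaf_hashes(source: str) -> dict:
    """Hash each top-level function definition independently."""
    tree = ast.parse(source)
    return {node.name: _h(ast.dump(node))
            for node in tree.body if isinstance(node, ast.FunctionDef)}

def merkle_root(leaves: dict) -> str:
    """Roll the leaf hashes up into one file-level root (sorted for determinism)."""
    return _h("".join(h for _, h in sorted(leaves.items())))

before = "def f():\n    return 1\n\ndef g():\n    return 2\n"
after  = "def f():\n    return 1\n\ndef g():\n    return 99\n"

a, b = leaf_hashes(before), leaf_hashes(after)
assert a["f"] == b["f"]                   # untouched function: hash stable
assert a["g"] != b["g"]                   # edited function: new hash
assert merkle_root(a) != merkle_root(b)   # root reflects the change
```

Comparing leaf hashes between two snapshots is what lets a tool say "only `g` changed" without ever diffing text.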

Daniyar

@mhdashiquek I have sent you a request for an invitation.

Mo Ashique Kuthini

@daniyar_abdukarimov Invite for what? You can use it for free; no invite needed.

Shrujal Mandawkar

This is a really interesting direction — moving from text diffs to intent + AST-level tracking makes a lot of sense in an AI-first workflow

Curious — how do you handle cases where the AI’s “intent” is correct at a high level, but the implementation subtly diverges across multiple files?

Does Aura catch cross-file semantic inconsistencies as well or mainly within scoped changes?

Mo Ashique Kuthini

@shrujal_mandawkar1 

Great question, Shrujal. This is exactly why we couldn't rely on text diffs! Aura handles cross-file divergence in two specific ways:

1. Global Merkle-Graph (Blast Radius): Aura doesn't just parse isolated files; it builds a mathematical graph of your entire repository locally. If an AI modifies a core function in file_a.ts, Aura's 'Proactive Blast Radius' engine immediately flags downstream functions in file_b.ts and file_c.ts that are now tainted by the change, warning you before you commit.

2. Strict Intent Alignment: If an AI agent refactors 15 logic nodes across 5 different files, Aura mathematically cross-references the AST hashes against the agent's stated intent. If the AI subtly hallucinated and modified a 16th node that it didn't explicitly declare in its reasoning, Aura triggers an 'Intent Mismatch' and halts the commit.

For complex end-to-end verification, we also have `aura prove`, which traces the actual execution paths across multiple files to mathematically prove the AI's high-level intent was successfully implemented without breaking connected modules.
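The 'Blast Radius' computation described above is, at its core, a reachability walk over a reverse-dependency graph. A minimal sketch, assuming a hypothetical graph where each function maps to the functions that depend on it (the file and function names below are invented for illustration, not taken from Aura):

```python
from collections import deque

def blast_radius(dependents: dict, changed: str) -> set:
    """BFS over a reverse-dependency graph.

    `dependents` maps each function to the functions that call it.
    Returns every node downstream of `changed`, i.e. everything
    potentially tainted by the edit.
    """
    tainted, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dep in dependents.get(node, []):
            if dep not in tainted:
                tainted.add(dep)
                queue.append(dep)
    return tainted

# Hypothetical repo: parse_user (file_a.ts) feeds handlers in file_b/file_c.
dependents = {
    "parse_user": ["login_handler", "signup_handler"],
    "login_handler": ["session_test"],
    "signup_handler": [],
    "session_test": [],
}
assert blast_radius(dependents, "parse_user") == {
    "login_handler", "signup_handler", "session_test"
}
```

Everything in the returned set is what a tool would flag for review before the commit lands, even though only `parse_user` was textually edited.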

Letian Wang

The `aura rewind` for single functions is exactly what I need — reverting entire PRs because one AI-generated function broke things has been my biggest pain point with Claude Code. The 93% token reduction on handover is wild if that holds up in practice.

Mo Ashique Kuthini

@letian_wang3 

Thanks Letian! That exact pain point with Claude Code was one of the main reasons we built this. Standard `git revert` is a sledgehammer, but AI hallucinations usually only require a scalpel. Because Aura maps the Abstract Syntax Tree (AST), it knows exactly where a specific function starts and ends, letting you swap out just that broken logic block while keeping the other 500 lines of perfect AI code intact.

As for the 93% token reduction with `aura handover`, it holds up! Instead of dumping raw, unstructured files full of comments and whitespace into the context window, Aura generates a dense XML payload of just the logic node signatures and their dependencies. The LLM gets the exact architectural context it needs, and you save a massive amount of tokens (and money).