Centralized rules for coding agents like Claude Code, GitHub Copilot, and Cursor. Your AI coding agent automatically picks the right rules per task. Ship enterprise-ready code at 10x speed.
Replies
Read.HN
For large teams of thousands of devs, especially in polyrepo / micro-frontend environments like the one I work in at the moment, this tooling is exactly what we need to scale best practices while enforcing security compliance.
Straion
@kaelig Great to hear that! That's exactly why we focus on larger companies! I don't think managing scattered .md files across polyrepo or micro-frontend structures is the future of context management!
I'd love to hear more about your use case!
Feel free to schedule a call with me for a demo, or we can discuss your use case in more depth: https://cal.com/lukas-holzer/introduction-call
Agnes AI
Seems Straion could handle the challenge of hallucination... but just curious: would agents themselves handle making rules in the future? lol
As the founder of a security consultancy, watching how quickly the AI and agentic movement has taken off has been incredible, but it has also introduced new and interesting challenges in keeping the company safe!
I am super excited to see what Straion can do to keep engineering teams moving quickly while keeping the codebase clean and company policies met!
Straion
@patrickfarwick Thanks! Yeah, this whole thing is moving at light speed (or even warp speed?)
With Straion we try to help devs avoid that pace and not commit to a single technology. We try to be a proxy managing all the rules so you don't have to think about them (skills, how to structure .md files so they are picked up best by the latest model, context engineering, etc.) or even whether you should go with Cursor or Claude Code.
We are provider-agnostic and optimize the rules internally so that they are picked up best by agents!
Netlify
Hey, this looks amazing! Really useful concept, especially with regard to giving focussed context to an agent and for centralising rules across repos. I'd love to know how the tool selects the right rules to use and if there's any way to see which rules have been selected for a prompt?
Straion
@orinokai We took a completely different route for rule matching than Cursor or others.
Instead of matching rules at the folder level or by file extension, we've trained a machine-learning pipeline to do the matching. It draws on a variety of signals: classifications, embeddings, labelings, and so on. Basically, we've tried to imitate the human brain! My brain does not locate knowledge by directory 🙂
That way we can be fully agnostic of repos, and developers don't have to recall where the rules they need are located!
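To make the idea above concrete, here is a minimal, purely illustrative sketch of embedding-based rule matching: each rule's description is embedded, and the incoming task is matched by cosine similarity instead of by folder or file extension. The `embed` function here is a toy bag-of-characters stand-in, not Straion's actual pipeline; a real system would use a trained embedding model plus the classifiers and labels mentioned above.

```python
import math

def embed(text: str) -> list[float]:
    # Toy bag-of-characters embedding; a real system would use a
    # sentence-embedding model or hosted embedding API instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def match_rules(task: str, rules: dict[str, str], top_k: int = 2) -> list[str]:
    """Return the names of the top_k rules most similar to the task text."""
    task_vec = embed(task)
    ranked = sorted(
        rules, key=lambda name: cosine(task_vec, embed(rules[name])), reverse=True
    )
    return ranked[:top_k]
```

The point of the sketch is the lookup direction: the developer describes the task, and rules are retrieved by semantic similarity, so rule files never need to live next to the code they govern.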
When it comes to visualisation, we currently fall a bit short. We just present the output inside the terminal of Claude Code, Codex, or GitHub Copilot! (You get a kind of validation report.)
But we are planning to implement a dashboard so you can see exactly which rules were applied for each task!
That's how we showcase it currently:
This solves a need that came up in discussion just this week for me. Very interested to follow your progress!
Straion
@jwhist Thank you very much, Jordan. I'd love to learn more about your particular use case. Let me know if you are open to a call later this week. Feel free to book a slot on my calendar.
Straion
@jwhist Nice! Maybe you could try us?
Digger.dev
Massive congratulations on the launch team!
Straion
@nadigerutpal Thanks so much! This means a lot coming from such a seasoned founder!
Triforce Todos
Supervising AI instead of coding defeats the purpose. If Straion solves that, it's a big win for engineering orgs!
Straion
@abod_rehman Thanks for the comment ☺️
Straion
@abod_rehman Hey, yeah, you should not babysit your agents! If you have that pain in your org, let's chat!
The more access a tool gets on your computer, the more important it is to give it rules and constraints. I think it solves a real problem.
Straion
@phirabu Thanks so much!
Yes, we think the future of agentic coding is not in larger context sizes; it's about rules and constraints to get true 10x productivity!
The plan-stage validation approach is really smart. Most governance tools catch problems after code is written; by then the developer has already invested time and pushes back on changes.
Catching it during the planning phase is a much better feedback loop.
Curious about the ML-based rule matching - how does it handle edge cases where a task touches multiple domains with conflicting rules?
Does it prioritize by specificity or let the team configure precedence?
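The specificity-or-configured-precedence question above can be sketched either way. This is a hypothetical illustration, not Straion's documented behavior: each rule carries an invented `scope` field, narrower scopes (more path segments) beat broader ones, and ties fall back to the order the team configured.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    scope: str       # e.g. "org", "org/frontend", "org/frontend/checkout"
    directive: str

def specificity(rule: Rule) -> int:
    # More scope segments = more specific rule.
    return len(rule.scope.split("/"))

def resolve(rules: list[Rule]) -> Rule:
    """Pick the winning rule among conflicting matches.

    Highest specificity wins; on a tie, the earliest rule in the list
    wins, so list order doubles as a team-configurable precedence.
    """
    return max(rules, key=specificity)
```

For example, an org-wide "log everything" rule scoped to `org` would lose to a checkout-team "never log payment data" rule scoped to `org/frontend/checkout`, which is usually the safer default for conflicts like the one described.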