Centralized rules for coding agents like Claude Code, GitHub Copilot & Cursor. Your AI coding agent automatically picks the right rules per task. Ship enterprise-ready code at 10x speed.
Hey Pete, that line about ending up supervising instead of building is so accurate. Was there a specific moment where an AI agent completely ignored how your team does things and you had to undo or re-explain everything?
Replies
findable.
Hey makers & creators,
Pete here, Founder of @findable. and one of the early testers and supporters of Straion.
I’ve been working closely with @lukas_holzer and the team, as I keep seeing the problem of AI coding agents going off the rails.
It doesn't matter whether you use Claude Code, Cursor, or Copilot. Yes, they make you faster, but especially in bigger orgs they often create problems.
So instead of just building, you often end up supervising. Correcting. Re-explaining context. Pulling the AI back onto the right path.
That's where Straion comes in: it helps engineering teams stick to their organisation's rules.
What impressed me early on is the simplicity of the core idea: give engineering teams a structured way to define “how we build software here,” and make sure AI coding agents actually follow those rules automatically.
Please let us know here in the comments what problems you are facing with AI coding, and how we can help.
Happy Sunday, Pete
@lukas_holzer @peterbuch Interesting angle — especially enforcing “how we build here” across AI agents. Curious: are teams adopting this more for code quality, security, or just reducing review overhead? Feels very relevant as AI-generated code scales.
Straion
@katrin_freihofner will tell you more from her product perspective!
Straion
@lukas_holzer @peterbuch @mangal_s07 Great question. We're seeing teams adopt this for all three reasons you mentioned: code quality, security, and review overhead. But review overhead is often the immediate pain (or the loudest voice in the room).
As AI coding agents generate more code, engineers increasingly become bottlenecks, spending large chunks of time reviewing instead of building. That’s manageable at small scale, but once output accelerates, the traditional review process just doesn’t keep up.
Security and quality are just as critical, though — especially at scale. As teams grow, “how we build here” (architecture patterns, security constraints, naming conventions, infra standards) becomes part of the company’s operating system. The challenge is that AI doesn’t naturally know those rules, and humans can’t manually enforce them forever.
Straion helps encode and enforce those standards automatically, so teams can scale AI-generated code without sacrificing quality, security, or maintainability.
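As a toy illustration of what "encode and enforce those standards" can look like at the lowest level (this is not Straion's implementation, and the rules below are invented examples): a standard becomes data plus a check that runs over generated code.

```python
import re

# Invented example rules: each pairs a human-readable standard
# with a machine-checkable pattern.
RULES = [
    ("no print statements in library code", re.compile(r"^\s*print\(", re.M)),
    ("no hard-coded AWS access keys", re.compile(r"AKIA[0-9A-Z]{16}")),
]

def violations(source: str) -> list[str]:
    """Return the names of all rules the given source code violates."""
    return [name for name, pattern in RULES if pattern.search(source)]
```

Real standards (architecture patterns, naming conventions, infra constraints) need far richer checks than regexes, which is exactly why hand-rolling and manually maintaining this does not scale.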
@lukas_holzer @peterbuch @katrin_freihofner This makes a lot of sense — especially the idea that review overhead becomes the first visible bottleneck as AI output scales. Encoding “how we build here” feels less like a tooling problem and more like preserving institutional memory for AI.
Straion
@mangal_s07 Can you expand a bit on what you mean by encoding "how we build here"? Not sure I got that!
MCP-Builder.ai
Congrats on the launch. I totally see the need, as I am often afraid that my coding assistant is steadily drifting away from our coding guidelines.
Am I also able to set up different coding rules depending on the tech stack of my project and teams? Web, Python, ...?
Straion
@dominik_rampelt Thanks! Yeah, this is a common problem we're trying to fix! You can have as many rules as you want, spanning from infra rules to frontend guidelines. The tech stack doesn't really matter!
They can even be functional rules, like behavioural flows!
Straion
@dominik_rampelt Thank you Dominik! Yes, you can have different coding rules depending on the tech stack. Straion is going to automatically pick the applicable rules based on the task.
Straion
Hey makers, Lukas here, CEO & Co-Founder of Straion.
We built Straion after repeatedly running into the same issue while working with AI coding agents like Claude Code, Cursor, and Copilot.
They’re powerful, but they don’t naturally understand how your organization builds software. Things like internal standards, architectural decisions, security rules, or simply “how we do things here.” As a result, teams often spend a lot of time reviewing, correcting, and re-guiding the AI.
Straion is our attempt to help with that.
It gives engineering teams a central place to define their rules, and ensures those rules are automatically applied whenever AI generates code.
We have a simple goal: help teams get the speed benefits of AI without losing consistency and control.
We’re still very early, and there’s a lot we need to learn.
If you’re using AI coding tools in your team, we’d genuinely love your feedback: What works, what doesn’t, and where something like Straion could be useful (or not).
I'm also always happy to jump on a call.
And if you know engineering leaders or teams at larger organizations who are actively using AI for software development, introductions would mean a lot. We’re especially interested in learning from real-world setups + challenges.
Thanks so much for checking out Straion and for any feedback. I’ll be here all day to answer questions and learn from you.
Lukas
Straion is badly needed. There is no way to centrally manage .md files, collaborate on them, and dynamically update them across several repositories.
Looking forward to what the team will build!
Straion
@panagiotis_papadopoulos Yeah, good point about the updating! That's indeed a case a lot of companies don't think about!
They assume adding the rules once is enough. But what if you have 3 repos with the same frontend rules? You don't want to go into each repo and update the AGENTS.md or CLAUDE.md files there whenever you decide on new rules or guidance.
I'll bet those files would soon be out of date!
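Without a central hub, keeping those files in sync usually means a hand-rolled script along these lines (a minimal sketch; the paths and repo names are made up):

```python
from pathlib import Path
import shutil

def sync_rules(central: Path, repos: list[Path]) -> list[Path]:
    """Copy one canonical rules file into each repo's AGENTS.md."""
    updated = []
    for repo in repos:
        target = repo / "AGENTS.md"
        shutil.copyfile(central, target)
        updated.append(target)
    return updated

# Hypothetical layout: one source of truth, three frontend repos.
# sync_rules(Path("rules/frontend.md"),
#            [Path("web-app"), Path("admin-ui"), Path("docs-site")])
```

Even this still needs someone to remember to run it (or a CI job wired into every repo), which is exactly the staleness problem described above.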
This hits close to home. Coding agents are only as good as the context you give them, and right now that context lives in random markdown files scattered across repos. Having one source of truth that works across Cursor, Copilot, and Claude Code just makes sense.
Straion
@giammbo Yep, and that's a big IF. The sad truth is that a lot of companies don't even have markdown files in their repos. They have their rules in Confluence pages or scattered wikis; in the worst case, the rules are stuck in the heads of individual developers, who then point them out in review comments.
So with Straion we try to help you extract those rules from existing sites, pages, and even repositories, to get you started quicker.
@lukas_holzer That's a great point — the "rules stuck in someone's head" problem is real. Extracting from Confluence and existing sources sounds like the right approach to get teams onboarded fast. Smart move.
Straion
@giammbo Thanks a lot! We are still trying to figure out the best approach, so all feedback is warmly welcomed!
@giammbo This resonates. The enterprise angle Straion is taking makes sense for larger teams where rules live in Confluence pages nobody reads.
For individual devs or small teams the problem is simpler but still annoying: you have CLAUDE.md, .cursor/rules, and GEMINI.md with basically the same content drifting apart. I ended up writing a CLI that compiles YAML to each editor's native format so I only maintain one source. Different scale, same underlying pain.
Curious to see how this space evolves. Right now every tool has its own format and there's no sign of convergence.
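The compile-one-source approach this comment describes can be sketched roughly like this (illustrative only; the commenter's actual CLI, the rule contents, and the exact per-tool file layouts are assumptions here, and the YAML parsing step is omitted to keep the sketch dependency-free):

```python
from pathlib import Path

def compile_rules(rules: list[str], root: Path) -> dict[Path, str]:
    """Render one shared rule list into each assistant's expected file."""
    body = "\n".join(f"- {rule}" for rule in rules)
    return {
        root / "CLAUDE.md": f"# Project rules\n\n{body}\n",
        root / "GEMINI.md": f"# Project rules\n\n{body}\n",
        # Cursor reads .mdc rule files with a small frontmatter header.
        root / ".cursor/rules/project.mdc": f"---\nalwaysApply: true\n---\n{body}\n",
    }

def write_all(rules: list[str], root: Path) -> None:
    for path, content in compile_rules(rules, root).items():
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(content)
```

The design point is that the per-tool files become build artifacts: nobody edits CLAUDE.md directly, so the copies can't drift apart.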
Hi, looks awesome @lukas_holzer! Is there any limitation in terms of team size, or can it be used by e.g. a 2-person team and a 30-person team with the same results?
Straion
@bernischaffer Hey, no, there is no limitation in terms of team size; you can use Straion with a small team. But we are focusing on enterprise clients because we've seen that the problems there are of a different magnitude. Not saying small teams don't have those problems, but for a solo developer, managing the rules in an AGENTS.md is doable.
If you work in a large monorepo with multiple frontend/backend services, though, it's definitely something you should take a look at!
Straion
@vouchy Yeah, so true! On Thursday I had a conversation with a very seasoned developer who said he can't keep up with the pace of the ecosystem anymore. He's afraid of taking a "wrong" turn by committing fully to one specific technology.
We try to take that burden off those companies by providing one central place to manage all your rules, with support for multiple agents. Straion is installed via a CLI; in the background it sets up a skill for the coding agent of your choice. That's it, super simple.
Here is a getting started video:
Straion
We built Straion because AI-generated code is everywhere — but in reality, it rarely fits how companies actually build software.
The problem isn’t generating code anymore. It’s alignment. Every company has its own standards for security, privacy, architecture, design systems, and frameworks. Yet AI tools don’t automatically understand those rules. The result? Manual fixes, long review cycles, and wasted time.
We built Straion to change that.
Straion automatically extracts company-specific requirements from sources like wikis, contribution guidelines, and best practices — and translates them into instructions AI agents can actually follow. That way, generated code fits the organization from the start.
This means:
Less manual correction
Fewer review loops
Better security and compliance alignment
Faster, more cost-efficient delivery
Before building, we conducted 100+ interviews with software teams to truly understand their pain points. The result is a product that doesn’t just work technically — it solves a real, expensive problem.
Ultimately, we built Straion so developers can focus on what really matters again: building great software instead of fixing AI output.
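As a rough idea of what "extracting requirements from wikis" involves under the hood (a toy heuristic, not Straion's actual pipeline): scan exported pages for imperative guideline sentences.

```python
# Toy heuristic: treat lines that begin with an imperative keyword as
# candidate rules. Real extraction needs language understanding, not this.
KEYWORDS = ("Always", "Never", "Use", "Avoid", "Prefer")

def extract_rules(wiki_text: str) -> list[str]:
    rules = []
    for line in wiki_text.splitlines():
        line = line.strip("-* \t")
        if line.startswith(KEYWORDS):
            rules.append(line)
    return rules
```

For example, feeding it a wiki bullet list like "- Always use the shared Button component." and "- Colors live in theme.ts." keeps only the first line as a candidate rule; the second needs semantic understanding to recognize as one.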
Does it also work for small teams?
Straion
@doris_freihofner Thanks for the question! Sure, it works for a solo dev as well! But if it's just a small React project, you can probably manage the effort with an AGENTS.md or CLAUDE.md file.
The true benefit of Straion is for teams within larger organizations, as they have a fast-growing codebase (often spanning multiple repositories) and need rules to align the code.
A good example: you have multiple repositories with Golang microservices. You don't want to duplicate all the rules in each repository; you want a single central hub to manage them all!
This is exactly the use case where Straion shines. Once you update a rule in Straion, it is immediately propagated to all of your devs. They don't have to update or install anything; it's just there! So enforcing coding standards, security best practices, and other rules is just a click away!
Agnes AI
Seems Straion could handle the challenge of hallucination... but just curious: would agents themselves handle making rules in the future? lol
Straion
@cruise_chen 🤔 For now, let's add your rules to Straion and let it handle the hallucinations. With stronger, more sophisticated, and especially more targeted rules, hallucinations shouldn't be an issue anymore.