Incerto is an AI-native Copilot for databases. Using context-aware agents, it turns natural language into tasks — like writing queries, schema changes, or moving data — boosting dev productivity 10x with 90% less manual work.
Loved the product. Had been struggling with my n8n agent which writes shitty queries, irrespective of how many times the schema is provided.
Question: What are you using for evals? How are you making sure of the performance of your AI copilot?
@additi thanks!
There are certain steps to it :
1. Understand what the user is asking -- a clarification agent does that.
2. Text-to-SQL accuracy -- achieved with schema linking: keeping only the relevant context while still providing plenty of it (deployment details, past queries, table schemas, field values, a semantic layer on the data).
3. Specialized tasks -- we have a dedicated agent for each notably useful task. E.g. the Root Cause agent, used to find problems, only hypothesises, and a verifier agent tests each hypothesis and then approves or rejects it.
4. We also list the assumptions at the end of the conversation, along with an accuracy score (though it is LLM-generated).
5. Complex tasks [still not GA]: we will split complex tasks into a Mermaid dependency graph and execute the smaller tasks with human feedback.
6. Users can add more context (through templates) and change the system prompt (for specialized niche tasks).
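For what it's worth, the hypothesise-then-verify split in step 3 can be sketched roughly like this (a toy illustration only; the hardcoded hypothesis table and every name here are assumptions, not Incerto's actual implementation):

```python
# Toy sketch of the hypothesise-then-verify pattern from step 3.
# In practice both agents would be LLM-backed; here they are stubs.

def root_cause_agent(symptom: str) -> list[str]:
    """Propose candidate causes for a symptom (hypotheses only, no conclusions)."""
    hypotheses = {
        "slow query": ["missing index", "table scan on large partition"],
        "disk full": ["unrotated logs", "oversized temp tables"],
    }
    return hypotheses.get(symptom, [])

def verifier_agent(hypothesis: str, evidence: set[str]) -> bool:
    """Approve a hypothesis only when supporting evidence backs it."""
    return hypothesis in evidence

def diagnose(symptom: str, evidence: set[str]) -> list[str]:
    # Keep only the hypotheses the verifier approves.
    return [h for h in root_cause_agent(symptom) if verifier_agent(h, evidence)]
```

The point of the split: the Root Cause agent never reports anything on its own; a hypothesis only surfaces once the verifier confirms it against evidence.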
Hope that answers it; looking forward to your feedback on the product!
@anurag_pandey19 Great!
You can check out https://noveum.ai/en. This is our product, we help companies run evals and make sure that their AI agents are reliable. Let me know if you would want to understand more.
@additi
Sure, that actually sounds useful; we spend quite some time evaluating the output. Would love to connect!
Incerto
Thank you ❤️ @additi
I hope you have connected with Anurag on LI to take the conversation forward :)
Do you support Postgres? It is a serious replacement for something like PgAdmin. Upvoted, all the best!
@vatsmi thanks for the comment
Yes, we support Postgres. It's better than PgAdmin in the sense that it detects problems and solves them too.
When it comes to getting your daily tasks done, PgAdmin isn't even competing.
Looking forward to your feedback after trying out the product!
Incerto
Thank you for all the support ❤️ @vatsmi
Great job.
I’d like to know—does it simply query the data, or is it also capable of grasping the business context of the tables?
@prabhat_kumar42 It captures context at multiple levels: the production issue, the semantics of which data is stored where, and human-curated templates and system prompts too.
Thanks for asking this. Hope Incerto streamlines your database operations; do download it and send us any feedback.
Thanks for your comment!
Incerto
Thank you ❤️ @prabhat_kumar42
Humva
Fantastic product—here’s my take:
Promise: To create a 'context-aware' database co-pilot that unifies and automates database change management, troubleshooting, and workflow optimization for developers and data professionals. Did I get it right?
In that case, I think the primary risk is the ambiguity of 'context-aware,' which could lead to a 'jack-of-all-trades, master-of-none' solution in a crowded market of highly specialized, best-in-class tools for schema migration, observability, and querying.
If so, my recommendation will be proceeding but narrowing the focus to validate a single, high-pain workflow for a specific persona. For example, focus exclusively on 'performance troubleshooting for application developers who are not database experts' to prove the 'context' advantage is a 10x improvement over existing fragmented solutions.
@joywakeup Completely understood.
The product has gone through its journey. At this moment we are focused mostly on fetching data from the database as accurately and conveniently as possible.
I understand the value of sharp positioning for persona and industry. But we have a hunch it is generally very useful to a wide variety of personas if we nail accuracy and complex task execution.
Humva
@anurag_pandey19 Gotcha! Interesting to dig deeper into the user segments; my thoughts:
First Believers (Fastest Path to Validation):
Who: Mid-stage startup developers (5-20 engineers) using a popular framework (e.g., Rails, Django) with a mainstream database (e.g., PostgreSQL) on a major cloud provider.
Why: This segment feels the pain acutely. They are too small for a dedicated DBA but large enough that database changes are becoming a bottleneck and a source of production incidents. They live in Git and are already embracing CI/CD for their application code.
Valuable Skeptics (Fastest Path to Invalidation):
Who: Experienced database administrators at large enterprises.
Why: This segment has a suite of powerful, specialized tools and decades of experience. They will immediately see through any 'magic'. If they cannot find a 10x improvement in their workflow for a specific task (e.g., reviewing a developer's proposed change), then the 'context-aware' value proposition is not strong enough to displace best-of-breed tools.
Incerto
Thank you ❤️@joywakeup
I think your point is absolutely valid. But we are at a stage where we are trying to see which is the most important, or most painful, use case we can serve.
Keeping the product general for now, but later might continue to align in a certain direction depending upon the usage :)
Great initiative and promising product scope. Congratulations team on the launch.
Qq: how, and to what extent, do you guardrail prod data so agentic hallucinations don't mess things up?
Incerto
@debarghyaroy Every single SQL request is routed to the user, and no request is executed on its own. The AI has dummy tools that just send a request to the user; the actual execution is done by the user in the UI.
Even in "auto" mode, all auto requests are sent in read-only mode, and mutable queries require user approval.
@debarghyaroy Thanks.
That's an important one; we have it summarized in this blog: <blogs in website, PH is not letting me post link>
In short, the AI doesn't even use MCP; it has no way to execute queries at all. The database identity and credentials stay completely under the user's control.
You always have to accept the query (which is visible, and write queries are marked in bold red).
You can use a read-only user in the credentials, and then there is zero chance anything bad happens.
For PII data you can obfuscate columns by configurable name. E.g. you can say "mobile", "email", and "Name" should never go to the AI, and we obfuscate them before sending anything to the LLMs.
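For illustration, column obfuscation along these lines could look like the sketch below (the column list and the hashing scheme are assumptions for the example, not Incerto's actual code):

```python
# Minimal sketch of pre-LLM column obfuscation: values in configured PII
# columns are replaced with a stable hash before any row reaches the model.
import hashlib

PII_COLUMNS = {"mobile", "email", "name"}  # user-configurable column names

def obfuscate_row(row: dict) -> dict:
    """Mask PII values with a short stable hash; pass other columns through."""
    return {
        col: hashlib.sha256(str(val).encode()).hexdigest()[:8]
        if col.lower() in PII_COLUMNS else val
        for col, val in row.items()
    }

masked = obfuscate_row({"id": 7, "email": "a@b.com", "plan": "pro"})
# masked["email"] is now an 8-char hash; "id" and "plan" are unchanged
```

A stable hash (rather than random noise) keeps masked values joinable across rows, so the LLM can still reason about "same user" without ever seeing the raw value.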
@anurag_pandey19 @whybee99 Thanks for sharing the insights. Does it mean you relieve the dev by helping to write queries and shape the schema architecture? Is my understanding correct?
If so, how is it different from a normal query anyone would ask while building with AI?
Let's say I am using Claude for my dev work; it can give me the same capability, correct?
And if my understanding is correct, how do you see it actually working in auto mode? Do you think of auto for replication or sharding, or perhaps having a pre-prod in place so it gets things done by itself: compare prod and pre-prod, analyze differences, figure out the ambiguity as a whole, and finally send out probably 1000s of DB ops in a summary to the user for approval, something of that sort?
@whybee99 @debarghyaroy
Auto mode is just for read-only queries (executed with the setting readonly = 1 for ClickHouse, as an example).
If you are comparing prod and pre-prod and want to analyze along the way, you'd not use auto mode.
It will make the first SQL query -> ask permission to run -> feed the result (truncated and obfuscated) to the LLM -> come back with insights -> another SQL query -- and this loop goes on.
It can also hand off to a more suitable agent within the loop.
If you are in auto mode, you don't have to click "accept" every time; that's the only convenience -- and for write queries there is no auto mode.
All the value is in context management, the query execution loop, and the agents' ability to explore the database without you having to specify which database, table, etc. to query.
100s of DB ops -- that doesn't happen. It will never run 1000 queries one by one; there is a limit of 20 consecutive queries without user interaction.
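A rough sketch of that loop, with the approval gate and the consecutive-query cap (all names and structure here are hypothetical, not Incerto's code; real write-query detection would be far more careful than a first-keyword check):

```python
# Sketch of the query loop: propose SQL, gate on user approval, cap the run.
# Write queries always require approval; auto mode skips it for reads only.

MAX_CONSECUTIVE_QUERIES = 20  # hard stop before handing control back

def is_write_query(sql: str) -> bool:
    # Naive first-keyword check, for illustration only.
    return sql.strip().split()[0].upper() in {"INSERT", "UPDATE", "DELETE", "ALTER", "DROP"}

def run_loop(proposed_queries, auto_mode, approve, execute):
    results = []
    for i, sql in enumerate(proposed_queries):
        if i >= MAX_CONSECUTIVE_QUERIES:
            break  # limit reached: return to the user for interaction
        if is_write_query(sql) or not auto_mode:
            if not approve(sql):  # user must click "accept"
                continue
        # Result would be truncated/obfuscated before the LLM sees it.
        results.append(execute(sql))
    return results
```

In auto mode a rejected or write query is simply never executed, which is why even "auto" cannot mutate data.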
It's free to use from the website; please try it once when you have time, and that should clear up a lot of doubts.
Let me know if I answered your queries, happy to clear further doubts.
@anurag_pandey19
Thanks Anurag, I understand. But context and READ queries are normal for any LLM, no? And those are agentic too?
I was just trying to understand what differentiates Incerto from normal LLM agentic calls.
And what I was getting at: is there any line of thought or planned scope for that pre-prod, streaming, batching solution with 1000s of ops? That would help a ton -- then you are unstoppable, especially if you can also get into all CRUD ops.
Job Application Answer Generator
Congrats on launching! One of the biggest struggles I’ve had with AI + DBs is context. LLMs can refactor a query, but they rarely account for relationships, constraints, or migrations in a real schema... Especially in Ruby on Rails. If Incerto can bridge that gap and bring genuine awareness of DB context, it would be a game changer.
I mean this sincerely: I’m super excited to see where this goes and will definitely be trying it out in my current project! Again, congrats on the launch! 🚀
Incerto
@quadralift Thank you!! Please let us know your feedback!
@quadralift Yep, Incerto will be able to help. Not every piece of context is generated automatically, but we have designed it to be flexible.
For example, to get the semantics out of ClickHouse, "train <database>" will extract the relationships among tables, fields, and schemas, along with example values, etc., and auto-apply that whenever you mention the same instance.
For other nuances you will have to ask the chat to work that out, make a manual template, and tag it whenever you are doing a task.
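To give a feel for it, the kind of per-table context such a "train" step could build from introspected rows (e.g. from ClickHouse's system.columns table) might look like the sketch below; the output format is an assumption for illustration, not Incerto's actual representation:

```python
# Sketch: fold introspected (table, column, type) rows into a compact
# per-table summary an agent could attach as schema context.

def build_schema_context(rows):
    """rows: iterable of (table, column, type) -> {table: ["column type", ...]}."""
    context = {}
    for table, column, col_type in rows:
        context.setdefault(table, []).append(f"{column} {col_type}")
    return context

rows = [("events", "ts", "DateTime"), ("events", "user_id", "UInt64"),
        ("users", "id", "UInt64")]
build_schema_context(rows)
# {'events': ['ts DateTime', 'user_id UInt64'], 'users': ['id UInt64']}
```

In ClickHouse the input rows could come from something like `SELECT table, name, type FROM system.columns WHERE database = '...'`, run once at "train" time and cached for later chats.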
Hope it helps!
Let us know if we fall short on any feature in your experiments. Looking forward to your feedback.
Lancepilot
Congrats on the launch of Incerto, and wishing you huge success ahead. An AI copilot for databases sounds like a real productivity booster. Excited to see how it transforms the way devs handle queries and data management.
Incerto
@priyankamandal Thank you! Would love to get your feedback and thoughts!
Incerto
@priyankamandal Thank you ❤️