How much do you trust AI agents?
With the advent of clawdbots, it's as if we've all lost our inhibitions and "put our lives completely in their hands."
I'm all for delegating work, but not for handing over too much personal or sensitive data.
I certainly wouldn't trust an agent to the extent of providing:
- access to personal finances and transactions (beyond, maybe, an amount I'm willing to lose)
- sensitive health and biometric information (too easily misused)
- confidential communication with key people (a secret is a secret)
Are there any tasks you wouldn't give AI agents or data you wouldn't allow them to access? What would that be?
Re: finances – yesterday I read this news: Sapiom raises $15M to help AI agents buy their own tech tools. This may mark a new era in which funding flows to agents rather than to founders.


Replies
My trust in AI agents is directly proportional to how transparent they are about what they're doing. The agents I use daily for coding — I trust them for boilerplate, refactoring, and well-defined tasks. I don't trust them for architecture decisions or anything security-sensitive without review.
The trust equation for me: Can I see the reasoning? Can I verify the output quickly? Is the cost of failure low? If all three are yes, I'll let the agent run autonomously. If any is no, it becomes a suggestion engine, not an executor.
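That three-question trust equation can be written down directly. This is just an illustrative sketch of the decision rule described above; the names (`TaskAssessment`, `agent_mode`) are hypothetical, not from any real framework.

```python
from dataclasses import dataclass

@dataclass
class TaskAssessment:
    reasoning_visible: bool   # Can I see the reasoning?
    quickly_verifiable: bool  # Can I verify the output quickly?
    low_failure_cost: bool    # Is the cost of failure low?

def agent_mode(a: TaskAssessment) -> str:
    """Run autonomously only if all three answers are yes;
    otherwise downgrade the agent to a suggestion engine."""
    if a.reasoning_visible and a.quickly_verifiable and a.low_failure_cost:
        return "autonomous"
    return "suggest-only"
```

The point of making it explicit: any single "no" flips the mode, so there's no partial credit for two out of three.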
What's changed my perspective is tracking the actual outputs over time. When you can see that an agent gets structured output right 95% of the time but hallucinates API endpoints 20% of the time, you learn exactly where to trust and where to verify.
AI should automate tasks, not own trust. I’d delegate workflows, but never unrestricted access to money, identity, or private relationships.
Same instinct here, though I’ve noticed my line is less about what the data is and more about what the agent does with it. Read access I give pretty freely — let it scan my calendar, my drafts, my files. Write access is where I get careful, and “send” or “transact” access is where I basically don’t go yet. An agent reading my finances to surface insights feels fine. An agent moving money on my behalf, even with limits, is a different category of trust I haven’t built up to.
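The read / write / transact distinction in that reply maps naturally onto ordered access tiers. A minimal sketch, assuming hypothetical names (`Access`, `is_allowed`); the idea is simply that an agent granted one tier gets everything at or below it, and nothing above.

```python
from enum import IntEnum

class Access(IntEnum):
    READ = 1      # scan calendar, drafts, files
    WRITE = 2     # edit drafts, create events
    TRANSACT = 3  # send messages, move money

# Per the reply above: read access is granted freely, nothing higher.
GRANTED = Access.READ

def is_allowed(requested: Access, granted: Access = GRANTED) -> bool:
    """An action is permitted only if its tier is at or below the grant."""
    return requested <= granted
```

Using an ordered enum makes the "write is where I get careful" line a single comparison rather than a pile of special cases.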
It's still dangerous to trust an AI agent 100% of the time, so every platform should have some kind of HITL (human-in-the-loop) feature. But there's so much that can be simplified and automated today, and it's still early.
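A human-in-the-loop feature of the kind mentioned here usually reduces to one gate: the agent proposes, and anything irreversible waits for explicit human approval. A hedged sketch, with `approve_fn` standing in for whatever real confirmation UI a platform would provide (all names are hypothetical):

```python
from typing import Callable

def execute(action: str, irreversible: bool,
            approve_fn: Callable[[str], bool]) -> str:
    """Reversible actions run immediately; irreversible ones
    run only if the human approver says yes."""
    if irreversible and not approve_fn(action):
        return "blocked"
    return "executed"
```

Note the asymmetry: the human is only interrupted for irreversible actions, which keeps the automation useful while capping the cost of a bad call.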