How much do you trust AI agents?
With the advent of clawdbots, it's as if we've all lost our inhibitions and "put our lives completely in their hands."
I'm all for delegating work, but not for handing them too much personal or sensitive stuff to handle.
I certainly wouldn't trust one to the extent of providing:
- access to personal finances and operations (maybe just a set-aside amount I'm willing to lose)
- sensitive health and biometric information (too easily misused)
- confidential communication with key people (a secret is a secret)
Are there any tasks you wouldn't give AI agents or data you wouldn't allow them to access? What would that be?
Re. finances – yesterday I read this news: Sapiom raises $15M to help AI agents buy their own tech tools. So we may be entering an era when funding goes to agents rather than to founders.


Replies
Honestly, I trust AI for stuff like catching dumb typos in my Python scripts or auto-generating boilerplate for new classes. I still write the main logic myself and give its output a quick sanity check—don’t want some rogue refactor sneaking in. It’s super handy for the boring grind, and overall I’m happy with the time it saves me. For example, yesterday it helped me spin up a CRUD API skeleton in like 10 minutes instead of an hour—saved me a ton of headache.
I’m with you — I think people trust AI agents too much too fast. I treat them more like untrusted systems than assistants. Anything sensitive or irreversible (money, credentials, private data) stays off-limits.
What worries me most isn’t obvious failures — it’s edge cases like prompt injection or tool misuse that slip through.
Curious — are you setting hard boundaries, or relying more on guardrails?
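For what it's worth, a "hard boundary" in this sense can be as simple as a deny-by-default gate in front of the agent's tool calls. This is just an illustrative sketch — the names (`ALLOWED_TOOLS`, `gate_tool_call`, the category labels) are hypothetical, not any real framework's API:

```python
# Deny-by-default gate for agent tool use: only explicitly allowlisted,
# low-risk tools get through, and sensitive categories are blocked outright.
# All names here are hypothetical examples, not a real agent framework's API.

ALLOWED_TOOLS = {"search_docs", "summarize_text"}  # read-only, reversible
BLOCKED_CATEGORIES = {"payments", "credentials", "health_records"}

def gate_tool_call(tool_name: str, category: str) -> bool:
    """Allow a tool call only if it is allowlisted and not in a blocked category."""
    if category in BLOCKED_CATEGORIES:
        return False  # sensitive or irreversible actions stay off-limits
    return tool_name in ALLOWED_TOOLS  # anything unlisted is denied by default

print(gate_tool_call("search_docs", "general"))    # True
print(gate_tool_call("send_payment", "payments"))  # False
print(gate_tool_call("delete_files", "general"))   # False (not allowlisted)
```

The point of the deny-by-default shape is that a prompt-injected or misbehaving agent can't reach a dangerous tool by accident — new capabilities have to be opted in explicitly.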
Trust in AI agents scales with context quality, not just capability.
We're building Forjinn at InnoSynth — WhatsApp AI agents for businesses. The trust question comes up constantly with our users. They're comfortable letting the agent handle FAQs and bookings, but draw the line at anything involving payment flows or account changes.
What we've found: trust increases dramatically when the agent actually knows your business deeply — your exact products, policies, pricing edge cases. Hallucinations about the company's own offerings are what erode trust fastest.
The tasks I'd still keep human: anything involving exceptions, emotional escalations, and final purchase confirmation. Agents are great at gathering context and doing the first 80% — humans close the loop on the sensitive 20%.
I don't have a clear Yes or No answer. It depends.
On one hand, I spend a lot of time on company automation and building agents, which involves constant work with documents and numbers. In that context, AI helps simplify workflows and daily routine a lot. However, there is always a human-in-the-loop, at least for now.
On the other hand, when it comes to using chat tools in daily work or for personal questions, especially health-related ones, I have very little trust in them. I usually ask the same question several times in different ways and compare the answers, and I try to keep in mind that the answer is always just a likely continuation of the question.