How much do you trust AI agents?
With the advent of clawdbots, it's as if we've all lost our inhibitions and "put our lives completely in their hands."
I'm all for delegating work, but not giving them too much personal/sensitive stuff to handle.
I certainly wouldn't trust something to the extent of providing:
access to personal finances and operations (maybe just setting aside an amount I'm willing to lose)
sensitive health and biometric information (can be easily misused)
confidential communication with key people (secret is secret)
Are there any tasks you wouldn't give AI agents or data you wouldn't allow them to access? What would that be?
Re: finances – yesterday I read this news: Sapiom raises $15M to help AI agents buy their own tech tools – so this may be the start of a new era, when funds go to agents rather than founders.


Replies
Great question. I think about this constantly as someone building AI agents for e-commerce.
The way I approach it: not all data is created equal. I categorize access into three tiers:
Tier 1 - Full access: Product catalogs, inventory feeds, pricing rules. If this leaks or gets corrupted, it's annoying but recoverable. The upside (automation speed) outweighs the risk.
Tier 2 - Gated access: Customer data, order history. Read-only most of the time. Any write operation needs a confirmation step or a hard budget limit (e.g., "refund max $50 without approval").
Tier 3 - No access: Payment credentials, auth tokens, anything that can't be rotated or revoked instantly. Also proprietary algorithms or launch plans—things where one leak could kill competitive advantage.
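The tier model above can be sketched as a simple policy check. This is a hypothetical illustration, not a real framework: the resource names, the `decide` function, and the $50 refund cap (taken from the Tier 2 example) are all assumptions.

```python
# Illustrative sketch of a three-tier access policy for an agent.
FULL, GATED, BLOCKED = "full", "gated", "blocked"

TIERS = {
    "catalog": FULL,           # Tier 1: recoverable if corrupted
    "orders": GATED,           # Tier 2: reads ok, writes need approval
    "payment_creds": BLOCKED,  # Tier 3: never exposed to the agent
}

REFUND_AUTO_LIMIT = 50  # dollars the agent may refund without a human

def decide(resource: str, action: str, amount: float = 0.0) -> str:
    tier = TIERS.get(resource, BLOCKED)  # default-deny unknown resources
    if tier == BLOCKED:
        return "deny"
    if tier == FULL:
        return "allow"
    # GATED: reads pass, small refunds pass, everything else escalates
    if action == "read":
        return "allow"
    if action == "refund" and amount <= REFUND_AUTO_LIMIT:
        return "allow"
    return "needs_approval"

decide("catalog", "write")       # "allow"
decide("orders", "refund", 120)  # "needs_approval"
decide("payment_creds", "read")  # "deny"
```

The default-deny on unknown resources is the important design choice: anything not explicitly tiered is treated as Tier 3.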
The Sapiom news is interesting but also a warning sign. If agents start controlling budgets directly, we're one prompt injection away from a very bad day.
Curious if others have a similar tiered approach, or do you go case-by-case?
minimalist phone: creating folders
@arron_young Not gonna lie, I wouldn't give it access to even a single penny. Money is money, so sorry, my dear bot.
I'm actually building in this exact space - an AI that applies to jobs on your behalf.
Trust is the #1 objection we hear. Our answer: full transparency - user sees every application before it goes out, nothing happens without approval.
Blind automation = scary. Supervised automation = powerful.
Tomosu
Interestingly, I find the AI trust question connects to how we use our phones in general. I'm building Tomosu — an iOS app where your phone starts quiet by default and you consciously unlock apps when you need them. The "do I really need this right now?" friction applies to AI agents too. Would love to hear your thoughts!
minimalist phone: creating folders
@nakajima_ryoma This sounds a bit like a productivity app – what category does the app belong to? :)
i run an ai agent that has access to my email, calendar, browser, and messaging. it drafts emails, posts on social media, and manages my spreadsheets while i sleep. sounds insane, but after a few weeks you stop thinking about it, the same way you stopped thinking about autofill having your credit card.
the real question isn't trust vs no trust, it's what guardrails you set up. mine can't send emails without me approving the draft first. can't post publicly without a review step. but it can read, organize, research, and draft freely. that's the line that works for me - read access is wide open, write access has a human gate.
minimalist phone: creating folders
@umairnadeem Anyway, I would be hesitant to give AI all of that information. But when it comes to the "approval" process, we are on the same page here.
Velocity: AI User testing
It's not just security – the biggest problems come from DB migrations. Agents are guilty of making breaking changes and then not highlighting them. I shy away from trusting them with anything likely to change the DB topology.
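One way to catch this is to flag likely-breaking schema changes before an agent runs them. The keyword heuristic below is a rough sketch of the idea, not a real SQL parser, and the statement patterns are my own assumptions about what counts as "breaking."

```python
import re

# Heuristic: destructive DDL that changes DB topology should be flagged
# for human review rather than run silently by an agent.
BREAKING = re.compile(
    r"\b(DROP\s+(TABLE|COLUMN)|ALTER\s+TABLE.*\b(DROP|RENAME)|TRUNCATE)\b",
    re.IGNORECASE | re.DOTALL,
)

def is_breaking(migration_sql: str) -> bool:
    return bool(BREAKING.search(migration_sql))

is_breaking("ALTER TABLE users ADD COLUMN age INT")  # False: additive
is_breaking("ALTER TABLE users DROP COLUMN email")   # True: destructive
```

A real gate would parse the DDL properly, but even a crude filter like this forces the breaking change to be highlighted instead of buried.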
minimalist phone: creating folders
@kevin_mcdonagh1 I would be hesitant to give them the option to work with databases and the data of our users. You never know how it could be misused or messed up.
After reading the tweet from Summer Yue at Meta, I think I'll hold off on clawdbot for a little bit longer.
https://x.com/summeryue0/status/2025774069124399363?s=20
minimalist phone: creating folders
@ceciliatran I tried to install it, but I'm not that technical, so I never finished. And that was the best thing that could have happened to me :D
I think it's more that people are extremely eager to have someone or something solve their problems and challenges. AI agents are positioned as the ultimate solution, so people are willing to take any risk involved if it means solving the painful challenges they face day to day.
minimalist phone: creating folders
@michael_cervantes I still believe that some people are patient and more conscious about their decision to use AI for everything. But there is maybe like 5% of those people. lol
i run an AI agent 24/7 on my machine and honestly the trust thing is less binary than people make it. it's not "trust or don't trust" - it's about scoping what the agent can touch.
the biggest unlock for me was treating it like hiring a junior dev. you don't give them prod database access on day one. same with agents - start with read-only stuff, let it draft things, review for a week, then slowly open up write access to specific tools. i've had mine managing my calendar, checking emails, even doing research tasks for weeks now, and the failure mode isn't "it goes rogue" - it's more like it misunderstands context and does something slightly wrong. which is... exactly what humans do too.
the finance stuff i agree with though. anything involving money stays manual. not because the agent can't do it but because the cost of a mistake is too high and there's no undo button.
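The junior-dev ramp-up described above could be sketched as a dated allowlist: each tool gets write access only after its review period has passed. The tool names and dates here are invented for illustration.

```python
from datetime import date

# Grant dates per tool: reads from day one, writes opened after a week
# of supervised review. Tools absent from the map stay fully manual.
GRANTS = {
    "calendar.read": date(2024, 1, 1),
    "calendar.write": date(2024, 1, 8),  # opened after a week of review
    "email.read": date(2024, 1, 1),
    # "email.send" intentionally absent: money-adjacent, stays manual
}

def allowed(tool: str, today: date) -> bool:
    start = GRANTS.get(tool)
    return start is not None and today >= start

allowed("calendar.write", date(2024, 1, 10))  # True: review period over
allowed("email.send", date(2024, 1, 10))      # False: never granted
```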
minimalist phone: creating folders
@umairnadeem IMO, it is crazy how people give access to databases. Esp. with data of other people/users. I wouldn't trust it so much :D
@busmark_w_nika people give other people access to databases too, and other people are prone to social engineering just like LLMs are prone to prompt injections. if anything, LLMs are already more reliable than people in this regard.
The trust question is domain-specific and stakes-calibrated. I'm building an AI travel planner — the threshold there is very different from finance or health: if the agent picks a slightly off restaurant, the downside is a disappointing dinner, not a ruined credit score. But there's a subtler trust problem I think gets underexplored: preference trust. Not 'will it misuse my data' but 'does it actually model what I want, or is it confidently wrong about my taste?' That second failure is harder to catch — the agent feels like it's working until you realize it's been optimizing for the wrong thing for weeks.
minimalist phone: creating folders
@giammbo When do you launch btw? :)
Aiming for the coming weeks — still tightening the experience before going public. Will definitely ping you when we're ready to go live on PH — having the right voices behind it early makes a real difference.
minimalist phone: creating folders
@giammbo Yes, please, ideally on LI :)
@busmark_w_nika Will do — I'll connect on LI before we go live. Really appreciate you saying that.
I only trust what I can control and validate. Maybe it's a little conservative, but it works for me.
minimalist phone: creating folders
@ilia_ilinskii I am the same (old-school). Cool :)