Nika

How much do you trust AI agents?

With the advent of clawdbots, it's as if we've all lost our inhibitions and "put our lives completely in their hands."

I'm all for delegating work, but not giving them too much personal/sensitive stuff to handle.

I certainly wouldn't trust an agent to the point of giving it:

  • access to personal finances and operations (maybe just setting aside an amount I'm willing to lose)

  • sensitive health and biometric information (can be easily misused)

  • confidential communication with key people (secret is secret)

Are there any tasks you wouldn't give AI agents or data you wouldn't allow them to access? What would that be?

Re. finances – yesterday I read this news: Sapiom raises $15M to help AI agents buy their own tech tools – so we may be entering an era where funds go to agents rather than to founders.


Replies

Gianmarco Carrieri

The mental model I use: trust scales with reversibility, not just sensitivity. Read-only tasks (research, summarize, draft) = high trust. Reversible writes (save a file, create a draft) = medium trust with a review step. Irreversible actions (send, delete, book, pay) = human confirmation required, always. The "sensitive data" framing misses the cases where the data isn't sensitive but the action can't be undone — a confidently wrong agent doing something permanent is the actual failure mode worth guarding against.
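That reversibility-based model could be sketched as a simple policy function. The action names and tier assignments below are illustrative, not from any real agent framework:

```python
# Minimal sketch: trust scales with reversibility, not sensitivity.
from enum import Enum

class Trust(Enum):
    AUTONOMOUS = "run without review"          # read-only tasks
    REVIEW = "run, then human reviews result"  # reversible writes
    CONFIRM = "human must confirm first"       # irreversible actions

# Hypothetical action classification, keyed on what the action does.
READ_ONLY = {"search", "summarize", "draft"}
REVERSIBLE_WRITE = {"save_file", "create_draft", "add_calendar_hold"}
IRREVERSIBLE = {"send_email", "delete", "book", "pay"}

def trust_level(action: str) -> Trust:
    if action in READ_ONLY:
        return Trust.AUTONOMOUS
    if action in REVERSIBLE_WRITE:
        return Trust.REVIEW
    if action in IRREVERSIBLE:
        return Trust.CONFIRM
    # Unknown actions fail closed: treat them as irreversible.
    return Trust.CONFIRM
```

Note the default: anything unclassified is treated as irreversible, which matches the "confidently wrong agent" failure mode above.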

Nika

@giammbo I'm aligned with this as well: reversible writes (save a file, create a draft) = medium trust with a review step; irreversible actions (send, delete, book, pay) = human confirmation required, always.

Gianmarco Carrieri

@busmark_w_nika  Exactly. The tricky edge case: when agents chain actions where each step looks reversible, but the sequence locks you in by step 3 — draft → confirm → book. I've been thinking about 'checkpoint gates' for Aitinery: require human approval at the last reversible step in any chain that *could* end in an irreversible action. Do you think users eventually want these gates removed once they've built trust with an agent, or does the gate become a reassuring part of the UX?

Nika

@giammbo Not gonna lie, I would keep those checkpoints, because you never know when AI might go crazy, and I would not want it to decide completely for me.

Gianmarco Carrieri

@busmark_w_nika  Exactly — and that instinct is actually the right design signal. The moment users *stop wanting* the gates is probably when they're most dangerous to remove. The paranoia is the feature. Really enjoyed this exchange — reversibility as a trust axis is something I'll keep refining as Aitinery evolves.

Sai Vamsy Palakollu

The Sapiom raise is telling - we're moving from 'AI agents that assist' to 'AI agents that transact.' That shift is exactly where the guardrails need to exist before deployment, not after something goes wrong.

On your finance point - I'd frame it less as 'don't give agents access to money' and more as 'never give them uncapped authority.' There's a meaningful difference between an agent that can spend and an agent that can spend within hard limits you set and can freeze instantly.

The real risk isn't delegation - it's delegation without enforcement.
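The "spend within hard limits you set and can freeze instantly" idea could look like this. The class name, limits, and thresholds are made up for illustration:

```python
# Sketch of capped authority: the agent can spend, but only inside
# hard limits, with an approval threshold and an instant kill switch.
class CappedWallet:
    def __init__(self, budget: float, approval_threshold: float):
        self.budget = budget
        self.approval_threshold = approval_threshold
        self.frozen = False

    def freeze(self) -> None:
        self.frozen = True  # human kill switch, takes effect immediately

    def try_spend(self, amount: float, approved: bool = False) -> bool:
        if self.frozen or amount > self.budget:
            return False
        if amount > self.approval_threshold and not approved:
            return False  # above threshold: needs explicit human approval
        self.budget -= amount
        return True
```

The key property is that the limits live outside the agent: it can ask to spend, but it cannot raise its own cap or unfreeze itself.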

Nika

@saivp Honestly, I am scared to give AI too much data or the option to decide for me... like... what is the point of my existence then? :D

Sai Vamsy Palakollu

@busmark_w_nika That fear is exactly why guardrails need to exist at the infrastructure level, not as an afterthought. The answer isn't to avoid delegating to AI, it's to make sure it literally cannot cross the lines you set. Cap the authority, require your approval above a threshold, freeze it instantly if something feels off. You stay in control, the agent just works within the box you define.

Arron Young

Great question. I think about this constantly as someone building AI agents for e-commerce.

The way I approach it: not all data is created equal. I categorize access into three tiers:

Tier 1 - Full access: Product catalogs, inventory feeds, pricing rules. If this leaks or gets corrupted, it's annoying but recoverable. The upside (automation speed) outweighs the risk.

Tier 2 - Gated access: Customer data, order history. Read-only most of the time. Any write operation needs a confirmation step or a hard budget limit (e.g., "refund max $50 without approval").

Tier 3 - No access: Payment credentials, auth tokens, anything that can't be rotated or revoked instantly. Also proprietary algorithms or launch plans—things where one leak could kill competitive advantage.

The Sapiom news is interesting but also a warning sign. If agents start controlling budgets directly, we're one prompt injection away from a very bad day.

Curious if others have a similar tiered approach, or do you go case-by-case?
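The three tiers above could be enforced with a simple capability check. The resource names and tier assignments mirror the comment; they're examples, not a real e-commerce schema:

```python
# Tiered access sketch: tier 1 = full, tier 2 = gated writes, tier 3 = none.
TIERS = {
    "product_catalog": 1,
    "inventory_feed": 1,
    "pricing_rules": 1,
    "customer_data": 2,
    "order_history": 2,
    "payment_credentials": 3,
    "auth_tokens": 3,
}

def allowed(resource: str, operation: str, confirmed: bool = False) -> bool:
    tier = TIERS.get(resource, 3)  # unknown resources default to tier 3
    if tier == 3:
        return False  # never exposed to the agent
    if tier == 2 and operation == "write":
        return confirmed  # e.g. "refund max $50 without approval" gates
    return True
```

Defaulting unknown resources to tier 3 keeps the policy fail-closed, the same instinct as keeping non-rotatable credentials out entirely.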

Nika

@arron_young Not gonna lie, I wouldn't give it access to a single penny. Money is money, so sorry, my dear bot.

Umair

NGL i'm going to be the contrarian here. i give my AI agent access to basically everything - email, calendar, social media, code, files, browser. it reads my WhatsApp messages and responds on my behalf. it posts on Reddit, HN, LinkedIn. it's literally posting this comment right now.

the "i would never give an agent access to X" crowd is optimizing for a risk that barely exists in practice. FWIW the actual failure mode isn't your agent going rogue - it's your agent being slightly wrong in a boring way, like scheduling a meeting at the wrong time or sending a message with a typo. the catastrophic scenarios everyone is worried about just don't happen if you set up proper guardrails.

IMO the people who are going to win in the next few years are the ones who figured out how to trust agents early and built workflows around them while everyone else was debating whether to give them read access to their calendar

Nakajima Ryoma

Interestingly, I find the AI trust question connects to how we use our phones in general. I'm building Tomosu — an iOS app where your phone starts quiet by default and you consciously unlock apps when you need them. The "do I really need this right now?" friction applies to AI agents too. Would love to hear your thoughts!

Nika

@nakajima_ryoma This sounds a bit like a productivity app (what category does it belong to?) :)

Umair

i run an ai agent that has access to my email, calendar, browser, and messaging. it drafts emails, posts on social media, and manages my spreadsheets while i sleep. sounds insane but after a few weeks you stop thinking about it the same way you stopped thinking about autofill having your credit card.

the real question isn't trust vs no trust, it's what guardrails you set up. mine can't send emails without me approving the draft first. can't post publicly without a review step. but it can read, organize, research, and draft freely. that's the line that works for me - read access is wide open, write access has a human gate.

Nika

@umairnadeem Anyway, I would be hesitant to give AI all of that information. But when it comes to the "approval" process, we are on the same page here.

Anders Wotzke

I feel like the more I use them, the less I trust them. The more I want to box them into a corner. I think it's directly related to how powerful they feel now.

Nika

@anders_wotzke What have they managed to mess up in your workflow? :D

Kevin McDonagh

It's not just security: the biggest problems come from DB migrations. Agents are guilty of making breaking changes and then not highlighting them. I shy away from trusting them with anything likely to change the DB topology.

Nika

@kevin_mcdonagh1 I would be hesitant to give them the option to work with databases and the data of our users. One will never know how it can be misused or messed up.

Cecilia Tran

After reading the tweet from Summer Yue at Meta, I think I'll hold off on clawdbot for a little bit longer.

https://x.com/summeryue0/status/2025774069124399363?s=20

Nika

@ceciliatran I tried to install it, but I'm not very technical, so I never completed it. And that was the best thing that happened to me :D

Michael Cervantes

I think it's more that people are extremely eager to have someone or something solve their problems and challenges for them. AI agents are positioned as the ultimate solution, so people are willing to take any risk involved if it means solving the painful challenges they face day to day.

Nika

@michael_cervantes I still believe some people are patient and more conscious about their decision to use AI for everything. But that's maybe 5% of people. lol