Nika

How much do you trust AI agents?

With the advent of clawdbots, it's as if we've all lost our inhibitions and "put our lives completely in their hands."

I'm all for delegating work, but not giving them too much personal/sensitive stuff to handle.

I certainly wouldn't trust something to the extent of providing:

  • access to personal finances and operations (maybe just setting aside an amount I'm willing to lose)

  • sensitive health and biometric information (can be easily misused)

  • confidential communication with key people (secret is secret)

Are there any tasks you wouldn't give AI agents or data you wouldn't allow them to access? What would that be?

Re. finances – yesterday I read this news: Sapiom raises $15M to help AI agents buy their own tech tools – so this may mark a new era in which funding goes to agents rather than to founders.

Replies

Ibrahim Khalil

I use AI daily for work, but I draw the line at anything that requires actual accountability. If an agent messes up my schedule, it's annoying; if it messes up a bank transfer or a government application, it's a disaster. Until AI can be legally held responsible for its mistakes, my wallet and my ID stay offline.

Nika

@ibrahim_khalil25 True. I wouldn't give access to things that can harm my personal data or time + finances.

Manuel Del Verme

I work on agent tooling and the thing nobody talks about is that trust is a spectrum, not a binary. Right now most people either let the agent do whatever or don't use it at all.


What changed my mind was building replay/trace infrastructure — when you can go back and see exactly what an agent did step by step, you stop worrying about whether to trust it. You just check. Same way you'd review a junior dev's PR, not because you don't trust them, but because that's how you build confidence over time.

The actually scary failure mode isn't "agent accesses my bank account." It's 50 small reasonable-looking decisions that compound into something you didn't want.
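Manuel's replay/trace idea can be sketched in a few lines. This is a minimal illustration, not any real product's API: `AgentTrace`, the action names, and the log shape are all hypothetical, but the point stands that an append-only record of every step turns "do I trust it?" into "let me check."

```python
import time

class AgentTrace:
    """Append-only log of agent steps, so a run can be reviewed after the fact."""

    def __init__(self):
        self.steps = []

    def record(self, action, args, result):
        # Each entry captures what the agent did, with what inputs,
        # and what came back.
        self.steps.append({
            "ts": time.time(),
            "action": action,
            "args": args,
            "result": result,
        })

    def replay(self):
        # Human-readable step-by-step replay, like reviewing a junior dev's PR.
        return [f"{i}: {s['action']}({s['args']}) -> {s['result']}"
                for i, s in enumerate(self.steps)]

trace = AgentTrace()
trace.record("search", {"query": "flights to Oslo"}, "3 results")
trace.record("draft_email", {"to": "travel@example.com"}, "draft saved")
print("\n".join(trace.replay()))
```

Reviewing the replay also makes the "50 small reasonable-looking decisions" failure mode visible, because the compounding shows up as a sequence rather than a single suspicious action.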

Nika

@manuel_del_verme But that's usually how it starts. You ask for one innocent thing, and it can end in a catastrophe. :D I think you first have to know/predict what bad things can happen, and then say: don't do this... this... this...

SlamDunk

Great thread, Nika—trust in increasingly autonomous AI agents is one of the defining questions of the next few years.

My personal boundaries are quite similar to yours:

Hard no-go zones (no access, no exceptions):

  • Full control over personal banking, crypto wallets, or payment methods (only ever a capped “burner” amount I’m willing to lose)

  • Direct access to health records, genetic data, continuous biometrics, or medical history

  • Any end-to-end encrypted or highly confidential communication (family, legal, therapy, C-level business secrets)

  • Actions with real-world legal, financial, or physical consequences (signing contracts, posting publicly on my behalf, controlling smart home/security devices)

What I do delegate today (with monitoring):

  • Read-only analysis of finances (categorization, forecasting, anomaly detection)

  • Email triage, drafting low-stakes replies, meeting prep & follow-ups

  • Research, summarization, task triaging, calendar suggestions

  • Code generation/review in isolated environments

The Sapiom news is a fascinating (and slightly dystopian) signal—if agents start managing their own budgets and tooling autonomously, we’re moving toward agent-to-agent economies where human oversight becomes even more critical. That could unlock insane productivity… or create entirely new classes of misalignment risk.

Where do you personally draw the line between “helpful proactive assistant” and “too autonomous to feel safe”? Curious about your take as a minimalist-tool builder. 💭

Upvoted—excellent conversation starter. 🚀

Nika

@64185008aaa We are on the same page when it comes to delegating ;)

Daniil Bulgakov

As a data engineer, I think about this differently: it's not just about trusting the AI — it's about where your data goes when you use it. Most AI tools require uploading your files to their servers, and that's where the real trust question lies. I'd rather use tools that process data locally whenever possible. For repetitive data tasks (like reformatting spreadsheets), you don't even need AI — you need well-built deterministic tools. Not everything needs to be "smart" to be useful.

Nika

@daniil_bulgakov do you trust AI agents enough to run them on your computer? Or where do you draw the line in how you use them?

Daniil Bulgakov

@busmark_w_nika yep, I run them, but with guardrails and concrete goals.

You steer them and (if possible) make all their changes visible, like a git diff for your files or any other changes.
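The "git diff for your files" idea is easy to demo with Python's standard `difflib`. A hedged sketch, assuming the agent proposes a new version of a file and you want to preview it before applying (the file name and contents here are made up):

```python
import difflib

def preview_change(path, old_text, new_text):
    """Show an agent's proposed file edit as a unified diff before applying it."""
    diff = difflib.unified_diff(
        old_text.splitlines(keepends=True),
        new_text.splitlines(keepends=True),
        fromfile=f"a/{path}",
        tofile=f"b/{path}",
    )
    return "".join(diff)

# Hypothetical example: the agent wants to bump a config value.
old = "name: demo\nretries: 3\n"
new = "name: demo\nretries: 5\n"
print(preview_change("config.yml", old, new))
```

Only after a human (or a stricter policy) signs off on the diff would the file actually be written, which is exactly the visibility guardrail described above.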

Sergio Cavallante

Hi Nika,
I also limit AI involvement in activities not related to personal sensitive areas.
The main risk, in my opinion, is AI use in countries' attack/defense systems, mainly those with nuclear weapons. How far can AI interfere in this area, today and tomorrow?

Nika

@sergio_cavallante scary, but happy you mentioned that. I can see a possible threat, esp. from China. But who knows... maybe another country will be fast enough (though hopefully, sane people will not dare to do harm).

Aayush

I am less worried about the first barrier (agents handling sensitive data); I am way more worried about the stochasticity. Agents ARE NOT RELIABLE. And I say this as someone with patents on solving agent reliability. Even if you build a vault for the agent world, the usage of said vault is so stochastic that I don't want to trust agents yet. In a multi-step workflow, this compounds catastrophically.

Nika

@aaupadhy So shouldn't I trust them at all?

Aayush

@busmark_w_nika it's use-case dependent. I personally will not use AI for banking / investment / notes management etc. yet.

Ben Sabic

Like with most people, I'd trust AI for most things, except for the things you listed (e.g. personal finances). At least not without approval processes built in (e.g. get a push notification to approve paying a bill).
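Ben's approval process is essentially a human-in-the-loop gate. A minimal sketch, where the hypothetical `approve` callback stands in for a real push notification (names and return strings are illustrative, not any actual API):

```python
def require_approval(action, amount, approve):
    """Gate a payment behind an explicit human yes/no.

    `approve` is a callback that asks the human (e.g. via push
    notification) and returns True/False; the agent never pays
    unless it returns True.
    """
    if approve(f"Approve {action} for ${amount}?"):
        return f"paid ${amount}"
    return "blocked: human declined"

# Simulated responses in place of a real notification service:
print(require_approval("electricity bill", 80, lambda msg: True))
print(require_approval("mystery charge", 500, lambda msg: False))
```

The design point is that the deny path is the default: the agent can propose, but the money only moves on an explicit yes.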

Nika

@bensabic Yes, if there is an approval process, I would go for it (though I can't imagine my full inbox, lol).

Anna Sokolova

I can barely control myself on Black Friday, and now I'm supposed to let a robot handle it? No thanks, the card stays with me. It can pick references for me, but spending money? I've got that covered just fine on my own 🤣

Nika

@annasokol :DDDDDDDD are you afraid that the bot will copy your behaviour? :D

Anna Sokolova

@busmark_w_nika Worse! At least I slow myself down by typing in the CVV code. A bot would bankrupt me in nanoseconds. That’d be a poverty speedrun I’m definitely not built for 😂

Nika

@annasokol  😂 yeah, we will be doomed.

Richard Francis

I don't mind it accessing productivity-related services (Notion, Google Docs, etc.). I have been careful about giving it the ability to email/contact people; however, I don't mind it reading my emails.

I'm also not comfortable giving it system level access to my Mac, and obviously that's a common concern which is why everyone's buying their own hardware.

Nika

@rich186 But aren't you afraid that a bot will write something inappropriate to people on your behalf, which could damage your reputation?

Jonathan Song

Great question! As someone building AI products, I think about this constantly.

The trust issue isn't just about the AI itself - it's about the entire data pipeline. Where does your data go? Who has access? Can it be deleted?

I draw hard lines around:

- Financial accounts (read-only at most, never write access)

- Personal communications (drafts only, never auto-send)

- Health data (absolutely off-limits)

The bigger concern for me is the "black box" problem. Most AI agents don't show you their reasoning process. You see the input and output, but not what happened in between. That's scary when dealing with sensitive tasks.

I think the future is "trust but verify" - agents that show their work, have audit trails, and give you granular control over what they can access. Until then, treating them like talented interns (not managers) seems like the right approach.
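Jonathan's "granular control + audit trail" combination can be sketched as a wrapper that only forwards tool calls on an explicit allowlist and logs every attempt. Everything here is hypothetical (the `ScopedAgent` name, the tool set, the return strings); it just illustrates read-only scoping of a financial account:

```python
class ScopedAgent:
    """Forwards tool calls only if they are on an explicit allowlist."""

    def __init__(self, tools, allowed):
        self.tools = tools          # name -> callable
        self.allowed = set(allowed)
        self.audit = []             # audit trail of every attempt

    def call(self, name, *args):
        ok = name in self.allowed
        self.audit.append((name, "allowed" if ok else "denied"))
        if not ok:
            return f"denied: {name} is not in scope"
        return self.tools[name](*args)

# Hypothetical tools: one read-only, one with write/payment power.
tools = {
    "read_balance": lambda: "balance: $1,234",
    "send_payment": lambda amt: f"sent ${amt}",
}
agent = ScopedAgent(tools, allowed=["read_balance"])  # read-only scope
print(agent.call("read_balance"))
print(agent.call("send_payment", 100))
```

Denied attempts still land in the audit trail, which matches the "trust but verify" framing: you can see not only what the agent did, but what it tried to do.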

Nika

@jonathan_song2 So how do you use AI? Because you outlined limits for restrictions. What is doable for AI when it comes to your tasks?
