Nika

How much do you trust AI agents?

With the advent of clawdbots, it's as if we've all lost our inhibitions and "put our lives completely in their hands."

I'm all for delegating work, but not giving them too much personal/sensitive stuff to handle.

I certainly wouldn't trust something to the extent of providing:

  • access to personal finances and operations (maybe just setting aside an amount I'm willing to lose)

  • sensitive health and biometric information (can be easily misused)

  • confidential communication with key people (secret is secret)

Are there any tasks you wouldn't give AI agents or data you wouldn't allow them to access? What would that be?

Re. finances – Yesterday I read this news: Sapiom raises $15M to help AI agents buy their own tech tools – so this may be a new era in which funds go to agents rather than to founders.

2.6K views


Replies

Mark Lemuel M

lead gen, mostly. and automatic replies. can't fully trust with money tho... there's news here that some Claude bot user's agent bought an entire course just to serve its master useful information regarding what he was looking for.

Nika

@kilopolki Damn, I would go crazy if it used my money like that. 😂

Nikhil Shahane

I am absolutely paranoid about stuff that is truly personal or has high fall-out potential. So as far as possible, I don't give it access. I'm actually contemplating running stuff on a local Raspberry Pi where I can have much more control.

I'm experimenting with building a few skills that make interactions more deterministic so I can give it gated / limited access to personal finances or other confidential data.

For data that I don't care about, I'm a lot more liberal. E.g. personal email is a no-no, but a lot of the projects I build have API keys that are rate-limited at the source, so I don't care. Honestly, about 99% of my work emails are also fair game. Maybe the biz plans for the future might be somewhat confidential, but otherwise there's barely anything worth worrying about.

Nika

@nikhilshahane yeah, actually, I would also be scared that it would damage my reputation by, let's say, sending an inappropriate email to potential leads or prospects :D no good no good :D

Nikhil Shahane

@busmark_w_nika LOL! Yes, that's true.

Nika

@nikhilshahane but some people can be the same. 😅

David Alexander

@nikhilshahane  @busmark_w_nika With email, I would only ever allow it to write drafts; it takes no time for me to review and send, delete, or edit.

Nikhil Shahane

@busmark_w_nika  @david_alexander4 - This is actually the safest. However, sometimes you've just got to YOLO it in life! :D

David Alexander

@busmark_w_nika  @nikhilshahane Haha, the temptation is real! :) I'm still keeping my OpenClaw bot chained up in its digital basement though.

Alexey Glukharev

@nikhilshahane  @busmark_w_nika so true. I also worry about that. Hopefully good MD files will help. I think we'll figure out some way to handle it properly when the issue becomes massive

Nikhil Shahane

@busmark_w_nika  @alexeyglukharev I've been building my own agent and .MD files won't cut it for all models.

Codex is very compliant and listens. Gemini is just an eager beaver that will shoot first and ask questions later. Claude is somewhat compliant, but does other stuff it wasn't asked to.

Biggest learning is that some of these things have to be hard-policy gated. But there is a way to make it work.
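The "hard-policy gate" idea can be sketched in a few lines: the gate lives outside the model, so no prompt or .MD instruction can talk its way past it. This is illustrative only; the tool names and spend limit are made up for the example.

```python
# Hypothetical hard-policy gate: every agent tool call must pass through
# this wrapper before anything executes. Deny by default.

ALLOWED_TOOLS = {"read_file", "search_web"}  # an allow-list, not a prompt rule
SPEND_LIMIT_USD = 5.00                       # anything above needs a human

def dispatch(tool: str, args: dict) -> str:
    # Stand-in for actual tool execution.
    return f"ran {tool}"

def gated_call(tool: str, args: dict) -> str:
    """Refuse tool calls that fall outside the hard policy."""
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{tool}' is not on the allow-list")
    if args.get("cost_usd", 0) > SPEND_LIMIT_USD:
        raise PermissionError("spend limit exceeded; needs human approval")
    return dispatch(tool, args)  # execution happens only past the gate
```

Because the check runs in plain code rather than in the model's context, an "eager beaver" model can ask all it likes; the buy-a-course call simply never dispatches.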

Alexey Glukharev

@nikhilshahane From my experience, they do ask for approval for certain commands, but the human factor kicks in at some point. If the agent asks too often, I end up blindly approving everything. I’ve also had a couple of cases where I approved one thing, but the agent used it in a way I didn’t expect.

Matthew @ Sapling

@nikhilshahane I'm on a Mac and run Docker. I can set up an isolated instance and go crazy.

Nikhil Shahane

@tinyorgtech Absolutely safest way to do stuff. I had a lot of fun with Clawdbot (back when it was still Clawdbot...). It's much more restrained now.

Alan

@nikhilshahane  Running stuff locally is definitely the move. I've been running a few agents across different machines at home for a couple months now and honestly the trust problem I didn't expect was between the agents themselves — not just me trusting them. Like, if you have three instances on your LAN, how does one know the other is legit and not some rogue process pretending to be your agent? There's no identity layer for this stuff.

Everyone's worried about giving agents their bank login — fair enough — but nobody's really talking about how agents trust each other. Least privilege helps for sure, but gets tricky when agents need to coordinate.
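A minimal version of that missing identity layer is just shared-secret message signing: each trusted agent holds a key distributed out-of-band, and a rogue process without it can't forge a valid tag. A sketch, not production advice; real setups would want per-agent keys, nonces with expiry, and TLS.

```python
import hashlib
import hmac
import secrets

# Shared key distributed out-of-band to the agents you actually trust.
SHARED_KEY = secrets.token_bytes(32)

def sign(message: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Tag a message so peers can check it came from a trusted agent."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes, key: bytes = SHARED_KEY) -> bool:
    """Constant-time check that the tag matches (resists timing attacks)."""
    return hmac.compare_digest(sign(message, key), tag)
```

An agent on the LAN would attach `sign(payload)` to each request; the receiver drops anything whose tag fails `verify`, which is exactly the "is this really my agent?" check.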

AJ

I trust them very little because I am aware of how easily things get out of hand when context rot happens.

Aside from the security challenges, preventing subtle incapacitation is exceedingly hard.

Nika

@build_with_aj How do you protect yourself from being "scammed" by AI agents?

AJ

@busmark_w_nika 

Good opsec first and foremost.

Limit interaction to what is strictly necessary, use principle of least privilege.

Rely on agents as little as possible, tbh.

Shubhra Srivastav

@busmark_w_nika  @build_with_aj Totally agree - least privilege is really important

Nika

@build_with_aj TBH, I expected something like a separate computer, but your principles are stronger :D

Igor Lysenko

There is always this feeling that AI truly stores personal information, and it could potentially be used against you. However, if you don't share that information, things will be easier. Although sometimes we might accidentally give it away :/

Nika

@ixord In my opinion, it will be the same as with social media. If you don't have one, you will be excluded from what's happening. Like you never existed. And you'll fall behind.

Igor Lysenko

@busmark_w_nika If you have your own product and there is no LinkedIn profile of the founder, then trust in the product may decrease. At the moment you are right that to exist for other people, you need social networks. However, if you are a super celebrity, then social networks may not be necessary :)

Nika

@ixord the context matters, indeed :D But I am not a celebrity (yet) :DDDD

Igor Lysenko

@busmark_w_nika I think you are a celebrity (on PH) because you actively post interesting topics for discussion, and I see that many people know you :)

Dedy Ariansyah

I don’t think the problem is giving agents sensitive tasks.

The problem is we can’t inspect what they actually did.

Traditional systems have audit trails.
Most AI agents only show the final answer.

So the fear isn’t autonomy — it’s opacity.

Once agents become traceable and explainable, the conversation changes from “never trust AI” to “trust but verify.”

I’ve been experimenting with this idea in an open-source project while trying to understand how agent accountability should work in practice — happy to share if anyone’s curious

Nika

@dedy_ariansyah But AI agents are fast, and rules should be set in advance. When something inconvenient or bad has already happened, it can be too late (we only discover it after the event).

Dedy Ariansyah

@busmark_w_nika you are right. The project I've been working on is meant to prevent just that! Firstly, it can trace the reasoning and execution path of every action an AI agent takes. Secondly, it has an auto-eval capability that evaluates whether the thoughts and actions the agent took before reaching a certain event are harmful, and it stops the agent right there just in case. Thirdly, it suggests remediation for alignment. This way we can embed the key elements of zero-trust into AI agent governance.
Nika

@dedy_ariansyah If you can create something like that, then cool, I think it could be licensed to companies.

Ryan Fong

@dedy_ariansyah  "The fear isn't autonomy - it's opacity" is exactly the framing we landed on too. We built Armalo (armalo.ai) for this: agents define behavioral pacts upfront - what they commit to doing, what's out of scope - then every action gets continuously evaluated against those pacts and stored as a verifiable track record. Turns "did this agent behave?" from unanswerable to queryable. Would genuinely love to see what you're building on the traceability side too.

Abhinav Sharma

One of the best ways is to create a VM and then give that to OpenClaw, if you want to run it locally

Nika

@abhinavsharma_ph I didn't know, so today I learned, thank you :)

Anton Ponikarovskii

As our core product at Lovon AI therapy is based on a kind of AI agent, I can say I completely trust them. It takes a lot of work and a lot of iterations to make it viable. But when you've done hundreds of feedback loops with your AI agent, it feels like magic.

Obviously, when your AI agent is taking its first steps, it should be controlled by a human. That's why we have a medical team that analyzes anonymized data and provides comprehensive feedback on how the AI therapist works and what might be improved.

Nika

@ponikarovskii Which medical system? :)

Anna Sokolova

@ponikarovskii It's great that there's oversight from doctors, but I don't think I could fully open up to a machine. There's this barrier: it doesn't feel me, it just calculates me. Although for initial screening, I guess it's useful.

Peter Shu

Being scared of a "black-box" system having control over even a fraction of your computer is normal. But it looks like in 2026, with things like OpenClaw, people have kinda just "given up" on safety and just want to try giving agents full power.

I'm all for efficiency, but the line might be drawn at AI agents handling money and key-related items/tasks.

Nika

@peterz_shu Sometimes it feels like people just wanna be the first to try, and don't care about privacy and security. 🤷‍♀️

David Martín Suárez

I’m not quite brave enough yet to give something like Clawbot/OpenClaw real autonomy 😅

But I’ve been using Claude Cowork for the past few weeks and honestly I’m really happy with it, as long as it stays in “copilot” mode (I review everything before it ships).

I’ve leaned on it heavily for a new side project I’m building (BenchCanvas): market + competitor research, branding notes, SEO ideas, PRDs, and even markdown instruction docs to run in Cursor. It’s also been great as a second pair of eyes on the landing design and copy.

So for me the trust line is: I’m very comfortable delegating thinking, synthesis, drafts, and structured docs. I’m not ready to delegate actions that touch money, private accounts, or anything irreversible without a human in the loop.

Nika

@david_martin_suarez To be honest, I do not recommend delegating money stuff. Just because of this :D
https://www.facebook.com/groups/factpoint/posts/903666212031147/ I would go bankrupt lol

Michael Foote

I agree, I would be hesitant to allow it to freely access finances (maybe unless it's a certain amount), or medical and mental health information. I think too much work is being put into automating and streamlining AI without streamlining the safety and approval process first.

Nika

@michael_foote1 And since big companies wanna collab with the army, we are so doomed. I cannot trust AI like this anymore.

Ryan Fong

@michael_foote1  This is the exact problem Armalo (armalo.ai) was built for - the trust infrastructure has to come before the autonomy infrastructure. Agents define behavioral contracts upfront with explicit scope limits, get continuously evaluated against them, and build a verifiable reputation over time. The goal is not to slow automation down, it's to give you a principled basis for knowing which actions to approve vs. just let run.
