Nika

How much do you trust AI agents?

With the advent of clawdbots, it's as if we've all lost our inhibitions and "put our lives completely in their hands."

I'm all for delegating work, but not giving them too much personal/sensitive stuff to handle.

I certainly wouldn't trust them to the extent of providing:

  • access to personal finances and operations (maybe just setting aside an amount I'm willing to lose)

  • sensitive health and biometric information (can be easily misused)

  • confidential communication with key people (secret is secret)

Are there any tasks you wouldn't give AI agents or data you wouldn't allow them to access? What would that be?

Re. finances – Yesterday I read this news: Sapiom raises $15M to help AI agents buy their own tech tools – so this may mark a new era in which funding flows to agents rather than to founders.

Replies

quinn.nelson

I wouldn’t delegate anything related to payments.

I’d only allow access to things that are already paid for.

Leaving payments open feels like a really bad idea to me.
I’d mainly use AI agents for tasks like market research or summarizing things that require regular updates.

Nika

@quinn_nelson payments are a no-no for me as well ;)

Fraser

About the same level of trust I have when I leave my 3 teenage sons at home for the weekend.

Nika

@couldashouldawoulda :D Teenagers vs. AI agents. I would trust AI agents more in this case :D I know teenagers :D

Bangalore Packers and Movers

I trust AI agents for routine tasks and productivity, but not for sensitive decisions or personal data. Human judgment and verification are still essential for reliability and safety.

Nika

@bangalore_packers_and_movers I see it the same way :)

Chris Lippi

I kind of come at this a little differently. The term "AI agent" can mean different things to different people. Yes, there are lots of people going all in on OpenClaw. It's definitely not ready for average human use. You have to really be savvy to use it safely. Most people won't take care, and will yolo and maybe regret.

What has changed fundamentally in my everyday life is that I use AI tools interactively all day, and sometimes turn them into repeatable tasks once I've honed the skill and trust it. I give Claude access to much of my work and personal data through well-governed, scoped MCPs that I control and where I can log activity. The main thing I'm trusting is that Anthropic isn't slurping my data, and I make sure I've configured Claude to restrict web search and don't enable their stock MCPs.

Once you start prompting your way to simpler daily work or life tasks, it's a productivity drug. Literally hours of work reduced to nothing. Once you trust the results and turn them into scheduled skills, you can't imagine going back. I recommend that everyone I work with start with AI through chat tools: figure out how to prompt better, establish guardrails, save off skills. It's a necessary skill in 2026 and preps you for what's coming down the pipe as the tools get better.

Nika

@clippi In that case it would be useful to find a good process to track and control the work of those agents effectively, so that there's no significant irreversible harm. :)

Chris Lippi

@busmark_w_nika for sure. trust builds from usage and expands with the visibility of tracking what the AI has done. indeed.

remi

Honestly, I’m a bit concerned about the idea of a blackbox. I’d prefer it to be more transparent, of course, as long as the content doesn’t end up feeling like spam.

Nika

@remi_kyrian Transparency is good but not enough tbh; I require a more controlled environment.

cecilia

It really depends on the domain for me. I work in HR tech and we use AI agents to filter noise and surface patterns that humans would totally miss. But the final call on a candidate? That still needs a person who understands context, culture fit, and empathy. Trust builds when the agent is transparent about what it did and why, not when it just hands you an answer. The biggest risk isn't the AI being wrong, it's people not questioning the output.

Nika

@ceciliatran this is interesting – how do you use AI for your work and how do you see candidates using it? Are they relying too much on AI?

cecilia

@busmark_w_nika I use AI every day. I've trained my Claude on my tone of voice and it knows about my company and what we stand for, so a lot of admin work is eliminated.

As for candidates, I don't think using AI is a bad thing at all. I actually see it as a positive signal when someone knows how to leverage it well. But they should be mindful about making the output their own. If your resume or cover letter is full of em-dashes and words like "spearheaded" or "busywork," it's immediately obvious it came straight from ChatGPT. The suggestion would be to take the time to adjust it so it sounds like you, not like a prompt.

Hitesh

Photos and finances are my hard no. Happy to delegate tasks, but not trust AI with anything deeply personal or financially sensitive.

Nika

@hitesh55 Mentioning photos is new in this discussion. Why is that?

Mykola Kondratiuk

I run 10+ autonomous agents daily for PM workflows. The trust framework I've landed on: read access is liberal, write access is reviewable, anything touching external systems or money is approval-gated. The paranoia goes away once the boundary is clear.
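That three-way boundary can be sketched as a tiny policy gate. A minimal sketch in Python, under the assumption that actions fall into exactly the three tiers Mykola describes; the names (`Action`, `gate`) are illustrative, not his actual setup:

```python
from enum import Enum

class Action(Enum):
    READ = "read"          # liberal: always allowed
    WRITE = "write"        # allowed once a human has reviewed it
    EXTERNAL = "external"  # external systems or money: approval-gated

def gate(action: Action, reviewed: bool = False, approved: bool = False) -> bool:
    """Decide whether an agent action may proceed.

    Reads pass freely, writes require review, and anything touching
    external systems or money requires explicit approval.
    """
    if action is Action.READ:
        return True
    if action is Action.WRITE:
        return reviewed
    return approved  # Action.EXTERNAL
```

The point of making the boundary this explicit is exactly the comment's claim: once every agent call is forced through one small, auditable gate, the paranoia has somewhere concrete to live.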

Nika

@mykola_kondratiuk How much do you pay for running it? :D

Mykola Kondratiuk

@busmark_w_nika they are all running on $200 Claude subscriptions. And this is the beauty: one subscription to chat in Claude, use Claude Code, and run agents based on Sonnet and Opus :)

cecilia

Working in HR tech, I think about this constantly. We use AI agents to surface candidates and sort through messy data, and they are great at it. But the moment a decision impacts someone's career, I want a human making that call. For me, the trust line is not about capability; it is about the cost of getting it wrong. High stakes = human in the loop, always.

Nika

@ceciliatran do you also use AI for picking the right candidate for an open role?

cecilia

@busmark_w_nika we use AI to evaluate the profiles and show the matching rates based on the recruiter's requirements, but we don't automatically pick the best ones, that's up to the recruiter!

Jo Public

We build tools with AI baked in but made the decision early on to keep everything 100% local. No cloud processing, no telemetry. For our users that's been the biggest trust signal - knowing their data never leaves their machine.

Nika

@jo_public I want to build it the same way, though it has a downside: after reinstalling, there's no easy way to get the data back.

Jo Public

That's a fair point. We built a 30-day quarantine system for exactly this reason - nothing gets permanently deleted, it all sits in a restorable vault first. But you're right that local-only means the user is responsible for their own backups. Trade-off worth making in our case though - our users would rather own that risk than hand their files to a cloud service.