Nika

How much do you trust AI agents?

With the advent of clawdbots, it's as if we've all lost our inhibitions and "put our lives completely in their hands."

I'm all for delegating work, but not giving them too much personal/sensitive stuff to handle.

I certainly wouldn't trust one to the extent of providing:

  • access to personal finances and operations (maybe just setting aside an amount I'm willing to lose)

  • sensitive health and biometric information (can be easily misused)

  • confidential communication with key people (secret is secret)

Are there any tasks you wouldn't give AI agents or data you wouldn't allow them to access? What would that be?

Re. finances – Yesterday I read this news: Sapiom raises $15M to help AI agents buy their own tech tools – so this may be a new era where funding goes to agents rather than to founders.


Replies

Alexey Glukharev

Definitely not personal communications, maybe just bots for business needs.

On the financial part, I'm good with delegating, but I'd ask for approval with details for each move.

Nika

@alexeyglukharev My take: the things we care about a lot, I'd rather do in person/manually :) Same for things I enjoy.

Umair

hot take but i think everyone here is worried about the wrong thing. the real risk with agents isn't data leakage or rogue bank transfers. it's compounding errors over time that look fine individually but add up to something broken. i've been running coding agents continuously for months and the scariest moments weren't security incidents, they were subtle logic drift where the agent confidently made a series of reasonable-looking decisions that were collectively wrong. nobody noticed until the output was way off.

the fix isn't restricting access, it's making every action reversible. trash over rm, drafts over sends, branches over direct commits. if you design your workflow so nothing is permanent until a human says so, you can give agents surprisingly broad access without losing sleep.
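
The "trash over rm" idea above can be sketched in a few lines. This is a minimal illustration, not any particular tool's implementation; the `.trash` directory and function names are made up for the example:

```python
import shutil
from pathlib import Path

TRASH = Path(".trash")  # hypothetical local trash directory

def soft_delete(path: str) -> Path:
    """Move a file into the trash instead of deleting it.
    Reversible: the file can be restored until the trash is emptied."""
    TRASH.mkdir(exist_ok=True)
    src = Path(path)
    dest = TRASH / src.name
    shutil.move(str(src), str(dest))
    return dest

def restore(name: str, target_dir: str = ".") -> Path:
    """Undo a soft delete by moving the file back out of the trash."""
    dest = Path(target_dir) / name
    shutil.move(str(TRASH / name), str(dest))
    return dest

# demo: an agent "deletes" a file, a human can still restore it
Path("notes.txt").write_text("important")
soft_delete("notes.txt")
assert not Path("notes.txt").exists()   # gone from the working dir...
restore("notes.txt")                    # ...but nothing was permanent
```

The same shape applies to drafts over sends and branches over direct commits: the agent's action lands in a staging area, and only a human action makes it final.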

Nika

@umairnadeem Or it would be cool to give one bot a prompt to create the code and tell another 2 or 3 bots: check it for bugs. Would be a cool experiment :)

Jarmo Tuisk

I do AI training sessions for teams, so this trust thing comes up in like every single session.

Agents are getting really good but honestly how much you trust them depends way more on how you set up the context — guardrails, instructions, steering — than on the model itself. Like maybe 30% is the model and 70% is your prep work.

Best analogy I have is hiring a summer intern from college. Smart kid, learns fast, sometimes even brilliant. But then you realize you spend 2x more time supervising this intern than just doing the thing yourself :D

Trust comes when you stop expecting magic and start treating them like a junior teammate who needs really good onboarding docs.

Nika

@jarmo_tuisk2 Okay, not gonna lie, when I take my internships into account... I would trust AI more :D

Jarmo Tuisk

@busmark_w_nika :D :D maybe you are right

Nika

@jarmo_tuisk2 I am 100% right lol :D

Jarmo Tuisk

@busmark_w_nika  haha fair enough. at least AI doesn't steal your lunch from the office fridge

Aakash

As a developer, I mainly take the help of agents to find gaps in my architecture and design, and maybe to code a complex feature that would have taken me a few days or so. Given that AI is writing a bunch of code for me, I am always paranoid: what if it removed something I already built? What if it chooses a sub-optimal solution? What if its code introduced bugs elsewhere in the software? I mean, prompt engineering, spec-limited code, and Agent Skills are good, but as the context grows, the agent starts forgetting or ignoring those. I have hence learned to trust the AI "just enough": enough to allow it to touch the codebase, not enough to make decisions on its own regarding implementation and architecture. And yes, careful reviews after it is done, and correcting things myself. AI outputs are good when taken with a grain of salt.

Nika

@aakashh242 We should still rely on ourselves :)

Handuo

Trust really depends on what kind of data the agent needs access to. I draw a hard line at financial accounts and health records — too much downside risk.

But for content discovery and curation? I actually think AI agents add a lot of value there. We built Copus partly around this idea — helping people organize and rediscover the things they save across the web. The agent does the heavy lifting of surfacing relevant content, but the human still decides what matters.

The key is designing systems where AI handles the tedious parts (sorting, tagging, recommending) while keeping humans in control of the final decisions. Guardrails > blind trust.
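
The "AI proposes, human decides" guardrail described above can be sketched generically (this is not Copus's actual design; all names here are hypothetical). An agent queues up proposed actions, and nothing executes until a human approves each one:

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    description: str  # human-readable summary shown at review time
    apply: object     # zero-arg callable that performs the action

@dataclass
class ApprovalQueue:
    """Agent output lands here as proposals; nothing runs until approved."""
    pending: list = field(default_factory=list)

    def propose(self, action: Action) -> None:
        self.pending.append(action)

    def review(self, approve) -> list:
        """approve(action) -> bool is the human decision.
        Only approved actions are executed; the rest are dropped."""
        results = [a.apply() for a in self.pending if approve(a)]
        self.pending.clear()
        return results

# demo: the agent proposes two actions, the human rejects the risky one
queue = ApprovalQueue()
queue.propose(Action("tag article as 'ml'", lambda: "tagged: ml"))
queue.propose(Action("delete saved link", lambda: "deleted"))
done = queue.review(lambda a: "delete" not in a.description)
# only the approved tagging action ran
```

The tedious work (generating the proposals) is automated, while the irreversible step stays behind a human decision.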

Nika

@handuo Isn't that something Clawd could be used for too?

Sangeet Banerjee

I’m excited about AI agents too, but I definitely have a few boundaries.

For me, anything involving direct control over money is a no-go. I might let an AI analyze spending or suggest actions, but I wouldn’t give it full access to move funds or make financial decisions on its own.

Same with sensitive health or biometric data. The upside isn’t worth the risk if that information gets misused or leaked.

And private conversations with important people (business partners, legal matters, personal messages) should stay private. Some things just shouldn’t pass through another system.

I’m happy letting AI handle research, drafting, organizing, and repetitive tasks. But when it comes to money, identity, or truly confidential information, I prefer to keep a human in the loop.

Nika

@sangeet_banerjee I feel a little bit dumb because, confession: I copy-paste some conversations because I do not understand some people and nuances 😅 That said, you are more cautious and conscious about usage :)

Liron Ben Moshe

"Trust" is an interesting word for AI Agents. I would say, I do not trust them, ever. At any point the data you feed that ai agent for the product you are building can be taken. You are relying on Anthropic, OpenAi, etc. to stay honest to us as users. Trust for me is pretty minimal when it comes to AI Agents. BUT.... I love how much of an impact it's made in my daily work. Always remember, the human brain, touch and eye means more than producing something in 1 second. For any industry.

Nika

@liron_ben_moshe I suppose that many people will blindly trust just because of the comfort that it offers. 🤷‍♀️

Liron Ben Moshe

@busmark_w_nika They certainly will. Problem is, once the user builds something so fast with the agent and problems occur that the agent cannot fix, said user will not know the solution, or it'll take them 2x as long to research and learn how to fix it.

Marces William

For me, trust is heavily tied to how long the "collaboration" has been going on, especially when the context is still pretty dynamic and the input can change over the course of months. Financials are less of a concern in that sense. But when it’s just about quick iterations, like making a greeting card illustration from a personal photo, no worries tbh.

Nika

@marces_wiliam As long as the output stays on my computer and needs approval first, I am okay with that ;)

Sansstuti Aggarwal

AI agents are great for analysis, automation, and summarising information, but I’d still keep them away from things that require high trust or irreversible actions like direct control over finances, sensitive health data, or private communications.

For now, I see them more as decision-support systems, not decision-makers.

Nika

@sansstuti_aggarwal As analysis tools – okay, I'm buying that too :) We are on the same page on this.

Umair

i think most people in this thread are conflating "trust" with "giving full access" and those are completely different things. you don't need to trust an agent to use it effectively, you just need to scope its permissions correctly.

i run coding agents basically all day and the stuff that actually burned me wasn't the agent going rogue or leaking data. it was the agent confidently doing the wrong thing in a way that looked right. that's way more dangerous than some theoretical privacy breach because you don't catch it until it's already in production.

the real risk with agents isn't malice or data theft, it's competence drift. they work great for 45 minutes, then the context window fills up and they start hallucinating solutions to problems that don't exist. if you're not checking their work regularly you end up with a slow accumulation of subtle bugs that no amount of sandboxing or VMs will prevent.

Nika

@umairnadeem Is there any way to prevent that hallucination in the future? Because one guy was presenting me a solution that keeps information about you so the AI will remember it. But I was a little bit confused about the target audience.