Nika

How much do you trust AI agents?

With the advent of Clawdbots, it's as if we've all lost our inhibitions and "put our lives completely in their hands."

I'm all for delegating work, but not for handing them too much personal/sensitive stuff.

I certainly wouldn't trust them to the extent of providing:

  • access to personal finances and operations (maybe just setting aside an amount I'm willing to lose)

  • sensitive health and biometric information (can be easily misused)

  • confidential communication with key people (secret is secret)

Are there any tasks you wouldn't give AI agents, or data you wouldn't allow them to access? What would those be?

Re: finances – yesterday I read this news: Sapiom raises $15M to help AI agents buy their own tech tools – so we may be entering an era when funding goes to agents rather than to founders.


Replies

Jo Public

We build tools with AI baked in but made the decision early on to keep everything 100% local. No cloud processing, no telemetry. For our users that's been the biggest trust signal - knowing their data never leaves their machine.

Nika

@jo_public I want to build it the same way, though it has a downside: after reinstalling, there is no easy way to get the data back.

Jo Public

That's a fair point. We built a 30-day quarantine system for exactly this reason - nothing gets permanently deleted, it all sits in a restorable vault first. But you're right that local-only means the user is responsible for their own backups. Trade-off worth making in our case though - our users would rather own that risk than hand their files to a cloud service.
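For anyone curious, the shape of it is just a soft-delete pattern: a delete moves the file into a timestamped vault, and a sweeper permanently purges anything past the retention window. A minimal sketch in Python - the vault path, helper names, and 30-day constant here are made up for the example, not our actual code:

```python
import shutil
import time
from pathlib import Path

VAULT = Path.home() / ".quarantine"  # illustrative location
RETENTION_DAYS = 30

def soft_delete(path: Path) -> Path:
    """Move a file into the vault instead of deleting it."""
    VAULT.mkdir(parents=True, exist_ok=True)
    target = VAULT / f"{int(time.time())}_{path.name}"
    shutil.move(str(path), str(target))
    return target

def restore(vaulted: Path, dest: Path) -> None:
    """Bring a quarantined file back."""
    shutil.move(str(vaulted), str(dest))

def purge_expired() -> None:
    """Permanently delete vault entries older than the retention window."""
    cutoff = time.time() - RETENTION_DAYS * 86400
    for entry in VAULT.glob("*"):
        stamp = int(entry.name.split("_", 1)[0])  # timestamp prefix set above
        if stamp < cutoff:
            entry.unlink()
```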

Nika

@jo_public But what about daily activity? My tool counts daily activity so people can avoid a social media ban. Is there any technical solution for storing this data?

Jo Public

@busmark_w_nika Yeah, sorry, I was so wrapped up in setting up K8 stuff yesterday that I was only thinking and explaining why I decided K8 would not be phoning home to the AI data mothership to process our data 🤣 Sorry... my bad 😇

It felt more appealing and reassuring to have a tool that didn't upload all your files to the AI companies to process on their servers, because in my experience there is no such thing as data privacy when you use LLMs. It's a trade-off. Don't get me wrong, I've made that trade-off. I'm not anti-AI, I use it all day every day 🤣🥂 But I'm not blind to it either. Until we all get local AI, the tech companies own all our data, as it all passes back into the machine and it learns from it.

I know, because I tried really hard for nearly two years to secure my work and research on OpenAI, Manus, Gemini, Copilot. I've mainly used Anthropic for the last year, after moving my serious AI work away from OpenAI over privacy and intellectual-property theft issues. I felt Anthropic had slightly better morals and better models, which they do. But in the end you can't stop the data bleed. If you interact with AI, the model and the company will learn from your interactions, for good or for bad. We should all be paid for training all the AI. That's everyone who regularly uses AI, especially for work or health. So that's us.

Personally, I've gone completely all in on my personal details with my PA Codey, Claude in the terminal. It's easier to get simple admin tasks done if my assistant knows all my details. Last year I also used OpenAI intensively to help with a couple of months of chemo: meds tracking, appointments, etc. A full-time job without AI.

But what I don't give AI is complete autonomy to act on my behalf in the outside world. I contain its access to act autonomously depending on the task. I wouldn't give it autonomy over my money, for example. Not because I fear the AI isn't competent to carry out certain tasks, but because I don't trust the lengths people will go to hack and steal, prompt injection etc. I would have loved to dive into Clawdbot, as it was first called. I thought it was Anthropic at first. But I decided to be cautious because of the security risks, as I didn't have the time to dedicate to making it secure and impervious to attack. I'm glad I waited and came up with my own version and system that I'm much more in control of, and it helps me enormously because it knows about my work and my personal stuff the way an excellent PA would. But it's not allowed to run about willy-nilly on the web acting on my behalf without my say-so or express instruction. I may at some point, as an experiment, send out an agent with £50 and see what it comes back with 🤣 But in general, for me, I have to be the human in the loop for decisions with real-world consequences.

Sorry that I went so into one, Nika, but it's a topic I've been wrestling with quietly alone for a long time, and it's something I feel quite strongly about. I feel that anyone selling AI and AI products has a responsibility to make it safe for the user. I don't believe that is happening in general at the moment.

Now, on reading your comment again, forgive me, I'm not sure I exactly understand what you are collecting and what processing you are doing. Please feel free to give me a little more idea of your problem, and I'm happy to try to think of a way you could make it safer. I can't guarantee there is a way, but if there is anything that can be baked in, that's a really good place to start. I hope I haven't upset any rules already by rambling on here 🤣 Feel free to discuss it here, or is there a PM on PH or not? Sorry, I'm new here and don't know the rules... it's my age... 😇

Georgios Sarantitis

As someone with around 12 years of experience in ML/AI apps, I'll say I don't have full faith. The problem is that AI agents lack accountability. I have seen people write bad code and introduce bugs many times (myself included, of course), but there is always a human behind it who can be blamed, but also retrained, who can take responsibility, fix, improve. When AI agents do all the work, we end up with systems that are opaque, inefficient, full of technical debt, and with no one who actually knows what has happened or how to fix it. That causes serious security issues, and I wouldn't feel comfortable (at least for the moment) handing over full control of entire processes to AI agents. That's why, even though I am a big advocate of vibe coding, I always oversee commits, I check the unit tests, I resolve merge conflicts myself, and I generally play a big role in creating my new app (soon to launch, but not yet :)).

Nika

@georgios_sarantitis_ IMO, vibe coding only makes sense for those who are willing to understand the code (so at least some grasp of coding/programming itself) and not blindly copy-paste.

Christophe Dupont

Honestly? I trust them more than I expected, but only because I treat them like a junior dev — great at execution, terrible at judgment. I use Claude Code daily to build my app and it's been a game changer for shipping faster. But I always review the code, always test, and never let it make architectural decisions alone. The moment you blindly trust the output is the moment you get a beautiful function that subtly breaks three other things. Trust the speed, verify the output. That's the balance I've found so far.

Nika

@thenomadcode What are you working on, and how do you use AI to get your project done? Does it show you the lines of code you need to insert, or do you already know what to insert and just double-check with Claude?

I am trying to learn coding by using it, but it always shows me the solution. I have to keep reminding it not to do that.

Constance Tong

It really depends on whether it can deliver what I need. So it’s not about trust, it’s more about whether the outcome matches my expectations.

Nika

@constance_tong But with AI you can make more attempts (within a short period of time), so maybe it is about "how many times will I try before I get the right output" :)

Shawn U.

This hits close to home because I'm building a dating app right now, and AI trust is literally the core tension we navigate every day.

The irony: In dating, people WANT AI help finding compatible matches (saves time, surfaces people you'd never find manually), but they're terrified of the same AI "knowing too much" about their romantic preferences and behavior patterns.

What I've learned:

  • Users trust AI more when it shows its work. "We matched you because you both prioritize communication style over looks" is way more accepted than a mystery algorithm

  • Transparency about what data trains the model matters enormously

  • People draw hard lines around conversation content - they want AI to learn from their swiping patterns, but reading actual messages feels invasive

The counterintuitive thing? Video-based matching actually builds MORE trust in AI than photo-based, because users can see the AI is learning from authentic self-presentation rather than curated profile pics.

@Nika Curious if you see a difference in trust levels for AI that assists decisions vs. AI that makes decisions autonomously? In dating, people want recommendations but absolutely want final say.

Vishnu N C

As someone building in the enterprise AI space, this is the question I think about daily. My take: the trust problem isn't binary — it's about designing systems with the right guardrails so you can trust agents with progressively more responsibility.

For most enterprise use cases, the winning pattern is "AI drafts, human approves" for anything high-stakes, and "AI executes autonomously" for repetitive, low-risk tasks. The mistake most teams make is trying to go from zero trust to full autonomy in one leap. The real path is incremental: let the agent handle email triage first, prove it works, then expand to drafting responses, then eventually sending them.
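In code, that pattern is just a routing decision sitting in front of every agent action. A hypothetical sketch, where the AgentAction shape and the risk flag are invented for illustration:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentAction:
    name: str          # e.g. "triage_email", "send_reply"
    payload: dict
    high_stakes: bool  # classified upstream, e.g. by policy rules

def dispatch(action: AgentAction,
             execute: Callable[[AgentAction], None],
             request_approval: Callable[[AgentAction], bool]) -> None:
    """AI drafts, human approves for high-stakes; autonomous otherwise."""
    if action.high_stakes:
        if request_approval(action):  # human reviews the draft first
            execute(action)
    else:
        execute(action)               # repetitive, low-risk: run directly
```

Expanding autonomy then just means reclassifying a workflow from high-stakes to low-risk once it has proven itself.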

The bigger issue nobody talks about enough is audit trails. I'd trust an AI agent with a lot more if I could see exactly what it did, why it made each decision, and roll back anything it got wrong. Transparency is the foundation of trust.
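Concretely, an audit trail can start as an append-only log where every record carries the what, the why, and enough data to undo the step. A minimal sketch, with field names assumed for the example:

```python
import json
import time

def log_step(logfile, actor: str, action: str, reason: str, undo_hint: dict) -> None:
    """Append one auditable record: what was done, why, and how to undo it."""
    record = {
        "ts": time.time(),
        "actor": actor,      # which agent/workflow acted
        "action": action,    # what it did
        "reason": reason,    # why it decided to
        "undo": undo_hint,   # data needed to reverse the step
    }
    logfile.write(json.dumps(record) + "\n")  # append-only JSONL
```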

Kevin Xu

I’m definitely with you on the "bounded trust" approach. I treat AI agents like highly capable interns—I’ll let them draft my emails and organize my calendar, but they don't get the keys to the vault.

I draw the line at automated decision-making for high-stakes relationships. I wouldn't let an agent handle a sensitive conflict or a critical negotiation on my behalf, as the "human nuance" and accountability are things code just can't replicate yet. Where do you think the line is between "convenient automation" and "losing personal agency"?

Vishnu N C

This is a question I think about constantly as someone building in the enterprise AI space. The trust equation for AI agents in business is fundamentally different from personal use.

For personal tasks, the risk is mostly about privacy and convenience. But in enterprise contexts, a single bad AI decision can cascade — wrong data in a financial report, a compliance violation, or an unauthorized communication sent to a client.

What I've found is that trust with AI agents isn't binary — it's a spectrum that maps to reversibility. I'm comfortable letting agents handle tasks where the output can be reviewed before it takes effect (drafting, analysis, recommendations). But I draw a hard line at anything that's both irreversible AND high-stakes (sending payments, deleting production data, making binding commitments).

The most interesting pattern I'm seeing is "human-in-the-loop by default, with progressive autonomy." Start agents with training wheels, then gradually expand their authority as you build confidence in specific workflows. The companies that get this graduation model right will win the enterprise AI market.
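One hypothetical way to express that graduation model in code: autonomy is earned per workflow through a track record of human-approved runs, with a hard line where irreversible meets high-stakes. The class and the threshold here are illustrative, not a real product's API:

```python
APPROVED_RUNS_NEEDED = 50  # assumed threshold; tune per workflow

class AutonomyGate:
    """Progressive autonomy: agents earn it per workflow."""

    def __init__(self) -> None:
        self.approved: dict[str, int] = {}  # workflow -> human-approved runs

    def record_approval(self, workflow: str) -> None:
        self.approved[workflow] = self.approved.get(workflow, 0) + 1

    def may_run_autonomously(self, workflow: str,
                             irreversible: bool, high_stakes: bool) -> bool:
        # Hard line: irreversible AND high-stakes always needs a human.
        if irreversible and high_stakes:
            return False
        # Otherwise, autonomy is earned through a track record.
        return self.approved.get(workflow, 0) >= APPROVED_RUNS_NEEDED
```

The point is that the "training wheels" are explicit and auditable, not a vibe.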
