Nika

How much do you trust AI agents?

With the advent of clawdbots, it's as if we've all lost our inhibitions and "put our lives completely in their hands."

I'm all for delegating work, but not giving them too much personal/sensitive stuff to handle.

I certainly wouldn't trust one to the extent of providing:

  • access to personal finances and operations (maybe just setting aside an amount I'm willing to lose)

  • sensitive health and biometric information (can be easily misused)

  • confidential communication with key people (secret is secret)

Are there any tasks you wouldn't give AI agents or data you wouldn't allow them to access? What would that be?

Re. finances – Yesterday I read this news: Sapiom raises $15M to help AI agents buy their own tech tools – so this may mark a new era in which funding goes to agents rather than to founders.

3.6K views


Replies

Tony Shishov

Any data you pass to an AI provider should always be treated as potentially compromised. That’s the only rule I use to decide whether to share personal data with these systems.

As a software engineer, I have a solid understanding of how data flows between systems and how companies store and process it. The principle of least privilege is the closest information security concept that translates into a practical daily rule when working with AI agents.

Every program must be able to access only the information and resources that are necessary for its legitimate purpose.
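The least-privilege rule above can be sketched in code: give each agent an explicit allowlist of data/tool scopes and refuse anything outside it. This is a minimal illustration, not any real framework's API; the class and scope names are hypothetical.

```python
# Hypothetical sketch of least privilege for AI agents: each agent holds
# an explicit allowlist of scopes, and any request outside it is refused.

class ScopeError(PermissionError):
    """Raised when an agent asks for a scope it was never granted."""
    pass

class ScopedAgent:
    def __init__(self, name, allowed_scopes):
        self.name = name
        self.allowed_scopes = frozenset(allowed_scopes)

    def request(self, scope):
        # Grant access only if the scope is on the allowlist.
        if scope not in self.allowed_scopes:
            raise ScopeError(f"{self.name} may not access '{scope}'")
        return f"access granted: {scope}"

# A calendar-scheduling agent has no legitimate need for bank data.
scheduler = ScopedAgent("scheduler", {"calendar:read", "calendar:write"})
print(scheduler.request("calendar:read"))

try:
    scheduler.request("bank:transactions")  # outside its legitimate purpose
except ScopeError as err:
    print(err)
```

The point is that the deny decision is structural (the scope was never granted), not a judgment the agent makes at runtime.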

Nika

@tony_shishov I would say anything that you share with anybody else is under threat. Even when you share something with your friend... where is the guarantee that it will not leak?

I may be too sceptical and paranoid, but as soon as your idea leaves your mind and is exposed, it can be accessed by more entities.

Shawn Upson

This hits close to home because I'm building a dating app right now, and AI trust is literally the core tension we navigate every day.

The irony: In dating, people WANT AI help finding compatible matches (saves time, surfaces people you'd never find manually), but they're terrified of the same AI "knowing too much" about their romantic preferences and behavior patterns.

What I've learned:

  • Users trust AI more when it shows its work. "We matched you because you both prioritize communication style over looks" is way more accepted than a mystery algorithm

  • Transparency about what data trains the model matters enormously

  • People draw hard lines around conversation content - they want AI to learn from their swiping patterns, but reading actual messages feels invasive

The counterintuitive thing? Video-based matching actually builds MORE trust in AI than photo-based, because users can see the AI is learning from authentic self-presentation rather than curated profile pics.

@Nika Curious if you see a difference in trust levels for AI that assists decisions vs. AI that makes decisions autonomously? In dating, people want recommendations but absolutely want final say.

Georgios Sarantitis

As someone with around 12 years' experience in ML/AI apps, I will say I don't have full faith. The problem is that AI agents lack accountability. I have seen many times people write bad code and introduce bugs (myself included, of course), but there is always a human being behind it who can be blamed, but also re-trained, who can take responsibility, fix things, and improve. When AI agents do all the work, we end up with systems that are opaque, inefficient, full of technical debt, and with no one who actually knows what has happened or how to fix it. That causes serious security issues, and I wouldn't feel comfortable (at least for the moment) handing over full control of entire processes to AI agents. That's why, even though I am a big advocate of vibe coding, I always oversee commits, I check the unit tests, I resolve merge conflicts myself, and I generally play a big role in creating my new app (soon to launch but not yet :)).
