Nika

How much do you trust AI agents?

With the advent of clawdbots, it's as if we've all lost our inhibitions and "put our lives completely in their hands."

I'm all for delegating work, but not for handing agents too much personal or sensitive material.

I certainly wouldn't trust an agent to the extent of giving it:

  • access to personal finances and operations (maybe just setting aside an amount I'm willing to lose)

  • sensitive health and biometric information (can be easily misused)

  • confidential communication with key people (secret is secret)
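On the finance point, the "amount I'm willing to lose" idea can be enforced mechanically rather than trusted on good behavior. A minimal sketch, assuming a hypothetical wallet wrapper (the names `AgentWallet`, `approve`, and `BudgetExceeded` are illustrative, not a real library):

```python
# Hypothetical sketch: a hard spending cap for an agent's wallet.
# The agent can only ever spend the allowance set aside up front.

class BudgetExceeded(Exception):
    pass

class AgentWallet:
    """Holds only the amount the owner is willing to lose."""

    def __init__(self, allowance: float):
        self.allowance = allowance
        self.spent = 0.0

    def approve(self, amount: float) -> None:
        """Gate every purchase; refuse anything that busts the cap."""
        if self.spent + amount > self.allowance:
            raise BudgetExceeded(
                f"{amount:.2f} would exceed the {self.allowance:.2f} cap"
            )
        self.spent += amount

wallet = AgentWallet(allowance=50.0)
wallet.approve(20.0)      # within budget, allowed
try:
    wallet.approve(40.0)  # would push the total to 60 > 50
except BudgetExceeded:
    pass                  # the agent is stopped before the money moves
```

The point of putting the check outside the agent is that the cap holds even if the agent is confused or manipulated.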

Are there any tasks you wouldn't give AI agents or data you wouldn't allow them to access? What would that be?

Re finances – yesterday I read this news: Sapiom raises $15M to help AI agents buy their own tech tools – so we may be entering an era in which funding goes to agents rather than to founders.


Replies

Astro Tran

Coming at this from a slightly different angle. I build Murror, an AI app for emotional support and loneliness, so trust isn't just a nice-to-have for us, it's basically the whole product. If people don't feel safe being vulnerable with it, there's nothing there.

What I've noticed is that trust with AI in emotional contexts is earned really slowly and lost really fast. One weird or cold response and the person closes off. It's different from a productivity tool where a mistake is just annoying.

I think the harder question isn't "how much do you trust AI" but "does the AI know what it's holding." A lot of agents don't seem built with any awareness of how sensitive the context actually is. That gap worries me more than the capability questions.
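That "does the AI know what it's holding" question can be made concrete with a crude first-pass filter. A minimal sketch, assuming hypothetical categories and keyword lists (nothing here is a real library; a production system would need far more than keyword matching):

```python
# Hypothetical sketch: tag context by sensitivity before an agent
# is allowed to persist or act on it. Markers are illustrative only.

SENSITIVE_MARKERS = {
    "health": ("diagnosis", "medication", "therapy"),
    "finance": ("account number", "iban", "salary"),
    "identity": ("passport", "ssn", "date of birth"),
}

def sensitivity_tags(text: str) -> set[str]:
    """Return the sensitivity categories a message appears to touch."""
    lowered = text.lower()
    return {
        category
        for category, markers in SENSITIVE_MARKERS.items()
        if any(marker in lowered for marker in markers)
    }

def agent_may_store(text: str) -> bool:
    """Conservative default: never persist anything tagged sensitive."""
    return not sensitivity_tags(text)

assert not agent_may_store("My new medication dose changed today")
assert agent_may_store("Let's schedule the demo for Friday")
```

Even a blunt gate like this gives the agent some notion of what it's holding, which is the gap being described above.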
