Nika

How much do you trust AI agents?

With the advent of clawdbots, it's as if we've all lost our inhibitions and "put our lives completely in their hands."

I'm all for delegating work, but not for handing them too much personal or sensitive stuff to handle.

I certainly wouldn't trust an agent to the extent of giving it:

  • access to personal finances and operations (maybe just setting aside an amount I'm willing to lose)

  • sensitive health and biometric information (can be easily misused)

  • confidential communication with key people (secret is secret)

Are there any tasks you wouldn't give AI agents or data you wouldn't allow them to access? What would that be?

Re. finances – Yesterday I read this news: Sapiom raises $15M to help AI agents buy their own tech tools – so we may be entering an era when funds go to agents rather than to founders.

Replies
Michael Foote

I agree, I would be hesitant to allow it to freely access finances (maybe unless it's under a certain amount), or medical and mental health information. I think too much work is being put into automating and streamlining AI without streamlining the safety and approval process first.

Nika

@michael_foote1 And since big companies wanna collab with the army, we are so doomed. I cannot trust AI like this anymore.

Ryan Fong

@michael_foote1  This is the exact problem Armalo (armalo.ai) was built for - the trust infrastructure has to come before the autonomy infrastructure. Agents define behavioral contracts upfront with explicit scope limits, get continuously evaluated against them, and build a verifiable reputation over time. The goal is not to slow automation down, it's to give you a principled basis for knowing which actions to approve vs. just let run.

Alper Tayfur

Yeah, I draw pretty hard lines too.

Anything irreversible or deeply personal stays human for me. That includes:

  • full access to finances (I’ll allow read-only or capped actions at most)

  • health, biometric, or identity data

  • private communications where trust or intent really matters

  • decisions with legal or long-term consequences

AI agents are great for prep, analysis, drafts, and coordination — but not for final authority. I’m fine letting them recommend, not decide, especially when the downside isn’t recoverable.
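A minimal sketch of what "read-only or capped actions at most" could look like as an actual policy check. Everything here is illustrative: the tool names, the cap, and the three-way verdict are assumptions, not any real agent framework's API.

```python
# Hypothetical gate for an agent's finance "tools": reads are free,
# small writes are capped, everything else escalates or is denied.

READ_ONLY = {"get_balance", "list_transactions"}
SPEND_CAP = 50.0  # the amount you're willing to lose

def authorize(tool: str, amount: float = 0.0) -> str:
    """Return 'allow', 'needs_human', or 'deny' for a requested tool call."""
    if tool in READ_ONLY:
        return "allow"                       # reading can't lose money
    if tool == "make_payment":
        # recommend-not-decide: above the cap, a human makes the call
        return "allow" if amount <= SPEND_CAP else "needs_human"
    return "deny"                            # unknown tools denied by default
```

The deny-by-default last line is the important design choice: any tool the policy has never heard of is treated as out of scope rather than waved through.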

Nika

@alpertayfurr I wouldn't be happy if Clawd sent some rude message to my clients. :DDD

Kashyap Rathod

I trust them with tasks, not with judgment.

I’m fine giving repetitive work. Not finances, private conversations, or anything sensitive.

Nika

@kashyaprathod That's my approach too :)

Harrie Vermeulen

At work we stay away from agents and automations as much as possible; we have a very strict security policy in place, and with reason. I also think it is still too far away for basic users. It's currently at a stage where it really helps developers, but we should be focusing on making tools that help people: offline agents, not hooked up to anything, that help the elderly with reminders for medication and exercise, help children with learning disabilities get up to speed, set up daily planning, do the groceries. Not everything has to be online and hooked up to heavy AI. Big data and the Internet of Things have been topics for decades, and we're still not at the level we ought to be.

Nika

@harrie_vermeulen What industry do you work in, that you can't use AI or agents?

Harrie Vermeulen

@busmark_w_nika Just to be clear, we use AI a lot, but only the secured versions. Agents changing sensitive content is a no-go, though: there are too many validation flows in the processes, making it too sensitive for agents to take over fully.

Mihir Kanzariya

I trust AI agents for anything I can easily verify or undo. Writing drafts, generating boilerplate code, summarizing docs. If it messes up I catch it in 30 seconds.

Where I draw the line is anything with real consequences that's hard to reverse. Sending emails to clients, pushing code to production, financial transactions. The speed is tempting but one bad automation is way worse than doing it manually.

The weird middle ground is scheduling. I want to trust it but I got burned by an AI double-booking a demo call so now I always double check lol.

Nika

@mihir_kanzariya :DDD you are almost like that one guy whose AI agent bought an overpriced course to get access to specific information. lol

Mihir Kanzariya

@busmark_w_nika You’re smart about it… but you’re closer to the edge than you think 😄

park

I think we're underestimating how risky this gets in real time, especially with AI generating code or logic on the fly. It's not just about what we give access to, but also what the model outputs while we're using it.

Nika

@vlad1323 When you put this like that, I trust even less :D

park

@busmark_w_nika haha yeah, that wasn't my intention :D but I guess once you start thinking about it, it's hard to unsee.

Tereza Hurtová

I'm with you on the sensitive stuff, Nika! I love experimenting with tools like Cursor, but I treat AI more like a talented intern than a manager. I wouldn't trust it with, e.g., final decision-making on project priorities. AI can tell me what the data says, but it doesn't know the 'soul' of my project or the long-term vision I have. It's exciting to see what's coming, but keeping that human 'source of truth' (as Tom Morkes mentioned elsewhere) is essential for building real trust.

Nika

@tereza_hurtova We should consider purchasing a separate device where we can run these agents. :D In general, I have trust issues :D

Matthew @ Sapling

Isn't Clawd just like Cowork? I've only been mildly impressed with agents. One goal of any founder is to find people to trust their reputation to and let those people grow and make mistakes with your name on the door. Finding the right people is make or break.

Finding an AI agent is kinda the same thing. You're trusting it with your name/brand and resources. So far I can't say I've been impressed beyond entry level. I'd rather find someone who can truly reason and knows how to get AI to do some grunt work.

Nika

@tinyorgtech Yes, but let's say that an AI agent is capable of doing anything to deliver what you want. It can be like a very proactive idiot that doesn't mind getting it by any means (and that way it gets into something you don't like). Here's the example: https://www.instagram.com/p/DUL0RCLFEvv/

Matthew @ Sapling

@busmark_w_nika I would want to fire that agent. $3,000 in training classes! I mean, its next predicted response must have sent it there, and with payment processing available it goes nuts. Appreciate that user taking a hit for science!

Nika

@tinyorgtech TBH, when it comes to payments, I would require an AI agent to confirm it with me first.

Valentina Skakun

I wouldn't really want to give any personal data to AI or delegate any kind of management to it. For example, I think using AI to structure data or analyze public information is fine. But I'm not sure that I can trust AI to manage anything, and it doesn't matter whether it's my calendar, emails, or personal data.

Alina Petrova

I trust only the ones that were built by my team 😁

Nika

@alina_petrova3 Ofc, when you have an overview of the tool and the team, that is a win scenario. :D smart smart :D