Nika

How much do you trust AI agents?

With the advent of clawdbots, it's as if we've all lost our inhibitions and "put our lives completely in their hands."

I'm all for delegating work, but not giving them too much personal/sensitive stuff to handle.

I certainly wouldn't trust them to the extent of providing:

  • access to personal finances and operations (maybe just setting aside an amount I'm willing to lose)

  • sensitive health and biometric information (can be easily misused)

  • confidential communication with key people (secret is secret)

Are there any tasks you wouldn't give AI agents or data you wouldn't allow them to access? What would that be?

Re. finances – Yesterday I read this news: Sapiom raises $15M to help AI agents buy their own tech tools – so this may be a new era in which funds go to agents rather than to founders.


Replies

Alper Tayfur

Yeah, I draw pretty hard lines too.

Anything irreversible or deeply personal stays human for me. That includes:

  • full access to finances (I’ll allow read-only or capped actions at most)

  • health, biometric, or identity data

  • private communications where trust or intent really matters

  • decisions with legal or long-term consequences

AI agents are great for prep, analysis, drafts, and coordination — but not for final authority. I’m fine letting them recommend, not decide, especially when the downside isn’t recoverable.

Nika

@alpertayfurr I wouldn't be happy if Clawd sent some rude message to my clients. :DDD

Valeriia Kuna

Definitely agree on personal finances and biometric data.

But I also draw a hard line at social media autonomy. I would never give an agent write-access to my LinkedIn or X accounts to post or reply automatically. My online presence is my reputation.

Nika

@valeriia_kuna Me as well. I hate it even when someone uses AI content. It feels so fake and synthetic.

Valeriia Kuna

@busmark_w_nika Me too! When I see some AI generated posts on social networks, I'm like🙄🙄🙄
I like to polish my texts with AI or translate them, but I don't like AI generic content.

Nika

@valeriia_kuna I do the same, but if I don't like it, I will delete it anyway. sooo.

Kashyap Rathod

I trust them with tasks, not with judgment.

I’m fine giving repetitive work. Not finances, private conversations, or anything sensitive.

Nika

@kashyaprathod That's my approach too :)

Mihir Kanzariya

I trust AI agents for anything I can easily verify or undo. Writing drafts, generating boilerplate code, summarizing docs. If it messes up I catch it in 30 seconds.

Where I draw the line is anything with real consequences that's hard to reverse. Sending emails to clients, pushing code to production, financial transactions. The speed is tempting but one bad automation is way worse than doing it manually.

The weird middle ground is scheduling. I want to trust it but I got burned by an AI double-booking a demo call so now I always double check lol.

Nika

@mihir_kanzariya :DDD you are almost like that one guy whose AI agent bought an overpriced course to get access to specific information. lol

Mihir Kanzariya

@busmark_w_nika You’re smart about it… but you’re closer to the edge than you think 😄

park

I think we’re underestimating how risky this gets in real-time,

especially with AI generating code or logic on the fly.

it’s not just about what we give access to,

but also what the model outputs while we’re using it

Nika

@vlad1323 When you put it like that, I trust even less :D

park

@busmark_w_nika haha yeah that wasn’t my intention :D

but I guess once you start thinking about it,

it’s hard to unsee

Tereza Hurtová

I'm with you on the sensitive stuff, Nika! I love experimenting with tools like Cursor, but I treat AI more like a talented intern than a manager. I wouldn’t trust it, e.g., with final decision-making on project priorities. AI can tell me what the data says, but it doesn’t know the 'soul' of my project or the long-term vision I have. It's exciting to see what's coming, but keeping that human 'source of truth' (as Tom Morkes mentioned elsewhere) is essential for building real trust.

Nika

@tereza_hurtova We should consider purchasing a separate device where we can run these agents. :D In general, I have trust issues :D

Valentina Skakun

I wouldn't really want to give any personal data to AI or delegate any kind of management to it. For example, I think using AI to structure data or analyze public information is fine. But I'm not sure that I can trust AI to manage anything. And it doesn't matter whether it's managing my calendar, emails, or personal data.

Alina Petrova

I trust only the ones that were built by my team 😁

Nika

@alina_petrova3 Ofc, when you have an overview of the tool and the team, that is a win scenario. :D smart smart :D

Ryan Tucker

This is quite literally why we built kwAI to enable people to sell rather than letting AI do the selling.

We let AI find, research, and draft the messages, while the human does the relationship building.

Very good mix.

Nika

@ryan_tucker13 can you share a link pls?

Ryan Tucker

@busmark_w_nika Of course!! https://i-kwai.com

We're launching next month. :)

Nika

@ryan_tucker13 thank you, feel free to remind me :)

Ryan Tucker

@busmark_w_nika Amazing! Will do! We're launching in less than 2 weeks! :)

We really appreciate the support. You're welcome to join our Herd any time!

Mykola Kondratiuk

I build AI tools and honestly I don't trust my own agents with anything I can't undo. Like I'll let them draft emails all day, but actually sending? Nope, always a human in the loop there.

The finance thing is wild to me - seen too many hallucination edge cases to let an agent anywhere near real money.

I think the trust question really comes down to reversibility. Read-only access to my calendar, notes, whatever - sure. But anything that creates a side effect in the real world needs a confirmation step. The people skipping that are gonna learn the hard way.
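The "confirmation step before side effects" idea in the comment above can be sketched as a thin wrapper. This is a minimal illustration, not any specific agent framework's API; the `ProposedAction` type, the `execute` gate, and the email example are all hypothetical names invented for the sketch:

```python
# Minimal human-in-the-loop gate: any side-effecting action an agent
# proposes must be explicitly approved before it runs; reversible
# (read-only or undoable) actions are allowed through automatically.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    description: str          # what the agent wants to do, shown to the human
    run: Callable[[], None]   # the side effect itself, deferred until approval
    reversible: bool = False  # reversible actions may skip confirmation

def execute(action: ProposedAction, confirm: Callable[[str], bool]) -> bool:
    """Run the action only if it is reversible or a human approves it."""
    if action.reversible or confirm(action.description):
        action.run()
        return True
    return False

# Usage: drafting is free, sending needs a yes from the human.
sent = []
send_email = ProposedAction(
    description="Send email 'Q3 invoice' to a client",
    run=lambda: sent.append("Q3 invoice"),
)
approved = execute(send_email, confirm=lambda desc: False)  # human said no
# approved is False and nothing was sent
```

The point of the pattern is that the agent never holds the send button: it can only return a `ProposedAction`, and the side effect stays inert until a human (or an auto-approve rule for reversible work) fires it.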

Nika

@mykola_kondratiuk Will you be launching any of your agents publicly in the future?