How much do you trust AI agents?
With the advent of clawdbots, it's as if we've all lost our inhibitions and "put our lives completely in their hands."
I'm all for delegating work, but not giving them too much personal/sensitive stuff to handle.
I certainly wouldn't trust something to the extent of providing:
access to personal finances and operations (maybe just setting aside an amount I'm willing to lose)
sensitive health and biometric information (can be easily misused)
confidential communication with key people (secret is secret)
Are there any tasks you wouldn't give AI agents or data you wouldn't allow them to access? What would that be?
Re: finances – yesterday I read this news: Sapiom raises $15M to help AI agents buy their own tech tools – so this may be a new era in which funding goes to agents rather than to founders.


Replies
When it comes to accessing personal data, I'm not keen. For coding tasks where it's isolated from my system, I'm happy to work with it. I think it will take a long time to build trust with AI. We've seen it at its very early stages of development and it's still massively error-prone.
minimalist phone: creating folders
@liam_oscarlena You know... but AI is becoming normalised; indeed, it has been publicly available for three years now. I think that more and more people will give data voluntarily and this will become the norm.
Trust is an architecture question, not a settings question.
If your AI sends queries to a cloud server, you are trusting that company's privacy policy. If it runs on your own hardware, there is nothing to trust. The data never leaves.
I have been building a local AI research tool for exactly this reason. Professionals handling confidential information cannot afford to trust cloud AI with client data. A federal judge recently ruled that using cloud AI tools can destroy attorney-client privilege.
The approach I took: everything runs on the user's own device. When it needs to search the web, the user sees and approves every query before it leaves. Sensitive names and details are stripped automatically. The answer comes back with citations from real sources.
Launching on Product Hunt March 6th
@danishlynx Cool, then let me know on the launch day :)
To me, trust is calibrated reliance. I trust AI agents for assistance, not authority.
They’re reliable for speeding up work and handling structured tasks — but for high-stakes decisions, they still need human oversight.
PS : The above response is from ChatGPT 🤣
@indu_thangamuthu I noticed because of the dashes :D
@busmark_w_nika Using em dashes was once considered a highly professional touch.
Now it has degraded to "Huhhhh.... ChatGPT" 🤣
@indu_thangamuthu But not such long dashes :D
@busmark_w_nika 🤣 ChatGPT made a mistake. Failed to respond like a human
Trust depends on the type of decision. For analysis, research, and pattern recognition, I trust AI more than most humans. It doesn't have ego or confirmation bias. But for decisions that require judgment about people – hiring, partnerships, investor relations – full autonomy is a mistake. The best setup isn't "AI does everything" or "AI does nothing." It's AI that challenges your thinking and then lets you decide. The problem with most AI agent products right now is they skip the challenge part and go straight to execution.
@spunchev This is an interesting POV – hard data usage vs. soft data usage. That could be another topic to discuss. Finance is hard data too... but I wouldn't give access to it anyway.
@busmark_w_nika Exactly right. And the real problem isn't access to hard data – it's having someone challenge your interpretation of it. That's where most founders get stuck.
Do you find they trust the numbers too much, or not enough?
@spunchev I think that most of the time it presents hard data, but only uses soft clauses to sound more human :D
Honestly? I trust AI agents about as much as I trust a new employee on their first day — they need supervision.
The problem is most teams deploy agents with zero monitoring. I saw a company lose $47K in a weekend because their support agent started approving refunds it wasn't authorized to give. No one noticed until Monday.
That's actually why I built AgentShield — it monitors every AI agent response in real-time and alerts you when something looks risky (unauthorized promises, hallucinated pricing, compliance violations).
The short answer to the question: you can trust AI agents in production, but only if you're watching them. Same way you'd trust any system — with observability.
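As a toy illustration of the monitoring idea above (not AgentShield's actual implementation; these rules are made up), a rule-based checker over agent replies might look like:

```python
import re

# Hypothetical risk rules: each names a category of reply that should
# be flagged for human review before anyone relies on it.
RISK_RULES = {
    "refund_promise": re.compile(r"\b(refund|reimburse)\b", re.IGNORECASE),
    "price_quote":    re.compile(r"\$\d+"),
    "guarantee":      re.compile(r"\bguarantee\b", re.IGNORECASE),
}

def flag_risks(agent_reply: str) -> list[str]:
    """Return the names of every rule the reply trips, for a human to review."""
    return [name for name, pattern in RISK_RULES.items() if pattern.search(agent_reply)]

print(flag_risks("Sure, I can refund you $500 right away!"))
```

A real system would layer classifiers on top of rules like these, but even this much would have caught the unauthorized-refund scenario over the weekend instead of on Monday.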
@jairo_junior This was the best parallel in the whole thread :D Framed it accurately.
@busmark_w_nika Thanks Nika! That "new employee on day one" framing is honestly how I think about it every day building AgentShield — the whole product is built around that idea. If you're curious: useagentshield.com
Honestly, anything irreversible or deeply dependent on human judgment, like investment choices beyond low-risk experiments or health interventions tied to my biomarkers. AI lacks real empathy and accountability: it might optimize a portfolio on historical data, but it can't grasp a risk tolerance shaped by a bad life experience. I'd never feed it full financial access or therapy-level emotional data.
@swati_paliwal reading this, I have a feeling I am oversharing lol
@swati_paliwal I try to be skeptical too, but sometimes I am fooled :D
The distinction I keep coming back to is read vs. write access, and reversible vs. irreversible actions.
I'm comfortable letting AI read almost anything — it needs context to be useful. What I'm careful about is what it can do with that context. Reading a medical bill is fine. Autonomously disputing a claim on my behalf is a different matter.
Building in the document space, I've landed on a model where AI suggests and humans confirm — every action is a one-click approval, never an autopilot. That's not a limitation, it's actually the right UX: the AI does the cognitive work of reading and understanding, and I stay in control of what happens next.
The trust question isn't really about AI — it's about the design of the human-AI loop. In anything I build, the AI does the cognitive heavy lifting: reading, understanding, extracting meaning. But nothing happens until I say so. Suggestion without action is a very different thing from autonomy.
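The suggest-then-confirm loop described above can be sketched in a few lines; all names here are hypothetical, just to show the shape of the pattern:

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative sketch of "AI suggests, human confirms": the AI proposes
# an action, but nothing executes until a person explicitly approves it.

@dataclass
class Suggestion:
    description: str            # what the AI wants to do, in plain words
    execute: Callable[[], str]  # the deferred action; runs only on approval

def run_with_confirmation(suggestion: Suggestion,
                          confirm: Callable[[str], bool]) -> str:
    """Execute the suggestion only if the human confirms; otherwise skip it."""
    if confirm(suggestion.description):
        return suggestion.execute()
    return "skipped"

# Example: dispute a charge only after a one-click confirmation.
dispute = Suggestion(
    description="Dispute the $120 charge on the medical bill",
    execute=lambda: "dispute filed",
)
print(run_with_confirmation(dispute, confirm=lambda desc: True))
```

The point of deferring `execute` behind a callable is that the cognitive work (drafting the dispute) can happen up front, while the side effect stays gated on the human.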
@henrikpedersen Yeah, but in that case we are talking about passive vs. active data management. I am okay with AI reading something and giving me suggestions, but when it actively does things in my name, that's a no-no.
@busmark_w_nika Exactly — passive vs active is a cleaner way to put it. The moment AI acts as you rather than for you, the trust equation changes completely.
A human in the loop would be the better idea: the human reviews/verifies the agent's steps and lets the agent do the rest of the work.
@rajkumar_001 This would be cool! 👆
Lead gen, mostly, and automatic replies. Can't fully trust it with money tho... there's news here that some Claude bot bought an entire course just to serve its master useful information regarding what he's looking for.
@kilopolki Damn, I would go crazy if it used my money like that. 😂
@kilopolki This? :D https://www.instagram.com/p/DUL0RCLFEvv/
Isn't Clawd just like Cowork? I've only been mildly impressed with agents. One goal of any founder is to find people to trust their reputation to and let those people grow and make mistakes with your name on the door. Finding the right people is make or break.
Finding an AI agent is kinda the same thing. You're trusting it with your name/brand and resources. So far I can't say I've been impressed beyond entry level. I'd rather find someone who can truly reason and knows how to get AI to do some grunt work.
@tinyorgtech Yes, but let's say that an AI agent is capable of doing anything to deliver what you want. It can be like a very proactive idiot who doesn't mind getting it done by any means (including ways you wouldn't like). Here's the example: https://www.instagram.com/p/DUL0RCLFEvv/
@busmark_w_nika I would want to fire that agent. $3000 in training classes. I mean its next predictive response must have sent it there and with payment processing available it goes nuts. Appreciate that user taking a hit for science!
@tinyorgtech TBH, when it comes to payments, I would require an AI agent to confirm it with me first.