Nika

How much do you trust AI agents?

With the advent of clawdbots, it's as if we've all lost our inhibitions and "put our lives completely in their hands."

I'm all for delegating work, but not giving them too much personal/sensitive stuff to handle.

I certainly wouldn't trust an agent to the point of giving it:

  • access to personal finances and operations (maybe just setting aside an amount I'm willing to lose)

  • sensitive health and biometric information (can be easily misused)

  • confidential communication with key people (secret is secret)

Are there any tasks you wouldn't give AI agents or data you wouldn't allow them to access? What would that be?

Re. finances – yesterday I read this news: Sapiom raises $15M to help AI agents buy their own tech tools. So we may be entering an era in which funding goes to agents rather than to founders.

3.2K views


Replies

Jayotis Diggory

Define AI agent. I put together an offline AI; does that count, or are you saying trust the AI companies? If I had the choice I would not use AI companies and would just use my own, but it's expensive and complex, and the data centres have the convenience. If you ask me, we should build a decentralized AI; then we could each buy a personal AI node instead of relying on a data center.

Nika

@jayotis 

Having your own or a local solution would be the ideal scenario. But yeah, it would require large amounts of high-quality data, significant computing power, access to APIs and infrastructure, money etc. As you said, it is not cheap :D

Jayotis Diggory

@busmark_w_nika :) My isolated AI worked but was slow as molasses and needed careful prompting, so careful that I just coded the task myself in the end. I didn't have a GPU with AI drivers, so I was stuck with CPU only. It looks like AI graphics cards with 32GB of VRAM are $1,000 and rising (I suspect market manipulation to stop us from doing this), so that is what I meant by expensive.
The decentralized AI would not be expensive. The models are shared and compiled p2p, and we share CPU/GPU time (think SETI, but for AI) when we are not using our personal AI. The hardware itself could be a Raspberry Pi in your basement, so not expensive. When we do use it, we can use other people's idle CPU/GPU, so it's like you have better hardware.
The numbers you are seeing for AI infrastructure are a false indication of the true cost, because it is all primitive tech. We are seeing the Hit & Miss engine of AI, basically. All that processing power and RAM/storage is there because the companies are brute-forcing a solution, rushing to market, and wasting absurd amounts of energy and CPU time to make it look good and usable so someone will subscribe.
Long story short, you are right that relying on AI data centers is not the way; it is supposed to be a personal experience.

Harrie Vermeulen

At work we stay away from agents and too much automation; we have a very strict security policy in place, and with reason. I also think it is still too far away for basic users. Currently it's really at a stage where it helps developers, but we should be focusing on making tools that help people: offline agents, not hooked up to anything, that help the elderly with medication and exercise reminders, help children with learning disabilities get up to speed, set up daily plans, do groceries. Not everything has to be online and hooked up to heavy AI. Big data and the Internet of Things have been topics for decades, and we're still not at the level we ought to be.

Nika

@harrie_vermeulen In what industry do you work when you cannot use AI or agents?

Harrie Vermeulen

@busmark_w_nika Just to be sure: we use AI a lot, but only the secured versions. Agents changing sensitive content is a no-go; there are too many validation flows in the processes, making it too sensitive for agents to take over fully.

Donkey

When it comes to accessing personal data, I'm not keen. For coding, where it's isolated from my system, I'm happy to work with it. I think it will take a long time to build trust with AI. We've seen it at its very early stages of development, and it's still massively error-prone.

Nika

@liam_oscarlena You know... AI is becoming normalised; it has been public for 3 years now. I think more and more people will give their data voluntarily, and this will become the norm.

Danish Ali

Trust is an architecture question, not a settings question.

If your AI sends queries to a cloud server, you are trusting that company's privacy policy. If it runs on your own hardware, there is nothing to trust. The data never leaves.

I have been building a local AI research tool for exactly this reason. Professionals handling confidential information cannot afford to trust cloud AI with client data. A federal judge recently ruled that using cloud AI tools can destroy attorney-client privilege.

The approach I took: everything runs on the user's own device. When it needs to search the web, the user sees and approves every query before it leaves. Sensitive names and details are stripped automatically. The answer comes back with citations from real sources.
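
That stripping step could look something like this minimal sketch; the patterns, the name list, and the `approve_and_send` helper are my illustrative assumptions, not the tool's actual implementation:

```python
import re

# User-maintained list of sensitive names to strip before any query
# leaves the device. Contents here are purely illustrative.
SENSITIVE_NAMES = {"Alice Johnson", "Acme Corp"}

def redact(query: str) -> str:
    """Strip known names and obvious identifiers from an outgoing query."""
    for name in SENSITIVE_NAMES:
        query = query.replace(name, "[REDACTED]")
    # Crude email/phone patterns as examples of automatic stripping.
    query = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", query)
    query = re.sub(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b", "[PHONE]", query)
    return query

def approve_and_send(query: str, send):
    """Show the redacted query to the user; only send on explicit approval."""
    cleaned = redact(query)
    answer = input(f"Send this query? -> {cleaned!r} [y/N] ")
    return send(cleaned) if answer.lower() == "y" else None
```

The key property is ordering: redaction happens first, and the user approves the already-cleaned text, so nothing sensitive can slip out even if the approval step is rushed.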

Launching on Product Hunt March 6th

Nika

@danishlynx Cool, then let me know on the launch day :)

Indu Thangamuthu

To me, trust is calibrated reliance. I trust AI agents for assistance, not authority.

They’re reliable for speeding up work and handling structured tasks — but for high-stakes decisions, they still need human oversight.

PS : The above response is from ChatGPT 🤣

Nika

@indu_thangamuthu I noticed because of the dashes :D

Indu Thangamuthu

@busmark_w_nika Using em dashes was once considered a highly professional act.
Now it has degraded to "Huhhhh.... ChatGPT" 🤣

Nika

@indu_thangamuthu But not such long dashes :D

Indu Thangamuthu

@busmark_w_nika 🤣 ChatGPT made a mistake. It failed to respond like a human.

Serge Punchev

Trust depends on the type of decision. For analysis, research, and pattern recognition, I trust AI more than most humans. It doesn't have ego or confirmation bias. But for decisions that require judgment about people (hiring, partnerships, investor relations), full autonomy is a mistake. The best setup isn't "AI does everything" or "AI does nothing." It's AI that challenges your thinking and then lets you decide. The problem with most AI agent products right now is that they skip the challenge part and go straight to execution.

Nika

@spunchev This is an interesting POV – hard data usage vs. soft data usage. That could be another topic to discuss. But finance is hard data too... and I wouldn't give access to it anyway.

Serge Punchev

@busmark_w_nika Exactly right. And the real problem isn't access to hard data - it's having someone challenge your interpretation of it. That's where most founders get stuck.

Do you find they trust the numbers too much, or not enough?

Nika

@spunchev I think that most of the time it presents hard data, but only uses soft clauses to sound more human :D

Jairo Junior

Honestly? I trust AI agents about as much as I trust a new employee on their first day — they need supervision.

The problem is most teams deploy agents with zero monitoring. I saw a company lose $47K in a weekend because their support agent started approving refunds it wasn't authorized to give. No one noticed until Monday.

That's actually why I built AgentShield — it monitors every AI agent response in real-time and alerts you when something looks risky (unauthorized promises, hallucinated pricing, compliance violations).

The short answer to the question: you can trust AI agents in production, but only if you're watching them. Same way you'd trust any system — with observability.
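
That kind of watching can start very simply. Here is a hedged sketch of a rule-based monitor for agent responses; the rule names, patterns, and `Monitor` class are my assumptions for illustration, not any real product's logic:

```python
import re
from dataclasses import dataclass, field

# Illustrative risk rules: each pairs a label with a pattern that,
# when matched in an agent's response, should page a human.
RISK_RULES = [
    ("unauthorized_refund", re.compile(r"\brefund\b.*\bapproved\b", re.I)),
    ("pricing_claim", re.compile(r"\$\d[\d,]*(\.\d+)?")),
    ("legal_promise", re.compile(r"\bwe guarantee\b", re.I)),
]

@dataclass
class Monitor:
    alerts: list = field(default_factory=list)

    def check(self, agent_id: str, response: str) -> list:
        """Return the rule names this response trips, and record an alert."""
        hits = [name for name, pat in RISK_RULES if pat.search(response)]
        if hits:
            self.alerts.append((agent_id, hits, response))
        return hits
```

In practice you'd route `alerts` to a pager or dashboard instead of a list, but the shape is the same: every response passes through the monitor before (or immediately after) it reaches a customer, so a misbehaving agent is noticed in minutes, not on Monday.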

Nika

@jairo_junior This was the best parallel in the whole thread :D Framed it accurately.

Jairo Junior

@busmark_w_nika Thanks Nika! That "new employee on day one" framing is honestly how I think about it every day building AgentShield — the whole product is built around that idea. If you're curious: useagentshield.com

Kanishk Saraswat

There's a useful mental model I use: treat AI agents the same way you'd treat a new intern on their first week.

You wouldn't hand them your bank credentials or let them send emails on your behalf unsupervised. But you'd absolutely let them research, draft, summarize, and prep things for your review.

The trust ceiling goes up as you observe their behavior in lower-stakes situations first. Same with agents.

Where I draw the hard line:

- Anything involving financial transactions or credentials

- Communications that go out directly to customers or key relationships

- Decisions that are hard or impossible to reverse

Where I let them run freely:

- Research and synthesis

- First drafts of content

- Internal data processing and summarization

- Repetitive dev tasks with human review at the end

The real unlock is building workflows where agents operate in sandboxed, reversible steps and a human approves before anything consequential happens. That way you get the speed benefits without the catastrophic failure risk.
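
That workflow fits in a few lines. A minimal sketch of the approve-before-anything-consequential pattern; the step names and the approver callback are hypothetical, just to show the shape:

```python
from dataclasses import dataclass
from typing import Callable

# Sketch of an approval gate: the agent runs sandboxed steps freely,
# but any step marked consequential needs an explicit human yes first.
@dataclass
class Step:
    name: str
    run: Callable[[], str]
    consequential: bool = False  # irreversible / customer-facing

def execute(steps, approver) -> list:
    """Run sandboxed steps directly; gate consequential ones on approval."""
    log = []
    for step in steps:
        if step.consequential and not approver(step):
            log.append(f"skipped (not approved): {step.name}")
            continue
        log.append(f"done: {step.name} -> {step.run()}")
    return log
```

For example, `execute([Step("research", ...), Step("send email", ..., consequential=True)], approver=ask_human)` lets research run at full speed while the outbound email waits for a click; the speed benefit survives, the catastrophic-failure path doesn't.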

The trust isn't really about the AI itself, it's about the system design around it.

Nika

@kanishk_saraswat I think we are on the same page/mindset here :)

swati paliwal

Honestly, anything irreversible or deeply dependent on human judgment, like investment choices beyond low-risk experiments, or health interventions tied to my biomarkers. AI lacks real empathy and accountability: it might optimize a portfolio on historical data, but it can't grasp the risk tolerance I formed from a bad life experience. I'd never feed it full financial access or therapy-level emotional data.

Nika

@swati_paliwal reading this, I have a feeling I am oversharing lol

swati paliwal

@busmark_w_nika I am in general a slightly skeptical person 🫣 and AI fascinates me, but it scares me too!

Nika

@swati_paliwal I try to be skeptical too, but sometimes I am fooled :D

Henrik Pedersen

The distinction I keep coming back to is read vs. write access, and reversible vs. irreversible actions.

I'm comfortable letting AI read almost anything — it needs context to be useful. What I'm careful about is what it can do with that context. Reading a medical bill is fine. Autonomously disputing a claim on my behalf is a different matter.

Building in the document space, I've landed on a model where AI suggests and humans confirm — every action is a one-click approval, never an autopilot. That's not a limitation, it's actually the right UX: the AI does the cognitive work of reading and understanding, and I stay in control of what happens next.

The trust question isn't really about AI — it's about the design of the human-AI loop. In anything I build, the AI does the cognitive heavy lifting: reading, understanding, extracting meaning. But nothing happens until I say so. Suggestion without action is a very different thing from autonomy.

Nika

@henrikpedersen Yeah, but in that case we are talking about passive vs. active management of the data. I am okay with AI reading something and giving me suggestions, but when it actively does things in my name, that's a no-no.

Henrik Pedersen

@busmark_w_nika Exactly — passive vs active is a cleaner way to put it. The moment AI acts as you rather than for you, the trust equation changes completely.