Nika

How much do you trust AI agents?

With the advent of clawdbots, it's as if we've all lost our inhibitions and "put our lives completely in their hands."

I'm all for delegating work, but not giving them too much personal/sensitive stuff to handle.

I certainly wouldn't trust one to the extent of providing:

  • access to personal finances and operations (maybe just setting aside an amount I'm willing to lose)

  • sensitive health and biometric information (can be easily misused)

  • confidential communication with key people (secret is secret)

Are there any tasks you wouldn't give AI agents or data you wouldn't allow them to access? What would that be?

Re: finances – yesterday I read this news: Sapiom raises $15M to help AI agents buy their own tech tools – so we may be entering an era in which funds go to agents rather than to founders.


Replies

Vitalii Baranov

I would never grant an AI agent:

Final Legal Authority: An algorithm can’t be held accountable for a signed contract. Until there’s a legal framework for 'AI responsibility,' I’m keeping the pen.

Private Emotional Communication: Delegating sensitive talks with key people or loved ones is the fastest way to erode trust. Some things must remain human-to-human.

Uncapped Financial Access: Even with the Sapiom model, I’d only 'fund' an agent with a strict 'willing to lose' limit. A hallucination in a transaction could be a disaster.
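That hard "willing to lose" limit can be made mechanical rather than a matter of trust. A minimal Python sketch of the idea (the class and method names here are illustrative, not any real agent-payments API):

```python
from dataclasses import dataclass


@dataclass
class CappedWallet:
    """A hypothetical 'willing to lose' budget for an agent.

    The agent never sees the raw account; every purchase passes
    through this gate, which enforces a hard cap.
    """
    limit: float
    spent: float = 0.0

    def authorize(self, amount: float) -> bool:
        """Approve a purchase only if it fits under the cap."""
        if amount <= 0:
            raise ValueError("amount must be positive")
        if self.spent + amount > self.limit:
            return False  # reject, rather than trust the agent's judgment
        self.spent += amount
        return True


wallet = CappedWallet(limit=50.0)
print(wallet.authorize(30.0))  # True: within budget
print(wallet.authorize(30.0))  # False: would blow past the cap
```

The point of the design is that a hallucinated transaction fails closed: the worst case is bounded by the cap, not by the account balance.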

Landon Reid

As someone building AI agents that make real decisions about property data (zoning compliance, flood risk, buildability), trust comes down to one thing: can you verify the output?

At ReadyPermit, we designed our AI to always cite the source -- the specific municipal code, the FEMA flood map panel, the parcel data. The agent does the heavy lifting of research, but every conclusion links back to verifiable government data.
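ReadyPermit's internals aren't public, but the "every conclusion links back to a source" pattern described above can be sketched in a few lines of Python (the types and example source strings are hypothetical):

```python
from dataclasses import dataclass, field


@dataclass
class Finding:
    """An agent conclusion that must carry its own evidence."""
    claim: str
    sources: list = field(default_factory=list)  # e.g. code sections, map panels


def verifiable(findings):
    """Drop any conclusion that doesn't cite at least one source."""
    return [f for f in findings if f.sources]


results = [
    Finding("Parcel is outside the floodway",
            sources=["FEMA flood map panel (example id)"]),
    Finding("Setback is 25 ft", sources=[]),  # uncited: filtered out
]
print([f.claim for f in verifiable(results)])
```

Structuring output this way makes the human-review step cheap: the reviewer checks citations instead of re-doing the research.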

I trust AI agents for:

- Research and synthesis (pulling from 100+ data sources faster than any human)

- Pattern recognition across large datasets

- First-pass analysis and recommendations

I don't trust them for:

- Final decisions without human review on high-stakes outcomes

- Anything involving legal liability without source verification

- Creative judgment calls that require local context

The key is building AI systems where the human stays in the loop on decisions that matter, while letting the agent handle the 90% of work that's pure data processing.

Landon Reid

Trust = output quality x transparency. I let AI agents handle research, code, and data analysis all day. But I'd never let one send an email or make a financial decision without my review. The best AI agents make you faster, not autonomous. The moment you stop checking the output is the moment you get burned.

Himanshi Chandel

I trust AI agents for efficiency and data tasks, but I verify critical decisions, as human judgment remains essential for accuracy and reliability.

Felipe Daguila

Great topic. I've been particularly focused on this topic lately.

I will not give an agent: 1) writing a love letter for my wife :) and 2) anything mission-critical that has no human oversight, such as financial data, taxes, work-related confidential documents, and health information.

Thomas Hansen

Unless you know a lot about security, opening up Claw outside of your home, even over iMessage, is madness. Psst, you can buy masqueraded phone numbers in some countries ...

Kevin Xu

The shift toward autonomous spending is definitely a "crossing the Rubicon" moment for AI. While Sapiom's raise shows the tech is ready, I still struggle with the lack of ethical accountability—if an agent makes a disastrous financial pivot, you can't exactly sit it down for a performance review.

For me, the hard line is long-term relationship management. I’d never let an agent handle delicate "human-in-the-loop" communications where tone and empathy are 90% of the value. Do you think we’ll eventually see a "verified human" badge for communications to counter this?

Monk Mode

Trust depends entirely on what the tool is doing with your data. I built a Mac menu bar app (TokenBar) that tracks AI spending across providers, and the single most important design decision was making it fully local. No cloud, no accounts, no data leaving your machine. Your API keys and usage data stay on your Mac.

I think a lot of AI tools get trust wrong by defaulting to cloud-first when they do not need to. If something can run locally, it should. Users should not have to trust a random startup with their API keys or usage patterns just to get a simple utility.

For AI agents specifically, I trust them for well-scoped tasks where I can review the output before it ships. I do not trust them for anything irreversible without a human checkpoint. The same way I would not give a new employee full admin access on day one.
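The "human checkpoint before anything irreversible" rule above reduces to a small gate. A minimal sketch, assuming a caller-supplied approval callback (all names hypothetical):

```python
def run_action(action: str, irreversible: bool, approve) -> str:
    """Execute reversible work freely; gate irreversible work
    behind a human approval callback."""
    if irreversible and not approve(action):
        return "blocked: needs human sign-off"
    return f"executed: {action}"


# Reversible work proceeds; irreversible work waits for a human.
print(run_action("draft summary", irreversible=False, approve=lambda a: False))
print(run_action("send wire transfer", irreversible=True, approve=lambda a: False))
```

In a real system the callback would be a review queue or an approval UI; the invariant is simply that no irreversible path exists without it.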

Christina Nguyen

Like a lot of people here, there's no way in hell I'm letting AI touch anything personal. I only use Claude Cowork to help me put together research for Retrocodex, and I end up verifying it myself anyway. I review everything it gives me and only publish it if I like it. So the site right now is a mix of content I found on my own and content Claude surfaced that I then researched myself.

Claude has given me plenty of info I might not have found so quickly otherwise, but it hasn't been perfect. I've gotten some wrong info, plenty of expired links, and not-so-great sources, even after explicitly telling it to find the most academic, trustworthy sources possible.