How much do you trust AI agents?
With the advent of clawdbots, it's as if we've all lost our inhibitions and "put our lives completely in their hands."
I'm all for delegating work, but not giving them too much personal/sensitive stuff to handle.
I certainly wouldn't trust something to the extent of providing:
access to personal finances and operations (maybe just setting aside an amount I'm willing to lose)
sensitive health and biometric information (can be easily misused)
confidential communication with key people (secret is secret)
Are there any tasks you wouldn't give AI agents or data you wouldn't allow them to access? What would that be?
Re. finances – Yesterday I read this news: Sapiom raises $15M to help AI agents buy their own tech tools – so this may mark a new era in which funds go to agents rather than to founders.


Replies
Honestly
It depends on the context, but they should always be double checked.
When it comes to scraping & data retrieval tasks, they usually perform fairly well.
On the other hand, for creating proper marketing materials, you have to do extensive checking & adjusting to get the result you want.
This also plays a role when trying to determine whether an AI agent has produced something real or fake: once outputs are tweaked significantly, they can be extremely deceptive.
This is something I'm tackling in the shopping space with my launch today - we are currently ranked #4 on PH!
minimalist phone: creating folders
@scott_davidson_jr I liked the launch tbh :)
About the same level of trust I have when I leave my 3 teenage sons at home for the weekend.
minimalist phone: creating folders
@couldashouldawoulda :D Teenage vs AI agents. I would trust AI agents more in this case :D I know teenagers :D
I trust AI agents for routine tasks and productivity, but not for sensitive decisions or personal data; human judgment and verification are still essential for reliability and safety.
minimalist phone: creating folders
@bangalore_packers_and_movers I perceive it in the same way :)
I kind of come at this a little differently. The term "AI agent" can mean different things to different people. Yes, there are lots of people going all in on OpenClaw. It's definitely not ready for average human use. You have to really be savvy to use it safely. Most people won't take care, and will yolo and maybe regret.
What has changed fundamentally in my everyday life is that I use AI tools interactively all day and sometimes turn them into repeatable tasks once I've honed the skill and trust it. I give Claude access to much of my work and personal data through well-governed and scoped MCPs I control, where I can log activity. The main thing I'm trusting is that Anthropic isn't slurping my data, and I make sure I've configured Claude to restrict web search and don't enable their stock MCPs.
Once you start prompting your way through daily work or life tasks, it's a productivity drug. Literally hours of work reduced to nothing. Once you trust the results and turn them into scheduled activities, you can't imagine going back. I recommend that everyone I work with start with AI through chat tools: figure out how to prompt better, establish guardrails, save off skills. It's a necessary skill in 2026 and preps you for what's coming down the pipe as the tools get better.
minimalist phone: creating folders
@clippi It would be useful in that case to find a good way or process to track and control the work of those agents effectively, so that there is no significant irreversible harm. :)
@busmark_w_nika for sure. trust builds from usage and expands with the visibility of tracking what the AI has done. indeed.
Honestly, I'm a bit concerned about the idea of a black box. I'd prefer it to be more transparent, of course, as long as the content doesn't end up feeling like spam.
minimalist phone: creating folders
@remi_kyrian Transparency is good but not enough tbh, I require a more controlled environment.
It really depends on the domain for me. I work in HR tech and we use AI agents to filter noise and surface patterns that humans would totally miss. But the final call on a candidate? That still needs a person who understands context, culture fit, and empathy. Trust builds when the agent is transparent about what it did and why, not when it just hands you an answer. The biggest risk isn't the AI being wrong, it's people not questioning the output.
minimalist phone: creating folders
@ceciliatran this is interesting – how do you use AI for your work and how do you see candidates using it? Are they relying too much on AI?
@busmark_w_nika I use AI every day. I've trained my Claude on my tone of voice and it knows about my company and what we stand for, so a lot of admin work is eliminated.
As for candidates, I don't think using AI is a bad thing at all. I actually see it as a positive signal when someone knows how to leverage it well. But they should be mindful about making the output their own. If your resume or cover letter is full of em-dashes and words like "spearheaded" or "busywork," it's immediately obvious it came straight from ChatGPT. The suggestion would be to take the time to adjust it so it sounds like you, not like a prompt.
Photos and finances are my hard no. Happy to delegate tasks, but not trust AI with anything deeply personal or financially sensitive.
minimalist phone: creating folders
@hitesh55 Mentioning photos is new in this discussion. Why is that?
I run 10+ autonomous agents daily for PM workflows. The trust framework I've landed on: read access is liberal, write access is reviewable, anything touching external systems or money is approval-gated. The paranoia goes away once the boundary is clear.
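The tiered policy above (liberal reads, reviewable writes, approval-gated external actions) can be sketched roughly like this; all names here (`ToolCall`, `TrustGate`, the category labels) are illustrative assumptions, not an actual agent framework's API:

```python
# Hypothetical sketch of a tiered trust gate for agent tool calls:
# reads pass through, writes are logged for later review, and anything
# touching external systems or money requires explicit approval.
from dataclasses import dataclass, field

READ, WRITE, EXTERNAL = "read", "write", "external"

@dataclass
class ToolCall:
    name: str
    category: str  # READ, WRITE, or EXTERNAL

@dataclass
class TrustGate:
    review_log: list = field(default_factory=list)

    def allow(self, call: ToolCall, approved: bool = False) -> bool:
        if call.category == READ:
            return True                        # liberal read access
        if call.category == WRITE:
            self.review_log.append(call.name)  # reviewable after the fact
            return True
        return approved                        # external/money: approval-gated

gate = TrustGate()
assert gate.allow(ToolCall("fetch_tickets", READ))
assert gate.allow(ToolCall("update_doc", WRITE))
assert not gate.allow(ToolCall("send_payment", EXTERNAL))
```

The point of the clear boundary is exactly what the comment says: once the categories are explicit, the paranoia goes away because you know which calls can run unattended.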
minimalist phone: creating folders
@mykola_kondratiuk How much do you pay for running it? :D
@busmark_w_nika they are all running on $200 Claude subscriptions. And this is the beauty. One subscription to chat in Claude, use Claude Code, and run agents based on Sonnet and Opus :)
Working in HR tech, I think about this constantly. We use AI agents to surface candidates and sort through messy data, and they are great at it. But the moment a decision impacts someone's career, I want a human making that call. For me, the trust line is not about capability; it is about the cost of getting it wrong. High stakes = human in the loop, always.
minimalist phone: creating folders
@ceciliatran do you use AI also for picking the right candidate for open role?
@busmark_w_nika we use AI to evaluate the profiles and show the matching rates based on the recruiter's requirements, but we don't automatically pick the best ones, that's up to the recruiter!
minimalist phone: creating folders
@ceciliatran BTW, I once heard how applicants are picked: e.g. when there are 500 applicants, the recruiter just grabs the first 20 CVs and moves on with those; the rest are ignored. Is it really like that? :D
We build tools with AI baked in but made the decision early on to keep everything 100% local. No cloud processing, no telemetry. For our users that's been the biggest trust signal - knowing their data never leaves their machine.
minimalist phone: creating folders
@jo_public I want to build it the same way, though it has a downside: after reinstalling, there is no easy way to get the data back.
That's a fair point. We built a 30-day quarantine system for exactly this reason - nothing gets permanently deleted, it all sits in a restorable vault first. But you're right that local-only means the user is responsible for their own backups. Trade-off worth making in our case though - our users would rather own that risk than hand their files to a cloud service.
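A "quarantine before delete" scheme like the 30-day restorable vault described above could be sketched along these lines; the function names, vault layout, and timestamp-prefix convention are all assumptions for illustration, not the actual product's implementation:

```python
# Hypothetical quarantine vault: instead of deleting a file outright,
# move it into a vault under a timestamped name so it can be restored
# within the quarantine window, and only purge entries after 30 days.
import shutil
import time
from pathlib import Path

QUARANTINE_DAYS = 30

def quarantine(path: Path, vault: Path) -> Path:
    """Move a file into the vault instead of deleting it."""
    vault.mkdir(parents=True, exist_ok=True)
    target = vault / f"{int(time.time())}_{path.name}"
    shutil.move(str(path), str(target))
    return target

def restore(entry: Path, dest: Path) -> None:
    """Bring a quarantined file back under its original name."""
    original_name = entry.name.split("_", 1)[1]
    shutil.move(str(entry), str(dest / original_name))

def purge_expired(vault: Path, now=None) -> None:
    """Permanently delete entries older than the quarantine window."""
    now = now or time.time()
    for entry in vault.iterdir():
        ts = int(entry.name.split("_", 1)[0])
        if now - ts > QUARANTINE_DAYS * 86400:
            entry.unlink()
```

Everything stays on the local disk, which preserves the "data never leaves the machine" property; the trade-off is that the user still owns backups, as noted above.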
minimalist phone: creating folders
@jo_public but what about daily activity? Because my tool counts daily activity so people can keep their social media use within their limits. Is there any technical solution for storing this data?
@busmark_w_nika Yeah, sorry, I was so wrapped up in setting up K8 stuff yesterday that I was only thinking and explaining why I decided that K8 would not be phoning home to the AI data mothership to process our data 🤣 Sorry... my bad 😇
But it felt more appealing and reassuring to have a tool that didn't upload all your files to the AI companies to process on their servers, as there is no such thing as data privacy when you use LLMs, in my experience. It's a trade-off. Don't get me wrong, I've made that trade-off. I'm not anti-AI, I use it all day every day 🤣🥂 But I'm not blind to it either. Until we all get local AI, the tech companies own all our data, as it all passes back into the machine and learns from it.
I know, because I tried really hard for nearly two years to secure my work and research on OpenAI, Manus, Gemini, Copilot. I've mainly used Anthropic in the last year, having moved my main serious AI work away from OpenAI because of privacy and intellectual-property theft issues. I felt Anthropic had slightly better morals and better models, which they do. But in the end you can't stop the data bleed. If you interact with AI, the model and the company will learn from your interactions, for good or for bad. We should all be paid for training the AI, all of us who use it regularly, especially for work or health.
Personally, I've gone completely all in on my personal details with my PA Codey, Claude in the terminal. It's easier to get simple admin tasks done if my assistant knows all my details. Also, last year I used OpenAI intensively to help with my couple of months of chemo: meds tracking, appointments, etc. A full-time job without AI. But what I don't give AI is complete autonomy to act on my behalf in the outside world. I contain its access to act autonomously depending on the task.
I wouldn't give it autonomous access to my money, for example. Not because I fear the AI isn't competent to carry out certain tasks, but because I don't trust the lengths people will go to in order to hack and steal, prompt injection etc. I would have loved to dive into Clawd bot, as it was first called. I thought it was Anthropic at first. But I decided to be cautious because of the security risks, as I didn't have the time to dedicate to making it secure and impervious to attack. I'm glad I waited and have come up with my own version and system that I am much more in control of. It helps me enormously because it knows about my work and my personal stuff the way an excellent PA would. But it's not allowed to run about willy-nilly on the web acting on my behalf without my say-so or express instruction. I may at some point, as an experiment, send out an agent with £50 and see what it comes back with 🤣... but in general, for me, I have to be the human in the loop for decisions with real-world consequences.
Sorry that I went so into one, Nika, but it's a topic I've been wrestling with quietly alone for a long time, and it's something I feel quite strongly about. I feel that anyone selling AI and AI products has a responsibility to make them safe for the user. I don't believe that is happening in general at the moment.
Now, rereading your comment, forgive me, I'm not sure I exactly understand what you are collecting and what processing you are doing. Please feel free to give me a little more idea of your problem and I'm happy to try and think of a way you could make it safer. I can't guarantee there is a way, but if anything can be baked in, that's a really good place to start. I hope I haven't upset any rules already by rambling on here 🤣 Feel free to discuss here, or is there a PM on PH or not? Sorry, I'm new here and don't know the rules... it's my age... 😇