Nika

How much do you share with AI? Or more specifically, with ChatGPT?

Recently, I asked how much you trust AI agents.


– When it comes to finances, you said you don’t.
– With health data, you don’t either.

But now I have a different question.

When you talk to ChatGPT, you give it context. You share what’s bothering you. Often, it involves personal relationships, finances, work situations, doubts, and conflicts.

Do you think that’s okay?

For example, I’m not in favour of AI agents managing things on my behalf.
But the other side of the coin is that I share a lot of information with OpenAI.

And honestly?
I would probably be worried if some of that information leaked. :D

So, where is the line between sharing information and building trust?

AI agents may have more autonomy than a chatbot.

But let’s be real... even with large companies, you never have a 100% guarantee that your data is completely safe.

Ryan Tucker

There's always a risk that comes with anything.

It's up to us to determine if the reward outweighs that risk.

I personally tell it a lot as well and have it do things for me. But, I do make sure that my settings are set to "Do not use my data in AI training."

Nika

@ryan_tucker13 Hopefully they respect the decision you checked in that box. Sometimes it feels like it's just pro forma.

Ryan Tucker

@busmark_w_nika haha yea sometimesssss

Nika

@ryan_tucker13 (we all hope for that) :DD

Gianmarco Carrieri

Great follow-up to the trust thread! As someone who both uses AI daily and builds with it (I'm working on Aitinery, an AI travel planner), I think about this from both sides.

As a user: I share way more than I probably should. Code, business ideas, personal frustrations, brainstorming sessions. ChatGPT has basically become my thinking partner. The convenience always wins over the privacy concern in the moment.

As a builder: this question keeps me up at night. When users tell our AI about their travel preferences — budget constraints, health limitations, who they're traveling with — that's genuinely sensitive data. We made a conscious decision to process as little personal data as possible and never store conversation history beyond what's needed for the itinerary.

I think the honest answer is: we all share more than we realize, and the line between "useful context" and "sensitive information" is blurrier than we'd like to admit. The real question isn't whether we should share — it's whether the companies we share with are being responsible with that data. And right now, the answer is... we mostly just hope they are.

Nika

@giammbo I think that sharing our travel plans with AI is still cool. Remember how one guy announced his vacation on social media, so burglars could publicly see he was away from home and took their chance :D oops

Gianmarco Carrieri

Haha, that burglar story is actually the perfect example of what I mean! The difference is: when you post "off to Bali for 2 weeks!" on Instagram, that data is PUBLIC and you're broadcasting it to everyone, including people you don't want to have it.

When you tell an AI "I'm planning a trip to Puglia in September, budget €2000, I have a toddler and my wife is vegetarian" — that's PRIVATE context shared with a specific tool for a specific purpose.

The irony is that people worry about sharing travel plans with AI while simultaneously posting real-time location stories on Instagram. The AI actually needs LESS data to be useful than what we voluntarily blast on social media.

That said, the burglar example raises a real question for builders like me: what happens if someone hacks our database? That's why at Aitinery we anonymize user data to build what we call a "Travel Twin" — an AI profile of your travel style (adventurous vs relaxed, foodie vs cultural, budget vs luxury) WITHOUT storing your personal details. We keep the patterns, not the person. Your Travel Twin knows you prefer seaside towns and eat vegetarian, but it doesn't know your name, your exact dates, or who you're traveling with.

Trust shouldn't be binary. It should be proportional to: what data, shared with whom, stored for how long, and accessible by whom.

Randall Tinfow

No system, human or machine, offers guaranteed confidentiality. Large companies get hacked, have insider issues, face subpoenas for private data, or suffer AI mistakes that expose data.

The DoD (where my son works as a security consultant) measures hacking attempts in the millions per hour.

I've seen my ChatGPT conversations show up in public forums, word for word. I rarely use it any more, in favor of an Anthropic-based agent I've built. I still take the following precautions:

  • Set up the agent with barriers against model training.

  • Anonymize heavily.

  • Delete chats with sensitive content.

  • Avoid pasting docs or screenshots containing PII.

There are a few practical ways to test data leakage:

  • Retention checks

  • Canary tokens

  • Jailbreaks

  • Shared links with fake ID

Details online.
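Canary tokens from the list above are easy to try yourself: plant a unique random string in what you share, then watch public indexes and model outputs for it. A minimal sketch in Python (the token format is made up, not any particular service's):

```python
import uuid

def make_canary() -> str:
    """Generate a unique, searchable marker string. The token is random,
    so any later sighting of it in search results or model output means
    the text it was planted in leaked."""
    return f"canary-{uuid.uuid4().hex[:16]}"

def plant(text: str, token: str) -> str:
    """Embed the token unobtrusively at the end of a prompt or document."""
    return f"{text}\n[ref:{token}]"

def leaked(sample: str, token: str) -> bool:
    """Check whether a piece of text (search hit, model reply)
    contains the planted token."""
    return token in sample

token = make_canary()
doc = plant("Draft travel notes, please summarize.", token)
print(leaked(doc, token))            # prints True: the planted doc contains it
print(leaked("unrelated text", token))  # prints False
```

In practice you would periodically search the token in public indexes, or prompt the model for it later, rather than check a local string.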

Nika

@rtinfow This should be a separate post, thank you for sharing those practical ways. Many people should be reminded about them :)

Melonie Green

I have to say I share a lot with my AI support team. I think there is a line between what you adopt and act out in life based on what is shared/advised, etc. I think there can be a line in terms of giving it account information or outright copy/pasting private codes and access info. But I think it is one of the "safest" places to share what one perceives as private context details. If you truly use it, you know context is everything to getting a really solid reply you can use.

I was not like this at first. But after seeing it as one of my tools, similar to email, which has access to so many things... I had to let go and start getting the best out of the tool. I use my judgement and think about the nature of what I'm sharing, and if I think it's too private, I work around it.

The trust question really sits with the companies who host the agents we use. Similar to Facebook... we users thought we were connecting with friends and family. Pretty soon we learned the true setup and business model behind social media: data, our data. Social media was new at the time. We had to keep using it to learn more.

We will see what is to be discovered/unearthed with companies like OpenAI, etc.

Another great question!

Nika

@melonie_green1 Thank you. Maybe we will get to the point when AI agents will become a new social media. AKA, it will be necessary to use them so people will notice your existence. 🤷‍♀️

Esther George

To be honest, I share almost everything and anything with my ChatGPT. But I don't share things I would be ashamed of if they ever leaked, because I don't even trust myself, so why should I trust an agent I can't see 🤷‍♀️

Nika

@george_esther Okay, I think that I should reconsider my approach, because OpenAI knows me way better than I know myself, lol :D

AJ

I have deleted all the saved memories and data from my ChatGPT account. I never got Plus, and at this point I would trust Honey more than OpenAI.

I've tried to share non-identifying stuff, but the truth is they can fingerprint your device pretty easily: metadata, cookies, a lot of things really.

There is never a guarantee of safety.

Frankly, I'm trusting Anthropic less and less these days as well.

Nika

@build_with_aj Anthropic is commercialising itself even more, which may mean more intrusion into users' privacy, more data collection, etc. Maybe.

Han

I share enough context with AI to get useful advice, like work dilemmas or ideas, but keep sensitive stuff (finances, health, personal IDs) to myself. AI can feel trustworthy, but at the end of the day, it’s still a system, not a human. Balance usefulness with privacy and you’re good.

Nika

@hanatwork Well, but when you want the most precise answer, you need to provide information that's as precise as possible. Or not?

John Baek

I think some helpful guidelines on sharing information, outlining the pros, cons, and risks, would help many users. This hasn't been discussed enough when it's such an important topic.
Nika

@fitnessrefined Everything can be good or bad, but it depends on who owns the company and who the decision makers are (apart from the users who provide the data).

Sergey Kargopolov

I pretty much tell it everything about me and the work that I am doing. I don't share sensitive documents, but otherwise I have no secrets to keep from it. I feel that the more it knows about me, my plans, my goals, my values, my struggles, my immediate challenges, the better the suggestions it can provide me with.

Nika

@sergey_kargopolov If you live like this, without any secrets, then you are pretty safe :D

Sergey Kargopolov

@busmark_w_nika but I also turned off the "Train model" option in all applications that I use.