Nika

How much do you share with AI? Or more specifically, with ChatGPT?

Recently, I asked how much you trust AI agents.


– When it comes to finances, you said you don’t.
– With health data, you don’t either.

But now I have a different question.

When you talk to ChatGPT, you give it context. You share what’s bothering you. Often, it involves personal relationships, finances, work situations, doubts, and conflicts.

Do you think that’s okay?

For example, I’m not in favour of AI agents managing things on my behalf.
But the other side of the coin is that I share a lot of information with OpenAI.

And honestly?
I would probably be worried if some of that information leaked. :D

So, where is the line between sharing information and building trust?

AI agents may have more autonomy than a chatbot.

But let’s be real... even with large companies, you never have a 100% guarantee that your data is completely safe.


Replies

Ryan Tucker

There's always a risk that comes with anything.

It's up to us to determine if the reward outweighs that risk.

I personally tell it a lot as well and have it do things for me. But, I do make sure that my settings are set to "Do not use my data in AI training."

Nika

@ryan_tucker13 Hopefully, they respect the box you checked. Sometimes it feels like it's just pro forma.

Ryan Tucker

@busmark_w_nika haha yea sometimesssss

Nika

@ryan_tucker13 (we all hope for that) :DD

Randall Tinfow

No system, human or machine, offers guaranteed confidentiality. Large companies get hacked, have insider issues, face subpoenas for private data, or suffer AI mistakes that expose data.

The DoD (where my son works as a security consultant) measures hacking attempts in the millions per hour.

I've seen my ChatGPT conversations show up in public forums, word for word. I rarely use it any more, in favor of an Anthropic-based agent I've built. I still take the following precautions:

  • I've set up the agent with barriers against model training.

  • We anonymize heavily.

  • We delete chats with sensitive content.

  • We avoid pasting docs or screenshots containing PII.

There are a few practical ways to test data leakage:

  • Retention checks

  • Canary tokens

  • Jailbreaks

  • Shared links with fake ID

Details online.
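The canary-token idea from the list above can be sketched in a few lines of Python: embed a unique, unguessable marker in what you share, then periodically search for it in public places. The helper name and label format here are illustrative, not any particular tool's API:

```python
import secrets

def make_canary(label: str) -> str:
    """Create a unique marker string to embed in text shared with an AI.

    If this exact string ever shows up in model output, a public forum,
    or a search engine, you know that specific share leaked.
    """
    # 16 random bytes -> 32 hex characters; practically impossible to guess
    token = secrets.token_hex(16)
    return f"canary-{label}-{token}"

# One canary per document or conversation, so a leak can be traced to its source
doc_canary = make_canary("chatgpt-2024-notes")
print(doc_canary)  # e.g. canary-chatgpt-2024-notes-3f9a...
```

Using a distinct label per share is the key design choice: a hit on the token tells you not just *that* something leaked, but *which* conversation it came from.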

Nika

@rtinfow This should be a separate post, thank you for sharing those practical ways. Many people should be reminded about them :)

Melonie Green

I have to say I share a lot with my AI support team. I think there is a line between what you adopt and act out in life based on what is shared/advised, etc. I think there can be a line in terms of giving it account information or outright copy/pasting private codes and access info. But I think it is one of the "safest" places to share what one perceives as private context details. If you truly use it, you know context is everything to getting a really solid reply you can use.

I was not like this at first. But after seeing it as one of my tools, similar to email, which has access to so many things... I had to let go and start getting the best out of the tool. I use my judgement, think about the nature of what I'm sharing, and if I think it's too private, I work around it.

The trust is to be seen with the companies who host the agents we use. Similarly to Facebook... we users thought we were connecting with friends and family. Pretty soon we learned the true setup and business model behind social media: data, our data. Social media was new at the time. We had to continue the experience to learn more.

We will see what is to be discovered/unearthed with companies like OpenAI, etc.

Another great question!

Nika

@melonie_green1 Thank you. Maybe we will get to the point when AI agents will become a new social media. AKA, it will be necessary to use them so people will notice your existence. 🤷‍♀️

Esther George

To be honest, I share almost everything and anything with my ChatGPT. But I don't share things I would be ashamed of if it ever leaked, because I don't even trust myself, so why should I trust an agent I can't see 🤷‍♀️

Nika

@george_esther Okay, I think that I should reconsider my approach, because OpenAI knows me way better than I know myself, lol :D

AJ

I have deleted all the saved memories and data from my ChatGPT account. I never got Plus, and at this point I would trust Honey more than OpenAI.

I've tried to share non-identifying stuff, but the truth is they can fingerprint your device pretty easily: metadata, cookies, a lot of things really.

There is never a guarantee of safety.

Frankly, I'm trusting Anthropic less and less these days as well.



Nika

@build_with_aj Anthropic is commercialising itself even more, which means more intrusion into users' privacy, more data collection, etc. Maybe.

Han

I share enough context with AI to get useful advice, like work dilemmas or ideas, but keep sensitive stuff (finances, health, personal IDs) to myself. AI can feel trustworthy, but at the end of the day, it’s still a system, not a human. Balance usefulness with privacy and you’re good.

Nika

@hanatwork Well, but when you want the most precise answer, you need to provide information that's as precise as possible. Or not?

John Baek

I think some helpful guidelines on sharing information, outlining the pros, cons, and risks, would help many users. This hasn't been discussed enough for such an important topic.

Nika

@fitnessrefined Everything can be good or bad, but it depends on who owns the company and who the decision makers are (apart from the users who provide the data).

Sergey Kargopolov

I pretty much tell it everything about me and the work that I am doing. I don't share sensitive documents, but otherwise I have no secrets to keep from it. I feel that the more it knows about me, my plans, my goals, my values, my struggles, my immediate challenges, the better suggestions it can provide me with.

Nika

@sergey_kargopolov If you live like this, without any secrets, then you are pretty safe :D

Sergey Kargopolov

@busmark_w_nika but I also turned off the "Train model" option in all applications that I use.