Mohamed Elnabarawi

Is it ethical for SaaS accounting tools to train AI on user data? 🛡️


I've been building a fintech tool for the MENA region, and I made a controversial choice: I built a desktop app.

Everyone told me to build a SaaS. "It's easier to scale," they said. "You can charge subscriptions," they said.

But I couldn't get past one thing: Financial Data Privacy.

As a developer, I know that once data hits a cloud server, "privacy" becomes a policy, not a guarantee. With local AI (Ollama, LM Studio), we can now do OCR and analysis entirely offline.
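To make the offline claim concrete, here is a minimal sketch of what "local OCR" can look like with Ollama's HTTP API. It assumes Ollama is running on the default local port with a vision-capable model pulled (I use `llava` as a placeholder; the prompt and helper names are my own, not from any specific product):

```python
import base64
import json
import urllib.request

# Ollama's default local endpoint -- nothing here talks to the internet.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_ocr_request(image_bytes: bytes, model: str = "llava") -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint.

    The image is base64-encoded and sent only to the local server,
    so sensitive documents never leave the machine.
    """
    return {
        "model": model,
        "prompt": "Extract all text from this bank statement image.",
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    }


def ocr_locally(image_path: str, model: str = "llava") -> str:
    """Send an image to the local Ollama instance and return the extracted text."""
    with open(image_path, "rb") as f:
        payload = build_ocr_request(f.read(), model)
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The trade-off versus a cloud API is that output quality depends on the local model and hardware, but the privacy property is structural rather than contractual.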

My question to other makers: Are we too comfortable sending sensitive user data (like bank statements) to OpenAI/Anthropic APIs? Where do you draw the line between "AI convenience" and "Data Sovereignty"?

I chose the hard path (Desktop + Local AI) to sleep better at night. What would you do?


Replies

Ben Brauburger

I honestly think we’ve become far too comfortable sending sensitive data into cloud AI tools.

Even in everyday life, people already use AI for deeply personal things, from medical questions to private situations they would never normally share so openly.

Of course, we are told that the data is protected and not passed around, but I think most of us know the reality is probably more complicated than that.

That’s exactly why we’ve also been working with local AI and trying to push this idea further from a privacy and data protection perspective.

I really believe this will become much more important over the next few years, especially for workflows involving financial, medical, or other sensitive information.

I’m curious about your setup though.

How are you using local AI in practice, on what kind of machine, and on which platform?