People are switching from OpenAI to Claude following Sam Altman's announcement today.
TL;DR: Anthropic refused to sign a contract with the Pentagon that would have allowed the U.S. military to use all of its models without restrictions. Anthropic insisted on an exception, and brace yourself: its models cannot be used 1) for mass surveillance of citizens, or 2) for autonomous killing. Now the administration is threatening that if the founder of Anthropic doesn't change his mind by a certain date, they will come after him.
Google, OpenAI, and Musk (Grok) have all signed the contract.
Following Sam Altman's announcement over the past few hours, people have been speaking out massively about cancelling their OpenAI subscriptions and subscribing to Claude.
What stance do you take toward big tech companies after this?
Are you staying loyal to ChatGPT, or are you switching to Claude?
BTW, one of the users on Reddit proclaimed about Claude:
"Claude needs time to warm up to you. It's strange, but somehow that's just how it works with Claude. It's the most human-adjacent AI I've ever used." I am switching too.



Replies
The tech industry is getting more and more similar to a drama series 😄
I build on the Claude API. The Pentagon decision made me more confident in that choice, not less. When your infrastructure provider has clear ethical lines, it reduces a risk most founders don't think about: waking up one morning to find your product associated with something you never signed up for.
minimalist phone: creating folders
@spunchev It seems that Dario is now profiting very well after Sam Altman closed a deal with the Pentagon.
What about quality of coding?
I see that the latest Codex is better than the latest Claude. Why should I choose lower quality?
Every big corporation collaborates with the government. Even if they say no, you need to check twice.
minimalist phone: creating folders
@volodymyr_nosenko We can be 100% sure that statement holds in China.
@busmark_w_nika Not only China. Anthropic is officially building an EMEA public sector team to work closely with EU governments. I'm confident the EU is not interested in Anthropic collaborating with the US government. It's a political decision. The same goes for the EU and OpenAI. AI knows a lot when it's integrated into government services.
minimalist phone: creating folders
@volodymyr_nosenkoย I probably missed this one. Is there any report on that Anthropic โ EMEA deal?
@busmark_w_nika They are currently actively hiring in this direction (check the official page), and I think they don't publicize all of their government collaborations in order to avoid getting into trouble, as OpenAI did.
Copus
This is exactly why values matter in tech. When you build tools that millions of people rely on daily, the decisions you make about ethics aren't abstract: they directly shape what's possible.
As builders ourselves, we think a lot about this at Copus. We chose to build a platform where creators own their content and curation is transparent. Not because it was the easiest path, but because we saw what happens when platforms optimize purely for engagement without guardrails.
The companies that take principled stances now will earn long-term trust. Short-term revenue from questionable contracts isn't worth the erosion of user confidence.
Honestly, I try not to treat AI tools like sports teams.
Most people switching today will probably switch again next month if another model becomes better, cheaper, or more useful. The space moves too fast to be "loyal" to one company.
For me it's pretty practical: I use the tool that helps me get the job done best. Sometimes that's ChatGPT, sometimes Claude, sometimes something else.
That said, how companies handle safety, military use, and surveillance does matter. It's good that people are paying attention and asking questions. Pressure from users is one of the few things that actually influences big tech decisions.
So my stance is: stay informed, don't be blindly loyal, and use whatever tool works best for your needs.
minimalist phone: creating folders
@sangeet_banerjee The last sentence is worth framing!
I actually prefer Claude at this point. As a consumer marketer, I really can't get on with how ChatGPT writes and thinks. It's the same thing over and over again.
The fact that Claude can get so many things done for me just by accessing the web, and in much more depth, really helps with brainstorming.
I am def switching
minimalist phone: creating folders
@simplysanchitaa Do you use Claude just because of that Pentagon case, or were you using it before as well?
As a builder (we use multiple models under the hood at Hello Aria depending on task type), I think the "switching" narrative misses the bigger shift: users are becoming model-agnostic, but workflow-locked.
People don't actually care which LLM powers their assistant; they care about the interface and the continuity of context. That's where the real switching cost lives.
Claude has genuinely earned trust through better reasoning and less hallucination on complex tasks. But the power user shift isn't really about Sam's announcement โ it's been building for months as Claude 3 Opus and now Sonnet showed consistent quality improvements.
The real question is what happens when the model quality gap closes completely. Then it's purely a product and distribution game.