TL;DR: Anthropic refused to sign a contract with the Pentagon that would have allowed the U.S. military to use all of its models without restriction. Anthropic insisted on an exception (brace yourself): that its models cannot be used 1) for mass surveillance of citizens, or 2) for autonomous killing. Now the administration is threatening that if Anthropic's founder doesn't change his mind by a certain deadline, they will come after him.
Google, OpenAI, and Musk (Grok) have all signed the contract.
In the hours since Sam Altman's announcement, people have been speaking out in large numbers about cancelling their OpenAI subscriptions and switching to Claude.
Lately, I've been reflecting on a quiet fear: as AI tools get better at creating art, writing, and design, creativity itself might lose its meaning.
It feels like a valid concern because:
AI can produce beautiful art and music faster than a human ever could,
Many creative fields are shifting from original creation to "curating" or "editing" AI outputs,
Instant generation often replaces slow, imperfect human exploration, and
Younger generations are growing up with AI co-creation as the norm, not the exception.
I wonder: Will true creativity still matter when "good enough" is instantly available?