Most people are using AI wrong, and I was one of them.
For the first year, I used AI like a fancy Google. "Write me a product description." "Summarize this." "Give me 10 ideas for X." Useful? Sure. Transformative? Not really.
TL;DR: Anthropic refused to sign a contract with the Pentagon that would have allowed the U.S. military to use all of its models without restrictions. Anthropic insisted on an exception (brace yourself): its models cannot be used 1) for mass surveillance of citizens, or 2) for autonomous killing. Now the administration is threatening that if the founder of Anthropic doesn't change his mind by a certain date, they will come after him.
Google, OpenAI, and Musk (Grok) have all signed the contract.
In the hours since Sam Altman's announcement, people have been speaking out in droves about cancelling their OpenAI subscriptions and switching to Claude.
I am a Computer Science student researching how solopreneurs and small startups build new apps and what their stacks look like. In particular, I'm interested in how you handle things like authentication, billing, and permissions/authorization in your apps.
Let me know what you're working on below and how you're going about it -- I'd love to connect for some quick calls to learn about your product and talk about your process in building it!
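For context on the permissions/authorization part of the question: here's the kind of minimal role-based check many small apps start with before reaching for a library. This is just an illustrative sketch; the roles, actions, and function names are hypothetical, not from any particular framework.

```typescript
// Illustrative sketch of role-based authorization for a small app.
// Roles and actions below are made up for the example.
type Role = "owner" | "member" | "viewer";
type Action = "read" | "write" | "billing";

// Static permission map: which actions each role may perform.
const permissions: Record<Role, Action[]> = {
  owner: ["read", "write", "billing"],
  member: ["read", "write"],
  viewer: ["read"],
};

// Check whether a given role is allowed to perform an action.
function can(role: Role, action: Action): boolean {
  return permissions[role].includes(action);
}

console.log(can("member", "billing")); // false
console.log(can("owner", "billing")); // true
```

Curious whether folks here hand-roll something like this or go straight to a hosted auth/authorization service.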
I came across Deutsche Bank's latest report on AI, and it sparked an interesting thought experiment: how likely is it that we'll see AGI (AI that thinks and learns like a human) within the next five years?
The report highlights a fascinating divergence: the view from money vs. the view from science.
Money: the probability inferred from trillions poured into data centers, Nvidia chips, and servers. Investors seem to be betting that AGI is inevitable.
Science: the probability inferred from research papers and AI development models. Experts are far more cautious, suggesting the realistic probability is only around 20%.