AI agents are increasingly making real decisions in businesses. They qualify leads, respond to customers, analyze data, and sometimes trigger actions that affect revenue or customer experience. As these systems move from suggesting to actually deciding, mistakes become inevitable.
When that happens, responsibility becomes unclear. The user configured the system, the company built the product, and the underlying models often come from another provider. If an AI agent makes the wrong call and it impacts a customer or revenue, where should accountability actually sit?
Curious how others are thinking about this. Who should be responsible in such cases, and are there any legal guidelines or draft regulations emerging around this?
Most people are using AI wrong and I was one of them.
For the first year, I used AI like a fancy Google. "Write me a product description." "Summarize this." "Give me 10 ideas for X." Useful? Sure. Transformative? Not really.
If you're still sitting on your launch, this is the push.
YC made a special exception for this community: one or more companies that launch tomorrow will get a YC interview and potentially funding. A YC partner will review every eligible launch.
Hot take from NFX Venture Fund: fundraising is broken because of... endless forms. Not because you don't know the right people. Not because you lack warm intros. Just. The. Forms.

Here's how they put it themselves: "Fundraising is broken. Endless forms, vague timelines, ghosted follow-ups." (This landed in my mailbox.)

Great. So they built the antidote. You can apply if you're a Harvard, MIT, Berkeley, or Stanford student. Or a woman in HealthTech. No more broken fundraising! The short form is now available... to a very specific group of people.

If you think the real bottleneck is something else, you are not alone. What do you think actually breaks fundraising in 2026?
We all have that one piece of content (an article, a talk, a thread) that genuinely changed how we think about something.
For me it was "1,000 True Fans" by Kevin Kelly. The idea that you don't need millions of followers to build a sustainable creative career completely shifted how I think about building audiences. It's also one of the core ideas behind Copus: helping people build real, engaged communities around the content they care about.
According to @RevenueCat's State of Subscription Apps 2026 report, "hard paywalls convert 5x better than freemium, but with significantly wider variance."
Day 35 download-to-paid, freemium vs. hard paywall
Does access method impact download-to-paid conversion within 35 days?
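To make the metric concrete, here is a minimal sketch of how day-35 download-to-paid conversion could be computed per cohort. The function name and the sample cohorts are my own illustration, not data from the report:

```python
from datetime import date, timedelta

def day35_conversion(cohort):
    """Share of downloads that converted to paid within 35 days.

    `cohort` is a list of (install_date, first_paid_date_or_None) tuples.
    """
    window = timedelta(days=35)
    converted = sum(
        1 for installed, paid in cohort
        if paid is not None and paid - installed <= window
    )
    return converted / len(cohort)

# Hypothetical cohorts, purely for illustration
hard_paywall = [
    (date(2026, 1, 1), date(2026, 1, 2)),
    (date(2026, 1, 1), None),
    (date(2026, 1, 3), date(2026, 1, 20)),
    (date(2026, 1, 5), None),
]
freemium = [
    (date(2026, 1, 1), None),
    (date(2026, 1, 2), None),
    (date(2026, 1, 2), date(2026, 3, 1)),  # paid, but after day 35
    (date(2026, 1, 4), None),
]

print(day35_conversion(hard_paywall))  # 0.5
print(day35_conversion(freemium))      # 0.0
```

Note that a user who pays after the 35-day window (like the third freemium user above) counts as unconverted for this metric, which is part of why freemium's number looks so different from a hard paywall's.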
Road to 1,000,000 Votap users, Day 49 | Current: 1,250

I felt extremely busy last week and realized I barely did anything. We said: "Let's reach out to as many VCs and angels as possible." Sounds productive, right? But your brain doesn't understand that. What does "many" mean? Who exactly? Where do we find them? What do we say? How many per day?

I realized the plan has to be almost stupidly simple. So we sat down for a full day and turned it into a protocol: find investors here, open their page, follow, wait X time, send message, log it in the sheet, etc.

Some people might feel a bit stupid needing this level of structure. I actually think it's the opposite. If your brain is capable of solving hard problems, your time is too valuable to waste on small decisions. Structure lets you spend your energy executing instead of constantly figuring out what to do next.

Download Votap from the App Store if you want to follow along. More tomorrow.
A story and an experiment have been spreading on X: Scientists uploaded the brain of a fruit fly into a computer, and now it lives freely in its own simulation.
We managed to clone the physical form of animals decades ago (for example, a goat was cloned using SCNT in 1999). There was even a controversial case in China where a scientist was prosecuted after creating gene-edited babies in 2018.
Let me start from the creator's perspective: I personally don't have a product (apart from hiring people for creative work or offering personal consultations).
But as a creator, I constantly share content, insights, and information: value that helps me build trust (for free). Based on that perceived expertise, people eventually decide to work with me (a paid service).
I've been noticing something lately. We went from using AI as a tool to letting AI become the default for almost everything: writing, deciding, planning, even reflecting.
Need to write an email? AI. Need to make a decision? Ask AI. Need to understand how you feel about something? Believe it or not, AI.
The problem isn't the technology. The problem is that we're quietly outsourcing the one thing that makes us valuable: our ability to think for ourselves.
Early-stage founders often try to improve their product as much as possible and tend to take almost any feedback into account.
Sometimes they end up adding every feature users (even non-paying ones) ask for, even when those features are unnecessary. The product then becomes more complicated and harder to use.
And I'm not even talking about the stage when the product is already established. At that point, there are more users, and their expectations start to differ.
If you followed Naoma a year ago, you knew us as a sales conversation analytics tool. We connected to your CRM, analyzed rep calls, and surfaced patterns from top performers so the rest of the team could learn from them.
It worked. Teams liked the insights.
But we kept noticing the same thing: the real bottleneck wasn't after the demo. It was getting to the demo in the first place, and what happened in those first few minutes before a rep ever joined.
Qualified buyers were waiting 3 to 6 days for a demo slot. Many dropped off. The ones who showed up often hadn't been properly qualified. Sales reps were spending half their week running intro demos for people who were never a fit.