I stopped asking AI to do tasks. I started asking it to think with me. Here's what changed.
Most people are using AI wrong, and I was one of them.
For the first year, I used AI like a fancy Google. "Write me a product description." "Summarize this." "Give me 10 ideas for X." Useful? Sure. Transformative? Not really.
Then I tried something different, and it rewired how I work entirely.
The shift: From "do this" → "think through this with me"
Instead of: "Write a go-to-market strategy for my product"
I started asking: "Here's my current GTM thinking: [dump]. What assumptions am I making that I haven't tested? What would you push back on?"
The difference is massive. The first gets you a generic 5-step plan. The second gets you a thinking partner that stress-tests your blind spots.
3 specific ways this changed my workflow:
Pre-mortems before every major decision
Before shipping a feature, I do a quick AI session: "Assume this feature failed after 3 months. Walk me through the 5 most likely reasons why." The answers are uncomfortable. That's the point.
Devil's advocate on copy and positioning
Instead of "make this better," I ask: "Play a skeptical customer who almost bought but didn't. What objections does this messaging leave unresolved?" My conversion copy got sharper within a week.
Pattern-breaking on stuck problems
When I'm spinning on a problem: "What's the most contrarian take on how to solve this? What would someone do if they had to solve this in 24 hours with no budget?"
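If you find yourself reusing these prompts, the three patterns above can be sketched as tiny templates. This is just an illustrative sketch; the function names and exact wording are mine, not from any library:

```python
# Three reusable prompt templates for the patterns above.
# Names and phrasing are illustrative, not a real API.

def premortem_prompt(decision: str, months: int = 3, reasons: int = 5) -> str:
    """Pre-mortem: assume failure, then work backwards to causes."""
    return (
        f"Assume this failed after {months} months: {decision}. "
        f"Walk me through the {reasons} most likely reasons why."
    )

def skeptic_prompt(copy_text: str) -> str:
    """Devil's advocate: a customer who almost bought but didn't."""
    return (
        "Play a skeptical customer who almost bought but didn't. "
        "What objections does this messaging leave unresolved?\n\n"
        f"{copy_text}"
    )

def pattern_break_prompt(problem: str) -> str:
    """Pattern-breaking: force a contrarian, constrained framing."""
    return (
        f"Problem: {problem}\n"
        "What's the most contrarian take on how to solve this? "
        "What would someone do with 24 hours and no budget?"
    )

print(premortem_prompt("our new onboarding flow"))
```

The point isn't the code; it's that each template bakes in a constraint (a failure assumption, a skeptical persona, a time/budget limit) so you don't drift back into "make this better" prompts.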
The honest caveat
This only works if you push back when the AI is wrong. The goal isn't to accept every output; it's to use the friction to surface your own thinking. Half the value comes from reading a response and thinking "no, that's not right, and here's why."
What's your current AI workflow? Have you found prompting frameworks that actually changed how you work, or is most of it still hype?



Replies
This shift is huge and underrated.
The "AI as executor" model hits a ceiling fast — it does the task but you're still the bottleneck deciding which tasks matter. When you flip to "AI as thinking partner," the leverage multiplies.
What I've noticed with Hello Aria (AI assistant via WhatsApp): the most engaged users aren't using it to cross off todos. They're using it to think out loud — voice-noting a problem, getting a reflection back, then deciding what to actually do. The action often looks different after that.
The best AI interactions feel less like a vending machine and more like a thinking session with a smart friend who never judges you for being uncertain.
Murror
@sai_tharun_kakirala Thank you for your point of view. It's helpful, and the real-world evidence is a useful reference for me.
minimalist phone: creating folders
I like this approach. But I still haven't figured out how to ask questions in a way that gets AI to teach me. In the end, it always just gives me the results.
Murror
@busmark_w_nika Basically, I think AI was initially built to answer questions and solve our problems, so the final result is usually the answer itself, rather than it proactively thinking and asking us questions. I think we should give it as many of our own hypotheses as possible so it has context and information, and then ask it to discuss them with us.
This is quite underrated, and I must admit I've done this too (almost went through a "Dear GPT" phase, ngl).
This is especially powerful when you're a one-person decision-making machine and need more than one head, plus some stats that can tell you whether what's in your head can actually be done well. Because I've also been hit with AI bias. Doesn't always work.
Murror
@swati_paliwal Absolutely. As you said, when working extensively with AI, we tend to get caught up in its responses, and sometimes we really need to give our own minds space to truly think through the decisions.
I do this too, but found it was too sycophantic about my shitty ideas. So I asked it to be a "hostile reviewer", an adversary to every proposal, and it humbles me really fast lol. The training must have emphasized content from OpenReview, because the specific phrase "hostile reviewer" also made the responses capture more nuance.