Nika

Are there any topics you wouldn't ask AI for advice on?

We've literally put our entire lives in the hands of artificial intelligence.

From work responsibilities to relationship issues, to advice on philosophy and our bodies.

Europe has always seemed like a place that would regulate, but it seems the US isn't far behind. Today I read a report that a New York bill would ban AI from answering questions related to medicine, law, dentistry, nursing, psychology, social work, engineering, & more.

According to them, the main reason is to prevent AI chatbots from causing harm through their advice and endangering minors, for example. But then there's the view that it's really about protecting the value of professional services (where some charge $500/hour), so that people don't bypass paid experts and get advice for free instead.

  • Where is the truth, according to you?

  • + Are there any areas where you wouldn't take advice from AI?

Replies
Roy McKenzie

Great question. I think I would be open to asking AI, in general, its perspective on pretty much anything. It would be one data point in a cornucopia of data points I would use to judge for myself my next action or conclusion. I will say there are certain AIs I would not go to for certain topics, because I know they have a bias: their creators have said as much, and I have seen it demonstrated in my own experience. For example, I do not go to ChatGPT, or especially not to Claude, for any serious political analysis. I believe there is absolute truth, and I reject the idea of relative truth. I believe there are a lot of people and systems ready to peddle their own version of the truth. Discernment is a muscle that should be constantly exercised.

Nika

@roymckenzie Confirmation bias is real – we tend to seek out information that confirms our beliefs :)

Sean Howell
This is the argument for Poe and running Open WebUI. There are so many interesting complexities to the questions we pose to the universe. My only big question is: do I reduce my own thinking ability by going to AI first? Otherwise, I'm very happy to have a computer expanding my context.
Nika

@howell4change We are on the same page here :)

Tereza Hurtová
For me it’s not "never ask AI," but knowing when AI is a thinking partner vs. when you need someone accountable for the advice. I think the line will probably be less about topics and more about accountability. AI can be incredibly useful for thinking things through, getting perspectives, or learning the basics. But when decisions carry real consequences for health, legal status, finances, or safety, I’d still want a human professional involved.

What I find interesting right now is how many products are emerging that position LLMs as therapists, coaches, or advisors. If regulation starts restricting AI advice in areas like psychology or medicine, it will be interesting to see where those products land or how they adapt.
Nika

@tereza_hurtova I believe that the (r)evolution of AI in healthtech, biotech, and the sciences would be far more helpful than in coding or replicating texts and videos. Yesterday I attended a lecture (about movies, so a little off topic), but it demonstrated pretty well that if we pay enough attention to a topic and its development, we can achieve far more progress. :)

Tereza Hurtová
@busmark_w_nika Yeah, I totally agree that there are some areas where I hope AI will be very beneficial. 🙏
Gianmarco Carrieri

The question is less about topic and more about two variables: reversibility + stakes.

Building in travel AI (Aitinery), I'm in a space where AI advice performs well — if it suggests the wrong restaurant or sequences a bad itinerary, you course-correct in real time. Feedback loop is fast, error cost is low, and the model has aggregated more useful knowledge about Tokyo neighborhoods than most individual humans.

The structural problem with medical, legal, or financial advice isn't the topic — it's that errors can compound silently for months before you catch them. A bad restaurant recommendation surfaces in an hour. A flawed legal strategy might surface at trial.

The NY bill framing it as a topic restriction feels like the wrong cut. You could run the same logic against nutrition influencers or financial bloggers, and we don't ban them from giving advice. The actual issue is whether people have the literacy to treat AI output as a well-informed first draft — not a final verdict.

The harder part: that literacy isn't evenly distributed. So regulation might be a blunt instrument trying to solve for the tail of users who don't know when to stop asking and start verifying.

Nika

@giammbo Now that I think about it, some restrictions might actually make sense. Humans tend to trust AI blindly. If something goes wrong, who is responsible?

Would it be a human vs. an AI from a certain company?

In a hospital, a doctor is responsible for your care.

But with AI, the relationship is different; you are ultimately the one choosing to follow the advice.

Gianmarco Carrieri

The accountability gap is real — and it might be the core issue, not the topic itself. When a doctor gives bad advice, there's a chain: liability, malpractice, professional license. With AI, the user absorbs the downside while the model absorbs nothing. That asymmetry is what makes high-stakes advice genuinely different. But I'd push back slightly: the same asymmetry exists with a random article on WebMD or a YouTube medical video — and we don't regulate those either. The question might be less about AI specifically and more about whether we're finally ready to confront the fact that information without accountability has always been a problem, and AI just scales it.

AJ

I would not take advice from AI in medical cases. At best it carries the biases of regular doctors; at worst it's completely off the mark.

Nika

@build_with_aj Valid point (tho I once tried to "diagnose" myself, and the AI was right... the doctor only confirmed what the AI said)