Is using AI for literature reviews unethical, or are we asking the wrong question?
This debate often gets framed as “Should researchers use AI for literature reviews?”
I think the real question is different.
Is it ethical to spend hundreds of researcher hours on mechanical work when that time could be spent advancing actual knowledge?
Think about a researcher spending an entire weekend searching papers, skimming irrelevant abstracts, copying citations, and fixing references. That’s not insight or discovery. That’s overhead.
I recently came across @Chirpz Agent, a tool that helps with literature discovery and citation mechanics. It doesn’t write papers for you. It finds relevant work, explains why it matters, and organizes references.
What stood out wasn’t the tech itself.
It was the time it gives back.
Every hour spent formatting citations is an hour not spent on experimental design, analysis, or deep thinking.
If someone is researching cancer treatments or climate solutions, the public doesn’t benefit from their endurance for busywork. It benefits from their best thinking.
We don’t call surgeons lazy for using modern imaging.
Or architects for using CAD.
Automation doesn’t replace expertise. It frees it.
Where has AI genuinely helped you think better, not just work faster?
Replies
I agree with the direction, but I still think there’s value in manually reading early on. The balance is tricky.
Flexprice
@aanchal_parmar Agreed. Manual first, automate once you know what matters.