A user ran the same query on Perplexity, Gemini, and OrcaSheets. Only one got it right.
Nishit uploaded a large CRM dataset and asked a simple question: rank salespeople by total sales.
Perplexity got it wrong. Gemini got it wrong.
Why? Both were silently sampling a subset of the rows instead of processing the full dataset. Big dataset? Let's just look at some of it and call it a day.
OrcaSheets processed every row locally on his machine. Correct answer every time.
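To make the failure mode concrete, here's a minimal sketch of how sampling can flip a ranking. The data, names, and numbers are synthetic (not Nishit's actual CRM), but the mechanism is the same: if one rep's sales cluster late in the file, an aggregate computed over only the first rows crowns the wrong winner.

```python
import pandas as pd

# Synthetic CRM data: Bea's deals all land late in the file.
sales = pd.DataFrame({
    "rep":    ["Ann"] * 6 + ["Bea"] * 4,
    "amount": [100] * 6 + [200] * 4,
})

# Full aggregation over every row: Bea leads (800 vs 600).
full = sales.groupby("rep")["amount"].sum().sort_values(ascending=False)

# "Sampled" aggregation over just the first 7 rows: Ann appears to lead.
sampled = sales.head(7).groupby("rep")["amount"].sum().sort_values(ascending=False)

print(full.index[0])     # Bea
print(sampled.index[0])  # Ann
```

Ten rows is obviously a toy, but scale it to a million and the trap is the same: any tool that silently truncates or samples before aggregating can return a confident, well-formatted, wrong ranking.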
This is something we think about a lot while building OrcaSheets. When you're making business decisions based on data, "close enough" isn't good enough.
Has anyone else run into AI tools giving confident but wrong answers on large datasets? Would love to hear your stories.
