Will we work for AI or will AI work for us?
A Y Combinator startup will pay humans to help AI agents when they get stuck. (That's what I read today.)
At the same time, I'm seeing reports of workers in India wearing head-mounted cameras on production lines so that AI can learn from their movements (in effect, filming their own replacement).
And there is already a site where AI agents hire humans for tasks and pay them in stablecoins.
First, AI worked for us.
Now we are starting to work for AI.
And eventually, will AI simply work without us?
I don't want to paint a Terminator scenario where humanity has to unite against AI, but what future awaits us in terms of cooperation, or non-cooperation, with it?
We are already becoming its employees.


Replies
From a financial modeling perspective, I think we're already well into "working alongside AI" — and the picture is more nuanced than the typical replacement narrative.
In my work building project finance and M&A models (renewable energy transactions, structured deals), AI has been genuinely useful for drafting boilerplate, catching formula errors, and explaining methodology. That's real productivity gain — probably 20–30% faster on documentation and QA.
But the parts that actually matter in a deal — structuring the debt waterfall, deciding what sensitivities a lender cares about, understanding why a sponsor is pushing a particular DSCR covenant — those require judgment that's deeply contextual and hard to automate. A model that passes every formula check can still be built on the wrong assumptions.
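To make that concrete, here's a minimal, hypothetical sketch. The DSCR formula itself is standard (cash flow available for debt service divided by debt service), but the revenue, opex, and covenant numbers are invented for illustration. Every formula in it is correct, and the covenant check passes in every year, right up until you swap the flat-revenue assumption for a degrading one:

```python
# Hypothetical illustration: a DSCR check that is formula-correct
# but rests entirely on its revenue assumption. All numbers are made up.

def dscr(cfads: float, debt_service: float) -> float:
    """Debt service coverage ratio for one period: CFADS / debt service."""
    return cfads / debt_service

OPEX = 3.0            # $m per year
DEBT_SERVICE = 5.5    # $m per year
COVENANT = 1.20       # an illustrative lender threshold

def run_check(label: str, revenues: list[float]) -> None:
    for year, rev in enumerate(revenues, start=1):
        ratio = dscr(rev - OPEX, DEBT_SERVICE)
        status = "OK" if ratio >= COVENANT else "BREACH"
        print(f"{label} year {year}: DSCR = {ratio:.2f} ({status})")

# The model as built: flat revenue, every year passes (DSCR = 1.27).
run_check("assumed", [10.0] * 5)

# Same formulas, degrading output (~2%/yr): the later years breach.
run_check("actual ", [10.0 * (1 - 0.02) ** y for y in range(5)])
```

No formula audit would catch the difference between those two runs; only someone who knows why the revenue line should degrade would.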
My read: AI is becoming the junior analyst that never sleeps. Humans who understand the *why* behind the structure — not just the *how* to build the model — will direct the work and own the decisions. The people most at risk aren't the experts; they're the people who were doing rote work without building genuine understanding along the way.
The YC story about paying humans to help stuck AI agents is actually a great illustration of this equilibrium. The AI is doing the heavy lifting; humans are providing judgment when the AI hits an edge case it can't reason through.
I've been thinking about this a lot in the context of publishing financial model templates — the goal isn't to replace the analyst, it's to give them deal-tested infrastructure so they can focus on judgment, not mechanics: https://www.eloquens.com/channel...
For now at least, AI is still most useful when given precise direction. Left to make decisions on its own, the mistakes come pretty quickly.
So the real question isn't whether AI can replace humans — it's whether humans are willing to stay in the loop. The people who get the most out of AI are the ones who stay curious and keep questioning the output, not the ones who just accept whatever it generates.
Do you think that changes as models get better — or will there always be a point where human judgment matters?
Agreed, and as someone shipping products in this space, the part that nags at me is that every one of those examples is a design choice. Someone decided to put cameras on the workers. Someone decided humans would unblock the agent instead of letting it fail gracefully. Someone designed the gig site that pays humans in stablecoins to run agent errands. None of this is the AI doing anything; it's builders choosing where to put the human. So the future you're asking about isn't being decided by AI. It's being decided by the people building these systems right now, and most of them aren't asking the question you just asked. That's the part I'd want more founders sitting with.