Nika

Will we work for AI or will AI work for us?

  1. A Y Combinator startup will pay humans to help AI agents when they get stuck. (This is what I read today.)

  2. At the same time, I see that Indian workers in production wear cameras on their heads, and the AI learns from their movements (practically filming the process of their own firing).

  3. In addition, there is already a site where AI agents hire humans for tasks and pay them in stablecoins.

  • First, AI worked for us.

  • Now we are starting to work for AI.

  • And eventually, will AI work (without us)?

I don’t want to portray a Terminator scenario where people will have to unite against AI, but what future awaits us in terms of cooperation/non-cooperation with AI?

We are already becoming its employees.



Replies

HS Kim
Exactly. I think I'm the target here. I've been building with AI for three months. I'm still not sure if I'm using it or it's using me.
Nika

@hellobzec It is exploiting you :D

Tham Yik Foong

I think in the short term, it will be a combination of both.
There will be scenarios where humans get AI to get other humans to work.

In other words, we are outsourcing to AI, which then outsources to other humans.

Nika

@tham_yikfoong Moral of the story is to be kind to those robots :D

Tham Yik Foong

@busmark_w_nika LOL great meme

Nika

@tham_yikfoong It is true hahaha

James Swift

What we're calling AI right now isn't artificial intelligence. It's statistical prediction. LLMs calculate the most probable next token based on training data. They don't understand the output, don't have goals, and can't reason about anything. Calling that intelligence is a marketing decision, not a technical one. A calculator doesn't understand maths. An LLM doesn't understand language.
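To make "most probable next token" concrete, here is a toy sketch with a four-word vocabulary and made-up logits (all the numbers and words are illustrative, not from a real model; a real LLM computes the logits from billions of weights, but the final step is the same arithmetic):

```python
import math

# Hypothetical vocabulary and made-up logits for some prompt.
# Illustrative numbers only -- a real model produces these from its weights.
vocab = ["mat", "dog", "moon", "quantum"]
logits = [4.2, 1.1, 0.3, -2.0]

# Softmax turns raw logits into a probability distribution over next tokens.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# "Prediction" is just picking the highest-probability token:
# no understanding, no goal, only arithmetic.
next_token = vocab[probs.index(max(probs))]
print(next_token)  # -> mat
```

That last line is the entire decision procedure: argmax over a probability table.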


The Terminator scenario requires artificial general intelligence: a system that can actually think, learn across domains, and form its own goals. That doesn't exist. Estimates for when it might range from five years to never; nobody really knows, because we don't understand human intelligence well enough to measure the gap.


So the question "will AI work without us" has a boring answer: the technology we have now literally can't, because it's architecturally incapable of independent action. It's autocomplete at scale. Extremely useful autocomplete, but it can't "employ" anyone or "replace" anyone on its own. It needs a human deciding what to feed it and what to do with the output; nobody is being employed by AI.

The YC startup paying humans when the agent gets stuck is a human-run company hiring people to handle the cases where their autocomplete fails. The stablecoin agents "hiring" humans is the same thing with a smart contract as the middleman. A human wrote that contract, a human deployed it, a human profits from it. The employer is still a person.


The real risk isn't machines gaining agency. It's companies and governments treating a probability engine as if it already has agency, and making irreversible decisions based on that mistake.

Nika

@splitpostio On reaching AGI at some point, and maybe going beyond it – do you think AI will understand context to such an extent that it will be thinking on its own? Because those AI agents right now can do many things that some people wouldn't be able to accomplish (and definitely not in such a short period of time).

James Swift

@busmark_w_nika Agents are fast and useful, but speed isn't understanding. Using the calculator analogy again: it solves equations faster than any human, but it doesn't know what a number is. Current agents chain together hundreds of autocomplete steps very quickly, and the output looks like thinking, but there's no comprehension behind it. Remove the prompt and the system does nothing, because it has no intent.
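The "chaining autocomplete steps" point can be caricatured in a few lines. `predict_next` below is a hypothetical stand-in for an LLM call (here it just returns canned steps); the point is structural: the loop only ever reacts to what it is fed, and with no prompt there is nothing for it to do:

```python
# Toy "agent" loop: chained one-step predictions, no intent of its own.
# predict_next is a hypothetical stand-in for an LLM call -- it returns
# a canned plan step based only on how much history exists so far.
def predict_next(history):
    steps = ["search docs", "draft answer", "DONE"]
    return steps[min(len(history), len(steps) - 1)]

def run_agent(prompt):
    # The human-supplied prompt is what starts the loop at all.
    history = []
    while True:
        step = predict_next(history)
        if step == "DONE":
            return history
        history.append(step)

print(run_agent("fix the bug"))  # -> ['search docs', 'draft answer']
```

Each iteration is one more "autocomplete" call; the apparent plan is just the transcript of those calls.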


Whether that gap closes with AGI is genuinely unknown. The problem is that we don't have a working definition of consciousness or understanding, so we can't measure how close or far we are. It could be five years, fifty, or architecturally impossible with the current approach.

Farrukh Butt

The camera on workers' heads example is the most unsettling one. It's not just AI replacing jobs; it's humans actively training their own replacements without always realizing it.

Nika

@farrukh_butt1 I think they know about it, but if they didn't do this, they would probably be fired.

Stan Kolotinskiy

From where I sit, it mostly depends on how deliberately you use it. I've been careful to keep myself in the loop on everything Claude Code produces rather than just letting it run. That takes more effort but it also means I'm still the one making decisions. I think the "working for AI" trap happens gradually, when you stop reviewing, stop questioning, and start just executing whatever it suggests - so let's prevent Skynet from happening! :D

Nika

@sk_uxpin we are already doomed. Today, I read about how we think less when using AI chatbots. https://www.bbc.com/future/article/20260417-ai-chatbots-could-be-making-you-stupider

Stan Kolotinskiy

@busmark_w_nika hahahaha, why am I not surprised :D

Elena K

I don’t think the future is as simple as “AI will work for us” or “we will work for AI.”

What’s happening is more subtle: humans are increasingly becoming the fallback layer for systems that appear autonomous. AI acts, predicts, recommends, and executes - but when it gets stuck, fails, misreads context, or faces ambiguity, a human steps in quietly behind the curtain.

So the issue is not only technical. It’s economic and philosophical.

Who sets the goals?
Who owns the systems?
Who captures the most value?
Who becomes replaceable, and who becomes more powerful?

If humans use AI to expand judgment, creativity, and leverage, then AI works for us.
If humans are reduced to invisible correction loops for automated systems, then in practice we start working for AI.

The biggest danger is probably not machine rebellion. It’s a world where human intelligence is still necessary, but broken into smaller, less visible, less valued forms of labor.

So the future of cooperation with AI depends less on the technology itself and more on the structure of power around it.

Nika

@chronicles_of_the_desert And that's the thing – we are at the stage of helping AI when it gets stuck. If AI keeps learning from these actions (the ones we helped sort out), at a certain point we could become totally useless and redundant.

I think that most likely we will face a situation where only a few powerful people own or control the systems. But as for ordinary people – what will happen to them is a big question mark for me.

Sai Tharun Kakirala

Great question, and honestly something I think about every day building Hello Aria (our AI assistant that manages your day via WhatsApp and iOS). Right now, AI works for us - it handles reminders, task management, calendar syncing, all conversationally. The "humans helping AI when stuck" model is fascinating - it suggests a hybrid where value flows both ways. I think the dystopian version is not AI replacing humans wholesale; it is humans doing micro-tasks without context or agency. The positive version is AI handling the cognitive overhead so humans can focus on creative, relational, irreplaceable work. That is the bet we are making with Hello Aria - freeing people from coordination overhead makes them MORE human, not less.

Nika

@sai_tharun_kakirala when you know how to cooperate, it is the best possible scenario. I am just afraid of the Terminator scenario :D

Adrin D'souza

We’re currently in the messy “training wheels” phase where humans babysit AI (fixing agents, filming tasks, etc.).

But the whole point of building AI is the opposite: eventually it works for us, not the other way around.

The headcam workers and agent-fixers are temporary scaffolding. Once the models are good enough, that flips.

The real question is whether we build AI that amplifies everyone or just concentrates power. What do you think happens in the next 3–5 years?

Nika

@second_son_of_god Maybe I just think too far in advance, like 20 years from now. By then, AI could be totally independent.

Luca Ardito

I think the key distinction is whether humans stay in the loop as decision-makers or get reduced to exception handlers.

The companies that create leverage will be the ones that design AI around human judgment, not just human cleanup.

Nika

@luca_ardito What if AI totally breaks free from humans and doesn't need any approval, help, or anything from us? What then? We will be useless :D

Tijo Gaucher

honestly feels more like a collaborator day-to-day for me — it handles the boring 20% so i can sit with the weird edge cases longer. the cameras-on-workers thing is where it gets queasy though, what's the line between learning from us and just harvesting?

Nika

@tijogaucher Difficult to say where collecting data from us ends and learning begins, but I am pretty sure it will surpass us in anything we train it on enough.