How do we define “seniority” and career/skill progress in the age of AI?
We keep hearing: “Juniors won’t stand a chance.”
But companies are still opening internships, which suggests something deeper than skill-building still matters: understanding systems, workflows, and how companies actually operate (the management side).
At the same time, AI is changing how we learn:
Instead of building skills from scratch, we often "copy-paste" them.
Instead of trial-and-error, we get near-instant solutions.
For example, I am learning to code through my own project. But when Claude constantly hands me solutions, my level of understanding may end up lower than that of someone who figures these things out through googling and trial and error. With Claude, I do not even feel like a junior yet.
At the same time, experienced developers using AI are becoming:
faster
more precise
more productive
How will we determine seniority and level of progress now that each of us has an AI assistant?
(By years of service? By successfully applying AI-generated code? By real-world outcomes?)
I see this topic as important, especially because salary has always depended on seniority.
It would be good to know how the perception of progress, and the remuneration tied to it, is changing.

Replies
Nika, the result is the only currency. Seniority is no longer your ability to write code, but your ability to take responsibility for a working system.
The more confident and reliable you are in knowing exactly where AI will fail and how that will break the business six months later, the more senior you are. From now on, responsibility will be the difference between a junior and a senior. And industry connections, too.
Flavored Resume
@voizematic I think you've captured this perfectly
BrandingStudio.ai
@busmark_w_nika Seniority was always a proxy for judgment, not hours logged. The problem is we used years of experience as a shortcut for "this person has seen enough things go wrong that they know how to avoid them." AI collapses the time it takes to produce an output but it doesn't collapse the time it takes to develop judgment about which output is actually right.
I taught at university level for over a decade and watched students confuse fluency with understanding constantly, long before AI. What changed with AI is the quality and speed of what you can copy, which makes the illusion of competence more convincing and the underlying gap harder to spot.
My guess is that seniority shifts to how good your questions are, how fast you recognise a bad AI answer, and how well you architect the problem before handing it to the model. In my opinion, the people who will define the new senior tier are the ones who can use AI to think faster without outsourcing the thinking entirely.
Apparent for Gmail
Should I ask AI for the answer to this question? Just kidding.
I tend to agree with the consensus below. This is a much bigger leap than moving from long division to calculators, but I personally agree that the quality of the output is still highly related to the skill of the person using it. The tool requires skill to produce the truly desired final output.