Is AI making knowledge work harder, not easier?
We think yes, and here's why.
When our team at illumi started to fully embrace AI, there were many days when we were all overwhelmed. We were pumped and energized, but our brains couldn't catch up with the knowledge we needed to understand what we were actually building. Execution got cheap overnight, but comprehension didn't keep pace.
HBR recently called it "brain fry," and it matched exactly what we felt. (And you can read here to see who got the most serious brain fry.)
What made it harder was that everyone on our team comes from a very different background, each with a different specialty. We needed each other's expertise to make our AI outputs useful, but even sharing practices across roles wasn't easy. The knowledge existed across the team; it just wasn't moving between people the way it needed to.
We think this is the deeper problem most teams aren't talking about. AI is only as useful as the knowledge you feed it, and in most teams that knowledge is scattered across different people, roles, and tools with no shared layer holding it together.
We're curious: is your team feeling this, and what are you doing about it?


Replies
I can share my own experience: starting with coding agents felt very promising, so I went all in. Within a week I was completely exhausted by the sheer amount of code I needed to review and approve. Whenever something fell outside my understanding, the uncertainty compounded, making each new piece even harder to comprehend. I had to stop and rewrite most of what was produced that week. For the rewrite, I had to sit down with my teammates and significantly tighten our shared understanding of the business rules and requirements.