Halil Han BADEM

10+ Years of Backend Experience Taught Me How (Not) to Use AI

I want to talk about how I built @MCPCore - a cloud platform where developers create, deploy, and manage MCP servers from their browser - and what 10+ years of backend experience taught me about using AI in production work. Not the hype version. The honest one.

Every idea is already taken. So what?

I'm a backend engineer. I've spent most of my career building server-side systems, and I currently lead a backend team at my company. At some point I wanted to build something of my own. A product. Something real.

But when I looked at what people were shipping, it felt like everything was already done. Every idea I came up with? Someone had built it. That was discouraging for a moment.

Then I realized the problem wasn't my ideas - it was the altitude I was thinking at. Most of us brainstorm at the same level: we think of a project, do a quick search, find out it exists, and start over. But what if you shift your perspective? Instead of building what everyone else builds, build the tools they depend on.

The MCP ecosystem is a perfect example. Anyone can build an MCP server - read the spec, write the code, or even vibe-code the whole thing with AI. But then what? Deploy it somewhere reliable. Set up OAuth. Handle rate limiting. Manage secrets securely. Monitor usage. That's where the real time sink is. Not the code itself, but everything around it.
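To make that time sink concrete, here's the flavor of plumbing every self-hosted MCP server ends up needing. A hypothetical token-bucket rate limiter, as a minimal sketch (names are illustrative, not MCPCore's actual implementation):

```javascript
// Hypothetical token-bucket rate limiter: the kind of code you end up
// writing *around* your MCP server, not the server logic itself.
function createBucket({ capacity, refillPerSec }) {
  let tokens = capacity;
  let last = Date.now();
  return function allow() {
    const now = Date.now();
    tokens = Math.min(capacity, tokens + ((now - last) / 1000) * refillPerSec);
    last = now;
    if (tokens >= 1) {
      tokens -= 1;
      return true;
    }
    return false;
  };
}

// Two requests allowed, the third rejected (no refill configured here).
const allow = createBucket({ capacity: 2, refillPerSec: 0 });
console.log(allow(), allow(), allow()); // true true false
```

Multiply this by OAuth, secrets, and monitoring, and the "around the code" work quickly dwarfs the server itself.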

MCPCore turns all of that into seconds. And I'm not talking about slapping an AI wrapper on existing tools. I mean genuinely reducing the process to: write your logic, hit deploy, done. AI features are part of the platform, but the speed comes from the architecture.

How I actually built it: the 3-layer strategy

Here's where it gets practical. I split the project into three layers, each with a different level of AI involvement. This wasn't accidental - it was a deliberate strategy based on risk and complexity.

Layer 1: Core service - zero AI

This is the foundation. A centralized service that manages all MCP servers on the platform. I designed it for horizontal scaling from day one - when traffic grows, I spin up more instances behind a load balancer with sticky sessions. No architectural changes needed. The maintenance stays simple regardless of how many MCP servers are running.
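For illustration, sticky sessions at the load balancer can be as simple as an `ip_hash` upstream in nginx (a sketch of the pattern, not the platform's actual config):

```nginx
upstream core_service {
    ip_hash;                 # same client IP -> same instance (sticky sessions)
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;   # scaling out = adding instances here, nothing else
}

server {
    listen 80;
    location / {
        proxy_pass http://core_service;
    }
}
```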

For code execution, I needed full isolation. So I built a sandboxed environment with my own SDK. I injected HTTP functions (for API calls) and DB functions directly into the sandbox. This means users just write their logic in a code editor - no imports, no library definitions, no boilerplate. Write your code, return the result. That's it.

Outside the sandbox, I added default try-catch wrappers and built SSRF protection from scratch. Security at this level isn't optional.

I wrote this entire layer manually. No AI assistance for the core logic. Why?

Because AI can write code incredibly fast, but it can also introduce bugs that take an unreasonable amount of time to debug. Entrusting a critical service to AI felt like a bad trade-off. Instead, I used AI only for targeted refactoring of specific modules - places where the logic was already solid and I just wanted cleaner code.

There's a joke that's been going around the dev community:

Before AI: 3 weeks coding, 1 week debugging. After AI: 1 day coding, 6 weeks debugging. :D

It's funny because there's real truth in it. Even with the latest models, AI sometimes makes things worse when it tries to "fix" your bugs. It finds one issue, introduces two more. That's been my honest experience.

Layer 2: API layer - hybrid approach

This is where I started blending manual and AI-assisted work. I set up the foundational structure myself: routers, middleware, controllers, and key utility functions. The patterns were mine.

Then I did something specific: I had Claude analyze my entire codebase first. I wanted it to understand my naming conventions, my error handling patterns, my folder structure - basically learn to code like me. You could call this a "skill" in Claude terms.

Why go through this trouble? Because debugging becomes dramatically easier when AI writes code in your style. When something breaks, I know exactly where to look and why it happened. It's not foreign AI-generated code anymore. It's my code that AI happened to write. The patterns are familiar, the structure is predictable, and the mental model stays intact.
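In practice, the "teach it your style" step can be as simple as a short conventions file in the repo that the model reads before writing anything (contents here are hypothetical, just to show the shape):

```markdown
# Code conventions (read before writing any code)

- Errors: never throw raw; wrap in AppError(code, message) and let the
  error middleware format the response.
- Naming: controllers end in `Controller`, services in `Service`;
  one exported function per route handler.
- Folder layout: routes/ -> controllers/ -> services/ -> repositories/.
- Responses: always `{ data, error }`, never bare payloads.
```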

Layer 3: Frontend - full AI, my architecture

The entire frontend - including the landing page - was built with AI. But "full AI" doesn't mean I threw prompts at it and hoped for the best. I was very deliberate about the setup:

  • First, I picked a specific UI kit and connected its MCP server to Claude. This way, every component it generated used the right design system automatically.

  • Second, I documented all my API endpoints in Postman and fed them directly to Claude. No guesswork about request/response shapes.

  • Third - and this is the key part - I wrote the Redux store setup, the API layer architecture, and my data management patterns (like my specific useSelector conventions) myself. Then I told Claude to follow those exact patterns.

The result: AI handled the design implementation and endpoint integration, but the state management and data flow followed my architecture completely. Best of both worlds.
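The selector conventions mentioned above might look something like this (a hypothetical sketch; the names and state shape are illustrative, not MCPCore's actual store):

```javascript
// Convention: one exported selector per piece of state. Components never
// reach into the state shape directly, so AI-generated components only
// ever call these, and the data flow stays predictable.
const selectServers = (state) => state.servers.items;

// Parameterized selectors are written as factories returning a selector.
const selectDeployStatus = (id) => (state) =>
  state.servers.statusById[id] ?? 'idle';

// In a component: const servers = useSelector(selectServers);
const state = { servers: { items: ['demo'], statusById: { demo: 'running' } } };
console.log(selectDeployStatus('demo')(state));  // running
console.log(selectDeployStatus('other')(state)); // idle
```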

What I learned

After building an entire SaaS product this way, here's my framework for using AI effectively:

Critical infrastructure - write it yourself. Use AI only for refactoring isolated modules. The debugging cost of AI-generated infra code is not worth the speed gain.

Business logic and APIs - hybrid approach. Build the foundation and patterns manually, then teach AI your style before letting it contribute. Your patterns, AI's speed.

Frontend and UI - let AI take the lead, but define the architecture, conventions, and data patterns upfront. AI is remarkably good at UI work when it has clear constraints.

The common thread: AI is a multiplier, not a replacement. The more experienced you are, the better you can direct it. And knowing where not to use it might be the most important skill of all.

Thanks for reading, and happy building!

Replies
Ian Maxwell

The 1 day coding and 6 weeks debugging line was too real. Especially for backend stuff where one small issue can turn into a mess.

Halil Han BADEM
@ian_maxwell2 absolutely!
Kyle Bennett

I liked the shift from idea hunting to infrastructure thinking. That is honestly where a lot of long term value gets built.

Halil Han BADEM
@kyle_bennett6 I agree with you. It's especially needed for the long term.
Leah Josephine

Did your opinion on AI change while building this, or were you already clear from the start that core infra should stay manual?

Halil Han BADEM
@leah_josephine I figured it out during the trial and error process.
Miles Anthony

The layered approach is solid. Keeping AI away from the most sensitive parts feels like the kind of decision people only make after real production experience.

Halil Han BADEM
@miles_anthony2 yeah, we need to keep in mind that a human touch is always needed
Naomi Florence

This was a useful read. A lot of people talk about building fast with AI, but not enough people talk about where speed starts creating hidden cost.

Halil Han BADEM
@naomi_florence1 and sometimes the cost is figured out too late, when we already depend on it
Oliver Nathan

The sandbox setup sounds super interesting. The no imports and no boilerplate part feels like a huge win for developer experience.

Halil Han BADEM
@oliver_nathan3 exactly, figured that out after a lot of trial and research
Hitesh

I can completely resonate with this. The "teach AI your style before letting it contribute" part is underrated advice. Most people skip that step and then wonder why the codebase feels alien.

Halil Han BADEM
@hitesh55 human touch is the key ✨
Sai Tharun Kakirala

This resonates deeply. A decade of production experience teaches you something no tutorial does: AI is a multiplier, not a replacement for good engineering instincts.

Building Hello Aria — an AI assistant in WhatsApp and iOS — we kept running into this. The model can do incredible things, but the "boring" stuff like graceful fallbacks, edge case handling, and knowing when NOT to use AI for something critical... that all comes from backend discipline. Our architecture decisions around message delivery reliability, conversation state persistence, and graceful degradation owe everything to old-school backend thinking.

Launching April 10th on Product Hunt. Every day leading up is a reminder that the judgment layer is irreplaceable.

Umair

the layer framework makes sense but i think the real variable isn't "where you use AI", it's whether you can read what it wrote and know if it's wrong. senior devs avoiding AI on infra still ship bugs, they just mass produce them slower lol. the 3 weeks coding 1 week debugging thing was true before AI too