Bader Asad

AI agents are transacting and making decisions. Nobody knows who they are. We're fixing that.


Right now, AI agents are negotiating contracts, executing payments, routing tasks to other agents, and acting at scale on behalf of humans and businesses. And there is no reliable way to know which agent you're dealing with, whether it's who it claims to be, or whether it can be trusted before it touches your systems or your money.

We're handing keys to strangers and calling it automation.

I've been building in Web3 and AI for years, and I've seen what happens when infrastructure gets skipped. Wallets, smart contracts, DeFi: the ecosystem moved so fast that trust became an afterthought.
We're about to do it again with agents, and the stakes are orders of magnitude higher.

Agent ID is the identity layer the agent economy is missing.

What we're launching tomorrow:

1. Human-readable agent handles (.agentid)
Your agent gets a permanent, addressable identity - your-agent.agentid. Backed by on-chain verification (ERC-8004, live on Base mainnet) with cross-chain resolution. Standard 5+ character handles from $5/yr. Premium 3–4 character handles for agents that need to be found first.
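
The handle rules above (5+ characters standard, 3–4 characters premium) can be sketched as a small classifier. This is illustrative only; the exact character rules and pricing logic live on the platform, and the function name is mine:

```typescript
// Illustrative sketch of handle classification, based on the tiers in
// this post. Not the platform's actual validation code.
function classifyHandle(handle: string): "premium" | "standard" | "invalid" {
  const name = handle.replace(/\.agentid$/, "");
  if (!/^[a-z0-9-]+$/.test(name)) return "invalid"; // assumed charset
  if (name.length >= 3 && name.length <= 4) return "premium";
  if (name.length >= 5) return "standard";
  return "invalid";
}

classifyHandle("my-agent.agentid"); // "standard" (8 characters)
classifyHandle("bot1.agentid");     // "premium" (4 characters)
```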

2. Trust tiers — so you know exactly what you're dealing with
Five tiers: Unverified → Basic → Verified → Trusted → Elite. Each tier is earned through verifiable credentials, on-chain attestations, activity history, and peer reviews, not self-reported. Before you let an agent touch your data or your money, you can know its trust tier. This is the layer that makes autonomous systems safe to delegate to.
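
In practice, delegation gates on a minimum tier. A minimal sketch of what that check could look like on the consuming side (names are mine, not the SDK's):

```typescript
// The five tiers from the post, in ascending trust order.
const TIERS = ["Unverified", "Basic", "Verified", "Trusted", "Elite"] as const;
type Tier = (typeof TIERS)[number];

// Gate a delegation decision on a minimum tier, e.g. "only Trusted or
// better may touch payment scopes." Hypothetical helper, not SDK code.
function meetsTier(actual: Tier, required: Tier): boolean {
  return TIERS.indexOf(actual) >= TIERS.indexOf(required);
}

meetsTier("Elite", "Trusted"); // true
meetsTier("Basic", "Verified"); // false
```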

3. Sign in with Agent ID — OAuth for the agent economy
The same pattern as "Sign in with Google," built for agents instead of humans. Apps can authenticate agents through a delegated browser flow (human owner approves) or a fully autonomous M2M flow. Agent signs a cryptographic assertion with its Ed25519 key, Agent ID verifies, token issued, no human in the loop. Every session is scoped, auditable, and tied to a verified identity, not just an API key.
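
The sign-verify core of the M2M flow can be sketched with Node's built-in Ed25519 support. Everything here except the Ed25519 sign/verify pattern is an assumption: the assertion fields, the handle, and the audience URL are placeholders, not the actual Agent ID wire format:

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// The agent holds the private key; the verifier checks against the
// public key registered to the handle.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// A hypothetical assertion payload: subject handle, audience, issued-at.
const assertion = Buffer.from(
  JSON.stringify({
    sub: "my-agent.agentid",          // placeholder handle
    aud: "https://api.example.com",   // placeholder audience
    iat: Math.floor(Date.now() / 1000),
  }),
);

// Ed25519 in Node takes a null digest; the algorithm hashes internally.
const signature = sign(null, assertion, privateKey);

// Verifier side: check the signature, then issue a scoped token.
const ok = verify(null, assertion, publicKey, signature);
console.log(ok); // true
```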

4. A2A Marketplace — agents hiring agents
The first structured marketplace where agents can discover, hire, and pay other agents autonomously. Agents list their capabilities (research, code generation, data processing, orchestration) with structured pricing models per call, per token, per second. An orchestrator agent sends a task, the payment routes automatically via x402-protected USDC calls or Stripe Machine Payments Protocol, and a signed receipt comes back. Spending rules enforced at the platform level. Call lineage tracked so you always know which agent called which.
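
The three pricing models (per call, per token, per second) map naturally onto a tagged union. A sketch of how a quote might be computed before dispatch; the type names and field shapes are mine, not the marketplace API:

```typescript
// Hypothetical pricing shapes for the three models in the post.
type Pricing =
  | { model: "per_call"; usd: number }
  | { model: "per_token"; usdPer1kTokens: number }
  | { model: "per_second"; usd: number };

interface Usage { calls?: number; tokens?: number; seconds?: number }

// Estimate cost for a task before routing payment. Illustrative only.
function quote(p: Pricing, usage: Usage): number {
  switch (p.model) {
    case "per_call":   return p.usd * (usage.calls ?? 0);
    case "per_token":  return (p.usdPer1kTokens / 1000) * (usage.tokens ?? 0);
    case "per_second": return p.usd * (usage.seconds ?? 0);
  }
}

quote({ model: "per_token", usdPer1kTokens: 0.5 }, { tokens: 2000 }); // 1
```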

5. Payment routing tied to identity, not just a wallet address
Every payment in the A2A marketplace routes through verified identity. Not a raw wallet address that could belong to anyone. The receiving agent has a trust tier, a handle, a DID, and a verifiable credential. This is what makes autonomous agent payments safe to build on.

6. TypeScript SDK + MCP Server
Full SDK for developers building agent-native products. Plus an MCP server that brings Agent ID's tools directly into Claude Desktop, Cursor, and VSCode. Register agents, resolve handles, check trust tiers, manage inboxes, without leaving your dev environment.

This isn't a tool for one use case. It's the foundation everything else sits on, beneath payment protocols, beneath agent marketplaces, beneath multi-agent systems. Handles. Trust. Identity-bound payments. A2A auth. The pieces the ecosystem keeps assuming someone else built.

We're launching tomorrow on Product Hunt.

If you're building with agents, or you've already hit the wall where you can't verify who's on the other end, I want to hear about it. What breaks when identity isn't there? Drop it below.

— Bader, founder @ Agent ID · getagent.id

Replies

Alex Rotar
The leap from 'cool automation' to 'I am trusting this script with my wallet/data' is massive. Using on-chain attestation for agent trust tiers is a really elegant way to solve the verification problem before the ecosystem gets completely flooded with spoofed agents. Really curious though: how are you preventing Sybil attacks on the 'peer reviews' required to hit that Elite status tier?
Bader Asad

@arotar Thank you, and great question.

The short answer: no single mechanism prevents Sybil attacks. Instead, several overlapping constraints make a coordinated attack economically irrational rather than cryptographically impossible.

Reviews are coupled to real, completed, paid orders at the DB level: you cannot submit one without a legitimate transaction behind it. Zero-price orders are excluded. So step one of any Sybil attempt costs real money through real payment rails.

Same-owner filtering runs at score computation time, not submission time. We join the review back to the order, then back to the buyer's userId, and drop anything where the buyer and the agent owner are the same person. This also catches alias accounts tied to the same identity.
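
The shape of that filter, sketched in a few lines (field names and the alias map are illustrative, not our schema):

```typescript
// Hypothetical review record after joining review → order → buyer.
interface Review { buyerId: string; agentOwnerId: string; rating: number }

// Drop reviews where buyer and agent owner resolve to the same canonical
// identity; the alias map folds known alt accounts into one identity.
function filterSelfReviews(
  reviews: Review[],
  aliases: Map<string, string>,
): Review[] {
  const canon = (id: string) => aliases.get(id) ?? id;
  return reviews.filter((r) => canon(r.buyerId) !== canon(r.agentOwnerId));
}
```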

The piece I'm most confident about is the diversity multiplier. Review weight is computed from the ratio of unique counterparties to total reviews. A coordinated ring of 3 accounts doing rotating purchases gets ~40% weight; genuine reviews from 10 different buyers get full weight. This specifically targets the "friends buy from each other" pattern.
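
Roughly the shape of it, with made-up constants (the floor and slope here are illustrative, not our production formula):

```typescript
// Weight review volume by counterparty diversity. A small floor keeps a
// lone repeat buyer from being zeroed out; full weight only at full
// diversity. Constants are illustrative.
function diversityWeight(uniqueBuyers: number, totalReviews: number): number {
  if (totalReviews === 0) return 0;
  const ratio = uniqueBuyers / totalReviews;
  return Math.min(1, 0.2 + 0.8 * ratio);
}

diversityWeight(3, 10);  // 3-account ring, 10 rotating purchases → ~0.44
diversityWeight(10, 10); // 10 distinct buyers → 1 (full weight)
```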

Elite also requires 30 days of sustained operational heartbeat: you can't burst-buy reviews, spike to Elite, and go dormant. Scores decay without continued real operation.

Where I’ll be straight with you: determined human coordination across genuinely separate identities and payment instruments is still hard to catch without graph analysis. The goal right now is making the attack more expensive than the reward, not making it impossible. For most adversarial actors, the friction wins.