What's New in AgentReady: Better Compression, Privacy-First API & More
We listened to our community and made some major changes to put privacy first.
A new privacy-first API
We redesigned our API — now the official version — to handle token compression with privacy at its core. We only require your AgentReady key. Your LLM API key stays yours; we never see it:
import requests, os
from openai import OpenAI
# Step 1: Compress messages with AgentReady
res = requests.post(
    "https://agentready.cloud/v1/comp...",
    headers={"Authorization": "Bearer ak_live_116e......"},
    json={"messages": [{"role": "user", "content": your_text}]},
)
compressed = res.json()["messages"]
# Step 2: Send to YOUR LLM with YOUR key
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
response = client.chat.completions.create(model="gpt-4o", messages=compressed)

Here's everything else we shipped:
Optimized compression — the new API compresses data more efficiently, reducing token usage further.
OpenClaw integration — AgentReady now works seamlessly with OpenClaw.
Benchmark page — we created a benchmark page: AgentReady — Make the Web Readable for AI Agents
PIP & NPM packages — integrate AgentReady directly into your Python or JavaScript projects with a single install.
Token usage tracking — better visibility into how your tokens are being used.
Self-hostable version (coming soon) — compress tokens entirely on your local machine. Nothing leaves your environment. The only external call is a license key check against our server.
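If it helps, the two-step flow above (compress with your AgentReady key, then complete with your own LLM key) can be wrapped in a pair of small helpers. This is a minimal sketch: `build_compress_request` and `extract_compressed` are illustrative names of my own, not part of any AgentReady SDK, and the endpoint URL is left out because it is truncated in the example above.

```python
def build_compress_request(text: str, agentready_key: str) -> dict:
    """Assemble keyword arguments for the AgentReady compression call.

    Only the AgentReady key goes into the headers; your LLM API key is
    never sent to this endpoint.
    """
    return {
        "headers": {"Authorization": f"Bearer {agentready_key}"},
        "json": {"messages": [{"role": "user", "content": text}]},
    }


def extract_compressed(body: dict) -> list:
    """Pull the compressed message list out of an AgentReady response body."""
    messages = body.get("messages")
    if not isinstance(messages, list):
        raise ValueError("unexpected AgentReady response: no 'messages' list")
    return messages
```

You would splat the first helper into the HTTP call, e.g. `requests.post(url, **build_compress_request(your_text, key))`, and pass the second helper's result straight to `client.chat.completions.create` as `messages`.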
Get started in seconds
We also streamlined the sign-up flow — you can now register and get your API key in less than 10 seconds here: AgentReady — Make the Web Readable for AI Agents
You can find everything else here:
homepage: AgentReady — Make the Web Readable for AI Agents
docs: Quick Start Guide — Get Running in 2 Minutes | AgentReady