Launched this week

TextCompressor
Reduce your LLM API bill 11–45% with zero code changes


TextCompressor is a drop-in proxy that compresses prompts before they reach your LLM, removing stop words and filler while preserving meaning. Point your existing OpenAI client at our API, add one header, done.

→ Light: 16.7% token savings, -2.7pp accuracy
→ Medium: 33.5% token savings, -5.1pp accuracy
→ Aggressive: 45.9% token savings, -6.6pp accuracy

Works with OpenAI, Anthropic, Ollama, LM Studio, and anything OpenAI-compatible. No AI used in compression: pure CPU.
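To illustrate the "remove stop words and filler" idea, here is a minimal sketch of that style of compression. The stop-word list, function name, and example prompt are all illustrative assumptions; TextCompressor's actual filter and its header-based API are not shown here.

```python
# Hypothetical sketch of stop-word style prompt compression.
# STOP_WORDS is a tiny illustrative list, not TextCompressor's real filter.
STOP_WORDS = {"the", "a", "an", "of", "to", "is", "that", "and", "in", "it"}

def compress(prompt: str) -> str:
    """Drop common stop words while keeping word order and casing."""
    kept = [w for w in prompt.split() if w.lower() not in STOP_WORDS]
    return " ".join(kept)

original = "Please summarize the main points of the attached report in a short list"
compressed = compress(original)

# Rough token-savings estimate, counting whitespace-separated words.
savings = 1 - len(compressed.split()) / len(original.split())
```

A real proxy would do this (plus filler removal) on the server side before forwarding the request, so the client code never changes.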