Launched this week

TextCompressor
Reduce your LLM API bill 11–45% with zero code changes
3 followers
TextCompressor is a drop-in proxy that compresses prompts before they reach your LLM — removing stop words and filler while preserving meaning. Point your existing OpenAI client at our API, add one header, done.

→ Light: 16.7% token savings, -2.7pp accuracy
→ Medium: 33.5% token savings, -5.1pp accuracy
→ Aggressive: 45.9% token savings, -6.6pp accuracy

Works with OpenAI, Anthropic, Ollama, LM Studio — anything OpenAI-compatible. No AI used in compression — pure CPU.
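A minimal sketch of the compression idea described above: rule-based stop-word removal with no model involved, so it runs on pure CPU. The stop-word list, function names, and savings calculation here are illustrative assumptions, not TextCompressor's actual algorithm.

```python
# Illustrative only: a tiny stop-word list, not the product's real one.
STOP_WORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "that",
              "in", "it", "for", "on", "with", "as", "be", "this"}

def compress(prompt: str) -> str:
    """Drop stop words; keep all other words in their original order."""
    kept = [w for w in prompt.split() if w.lower() not in STOP_WORDS]
    return " ".join(kept)

def token_savings(before: str, after: str) -> float:
    """Rough savings estimate using whitespace tokens as a proxy."""
    b, a = len(before.split()), len(after.split())
    return (b - a) / b * 100

original = "Summarize the key points of the report and list the action items"
shorter = compress(original)
print(shorter)                                   # Summarize key points report list action items
print(f"{token_savings(original, shorter):.1f}% token savings")  # 41.7% token savings
```

Because the proxy sits in front of an OpenAI-compatible API, existing clients would only need a new base URL and a header — no code changes to the calls themselves.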



You don't know how much I need this. It's been a hassle battling with Claude every time. Thank you @brad_cassels, can't wait to try it out.
@daniel_nwankwo Thanks! The numbers get better on smarter LLMs.