Launching today

Foil
An AI agent that monitors your AI agents
34 followers
All observability tools simply log traces with no context. Foil reimagines how you deploy and maintain agents by learning each agent's behavior and responsibilities. We call these agent profiles, and they evolve as your agent runs. Hallucinations, behavioral drift, loops, grounding failures: caught automatically with context, not just logged. 🎁 Product Hunt Exclusive - 50% off Pro for 3 months with code FOILPH
Foil
Hey PH! We built Foil because we realized we were spending more time monitoring our AI agents than actually building them. We had traces, dashboards, and logs, but making sense of it all was a full-time job. We were the bottleneck, manually reviewing thousands of traces to figure out why and when agents were stuck, hallucinating, etc.
So we asked: what if we reimagined the way AI agents are monitored?
Foil is essentially an AI agent whose job is to watch your other agents. It learns how each one behaves (we call these agent profiles), sets its own health checks (anchors), and flags problems with context: not just "error rate went up" but why and what changed. A customer support bot and a code review agent get evaluated differently, because they should be.
Here's what's under the hood:
👁️ Agent Profiles - Foil automatically learns each agent's behavior: tool patterns, error rates, traffic shape. A living baseline, not a static dashboard.
🎯 Anchors - Auto-generated health checks (e.g. "error rate < 5%") that evaluate every trace against the profile
🔎 Smart Search - Query across all your agents, traces, users, and models in natural language. Ask "which agent has the highest error rate?" and get an instant answer with charts
👣 Tracing - Understand every decision your agent made
🔍 Detection - Hallucinations, behavioral drift, loops, prompt injection, PII leakage, and RAG grounding failures - caught in real-time
📊 Metrics - Per-agent dashboards for cost, latency, quality, and usage over time
🧠 Feedback Loop - Mark false positives or confirm real issues. Foil learns from you and gets more accurate over time
🎇 Multimodal support - We support all file types (documents, images, audio, video, code), which are used for agent training and semantic search
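To make the anchors idea concrete, here's a minimal sketch of how a health check like "error rate < 5%" could be evaluated against a learned baseline. This is illustrative only, not Foil's actual API; every name below is hypothetical.

```python
# Hypothetical sketch of an anchor check: compare a batch of traces
# against a running baseline. Not Foil's real implementation or API.
from dataclasses import dataclass

@dataclass
class AgentProfile:
    # Running mean of per-batch error rates, learned as the agent runs.
    n_batches: int = 0
    mean_error_rate: float = 0.0

    def update(self, error_rate: float) -> None:
        self.n_batches += 1
        self.mean_error_rate += (error_rate - self.mean_error_rate) / self.n_batches

def anchor_error_rate(profile: AgentProfile, traces: list[bool],
                      threshold: float = 0.05) -> dict:
    """Evaluate the anchor 'error rate < 5%' for one batch of traces.

    Each entry in `traces` is True if that trace errored."""
    rate = sum(traces) / len(traces)
    verdict = {
        "anchor": "error_rate < 5%",
        "observed": rate,
        "baseline": profile.mean_error_rate,  # baseline before this batch
        "passed": rate < threshold,
    }
    profile.update(rate)  # the profile keeps evolving with every batch
    return verdict

profile = AgentProfile()
result = anchor_error_rate(profile, [False] * 19 + [True])  # exactly 5% errors
print(result["passed"])  # 0.05 is not strictly < 0.05, so the anchor fails
```

The point of the sketch is the shape of the verdict: an anchor returns the observed value alongside the baseline it was judged against, which is what lets an alert say "what changed" rather than just "error rate went up".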
🎁 Product Hunt Exclusive - 50% off Pro for 3 months with code FOILPH
Try it: run npx @getfoil/foil-js wizard --dir <where_agents_lives> --agent-name <name_for_agent> --agent-description <what_agent_does> --api-key <secret_special_key> in your project. It detects your framework and instruments your code automatically.
Ask us anything here!
Quick start example: github.com/getfoil/foil-examples
How does Foil distinguish between intentional behavioral changes in an agent and problematic behavioral drift when updating its agent profiles over time?
Foil
@mordrag If there's an intentional change that will alter agent behavior, the user can reset learning manually; the profile resets and learns from traces going forward. Otherwise, once an agent reaches steady state we treat it as mature and alert if there is drift.
From personal experience building agents, the first attempt is experimentation: prompting, data, etc. So in the beginning we allow experimentation and guide the user toward what is working and what is not. Users can test prompts, see results, and iterate from there. Once the agent reaches maturity, the profile carries more weight and takes over, guiding the evaluations on the agent's responsibilities, tone, audience, and many other factors to make sure it stays on track.