We will build a free custom AI model for your agentic workflow. Looking for 3 teams.
We are building NeoSmith, and we are looking for 3 teams to work with right now at no cost.
Here is the deal in one line: point your LLM to ours and we automatically create a dedicated Small Language Model for your exact workflow. Nothing from your end. No dataset, no labeling, no fine-tuning work, nothing. You just keep running your agents the way you already do.
What actually happens is this. NeoSmith reads your production traces, figures out what your workflow is doing, and builds a purpose-built model trained specifically on your task. Not a general small model. A model that has only ever seen your workflow, your inputs, your expected outputs.
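To make the trace-to-training idea concrete, here is a minimal sketch of what turning production traces into supervised training pairs could look like. The trace format and field names (`llm_input`, `llm_output`) are illustrative assumptions, not NeoSmith's actual pipeline.

```python
import json

def traces_to_training_pairs(trace_lines):
    """Convert raw agent traces (JSON lines) into prompt/completion
    pairs suitable for supervised distillation.

    Assumes each trace records the prompt the agent sent to the LLM
    and the response it got back -- field names are hypothetical.
    """
    pairs = []
    for line in trace_lines:
        trace = json.loads(line)
        pairs.append({
            "prompt": trace["llm_input"],       # what the agent asked
            "completion": trace["llm_output"],  # what the large model answered
        })
    return pairs

# Example: two traces from a hypothetical support-triage agent
traces = [
    '{"llm_input": "Classify: refund request", "llm_output": "billing"}',
    '{"llm_input": "Classify: app crashes on login", "llm_output": "bug"}',
]
print(traces_to_training_pairs(traces))
```

The point is that the workflow's own history already contains the labeled dataset: each production call is an input/output pair, so no separate labeling effort is needed.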
The result surprises most people. Cost drops by around 70%, speed goes up roughly 3x, and accuracy goes up too, not down. On a narrow task, a specialist that knows one thing beats a generalist that knows everything, every time.
We need 3 teams to prove this on real production data.
If you have agents running in production right now doing anything repetitive, such as support triage, document extraction, classification, routing, or structured output generation, we will run the full distillation for you for free and hand you back a working model with a full before/after report.
You do not pay anything. You do not commit to anything. You just show us your workflow and we do the rest.
You are a fit if:
✓ You have a real agentic workflow live in production right now
✓ You are paying a growing LLM inference bill
✓ Your workflow does something repetitive and narrow
Comment below with one line about what your agent does. I will reply to every single one.
Website: https://neosmith.ai/

Replies
Interesting idea.
Our team works on document extraction workflows where structured data is pulled from PDFs and invoices and then validated before export. Curious how your model handles documents with inconsistent layouts or complex tables.
Great question. This is exactly the problem a dedicated SLM solves better than a frontier model.
A general-purpose LLM handling inconsistent PDF layouts is guessing. It has no prior knowledge of your specific document structures, your supplier formats, your table conventions, or your validation rules. Every document is a cold start.
A NeoSmith SLM trained on your extraction workflow is different. It learns from your actual production documents, including the messy ones. Inconsistent column headers, merged cells, multi-line rows, rotated tables, footer-bleed into data zones, vendor-specific quirks. All of that gets embedded into the model weights through training on your real data.
For complex tables specifically, the model learns your schema deeply. It knows which fields are required, which are optional, what valid value ranges look like, and how to handle ambiguous cases based on how your team has historically resolved them.
On validation before export, this is where the SLM approach has a compounding advantage. Rather than running a separate validation step, the model can be trained to output structured data that already conforms to your export schema, with confidence signals on fields it is uncertain about. Fewer downstream errors, less manual review.
Happy to walk through how this would work on your specific document types. What does your current extraction pipeline look like?