Jan-Hendrik Richter

Reduced Recipes — Skip the Tuscany story. Get to the ingredients.

I love cooking, but these damn online recipe websites have been driving me insane for years.

Ads stacked on ads. Videos that start playing before the page loads. A 2,000-word personal essay about a holiday in Tuscany before you see a single ingredient.

So I built something to fix it.

Recipe data cannot be copyrighted. Ingredients, method, and timings are factual information. They belong to everyone. So I built a crawler that extracts exactly that and discards everything else. A couple of days later I had a fully distributed architecture on Cloudflare at roughly 1/100th the cost of a conventional AWS setup.

170,000+ recipes and growing daily, every one stripped to its core. Recipes from all over the world, auto-detected and translated to English. Search by what is in your fridge. Shopping lists that merge quantities across multiple recipes automatically. A cook mode built for the kitchen counter, not a desk.
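For the curious, the shopping-list merging boils down to something like this. An illustrative TypeScript sketch, not the actual implementation; the types and names here are made up for the example:

```typescript
// Hypothetical shapes: quantities for the same ingredient and unit are
// summed across recipes, so two recipes needing flour produce one line.
interface Ingredient {
  name: string;     // e.g. "flour"
  quantity: number;
  unit: string;     // e.g. "g", "ml", "piece"
}

function mergeShoppingList(recipes: Ingredient[][]): Ingredient[] {
  const merged = new Map<string, Ingredient>();
  for (const recipe of recipes) {
    for (const item of recipe) {
      // Key on normalized name + unit: "200 g flour" and "100 g flour"
      // merge, but "2 cups flour" stays a separate line.
      const key = `${item.name.trim().toLowerCase()}|${item.unit.toLowerCase()}`;
      const existing = merged.get(key);
      if (existing) {
        existing.quantity += item.quantity;
      } else {
        merged.set(key, { ...item, name: item.name.trim().toLowerCase() });
      }
    }
  }
  return Array.from(merged.values());
}
```

Unit conversion (cups to grams and so on) is the hard part in practice; the sketch above only merges exact unit matches.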

No account needed. No ads. No story about Tuscany.

We will reach 1 million recipes within the month and are targeting 10 million within 6 months. At that point Reduced Recipes would be the largest recipe website in the world by volume.

reduced.recipes


Replies

Jan-Hendrik Richter

Would love your feedback on this. It's completely open source; the repo can be found here: https://github.com/ReducedRecipes/reduced-recipes-monorepo

Prince Kumar

Finally just ingredients and steps, no distractions. Curious how you’re handling source attribution and keeping quality high at scale?

Jan-Hendrik Richter

@prince__kumar Great question. Every recipe links back prominently to the original source with the author name and domain displayed on the recipe card itself. We never claim the recipe as our own and the whole point is to drive people back to the original creator for the full experience, photos, and context.

On quality at scale, we lean heavily on Schema.org structured data rather than scraping prose. Most serious food blogs emit clean ld+json Recipe blocks, which give us machine-readable ingredients and instructions that are already well-structured. If a site does not have Schema.org markup we flag it rather than attempt heuristic extraction; we would rather have fewer recipes than bad ones.
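The extraction step is conceptually simple. A simplified sketch (regex-based script extraction instead of a real HTML parser, and the `SchemaRecipe` type is just illustrative):

```typescript
// Minimal shape of a Schema.org Recipe node; real ones carry many more fields.
interface SchemaRecipe {
  name: string;
  recipeIngredient: string[];
  recipeInstructions: unknown;
}

function extractRecipes(html: string): SchemaRecipe[] {
  const recipes: SchemaRecipe[] = [];
  // Pull out every <script type="application/ld+json"> body.
  const re = /<script[^>]*type="application\/ld\+json"[^>]*>([\s\S]*?)<\/script>/gi;
  let m: RegExpExecArray | null;
  while ((m = re.exec(html)) !== null) {
    let data: unknown;
    try {
      data = JSON.parse(m[1]);
    } catch {
      continue; // malformed JSON: skip rather than guess
    }
    // ld+json may be a single object, an array, or an @graph container.
    const nodes: any[] = Array.isArray(data)
      ? data
      : (data as any)["@graph"] ?? [data];
    for (const node of nodes) {
      const t = node?.["@type"];
      const isRecipe = t === "Recipe" || (Array.isArray(t) && t.includes("Recipe"));
      if (isRecipe) recipes.push(node as SchemaRecipe);
    }
  }
  return recipes;
}
```

The nice property is that every case this sketch punts on (broken JSON, no Recipe node) maps to "flag the site" rather than "emit a bad recipe."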

We also track a schema_valid flag per recipe so we can identify which entries came from clean structured data versus fallback parsing. Right now the overwhelming majority of our index is Schema.org-sourced, which keeps quality consistent.
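The flag itself is roughly "are the fields we actually render present and non-empty." Something like this; the exact rule set is an assumption, not the production logic:

```typescript
// Illustrative schema_valid check: a parsed ld+json recipe counts as valid
// only if name, ingredients, and instructions are all usable.
function isSchemaValid(recipe: any): boolean {
  const hasName =
    typeof recipe?.name === "string" && recipe.name.trim().length > 0;
  const hasIngredients =
    Array.isArray(recipe?.recipeIngredient) && recipe.recipeIngredient.length > 0;
  // Schema.org allows instructions as plain text or as a list of HowToStep nodes.
  const hasInstructions =
    recipe?.recipeInstructions != null &&
    (typeof recipe.recipeInstructions === "string" ||
      (Array.isArray(recipe.recipeInstructions) &&
        recipe.recipeInstructions.length > 0));
  return hasName && hasIngredients && hasInstructions;
}
```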

The honest answer is that quality at 170k is manageable. At 1M it will need more work, specifically around deduplication since the same recipe appears on dozens of sites, and around flagging stale data when the original source updates a recipe. That is on the roadmap.
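One common approach to the dedup problem is fingerprinting: normalize each recipe's ingredient names, sort them, and compare fingerprints across sites. A sketch of the idea, not a committed design; the quantity-stripping regex is deliberately crude:

```typescript
// Fingerprint a recipe by its sorted, normalized ingredient names so that
// "2 cups Flour, 3 eggs" and "3 eggs, 2 cups flour" collide as duplicates.
function ingredientFingerprint(ingredients: string[]): string {
  return ingredients
    .map(i =>
      i
        .toLowerCase()
        // Strip a leading quantity and optional unit, e.g. "2 cups " or "100 g ".
        // The unit must be followed by whitespace so "garlic" is not eaten by "g".
        .replace(/^[\d/.,\s]+(?:(?:cups?|tbsp|tsp|g|kg|ml|l|oz|lbs?)\s+)?/, "")
        .trim()
    )
    .filter(s => s.length > 0)
    .sort()
    .join("|");
}
```

Exact-match fingerprints only catch verbatim copies; catching paraphrased duplicates would need fuzzier matching on top, which is where the real work at 1M recipes lives.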