Parikshit Deshmukh

OpenUI - The open standard for Generative UI

Make your AI apps respond with interactive UI components like cards, tables, forms, and charts instead of text. Streaming-native, token-efficient, and works with any AI model (GPT, Claude, M2.5) and any agent framework, like ai-sdk or Google ADK.

Parikshit Deshmukh
Hey Product Hunt! 👋 Parikshit here, co-founder of https://thesys.dev. Introducing OpenUI, the open standard for generative UI.

Why we built this: Over the past two years, Thesys has powered generative UI for 10,000+ developers through our managed platform, C1. At that scale, we kept hitting the same wall. JSON, the standard format everyone (including us) used for structured UI output, kept breaking in production. It's too verbose, so rendering felt slow. It's too rigid, so custom design systems fought against it. And LLMs kept producing malformed output, because deeply nested JSON isn't what they were trained to generate. We tried better validation and better prompts. The error rates improved; the problems didn't go away.

So we designed a new format. OpenUI Lang uses code-like syntax that mirrors how LLMs actually learned structure: from billions of lines of code.

The results:
🌱 67% fewer tokens than json-render → faster responses, lower cost
⚡️ 3x faster rendering → compared to our previous JSON-based approach
🎯 Near-0% malformed output → LLMs produce valid OpenUI Lang the way they produce valid functions
🦾 Model-agnostic → works with all LLMs, including OpenAI, Anthropic, Gemini, Mistral, Ollama
📦 Framework-agnostic → works with your favorite frameworks, including Vercel AI SDK, LangChain, CrewAI
📱 UI-library-agnostic → hook in your own design system, or popular ones like ShadCN, Radix, and so on

Get started:
📖 Docs → openui.com/docs
🎮 GitHub → github.com/thesysdev/openui
💬 Discord → https://discord.com/invite/Pbv5P...

We're open-sourcing this because we believe generative UI should be shared infrastructure, not locked behind any one platform. If you're building AI interfaces, or thinking about it, we'd love you to try it, break it, and tell us what's missing.

Know more about Thesys: https://thesys.dev
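To illustrate the token-savings claim above: terser, code-like syntax generally tokenizes into fewer pieces than deeply nested JSON. The "OpenUI-style" string below is invented purely for illustration — it is NOT the real OpenUI Lang grammar — and character length is used as a crude stand-in for token count (real savings depend on the model's tokenizer):

```python
import json

# A nested JSON UI spec, the kind of format the post says kept breaking
# in production: verbose quoting, braces, and repeated key names.
json_spec = json.dumps({
    "type": "card",
    "props": {"title": "Q3 Revenue"},
    "children": [
        {"type": "chart", "props": {"kind": "bar", "series": "revenue"}},
    ],
})

# A code-like equivalent. NOTE: this syntax is made up for illustration;
# it is not the actual OpenUI Lang grammar.
code_spec = 'card(title="Q3 Revenue") { chart(kind=bar, series=revenue) }'

# Character length as a rough proxy for token count: the code-like form
# carries the same structure with far less syntactic overhead.
print(len(json_spec), len(code_spec))
```

The structural information is identical in both; the difference is purely how much syntax the model has to emit (and can get wrong) around it.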
Max Zhuk

How does OpenUI handle the challenge of adapting to different UI design paradigms and aesthetic preferences between various AI model outputs, given that these can vary significantly?

Zahle Khan

@zhukmax Interesting point. Currently OpenUI is not opinionated toward any particular design paradigm. For that, we recommend C1 by Thesys, which has been extensively tested to follow your preferences.

Aditya Pandey

Really proud of what we shipped here.

For a year we watched the same three problems surface across 10,000+ developers building AI-generated interfaces: slow rendering, broken output, hard-to-integrate designs. We kept patching. The problems kept coming back.

Turns out they were all symptoms of the same root cause: the format we were using didn't fit how LLMs think.

So we built one that did. The results were immediate. 3x faster, 67% fewer tokens, dramatically more reliable.

Open source and free. Hope it helps.

Denis Akindinov

What types of LLMs and frameworks are currently supported by C1's 2-line integration, and how does it handle UI rendering consistency across different platforms?

Zahle Khan

@mordrag C1 was designed to be LLM- and framework-agnostic. However, we currently recommend GPT-5 and Sonnet 4 for production use. C1 only supports the web today, but we are planning support for native mobile apps in the coming months.

Slava Akulov

MCP integration in 2 lines is a strong hook — the real friction in generative UI isn't rendering components, it's getting the LLM to emit the right structure reliably. We build LLM workflows for structured financial data and the jump from "returns JSON" to "returns interactive UI" is where most teams get stuck. Curious: how does C1 handle edge cases where the model's UI intent is ambiguous — do you fall back gracefully or does the developer define constraints upfront?

Zahle Khan

@slavaakulov We enforce a strict schema; when the model's output breaks the schema, we retry internally to recover the interaction.
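C1's internals aren't public, so here is only a generic sketch of the validate-and-retry pattern the reply describes: parse the model output, check it against a minimal schema, and re-prompt with the error when it fails. All function names and the toy schema are hypothetical, not C1's actual API:

```python
import json

def validate_ui(payload: dict) -> bool:
    # Toy structural check: every component must carry a known type.
    allowed = {"card", "table", "form", "chart", "text"}
    return all(c.get("type") in allowed for c in payload.get("components", []))

def generate_with_retry(call_model, prompt: str, max_retries: int = 2) -> dict:
    """Call the model; when output fails validation, re-prompt with the error."""
    last_error = None
    for attempt in range(max_retries + 1):
        if attempt == 0:
            raw = call_model(prompt)
        else:
            raw = call_model(
                f"{prompt}\n\nPrevious output was invalid ({last_error}). "
                "Return only valid UI JSON."
            )
        try:
            payload = json.loads(raw)
        except json.JSONDecodeError as exc:
            last_error = str(exc)
            continue
        if validate_ui(payload):
            return payload
        last_error = "unknown component type"
    raise ValueError(f"no valid output after {max_retries + 1} attempts: {last_error}")
```

The design choice worth noting: feeding the validation error back into the retry prompt usually recovers far more often than a blind re-roll, because the model can see what to fix.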

Arwen

This is one of those "why didn't this exist sooner" ideas. I'm so tired of AI responses being giant walls of text when what I actually need is a table or a card. The fact that it works with GPT, Claude, and Google ADK with just 2 lines of code is really appealing — nobody wants to be locked into one model these days. One question: how does it handle streaming? Like if the AI is generating a chart in real-time, does it render progressively or wait for the full response? That'd be a dealbreaker for chat-style apps where latency matters.

Zahle Khan

@sparkuu OpenUI is built to be streaming-native. Users can expect to see the first render within 500 ms to 1 s.

Handuo

An open standard for generative UI is exactly what the ecosystem needs. Having 10K+ developers already using this through Thesys gives it real momentum. Congrats on the launch!

Zahle Khan

@handuo thanks for the support!

Lev Kerzhner

That is so cool! LLM visuals tend to be really frustrating. Super curious to see how you resolved this. Shared with our marketing team :)

Daisuke Ishii 石井 大輔

great idea!

Zahle Khan

@ishiid thank you. excited to see what you build.

Daisuke Ishii 石井 大輔

@zahle_khan wow - I will prepare it to show you

Ossy

Loving the sound of this. How much ACC testing was done on this?
Zahle Khan

@orateur Hey Ossy, what do you mean by ACC testing?

Vagt
@orateur Hey, what is ACC?