OpenUI - The open standard for Generative UI
Make your AI apps respond with interactive UI components like cards, tables, forms, and charts instead of text. Streaming-native, token-efficient, and works with any AI model (GPT, Claude, M2.5) and agent frameworks like ai-sdk and Google ADK.
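A minimal sketch of the core idea, assuming a hypothetical JSON component spec: the model emits a structured spec instead of prose, and a pluggable renderer turns it into UI. None of these type or function names are OpenUI's actual API.

```typescript
// Hypothetical component spec a model might emit instead of free text.
// Illustrative only — not OpenUI's real schema.
type UIComponent =
  | { type: "card"; title: string; body: string }
  | { type: "table"; columns: string[]; rows: string[][] }
  | { type: "form"; fields: { name: string; label: string }[] };

// Parse the model's raw JSON output into a typed component spec.
function parseComponent(raw: string): UIComponent {
  const spec = JSON.parse(raw) as UIComponent;
  if (!["card", "table", "form"].includes(spec.type)) {
    throw new Error(`Unknown component type: ${(spec as { type: string }).type}`);
  }
  return spec;
}

// A swappable renderer: replacing this function targets a different UI
// framework without touching the model or the spec format.
function renderToText(c: UIComponent): string {
  switch (c.type) {
    case "card":
      return `[card] ${c.title}: ${c.body}`;
    case "table":
      return `[table] ${c.columns.join(" | ")} (${c.rows.length} rows)`;
    case "form":
      return `[form] ${c.fields.map((f) => f.label).join(", ")}`;
  }
}

const raw = '{"type":"card","title":"Order #1042","body":"Shipped yesterday"}';
console.log(renderToText(parseComponent(raw)));
```

In a streaming setup, partial JSON chunks would be accumulated and rendered incrementally rather than parsed once at the end.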



Replies
One good update, here! Congrats on the launch, @pgd!
Thesys
@pgd @neilverma thanks for the support
@pgd @zahle_khan interesting launch.
Looking at how C1 sits between the LLM response and the rendered interface, it feels less like a UI toolkit and more like a middleware layer translating model output into interactive UI.
Especially when the system can transform responses into forms, charts and cards in real time.
Curious how you think about this internally.
Is OpenUI evolving mainly as a UI standard, or closer to an interface runtime layer for LLM-powered applications?
Thesys
@pgd @cauan_martins OpenUI is designed to evolve into a composable system for generating UI with language models. You could easily swap out the language model, the components, or the renderer.
@pgd @zahle_khan That makes sense.
If the model, components, and renderer can all be swapped independently, it starts to feel more like a runtime layer for generative interfaces rather than a traditional UI toolkit.
Interesting direction for building LLM-native applications.
Thesys
@pgd @cauan_martins yes we are laying the foundations to build GenUI
@pgd @zahle_khan That makes a lot of sense.
If GenUI becomes a real paradigm, it feels like the interface layer of AI apps starts shifting from static UI frameworks to something closer to a runtime that translates model outputs into interactive components.
In that sense OpenUI almost looks less like a UI standard and more like infrastructure for how AI systems render interfaces dynamically.
Curious how you see this evolving — do you imagine GenUI becoming something like a shared interface layer across AI applications?
Trangram
creative domain name