Smarter AI for real-code design. Create and refine layouts faster with components from open-source or custom libraries that follow your design system—so every idea moves closer to production.
Replies
What’s the most surprising thing users are doing with image upload so far?
UXPin Merge
@almasivic Thanks for the question! Probably how often people upload quick sketches or screenshots of existing UIs and use them as a starting point for hi-fi layouts. It’s a simple flow, but interesting to see in practice.
This looks great! Curious though, how do you prevent the AI from inventing variants?
@thomas_dupuy2 Great question Thomas.
The short version is: we anchor generation/refinement to real component libraries + real props, so the AI selects only from what exists instead of inventing new variants. That’s the big advantage of a code-backed approach vs generic mockup AI.
And for the Custom Library AI, the goal is even stricter:
when your library is connected via Git, we want the AI to learn your actual components/props/tokens, so it stays inside your rules by default rather than improvising.
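To make the grounding idea concrete, here’s a rough sketch (the component names, props, and registry shape are invented for illustration — this is not UXPin’s actual implementation): any AI-proposed node gets validated against a registry of known components and props, and anything outside the design system is rejected rather than rendered.

```typescript
// Hypothetical sketch: validate AI-proposed UI nodes against a known
// component registry so nothing outside the design system slips through.
type Registry = Record<string, Set<string>>; // component name -> allowed props

const registry: Registry = {
  Button: new Set(["variant", "size", "disabled"]),
  TextField: new Set(["label", "placeholder", "required"]),
};

interface ProposedNode {
  component: string;
  props: Record<string, unknown>;
}

function validateNode(node: ProposedNode, reg: Registry): string[] {
  const errors: string[] = [];
  const allowed: Set<string> | undefined = reg[node.component];
  if (!allowed) {
    errors.push(`Unknown component: ${node.component}`);
    return errors;
  }
  for (const prop of Object.keys(node.props)) {
    if (!allowed.has(prop)) {
      errors.push(`Unknown prop "${prop}" on ${node.component}`);
    }
  }
  return errors;
}

// A node inventing a "glow" prop would be flagged instead of shipped:
validateNode({ component: "Button", props: { glow: true } }, registry);
```

The same check works for a Git-connected custom library: the registry is just built from your real components and tokens instead of a hard-coded map.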
This is awesome! Curious, which library and LLM combo gives the best results right now?
@dtt071 Hey Donal! This is definitely a personal preference, but for myself - I've used MUI + Claude Sonnet 4.5 the most.
We did just add GPT 5.1 which is giving us some amazing results, especially with ShadCN - so I'm keen to test this more.
Yessssss. I love this direction: AI that helps you stay inside your system.
@uwedreiss exactly!!
Super interesting. How do you handle accessibility defaults when AI refines layouts?
@harmain_shehzad Love this question - and it’s one of the big reasons we’re so opinionated about real, code-backed components.
Our baseline approach is:
Leverage the accessibility baked into the underlying libraries (MUI, shadcn, AntD, React Bootstrap, etc.).
When the AI is refining layouts using real components instead of drawing new UI from scratch, we inherit the semantics/ARIA patterns those systems already handle well.
Prefer “refine within structure” vs “rewrite the structure.”
Most of the refinement flow is aimed at adjusting hierarchy, spacing, composition, and content without breaking the semantic intent of the component tree.
Design-system grounding helps safeguard patterns.
Especially for enterprise libraries, keeping AI inside known component/prop boundaries is a practical way to avoid stray, inaccessible novelty UI.
We’re also actively thinking about the next layer of trust here:
such as more explicit a11y guardrails and checks during refinement (e.g., nudges when contrast/state/structure looks risky).
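The “refine within structure” point can be sketched in toy form (a hypothetical illustration, not the actual refinement engine): a refinement step is allowed to touch layout-only props, but never the component type or semantic props like labels and ARIA attributes, so the accessibility inherited from the library survives the edit.

```typescript
// Hypothetical sketch of "refine within structure": a refinement may change
// layout-only props, but never the component type or semantic props.
const LAYOUT_PROPS = new Set(["spacing", "gap", "order", "direction", "align"]);

interface UINode {
  component: string;
  props: Record<string, unknown>;
}

function applyRefinement(node: UINode, changes: Record<string, unknown>): UINode {
  const next: UINode = { ...node, props: { ...node.props } };
  for (const [key, value] of Object.entries(changes)) {
    if (LAYOUT_PROPS.has(key)) {
      next.props[key] = value; // safe: presentation only
    }
    // semantic props (label, role, aria-*) are deliberately left untouched
  }
  return next;
}
```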
This looks great! I am going to test shortly. Without getting ahead of myself… what’s next?
@zem_service great to hear!
We will be launching our upgraded AI assistant in early 2026 (likely January), and then continuing to roll out our Enterprise offering… and a little thing called UX Playground (more to come 😉)
Does the image-to-layout flow map to real components reliably, or does it still need manual cleanup?
@rehan_anwar2 Hey Rehan!
Short honest answer: it’s pretty solid for common patterns, but you should expect some cleanup on complex screens.
What we’re seeing so far:
Best results: dashboards, forms, tables, standard marketing sections — especially when the target library has clear, well-documented primitives (MUI/shadcn/AntD).
Where it can wobble: highly bespoke layouts, dense enterprise UI, or screens with lots of custom “house components” that don’t exist in the chosen OSS library.
The big improvement vs generic image-to-UI tools is that we’re aiming to map to real components + props, not just visually similar blocks.
Even when it’s not perfect, you usually end up with a real starting point you can refine with prompts instead of rebuilding from scratch.
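The “real components, not visually similar blocks” idea boils down to a matching step. A rough sketch (the feature names and capability map are invented for illustration, not the actual pipeline): each visually detected block is scored against what the library’s components can actually do, and the best match wins, with a plain container as the fallback.

```typescript
// Hypothetical sketch: map a visually detected block to the closest real
// component by matching detected features against component capabilities.
interface DetectedBlock {
  features: Set<string>; // e.g. "rows", "header", "sortable"
}

const capabilities: Record<string, Set<string>> = {
  Table: new Set(["rows", "columns", "header", "sortable"]),
  List: new Set(["rows"]),
  Card: new Set(["header", "media"]),
};

function mapToComponent(block: DetectedBlock): string {
  let best = "Box"; // fallback: generic container
  let bestScore = 0;
  for (const [name, caps] of Object.entries(capabilities)) {
    let score = 0;
    for (const feature of block.features) {
      if (caps.has(feature)) score++;
    }
    if (score > bestScore) {
      bestScore = score;
      best = name;
    }
  }
  return best;
}
```

That fallback case is exactly where the “house components” wobble shows up: when nothing in the library scores well, you get a generic container and some manual cleanup.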
If you’ve got a specific screen type in mind I'd love to test it out!
What kind of prompts work best for layout refinement?
@rachel_abbott Hey Rachel,
Refinement prompts work best when you’re specific about intent + constraints, not just vibes.
If you're ever stuck or confused, check out our AI Prompt library, which shows some great examples of effective prompts and formats.
A simple formula that works almost every time:
“Do X to achieve Y, while keeping Z.”
For example:
“Reorder the layout to prioritize the primary workflow, while keeping the same components.”
“Improve hierarchy for first-time users, while keeping the current density.”
“Make this more dashboard-like, while preserving the table and filter patterns.”
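If it helps, the formula is mechanical enough to template (a toy helper, not a UXPin API):

```typescript
// Toy helper for the "Do X to achieve Y, while keeping Z" prompt formula.
function refinementPrompt(action: string, goal: string, constraint: string): string {
  return `${action} to ${goal}, while keeping ${constraint}.`;
}

refinementPrompt(
  "Reorder the layout",
  "prioritize the primary workflow",
  "the same components"
);
// → "Reorder the layout to prioritize the primary workflow, while keeping the same components."
```

The "while keeping" clause is the part people skip most often, and it's the part that stops the AI from rewriting things you wanted left alone.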
I’d love to see how this handles component props/variants at scale, especially in enterprise/custom design libraries.
@malik_yasir5 Happy to jump on a call and show you what it can do!
Love this! As a design-challenged non-techie, I've always struggled to turn my ideas into page layouts that look good and have clean UX. This makes things MUCH easier. Great work guys.
@ryan_wardell1 Thanks Ryan, this made our day! 🙌
That’s exactly the kind of builder we had in mind for 2.0: people with strong ideas who don’t want to fight layout rules or second-guess UX patterns.
The prompt library + refinement flow should make it way easier to go from “rough vision” to something that looks polished and coherent fast.
If you feel like sharing, what kind of page are you working on right now - landing page, waitlist, dashboard, or something else?