Qwen-Image-Layered decomposes images into transparent RGBA layers, unlocking inherent editability. You can move, resize, or delete objects without artifacts. Supports recursive decomposition and variable layer counts.
This is a really interesting release from the Qwen team. The model natively decomposes images into layers, which is basically the foundation for any precise editing work.
It is fine-tuned from Qwen-Image. Right now, the Hugging Face demo allows splitting into up to 10 layers, which is usually enough for most workflows.
The model size is a bit heavy. Hopefully, we will see a lighter or distilled version in the future!
Can't wait for SaaS video-editors to appear online to take advantage of this opportunity! 🚀
Hey, so cool to see my ex-company’s product on here! Qwen-Image-Layered is such a cool idea — as someone who’s suffered through way too many manual cutouts, turning images directly into editable layers feels like magic. Really smart approach.
Major props for focusing on such a specific yet painful problem. It's refreshing to see a bigger team move this nimbly and deliver something so genuinely useful.
One tiny thought for the future — if the AI could auto-tag layers while decomposing (like "person-foreground", "text-header", "sky-background"), it’d be even easier to navigate, especially when there are lots of layers.
Anyway, super stoked to see this launch. Congrats to the team! Looking forward to catching up 👏
This was so needed! Using LLM image generators was really limiting without layers.
This is a solid step forward for practical image editing. Native layer decomposition is exactly what’s been missing for precise control, not just “generate and hope.”
Agreed on the model size — a distilled version would make this much easier to adopt in real-world workflows.
Oh man... I can't wait to use this. I have been using Inkscape to do all this manually for so long now. I use it to make some really cool 3D printed art from comics and stuff. I can't wait to try this! Like, I am genuinely super excited! If I was a dog, my tail would be straight up vibrating at this idea! I hope my friends and family don't get mad at all the 3D printed art I start churning out. I hope to someday start making 'limited edition' comic book covers for people. I think this will get me really close.
@junyang_lin I wanted to try it, but isn't there a Windows version?
It is a significant milestone because it addresses the "tangled pixel" problem that has plagued AI image editing. Traditionally, AI edits images by regenerating the entire canvas (or a masked area), which often leads to "drift"—where the surrounding parts of the image change unintentionally.
By shifting from flat pixels to RGBA layers, Qwen-Image-Layered brings professional Photoshop-style logic to generative AI.
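Whatever the model's exact output format, the no-drift property comes straight from how layered images are flattened: each layer is combined with standard Porter-Duff "over" compositing, so editing or moving one layer simply re-reveals the untouched pixels beneath it instead of regenerating them. A minimal numpy sketch (toy 8×8 layers, not the model's actual pipeline):

```python
import numpy as np

def composite_over(layers):
    """Flatten a bottom-to-top list of float RGBA arrays (values in [0, 1])
    using Porter-Duff source-over compositing."""
    h, w, _ = layers[0].shape
    out = np.zeros((h, w, 4), dtype=np.float64)
    for layer in layers:
        a = layer[..., 3:4]  # source alpha
        out[..., :3] = layer[..., :3] * a + out[..., :3] * (1.0 - a)
        out[..., 3:4] = a + out[..., 3:4] * (1.0 - a)
    return out

# Two toy layers: an opaque red background and a small opaque blue square.
bg = np.zeros((8, 8, 4))
bg[..., 0] = 1.0   # red everywhere
bg[..., 3] = 1.0   # fully opaque
obj = np.zeros((8, 8, 4))
obj[1:3, 1:3, 2] = 1.0  # blue square
obj[1:3, 1:3, 3] = 1.0  # opaque only where the square is

flat = composite_over([bg, obj])

# "Move" the object layer: only its own pixels change in the flattened
# result; the background it used to cover is re-revealed intact, with no
# regeneration of surrounding pixels (the "drift" a masked edit risks).
moved = np.roll(obj, shift=(4, 4), axis=(0, 1))
flat_moved = composite_over([bg, moved])
```

Editing a flat bitmap, by contrast, has no record of what was behind the square, which is exactly why mask-and-regenerate approaches have to hallucinate the revealed region.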
Can't wait to see APIs built on top of it
When will the Windows version be released?
Interesting! Soon China will overtake the US in the number of AI projects)