Hello ProductHunt! 👋
The new Qwen 3.5 small models are now available for iPhone and iPad in Locally AI.
They outperform models four times their size and support both vision and a reasoning toggle. Four sizes are available: 0.8B, 2B, 4B, and 9B (the 9B on supported iPads).
Enjoy!
@adrgrondin The 0.8B option is a smart inclusion. Most apps in this space only ship the biggest model they can fit and then wonder why people bounce after waiting 30 seconds for a response. Having that range lets people actually find what works for their device instead of guessing.
@zaczuo Thanks a lot! Many improvements are planned to make the experience even better 🚀
Great launch 👏
Running powerful models locally on iPhone and iPad is exactly where things are heading — privacy-first, fully offline, no logins, no cloud dependency.
Excited to see Qwen integrated here. Strong reasoning + vision capabilities, and having that fully on-device is a big step forward in user control and data ownership.
Curious about:
– inference speed across different devices
– memory usage and optimization
– how the model download and UX flow are handled
If performance holds up, this could be a serious alternative to cloud-based AI apps. Congrats on the launch 🚀
@mx_mt I would recommend the latest iPhones, but even older models work well. There are models for all iPhones; you choose which size you want to run. The models are not bundled in the app; you choose which one to download once the app is installed.
Hope this makes things clearer!
When is Qwen 3.5 coming to the Mac App?!?
It's killing me to not have it yet - I want to use the 9B model on my Mac!
Qwen 3.5 2B vision on an iPhone 16 Pro is astonishing. This is absolutely the future of device AI. I can't wait for OSes that use AI as the kernel for everything.
Running Qwen 3.5 on-device with vision + reasoning toggle is impressive — how's battery drain on the 4B and 9B models during extended sessions? Are you seeing any thermal throttling on older iPhones, or do you recommend a minimum spec?
Flowtica Scribe
Literally the best app to experience the latest @Qwen3 local AI models on your phone! 🚀
Needle
Curious to check it out!