Hello Product Hunt!
The new Qwen 3.5 small models are now available for iPhone and iPad in Locally AI.
The new models outperform models four times their size and support vision and a reasoning toggle. Four sizes are available: 0.8B, 2B, 4B, and 9B (available on supported iPads).
Enjoy!
@adrgrondin Seeing a vision + reasoning toggle for the Qwen 3.5 small lineup (0.8B, 2B, 4B, 9B) makes on-device model choice feel less like a gamble. Do you show per-device RAM, disk, and battery estimates before download? A default picker and a timeout for reasoning mode would save a ton of trial and error.
@adrgrondin The 0.8B option is a smart inclusion. Most apps in this space only ship the biggest model they can fit and then wonder why people bounce after waiting 30 seconds for a response. Having that range lets people actually find what works for their device instead of guessing.
@zaczuo Thanks a lot! Many improvements are planned to make the experience even better.
Offline + private + no login = the holy trinity that 90% of AI apps ignore because cloud is easier to monetize. Locally AI is betting on the right side of the privacy conversation.
As someone building Fillix, a Chrome extension that makes job hunting embarrassingly easy, the 'no login required' UX decision hits close to home. The best tools get out of your way instantly. Apple Silicon optimization is the cherry on top. Congrats on shipping!
Great launch!
Running powerful models locally on iPhone and iPad is exactly where things are heading: privacy-first, fully offline, no logins, no cloud dependency.
Excited to see Qwen integrated here. Strong reasoning + vision capabilities, and having that fully on-device is a big step forward in user control and data ownership.
Curious about:
- inference speed across different devices
- memory usage and optimization
- how the model download and UX flow are handled
If performance holds up, this could be a serious alternative to cloud-based AI apps. Congrats on the launch!
@mx_mt I would recommend the latest iPhones, but even older models work well. There are models for all iPhones; you choose which size you want to run. The models are not bundled in the app; you choose which one to download once the app is installed.
Hope this makes things clearer!
When is Qwen 3.5 coming to the Mac App?!?
It's killing me not to have it yet. I want to use the 9B model on my Mac!
Qwen 3.5 2B vision on an iPhone 16 Pro is astonishing. This is absolutely the future of device AI. I can't wait for OSes that use AI as the kernel for everything.
Running Qwen 3.5 on-device with vision + reasoning toggle is impressive. How's battery drain on the 4B and 9B models during extended sessions? Are you seeing any thermal throttling on older iPhones, or do you recommend a minimum spec?
Flowtica Scribe
Literally the best app to experience the latest @Qwen3 local AI models on your phone!
Needle
curious to check it out