Chris Messina

GPT‑5.3‑Codex‑Spark - An ultra-fast model for real-time coding in Codex

15x faster generation, 128k context, now in research preview for ChatGPT Pro users. Codex-Spark is optimized for interactive work where latency matters as much as intelligence. You can collaborate with the model in real time, interrupting or redirecting it as it works, and rapidly iterate with near-instant responses. Because it’s tuned for speed, Codex-Spark keeps its default working style lightweight: it makes minimal, targeted edits and doesn’t automatically run tests unless you ask it to.
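A minimal sketch of trying it from the Codex CLI, assuming the CLI's existing --model override applies here. The exact model slug below is an assumption, not confirmed; check codex --help or the release notes for the real one:

    # Slug is assumed, not confirmed; verify against the release notes.
    codex --model gpt-5.3-codex-spark "make the smallest edit that fixes the type error in parser.ts"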

Replies

Chris Messina

Seems like a competitor to Windsurf's SWE-1.5 model — aimed at quickly fixing obvious code problems w/o burning excessive tokens.

The cost of intelligence keeps coming down!

Wilco Kruijer

@chrismessina But they didn't announce pricing for this, did they? I think that's really going to make or break this.

Johnny Lagneau

Great point @chrismessina. The token efficiency vs "agentic loop" trade-off is the real battleground right now. We're seeing a massive shift in how these models are being used for brand visibility too (tracking AI perception). Would love to get your thoughts on that angle for an upcoming report.

Mike Ciesielka

Pretty quick turnaround from the Cerebras partnership

Mike Ciesielka

Also saw this tip from @steipete about how Codex users can extend some of the functionality added for Spark to other models: https://x.com/steipete/status/2022130415839195433

Daniel Dewar

The 128k context window combined with real-time collaboration is exactly what we needed for our internal dev workflows. The minimal-edit approach makes iteration so much faster than with heavier models. Are there plans to expose the model through an SDK for integration into build pipelines?

tmatsuzaki

I don't expect Codex to be fast.
What I truly love about this product is its accuracy.

Speed is nice, of course, but if it comes at the cost of accuracy, it ends up no different from Claude or Gemini.