All activity
Djbutterrock left a comment
744B MoE with 40B active is serious scale; impressive to see it close the gap with frontier models. Would love more transparency on real-world agent benchmarks beyond synthetic evals.

GLM-5: Open-weights model for long-horizon agentic engineering
Djbutterrock left a comment
This is such a sharp observation. AI surfaces the hidden gears behind a system: things you could only guess at from docs before. Bugs, bottlenecks, and decision paths suddenly become visible. For me, AI made data flow and dependency chains way more obvious; suddenly you feel the architecture instead of just drawing it.
AI Makes Architecture Visible to Everyone
Musa Molla
Djbutterrock left a comment
Totally agree: this isn't about model memory, it's about workflow design. Tools like Cursor or Codex are great at generating code, but if the context isn't owned or linked to real tasks, every session feels like starting from scratch. A task-first approach like yours makes a lot of sense: context should live in tasks, docs, and decisions, not a clipboard. Curious to see how this approach scales...
Cursor, Codex CLI, Gemini CLI are powerful — but why do they still forget the task?
Mihir Kanzariya
