From citynoise to OpenMarker: Why taking a break was the best move.
I spent months building citynoise, a localised social platform designed to build stronger communities in cities. The feedback was always "it’s cool," but the reality was zero engagement. I fell into the classic trap: I built it, but nobody came. I decided to take some time off from the project to clear my head.
The "Aha!" Moment at University
While taking time off from citynoise, I focused more on university. As a Course Rep, I attended a course feedback meeting where I noticed a pattern. I kept hearing the same two complaints over and over:
Students were frustrated by slow marking turnarounds, deadlines being pushed back, and vague feedback.
Lecturers were drowning in a "marking overload," spending more time grading text than actually mentoring students.
Since most coursework is text-based (an LLM's natural habitat), I realised I could bridge this gap. That was the idea behind OpenMarker.
Introducing OpenMarker (v1.0.1)
OpenMarker is an open-source desktop platform that lets educators run LLMs locally as a marking assistant. This was non-negotiable for me for three reasons:
Data Privacy: Student work never leaves the lecturer's computer.
Infrastructure Costs: Unlike citynoise, I didn't want to be buried under infrastructure costs.
Right Tool for the Job: I always think from the bottom up, and when you approach things that way, you don't need a 30B or 40B (or bigger) model. What you need are good, practical models to start with. v1.0.1 ships with four models to choose from, ranging from 3B to 7B parameters, and the best part is that they run locally on almost any modern machine. Even with just a 3B model, I got good results.
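The post doesn't describe OpenMarker's internals, but to make the "rubric plus small local model" idea concrete, here is a minimal sketch of how a marking prompt might be assembled from a criterion-to-marks rubric before being sent to a local 3B model. The function and field names are my own illustration, not OpenMarker's actual API.

```python
# Hypothetical sketch only: not OpenMarker's real code or API.
# Turns a {criterion: max_marks} rubric into a grading prompt that a
# small local LLM (e.g. a 3B model served locally) could be given.

def build_marking_prompt(rubric: dict, submission: str) -> str:
    """Assemble a rubric-driven marking prompt for a local model."""
    criteria = "\n".join(
        f"- {name} (max {marks} marks)" for name, marks in rubric.items()
    )
    return (
        "You are a marking assistant. Grade the submission below against "
        "each criterion, give a mark out of the maximum, and justify it "
        "in one or two sentences.\n\n"
        f"Rubric:\n{criteria}\n\n"
        f"Submission:\n{submission}"
    )

# Illustrative usage with a made-up rubric:
rubric = {"Argument clarity": 10, "Use of evidence": 10, "Referencing": 5}
prompt = build_marking_prompt(rubric, "The essay text goes here...")
print(prompt)
```

The resulting string would then be passed to whatever local inference backend the desktop app uses; that integration is omitted here since the post doesn't specify it.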
The Results: 7% Variance
I put the tool to the test using my own Year 2 coursework and the actual feedback I received from my lecturers. The results blew me away:
Consistency: OpenMarker achieved only a 7% variation compared to human markers.
Quality: In many cases, the AI feedback was more detailed and actionable than the human equivalent.
The best part is that these results came from a 3B model with a good marking matrix and prompts. In v1.0.2, I will work on tuning the model itself to improve performance even further.
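The post doesn't say exactly how the 7% figure was measured, but one plausible metric is the mean absolute difference between AI and human marks, expressed as a percentage of the maximum available mark. A minimal sketch under that assumption (the marks below are illustrative, not real data):

```python
# Hypothetical sketch: one way a "percent variance" between AI and
# human marks could be defined. The post does not specify OpenMarker's
# actual metric, and these numbers are made up for illustration.

def percent_variance(ai_marks, human_marks, max_mark=100):
    """Mean absolute AI-vs-human difference as a % of the max mark."""
    diffs = [abs(a - h) for a, h in zip(ai_marks, human_marks)]
    return 100 * sum(diffs) / (len(diffs) * max_mark)

human = [62, 71, 58, 80]   # illustrative human-assigned marks
ai = [65, 68, 55, 77]      # illustrative AI-assigned marks
print(percent_variance(ai, human))  # → 3.0
```

Other definitions (e.g. per-criterion variance, or variance relative to the human mark rather than the maximum) would give different numbers, so the choice of metric matters when comparing results.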
OpenMarker is not a Replacement; It’s an Assistant
The goal of OpenMarker isn't to replace educators. It's to handle the "heavy lifting" of grading long texts so educators can spend their energy where it actually matters: reaching out to and spending time with students.
What’s Next?
I haven’t given up on citynoise. I’m taking the lessons I learned about "solving real pain" and planning a proper relaunch on Product Hunt.
For OpenMarker v1.0.2, I’m working on:
Reducing that 7% variance even further.
Shipping improvements I’ve already identified, such as editable marking sections and support for different file types.
Letting the LLM mark assignments by imitating the lecturer’s style, so the feedback not only follows their grading criteria but also reflects their tone and phrasing.