
Hot100.ai
The weekly AI project chart judged by AI
167 followers
The weekly chart for AI-powered projects, judged by Flambo. Discover standout apps built with Cursor, Bolt, v0, Replit, Lovable, Claude Code and more. Flambo scores every submission on innovation and utility. New rankings every Monday.
Hot100.ai
Just a quick (if a little lengthy) update on some of the work in January and since launch.
Somewhat quietly, and amidst a January of upside-downness (not a Stranger Things reference in the slightest; I could barely stomach the first episode of the final season), the project hit a couple of milestones. I've been using the Token Burner to fix some user-facing issues and optimise the data layer for our machine friends.
I've (well, ChatGPT has) compiled the main updates into 'Slack-esque', 2014-style release notes for a somewhat humorous yet informative overview. I (the human) will now make some small adjustments to a Figma file so the image is on-brand and accompanies the release notes posted below. Read on for the highlights : )
Hot100.ai — Release Notes (Jan 2026)
The Hot 1,000
This started as a chart. It’s now a little dataset.
1,000 projects are now live on Hot100.ai. Tools, side projects, experiments, and things that somehow shipped. Thanks to everyone submitting, voting, and stress-testing this with us.
Categories (finally)
Scrolling is no longer a requirement.
All 1,019 projects are now grouped into 11 categories, making it easier to browse with intent rather than as a character-building exercise. Handled lovingly with Supabase:
Writing & Content
Image & Design
Developer Tools
Productivity
Education
E-commerce
Analytics & Data
Communication
Audio & Video
Health & Wellness
AI agents can read us
Hot100 is now optimized for agents and answer engines.
I've added:
llms.txt and GEMINI.md
Semantic search for natural language queries
Category-aware responses
Cleaner structured data for LLMs
When an AI is asked what tools to use, Hot100 can answer with actual data.
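The actual contents of the Hot100 llms.txt aren't shown here, but as an illustration of the convention, a minimal file following the llms.txt format (a markdown summary plus curated links; the paths below are hypothetical) might look like:

```markdown
# Hot100.ai

> The weekly chart for AI-powered projects, judged by an AI (Flambo).
> Every submission is scored on innovation and utility; rankings update each Monday.

## Charts

- [Current chart](https://hot100.ai/): this week's top-ranked projects
- [Categories](https://hot100.ai/categories): projects grouped into 11 categories
```

The idea is that an answer engine can fetch this one file and get a clean, link-annotated summary instead of scraping the HTML.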
MCP upgrades
The MCP endpoint has been expanded and hardened.
Chart + category access
Stable schemas for rankings, scores, and metadata
Faster responses for agent workflows
Designed for live discovery use cases
If you’re building agents, this is now usable infrastructure.
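To make "stable schemas" concrete, here's a sketch of how an agent might type and parse a rankings payload from the endpoint. The field names (`rank`, `innovation`, `utility`, `score`) are assumptions for illustration, not the actual Hot100 MCP schema:

```python
from dataclasses import dataclass


@dataclass
class RankedProject:
    """One chart entry as an agent might receive it (hypothetical field names)."""
    rank: int
    name: str
    category: str
    innovation: float  # 1.0-10.0
    utility: float     # 1.0-10.0
    score: float       # average of the two axes, one decimal place


def parse_rankings(payload: dict) -> list[RankedProject]:
    """Parse a rankings payload into typed entries, sorted by chart position."""
    entries = [RankedProject(**item) for item in payload["rankings"]]
    return sorted(entries, key=lambda e: e.rank)


# Example payload an agent workflow might receive:
sample = {
    "rankings": [
        {"rank": 2, "name": "B", "category": "Developer Tools",
         "innovation": 8.0, "utility": 7.5, "score": 7.8},
        {"rank": 1, "name": "A", "category": "Productivity",
         "innovation": 9.0, "utility": 8.5, "score": 8.8},
    ]
}
top = parse_rankings(sample)[0]
print(top.name)  # A
```

The point of a stable schema is exactly this: once the field names and types are fixed, agent-side parsing like the above doesn't break between chart updates.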
Premium Discovery Index (PDI) powered by Stripe
A small boost, not a shortcut.
PDI is a one-time $19 upgrade for approved projects:
Better visibility (more on the details of this another time)
A premium badge
Priority placement when scores are tied
The AI judge still decides quality.
Paid submissions: removed (didn't work at all)
I tried a $5 submission fee.
It mostly just stopped people submitting.
So I turned it off.
That one’s on me — experiment run, data collected, lesson learned. Submissions are free again.
Good stuff. Best of luck with the launch!
Thanks for sharing your learnings about the platform and your methods at AI Meetup Copenhagen last month ⚡
I don't have time to test hundreds of tools myself, so the idea of publishing usage data and metrics about which IDEs, models, methods and frameworks are gaining traction is what's most interesting to me.
Those kinds of real-world signals are valuable for anyone trying to understand where the market is going - and probably also VCs and other types of investors.
Hot100.ai
@martin_schultz totally! I think that data insight is valuable and interesting. Understanding how consumers build with these tools, it's all a new data set, somewhat. We've had 400+ projects submitted to date and there are patterns emerging. I've got an
and the plan is to send weekly newsletters with some of this telemetry. I did a little bit of analysis last week and pulled a couple of slides together, which I can share here. Thanks again for the invite to the Meetup - happy to come back and talk about how the launch goes etc when there is a spot : )
Playmaker
Congrats on the launch!
Can you give some more detail on how Flambo scores each project? "Innovation" and "Utility" can be interpreted quite broadly so I am curious to know what goes on behind the curtain!
Hot100.ai
@neilswmurray Flambo runs on gpt-4o-mini with a low temperature (0.25), so the scoring stays consistent.
Every project gets evaluated with the same structured prompt.
It looks at a few things:
what the project does and how it’s described
the problem it’s solving
the tools used to build it
and whether there’s a live product to try
It doesn’t dig through repos or do anything heavy like that. It’s judging based on what’s submitted and how well the story and the end result line up. The scoring is two parts:
Innovation — is this bringing something new or interesting?
Utility — is it actually useful, clear in purpose, and understandable?
Both are scored on a 1.0–10.0 scale.
There are also some light adjustments. For example:
small bonus if it’s live and easy to try
small bonus if the project has been security-checked by the builder
small penalty if it’s just a waitlist or extremely vague
Final score is simply the average of Innovation and Utility, rounded to one decimal place.
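Put together, the scoring described above is a small function. The adjustment sizes here (0.2 and 0.3) are illustrative guesses, since the actual magnitudes aren't stated; the average-then-round logic matches the description:

```python
def flambo_score(innovation: float, utility: float,
                 live: bool = False,
                 security_checked: bool = False,
                 waitlist_only: bool = False) -> float:
    """Average the two axes, apply the light adjustments described above,
    clamp to the 1.0-10.0 scale, and round to one decimal place.
    Adjustment magnitudes are assumed, not the real values."""
    base = (innovation + utility) / 2
    if live:
        base += 0.2              # small bonus: live and easy to try
    if security_checked:
        base += 0.2              # bonus: builder ran a security check
    if waitlist_only:
        base -= 0.3              # small penalty: waitlist-only or vague
    return round(min(max(base, 1.0), 10.0), 1)


print(flambo_score(8.0, 7.0, live=True))  # 7.7
```

So a live project scoring 8.0 on innovation and 7.0 on utility averages 7.5 and lands at 7.7 after the (assumed) live bonus.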
For the chart, Flambo’s score is the main signal. Human votes are there too, but they act more like momentum than the deciding factor; they can and will swing scores that are tied, though. Tbh, I've tweaked the scoring model a few times in beta and expect to keep doing that when appropriate.
And for now, I’m still reviewing every project myself. Just keeping an eye on quality and making sure the whole thing feels right as it grows.
Appreciate the question!
Playmaker
Love seeing a fresh take on ranking and discovery, we have so many directories that it’s easy to lose sight of what actually adds value.
I like the idea of scoring projects based on utility. It seems like a no-brainer, but at the same time hard to pull off?
Also, would it be fair to assume certain models or tools might get more upvotes because they have a bigger brand presence in the AI space? Would love to know more about what's behind the scoring system for sure.
Hot100.ai
@rvlt_tv Scoring on utility sounds obvious, but you’re right, it's tricky. The way we’ve handled it is to break it down into simple, consistent checks. Flambo isn’t trying to diagnose code deeply or judge “taste.”
It just looks at what the thing does, how clearly the problem is defined, and whether the execution matches the intent. In the future, you could imagine further work being done to 'prove the utility', but not yet.
On the brand/tool bias question — yes, big names absolutely create gravity. What I've seen so far is that the foundational models / 'big 4' are leading the way in terms of uptake; more projects/builders mention OpenAI, for instance. @Lovable is obviously super popular, but doesn't actually feature much across our site. It's early days; what I'm seeing is that it's become a 'Stack Sport', and I think that's still playing out.
Thoughtful question, cheers, Helder.
Looks great - super interesting to see how this (& AI tooling space) will evolve - it's moving fast!
Also, as the judging is AI-powered, should Hot100 be on the Hot100 list too, or is that too meta? Curious how it ranks against its own criteria - you must have tested it!
Hot100.ai
@soopert Ah, Tom : ) There's a question. I actually have not tested that, in total honesty. But I will endeavour to run that test for you. It has been interesting tweaking things along the way to arrive at a tone of voice (and scorecard) that feels helpful, constructive, but also has a pov on what scores highly. Agreed, moving super fast, exciting times.
Reflag
Congrats on the launch @darlingdash ! Love the focus on scoring projects based on quality rather than largest network of upvoters 💪
Hot100.ai
@makwarth Thanks — really appreciate that. And yeah, we’re launching on Product Hunt of all places, which isn’t lost on me 😄 PH is a great place to share new work, and it's the OG. We want the chart to reflect quality and originality, not just who can rally the biggest crowd on launch day or who their network is.
We know from experience the work that goes into GTM and all the things happening behind the scenes to help ideas and products get eyeballs. Vibe coding and all of these new tools have kicked the door down in terms of who can make their ideas come to life now. Time for something new.
With this approach, I wanted to try and level that playing field for the new generation.
That’s the idea.