What are you building, and what does your stack look like?
I am a Computer Science student doing research into how solopreneurs and small startups create new apps and what their stack looks like. Particularly, I'm interested in how you handle things like authentication, billing, and permissions/authorization in your apps.
Let me know what you're working on below and how you're going about it -- I'd love to connect for some quick calls to learn about your product and talk about your process in building it!

Replies
After reading through everyone's projects, it's not easy coming back to post mine. I built a simple AI-assisted item creator for B2C and B2B called Porkchop (https://porkchopped.xyz). I've tried to make the creation process as quick and easy as possible while still providing quality. Biggest lessons learned so far: always do customer discovery and product validation first, and building is only half the battle. For billing I've tested a few platforms and landed on Stripe for now, but I'm definitely open to alternatives. If you visit the site you'll see I just use Google OAuth for authentication; it was pretty easy to set up. If you have any questions, just let me know.
building an open-source AI dashboard for PM teams. the interesting part right now is the governance layer - tracking and triage are mostly automated, but who governs what the agents decide is still wide open. that's the part most tools skip.
Building TokenBar, a macOS menu bar app that tracks AI token usage and spending across 20+ providers (OpenAI, Anthropic, Cursor, Gemini, Copilot, OpenRouter, etc.).
Stack: Swift + SwiftUI, native macOS. No Electron, no web views. Everything runs locally on your Mac. I chose native because menu bar apps need to be lightweight and fast. Nobody wants a 200MB Electron app just to show a number in their menu bar.
The trickiest part of the stack was handling all the different provider APIs. Each one returns usage data in a completely different format, with different auth methods and different rate limits. It is basically 20+ micro-integrations stitched together.
tokenbar.site if anyone wants to check it out. $5 one-time purchase.
Building TokenBar (tokenbar.site) - a native macOS menu bar app that tracks your AI spending across 20+ providers in real time.
Stack: Swift + SwiftUI for the entire app. No Electron, no web views, fully native. Uses each provider's API to pull usage data and processes everything locally on the user's machine. Zero cloud infrastructure needed which keeps costs at basically zero.
The idea came from my own frustration. I was paying for ChatGPT Plus, Claude Pro, Copilot, Cursor, and making API calls for side projects. Had no idea I was spending $180+/month until I manually added it all up. Built TokenBar to automate that tracking.
Pricing: $5 one-time (Basic) / $10 one-time (Pro). No subscriptions - which feels right for a tool that helps you manage subscription fatigue.
Biggest challenge so far: each AI provider has a different API format for usage/billing data. Normalizing all of that into a clean unified view took way more work than expected. But the result is you just see one number in your menu bar and can drill down by provider, day, week, or month.
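That normalization step can be sketched as a set of small per-provider adapters that all map onto one unified record type. This is just an illustration in Python (TokenBar itself is Swift), and the payload shapes below are invented, not the real provider API responses:

```python
from dataclasses import dataclass

@dataclass
class UsageRecord:
    provider: str
    tokens: int
    cost_usd: float

# Hypothetical adapters: the payload shapes are invented for
# illustration and are NOT the real provider API responses.
def from_openai_style(payload: dict) -> UsageRecord:
    return UsageRecord("openai", payload["total_tokens"], payload["total_cost"])

def from_anthropic_style(payload: dict) -> UsageRecord:
    tokens = payload["input_tokens"] + payload["output_tokens"]
    return UsageRecord("anthropic", tokens, payload["cost"]["usd"])

ADAPTERS = {"openai": from_openai_style, "anthropic": from_anthropic_style}

def normalize(provider: str, payload: dict) -> UsageRecord:
    """Map a provider-specific payload onto the one unified record type."""
    return ADAPTERS[provider](payload)
```

Once every provider is behind an adapter, the "one number in your menu bar" view is just a sum over the unified records.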
Would love feedback from anyone here who uses multiple AI tools daily. What providers would you most want to track?
Hey Ryan, solo founder here. Building Clinch (clinch.land), an AI job-search copilot: parses your resume, ingests jobs daily, ranks them against your profile with hybrid semantic + keyword search, and can auto-apply in the background.
Stack
- Backend: FastAPI + Celery, Redis as broker + pub/sub, Postgres on Supabase with pgvector for hybrid search. I didn't overthink the vector database choice since the data already lives on Supabase. Still experimenting with the embedding models, though
- Frontend: React + Vite + TS + shadcn, TanStack Query, types generated from the FastAPI OpenAPI spec so frontend + backend never drift
- LLMs: OpenAI (GPT-5-nano for cheap structured calls, heavier models for big prompts) with Claude Haiku as fallback. All prompts and traces in Langfuse
- Auto-apply: I tested some APIs like Skyvern and Browser Use, but ended up building my own
- Hosting: Render (web, celery workers, celery beat, static frontend)
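The hybrid semantic + keyword ranking idea can be sketched in memory, setting aside pgvector: blend a cosine-similarity score over embeddings with a keyword-overlap score. The `alpha` weight and the plain-Python vectors here are assumptions for illustration, not Clinch's actual scoring:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query_terms, doc_terms):
    """Fraction of query terms that appear verbatim in the document."""
    q, d = set(query_terms), set(doc_terms)
    return len(q & d) / len(q) if q else 0.0

def hybrid_score(q_vec, d_vec, q_terms, d_terms, alpha=0.7):
    # alpha blends semantic similarity against exact keyword overlap
    return alpha * cosine(q_vec, d_vec) + (1 - alpha) * keyword_score(q_terms, d_terms)
```

In production the semantic half would be a pgvector distance query and the keyword half full-text search, combined with the same kind of weighted sum.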
Auth, billing, perms — what you asked about:
- Auth: Supabase magic links. Free tier is generous and I haven't outgrown it.
- Permissions / authorization: almost 100% Postgres RLS. The frontend always hits the anon key; the backend uses the service role only for admin and background tasks and always scopes by user_id. Policies comparing user_id to auth.uid() give me per-user access control without writing any middleware. Massively less code than rolling my own perms layer. Worth the Supabase lock-in IMO.
- Billing: not live yet; mostly a Stripe Checkout wiring job.
Observability ended up mattering more than I expected as a solo dev: Langfuse (LLM cost + traces), Axiom (structured logs), PostHog (product analytics + error tracking). Hooking all of these up to Claude Code via MCP lets me iterate faster and catch production bugs super fast.
Building Nibble, a consumer safety app that aggregates food, product, drug, and vehicle recalls from government agencies across 13 countries (US, Canada, Australia, UK, Japan, Germany, France, and more). Users get personalized alerts based on dietary restrictions, allergies, location, and the brands they follow.
Frontend: React + Vite + TypeScript, Tailwind, ~97K LOC. 13 i18n locales (including Arabic RTL). PWA with Capacitor for Android.
Backend: Express.js API, Supabase for auth + Postgres DB. 41 cron-driven ingestion pipelines that scrape/parse government recall feeds daily. Some are clean APIs, some are HTML scraping, a few require PDF parsing. Keeping them all healthy is the hardest part of the whole project.
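One way to keep that many pipelines healthy is to isolate each one behind a registry so a single broken scraper can never take down the nightly run. A minimal sketch of that pattern, with invented pipeline names rather than Nibble's real 41:

```python
PIPELINES = {}

def pipeline(name):
    """Register an ingestion pipeline under a name."""
    def register(fn):
        PIPELINES[name] = fn
        return fn
    return register

@pipeline("fda_recalls")
def fetch_fda():
    # A real pipeline would hit an agency feed, scrape HTML, or parse PDFs.
    return [{"source": "fda", "product": "example item"}]

@pipeline("broken_feed")
def fetch_broken():
    raise ValueError("feed layout changed")

def run_all():
    """Run every pipeline; one failure must never stop the rest."""
    results, failures = [], {}
    for name, fn in PIPELINES.items():
        try:
            results.extend(fn())
        except Exception as exc:
            failures[name] = str(exc)  # surfaced to a health dashboard/alert
    return results, failures
```

The failures dict is the interesting output: government sites change layouts without notice, so per-pipeline error reporting is what makes 41 scrapers maintainable solo.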
Auth: Supabase Auth with email + Google OAuth. Supabase's rate limits are brutal (2 attempts/hour, at least on some paid tiers), so everything gets validated client-side first.
Billing: Stripe for web, RevenueCat for native (iOS/Android). Monthly ($2.99) and Annual ($24.99). Free tier gives you full recall search, alerts, and 3 profiles with 10 scans/day. Premium unlocks unlimited profiles, scans, bookmarks, and household sharing (accounts invited to your 'family' receive recalls and can send recall alerts to other family members). Founding member discount is feature-flagged for early adopters. Core safety alerts stay free because the person checking if their baby formula got recalled shouldn't need a credit card.
Hosting: Railway for both web and API. 4 separate cron services on Railway handling the ingestion schedules.
Solo dev, ~158K LOC total. The stack is straightforward but the data layer is where all the complexity lives.
I've spent a lot of time building smaller tools and Chrome extensions, but today I'm launching my biggest project yet: Vynly (you can check my maker badge for the launch!).
It’s a social platform dedicated to AI imagery and agent content.
Stack-wise, making the jump from lightweight extension architecture to a platform that can handle fast AI agent interactions and heavy media hosting has been a massive but fun learning curve.
Has anyone else here recently made the jump from building micro-tools to larger social platforms?
Building Monk Mode, a Mac app for people who need the useful parts of YouTube, X, and Reddit without getting sucked into the feeds.
The whole idea is feed-level blocking instead of full site blocking, so you can still open direct links, search, DMs, etc. It blocks stuff like YouTube Home + Shorts, X For You, and Reddit front pages.
Stack is native macOS with local rules because I wanted it fast and hard to bypass. Site is mac.monk-mode.lifestyle if you want to see it.
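Feed-level blocking boils down to matching URL paths against per-site prefix rules instead of blocking whole domains: the home feed is blocked, a direct video or post link is not. A rough sketch of the idea, with illustrative rules and shown in Python rather than the app's native Swift:

```python
from urllib.parse import urlparse

# Illustrative rules, not the app's real rule set: block the feed
# entry points, leave direct-content paths (watch pages, profiles) alone.
BLOCKED = {
    "www.youtube.com": {"/", "/shorts"},
    "x.com": {"/home"},
    "www.reddit.com": {"/", "/r/all"},
}

def is_blocked(url: str) -> bool:
    u = urlparse(url)
    rules = BLOCKED.get(u.netloc, set())
    path = u.path or "/"
    for prefix in rules:
        if prefix == "/":
            if path == "/":       # site root = the home feed
                return True
        elif path == prefix or path.startswith(prefix + "/"):
            return True
    return False
```

So `youtube.com/shorts/...` gets blocked while `youtube.com/watch?v=...` still loads, which is the whole point of feed-level rather than site-level blocking.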
I'm building an image-processing SaaS (productbg). Stack:
- Laravel, Blade, Tailwind, PHP, MySQL (full stack)
- Stripe
- Python for AI
- VPS servers for the frontend (OVH)
- external servers with GPUs for processing
I'm using AI tools, and I bought a bunch of GPU servers that I host at my home/personal office (I've got optical fiber with 1 Gb/s bandwidth) to do the AI image processing. It's way cheaper than renting hosted GPUs, but it requires some logistics (electrical safety, redundancy, ...).
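The redundancy piece can be as simple as round-robining jobs across the home GPU boxes and skipping any that fail a health check. A minimal sketch with placeholder hostnames and a pluggable health check; the real setup would ping each worker over HTTP:

```python
import itertools

# Placeholder hostnames for the home-hosted GPU boxes.
WORKERS = ["gpu-1.local", "gpu-2.local", "gpu-3.local"]
_rr = itertools.cycle(WORKERS)

def pick_worker(is_healthy):
    """Round-robin over the GPU boxes, skipping any that are down."""
    for _ in range(len(WORKERS)):
        worker = next(_rr)
        if is_healthy(worker):
            return worker
    raise RuntimeError("no GPU workers available")
```

If one box loses power or drops off the network, jobs just rotate through the remaining ones, which is the cheap-and-cheerful version of the redundancy a hosted GPU provider would give you.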
This is a great research topic, Ryan. As a solo founder launching PictaBase today, I can tell you that for a "pro-sumer" tool, the stack isn't just about what's easy—it's about what's provable and sustainable.
I built PictaBase to be a relational visual database for creators (born from my 30 years in Hollywood post-production). Here are the hard facts on the stack:
The "Hardened" Backend
Language: PHP 8.4 (using 100% strict_types).
Total Codebase: 38,500 lines.
Static Analysis: Hardened at PHPStan Level 8. This was non-negotiable for me to ensure the app is professional-grade and error-free.
Development Hack: I used a "Synthetic Peer Review" workflow—pitting Gemini and Opus against each other to review every line of code before launch.
The Infrastructure (Privacy-First)
Authentication & Permissions: We use Project-scoped signed cookies via CloudFront to manage access to high-res assets.
Storage (The "Pixel-Blind" Architecture): This is the most important part. Image bytes never transit our application server. We use S3 presigned POST for direct browser-to-bucket uploads. This protects user privacy and reduces our server load.
Data Sovereignty: Every tag and note is written to a .meta.json sidecar file in the user's own S3 bucket. No "data ransom" here—if you leave PictaBase, you take your metadata with you.
Billing & Sustainability
Model: Lifetime Deal (LTD) with future storage top-ups to monetize.
Sustainability Hack: We manage egress costs by using CloudFront-backed derived assets (thumbnails/web-optimised versions), which makes the lifetime model economically viable for us even with high-res creators.
I'd be happy to jump on a call and show you how a 30-year film vet ended up building a PHP 8.4 powerhouse!
One question for your research: Are you finding that most solopreneurs are leaning toward "No-Code/Low-Code" tools, or is there a resurgence in "Hard-Coded" rigor like what we're doing?