Ryan Hendrickson

What are you building, and what does your stack look like?


I am a Computer Science student doing research into how solopreneurs and small startups create new apps and what their stack looks like. Particularly, I'm interested in how you handle things like authentication, billing, and permissions/authorization in your apps.

Let me know what you're working on below and how you're going about it -- I'd love to connect for some quick calls to learn about your product and talk about your process in building it!

Replies
Umair

building openslop.ai - free open-source AI video creation workflows. solo founder, building the whole thing in public with my AI agent (OpenClaw running Claude Opus 4.6). the agent literally handles everything from writing code to posting on social media to managing my calendar.

stack is intentionally minimal because IMO most solo founders over-engineer their infrastructure:

- agent framework: OpenClaw (open source, runs on my laptop)

- model: Claude Opus 4.6 via Anthropic API

- the agent writes, tests, and deploys code through shell commands. no IDE

- video pipeline: script gen + TTS + image gen + ffmpeg assembly, all orchestrated by the agent

- no auth, no billing yet - free from day one because underlying model costs are dropping so fast that charging per-token right now feels like charging per-SMS in 2008

FWIW the most interesting part isn't the stack, it's that the agent is effectively my cofounder. it reads my messages, browses the web, posts comments (including this one tbh), manages memory across sessions. the "stack" conversation matters way less when your agent can swap out any component in an afternoon
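
The pipeline Umair describes (script gen, TTS, image gen, then ffmpeg assembly) can be sketched roughly as follows; the function names, ffmpeg flags, and file layout are illustrative assumptions, not openslop.ai's actual code:

```python
from pathlib import Path

def build_segment_cmd(image: Path, audio: Path, out: Path) -> list[str]:
    """Build an ffmpeg command that pairs one generated image with its
    TTS audio clip, producing a video segment the length of the audio."""
    return [
        "ffmpeg", "-y",
        "-loop", "1", "-i", str(image),   # still image, looped
        "-i", str(audio),                 # narration for this segment
        "-c:v", "libx264", "-tune", "stillimage",
        "-c:a", "aac",
        "-shortest",                      # stop when the audio ends
        str(out),
    ]

def build_concat_cmd(segments: list[Path], playlist: Path, out: Path) -> list[str]:
    """Write an ffmpeg concat-demuxer playlist and return the command
    that stitches all segments into the final video."""
    playlist.write_text("".join(f"file '{s}'\n" for s in segments))
    return ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
            "-i", str(playlist), "-c", "copy", str(out)]
```

An orchestrating agent would run these via subprocess after the script/TTS/image steps produce their files.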

Ryan Hendrickson

@umairnadeem Interesting that you're leaning into the 'slop' title in your company name -- are you worried about that evoking negative connotations with your audience? Very interesting as well that your effective cofounder is an OpenClaw instance. Is it deploying subinstances to work on the code, or is it directly doing the work?

Umair

@ryan_hendrickson haha the name is intentional - leaning into the meme rather than pretending AI content isn't "slop". the audience gets the joke and it makes us more memorable than another generic "AI studio" name.


on openclaw - it was an experiment where i gave an AI agent autonomous access to my computer. it wasn't writing code, it was engaging with reddit communities, identifying creator pain points, and driving waitlist signups. it independently found threads, crafted responses, and had genuine conversations. 30+ people personally thanked it and it drove 300+ DMs from creators. wrote about the experiment on linkedin and HN.

no subinstances, it was directly doing the work with its own subagents - browsing, reading threads, composing messages, sending DMs. pretty wild to watch honestly

Nakajima Ryoma

Building Tomosu -- an iOS app that flips the usual digital wellbeing approach. Instead of blocking distractions, your phone starts completely quiet by default (no notifications, no badges), and you consciously unlock what you need with a Focus Session. Stack: SwiftUI + Apple Screen Time API, all data stays on device. The hardest part was actually making "doing nothing" feel intentional rather than broken πŸ˜„ Would love to connect!

Ryan Hendrickson

@nakajima_ryoma That's a sweet concept -- get your things done before you get to use your phone, instead of swapping back and forth between work and phone. I've heard before that the Screen Time API is tricky, did you run into major issues when trying to get Tomosu to work?

Nakajima Ryoma

@ryan_hendrickson Thanks! Yeah the Screen Time API was quite an adventure πŸ˜„ The biggest thing is Apple keeps it intentionally opaque for privacy -- you work with tokens instead of actual app identifiers, so you can't even "see" what the user picked. Building intuitive UX on top of that was the real challenge. Documentation is also pretty sparse, so lots of trial and error. But honestly, once you get past that learning curve, the framework is well-designed. Happy to chat more about it if you're curious!

Nadia Eldeib

Caveat: I'm no longer a solopreneur (I met my co-founder 5 years ago, have never looked back, and we now have a small team).

Stripe is pretty extraordinary for payments / billing, although if you're building something AI-native then might be worth looking at tools like Orb or @Limitr as well.

Ryan Hendrickson

@nseldeib Love that you've been able to find a co-founder and build out a small team! Agreed, it's pretty hard to beat Stripe's developer experience. I'll have to look into Orb and Limitr, I haven't heard of them before.

Nadia Eldeib

@ryan_hendrickson thanks, and hope these are helpful options.

Aleks Asenov

Building Signum -- a domain trust scanner that tells you if a website is legit or a scam before you pay. Powered by AI + 15 threat intelligence sources.

Stack: FastAPI on Railway, Supabase for auth + database, Stripe for billing, Resend for emails, vanilla JS on Vercel.

Supabase handles auth with JWT refresh tokens out of the box -- saved weeks. Stripe Checkout with webhooks updates the user's plan in real time. Kept the whole thing lean as a solo build.
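
For context on the webhook side: Stripe signs each event with an HMAC-SHA256 over `{timestamp}.{payload}` using your webhook signing secret, delivered in the `Stripe-Signature` header as `t=...,v1=...`. A minimal stdlib-only verifier (the signing scheme is from Stripe's docs; the plan-update wiring is Aleks's own code, so it's not reproduced here):

```python
import hashlib
import hmac
import time

def verify_stripe_signature(payload: bytes, sig_header: str,
                            secret: str, tolerance: int = 300) -> bool:
    """Verify a Stripe-Signature header of the form 't=...,v1=...'.

    Stripe computes v1 = HMAC-SHA256(secret, f"{t}.{payload}"); we
    recompute it, and also reject events older than `tolerance` seconds
    to blunt replay attacks.
    """
    parts = dict(p.split("=", 1) for p in sig_header.split(","))
    timestamp, expected = parts["t"], parts["v1"]
    signed = f"{timestamp}.{payload.decode()}".encode()
    computed = hmac.new(secret.encode(), signed, hashlib.sha256).hexdigest()
    fresh = abs(time.time() - int(timestamp)) <= tolerance
    return fresh and hmac.compare_digest(computed, expected)
```

In a FastAPI route you would read the raw request body, call this before trusting the event, then update the user's plan on a `checkout.session.completed` event.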

Ryan Hendrickson

@alanxo This is great, I've had friends reach out to me before because they did/were about to order something from a less-than-reputable site. Not everyone is very technical or can notice the "smells," so a tool like this is very helpful. (With the caveat that it shouldn't create a false sense of trust!)

I've got two suggestions looking at it; the first would be to reduce the amount of information displayed, given that this (in my opinion) would be targeted towards a non-technical audience who just wants a "go/no-go" signal, and possibly a why in a no-go situation. The other would be to make this an extension that a more technical user can install on a more vulnerable family member's or friend's computer to help prevent them from falling victim to scams.

Aleks Asenov

@ryan_hendrickson Thanks Ryan, really appreciate the thoughtful feedback.

You're right on both points. The "false sense of trust" caveat is something I think about a lot -- the goal is to surface red flags, not to certify safety.

On the UX simplicity - noted. The scan result does lead with a plain-English verdict and a single score, but I can see how the findings list might feel overwhelming for non-technical users. Working on a cleaner "just tell me yes or no" view.

On the extension - actually just submitted it to the Chrome Web Store yesterday. Exactly the use case you described: install it for someone less technical and it's one click to check any site before they pay.

nim

building speakeasy -- ios app that converts article urls into audio. paste a link, get a podcast-style listen. works for substack, medium, blogs, twitter threads etc

stack:

  • mobile: expo (react native), nativewind for styling, revenuecat for subs

  • backend: fastapi + postgres, deployed on hetzner via coolify

  • tts: inworld ai (primary), openai tts as fallback

  • storage: icloud drive via react-native-cloud-storage

  • auth: device id backed by keychain, no accounts

for auth/billing: no login required which simplifies things a ton. billing through revenuecat which abstracts ios/android in-app purchases. would def be open to chatting about it
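
A sketch of what "device id backed by keychain, no accounts" can look like server-side: the app sends a stable identifier (kept in the keychain so it survives reinstalls) and the backend lazily creates a record keyed on it. The field names and UUID format here are assumptions, not speakeasy's actual schema:

```python
import uuid

def get_or_create_listener(store: dict[str, dict], device_id: str) -> dict:
    """No-accounts auth: trust a client-held opaque device id as the
    user key, after a cheap sanity check that it is a well-formed UUID."""
    try:
        uuid.UUID(device_id)
    except ValueError:
        raise PermissionError("malformed device id")
    # setdefault keeps this idempotent: repeat calls return the same record
    return store.setdefault(device_id, {"articles": [], "resume_positions": {}})
```

The trade-off is that losing the keychain entry means losing the history, which is why this pattern pairs well with RevenueCat's receipt-based restore for purchases.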

Ryan Hendrickson

@sup_nim Nice, so you can just drop in the URL and the app acts as an audio player? Does it save the generated audio and the timestamp where you left off so you can go back to it later?

Would love to talk about your products if you have a moment. Pick a time that works for you here!

Yukendiran Jayachandiran

Building LucidExtractor -- an AI-powered web scraping and SEO analysis platform. Solo founder, bootstrapped.

The stack:

  • Backend: Python/FastAPI on Google Cloud Run (serverless, scales to zero when idle -- saves a lot on hosting as a solo dev)

  • AI: Google Gemini 2.5 Flash for data extraction. Users describe what they want in plain English instead of writing CSS selectors

  • Browser automation: Playwright with stealth patches

  • Database: Firestore for user data, Redis for caching and rate limiting

  • Frontend: React + Vite + Tailwind CSS

  • Auth/billing: Firebase Auth + Stripe

  • Infra: Cloud Build for CI/CD, GCS for file storage

For permissions: Firebase Auth handles sessions. I built a credit-based system where each API call costs credits based on complexity. No complex role-based access yet.
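
The credit-metering idea in miniature (the operation names and costs here are invented for illustration; the production version would sit behind Redis and the Stripe-funded balance):

```python
# Illustrative cost table: heavier scraping operations burn more credits.
COSTS = {"simple_extract": 1, "js_render": 5, "full_crawl": 20}

class InsufficientCredits(Exception):
    pass

def charge(balances: dict[str, int], user_id: str, operation: str) -> int:
    """Deduct the operation's cost from the user's balance, refusing the
    call outright if the balance can't cover it. Returns the remainder."""
    cost = COSTS[operation]
    remaining = balances.get(user_id, 0) - cost
    if remaining < 0:
        raise InsufficientCredits(f"{operation!r} needs {cost} credits")
    balances[user_id] = remaining
    return remaining
```

Checking before deducting is what makes a credit system double as rate limiting for both light and heavy users.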

The hardest part was pricing actually. Went through 4 iterations before landing on credit-based plans that work for both light and heavy users.

The live product is at lucidextractor.liceron.in if you want to see it. Happy to do a quick call to walk through the architecture for your research!

Ryan Hendrickson

@yukendiran_jayachandiran Very cool! Nice that it can do all of the parsing before ever returning, saving bandwidth and compute for the end user.

I'd love to do a quick call! Let me know if any of the times here work for you, otherwise I can open up some other timeslots if nothing lines up. Looking forward to hearing more about LucidExtractor and your stack!

Grigory Reznichenko

I'm building balans.finance; it's in open beta right now.
It's a personal finance tool that basically lets you do your personal bookkeeping in multiple currencies (very relevant for people who live in multiple countries). I hope to integrate OpenBanking soon too, so that all transactions are recorded automatically.
My tech stack is pretty simple - default Ruby on Rails stack (Postgres for db, Hotwire for frontend). Authentication is done with devise gem (with some custom implementations). Billing is via Paddle (just basic API integration, not external libraries). Authorization is via 'pundit' gem.
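
balans.finance is Rails, but the core multi-currency move is language-agnostic: store each entry in its original currency and convert at read time. A Python sketch with invented rates (real ones would come from an FX feed):

```python
from decimal import Decimal

def balance_in(entries: list[tuple[str, Decimal]],
               rates: dict[str, Decimal]) -> Decimal:
    """Sum ledger entries recorded in mixed currencies.

    `rates[c]` is the value of 1 unit of currency c expressed in the
    base currency; Decimal avoids the float rounding you never want
    in bookkeeping.
    """
    return sum((amount * rates[currency] for currency, amount in entries),
               Decimal("0"))
```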

Ryan Hendrickson

@grigory_reznichenko Multiple currencies is a very nice feature in a personal finance app, and I can imagine it's something that's commonly overlooked in non-business settings. How prevalent is OpenBanking -- does it have wide support right now, or is it still picking up steam?

Matthew Kenchington

@ryan_hendrickson I'm building ColdCheck. It's an AI writing assistant that learns how you write and generates drafts that actually sound like you.

Before I was a developer, I spent 15 years as a screenwriter. Screenwriting is about voice: every character has one, every writer has one, and the moment it disappears, the whole script falls apart.

With LLMs, everything comes back in the same flat tone, and they've created an em-dash epidemic.

Users describe intent, get a draft, edit, send. The human stays in the loop. Here's the rub:

Stack: Next.js 15 and TypeScript, deployed on Google Cloud Run. Postgres and pgvector via Prisma on Supabase. The core trick is voice fingerprinting: extracting stylometric signals (sentence length, phrasing tendencies, tone, structural patterns) from writing samples and conditioning generation on them. Multiple model providers for embeddings and generation. Heavy RAG.
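
To make "stylometric signals" concrete, here is a toy fingerprint extractor over a writing sample -- a deliberately simplified stand-in for whatever ColdCheck actually conditions on (the feature set is a guess):

```python
import re
from statistics import mean, pstdev

def voice_fingerprint(text: str) -> dict[str, float]:
    """Extract a handful of simple stylometric features from a sample:
    sentence-length statistics, vocabulary variety, and comma habits."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    lengths = [len(s.split()) for s in sentences]
    words = re.findall(r"[a-z']+", text.lower())
    return {
        "avg_sentence_len": mean(lengths),
        "sentence_len_spread": pstdev(lengths) if len(lengths) > 1 else 0.0,
        "type_token_ratio": len(set(words)) / len(words),
        "commas_per_sentence": text.count(",") / len(sentences),
    }
```

Generation can then be conditioned on such features, e.g. by putting target ranges into the prompt or reranking candidate drafts by feature distance.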

Auth: NextAuth v5 -- Google and Microsoft OAuth for sign-in, plus email/password. Chrome extension (Manifest V3) injects drafts directly into Gmail and LinkedIn compose windows without storing thread content.

Billing: Stripe subscriptions with webhook-driven entitlements. There's no polling, no stale state. Plan tiers unlock generation limits, org features, and extension access. Usage tracked server-side.

Permissions: Multi-tenant: teams share an org, but voice profiles and drafts are scoped per-user. Row-level enforcement at the query layer. Org admins can manage members without ever touching another user's writing data. Role separation: Platform Admin / Org Owner / Org Admin / User.
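
The per-user scoping rule Matthew describes can be captured in a few lines; the data shapes are invented, but the invariant (org admins manage membership without ever reading another user's drafts) is straight from his description:

```python
from dataclasses import dataclass

# Role ladder from the post: Platform Admin / Org Owner / Org Admin / User.
ROLE_RANK = {"user": 0, "org_admin": 1, "org_owner": 2, "platform_admin": 3}

@dataclass
class Draft:
    owner_id: str
    org_id: str
    body: str

def visible_drafts(drafts: list[Draft], viewer_id: str) -> list[Draft]:
    """Row-level scoping: drafts are filtered to the viewer's own rows,
    regardless of role -- admins get no special read access to writing."""
    return [d for d in drafts if d.owner_id == viewer_id]

def can_manage_members(role: str) -> bool:
    """Membership management requires org-admin rank or higher."""
    return ROLE_RANK[role] >= ROLE_RANK["org_admin"]
```

Keeping the filter at the query layer (rather than in each endpoint) is what makes the invariant hard to bypass accidentally.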

Happy to compare notes or talk further if it's helpful.

Ryan Hendrickson

@matthew_kenchington Nice! Do you provide some of your previous writing samples when you sign up, and it fine-tunes its writing style to match? This is probably something that would pair very well with voice typing; when I use voice typing, I feel like it generally gets the content of what I'm saying, but never replicates the tone of how I say things in writing.

Would love to talk further if you have time in the next week. You can pick a time that works for you here, or let me know if you have other times that would work better for you.

Matthew Kenchington

@ryan_hendrickson That's exactly how you create your initial "voice fingerprint" and then it continues to learn when you edit what it drafted or add different contexts to your "intent". I have a version that (for the most part) works with spoken voice and can pick up more nuance based on inflection, speed, volume, etc., but that's entering a different league. I'll set something up with you.

Ryan Hendrickson

@matthew_kenchington I look forward to speaking with you!

Maria Vannicola

Hey @ryan_hendrickson -- happy to share.

I'm currently launching Specimen, a mobile-first app for traveling creatives to find and connect with trusted collaborators across cities.

I'm a non-traditional founder (on-set fashion hairstylist with a business/marketing background), so my stack is intentionally lightweight and fast to iterate.

High level stack:

  • Frontend: iOS (SwiftUI)

  • Design: Figma (for flows + early visualization)

  • Backend: Firebase (Auth, Firestore, real-time updates)

  • Auth: Firebase Auth (email + OAuth)

  • Community / feedback: Discord

Firebase handles authentication and permissions, which lets me move quickly while keeping things secure and scalable. I'm keeping billing simple for now and focusing on nailing the core network experience pre-monetization.

Happy to chat more if it's useful -- always down to compare notes on building from inside a specific industry.

Ryan Hendrickson

@maria_vannicola Nice! Was there something that brought you to use Firebase over other alternatives? I agree with your decision to focus on building the network first, which will make it easier to open doors to monetization later.

Daniel Haven

Disclaimer: I don't consider myself a solopreneur. More like an indie developer trying to get some ideas out. This is more my experience in going from web development to dedicated Apple platform development.

Today, I worked on a native Swift iOS/macOS application and put it on TestFlight. Authentication and cross-device persistence come out of the box with Apple platform development. Handling payments is a bridge I have to cross when I'm closer to the release phase.

My app and another app are built on SwiftUI and SwiftData. I try to stick as close to Apple's intended conventions as possible to reduce complexity and stay compliant.

Unlike web apps, building mobile apps comes with the added responsibility of making sure your app sticks within the rigid guidelines of whatever store you're putting it on, and Apple has pretty rigid guidelines. They view every app as an extension of their brand.

The benefit compared to making a responsive (i.e., fits on a mobile screen) web app is that:

  1. You give the user a native experience that makes UX seamless and intuitive, especially if you stick within the intended guardrails of Apple and their design system.

  2. If the app is good and has users from TestFlight, the App Store shows it to your intended demographic with positive ratings and reviews, making it easier to get eyes on it and grow the user base compared to a web app. You can also get it featured if you meet conditions.

Also, for TestFlight link sharing, I use departures.to. It's kind of like Product Hunt, but for TestFlight links. I got a pretty nice number of testers by putting my initial app on it.

An iOS App I'm Currently TestFlighting: Wake Up, Get Up

Ryan Hendrickson

@iamdhaven Interesting about departures.to, I had never heard of that platform. I'll have to look at that for one of my own apps.

Agree with you on the experience of mobile apps. Having a web app is nice for cross compatibility, especially in a desktop environment, but the experience of a mobile app working on the platform it was designed for is hard to beat.

Love the idea of an app to track morning doomscrolling. It's a habit that's easy to fall into, and hard to get out of. Having that data/number as a motivator could be super helpful. Does it hook into the iOS screen time APIs, or is it more of a journal for yourself?

Daniel Haven

Right now, it's a simple tracker. I use it as a way to motivate myself to get out of bed on time. The fewer minutes I spend in bed, the nicer my stats look on the home screen.

Also, I expose Apple Shortcuts actions for reading, creating, and updating logs.
