All activity
Mykola Kondratiuk left a comment
The production safety guards are underrated in DB tooling. Destructive query protection is the kind of thing you only care about after you have made a painful mistake. Rust for a desktop DB client is an interesting call - curious how startup time compares to DBeaver in practice.

QoreDB - The fast, open-source database client built with Rust
Mykola Kondratiuk left a comment
Depends what you are optimizing for. For exploration and prototyping I keep going back to terminal - Claude Code in a bare shell forces you to be specific about what you want, which is actually useful. For production work on an existing codebase, IDE wins because the context window problem is real and Cursor handles it better with file references and inline diffs. The interesting pattern I have...
AI in your IDE (e.g. Cursor) vs AI in your terminal (Claude Code) - what's the better flow?
Mykola Kondratiuk left a comment
The approval step is the right design choice - fully autonomous email agents that just send things are a liability. The interesting UX problem is what happens when you come back after 8 hours and Stamp has queued 40 changes. Does it batch them into a review flow or show them one by one?

Stamp - The AI Secretary that thinks, writes, and works like you
Mykola Kondratiuk left a comment
The "what does active user mean" problem is real and expensive. Every company I have worked at has had at least three competing definitions living in different dashboards. The shared semantic layer approach makes sense - it is the same problem that good data teams solve manually, just formalized. How does Data Studio handle it when business definitions legitimately change over time - does it...

Metabase Data Studio - Build the semantic layer that makes AI analytics trustworthy
Mykola Kondratiuk left a comment
Slack as the control plane for ads is a good call - media buyers already live there. The real test will be how it handles edge cases: campaigns hitting budget caps mid-day, sudden performance drops, audience fatigue signals. Does Viktor flag those proactively or wait to be asked?

Viktor for Media Buyers - Manages your Meta and Google Ads from Slack
Mykola Kondratiuk left a comment
The GUI-only tools use case is what gets me. So much internal tooling in companies never gets API access - it lives in dashboards, legacy web apps, Figma. This bridges that last mile without needing to build integrations first. Curious how it handles multi-step flows where intermediate state matters - like filling a form where field 2 options depend on field 1.
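The field-2-depends-on-field-1 problem can be sketched abstractly (the lookup table and field names here are invented, not anything Claude's computer use actually exposes): the agent has to commit step 1 before it can even enumerate valid options for step 2.

```python
# Assumed lookup the GUI would only reveal after field 1 is committed.
COUNTRY_TO_REGIONS = {
    "US": ["CA", "NY", "TX"],
    "DE": ["BE", "BY", "HH"],
}

def fill_form(country: str, preferred_region: str) -> dict:
    form = {"country": country}            # step 1: commit field 1
    options = COUNTRY_TO_REGIONS[country]  # step 2: options now depend on step 1
    # step 3: pick the preferred region only if it is actually offered
    form["region"] = preferred_region if preferred_region in options else options[0]
    return form

print(fill_form("US", "NY"))  # {'country': 'US', 'region': 'NY'}
print(fill_form("DE", "NY"))  # 'NY' invalid for DE, falls back to 'BE'
```

A screenshot-only agent that plans both fields upfront gets step 3 wrong whenever its guess isn't in the dependent option set.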

Computer Use in Claude Code - Let Claude use your computer from the CLI
Mykola Kondratiuk left a comment
Support tooling for founders is usually an afterthought - either paying for Intercom before you have revenue, or handling everything manually through email threads. The AI angle makes sense here. What does Letterbook do when it doesn't know the answer - does it hand off to you, or does it try to figure it out?
Letterbook - AI support platform built for founders
Mykola Kondratiuk left a comment
Localization is one of those things that gets shoved to the end of every sprint and done badly. The MCP + SKILLS approach for agent-driven workflows is interesting - what does a typical localization workflow look like when the agent runs it end-to-end? I'm curious how it handles context (strings that need different translations depending on UI placement) vs just raw key-value substitution.

Ollang DX - The AI Language Execution Layer for Enterprise
Mykola Kondratiuk left a comment
Real read/write MCP access to Notion databases is actually useful for sprint planning workflows - I can see agents updating task status, pulling blockers from a database, writing standup summaries back to a page. The "turning scattered data into actionable workflows" bit is doing a lot of work though. What's the write latency like in practice? Notion's API has rate limits that can make...
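The rate-limit concern generalizes to a simple retry pattern (the fake write function below is a stand-in for the real API call, not Notion's client library): on a 429, honor the server-suggested wait before retrying.

```python
import time

def fake_notion_write(page_id: str, attempts: list) -> dict:
    # Stand-in for a real API call: throttles the first two attempts.
    attempts.append(page_id)
    if len(attempts) < 3:
        return {"status": 429, "retry_after": 0.01}
    return {"status": 200}

def write_with_backoff(page_id: str, max_retries: int = 5) -> bool:
    attempts: list = []
    for _ in range(max_retries):
        resp = fake_notion_write(page_id, attempts)
        if resp["status"] == 200:
            return True
        time.sleep(resp.get("retry_after", 1.0))  # honor Retry-After
    return False

print(write_with_backoff("page-123"))  # True, after two throttled attempts
```

An agent doing bulk status updates without this kind of loop will hit the throttle fast; batching writes per page helps more than retrying harder.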

Notion MCP - Your Notion workspace, inside every AI agent
Mykola Kondratiuk left a comment
The Boards + dependency mapping before agents start building is a legitimately different take. Most coding agents are just "here's a prompt, generate code" - having explicit task ordering baked into the IDE means the agent isn't deciding what to tackle next based on vibes. How does it handle when a dependency is partially built? Does the dependent task queue or does the agent try to work around...
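The "explicit task ordering instead of vibes" idea is essentially a topological sort over the board (sketched here with Python's stdlib; the task names are invented): a dependent task only becomes ready once everything it depends on is marked done, so a partially built dependency naturally keeps its dependents queued.

```python
from graphlib import TopologicalSorter

# Hypothetical board: ui depends on api, api depends on schema.
graph = {"api": {"schema"}, "ui": {"api"}, "schema": set()}

ts = TopologicalSorter(graph)
ts.prepare()
order = []
while ts.is_active():
    ready = sorted(ts.get_ready())  # tasks whose dependencies are all done
    order.extend(ready)
    for task in ready:
        ts.done(task)  # a partially built task would NOT be marked done,
                       # so its dependents never surface as ready

print(order)  # ['schema', 'api', 'ui']
```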

Invoke - Agentic coding IDE with visual planning boards and canvas
Mykola Kondratiuk left a comment
The detail that credentials never show up in logs or chat transcripts is the actually important thing here. I've seen agent setups where the auth storage is secure but the credential ends up in tool call output anyway - they solved the wrong problem. Does token rotation work automatically? If a service refreshes the token mid-session, does Latchkey pick that up, or does the agent need to restart?
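The "leaks via tool output" failure mode can be sketched like this (the secret format and registry are assumptions for illustration, not how Latchkey works): scrub known secrets out of tool-call output before it ever reaches a log or transcript.

```python
import re

# Assumed in-memory registry of live credentials the agent was issued.
SECRETS = {"sk-test-abc123"}

def redact(text: str) -> str:
    for secret in SECRETS:
        text = text.replace(secret, "[REDACTED]")
    # belt-and-braces: also mask anything shaped like an API key
    return re.sub(r"sk-[A-Za-z0-9-]+", "[REDACTED]", text)

tool_output = "curl -H 'Authorization: Bearer sk-test-abc123' ..."
print(redact(tool_output))  # the Authorization header no longer leaks the key
```

The registry approach only catches secrets you know about, which is exactly why doing this at the credential layer rather than per-agent matters.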

Latchkey - Credential layer for local AI agents
Mykola Kondratiuk left a comment
Discovery is the real gap right now - vibe-coded products are shipping fast but hard to find outside PH and Twitter noise. A curated layer that filters by AI-built specifically makes sense. How are you verifying what counts as AI-built vs just AI-assisted?
VibecodedHub - The discovery platform for AI-built products
Mykola Kondratiuk left a comment
Running Claude Code and Codex side-by-side has become my default for anything non-trivial - they catch different things and the diff between their outputs is often the most useful signal. The context handoff between models is where it gets tricky, especially when they diverge on architecture decisions.
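The "diff as signal" step is mechanically simple (sketched below with two invented patches standing in for real agent output): surface only the lines where the two models disagree.

```python
import difflib

# Invented stand-ins for what each agent produced for the same task.
claude_patch = "def add(a, b):\n    return a + b\n"
codex_patch = "def add(a: int, b: int) -> int:\n    return a + b\n"

diff = list(difflib.unified_diff(
    claude_patch.splitlines(), codex_patch.splitlines(),
    fromfile="claude", tofile="codex", lineterm="",
))
print("\n".join(diff))  # only the signature line diverges
```

When the diff is empty you have cheap agreement; when it isn't, the divergent hunks are exactly where human review should focus.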
Parallel Code - Use Claude Code, Codex, and Gemini in parallel
Mykola Kondratiuk left a comment
Convex as the backend default is an interesting pick - real-time and schema-managed out of the box, which matters when the agent is generating the whole stack. curious what the failure mode looks like when the generated auth flow doesn't quite match the app concept though, is that still a manual fix?

BNA - AI agent that builds full-stack iOS & Android apps with auth
Mykola Kondratiuk left a comment
the stable ID approach is what makes this actually useful long-term. agents guessing CSS selectors based on snapshots is brittle - they break on any refactor. connecting to the running DOM state via MCP is the right layer. does it handle shadow DOM components or is that still a gap?
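the brittleness argument in miniature (a toy dict-based DOM, not Domscribe's actual representation): after a refactor wraps the button in a new div, a positional path breaks while a stable testid lookup still resolves.

```python
def find_by_testid(node: dict, testid: str):
    # Walk the tree looking for a node carrying the stable ID.
    if node.get("testid") == testid:
        return node
    for child in node.get("children", []):
        hit = find_by_testid(child, testid)
        if hit:
            return hit
    return None

before = {"tag": "body", "children": [{"tag": "button", "testid": "save"}]}
after = {"tag": "body", "children": [  # refactor added a wrapper div
    {"tag": "div", "children": [{"tag": "button", "testid": "save"}]}
]}

# a positional selector like "body > button" matches `before` but not `after`
print(before["children"][0]["tag"])          # button
print(find_by_testid(after, "save")["tag"])  # still resolves: button
```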

Domscribe - Give your AI coding agent eyes on your running frontend
Mykola Kondratiuk left a comment
streaming thinking steps alongside tool calls in one feed is what almost nothing does by default. curious what happens with concurrent tool calls - does the stream stay coherent? also 8MB is impressive, what's the runtime?
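one way concurrent tool calls can stay coherent in a single feed (a generic pattern sketch, not CrabTalk's implementation): tag every event with its call id so interleaved chunks can be regrouped on the consumer side.

```python
import queue
import threading

events: queue.Queue = queue.Queue()

def tool_call(call_id: str, steps: int):
    for i in range(steps):
        events.put((call_id, f"step {i}"))  # tag each chunk with its origin
    events.put((call_id, "done"))

threads = [threading.Thread(target=tool_call, args=(cid, 2)) for cid in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# consumer regroups the interleaved stream per call id
feed: dict = {}
while not events.empty():
    cid, msg = events.get()
    feed.setdefault(cid, []).append(msg)
print(feed["a"])  # ['step 0', 'step 1', 'done'] regardless of interleaving
```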
CrabTalk - The agent daemon that hides nothing. 8MB. Open Source
Mykola Kondratiuk left a comment
the shared context angle is interesting - most agent setups I've run have a problem where the agent's view of state diverges from what's actually in the browser. having the agent read from the same error logs and API traces you're seeing removes a whole class of debugging problems. how does it handle concurrent agents - do they each get isolated views or a shared session?

1DevTool - Multi-project IDE with persistent terminals and 9 dev tools
Mykola Kondratiuk left a comment
low latency TTS for voice agents is genuinely hard to get right. the failure mode I've seen is when the TTS step adds enough delay that it breaks the conversational feel - any ballpark on p95 latency for a 100-word response? also curious how voice cloning handles accented speech in non-English languages, that's usually where it falls apart

Voxtral TTS by Mistral AI - Multilingual TTS model with realistic and expressive speech
Mykola Kondratiuk left a comment
the "manage credentials across the stack" part is what catches my eye. Running agents that need to touch Stripe + DB + auth + observability means you end up with a mess of env vars and service accounts. Centralizing provisioning through the CLI makes sense for human devs but curious whether the credential scoping works for agents too - can you issue a project config scoped to just what one...
Stripe Projects - Production-ready dev stack from your terminal
Mykola Kondratiuk left a comment
The "asks when needed" part is doing a lot of work here. How does it decide when itβs stuck vs when it should just try another approach? In my experience running coding agents, the failure mode isnβt usually the agent giving up - itβs the agent confidently applying a fix that passes CI but breaks something else downstream. Does it have any awareness of test coverage gaps or is it purely CI...

Claude Code auto-fix - Auto-fix PRs in the cloud while you stay hands-off
