April 19th, 2026
Spotify channels Amazon
This newsletter was brought to you by DigitalOcean
Booking it
gm legends. It's Sunday.
This week: Spotify takes a page from Amazon, the most ridiculous projects in tech, three hard-learned tips for building an AI agent that calls APIs, and how to build a viable business around an app that's meant to be deleted. Plus, 5 of the top products from this week's leaderboard.
Don't delete this email, legend. Be like Spotify. Enjoy.
P.S. Launching soon? We'd love to hear about it — editorial@producthunt.co 🫶
Spotify learns to read

First music, then podcasts, now books — physical books.
US and UK Spotify users can now buy books directly from the streaming app (on Android; iOS is coming this week).
To prep for the addition and help out folks who switch between hard copies and audiobooks, Spotify rolled out several audiobook features back in February:
- Page Match, which lets people snap a photo of a book page, then takes them to that point in the audiobook
- Audiobook Charts, a weekly leaderboard for the hottest reads (okay, listens)
- Audiobook Recaps, which help you get up to speed after a long pause in reading
Of course, books used to be Amazon's territory, back before Amazon became Amazon. Spotify sees books as a path to profitability. But it's not the only app looking to fill up your bookshelf. Here are five bookish products that have launched on Product Hunt since December:
- DreamBooks is a platform for your kids to find their next favorite read; it looks and feels like "the children's corner of a library," not a lifeless library website.
- Book Reading Habit is a habit training app designed to get you reading more.
- Obooko is a platform for finding and downloading free ebooks.
- Xteink X4 is like a Kindle but smaller; it's an eReader with a magnet that sticks to your phone.
- Readever lets you join your own private book club with famous people — or, at least, their AI likenesses.
Building an AI agent that calls APIs? Read this first
These are the three hardest technical problems I hit building an AI agent that calls real APIs. I wish someone had written them down before I spent a month figuring them out:
1. LLMs send partial payloads on write operations.
You ask the agent to update a record. It sends only the fields you mentioned in the prompt. The PUT request goes through, returns 200, and you've silently wiped every field you didn't specify.
The fix: Before every write call, fetch the current resource state via the companion GET endpoint and deep-merge the LLM's payload on top. The LLM only needs to specify what's changing — the executor fills in the rest.
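The fetch-then-merge pattern above can be sketched in Python. The `client` here stands in for any requests-style HTTP client, and all names are illustrative, not from the original post:

```python
def deep_merge(current: dict, patch: dict) -> dict:
    """Recursively overlay the LLM's partial payload on the current state."""
    merged = dict(current)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            # Recurse into nested objects so sibling fields survive.
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged


def safe_update(client, resource_url: str, llm_payload: dict):
    """PUT the full resource, with the LLM's partial payload merged on top."""
    # Fetch current state first, so every field the LLM omitted is preserved.
    current = client.get(resource_url).json()
    full_payload = deep_merge(current, llm_payload)
    return client.put(resource_url, json=full_payload)
```

With this in place, a payload like `{"meta": {"y": 3}}` updates one nested field without clobbering its siblings.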
2. LLMs hallucinate success when API calls fail.
A tool returns a 404. The agent says, "Done, the record was updated!"
The fix: Explicitly prefix every error response with "Error:" and add one line to the system prompt — if a tool returns a message starting with "Error:", report it directly. Do not assume success. Without this, the agent will confidently lie every time.
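A minimal sketch of that error-wrapping executor, assuming a requests-style response object (the function and prompt names are placeholders):

```python
def run_tool(tool_fn, **kwargs) -> str:
    """Call a tool and return its output, forcing failures into 'Error:' strings."""
    try:
        response = tool_fn(**kwargs)
    except Exception as exc:
        # Exceptions become unmistakable error strings, not silence.
        return f"Error: {type(exc).__name__}: {exc}"
    status = getattr(response, "status_code", 200)
    if status >= 400:
        # HTTP failures get the same explicit prefix.
        return f"Error: HTTP {status}: {response.text}"
    return response.text


# The one-line companion rule for the system prompt:
SYSTEM_PROMPT_RULE = (
    "If a tool returns a message starting with 'Error:', report it to the "
    "user directly. Do not assume the operation succeeded."
)
```

The prefix and the prompt rule work as a pair: the executor guarantees the marker is present, and the prompt tells the model what the marker means.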
3. Query parameters break in subtle ways.
The LLM passes query params as a plain string instead of a dict. The request fires, looks fine in logs, returns nothing. No error. Just silence.
The fix: Coerce string inputs to dicts in the tool executor and be extremely explicit in the field description about the expected shape — including a concrete example.
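One way to sketch both halves of that fix in Python, using the standard library's `parse_qsl` to rescue query-string-shaped input (names are illustrative):

```python
from urllib.parse import parse_qsl


def coerce_params(params) -> dict:
    """Accept either a dict or a query-string-shaped string from the LLM."""
    if isinstance(params, dict):
        return params
    if isinstance(params, str):
        # Handle both "status=open&limit=10" and a stray leading "?".
        return dict(parse_qsl(params.lstrip("?")))
    raise TypeError(f"Expected dict or query string, got {type(params).__name__}")


# The matching field description, with the concrete example spelled out:
PARAMS_FIELD_DESCRIPTION = (
    'Query parameters as a JSON object, e.g. {"status": "open", "limit": 10}. '
    "Do NOT pass a pre-encoded string like 'status=open&limit=10'."
)
```

Note that values recovered from a string come back as strings, so the description steering the model toward a real object is still the primary defense; the coercion is the safety net.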
None of this shows up in tutorials or documentation. You only find it by shipping something real and watching it break. If you're building anything that connects an LLM to a real API, what failure modes have you hit?

DigitalOcean Deploy is a one-day event focused on what happens when AI hits production: latency, throughput, cost per token, reliability — all the things that show up the second real traffic arrives.
There will be a fireside chat with NVIDIA's Kari Briski on agentic AI. Teams from Character, Workato, VAST Data, Arcee, and the vLLM ecosystem will break down how they're actually running inference at scale, and attendees will get a first look at what DigitalOcean is building next.
April 28 in San Francisco. Free to attend.
How to succeed when your product canāt be sticky

By Mona Truong of Murror AI
There's something counterintuitive about building an AI product in the mental health and self-awareness space: if you're doing it right, your users should eventually need you less.
Most product teams optimize for stickiness. More sessions, more time in app, more daily returns. But at Murror, we've been wrestling with a different question ā what if the goal of our product is to help someone build enough self-understanding that they don't need to open the app as often?
When someone uses Murror to process a difficult emotion or reflect on a pattern in their relationships, the ideal outcome isn't that they come back tomorrow to do the same thing. It's that they start recognizing those patterns on their own, in real time, without us.
This creates a genuinely hard product challenge. How do you build a sustainable business around a product that's designed to reduce its own usage?
Ā
How did this make money?

Dogecoin started as a joke and has a $14B market cap. Fidget spinners have made hundreds of millions and can be found in a landfill near you. The pet rock once earned money like The Rock.
All of these feel silly. Yet they've been very successful. So, Nika wants to know: "Which business idea do you think was so ridiculous that it feels absurd how much money it made?"
Leaderboard highlights





Every Sunday
Everything you missed this past week on Product Hunt: Top products, spicy community discourse, key trends on the site, and long-form pieces we've recently published.
