Ben Lang

ASI:One - A personal AI with memory that plans and acts for you

ASI:One is a personal AI that remembers your preferences, collaborates with others’ AIs, and executes tasks. Plan nights out, align groups, and book the details automatically. It is connected to millions of agents through Agentverse, giving you on-demand capabilities for research, planning, and real-world tasks.

Humayun

Hi Product Hunt 👋

AI tools have become powerful. But most of them still feel disconnected. You ask. They answer. The context resets. Nothing carries forward.

We built ASI:One to move beyond that.

ASI:One is a personal AI system that remembers, adapts, collaborates, and takes action. It is designed to stay with you over time, not just respond to isolated prompts.

No complicated installation on your PC or in the cloud.

Here’s what you can do with it:

🧠 Build a personal AI that evolves with you

Shape its personality, set preferences, and let it remember what matters.

👥 Create Group Chats with AI built in

Invite others by email and let the AI help coordinate discussions in real time with friends and colleagues.

🗂️ Launch structured Collabs

Set a clear objective and let the AI break it into steps, track progress, and keep context intact.

🤖 Agents on demand! Type @agent inside a conversation and bring in domain-specific capabilities from Agentverse instantly.

📅 Connect Google Calendar and Gmail

Schedule events and handle follow-ups directly from your workspace without jumping between tools.

Under the hood, ASI:One routes tasks across multiple models and agents depending on what you are trying to accomplish. Research, planning, scheduling, coordination, and execution happen in one place.
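To make the routing idea concrete, here is a deliberately simplified sketch. The names and the keyword heuristic are illustrative only, not ASI:One's actual implementation, which presumably uses model-based intent detection rather than string matching:

```python
# Hypothetical sketch of intent-based task routing. Backend names and the
# keyword heuristic are illustrative, not ASI:One's real routing logic.

ROUTES = {
    "research": "research-model",
    "schedule": "calendar-agent",
    "plan": "planner-model",
}

def route(task: str) -> str:
    """Pick a backend based on keywords in the task description."""
    lowered = task.lower()
    for keyword, backend in ROUTES.items():
        if keyword in lowered:
            return backend
    return "general-model"  # fallback for anything unmatched

print(route("Schedule a dinner with friends"))  # calendar-agent
```

The point is the shape of the system: one entry point, many specialized backends, with the dispatcher choosing per task.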

We designed ASI:One for people who want an AI that works with them long term, not just for a single conversation.

We are here all day to answer questions and hear your thoughts 🙌

Tibo Wiels

Group chats with my friends and their AIs: how does it handle privacy and sensitive information?

Sana Wajid

@tibo_wiels Great question, and honestly one of the first things we spent time thinking about.

In group chats, the AI doesn’t just freely access everything. It works within the boundaries of what each person has shared in that specific context. Your personal memory, preferences, or connected data aren’t automatically exposed to others.

So if we’re in a group, the AI helps coordinate, summarise, and move things forward based on what’s happening in that chat, not by pulling in your private history. The idea is to keep collaboration useful without breaking trust. You get the benefit of shared context where it’s needed, while your personal layer stays yours.
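The scoping described here can be pictured with a small sketch. This is a hypothetical storage model, not ASI:One's actual one: each fact is keyed by its owner and the context it was shared in, and a group chat can only read facts shared into that group:

```python
# Illustrative sketch of context-scoped memory (hypothetical, not the
# actual ASI:One storage model).

class ScopedMemory:
    def __init__(self):
        self._facts = []  # list of (owner, context, fact)

    def remember(self, owner: str, context: str, fact: str):
        self._facts.append((owner, context, fact))

    def visible_in(self, context: str):
        """Only facts explicitly shared into this context are readable."""
        return [f for (_, ctx, f) in self._facts if ctx == context]

mem = ScopedMemory()
mem.remember("alice", "private", "prefers window seats")
mem.remember("alice", "group:trip", "free on Saturday")

# The group AI sees only what Alice shared into the group chat.
print(mem.visible_in("group:trip"))  # ['free on Saturday']
```

Private memory never leaks into the group read path because visibility is decided by the context key, not by who is asking.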

Matheus Santos

I’ve tested a few products with the same idea of a personal AI agent, and I always put them through a few real-world checks, especially email and daily task handling. A lot of them tend to fall short, or they simply don’t follow through consistently.

So far, ASI:One is passing those tests, and that’s pretty impressive. It feels like a strong product.

One question I had: when do you plan to start charging? I’ve been using it quite a bit and testing it a lot, but I haven’t run into any limits yet.

Overall, this looks like a great tool. Congrats to the team on the launch, and wishing ASI:One a great launch! 🚀

Rishank Jhavar

@matheusdsantosr_dev Really appreciate you putting it through real-world use, that’s honestly the only way these systems get meaningfully tested.

And you’re right, the gap is rarely in answering, it’s in following through consistently. That’s exactly what we’ve been trying to close.

On pricing, we’ve intentionally kept things open during this phase. Right now the focus is on usage, feedback, and understanding where people actually get value from the system day to day. For now, keep pushing it as much as you can. That’s the most useful thing for us at this stage. And thanks again for the kind words, means a lot to the team! 🙌

Rajashekar Vennavelli

@matheusdsantosr_dev 

Really appreciate you putting it through real-world tests, especially around email and daily tasks. That’s exactly the kind of usage we care about, so it’s great to hear it’s holding up well for you.

Thanks again for taking the time to test it deeply and share this, feedback like this genuinely helps a lot 🙏

Samir Asadov

A personal AI with persistent memory that actually plans and acts is exactly where I see the most underused signal — markets. Prediction markets like Polymarket carry leading indicators that retail and institutional players both ignore because the data is noisy. I built PolyMind (https://polyminds.netlify.app/) to surface AI-driven alerts on the largest Polymarket trades in real time. Curious whether ASI:One's planning layer can plug into external probability streams like that.

Sana Wajid

@samir_asadov Nice, this is a solid use case!

You’re right about prediction markets, the signal is there but most people don’t know what to do with it in real time. Where this could get interesting with ASI:One is probably not just consuming PolyMind alerts, but tying them to context. If the system already knows what I’m tracking or thinking about, a spike in probability or a large trade doesn’t stay an isolated alert, it becomes something actionable.

The planning layer is exactly where this fits. External streams like yours can be pulled in as agents, and then used inside a broader flow instead of just dashboards or notifications.

PolyMind looks like a clean layer on top of Polymarket. Very interesting project! :)

Rajashekar Vennavelli

@samir_asadov Really interesting angle. Markets are such a noisy space; it's a good idea to bring this onto ASI:One and surface the signal.

Brian H

How does it handle recurring tasks or scheduling around a user's calendar? Building something that schedules TV around when people are actually free and curious how you approach the planning layer.

Sana Wajid

@brian_h4 With ASI:One, recurring tasks are more like delegation than scheduling. You basically tell it once - do X at Y time - and it keeps running in the background until you change it. When your calendar is connected, it gets smarter: it can align tasks with your actual availability, review your week, prep you ahead of events, or adjust around conflicts instead of blindly firing at a fixed time.

So in your example, it's not just "schedule when free". It's:

  • Understand when you're free

  • Run the task at the right time

  • Adapt as your schedule evolves
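The conflict-shifting behavior can be sketched in a few lines. This is an illustrative toy, not ASI:One's scheduler: the task has a preferred hour, and if the calendar shows that hour as busy, the run slides to the next free one:

```python
# Hypothetical sketch: run a recurring task at its preferred hour, but
# shift it to the next free hour if the calendar shows a conflict.

def next_run_hour(preferred: int, busy_hours: set[int]) -> int:
    """Return the first hour at or after `preferred` (mod 24) not blocked."""
    for offset in range(24):
        hour = (preferred + offset) % 24
        if hour not in busy_hours:
            return hour
    raise ValueError("no free hour today")

# Preferred 9:00 is blocked by meetings at 9 and 10, so the task shifts to 11:00.
print(next_run_hour(9, {9, 10}))  # 11
```

A real system would work on calendar free/busy data rather than whole hours, but the shape is the same: the fixed trigger becomes a preference, and availability decides the actual firing time.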

Brian H

@sana_wajid That's really helpful context. The "adapt as your schedule evolves" part is exactly what we're solving for TV. Your plan changes week to week with work trips, family stuff, mood shifts, so rigid scheduling fails.

Are you using predictive scheduling or more reactive and responsive? We're experimenting with both for CouchTime and curious how you approach the complexity.

Sana Wajid

@brian_h4 Right now it's more reactive and responsive. The system stays close to your current context (calendar changes, chats) and adjusts accordingly instead of overcommitting upfront. That said, there's definitely a layer of lightweight prediction starting to come in. Things like patterns in availability, habits, or when you usually engage with certain tasks can help guide better timing.

Brian H

@sana_wajid Super helpful, that tracks with what we’re seeing too. We’re prioritizing reactive scheduling first (availability + quick user edits like “skip tonight” or “move to Thursday”) because it keeps trust high and feels controllable. Then we layer lightweight prediction on top (habit windows, typical watch days, completion patterns) as suggestions, not hard commitments.

That hybrid seems to be the sweet spot for us: adaptive in real time, but still getting smarter week to week.

Hal Gottfried

I have some questions I've been thinking about. I apologize if they come across as blunt; that's not my intention. I'm seeing a lot of the same things being deployed and I'm trying to figure out what makes them different.

Is this just another implementation of OpenClaw? What prevents a user from configuring this on their own? What makes yours unique?

How do you address the publicly documented issues and concerns?

I've seen so many iterations of this lately. What makes you guys really stand out in your opinion?

Attila Bagoly

@hgottfried Fair questions, and honestly ones we think about a lot too. This isn’t just OpenClaw.

What we’re building is closer to a connected system rather than a setup. The differentiation shows up in how things work together:

  • Your personal AI isn’t isolated, it can collaborate with others’ AIs in group chats and collabs

  • There’s a real network layer through Agentverse, where agents that can do things are discoverable and can be pulled in on demand

On concerns, completely fair. A lot of products in this space look similar early on. For us, the focus has been on making the system reliable, scoped properly, and actually useful in day-to-day use.

The real difference, in our view, is that this is not just a tool or a workflow, it’s a system that connects people, agents, and capabilities in one place.

Sana Wajid

@hgottfried  @attila_bagoly adding to what Attila said, I think the shift becomes clearer when you actually use it for a few days. It stops feeling like I’m using an AI tool and starts feeling like there’s a system working alongside me.

The moment your AI starts interacting with other AIs and pulling in the right capabilities when needed, you’re no longer orchestrating everything yourself. That’s where it moves beyond typical setups.

Completely fair to question it though, this category needs scrutiny.

Abdul Rehman

Congrats @rishankjhavar
BTW, how curated is the Agentverse network right now? Are agents verified in some way or is it more open marketplace?

Rajashekar Vennavelli

@abod_rehman Thank you for trying out our product! Yes, it's an open marketplace, but we also have a verification layer: verified agents get higher rankings, more trust, and better discoverability than unverified ones. All agents are also evaluated by our system to flag malicious behavior!

Rishank Jhavar

@abod_rehman Thank you! Right now, it’s an open marketplace with a verification layer on top. Verified agents get higher visibility and trust, but the ecosystem stays open so new agents can come in and be used.

On top of that, there are system-level checks to flag anything malicious, so there’s a balance between openness and safety.

Tijo Gaucher

the "collaborates with others' AIs" part is what I'm curious about — is that agent-to-agent handoff happening in plain language, or is there a protocol underneath? feels like the messy part everyone handwaves past

Sana Wajid

@tijogaucher Great question. The answer is: it's both.

There's a formal protocol underneath, but the interface is natural language.

The messy part isn't handwaved — it's abstracted. Developers can:

  • Use the protocol directly (via uagents SDK) for fine-grained control

  • Or let ASI:One handle orchestration automatically through the agentic LLM

The protocol ensures reliability, while natural language keeps it flexible. Best of both worlds.
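One way to picture "protocol underneath, natural language on top": a typed envelope gives agents reliable routing metadata, while the payload stays free-form text. This is a hypothetical shape for illustration, not the actual uagents wire format:

```python
# Illustrative sketch of a structured envelope carrying natural-language
# content. (Hypothetical shape, not the real uagents protocol.)

from dataclasses import dataclass, asdict
import json

@dataclass
class AgentMessage:
    sender: str       # stable agent address for routing/verification
    recipient: str
    intent: str       # machine-readable hint, e.g. "book_table"
    text: str         # the natural-language content users actually see

msg = AgentMessage(
    sender="agent-alice",
    recipient="agent-bob",
    intent="book_table",
    text="Can you grab a table for 4 around 8pm on Friday?",
)

wire = json.dumps(asdict(msg))           # protocol layer: structured, parseable
decoded = AgentMessage(**json.loads(wire))
print(decoded.text)                      # interface layer: plain language
```

The receiving agent can dispatch on `intent` without parsing prose, while the human-readable `text` survives the round trip untouched.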

Martí Carmona Serrat

Personal-AI-with-persistent-memory-across-group-chats is where most assistants silently fail — context resets every session and the coordination value evaporates. Bringing in @agent from Agentverse mid-conversation is the right extensibility hook. Curious how ASI:One handles memory conflicts when two group members' preferences diverge on the same plan.

Sana Wajid

@mcarmonas Great point, this is exactly where most systems break.

In group contexts, ASI:One doesn’t try to merge everyone into a single shared memory. Each person’s preferences stay scoped to them, and the system works more like a coordinator than a decision-maker.

So when preferences diverge, it doesn’t overwrite or average them out. It surfaces the differences, keeps track of who prefers what, and helps move the group toward a resolution, whether that’s suggesting options, highlighting trade-offs, or adapting the plan.

The goal is to preserve individual context while still making coordination smoother, not to force a single version of truth.
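The "surface, don't merge" behavior can be sketched in a few lines. A hypothetical illustration, not ASI:One's actual conflict logic: preferences stay scoped per person, and the coordinator groups them so divergence is visible rather than averaged away:

```python
# Hypothetical sketch of divergence handling: preferences stay scoped per
# person, and the coordinator surfaces conflicts instead of merging them.

def diverging(prefs: dict[str, str]) -> dict[str, list[str]]:
    """Group members by the option they prefer; >1 group means divergence."""
    by_option: dict[str, list[str]] = {}
    for person, option in prefs.items():
        by_option.setdefault(option, []).append(person)
    return by_option

prefs = {"alice": "italian", "bob": "sushi", "carol": "italian"}
groups = diverging(prefs)
# Two camps: the AI can now surface the trade-off rather than pick a winner.
print(groups)  # {'italian': ['alice', 'carol'], 'sushi': ['bob']}
```

Because no preference is overwritten, the system can later propose options ("Italian Friday, sushi next time") with each person's actual position intact.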

Natalia Iankovych

Can you analyze LinkedIn posts for meaning?

Sana Wajid

@natalia_iankovych Not sure I fully got your question, are you asking if ASI:One can analyze LinkedIn posts for meaning, or something broader around how it interprets content?

Happy to answer, just want to make sure I’m addressing the right thing.

Natalia Iankovych

@sana_wajid Yes. My potential clients often look for contractors on LinkedIn. I need to find those posts. Keyword search doesn’t work - there can be thousands of posts per day. It needs to be analyzed by AI and highlight only the posts where a potential client is looking for a contractor.

Sana Wajid

@natalia_iankovych Yes, that use case makes sense.

This is exactly the kind of thing a LinkedIn agent can help with: go beyond keyword matching, read posts for intent, and surface the ones where someone is actually looking for a contractor or vendor.

So instead of filtering for words, the agent can look for meaning: hiring intent, urgency, project need, budget signals, role fit, and whether it looks like a real opportunity. Here's an example of one of the LinkedIn agents that's active on ASI:One (you can call it directly on ASI:One using @linkedin-lead-agent): https://agentverse.ai/agents/details/agent1q24rcxnx2ds9t4l7r64wpfpawc9pagqa0stz0qfyh0yxd94en3y8q4zw93h/profile
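A rough sketch of what "reading for intent" means versus keyword search. The signal phrases and weights below are made up for illustration; a production agent would score posts with an LLM, but the idea of combining weighted intent signals instead of matching one term is the same:

```python
# Hypothetical sketch of intent scoring beyond single-keyword matching.
# Phrases and weights are illustrative only.

SIGNALS = {
    "looking for": 2,   # direct hiring intent
    "recommend": 1,     # asking for referrals
    "contractor": 2,
    "budget": 1,
    "asap": 1,          # urgency
}

def intent_score(post: str) -> int:
    """Sum the weights of every hiring-intent signal found in the post."""
    text = post.lower()
    return sum(w for phrase, w in SIGNALS.items() if phrase in text)

post = "Looking for a contractor to rebuild our site, budget approved, need it ASAP"
print(intent_score(post))  # 6
```

A post mentioning a keyword in passing scores low, while one stacking intent, budget, and urgency signals rises to the top, which is the filtering behavior described above.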

Natalia Iankovych

@sana_wajid Thanks. I've just tried it out but it says it's outside what the service can help with.