Create AI agents that work 24/7, cost 90% less, and don't hallucinate on data—powered by Agent MAX. Automate daily repetitive tasks with the world's most advanced AI agent engine built for autonomous work. Hundreds of integrations. 90% cheaper. Near-zero hallucinations. 100x memory.
Replies
Agnes AI
Do Agents sleep? Never! Just curious - what are the most suitable tasks for Incredible, where it could perform best?
Incredible
@cruise_chen hey Cruise - we usually simplify it down to any workflow or SOP (standard operating procedure) that contains 1 or 2 steps requiring intelligence, e.g. categorizing, summarizing, translating, creative writing, or decision making based on some data.
Would love to hear more feedback as you try it!
Wow, Incredible sounds amazing! Love the promise of near-zero hallucinations - that's a game changer. How do you handle complex integrations with legacy systems given the 100x memory?
Incredible
@jaydev13 Hey Jay! A lot of legacy systems can't be used directly, since our agents connect to APIs through our LFA system (Language Friendly APIs). However, there are plenty of smart ways to work around this - e.g. RPA wrappers that you turn into an API.
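To picture the RPA-wrapper workaround: you hide the legacy system behind a small HTTP endpoint, so an API-driven agent can call it like any other service. A minimal stdlib-only sketch (the route and `run_legacy_export` are hypothetical names, not part of Incredible's product; in practice the function body would drive the legacy UI via an RPA tool):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_legacy_export(customer_id: str) -> dict:
    # Stand-in for an RPA step that drives the legacy UI and scrapes
    # the result. Here it just returns canned data for illustration.
    return {"customer_id": customer_id, "balance": "1240.50"}

class LegacyWrapper(BaseHTTPRequestHandler):
    def do_GET(self):
        # Route shape: /legacy/customers/<id> -> JSON from the RPA step.
        customer_id = self.path.rstrip("/").split("/")[-1]
        body = json.dumps(run_legacy_export(customer_id)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence per-request logging for the sketch.
        pass
```

Starting `HTTPServer(("127.0.0.1", 8080), LegacyWrapper).serve_forever()` then gives the agent a plain JSON API to call, with the screen automation hidden behind it.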
@philip_alm_at_incredible Fair enough - thanks for explaining the internals. I will explore more. I'm hunting my product next week, would love to have you try it out!
Haveyoubeenhere
Could you give an example of something that can be made with 100 credits?
And am I right in understanding that:
- AI creates the automation
- Actually invoking the automation uses minimal (or no?) credits (except if it needs the LLM to do something)?
Incredible
@martibis You are close to being 100% correct!
Credits are subtracted when the Agent runs.
Since our Agents use our own Agentic AI models (with Agent MAX), they run very cheaply compared to most models - and a large amount of data in a workflow doesn't really impact cost much.
100 credits gets you through a ton of flows. The easiest way to figure it out is to run your agent once and then check the Settings -> Billing tab, which shows exactly how many credits each agent has consumed.
Some examples I've tested:
~ process 1,000 CRM records and update them: ~0.4 credits per run = you can do this daily for months with 100 credits
~ auto-summarize all incoming emails and send them to Slack: ~0.2 credits per run
~ auto-generate posts for LinkedIn/X + analyze previous performance: ~0.15 credits per run
Again, it's easiest to just try it and see!
We've tried really hard to make this viable at scale!
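As a rough sanity check on those numbers (the per-run costs above are the thread's own estimates; the helper below is just illustrative arithmetic):

```python
def runs_per_budget(credits_budget: float, credits_per_run: float) -> int:
    """How many agent runs a credit budget covers at a given per-run cost."""
    return round(credits_budget / credits_per_run)

# The CRM example above: ~0.4 credits per run, so 100 credits covers
# about 250 daily runs - roughly eight months on a daily schedule.
print(runs_per_budget(100, 0.4))   # 250
print(runs_per_budget(100, 0.2))   # 500
```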
The 'stuck detection' feature is honestly my favorite part. Nothing hurts more than checking your logs and seeing an agent burned through credits because it kept retrying a failed step all night. Quick question though, when it recovers from a loop, does it keep the full context history or does it summarize to save tokens? We handle massive documents at SquarePact, so I know how tricky that memory management is to get right.
Incredible
@govindsajit It's def. a good one! Lots of developers struggle with the system design around the Agent. It's hard to build with LLMs - it usually takes months or years of trial & error. We packed all our learnings into it.
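Incredible hasn't published how its stuck detection works (or how it handles context history on recovery, per the question above), but the general pattern - notice that the same error keeps repeating and break out instead of retrying all night - can be sketched like this. Everything here is a generic illustration, not the product's implementation:

```python
from collections import deque

def run_with_stuck_detection(step, max_repeats=3, max_attempts=10):
    """Retry `step`, but abort early if the same error keeps repeating.

    `step` is a zero-arg callable that raises on failure. We keep a short
    window of recent error messages; if one message repeats `max_repeats`
    times in a row, the agent is likely stuck in a loop, so we stop
    burning credits instead of retrying indefinitely.
    """
    recent_errors = deque(maxlen=max_repeats)
    for _ in range(max_attempts):
        try:
            return step()
        except Exception as exc:
            recent_errors.append(str(exc))
            if (len(recent_errors) == max_repeats
                    and len(set(recent_errors)) == 1):
                raise RuntimeError(
                    f"stuck: {recent_errors[-1]!r} repeated {max_repeats}x")
    raise RuntimeError(f"gave up after {max_attempts} attempts")
```

A transient failure that clears after a retry or two completes normally; only a repeating identical failure trips the stuck check.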
Great user experience and product! Definitely, it makes a huge difference. Great job, Team Incredible!
Incredible
@george_gevorkian thank you a ton George! Would love to hear more of your feedback :)
SyncSignature
Let's go @nikola_plantic_tomasic
Incredible
@neelptl2602 Highly appreciated!
Incredible
Shoutout to the top three creators who have published their Agents in the community!
This hits different. Actual autonomous agents that don't hallucinate could be a game changer for enterprise ops. Most tools just chat, you're building actual workers.
Key q - how does this handle context for domain-specific tasks like ITSM or customer support? Can you train on proprietary data? Also, how reliable is the task completion rate in production?
Very interesting! 🚀
Incredible
@imraju hey! super question :)
So for domain-specific stuff like ITSM or support, agents connect directly to your apps and work with your actual data during execution. No generic outputs. Deep Memory gives them 10-100x more context so they can handle big datasets and long workflows without losing track.
On proprietary data, we don't train on any of it. Ever. Everything runs in isolated sandboxed environments, gets processed for the task, then auto-deleted. For reliability, that's really the core of what we built. Tasks aren't marked done until every step is verified, agents self-heal when things break.
More detail here if you want to dig in: docs.incredible.one/agents/agent-max
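The "not done until every step is verified" idea can be sketched generically. Incredible's actual verification mechanism isn't public; `Step`, `run`, and `verify` below are hypothetical names illustrating the pattern of pairing each action with an independent check:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[], object]          # performs the step
    verify: Callable[[object], bool]   # independently checks its result

def run_task(steps: list[Step]) -> str:
    """Mark the task done only after every step's result passes its check."""
    for step in steps:
        result = step.run()
        if not step.verify(result):
            raise RuntimeError(f"step {step.name!r} failed verification")
    return "done"
```

The point of the design is that a step claiming success isn't enough - its output has to satisfy a separate predicate before the task advances.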
Congrats on the launch! The engineering behind reducing hallucinations and boosting memory is seriously impressive, especially for tasks that span hours instead of minutes. How did you approach testing reliability across so many different integrations?
Incredible
@vik_sh Hey Viktor! We do automatic testing across all apps.
We built something we call "LFAs" (Language Friendly APIs). It lets us convert MCPs or any standard API format, e.g. OpenAPI, into a very LLM-friendly representation. Then we can auto-test features, and an AI fixes incorrect params when the documentation was wrong.
This happens in the background continuously.
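Incredible hasn't published the LFA format, but the general idea - flattening a machine-oriented spec into plain-language operation descriptions an LLM can read - might look roughly like this minimal sketch over an OpenAPI-style dict:

```python
def flatten_openapi(spec: dict) -> list[str]:
    """Turn each OpenAPI operation into one plain-language line."""
    lines = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            params = ", ".join(
                f"{p['name']} ({p.get('schema', {}).get('type', 'any')})"
                for p in op.get("parameters", [])
            ) or "none"
            lines.append(
                f"{method.upper()} {path}: {op.get('summary', 'no summary')}. "
                f"Parameters: {params}."
            )
    return lines

# Tiny hand-written example spec:
spec = {
    "paths": {
        "/users/{id}": {
            "get": {
                "summary": "Fetch a user by id",
                "parameters": [{"name": "id", "schema": {"type": "string"}}],
            }
        }
    }
}
# flatten_openapi(spec)
# -> ["GET /users/{id}: Fetch a user by id. Parameters: id (string)."]
```

A representation like this is also what makes the auto-testing loop described above feasible: each flattened line is a self-contained prompt-sized unit that can be exercised and repaired independently.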
I love the "vibe-automation" trend! 🔥
Incredible
@pasha_tseluyko 💯