Pieces Long-Term Memory Agent - The first AI that remembers everything you work on
Ever wish you had an AI tool that remembered what you worked on, with whom, and when, across your entire desktop? Pieces Long-Term Memory Agent captures, preserves, and resurfaces historical workflow details so you can pick up where you left off.
If your current AI assistant was a real person, you'd FIRE them.
No idea what you worked on yesterday
Makes you manually give them all your information
And even forgets your name!!!
Cutting edge LLMs (as great as they are) have the memory of a goldfish. 🐟
Pieces is the first AI that remembers EVERYTHING you do.
"Who asked me about that API bug last quarter and how did we solve it?" - is a question that would make ChatGPT break down into tears. Pieces can answer it, show you the links you clicked, find the emails where you talked about it, and summarize the entire thing so you can jump right back into your work with ZERO context switching.
Stop wasting time using assistants that don't grow with you.
All your context, all your memories, all your AI models - all in one place.
Really intrigued by Pieces' approach to long-term memory! While the memory chunking system looks promising, I'm curious about how you handle memory contamination issues. When multiple conversations or contexts overlap, how do you maintain clarity and prevent incorrect information bleed?
Also wondering about the memory cleanup process - is there a way to identify and remove potentially contaminated or outdated memory blocks? Would love to hear more about your solution to these challenges, as memory pollution has been a significant hurdle in long-term memory implementations.
@zongze_x thanks for the great technical question; you are spot on: memory contamination is complex and a core challenge in designing features like this one. I suspect you can appreciate that writing a full answer here is tough, but it would make an excellent topic for a technical article (watch this space). At a high level, our approach to identifying and minimizing contamination happens at three levels:
On entry: we are very selective about what is added to the LTM. By analysing where the user's focus is and how what they are currently focusing on relates to the big picture of their workstream, we can prevent a lot of corruption at the source.
On roll-up: when we roll memories up into periodic summaries, our agent looks for narratives and themes across workflow elements. When we find contradictions, we resolve them by comparing those narratives, cutting out random chatter and keeping the focus on core tasks.
At query time: when you interact with your workstream data, through the copilot or the summaries, those interactions are used to infer which aspects are useful and truthful and which are not, which allows us to elevate quality information whilst demoting the noise.
Additionally, signals from all of these levels are used to periodically clean contamination from your stored memories. It's a work in progress, but I have found the LTM to be much more resistant to context corruption than other solutions out there.
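To make the three levels concrete, here is a toy Python sketch of the idea. This is not the actual Pieces implementation; the class names, thresholds, and the eviction rule are all hypothetical, chosen only to illustrate entry filtering, roll-up resolution, and query-time feedback:

```python
from dataclasses import dataclass


@dataclass
class Memory:
    topic: str
    content: str
    relevance: float       # how tied the event is to the active workstream
    confidence: float = 0.5


class ToyLTM:
    """Toy long-term memory store with three contamination filters."""

    ENTRY_THRESHOLD = 0.4  # hypothetical admission cutoff

    def __init__(self):
        self.memories: list[Memory] = []

    # Level 1 -- on entry: drop low-relevance events at the source.
    def ingest(self, memory: Memory) -> bool:
        if memory.relevance < self.ENTRY_THRESHOLD:
            return False
        self.memories.append(memory)
        return True

    # Level 2 -- on roll-up: summarize a topic, dominant narrative first,
    # so contradictions are resolved in favor of the repeated story.
    def roll_up(self, topic: str) -> list[str]:
        contents = [m.content for m in self.memories if m.topic == topic]
        return sorted(set(contents), key=contents.count, reverse=True)

    # Level 3 -- at query time: user interactions promote or demote memories,
    # and a periodic cleanup evicts anything demoted below a floor.
    def feedback(self, topic: str, useful: bool) -> None:
        delta = 0.1 if useful else -0.2
        for m in self.memories:
            if m.topic == topic:
                m.confidence = min(1.0, max(0.0, m.confidence + delta))
        self.memories = [m for m in self.memories if m.confidence > 0.1]
```

In this toy model, repeated negative signals at query time eventually evict a contaminated memory entirely, while the entry threshold keeps random chatter from ever being stored.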
Hey 👋 super cool launch. It's a beautiful coincidence: I just finished a paper on long-term memory as a weight in a new form of transformer architecture. The paper is still in review, but your launch is fun and practical.
@themisty Hey Krishna! Thank you for checking out the launch! Your paper sounds really interesting, I'm sure a lot of people from my team would enjoy reading it. Where will it be published?
@elliezub Thanks, will definitely ping the team. In the meantime, I definitely wouldn't mind if you guys tried my product, Nonilion; I'd welcome some solid feedback 🙌
Really impressive product @tsavo_at_pieces + team!
I enabled LTM-2; does that mean there is no need to install plugins, since it uses the universal screen recording interface? Or would installing the VS Code plugin give better results for memory capture?
Hey @tleyden, thanks for your support! Means a lot!
PiecesOS captures your information and works with a local Couchbase database to store it. Plugins allow you to bring this memory into your IDE of choice. I usually code in VS Code and Chrome, so I have both plugins.
While using the plugin you can provide more context by adding the codebase, etc. Hope this clarifies your question!
@tsavo_at_pieces @tleyden you don't need to install the plugins for memory capture; Pieces runs just like magic. Still, the VS Code extension makes working with Pieces easier, so I would install it anyway.
Hey @tleyden , thanks for the great question and your awesome intuition—you’re spot on!
With LTM-2 enabled, our system already leverages uniform screen segmentation (we never actually record any video... too heavy to process 😅), vision processing, and accessibility APIs, so it works incredibly well out of the box. That said, installing plugins (like the VS Code one) can provide even richer data: think deep stack traces, AST details, and more discrete file paths.
Believe it or not, we're already leveraging some integrations that send what we call “Tier 3” data, which gets blended with the lower-level visual and accessibility data and interconnected on-device through a couple of classic algorithms. That said, both data sources, direct-from-plugin and uniform at the OS level, are unique and additive, so you'll definitely be seeing us continue to invest in the plugins 🌟
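For intuition, the tier blending described above might look something like this toy sketch: observations from different tiers land on one timeline, and when several tiers describe the same moment and source, the richest one wins. All names here (`Event`, the tier numbering, `blend`) are hypothetical, not the actual Pieces algorithms:

```python
from dataclasses import dataclass
from itertools import groupby


@dataclass(frozen=True)
class Event:
    timestamp: int   # seconds since session start
    source: str      # window/app the event came from
    tier: int        # 1 = vision, 2 = accessibility, 3 = plugin (richest)
    detail: str


def blend(events: list[Event]) -> list[Event]:
    """Merge multi-tier events into one timeline, keeping the richest
    observation when several tiers describe the same moment and source."""
    key = lambda e: (e.timestamp, e.source)
    ordered = sorted(events, key=key)  # groupby needs sorted input
    return [max(group, key=lambda e: e.tier)
            for _, group in groupby(ordered, key=key)]
```

For example, a Tier 3 stack trace from a plugin would replace the Tier 1 vision reading of the same VS Code window, while events from other windows pass through untouched.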
Anyway, hopefully that answers your question, and thanks again for your support!
Cheers,
-Tsavo
This is awesome! Long-term memory in AI is definitely something that’s been missing, and you guys nailed it. Huge congrats to the team for making it happen—this will really change how we work with AI.
This launch sounds incredibly promising! The concept of an AI with long-term memory definitely addresses a major pain point for developers. The ability to recall important details from past projects could be a game-changer for maintaining workflow and enhancing productivity.
Congrats on the launch! Best wishes and sending lots of wins :) @tsavo_at_pieces
Congratulations! This looks like a very valuable addition to the community with a very elegant execution.
As someone who is currently trying to take care of two elderly adults, I can't help but wonder if there is an adaptable version for use in elder care. I could see this concept adapted for the marketplace of aging professionals who need reminders on their task updates and procedures as well.
@terrence_kelleman That's a great question. Currently we are focusing on tech professionals such as developers, support, DevOps, and so on, but the technology can, and most likely will, be adapted for all knowledge workers. Will it work for elder care, or for consumers in general? Because Pieces is intrinsically tied to the activities you are doing on your computer, it can help wherever those activities involve a computer. If these folks are using a desktop or laptop to plan activities, get medical updates, and so on, then Pieces can summarize these: "When am I meeting Susan for lunch this week?" "What was the link my doctor shared on best practices with this medication?"
Thank you for your support, @terrence_kelleman. I have noticed that people from various domains are using Long-Term Memory for their specific use cases. Personally, I use it to manage my schedule, since I deliver many workshops and communicate through multiple channels; Pieces helps me easily find available time slots.
I can see how this could be beneficial for many others in numerous ways! Do share your feedback if you are using it for a different use case!
The logo is awesome. The software looks fantastic for a dev like me who has zero memory.
Quick question: Are there any limitations due to monitors or similar? I’m using an ultrawide 49’’ + 27’’.
In the video, I saw an ultrawide, so I suppose that the software has been tested for that. 😂
@stemonte I've not tried that large, I "only" have a 39" ultra wide, but it works fine for me! If you want to lend me your setup for a while I'll be happy to test for you 😁
@stemonte Be aware that the larger the monitor, the more system resources will be used. But Pieces typically uses only 1-2% of CPU, so the increase will be minimal. We also only extract memories from the current active window, to keep them more human-centric, so using multiple monitors has no impact. I'm also guessing that if you have large monitors you are probably not running a low-spec 2016 IBM ThinkPad 😝, so your CPU impact will be negligible.
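The active-window rule above is why extra monitors cost nothing: only focus changes trigger extraction, not screen area. A purely illustrative sketch (the real capture pipeline and OS-specific active-window APIs are not shown; `capture_session` is a hypothetical name):

```python
def capture_session(window_titles):
    """Record a memory only when the active window changes.

    `window_titles` stands in for successive polls of the OS's
    'active window' API; how many monitors are attached never
    matters, because only one window is active at a time.
    """
    memories = []
    last = None
    for title in window_titles:
        if title != last:       # focus changed -> extract one memory
            memories.append(title)
            last = title
    return memories
```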
@jim_bennett1 I’m gonna try it myself 😂😂
And yes, I have a Mac Studio M2, so no problem for the CPU.
Replies
Pieces for Developers
@jackross Exactly! LTM-2 is the major upgrade in AI assistants that we have all been waiting for. Thank you for the support Jack!
Pieces for Developers
Can’t wait to integrate Workstream Activities into my workflow! I’ll never need to use the ChatGPT interface again 😎
Pieces for Developers
@sam_parks_at_pieces Workstream Activities are a game-changer for sure 😎
Nonilion
@elliezub Dear Ellie, I am excited to have interested readers already :) it will be on arXiv, fingers crossed. Still under heavy review lol.
Pieces for Developers
@themisty Sounds great! Looks like we are connected on LinkedIn now, so hopefully you will post about it once it's published. Can't wait to read it!
Pieces for Developers
@themisty Can't wait to read your paper Krishna! Thanks for the support as always!!
Pieces for Developers
@bishoy_hany1 Same! The possibilities are really endless with how it can improve your workflow. Thank you for the support Bishoy!
Pieces for Developers
@henry_habib Thank you very much! Do try it out and let us know your favorite prompt!
@henry_habib Thanks for all the support Henry! Can't wait to see how LTM changes the way you use AI!
Pieces for Developers
@whatshivamdo Really appreciate the support Shivam! LTM is definitely the biggest productivity boost I've gotten from LLMs in a loooong time!
Pieces for Developers
@whatshivamdo Thank you for the support Shivam! 🤝