Pieces Long-Term Memory Agent - The first AI that remembers everything you work on
Ever wish you had an AI tool that remembered what you worked on, with whom, and when, across your entire desktop? Pieces Long-Term Memory Agent captures, preserves, and resurfaces historical workflow details, so you can pick up where you left off.
Replies
Pieces Long-Term Memory Agent is an incredibly powerful tool for managing information and long-term memory! With its intuitive ability to organize and connect information, Pieces makes it easy for me to access and share the information I need, even if it's from years ago.
I'm blown away by Pieces' seamless integration with various apps and services, making it incredibly user-friendly. With Pieces, I can save time and boost my productivity in managing information.
If you're looking for a tool that can help you manage information and long-term memory more effectively, then Pieces Long-Term Memory Agent is the perfect choice!
Pieces for Developers
Thanks, @theo_garcia, for your kind words and support. It means a lot to us!
And yet another awesome drop from the @Pieces for Developers crew! Cheers
Pieces for Developers
Thanks @jason_torres2 for your kind words, we appreciate it a lot!
Pieces for Developers
@jason_torres2 Thanks for all of the support Jason!! Can't wait to see Torc launch on Product Hunt sometime in the future!
Life-changing dev tool!! I personally met the founders last year at a Flutter conference in NYC. They are as amazing as their product! We had such a fun night and a lot of drinks together. Here, you can still read my first feedback from September last year: https://x.com/lucasjosefiak/status/1837809165311279283?s=46
It has been motivating to watch you improve Pieces over the past months. Well done, Tsavo and team!!
Pieces for Developers
@lucas_josefiak Thanks for the support! This was a night that I will never forget.
@mark_at_pieces I'm surprised that you can remember that night!
NYX
Pieces for Developers
@olive_sen Thank you for the support Olive!
Oasi
Pieces for Developers
@mrrabbar Thank you for the support Savvas! We really appreciate it.
Looks like a great tool for developers. Great work!!
Pieces for Developers
@mahatisingh Thank you for supporting our launch Mahi! And actually, non-developers can use it too. Anyone working on research, projects, or creative work can benefit from having an external memory that keeps track of what matters.
If you end up trying it out, we would love your feedback!
I have been using it for the last 5 days. I installed the update this morning, turned on the LTM2 workstream activities, connected it with GitHub, and asked the Pieces Copilot to assist with planning an NVIDIA Hackathon submission. It leveraged my workstream activity and included the NVIDIA AI Workbench with NIMS Anywhere project, with an openUSD kit application for viewing openIFC projects. This was based on the different applications I was setting up environments for and a coding project I have been working on. Very impressive how it uses the workstream activity to add to the context. I also like the ability to use local-only models when needed. Having the flexibility to run offline, or hybrid local and cloud, is very useful.
Pieces for Developers
@todd_sutton That's really cool! Love hearing how you're using it for something like a Hackathon submission!
Thank you so much for all the feedback! If in the future you have any more feedback, we would love to hear about it. Feel free to join our Discord community since that is the easiest place to get in touch with our team: https://discord.gg/vTBBscy6Er
Does it have a memory limit like ChatGPT?
Pieces for Developers
@khidwalia07 Not sure what you mean here.
We support a range of LLMs, so there are limits based on the context window size of the LLM, which varies depending on which one you choose. This means we have to limit what gets sent as context if you choose an LLM with a smaller context window. We have a smart RAG system that extracts context from the LTM, from any files or folders you choose, and from the chat history, and sends that. So you can't, for example, attach a million-line code project to your prompt, but our RAG system can cope with massive projects.
If you are referring to how big the Pieces Long-Term Memory is, then we store 9 months of memories.
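The idea behind that answer can be sketched in a few lines. This is a hypothetical illustration, not Pieces' actual implementation: the function names (`count_tokens`, `assemble_context`) and the greedy packing strategy are assumptions, and the token counter is a crude chars/4 approximation. The point it demonstrates is that a model with a smaller context window simply receives fewer retrieved chunks, while the retrieval itself can still draw from an arbitrarily large corpus.

```python
# Hypothetical sketch of context-window-aware RAG assembly.
# All names here are illustrative, not part of any real Pieces API.

def count_tokens(text: str) -> int:
    # Crude approximation: roughly 4 characters per token.
    return max(1, len(text) // 4)

def assemble_context(chunks, context_window: int, reserve_for_answer: int = 1024) -> str:
    """Greedily pack the highest-scored chunks into the model's token budget."""
    budget = context_window - reserve_for_answer
    selected = []
    for score, chunk in sorted(chunks, key=lambda c: c[0], reverse=True):
        cost = count_tokens(chunk)
        if cost <= budget:
            selected.append(chunk)
            budget -= cost
    return "\n\n".join(selected)

# Scored chunks as a retriever might rank them: long-term memories,
# file snippets, and chat history.
chunks = [
    (0.9, "memory: ..."),
    (0.7, "file snippet: ..."),
    (0.4, "chat history: ..."),
]

# A model with a small window gets fewer chunks than one with a large window;
# the underlying corpus size is irrelevant to what gets sent.
small = assemble_context(chunks, context_window=48, reserve_for_answer=40)
large = assemble_context(chunks, context_window=4096)
```

In practice a real system would use the model's own tokenizer and a smarter ranking, but the budget-then-pack shape stays the same.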
This is such a cool product. I think it's going to completely change how I work!
Congrats on the launch!
Pieces for Developers
@brein_1942 Thank you for checking it out and for the support Charlie! We would love to hear how it ends up impacting your workflow.
Equip AI Interview
Very cool product. I realise you have an SDK which we can use to build on top of, but I was curious about how (if at all) you handle caching of prompts. For some use cases prompts can be repetitive, and caching them can make a product significantly more efficient and affordable.
Pieces for Developers
@faizanjan_ Thanks for supporting us! We don't use prompt caching - the system prompts we generate contain the relevant context for your calls, including long-term memories or file and folder context. This means we don't have consistent system prompts that would benefit from prompt caching.
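The trade-off in that answer can be made concrete with a toy exact-match cache. This is a minimal sketch under stated assumptions: real provider-side prompt caching works on repeated token prefixes rather than whole-response lookup, and every name here (`cached_completion`, `fake_llm`) is hypothetical. It shows why a system prompt rebuilt with fresh memory context on every call defeats caching: the cache key never repeats.

```python
# Toy exact-match response cache, illustrating why varying system prompts
# get no benefit from prompt caching. Names are illustrative only.
import hashlib

cache = {}

def cached_completion(system_prompt: str, user_prompt: str, llm_call):
    """Return a cached answer when this exact (system, user) pair was seen before."""
    key = hashlib.sha256((system_prompt + "\x00" + user_prompt).encode()).hexdigest()
    if key in cache:
        return cache[key]  # cache hit: the model is not called again
    result = llm_call(system_prompt, user_prompt)
    cache[key] = result
    return result

calls = []
def fake_llm(system, user):
    calls.append((system, user))  # record every real model invocation
    return f"answer to {user!r}"

# A static system prompt repeats exactly, so the second request is a cache hit:
cached_completion("You are helpful.", "hi", fake_llm)
cached_completion("You are helpful.", "hi", fake_llm)

# A system prompt rebuilt with fresh long-term-memory context differs on every
# call, so each request produces a new key and caching buys nothing:
cached_completion("You are helpful. [memory @ 09:01]", "hi", fake_llm)
cached_completion("You are helpful. [memory @ 09:02]", "hi", fake_llm)
```

Running this, only three of the four requests reach the model: the static prompt deduplicates, the context-bearing ones never do, which matches the reasoning in the reply above.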