Kevin William David

Pieces Long-Term Memory Agent - The first AI that remembers everything you work on

Ever wish you had an AI tool that remembered what you worked on, with whom, and when, across your entire desktop? Pieces Long-Term Memory Agent captures, preserves, and resurfaces historical workflow details, so you can pick up where you left off.

Replies
Theo Garcia

Pieces Long-Term Memory Agent is an incredibly powerful tool for managing information and long-term memory! With its intuitive ability to organize and connect information, Pieces makes it easy for me to access and share the information I need, even if it's from years ago.


I'm blown away by Pieces' seamless integration with various apps and services, making it incredibly user-friendly. With Pieces, I can save time and boost my productivity in managing information.


If you're looking for a tool that can help you manage information and long-term memory more effectively, then Pieces Long-Term Memory Agent is the perfect choice!

Ali Mustufa Shaikh

Thanks, @theo_garcia, for your kind words and support. It means a lot to us!

Jason Torres

And yet another awesome drop from the @Pieces for Developers crew! Cheers

Ali Mustufa Shaikh

Thanks @jason_torres2 for your kind words, we appreciate it a lot!

Ellie

@jason_torres2 Thanks for all of the support Jason!! Can't wait to see Torc launch on Product Hunt sometime in the future πŸ‘€

Lucas Josefiak

Life-changing dev tool!! I personally met the founders last year at a Flutter conference in NYC. They are as amazing as their product! We had such a fun night and a lot of drinks (πŸ˜…πŸ˜…) together. Here, you can still read my first feedback from September last year: https://x.com/lucasjosefiak/status/1837809165311279283?s=46


It has been motivating to watch you improve Pieces over the past months. Well done, Tsavo and team!!



Mark Widman

@lucas_josefiak Thanks for the support! This was a night that I will never forget πŸ˜‚

Lucas Josefiak

@mark_at_pieces I'm surprised that you can remember that night πŸ˜‚

Olive Sen
Personalized and hyper-customisable software is now possible with AI. More power to you, keep building awesome products.
Ellie

@olive_sen Thank you for the support Olive! πŸ™

Savvas Konsta
Congratulations! This will be a very helpful tool!
Ellie

@mrrabbar Thank you for the support Savvas! We really appreciate it πŸ™

Mahati Singh

Looks like a great tool for developers. Great work!!

Ellie

@mahatisingh Thank you for supporting our launch Mahi! And actually, non-developers can use it too. Anyone working on research, projects, or creative work can benefit from having an external memory that keeps track of what matters. πŸ™

If you end up trying it out, we would love your feedback!

Todd Sutton

I have been using it for the last 5 days. I installed the update this morning, turned on the LTM2 workstream activities, connected it with GitHub, and asked the Pieces Copilot to assist with planning an NVIDIA Hackathon submission. It leveraged my workstream activity and included my NVIDIA AI Workbench with NIMS Anywhere project, with an openUSD kit application for viewing openIFC projects. This was based on the different applications I had been setting up environments for and a coding project I have been working on. Very impressive how it uses the workstream activity to add to the context. I also like the ability to use local-only models when needed; having the flexibility to run offline, or hybrid local and cloud, is very useful.

Ellie

@todd_sutton That’s really cool! Love hearing how you’re using it for something like a Hackathon submission 🀯

Thank you so much for all the feedback! If in the future you have any more feedback, we would love to hear about it. Feel free to join our Discord community since that is the easiest place to get in touch with our team: https://discord.gg/vTBBscy6Er

Shubham

Does it have a memory limit like ChatGPT?

Jim Bennett

@khidwalia07 Not sure what you mean here.

We support a range of LLMs, so there are limits based on the context window size of the LLM you choose. If you pick an LLM with a smaller context window, we have to limit what gets sent as context. We have a smart RAG system that extracts the relevant context from the LTM, from any files or folders you choose, and from the chat history, and sends that. So you can't, for example, send a million-line code project in your prompt, but our RAG system can cope with massive projects.

If you are referring to how big the Pieces Long-Term Memory is, then we store 9 months of memories.
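As a rough illustration of the context-window budgeting Jim describes (a hypothetical sketch only, not Pieces' actual implementation; the chunk scores, token heuristic, and function names are invented), a RAG system might greedily pack the highest-relevance chunks that fit the chosen model's window:

```python
# Hypothetical sketch of fitting retrieved RAG context into an LLM's
# context window. Not Pieces' real code; names and numbers are invented.

def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(text) // 4)

def pack_context(chunks: list[tuple[float, str]], budget_tokens: int) -> list[str]:
    """Greedily keep the highest-scoring chunks that fit the budget."""
    packed, used = [], 0
    for score, text in sorted(chunks, key=lambda c: c[0], reverse=True):
        cost = estimate_tokens(text)
        if used + cost <= budget_tokens:
            packed.append(text)
            used += cost
    return packed

# (score, text) pairs as a retriever might rank them:
chunks = [
    (0.9, "def main(): ..." * 50),        # highly relevant source file
    (0.4, "README boilerplate " * 200),   # less relevant and large
    (0.7, "recent chat history"),
]
context = pack_context(chunks, budget_tokens=800)
# The large low-relevance chunk is dropped; the other two fit the budget.
```

A smaller context window just means a smaller `budget_tokens`, so less material survives the cut even though the full project can still be indexed.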

Charlie

This is such a cool product. I think it's going to completely change how I work!
Congrats on the launch! πŸ₯³

Ellie

@brein_1942 Thank you for checking it out and for the support Charlie! We would love to hear how it ends up impacting your workflow πŸ™

Faizan Jan

Very cool product. I realise you have an SDK we can build on top of, but I was curious how (if at all) you handle caching of prompts. For some use cases prompts can be repetitive, and caching them can make a product significantly more efficient and affordable.

Jim Bennett

@faizanjan_ Thanks for supporting us! We don't use prompt caching - the system prompts we generate contain the relevant context for your calls, including long-term memories or file and folder context. This means we don't have consistent system prompts that would benefit from prompt caching.
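To see why ever-changing system prompts defeat caching, note that prompt caches generally only pay off when the same prompt (or prompt prefix) repeats exactly. This toy cache (purely illustrative; `PromptCache` is an invented class, not any provider's API, and real provider caches match token prefixes rather than exact hashes) shows static prompts hitting and dynamic ones missing:

```python
# Toy prompt cache keyed on the exact prompt text. Illustrative only:
# real LLM provider caches work on token prefixes, not exact hashes.
import hashlib

class PromptCache:
    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    def get_or_compute(self, prompt: str, compute):
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key in self._store:
            self.hits += 1
        else:
            self.misses += 1
            self._store[key] = compute(prompt)
        return self._store[key]

cache = PromptCache()

# A fixed system prompt is computed once, then served from cache...
static_prompt = "You are a helpful assistant."
for _ in range(3):
    cache.get_or_compute(static_prompt, lambda p: len(p))

# ...but a prompt rebuilt with fresh memories each call never repeats,
# so every call is a cache miss.
for i in range(3):
    cache.get_or_compute(f"Context: memory snapshot #{i}", lambda p: len(p))
```

In this sketch the static prompt produces 1 miss then 2 hits, while the three dynamic prompts are 3 misses, which is the situation Jim describes: context injected per call leaves nothing stable to cache.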
