MrLeap 3 hours ago

I've been waiting for some paper along the lines of "a shallow tree of key/values is all you need" to tackle model plasticity.

AI memory seems to be predominantly a tension between compression and lookup speed.

These large vectors are keys for lookups, a language of expression, and a means of compression. Learning new things is always easier when you can map them back to something you already know. There's a page about forest-fire simulations in a scientific computing book I read back in college, more than a decade ago. I remember it viscerally because I've solved 100 different problems with it as the seed; I can barely remember anything else in the book. I don't remember it because I read it over and over; I remember it because it was useful and kept being useful.

If some new technique or idea is 90% similar to something I already know, I'll learn it easily. If it's 60% similar, I need to churn it around and put in a lot of learning effort. If it's 0%, it's noise from this angle.
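To make that concrete, a toy sketch of vectors-as-lookup-keys (numpy; the thresholds are invented to match my 90/60/0 intuition, not anything from the paper):

    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Toy memory: each entry is a (vector key, payload) pair.
    rng = np.random.default_rng(0)
    memory = [(rng.standard_normal(64), f"memory #{i}") for i in range(100)]

    def recall(query, memory):
        # Rank stored memories by similarity to the query vector.
        best_vec, best_payload = max(memory, key=lambda kv: cosine(query, kv[0]))
        sim = cosine(query, best_vec)
        if sim > 0.9:   # ~90% overlap: maps straight onto something known
            return f"easy: reuse {best_payload}"
        if sim > 0.6:   # partial overlap: worth churning on
            return f"effortful: adapt {best_payload}"
        return "noise: no usable anchor"  # ~0%: noise from this angle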

> 4. Establishes meaningful links based on similarities
> 5. Enables dynamic memory evolution and updates

Wondering how much compression occurs in #5.

  • manmal an hour ago

    Fast forward and B+ trees end up making sense (again).

manmal an hour ago

That’s cosmic. I was wondering about the feasibility of exactly this system today while driving to work. Zettelkasten seems formalized/rule-based enough to work, and the limited context of each Zettel should be ideal for LLMs.

I guess the biggest risk is that two related notes never get connected, so the agent can get stuck in a local optimum. Once a certain total number of notes has been reached, it becomes practically impossible to make all the connections, because there are just too many candidate pairs?
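A back-of-the-envelope sketch of why, assuming the usual top-k-with-threshold linking rule (the numbers here are made up, not from the paper):

    import numpy as np

    def link_new_note(note_vec, note_vecs, k=3, threshold=0.75):
        """Attach a new note to its k nearest neighbors above a cutoff.

        Anything related-but-below-threshold is silently dropped, which
        is exactly how two related notes fail to get connected. And with
        n notes there are n*(n-1)/2 candidate pairs, so exhaustive
        linking stops being feasible as the archive grows.
        """
        def cosine(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        sims = sorted(
            ((cosine(note_vec, v), i) for i, v in enumerate(note_vecs)),
            reverse=True,
        )
        return [i for s, i in sims[:k] if s >= threshold]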

fudged71 2 hours ago

Coming from the TFT (tools-for-thought) space of Roam/Tana/Obsidian… this is awesome to see.

Further still, it would be neat to see a hybrid system where humans and agents collaborate on building and maintaining a knowledgebase.

fallinditch 2 hours ago

This is interesting. Does this approach mean that it becomes possible to use conversations with an LLM to fine-tune the LLM in very specific ways?

In other words, if this AgenticMemory can give structure to unstructured conversations, and if that structure makes conversational feedback more useful for the model to learn from, then can we use it to continually refine the model to be better at our particular use case?
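For instance, the structured notes could be flattened into ordinary supervised fine-tuning pairs. The note fields below are assumptions for illustration, not AgenticMemory's actual schema:

    import json

    def notes_to_sft_rows(notes, path="finetune.jsonl"):
        # Hypothetical note shape: {"context", "question", "answer"}.
        # Each structured note becomes one fine-tuning example.
        with open(path, "w") as f:
            for note in notes:
                row = {
                    "prompt": f"{note['context']}\n\nQ: {note['question']}",
                    "completion": f" {note['answer']}",
                }
                f.write(json.dumps(row) + "\n")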

theodorewiles 3 hours ago

Yes, I have been thinking about this for some time. One other Zettelkasten feature I'm not sure was implemented here: you can have topic notes that just refer to / summarize other notes. It would be very interesting if these could be created autonomously by some kind of clustering algorithm based on the underlying links. Kind of like a summary of summaries.
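A rough sketch of how that could work, assuming each note already has an embedding (scikit-learn for the clustering; summarize_cluster stands in for an LLM call, and clustering the link graph directly would be closer to what I mean):

    import numpy as np
    from sklearn.cluster import AgglomerativeClustering

    def summarize_cluster(texts):
        # Stand-in for an LLM call: "write a topic note covering these notes".
        return "TOPIC NOTE: " + " / ".join(t[:40] for t in texts)

    def make_topic_notes(note_vecs, note_texts, distance_threshold=0.8):
        # Group notes by embedding distance; each cluster becomes one
        # autonomously created topic note (summary-of-summaries).
        labels = AgglomerativeClustering(
            n_clusters=None,
            distance_threshold=distance_threshold,
            metric="cosine",
            linkage="average",
        ).fit_predict(np.asarray(note_vecs))
        clusters = {}
        for label, text in zip(labels, note_texts):
            clusters.setdefault(label, []).append(text)
        return {label: summarize_cluster(texts) for label, texts in clusters.items()}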

Also curious whether there might be improvements if you don't rely on semantic similarity and instead run a pairwise "how related are these memories, and in what way?" LLM test over all pairs, like https://www.superagent.sh/blog/reag-reasoning-augmented-gene....
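Something like this, where llm is any text-completion callable (a hypothetical stand-in, not superagent's actual API); the obvious cost is O(n^2) LLM calls:

    from itertools import combinations

    def pairwise_relations(notes, llm):
        # Ask the model directly about every pair of memories instead of
        # trusting embedding similarity. O(n^2) calls is the price.
        relations = []
        for (i, a), (j, b) in combinations(enumerate(notes), 2):
            verdict = llm(
                "How related are these two memories, and in what way?\n"
                f"A: {a}\nB: {b}\n"
                "Reply 'unrelated' or describe the link in one sentence."
            )
            if verdict.strip().lower() != "unrelated":
                relations.append((i, j, verdict))
        return relations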

th0ma5 4 hours ago

The paper has a section for results that seems to just define what results are. Other than mirroring what other software is doing, how do we know any of these concepts are viable in the long term?