Recursively summarizing enables long-term dialogue memory in LLMs

Most open-domain dialogue systems struggle to retain important information in long-term conversations. Current methods rely on training dedicated retrievers or summarizers, which is time-consuming and depends heavily on the quality of labeled data. To address this, we propose a method that uses large language models (LLMs) to recursively generate summaries that serve as long-term memory: the LLM first memorizes a small dialogue context, then repeatedly produces new memory from the previous memory and the turns that follow. Conditioning responses on this memory yields more consistent replies in extended conversations, and the approach points to a promising way for LLMs to model extremely long contexts.
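Below is a minimal sketch of the recursive-summarization loop described above. It is illustrative only: the `llm` placeholder, prompt wording, window size, and helper names are assumptions, not the authors' implementation.

```python
from typing import Callable, List


def llm(prompt: str) -> str:
    """Placeholder for any chat-completion call; swap in your own client."""
    raise NotImplementedError


def update_memory(memory: str, new_turns: List[str],
                  generate: Callable[[str], str]) -> str:
    """Fold the latest dialogue turns into the running summary ("memory")."""
    prompt = (
        "Previous memory of the conversation:\n"
        f"{memory or '(empty)'}\n\n"
        "New dialogue turns:\n" + "\n".join(new_turns) + "\n\n"
        "Rewrite the memory so it stays short but keeps all facts, personas, "
        "and commitments mentioned so far."
    )
    return generate(prompt)


def respond(memory: str, recent_turns: List[str], user_msg: str,
            generate: Callable[[str], str]) -> str:
    """Answer the current message conditioned on memory plus recent context."""
    prompt = (
        f"Memory of earlier conversation:\n{memory or '(empty)'}\n\n"
        "Most recent turns:\n" + "\n".join(recent_turns) + "\n\n"
        f"User: {user_msg}\nAssistant:"
    )
    return generate(prompt)


def chat_loop(user_messages: List[str],
              generate: Callable[[str], str] = llm,
              window: int = 6) -> List[str]:
    """Reply to each message; when the context window fills, summarize it
    into memory and start a fresh window (the recursive step)."""
    memory = ""
    recent: List[str] = []
    replies: List[str] = []
    for user_msg in user_messages:
        reply = respond(memory, recent, user_msg, generate)
        replies.append(reply)
        recent += [f"User: {user_msg}", f"Assistant: {reply}"]
        if len(recent) >= window:
            memory = update_memory(memory, recent, generate)
            recent = []
    return replies
```

The key design point is that the full dialogue history is never re-fed to the model; only the latest summary and a small window of recent turns are, which is what lets the context stay bounded no matter how long the conversation runs.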

https://arxiv.org/abs/2308.15022
