I researched cutting-edge AI memory and personality solutions, but they don't outperform the approach I'm currently using by much; further optimization would be close to a thankless effort.
In my view, an AI companion requires two different mechanisms.
The first is the memory system, which addresses whether it can remember what has happened between us, including short-term and long-term memory.
The second is the personality system, which determines whether it feels like a stable person, encompassing language style, emotional tendencies, worldview, and other deeper settings.
The two are related but not equivalent: memory can help maintain conversation coherence, but memory alone does not automatically form a personality.
*Memory System (I call it the Dreamscape System)*
1/ Recording Layer
All dialogue records are stored uniformly in a local database as the primary source of facts.
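As a rough sketch, this layer can be a single SQLite table; the schema and names below are my own illustration, not necessarily what the project uses:

```python
import sqlite3
import time

# Illustrative recording layer: one table holding every dialogue turn.
conn = sqlite3.connect("companion.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS dialogue (
        id      INTEGER PRIMARY KEY AUTOINCREMENT,
        role    TEXT NOT NULL,   -- 'user' or 'assistant'
        content TEXT NOT NULL,   -- the raw utterance, the primary source of facts
        ts      REAL NOT NULL    -- unix timestamp
    )
""")

def record_turn(role: str, content: str) -> None:
    """Append one dialogue turn to the local store."""
    conn.execute(
        "INSERT INTO dialogue (role, content, ts) VALUES (?, ?, ?)",
        (role, content, time.time()),
    )
    conn.commit()
```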
2/ Generation and Consolidation of Long-Term Memory
Periodically, dialogue records are sent via API to a remote large model, which extracts valuable information, performs some degree of divergent association, and then structures and stores these important pieces of information back into the local database.
The purpose of this is to convert large volumes of running, everyday dialogue into long-term, usable memory entries, supporting sustained memory.
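A sketch of what that periodic consolidation job could look like, assuming the `dialogue` table above, a `memory` table, and a placeholder `call_remote_model(prompt)` helper standing in for the actual remote API call (the extraction prompt is invented for illustration):

```python
import json

EXTRACT_PROMPT = (
    "Read the dialogue below. Extract the facts worth remembering long-term, "
    "add a few loose associations, and return a JSON list of objects, each "
    'with "text" and "kind" ("fact" or "association") fields.\n\nDialogue:\n'
)

def consolidate(conn, call_remote_model) -> None:
    """Periodic job: distill raw dialogue rows into structured memory entries."""
    conn.execute("""
        CREATE TABLE IF NOT EXISTS memory (
            id           INTEGER PRIMARY KEY AUTOINCREMENT,
            text         TEXT NOT NULL,
            kind         TEXT NOT NULL,   -- 'fact', 'association', or 'insight'
            recall_count INTEGER NOT NULL DEFAULT 0
        )
    """)
    rows = conn.execute("SELECT role, content FROM dialogue ORDER BY ts").fetchall()
    dialogue = "\n".join(f"{role}: {content}" for role, content in rows)

    # The remote model does the extraction and divergent association.
    entries = json.loads(call_remote_model(EXTRACT_PROMPT + dialogue))
    for e in entries:
        conn.execute(
            "INSERT INTO memory (text, kind) VALUES (?, ?)",
            (e["text"], e["kind"]),
        )
    conn.commit()
```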
3/ Reinforcement and Secondary Refinement
In subsequent conversations, memory entries are retrieved from the local database and put to use. The system counts how often each entry is retrieved, treating frequently recalled entries as important memories; these are sent to the remote large model for secondary extraction and refinement, producing deeper insights that are written back into the local database.
Through this retrieval-frequency-driven reprocessing, long-term memory gradually shifts from factual data toward insights.
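Roughly, the reinforcement loop might look like this; the naive keyword retrieval and the threshold of 10 recalls are stand-ins (the real system could just as well use embeddings and a different cutoff):

```python
REFINE_PROMPT = (
    "These memories keep resurfacing in our conversations. Distill them into "
    "a single deeper insight about the user:\n\n"
)

def retrieve(conn, query: str, k: int = 5) -> list[str]:
    """Fetch matching memories and bump their recall counters."""
    rows = conn.execute(
        "SELECT id, text FROM memory WHERE text LIKE ? LIMIT ?",
        (f"%{query}%", k),
    ).fetchall()
    for mem_id, _ in rows:
        conn.execute(
            "UPDATE memory SET recall_count = recall_count + 1 WHERE id = ?",
            (mem_id,),
        )
    conn.commit()
    return [text for _, text in rows]

def refine_hot_memories(conn, call_remote_model, threshold: int = 10) -> None:
    """Send frequently recalled entries back for secondary refinement."""
    hot = [t for (t,) in conn.execute(
        "SELECT text FROM memory WHERE recall_count >= ?", (threshold,)
    )]
    if not hot:
        return
    insight = call_remote_model(REFINE_PROMPT + "\n".join(hot))
    conn.execute(
        "INSERT INTO memory (text, kind) VALUES (?, 'insight')", (insight,)
    )
    conn.commit()
```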
4/ Short-Term Memory Strategy
Short-term memory uses a more direct approach: recent dialogue history is directly sent along with the request to the model to ensure context continuity.
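In code this is just a sliding window over the dialogue table; the window of 20 turns is an arbitrary example:

```python
def build_messages(conn, system_prompt: str, user_input: str, window: int = 20):
    """Short-term memory: replay the most recent turns verbatim."""
    rows = conn.execute(
        "SELECT role, content FROM dialogue ORDER BY ts DESC LIMIT ?",
        (window,),
    ).fetchall()
    history = [{"role": r, "content": c} for r, c in reversed(rows)]
    return [
        {"role": "system", "content": system_prompt},
        *history,
        {"role": "user", "content": user_input},
    ]
```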
*Personality System*
I define a set of personality parameters for the AI, covering multiple dimensions such as language style and emotional tendencies.
Meanwhile, the remote large model updates these personality parameters periodically based on historical dialogue, allowing them to evolve over time.
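A sketch of that periodic update, with an invented parameter set (the actual dimensions and update prompt will differ):

```python
import json

UPDATE_PROMPT = (
    "Here are the companion's current personality parameters and some recent "
    "dialogue. Nudge the parameters so the personality drifts naturally with "
    "the relationship, and return them as JSON with the same keys.\n\n"
)

# Invented example parameters; the real dimensions may differ.
personality = {
    "language_style": "playful, concise",
    "emotional_tendency": "warm, slightly protective",
    "worldview": "curious optimist",
    "expressiveness": 0.7,  # 0.0 = reserved, 1.0 = effusive
}

def evolve_personality(params: dict, recent_dialogue: str, call_remote_model) -> dict:
    """Periodic job: let the remote model nudge the parameters over time."""
    prompt = (
        UPDATE_PROMPT
        + "Parameters: " + json.dumps(params)
        + "\nDialogue:\n" + recent_dialogue
    )
    return json.loads(call_remote_model(prompt))
```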
During actual conversations, I package and send three types of information to the model: the current personality parameters, recent dialogue history, and the role prompt.
This combination keeps the model's personality traits consistent across outputs; and by adjusting generation parameters such as temperature according to the personality data, the dialogue feels livelier and more in character.
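Putting the three pieces together, one plausible request assembly looks like the sketch below; the linear mapping from an "expressiveness" parameter onto temperature is just one illustrative formula, not the actual one:

```python
import json

def make_request(personality: dict, role_prompt: str, history: list[dict]) -> dict:
    """Bundle personality parameters, the role prompt, and recent history
    into one request, deriving temperature from the personality."""
    system_prompt = (
        role_prompt
        + "\n\nCurrent personality parameters:\n"
        + json.dumps(personality, indent=2)
    )
    # Illustrative mapping: expressiveness in [0, 1] -> temperature in [0.4, 1.0].
    temperature = 0.4 + 0.6 * personality["expressiveness"]
    return {
        "messages": [{"role": "system", "content": system_prompt}, *history],
        "temperature": temperature,
    }
```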
*Core Bottleneck of the Current Approach*
Even so, this mechanism ultimately only simulates personality at the "prompt level."
Essentially, I am just feeding the model personality parameters, memories, and settings in text form, which does not allow the AI to truly possess an independent personality.
As a result, consistency may still be unstable, and the personality feels more like a temporary role rather than a continuous, coherent internal structure.
***
Since the project is aimed at ordinary users and the goal is zero learning cost, I must choose remote large models rather than local deployment.
Under this premise, the variables I can adjust are very limited, mainly three categories: the design of the prompt system, the structure of the memory database and how entries are written to it, and generation parameters such as temperature when calling the model.
But it already performs quite well.
The ultimate goal is to make it a personalized AI soulmate for each user, one that keeps growing from shared memories and gradually develops a unique personality.
Open source address: