
Reflection

An agent that holds only raw observational memories struggles to generalize or draw meaningful inferences. To address this, we introduce a second type of memory: Reflections. These are higher-level, abstract thoughts that the agent generates itself. Because reflections are stored as ordinary memory objects, they are retrieved alongside basic observations.
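Because reflections live in the same memory stream as observations, retrieval can score both side by side. A minimal sketch of this idea (the `MemoryObject` record and its fields are illustrative assumptions, not AIVille's actual schema):

```python
from dataclasses import dataclass

# Hypothetical sketch: observations and reflections share one record type,
# so retrieval treats them uniformly. Field names are illustrative.
@dataclass
class MemoryObject:
    text: str
    kind: str        # "observation" or "reflection"
    importance: float

memory_stream = [
    MemoryObject("saw John watering crops", "observation", 3.0),
    MemoryObject("John takes pride in his farm", "reflection", 7.0),
]

# Retrieval does not filter by kind: both entries are candidates.
candidates = [m for m in memory_stream]
```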

Reflections are generated periodically. In our implementation, they are triggered when the cumulative importance score of the most recent events exceeds a defined threshold.
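The trigger condition can be sketched as a simple cumulative-importance check (the threshold value here is a made-up placeholder; the real value would be tuned per deployment):

```python
REFLECTION_THRESHOLD = 150.0  # hypothetical value, not AIVille's actual setting

def should_reflect(recent_importance_scores):
    """Fire the reflection process once the cumulative importance of the
    most recent events crosses the threshold."""
    return sum(recent_importance_scores) >= REFLECTION_THRESHOLD
```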

🔄 The Reflection Process:

  1. Trigger Threshold Reached: When the sum of importance scores from recent events exceeds the threshold, the reflection process is initiated.

  2. Generate Abstract Questions: The language model reviews the latest 100 memory entries and synthesizes three high-level questions that capture core patterns or concerns.

  3. Query for Related Memories: Each question is then used as a query to retrieve relevant memory objects from the memory stream (including past reflections).

  4. Extract Insights: For each question, the LLM generates five high-level insights; these insights are the reflections themselves.

  5. Store as Reflections: These insights are saved back into the memory stream as new memory objects. They are also linked to the original observations used to generate them, forming a Reflection Tree, where:

    1. Leaf nodes = raw observations

    2. Non-leaf nodes = abstracted thoughts and reflections
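The five steps above can be sketched end to end. The `llm` callable, prompt modes, keyword-match retrieval, and importance values below are stand-ins for illustration, not AIVille's actual API:

```python
def reflect(memory_stream, llm, threshold=150.0):
    """Sketch of the reflection loop; `llm` is a stand-in callable and all
    prompts and values are illustrative."""
    # 1. Check the trigger against the most recent entries.
    recent = memory_stream[-100:]
    if sum(m["importance"] for m in recent) < threshold:
        return []
    # 2. Ask the model for three high-level questions about recent events.
    questions = llm("questions", recent)[:3]
    reflections = []
    for q in questions:
        # 3. Use each question as a retrieval query (naive keyword match
        #    here; a real system would use relevance/recency/importance
        #    scoring over the stream, including past reflections).
        related = [m for m in memory_stream if q.lower() in m["text"].lower()]
        # 4. Extract up to five insights per question.
        for text in llm("insights", related)[:5]:
            # 5. Store each insight as a new memory object, linked back to
            #    the observations it came from (forming the reflection tree).
            reflections.append({"text": text, "kind": "reflection",
                                "importance": 8.0,
                                "evidence": [m["text"] for m in related]})
    memory_stream.extend(reflections)
    return reflections

# Usage with a stubbed model standing in for the LLM:
def fake_llm(mode, memories):
    return ["coffee"] if mode == "questions" else ["the agent enjoys coffee"]

stream = [{"text": "bought coffee at the market", "kind": "observation",
           "importance": 200.0}]
new_reflections = reflect(stream, fake_llm)
```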
