
This Is What Happens When You Lock Three AIs in a Chat Room

by Gokul Srinath Seetha Ram, May 29th, 2025

Too Long; Didn't Read

SimuChat is a WhatsApp-style group chat where multiple AI agents converse autonomously, evolving trust, emotion, and insight in real time. Users can watch or participate while the agents talk among themselves.


👋 Why We Built This

During the Nous RL Environments Hackathon, we set out to make AI agents interact like real people — not just answer prompts. Most multi-agent systems today lack evolving relationships, emotional nuance, or social memory. They talk, but they don’t relate.

SimuChat changes that. It’s a WhatsApp-style group chat where multiple AI agents (Alice, Bob, Charlie) engage in autonomous conversation — building trust, expressing emotions, remembering history, and earning rewards for insightful social behavior.

Instead of treating each response as isolated, SimuChat lets agents grow through conversation. It’s not just text — it’s interaction with consequences.


🔄 How It Works

SimuChat simulates a multi-agent group chat. Users can watch or participate while the agents talk among themselves.


What happens:

  • User provides a topic → Agents start discussing it

  • Agents remember past messages (memory)

  • They build or lose trust over time

  • They display moods and emotions

  • They earn rewards for building trust or having insights

  • The conversation continues automatically with no user input required (a minimal loop sketch follows this list)
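
Taken together, the flow reduces to a round-robin loop. Here is a minimal sketch, assuming hypothetical Agent objects with respond, remember, and update_trust methods; the real project wires this into Streamlit and the LLaMA API:

def run_simulation(agents, topic, max_rounds=5):
    # Seed the shared history with the user's topic
    history = [f"Topic: {topic}"]
    for _ in range(max_rounds):
        for agent in agents:
            message = agent.respond(history)                 # LLM call with memory + mood
            history.append(f"{agent.name}: {message}")
            for other in agents:
                if other is not agent:
                    other.remember(agent.name, message)      # bounded memory
                    other.update_trust(agent.name, message)  # trust engine
    return history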


🧱 Tech Stack

  • LLM: Meta’s LLaMA-4-Maverick-17B-128E-Instruct-FP8

  • Frontend: Streamlit (web UI) and terminal interface

  • Backend: Python API with custom wrappers

  • Libraries: requests, numpy, streamlit, plus the standard library’s collections.deque and pathlib

  • Data Storage: JSONL logs, HTML exports (see the logging sketch after this list)

  • Config: JSON-based agent setup (personality, emotion, memory, etc.)
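
For the JSONL logs, each chat event is one JSON object per line, which the standard library handles directly. A minimal sketch (the file name and event fields are assumptions):

import json
from pathlib import Path

def log_event(event: dict, path: str = "chat_log.jsonl") -> None:
    # Append one JSON object per line (the JSONL convention)
    with Path(path).open("a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

log_event({"agent": "Alice", "message": "Hi all!", "mood": "hopeful"})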


🧠 Under the Hood

🔧 Agent Configuration

Each agent is defined via JSON, including their system prompt, mood, and memory limit:


  "name": "Alice",
  "emoji": "🧠",
  "system_prompt": "You are Alice, a kind and empathetic AI...",
  "core_emotion": "curious",
  "mood": "hopeful",
  "memory_limit": 3,
  "initial_trust": 0.5
}
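
A sketch of how such a config might be loaded; the AgentConfig dataclass and the file path are illustrative assumptions, not the project’s actual classes:

import json
from dataclasses import dataclass

@dataclass
class AgentConfig:
    name: str
    emoji: str
    system_prompt: str
    core_emotion: str
    mood: str
    memory_limit: int
    initial_trust: float

with open("agents/alice.json", encoding="utf-8") as f:
    alice = AgentConfig(**json.load(f))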


🧠 Memory System

Agents track the most recent messages from others (bounded by memory_limit, three in the config above) along with their own recent insights:

from collections import deque

# Bounded rolling memory: the oldest messages fall off automatically
self.memory = deque(maxlen=self.memory_limit)

def get_memory_context(self):
    # Returns formatted memory + insights
    ...
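
One plausible completion of get_memory_context, assuming each memory entry is a (sender, text) pair and the agent keeps its recent insights in self.insights:

def get_memory_context(self):
    # Render the bounded memory plus the agent's own recent insights
    # into a plain-text block for the system prompt
    lines = [f"{sender}: {text}" for sender, text in self.memory]
    if self.insights:
        lines.append("Your recent insights: " + "; ".join(self.insights))
    return "\n".join(lines)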


🔄 Trust Engine

Agents update trust scores (0.0 to 1.0) based on agreement, content similarity, and social alignment:

import random

# Adjust trust only when agent2 is mentioned in agent1's message
if agent2.lower() in content1.lower():
    if agreement_score > disagreement_score:
        trust_change = random.uniform(0.03, 0.08)   # a value in the stated range
    elif disagreement_score > agreement_score:
        trust_change = -random.uniform(0.03, 0.08)  # exact scaling is implementation-specific


💬 Insight Detection

The system detects when agents change their mind or have realization moments:

# Detect insight if:
# - "I see now", "I understand", etc. appear
# - Agent shifts from disagreeing to agreeing
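
A runnable sketch of that heuristic; the phrase list and the stance arguments are assumptions based on the description above:

INSIGHT_PHRASES = ("i see now", "i understand", "you're right", "that changes my view")

def detect_insight(message: str, prev_stance: str, new_stance: str) -> bool:
    text = message.lower()
    phrase_hit = any(p in text for p in INSIGHT_PHRASES)
    # A shift from disagreeing to agreeing also counts as a realization moment
    stance_flip = prev_stance == "disagree" and new_stance == "agree"
    return phrase_hit or stance_flip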


🎯 Reward Mechanism

Agents earn:

  • +1 point for building trust

  • +2 points for demonstrating insight

All rewards are logged and visualized in a summary:

{
  "agent_name": "Bob",
  "reward_earned": 3,
  "reasons": ["+1 trust", "+2 insight"]
}
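
Under those two rules, the tally reduces to a small function (the names are hypothetical):

def compute_reward(built_trust: bool, showed_insight: bool) -> dict:
    # Apply the scoring rules: +1 for trust, +2 for insight
    reward, reasons = 0, []
    if built_trust:
        reward += 1
        reasons.append("+1 trust")
    if showed_insight:
        reward += 2
        reasons.append("+2 insight")
    return {"reward_earned": reward, "reasons": reasons}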
    


🔁 Auto Conversation Mode

Agents continue chatting in rounds, autonomously, up to a max number of turns:

# Keep generating rounds until the cap is reached, re-running the Streamlit app each pass
if auto_mode and current_auto_round < max_auto_rounds:
    st.session_state.is_generating = True
    time.sleep(1)  # brief pause so each new message is visible in the UI
    st.rerun()


🤖 LLaMA API Integration

# The agent's enhanced system prompt (personality + memory + mood) leads the message list
messages = [{"role": "system", "content": enhanced_prompt}, ...]
response = call_llama_api(messages, temperature=agent_temperature)
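
The wrapper itself isn’t shown in the article; assuming an OpenAI-compatible chat-completions endpoint, it could look roughly like this (the URL, environment variable, and response shape are all assumptions):

import os
import requests

API_URL = "https://api.example.com/v1/chat/completions"  # assumed endpoint

def call_llama_api(messages, temperature=0.7):
    # POST the chat history and return the assistant's reply text
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['LLAMA_API_KEY']}"},
        json={
            "model": "LLaMA-4-Maverick-17B-128E-Instruct-FP8",
            "messages": messages,
            "temperature": temperature,
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]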


🏁 What Happened at the Hackathon

SimuChat was built during the Nous RL Environments Hackathon. Our demo showcased agents evolving their relationships over time — discussing topics like climate change and AI ethics.

Judges highlighted:

  • The real-time trust network
  • The insight detection mechanism
  • The reward system that made agents more socially aware

Our project was praised for modularity and potential research impact.


🚀 Why SimuChat Matters


  • Dynamic Relationships: Agents grow trust, shift moods, and remember context — just like humans.

  • Insight Modeling: Detects cognitive shifts and social learning.

  • Emotional Context: Conversations reflect emotional states, not just logic.

  • Reward System: Incentivizes meaningful, non-repetitive dialogue.

  • Research Potential: Can simulate group dynamics, education, and behavioral models.


📌 What’s Next


  • Coalitions: Agents forming alliances and “groups”
  • Advanced Memory: Forgetting curves and priority-based recall
  • Reinforcement Learning: Reward-tuned agents that improve over time
  • Classroom Simulations: EdTech version for student discussions
  • Open Source: Public release with docs and plugins


SimuChat shows that AI isn’t just about being smart — it’s about being socially aware.


Demo video: https://www.loom.com/share/decea568856a4340be0a596129c71693

