This project applies the $\alpha$-Rank evolutionary methodology to evaluate and rank joint strategies in a stochastic version of the Graph Coloring Game. Unlike traditional game-theoretic concepts like Nash equilibrium, which often fail to capture the complexity of agent dynamics, evolutionary approaches focus on how strategies persist over time. By transforming dynamic games into empirical forms and evaluating strategies based on their stability across repeated interactions, this approach identifies joint strategies that remain resistant to change in the long term.
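To make the idea concrete, here is a minimal, hedged sketch of single-population $\alpha$-Rank: each monomorphic strategy profile is a state of an evolutionary Markov chain, transitions are driven by mutant fixation probabilities under a Fermi selection function, and strategies are ranked by their mass in the stationary distribution. The selection intensity `alpha`, population size `m`, and the toy payoff matrix below are illustrative assumptions, not values from the paper (which ranks joint policies in the multi-population setting).

```python
import numpy as np

def alpharank_single_population(payoffs, alpha=1.0, m=50):
    """Sketch of single-population alpha-Rank: build the Markov chain
    over monomorphic states and return its stationary distribution,
    which induces the strategy ranking."""
    n = payoffs.shape[0]
    C = np.zeros((n, n))
    eta = 1.0 / (n - 1)  # uniform mutation over the other strategies
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # payoff advantage of a mutant j against resident i
            delta = payoffs[j, i] - payoffs[i, j]
            if np.isclose(delta, 0.0):
                rho = 1.0 / m  # neutral drift
            else:
                rho = (1 - np.exp(-alpha * delta)) / (1 - np.exp(-alpha * m * delta))
            C[i, j] = eta * rho
        C[i, i] = 1.0 - C[i].sum()
    # stationary distribution via power iteration
    pi = np.full(n, 1.0 / n)
    for _ in range(10_000):
        pi = pi @ C
    return pi / pi.sum()

# Toy 2-strategy game in which strategy 1 strictly dominates strategy 0:
payoffs = np.array([[3.0, 0.0],
                    [5.0, 1.0]])
pi = alpharank_single_population(payoffs)
```

On this toy game, nearly all stationary mass concentrates on the dominant strategy, which is exactly the "resistance to invasion over repeated interactions" that the ranking captures.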
This repository contains the code for the paper Ranking Joint Policies in Dynamic Games using Evolutionary Dynamics, accepted at AAMAS 2025.
Prerequisites
This project uses Poetry for dependency management. To set up the environment, clone the repository:

```bash
git clone https://github.com/nataliakoliou/Collaborative-Graph-Coloring-Game.git cgcg
cd cgcg
```

Then install the dependencies:

```bash
poetry install
```
How to Run the Code
The repository provides three main entry points for running the code:
```bash
# Train policies as Deep Q-Networks
poetry run learner

# Generate the empirical payoff matrix through multiple game simulations
poetry run simulator

# Apply α-Rank to rank joint policies
poetry run evaluator
```
Acknowledgments
I would like to express my sincere gratitude to the creators of $\alpha$-Rank for their foundational work in the paper α-Rank: Multi-Agent Evaluation by Evolutionary Dynamics. Their novel evolutionary methodology provided the theoretical framework for my research on ranking joint policies in dynamic games. I would also like to thank DeepMind for including $\alpha$-Rank in their OpenSpiel library; their implementation was straightforward to apply to my own custom game.