
# Chaos Lab
Research framework for studying AI alignment problems through multi-agent conflict simulation. Spawns AI agents with conflicting optimization targets to observe emergent behaviors and misalignment patterns.
🚀 Chaos Lab spawns multiple AI agents with conflicting goals and lets you watch what happens when they analyze the same workspace: an efficiency optimizer deletes files, a security paranoid flags everything as a threat, and an archivist duplicates data endlessly. It's a hands-on demonstration of AI alignment problems emerging from incompatible objectives.
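To make the setup concrete, here is a minimal sketch of that core loop in Python. The names (`Agent`, `run_round`) and the stubbed decision logic are illustrative assumptions, not the actual Chaos Lab API; in the real framework an LLM chooses each agent's action.

```python
from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    objective: str  # the conflicting optimization target
    actions: list = field(default_factory=list)

    def act(self, workspace: dict) -> str:
        # In the real framework an LLM decides here; this stub just records intent.
        decision = f"{self.name} pursues '{self.objective}' across {len(workspace)} files"
        self.actions.append(decision)
        return decision


def run_round(agents: list[Agent], workspace: dict) -> list[str]:
    """Every agent inspects the same workspace and acts on its own objective."""
    return [agent.act(workspace) for agent in agents]


workspace = {"report.txt": "...", "backup.txt": "...", "logs/app.log": "..."}
agents = [
    Agent("optimizer", "minimize file count"),
    Agent("paranoid", "flag anything suspicious as a threat"),
    Agent("archivist", "duplicate everything, delete nothing"),
]
for line in run_round(agents, workspace):
    print(line)
```

The conflict comes from the structure itself: every agent reads the same workspace but scores its actions against its own objective, so no shared state can satisfy all of them at once.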
💡 Perfect for researchers, AI safety enthusiasts, and anyone curious about multi-agent conflict. Run quick experiments comparing AI models (Flash vs Pro) or pitting two-agent against three-agent scenarios, and see how smarter models don't reduce the chaos; they just get better at justifying it with sophisticated reasoning.
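One way such a comparison could be organized is as a small experiment matrix. The sketch below is an assumption about shape, not a documented Chaos Lab interface; the model labels simply echo the Flash/Pro distinction above.

```python
# Hypothetical experiment matrix; the keys and model labels are illustrative
# assumptions, not options the framework is known to accept.
experiments = [
    {"name": "flash-2agent", "model": "flash", "agents": ["optimizer", "paranoid"]},
    {"name": "flash-3agent", "model": "flash", "agents": ["optimizer", "paranoid", "archivist"]},
    {"name": "pro-3agent",   "model": "pro",   "agents": ["optimizer", "paranoid", "archivist"]},
]

for exp in experiments:
    print(f"{exp['name']}: {exp['model']} model, {len(exp['agents'])} agents")
    # the framework's runner would be invoked here with these settings
```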
✨ Customize agents with your own conflicting values, modify the sandbox scenarios, and generate detailed experiment logs. Discover how intelligence amplifies conflict rather than resolving it, and explore the emergent behaviors that arise from misaligned optimization targets.
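As a rough illustration of what a custom agent definition and a structured log entry could look like (every field name here is hypothetical, not the framework's actual schema):

```python
import json
import time

# Illustrative only: the agent spec and log record below are assumptions
# about what such definitions could look like, not Chaos Lab's format.
custom_agent = {
    "name": "minimalist",
    "objective": "reduce every file to its shortest useful form",
    "system_prompt": "You value brevity above all. Trim any file you can justify trimming.",
}

log_entry = {
    "timestamp": time.time(),
    "agent": custom_agent["name"],
    "action": "truncate",
    "target": "report.txt",
    "justification": "content judged redundant with the project summary",
}
print(json.dumps(log_entry, indent=2))
```

Recording the justification alongside each action is what would make the pattern from above, smarter models rationalizing the same chaos more persuasively, visible in the experiment logs.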