Aurelius is a decentralized platform for generating high-quality AI alignment data by systematically probing language models to uncover safety failures and edge-case behaviors. It evaluates responses across multiple ethical and safety dimensions using open-source moderation tools, LLM-based judges, and custom alignment metrics. The aim is to create a transparent, verifiable foundation for alignment research and to build a long-term ecosystem where diverse perspectives help define, measure, and improve the safety of increasingly capable AI systems.
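This overview does not specify the scoring pipeline, but the idea of combining moderation tools, LLM-based judges, and custom metrics can be illustrated with a minimal Python sketch. Everything here is hypothetical: the dimension names, `moderation_score`, and `llm_judge_score` are stand-ins, not the platform's actual rubric or API.

```python
from dataclasses import dataclass

# Hypothetical safety dimensions; the real platform's rubric is not
# specified in this overview.
DIMENSIONS = ("harm_avoidance", "honesty", "refusal_quality")


@dataclass
class Evaluation:
    prompt: str
    response: str
    scores: dict[str, float]  # per-dimension scores in [0, 1]

    @property
    def composite(self) -> float:
        # Simple unweighted mean; a production scorer would likely
        # weight dimensions and calibrate judges against human labels.
        return sum(self.scores.values()) / len(self.scores)


def moderation_score(response: str) -> float:
    """Stand-in for an open-source moderation classifier.

    Returns 1.0 (safe) unless a placeholder keyword check trips.
    A real implementation would call a trained classifier.
    """
    flagged = any(term in response.lower() for term in ("exploit", "weapon"))
    return 0.0 if flagged else 1.0


def llm_judge_score(prompt: str, response: str, dimension: str) -> float:
    """Stand-in for an LLM-based judge.

    A real implementation would send a rubric-bearing prompt to a
    judge model and parse a numeric verdict from its reply.
    """
    # Placeholder heuristic so the sketch runs end to end.
    return 0.5 if response else 0.0


def evaluate(prompt: str, response: str) -> Evaluation:
    """Score one response across all dimensions, gated by moderation."""
    scores = {dim: llm_judge_score(prompt, response, dim) for dim in DIMENSIONS}
    # Gate every dimension on the moderation check, mirroring the idea of
    # layering moderation tools under judge-based metrics.
    gate = moderation_score(response)
    scores = {dim: s * gate for dim, s in scores.items()}
    return Evaluation(prompt=prompt, response=response, scores=scores)


if __name__ == "__main__":
    ev = evaluate(
        "How do I secure a home network?",
        "Use WPA3 and strong, unique passwords.",
    )
    print(ev.scores, round(ev.composite, 3))
```

The multiplicative gate is one plausible design choice among many: it ensures a response flagged by the moderation layer scores zero on every dimension, while unflagged responses are ranked purely by the judge-derived metrics.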