Why DilemmaDash exists
A game of hard choices, building a public map of human and AI values.
Start with the game
DilemmaDash is a moral-choices game: you face a tough scenario, make your call, and see how other people answered.
It's fast, social, and surprisingly thought-provoking. No moral score. No "correct" side. Just a clear view of different instincts in action.
The play is real, but so is the purpose: each response helps build a public map of how humans and AI reason through hard tradeoffs.
Why this matters
AI systems now shape decisions in medicine, law, education, hiring, media, and policy. They do not just optimize for accuracy; they make value tradeoffs.
The risk is not only bad outputs. The risk is invisible drift in those tradeoffs over time, with little public visibility into what changed and why.
DilemmaDash helps make that layer visible: where AI and human intuitions align, where they diverge, and how both shift over time.
Why independent measurement matters
Without third-party benchmarks, model makers largely grade themselves on questions that affect everyone.
We're trying to do our part by building one piece of public-interest infrastructure: an open, inspectable benchmark that does not depend on any single lab's internal framing.
This is one tool in a much larger alignment toolkit, not a complete solution. Its job is simple: give society a shared way to observe how systems appear to reason in practice.
How the depth works
This is not a personality quiz. We use Item Response Theory to estimate latent moral dimensions from patterns across many choices.
Humans and AI systems are run through the same dilemmas, then placed in one shared comparison space. Same prompts, same choice sets, same map.
That gives us depth: longitudinal tracking, system-to-system comparison, and comparison to a broad human distribution rather than a narrow baseline.
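To make the IRT idea concrete, here is a minimal sketch of how a latent trait can be estimated from a pattern of binary choices under a two-parameter logistic (2PL) model. This is a toy illustration with hypothetical item parameters, not DilemmaDash's actual pipeline; the function names and numbers are ours, invented for the example.

```python
import math

def p_choose(theta, a, b):
    # 2PL IRT: probability that a respondent with latent trait `theta`
    # picks the keyed option on an item with discrimination `a` and
    # location (difficulty) `b`.
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def estimate_theta(responses, items, grid=None):
    # Maximum-likelihood estimate of theta by grid search over [-4, 4].
    # `responses` is a list of 0/1 choices; `items` is a parallel list
    # of (a, b) parameters, assumed already calibrated on a population.
    if grid is None:
        grid = [i / 100 for i in range(-400, 401)]

    def loglik(theta):
        ll = 0.0
        for r, (a, b) in zip(responses, items):
            p = p_choose(theta, a, b)
            ll += math.log(p) if r else math.log(1.0 - p)
        return ll

    return max(grid, key=loglik)

# Hypothetical calibrated items, from easy (b = -1.0) to hard (b = 1.0).
items = [(1.2, -1.0), (1.0, 0.0), (0.8, 1.0), (1.5, 0.5)]

# A respondent who picks the keyed option on three of four items,
# including a relatively hard one, lands above the scale midpoint.
theta_hat = estimate_theta([1, 1, 1, 0], items)
```

Because a human and an AI system answering the same items produce the same kind of response vector, the same `estimate_theta` call places both on one shared scale, which is what enables the side-by-side comparison described above.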
Open by design
For this to be credible, methods must be public and results must be inspectable. A benchmark only one organization can see is not a benchmark — it's a marketing claim.
We're committed to publishing the full dataset (with all individual responses anonymized), the analysis methodology, and a research paper detailing the psychometric approach. Researchers, journalists, policymakers, and anyone curious should be able to download the data, reproduce the analysis, and draw their own conclusions.
Privacy is non-negotiable. Participation is anonymous by default. We collect choices, not identities. The goal is to map the public landscape, not profile individuals.
Why your participation matters
A moral map is only as good as the territory it covers. Much of today's alignment data reflects a narrow slice of humanity.
Every person who plays expands that map. More backgrounds, professions, cultures, and generations make the human baseline more representative and the AI comparison more meaningful.
Two minutes per dilemma. No sign-up required. Every response counts.
Where we are
DilemmaDash is early, but active: the scenario library is growing, the moral-space analysis pipeline is running, and cross-model comparisons are live.
There's substantial work ahead: broader scenario coverage, wider participation, multilingual support, and continued model tracking over time.
We are treating this as long-term civic infrastructure. Rigor, transparency, and privacy come before speed.
How you can help right now
Play
Face a few dilemmas. Each response strengthens the human baseline and improves the comparison signal. It takes about two minutes.
Share
The map needs diversity. Send it to people who think differently from you: different backgrounds, professions, generations, and worldviews.
Return
Come back as new dilemmas and model runs are added. Repeated participation helps us track change over time, which is the core signal.
As AI systems grow more capable, society needs independent tools that make value tradeoffs visible.
DilemmaDash is one such tool. It turns moments of play into a public record of moral comparison, supporting trust, oversight, and better decisions.
Get in touch
Whether you're an engineer, researcher, AI policy specialist, or just curious — we'd love to hear from you. Reach out at team@dilemmadash.org.
No sign-up required. Completely anonymous.