
HeadOn builds tools for people to think together when they disagree.
Most online platforms optimize for engagement, scale, or compliance. In practice, this pushes disagreement toward performance, outrage, or avoidance. HeadOn starts from a different assumption: disagreement is not a failure mode of conversation, but a core social process that can be designed—carefully—to produce understanding, learning, and better decisions.
We create structured, live conversations—audio or video—where participants engage around a clearly defined question, claim, or tension. These conversations are not debates in the traditional sense, nor are they free-form chats. They are guided by explicit constraints: roles, timing, prompts, and reflection points that slow people down, surface assumptions, and make each person’s mental model visible to the other.
Jan 2026: HeadOn AI joined the Inspired Internet Pledge as a Signatory.
HeadOn commits to rewarding prosocial behavior by embedding it directly into how conversations are structured, evaluated, and progressed.
Rather than inferring “good behavior” from surface signals like sentiment, politeness, or engagement, our systems focus on behaviors that make collective reasoning possible: accurately representing another person’s view, stating assumptions explicitly, updating claims in response to new information, and asking clarifying questions rather than adversarial ones.
Our algorithms support these behaviors in three main ways. First, during live conversations, the product introduces constraints and prompts that make prosocial moves easier than antisocial ones—for example, turn-taking, role separation, and reflection phases that require participants to restate or engage with the other side’s position before advancing their own. This reduces the incentives for domination, interruption, or performative conflict.
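To make the idea of a reflection phase concrete, here is a minimal sketch of how such a constraint could be modeled as a simple phase machine. All names here are hypothetical illustrations, not HeadOn's actual implementation: the gate refuses any move other than a restatement during the reflection phase, so neither side can advance their own position until both have engaged with the other's.

```python
from enum import Enum, auto


class Phase(Enum):
    OPENING = auto()     # each side states their own position
    REFLECTION = auto()  # each side must restate the other side's position
    RESPONSE = auto()    # only now may participants advance arguments


class ReflectionGate:
    """Illustrative sketch: the conversation advances phase by phase,
    and the reflection phase only accepts 'restate' moves."""

    def __init__(self, participants):
        self.participants = set(participants)
        self.phase = Phase.OPENING
        self.pending = set(self.participants)  # who still owes a move

    def submit(self, speaker, move):
        if speaker not in self.pending:
            raise ValueError(f"{speaker} has already completed this phase")
        if self.phase is Phase.REFLECTION and move != "restate":
            raise ValueError("must restate the other side before advancing")
        self.pending.discard(speaker)
        if not self.pending:
            self._advance()

    def _advance(self):
        order = [Phase.OPENING, Phase.REFLECTION, Phase.RESPONSE]
        i = order.index(self.phase)
        if i + 1 < len(order):
            self.phase = order[i + 1]
            self.pending = set(self.participants)
```

Usage: after both participants open, an attempt to argue is rejected until each has restated the other's view, at which point the gate moves to the response phase.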

