
The Social Billboard is a UK-based technology platform focused on improving how digital content is discovered, rated, and recommended for children, parents, and creators.
Our core goal is to move beyond engagement-only ranking systems and toward trust-, quality-, and wellbeing-aware content environments.
We are building tools that:
- Analyse digital content (starting with YouTube and podcasts) using transcripts, metadata, and behavioural signals
- Generate trust and quality scores based on factors such as tone stability, educational value, emotional intensity, and content risk patterns
- Curate age-appropriate playlists and discovery spaces for children and families
- Surface early warnings when content shifts toward higher-risk or lower-quality themes
- Support creators in understanding how their content affects trust, safety, and long-term audience value
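As an illustration only, the idea of combining per-item signals into a single trust or quality score could be sketched as a simple weighted average. All signal names, weights, and the inversion logic below are hypothetical assumptions for this sketch, not The Social Billboard's actual model:

```python
# Hypothetical sketch: combine per-video signals into one trust score.
# Signal names, ranges [0, 1], and weights are illustrative assumptions only.

def trust_score(signals: dict[str, float]) -> float:
    """Weighted average of normalised signals, each in [0, 1]."""
    weights = {
        "tone_stability": 0.3,       # low volatility across the transcript
        "educational_value": 0.3,    # learning-oriented content present
        "emotional_intensity": 0.2,  # inverted: calmer content scores higher
        "content_risk": 0.2,         # inverted: fewer risk patterns score higher
    }
    # Invert signals where a high raw value indicates lower quality.
    inverted = {"emotional_intensity", "content_risk"}
    total = 0.0
    for name, weight in weights.items():
        value = signals.get(name, 0.5)  # neutral default if a signal is missing
        if name in inverted:
            value = 1.0 - value
        total += weight * value
    return total

score = trust_score({
    "tone_stability": 0.9,
    "educational_value": 0.8,
    "emotional_intensity": 0.2,
    "content_risk": 0.1,
})
print(round(score, 2))  # 0.85
```

A weighted average keeps every score explainable: each signal's contribution can be shown to the user, which matches the transparency commitments described below.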
Rather than treating safety as a moderation problem after harm occurs, our approach focuses on proactive risk reduction and design-level incentives. We aim to reward content that supports learning, creativity, emotional stability, and skill development, while reducing amplification of content that relies on volatility, outrage, or emotional manipulation.
Jan 2026: The Social Billboard joined the Inspired Internet Pledge as a Signatory.
The Social Billboard commits to designing and operating our platform in ways that prioritise children's wellbeing, developmental appropriateness, and long-term user value over short-term engagement.
Specifically, we commit to the following principles and practices:
- Moving beyond engagement-only ranking signals
We commit to developing and testing alternative content discovery signals that prioritise trust, educational value, emotional stability, and long-term satisfaction rather than purely optimising for likes, clicks, or watch time.
- Building age-appropriate discovery environments
We commit to creating curated, developmentally appropriate content spaces for children and families, with age-progressive access rules and reduced exposure to open adult networks.
- Proactive risk reduction, not reactive moderation
We commit to using transcript analysis, behavioural patterns, and tone stability indicators to detect early signs of content risk escalation and surface warnings before harm occurs.
- Supporting transparency and explainability
We commit to providing clear, human-readable explanations for why content is recommended, how trust and quality scores are generated, and what signals influence ranking outcomes.
- Integrating parent and educator feedback loops
We commit to building user feedback mechanisms that allow parents and educators to influence content discovery preferences and report concerns that inform our trust and risk models.
- Prioritising emotional safety and wellbeing-aware design
We commit to incorporating design features that encourage mindful usage, emotional check-ins, and offline breaks after intense or extended content sessions.
- Avoiding manipulative growth practices
We commit to avoiding dark patterns, addictive interface mechanics, and engagement-maximisation strategies that conflict with child wellbeing or healthy digital habits.
- Collaborating with research and policy stakeholders
We commit to engaging with academic partners, digital wellbeing researchers, and policy bodies to validate our trust frameworks and continuously refine our approach using evidence-based guidance.
- Publishing high-level transparency updates
We commit to sharing high-level information about our trust scoring methodology, safety principles, and ranking signal philosophy to support public accountability and research collaboration.
- Evolving our roadmap in alignment with child wellbeing standards
We commit to adapting our product direction as new research, regulatory guidance, and best practices emerge in the field of children's digital wellbeing and safety-by-design.
Our overarching commitment is to help demonstrate that healthier digital environments can be built through better systems design, not just content restrictions or blanket bans.

