Modulate builds intelligent voice technology that combats online toxicity and elevates the health and safety of online communities. The company partners with game studios, VR platforms, and other online social organizations to offer more effective and comprehensive trust & safety solutions, working closely with partners not only to deploy its AI moderation tools, but also to plan and execute a thorough, responsible trust & safety strategy. Modulate's advanced machine learning frameworks have helped its customers protect tens of millions of players against online toxicity, ensuring safe and inclusive spaces for everyone. Modulate is a proud partner of the ADL, supporting the Center for Technology and Society's research on online hate, child safety, and radicalism.
2023-24 Action Plan
Modulate is privileged to see a cross-section of platforms from across the online ecosystem, giving us the opportunity to collate best practices and takeaways in a way even the largest individual platforms cannot. We commit to using this position to organize at least two webinars this year showcasing the brilliant and thoughtful strategies used by some less well-known platforms, providing inspiration and encouragement to others looking to build safer online ecosystems.
This commitment is inherently measurable: did we host these webinars or not? Beyond that, we'll monitor the performance of the webinars to understand our impact and optimize our thought leadership for maximum value going forward.
We believe this fits into the “Share lessons collaboratively across the tech industry” Pledge principle perfectly!
The games industry knows well that simply banning bad actors isn't sufficient, but historically few tools have been available beyond the blunt "ban hammer" — at least, few within reach of trust & safety teams that are deeply limited in budget and executive support.
Over the next year, Modulate is committed to publishing a series of white papers on moderation best practices that recognize that most users do not have ill intent, and that coaching and support are often more powerful than punishment. These white papers will draw on expertise from Modulate's own staff as well as academic researchers and third-party studios.
Our target launch date for the first white paper is Q4 2023.
We believe this white paper effort aligns with all three principles: helping our customers tune for wellbeing, looking more directly at the end-user experience, and of course sharing lessons more widely across the industry.