- Intrinsic secures $3.1 million in a seed round with support from investors.
- Intrinsic's platform aids safety teams in moderating content and targeting abusive behavior in digital spaces.
- The platform allows customization of moderation models using manual review and labeling tools.
A few years back, Karine Mellata and Michael Lin crossed paths on an Apple team dedicated to combating fraud and algorithmic risks. As engineers there, they worked together on issues like online abuse, spam, account security, and developer fraud for Apple's expanding user base.
Mellata and Lin went on to found Intrinsic, a startup that aims to give safety teams the tools they need to stop abusive behavior on their products. Intrinsic recently raised a $3.1 million seed round with backing from Urban Innovation Fund, Y Combinator, 645 Ventures, and Okta.
Intrinsic's platform is designed to moderate both user-generated and AI-generated content. It provides the infrastructure that social media companies and e-commerce marketplaces need to detect and act on content that violates their policies. Intrinsic integrates with customers' products and automates safety actions such as banning users and flagging content for review.
Mellata argues that no ready-made classifiers exist for these niche categories, and that even a well-resourced trust and safety team would need significant time to build new automated detection categories in-house.
The platform also offers manual review and labeling tools, letting customers fine-tune moderation models on their own data.
Edited by Shruti Thapa