Safety Check (KOR) is a safety engine that classifies unethical expressions into 11 categories. Using a deep-learning classification model, it assigns the given text to its most likely category and reports the confidence of that prediction.
Hi, Product Hunt!
I'm Allie, the project manager at TUNiB.
Comments containing hate speech and passive-aggressive cyberbullying have become a serious problem online. Manually reviewing user-reported comments or chats has been the conventional way to filter such unethical text, but this approach has clear limitations in both time and the toll it takes on human moderators: far too much hate speech is exchanged online, and far too few people are filtering it. TUNiB's vision is to keep harmful comments from hindering our Internet experience, and we would like to introduce Safety Check as a safeguard that protects users from all kinds of unethical text.
The classification categories currently supported are: insult, swear words, obscenity, violence, and hate speech targeting gender, age, race, disability, religion, politics, and occupation.
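A classifier like this typically returns a probability for each category, and the consumer picks the most likely one. Here is a minimal sketch of handling such a response; the label strings, field names, and score values below are illustrative assumptions, not TUNiB's actual API schema:

```python
# Hypothetical sketch: pick the top category from a per-category
# probability map. Labels mirror the 11 categories listed above,
# but the exact strings are assumptions for illustration only.

LABELS = [
    "insult", "swear words", "obscenity", "violence",
    "hate:gender", "hate:age", "hate:race", "hate:disability",
    "hate:religion", "hate:politics", "hate:occupation",
]

def top_category(scores: dict) -> tuple:
    """Return the most likely category and its probability."""
    label = max(scores, key=scores.get)
    return label, scores[label]

# Example of a response shape such a classifier might produce:
scores = {label: 0.0 for label in LABELS}
scores["insult"] = 0.92
scores["swear words"] = 0.05

label, prob = top_category(scores)
print(label, prob)  # insult 0.92
```

In practice the scores would come back from the API as JSON, and an application would compare the top probability against its own moderation threshold before flagging a comment.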
We are offering a one-month free trial (limited to 10K API calls). Leave an inquiry at [https://tunibridge.ai/#talkToSal..., and we will contact you shortly!
Please let me know if you have any questions.