Interesting project from Jigsaw (an Alphabet company) that uses tech to fight online abuse at scale. It's currently in beta, but they've already worked with a few big partners like Wikipedia and the NYT.
In the meantime, Scale might be a good solution for those who need human oversight of internet abuse. cc @lucy_guo
@rrhoover So cool that they are working on this with other companies, the more content they have to analyze the better. Definitely needs to be in the API topic.
I'm afraid this will lead to (perhaps accidental) censorship. For example, when I typed "Liberals are not stupid" it said that it was 93% toxic.
@ninjinka I don't think the intent of the tech is stringent auto-moderation (at least right now). As it stands, the API seems much more suitable for assisting human moderators in quickly identifying potentially unsavory comments (for more focused, lower-effort moderation), and for preemptively discouraging toxic comments by increasing conscientiousness during composition. Maybe we'll see a really powerful auto-moderator once the model is polished enough.
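For anyone curious what the "assist human moderators" workflow looks like in practice, here's a minimal sketch of calling the Perspective API and routing high-scoring comments to a review queue instead of auto-removing them. The endpoint, request shape, and `TOXICITY` attribute follow Jigsaw's public docs, but the 0.8 threshold and the function names are my own illustrative choices.

```python
import json
import urllib.request

# Public AnalyzeComment endpoint from the Perspective API docs.
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(text):
    """Build the AnalyzeComment request body: the comment text plus
    the attributes we want scored (here, only TOXICITY)."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_score(response_body):
    """Extract the summary TOXICITY probability (0.0-1.0) from a response."""
    return response_body["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def needs_review(score, threshold=0.8):
    """Queue for a human rather than auto-removing; the threshold
    here is an arbitrary example, not an official recommendation."""
    return score >= threshold

def analyze(text, api_key):
    """POST the comment to the Perspective API (requires a real API key)."""
    req = urllib.request.Request(
        f"{API_URL}?key={api_key}",
        data=json.dumps(build_request(text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

A moderation bot would call `analyze()` on each new comment and put anything where `needs_review()` is true into a human queue; the "93% toxic" figure mentioned above corresponds to a `summaryScore.value` of 0.93.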
@ninjinka Also, I think it gives you an opportunity to write something better. "Liberals are not stupid" may not appear toxic, but it actually is poor framing (and one that will automatically cause defensiveness).
You might write: "Liberals want ______" or "Liberals are aware that ___________" or "Liberals believe in _________"
I don't think we should judge comments in isolation as toxic or not, but should also consider their ability to prevent future toxic responses.
@matthewboyle25 Their AI doesn't handle sarcasm well. A sarcastic message would probably get flagged.
This is dangerous. A program, trained by Wikipedia and the NYT of all people, shouldn't be able to censor or block people's minds and thoughts. The intention is good, but unfortunately abuse has been a thing for as long as we've been able to speak to each other.
The intention is good, but it may end up in bad places. I get that you're trying to make internet discussions more meaningful. But if the technology becomes good enough, it may spread. For example, imagine CEOs setting up Slack so that coworkers can't chat about anything but work. That would kill creativity. People aren't robots; they need the occasional chitchat.
Cool idea! I didn't explore the API, but I wonder how many orgs would be content to simply suggest to the poster that what they're saying may be toxic, vs. disallowing some types of comments altogether. Could bring up some messy (but necessary) convos on free speech vs. hateful/abusive speech online.
Also - I wonder why they frame things, even positive things, in terms of toxicity. Something I imagine they'll improve in the future...
@_tyoung I think this has more to do with the feedback they get. I'm sure there are more than a few people who say a comment is toxic just for kicks, or maybe to see if they can break the algorithm. Remember Tay?
Be sure to click the "Seems wrong?" link and give your feedback when you find something like that.