Regarding AI-Based Censorship Determinations

I’ve been monitoring the situation for a while, but whenever I’ve filed an appeal, the decision has never changed.

And when I resubmit, I get the same result. The decision also always arrives in the afternoon, Japan time (it has come around 11:30 AM before, so strictly speaking not always the "afternoon," but around that time of day).

If I send the address to CS and leave it to a human reviewer instead, it is approved 100% of the time, though the approval still arrives in the afternoon, Japan time.

I suppose they wait until employees are in the office to make the decision...

Now, the key point here: the AI moderation system fails 100% of the time at all three stages (posting, initial submission, and resubmission), yet every one of those cases is then approved 100% of the time once reported to CS. Is there really a need for a system with accuracy this low, so low that it is almost presumptuous to even call it "accuracy," one that doesn't even serve as a safety net?

Shouldn’t the very purpose of AI be to reduce the effort and workload for people?

Has the AI you’ve implemented actually reduced the workload for both the admins and the users?

Or has it actually increased it?

It would certainly take courage for the project planners who introduced these AI systems, after going through the approval process and securing the budget, to report to the company that the systems ultimately couldn't be used. But is the AI truly necessary in the current situation? Has work efficiency improved? Has customer satisfaction increased?
Isn't it time to consider these questions?
If you want to train the AI, it wouldn't be too late to first train it on similar data in a closed environment before integrating it into your operations. What do you think?

Status: Planned
Board: Feedback
Date: About 1 month ago
Author: 白パパ
