Meta announced plans to significantly reduce its human content moderation workforce over the next few years, replacing thousands of contractors with AI-based systems. The company says the transition will let it catch more policy violations, and catch them faster, than its current approach does, though it didn't specify how many jobs might be eliminated.
The move marks another major shift in Meta's content moderation strategy, coming just over a year after the company ditched third-party fact checkers and scaled back proactive moderation efforts. Human moderators will remain involved in "critical decisions" such as account disablements and law enforcement reports, while AI handles the bulk of routine content review.