Klaus Schwab’s World Economic Forum (WEF) has launched a new effort to censor the Internet using artificial intelligence (AI).
WEF announced plans to moderate online conversations by using AI that identifies “misinformation,” harmful content, “hate speech,” or anything else deemed to be “disinformation.”
The globalist organization’s censorship proposal would require “subject matter experts” to provide training sets to the AI.
The “experts” will teach the software to recognize, flag, and restrict any information that the WEF considers to be “dangerous.”
On Wednesday, the World Economic Forum published an article outlining a plan to curb the spread of “unsafe” information online.
The group said its plan seeks to tackle “child abuse, extremism, disinformation, hate speech, and fraud” online.
According to the article’s author, ActiveFence Trust & Safety Vice President Inbal Goldberger, the operation cannot be handled by human “trust and safety teams” alone.
NEW – Klaus Schwab's World Economic Forum proposes to automate censorship of "hate speech" and "disinformation" with AI fed by "subject matter experts." https://t.co/A4JDrh7RaK pic.twitter.com/LYqFhik3Wk
— Disclose.tv (@disclosetv) August 11, 2022
The system works through “human-curated, multi-language, off-platform intelligence,” input provided by expert sources, to create “learning sets” for the AI, Goldberger stated.
“Supplementing this smarter automated detection with human expertise to review edge cases and identify false positives and negatives and then feeding those findings back into training sets will allow us to create AI with human intelligence baked in,” she added.
In other words, trust and safety teams can help the AI with anomalous cases, allowing it to detect nuances in content that a purely automated system might otherwise miss or misinterpret, according to Goldberger.
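The workflow Goldberger describes, automated scoring, human review of ambiguous "edge cases," and reviewer verdicts fed back into the training data, can be sketched in rough outline. The class and thresholds below are illustrative assumptions for a generic binary classifier, not details of any WEF or ActiveFence system; the keyword heuristic stands in for a real model.

```python
from dataclasses import dataclass, field

@dataclass
class ModerationLoop:
    """Toy human-in-the-loop moderation cycle (all names hypothetical)."""
    threshold_low: float = 0.3   # at or below: auto-approve
    threshold_high: float = 0.8  # at or above: auto-flag
    training_set: list = field(default_factory=list)

    def score(self, text: str) -> float:
        # Placeholder for a trained model; here, a crude keyword heuristic.
        flagged_terms = {"fraud", "abuse"}
        hits = sum(term in text.lower() for term in flagged_terms)
        return min(1.0, hits * 0.5)

    def moderate(self, text: str, human_review=None) -> str:
        s = self.score(text)
        if s >= self.threshold_high:
            return "flagged"
        if s <= self.threshold_low:
            return "approved"
        # Edge case: route to a human reviewer, then append the verdict
        # to the training set for the next retraining pass.
        decision = human_review(text) if human_review else "escalated"
        self.training_set.append((text, decision))
        return decision

loop = ModerationLoop()
print(loop.moderate("harmless chat"))               # approved automatically
print(loop.moderate("report fraud and abuse now"))  # flagged automatically
print(loop.moderate("fraud?", human_review=lambda t: "flagged"))  # human verdict
print(len(loop.training_set))  # the human-reviewed case was logged
```

The feedback step is the point of the quoted passage: each human ruling on an ambiguous item becomes new labeled data, which is how the described system would accumulate "human intelligence" over successive retraining rounds.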
“A human moderator who is an expert in European white supremacy won’t necessarily be able to recognize harmful content in India or misinformation narratives in Kenya,” she explained.
As time goes on and the AI trains on more learning sets, it begins to identify the kinds of content that moderation teams would find offensive, reaching “near-perfect detection” at a massive scale.
Goldberger said the system would protect against “increasingly advanced actors misusing platforms in unique ways.”
Trust and safety teams at online media platforms, such as Facebook and Twitter, bring a “nuanced comprehension of disinformation campaigns” that they apply to content moderation, said Goldberger.
That includes working with government organizations to filter content communicating a narrative about COVID-19, for example.
As Slay News previously reported, the Centers for Disease Control and Prevention (CDC) coordinated with Big Tech companies on what types of content to label as misinformation on their sites.
Social media companies have also targeted conservative content, including posts that negatively portray abortion and transgender activism, or contradict the mainstream understanding of climate change, by either labeling them as “misinformation” or blocking them entirely, according to The Daily Caller.
The WEF document did not specify how members of the AI training team would be decided, how they would be held accountable, or whether countries could exercise control over the AI.
Elite business executives who participate in WEF gatherings have a track record of proposals that expand corporate control over people’s lives.
At the latest WEF annual summit, in March, the head of the Chinese multinational technology company Alibaba Group boasted of a system for monitoring individual carbon footprints derived from eating, travel, and similar behaviors.
“The future is built by us, by a powerful community such as you here in this room,” WEF founder and chairman Klaus Schwab told an audience of more than 2,500 global business and political elites.