Google Announces ‘Pre-Bunking’ Censorship Tools for Behavioral ‘Interventions’

Big Tech giant Google has announced the development of its new “pre-bunking” censorship tools, which deliver behavioral “interventions” to users.

Google has dubbed its project “Info Interventions” and claims it is based on behavioral science.

The tools are effectively aimed at predicting so-called “misinformation” and “disinformation.”

By “pre-bunking” information, Google hopes to block content before anyone online views it.

These “interventions” could then be used to “teach” users how to become resilient to online harm, Google claims.

Google also likens its new tools to a vaccine, promising that “pre-bunking misinformation” will leave online users “immunized” against “disinformation.”

To explain how its “interventions” will work, Google has put up a site that states the goal is to provide accuracy prompts that would refocus users’ attention toward whatever Google determines is “accurate information.”

And to reach that goal, the “hypothesis” currently seems to be that “reminding individuals to think about accuracy when they might be about to engage with false information can boost users’ pre-existing accuracy goals.”

This method of effectively training users to behave in a desired way unsurprisingly draws on behavioral science research, which Google says has been validated by digital experiments.

This “gift to the world” comes thanks to Google’s unit called Jigsaw.

Jigsaw was set up to “explore threats to open societies, and build technology that inspires scalable solutions.”

In March 2021, Jigsaw published a Medium post declaring that research suggests there could be a powerful way to reduce “misinformation” simply by “reminding” Internet users to think about accuracy.

In other words, users would be told the information is “false” and then given the official groupthink info which has been deemed “correct.”


There’s even an attempt to guilt-trip users into thinking they are helping spread “misinformation” by being prone to distractions, whereas keeping accuracy, as defined by Google, in mind might reduce that.

Currently, as Google explains on its “Interventions” page, a user scrolling through a feed may encounter “potential misinformation.”

That would then activate “an accuracy prompt” that would partially cover the information already labeled as misinformation.

The prompt contains a short explanation of why the user is seeing it, but in general, the user’s attention is now supposed to shift from the content they wanted to see to the prompt itself, meaning they are directed to consider “accuracy” instead.

They will also be subjected to something called “information literacy tips.”

Primed like this, the user’s attention is now all on the “reminder” with the content left far behind.

And more importantly, as far as Google is concerned, the hope is that the next time they encounter similar content, they will “think twice” before engaging with it.
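Google has not published code for this flow, but based purely on the description above, a minimal sketch of the mechanics might look like the following TypeScript. Everything here is an assumption for illustration: the `FeedItem` and `AccuracyPrompt` types, the flagging field, and all function names are hypothetical and are not Google’s actual API.

```typescript
// Hypothetical sketch of the "accuracy prompt" flow described above.
// None of these names come from Google; they only illustrate the idea.

interface FeedItem {
  id: string;
  text: string;
  flaggedAsPotentialMisinformation: boolean; // assumed output of an upstream classifier
}

interface AccuracyPrompt {
  itemId: string;
  explanation: string;    // why the user is seeing the prompt
  literacyTips: string[]; // the "information literacy tips" mentioned above
  coversContent: boolean; // the prompt partially covers the flagged post
}

// Build the overlay for a flagged item: a short explanation plus tips,
// shifting attention from the content to the prompt itself.
function buildAccuracyPrompt(item: FeedItem): AccuracyPrompt | null {
  if (!item.flaggedAsPotentialMisinformation) {
    return null; // unflagged content renders normally
  }
  return {
    itemId: item.id,
    explanation:
      "You're seeing this because this post may contain inaccurate information.",
    literacyTips: [
      "Check whether other sources report the same claim.",
      "Consider the original source before sharing.",
    ],
    coversContent: true,
  };
}

// Render pass over the feed: each flagged item gets its prompt overlay,
// so the prompt is what the user sees first, not the content.
function renderFeed(feed: FeedItem[]): void {
  for (const item of feed) {
    const prompt = buildAccuracyPrompt(item);
    if (prompt) {
      console.log(`[PROMPT over ${item.id}] ${prompt.explanation}`);
      prompt.literacyTips.forEach((tip) => console.log(`  tip: ${tip}`));
    } else {
      console.log(`[POST ${item.id}] ${item.text}`);
    }
  }
}

// Example: one ordinary post, one flagged post.
renderFeed([
  { id: "a1", text: "Nice weather today.", flaggedAsPotentialMisinformation: false },
  { id: "b2", text: "Dubious viral claim...", flaggedAsPotentialMisinformation: true },
]);
```

Note the design choice doing the work in this sketch: the overlay is generated from the flag alone and is rendered in place of the content, which matches the attention-shifting behavior the page describes.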

Google says experiments have been carried out on its censorship tools together with MIT and the University of Regina.

“Those who received accuracy tips were 50% more discerning in sharing habits versus users who did not,” Google claims the results show.

“Pre-roll videos on YouTube drove up to an 11% increase in confidence, three weeks after exposure.”


By Frank Bergman

Frank Bergman is a political/economic journalist living on the east coast. Aside from news reporting, Bergman also conducts interviews with researchers and subject matter experts and investigates influential individuals and organizations in the sociopolitical world.
