Facebook pays contractors to read your 'encrypted' WhatsApp messages, shares info with prosecutors
[EN.ALGHADEERTV.NET] When Facebook acquired WhatsApp, it promised to respect the privacy of its users. That hasn’t been the case: the firm now pays more than a thousand contractors to read supposedly encrypted chats.

Social media behemoth Facebook acquired WhatsApp in 2014, with CEO Mark Zuckerberg promising to keep the stripped-down, ad-free messaging app “exactly the same.” End-to-end encryption was introduced in 2016, with the app itself offering on-screen assurances to users that “No one outside of this chat” can read their communications, and Zuckerberg himself telling the US Senate in 2018 that “We don’t see any of the content in WhatsApp.”

Allegedly, none of that is true. More than a thousand content moderators are employed at shared Facebook/WhatsApp offices in Austin, Texas, Dublin, Ireland, and Singapore to sift through messages reported by users and flagged by artificial intelligence.

Based on internal documents, interviews with moderators, and a whistleblower complaint, ProPublica explained how the system works in a lengthy investigation published on Wednesday.

When a user presses ‘report’ on a message, the message itself plus the preceding four messages in the chat are unscrambled and sent to one of these moderators for review. Moderators also examine messages picked out by artificial intelligence, based on unencrypted data collected by WhatsApp. The data collected by the app is extensive and includes:

“The names and profile images of a user’s WhatsApp groups as well as their phone number, profile photo, status message, phone battery level, language and time zone, unique mobile phone ID and IP address, wireless signal strength and phone operating system, a list of their electronic devices, any related Facebook and Instagram accounts, the last time they used the app and any previous history of violations.”

These moderators are not employees of WhatsApp or Facebook. Instead, they are contractors hired through the consulting firm Accenture and paid $16.50 per hour. The workers are bound to silence by nondisclosure agreements, and Facebook never publicly announced their hiring.

Likewise, the actions of these moderators go unreported. Facebook releases quarterly ‘transparency reports’ for its own platform and subsidiary Instagram, detailing how many accounts were banned or otherwise disciplined and for what, but does not do this for WhatsApp.

Many of the messages reviewed by moderators are flagged in error. WhatsApp has two billion users who speak hundreds of languages, and staff sometimes have to rely on Facebook’s translation tool to analyze flagged messages, which one employee said is “horrible” at decoding local slang and political content.

Aside from false reports submitted as pranks, moderators have to analyze perfectly innocent content highlighted by AI. Companies using the app to sell straight razors have been flagged as selling weapons. Parents photographing their bathing children have been flagged for child porn, and lingerie companies have been flagged as forbidden “sexually oriented business[es].”

“A lot of the time, the artificial intelligence is not that intelligent,” one moderator told ProPublica.

WhatsApp acknowledged that it analyzes messages to weed out “the worst” abusers, but doesn’t call this “content moderation.”


Posted by: Fred 2021-09-09
http://www.rantburg.com/poparticle.php?ID=612084