The most prestigious law school admissions discussion board in the world.

Facebook is rating the trustworthiness of its users on a scale from zero to 1


Date: August 21st, 2018 11:22 AM
Author: Mustard legal warrant

Facebook is rating the trustworthiness of its users on a scale from zero to 1


By Elizabeth Dwoskin, Silicon Valley reporter

August 21 at 10:00 AM

SAN FRANCISCO — Facebook has begun to assign its users a reputation score, predicting their trustworthiness on a scale from zero to 1.

The previously unreported ratings system, which Facebook has developed over the past year, shows that the fight against the gaming of tech systems has evolved to include measuring the credibility of users to help identify malicious actors.

Facebook developed its reputation assessments as part of its effort against fake news, Tessa Lyons, the product manager who is in charge of fighting misinformation, said in an interview. The company, like others in tech, has long relied on its users to report problematic content — but as Facebook has given people more options, some users began falsely reporting items as untrue, a new twist on information warfare for which it had to account.

It’s “not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher,” Lyons said.

Users’ trustworthiness score between zero and 1 isn’t meant to be an absolute indicator of a person’s credibility, Lyons said, nor is there a single unified reputation score that users are assigned. Rather, the score is one measurement among thousands of new behavioral clues that Facebook now takes into account as it seeks to understand risk. Facebook is also monitoring which users have a propensity to flag content published by others as problematic and which publishers are considered trustworthy by users.


It is unclear what other criteria Facebook measures to determine a user’s score, whether all users have a score and in what ways the scores are used.

The reputation assessments come as Silicon Valley, faced with Russian interference, fake news and ideological actors who abuse the company’s policies, is recalibrating its approach to risk — and is finding untested, algorithmically driven ways to understand who poses a threat. Twitter, for example, now factors in the behavior of other accounts in a person’s network as a risk factor in judging whether a person’s tweets should be spread.

[Video: How to spot fake news. Consider these points before sharing an article on Facebook. It could be fake. (Monica Akhtar/The Washington Post)]

But how these new credibility systems work is highly opaque, and the companies are wary of discussing them, in part because doing so might invite further gaming — a predicament that the firms increasingly find themselves in as they weigh calls for more transparency around their decision-making.

“Not knowing how [Facebook is] judging us is what makes us uncomfortable,” said Claire Wardle, director of First Draft, a research lab within Harvard’s Kennedy School that focuses on the impact of misinformation and that is a fact-checking partner of Facebook. “But the irony is that they can’t tell us how they are judging us — because if they do, the algorithms that they built will be gamed.”

The system Facebook built for users to flag potentially unacceptable content has in many ways become a battleground. The activist Twitter account Sleeping Giants called on followers to take technology companies to task over the conservative conspiracy theorist Alex Jones and his Infowars site, leading to a flood of reports about hate speech that resulted in him and Infowars being banned from Facebook and other tech companies’ services. At the time, executives at the company questioned whether the mass reporting of Jones’s content was part of an effort to trick Facebook’s systems. False reporting has also become a tactic in far-right online harassment campaigns, experts say.

[Video: Alex Jones’s content stripped from YouTube, Apple, Facebook and Spotify. Apple, Facebook, YouTube and Spotify have moved to remove the content of prominent right-wing talk show host Alex Jones for violating hate speech policies. (The Hollywood Reporter)]

Tech companies have a long history of using algorithms to make all kinds of predictions about people, including how likely they are to buy products and whether they are using a false identity. But as misinformation proliferates, companies are making increasingly sophisticated editorial choices about who is trustworthy.


In 2015, Facebook gave users the ability to report posts they believe to be false. A tab on the upper right-hand corner of every Facebook post lets people report problematic content for a variety of reasons, including pornography, violence, unauthorized sales, hate speech and false news.

Lyons said she soon realized that many people were reporting posts as false simply because they did not agree with the content. Because Facebook forwards posts that are marked as false to third-party fact-checkers, she said it was important to build systems to assess whether the posts were likely to be false to make efficient use of fact-checkers’ time. That led her team to develop ways to assess whether the people who were flagging posts as false were themselves trustworthy.

“One of the signals we use is how people interact with articles,” Lyons said in a follow-up email. “For example, if someone previously gave us feedback that an article was false and the article was confirmed false by a fact-checker, then we might weight that person’s future false-news feedback more than someone who indiscriminately provides false-news feedback on lots of articles, including ones that end up being rated as true.”

The score is one signal among many that the company feeds into more algorithms to help it decide which stories should be reviewed.
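
Read as an algorithm, what Lyons describes is a simple reporter-reliability model: weight each user's flag by how often that user's past flags were later confirmed by fact-checkers, then rank posts for review by their weighted flag total. Below is a minimal sketch in Python, assuming Laplace-smoothed precision as the weight; the names and scoring logic are hypothetical illustrations, not Facebook's disclosed method.

from collections import defaultdict

class FlagWeigher:
    """Weights user flags by the user's track record with fact-checkers (hypothetical sketch)."""

    def __init__(self):
        # Per-user tallies of past flags that fact-checkers confirmed vs. rejected.
        self.confirmed = defaultdict(int)
        self.rejected = defaultdict(int)

    def record_verdict(self, user_id, post_was_false):
        # Update a user's track record once a fact-checker rules on a post they flagged.
        if post_was_false:
            self.confirmed[user_id] += 1
        else:
            self.rejected[user_id] += 1

    def weight(self, user_id):
        # Laplace-smoothed precision of the user's past flags, always in (0, 1);
        # a brand-new user starts at a neutral 0.5.
        hits = self.confirmed[user_id]
        total = hits + self.rejected[user_id]
        return (hits + 1) / (total + 2)

    def score_post(self, flagger_ids):
        # Sum of flag weights; higher-scoring posts go to fact-checkers first.
        return sum(self.weight(u) for u in flagger_ids)

w = FlagWeigher()
w.record_verdict("careful_user", True)    # flag confirmed false by a fact-checker
w.record_verdict("spammy_user", False)    # flag rejected: the post was true
w.record_verdict("spammy_user", False)
print(w.score_post(["careful_user", "spammy_user"]))  # careful_user counts ~2.7x more

On this sketch, a user who indiscriminately flags true articles drifts toward zero weight, matching Lyons's description of weighting a reliable reporter's "future false-news feedback more."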

“I like to make the joke that, if people only reported things that were false, this job would be so easy!” said Lyons in the interview. “People often report things that they just disagree with.”

She declined to say what other signals the company used to determine trustworthiness, citing concerns about tipping off bad actors.

(http://www.autoadmit.com/thread.php?thread_id=4056942&forum_id=2#36654579)




Date: August 21st, 2018 11:30 AM
Author: very tactful elite resort foreskin

this along with that t-mobile 'scam likely' shit is just another precursor for this social credit scheme to turn everyone into acquiescent gc vassals

(http://www.autoadmit.com/thread.php?thread_id=4056942&forum_id=2#36654636)




Date: August 21st, 2018 11:32 AM
Author: brilliant internet-worthy ratface station



(http://www.autoadmit.com/thread.php?thread_id=4056942&forum_id=2#36654645)




Date: August 21st, 2018 11:33 AM
Author: cerebral dog poop dysfunction

Literally a Black Mirror episode

(http://www.autoadmit.com/thread.php?thread_id=4056942&forum_id=2#36654652)




Date: August 21st, 2018 11:34 AM
Author: passionate pale house wrinkle

So it begins

(http://www.autoadmit.com/thread.php?thread_id=4056942&forum_id=2#36654659)




Date: August 21st, 2018 11:35 AM
Author: Galvanic godawful principal's office pozpig

"but as Facebook has given people more options, some users began falsely reporting items as untrue, a new twist on information warfare for which it had to account.

It’s “not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher,” Lyons said."

jfc at facebook acting SHOCKED by this turn of events

(http://www.autoadmit.com/thread.php?thread_id=4056942&forum_id=2#36654667)