Social media reconsiders its relationship with the truth
Published Date: 8/21/2019
Source: axios.com
For years, Facebook and other social media companies have erred on the side of lenience in policing their sites — allowing most posts with false information to stay up, as long as they came from a genuine human and not a bot or a nefarious actor.

The latest: Now, the companies are considering a fundamental shift with profound social and political implications: deciding what is true and what is false.

The big picture: The new approach, if implemented, would not affect every lie or misleading post. It would be meant only to rein in manipulated media — everything from sophisticated, AI-enabled video or audio deepfakes to super-basic video edits like a much-circulated, slowed-down clip of Nancy Pelosi that surfaced in May.

Still, it would be a significant concession to critics who say the companies have a responsibility to do much more to keep harmful false information from spreading unfiltered.

It would also be an inflection point in the companies' approach to free speech, which has thus far been that more is better and that the truth will bubble up.

"There is pressure on platforms to act in a more editorial or curatorial way," says Sam Gregory of the human rights nonprofit WITNESS. "You're seeing a greater range of options being deployed by platforms."

What's happening: To defend against the spread of manipulated media, which experts believe threaten elections, businesses and human rights, the companies are now discussing potential new policies to call out such media or even take them down.

In recent meetings, experts and representatives from the biggest social networks have debated definitions and new rules for dealing with this vexing question.

- In May, the Partnership on AI, WITNESS, and the BBC convened a workshop in London to lay out the problem.
- In June, the Carnegie Endowment for International Peace gathered experts and representatives from several big social media companies in San Francisco to focus on the threat to the 2020 election.

Meanwhile, pressure is mounting. House Intelligence Chairman Adam Schiff asked Facebook, Twitter and Google in July how they are dealing with deepfakes. In written responses, the companies pointed to existing policies against nonconsensual porn and election manipulation, but said they were entertaining new ones.

There's a new realization among some of the companies that their approach to date may no longer be defensible, says Charlotte Stanton of the Carnegie Endowment, who convened the previously unreported June meeting.

"It's great to believe in the conflict of ideas, but the reality is that when we're inundated with so much information, that doesn't really work," Stanton says. "There was an 'aha' moment for some of the platforms when we had that discussion."

When it comes to shying away from judging veracity, "there is some introspection on whether or not that position is the best one," says Claire Leibowicz, a research lead at the Partnership on AI who attended the May and June meetings.

The big issues that still hang over the companies:

- How to decide when manipulated media is acceptable and when it is not.
- Whether to take an offending post down, hide it or label it.
- How best to label it — a nascent question that both Carnegie and the Partnership on AI are soliciting scientific research to explore.

But, but, but: Even as social media companies realize they may need to intervene to police some forms of falsehood on their sites, they're under blistering attack from conservatives in the U.S. who claim that the companies' moderation policies are biased against them.
A reluctance to ruffle lawmakers and the president could delay or water down new policies.