SAN FRANCISCO/NEW YORK - It took Instagram hours earlier this month to remove grisly photos of the body of murdered 17-year-old Bianca Devins from its site. In March, YouTube, Twitter and Facebook were accused of taking too long to remove video of shootings at two mosques in New Zealand that killed at least 49 people, along with other content promoting the attack. In the United States, there are no legal consequences for the companies whose platforms are used to spread such disturbing content.

That’s thanks to a 23-year-old provision passed by Congress, known simply as Section 230. It’s coming under increasing attack – not just from lawmakers worried about the rising power of Silicon Valley, but also from within the industry, with IBM among the latest to call for it to be reassessed. But scrapping it entirely wouldn’t solve the problem, either. Breakingviews examines the issues.

 

SO WHAT EXACTLY IS SECTION 230

The name simply refers to where the provision appears in the 1996 Communications Decency Act. It was designed to address a perverse incentive created by legal cases over online content earlier that decade.

One case determined that CompuServe had no liability for anything on its platform because it was merely a distributor of content and did not moderate it. In the other, Prodigy was held responsible because it policed third-party material. The message, in other words, was that companies would be better off turning a blind eye – even if, for example, child pornography was found on their platforms.

So Congress created Section 230 to give what it dubbed interactive computer services legal protection both when they hosted offensive third-party content and when they removed certain material, like comments promoting violence. The law also states that a provider of an interactive computer service will not be treated as the publisher of third-party content. Democratic Senator Ron Wyden, who helped write Section 230, described it as both a “shield” and a “sword.”

WHAT’S IBM’S GOAL

IBM wants the broad protections of Section 230 reined in – but only a bit. The company’s policy executive, Ryan Hagemann, argued in a blog post earlier this month that the provision needs a refresh to put more of the responsibility for monitoring content back onto the platforms, to help curb material that promotes terrorism, violence, suicide, electoral fraud and opioid sales.

One approach IBM is proposing is a “reasonable care” standard. That, the argument goes, would ensure that a company makes good-faith efforts to remove questionable user-generated material as quickly as possible.

WHY IS IBM DOING THIS

It’s true that Big Blue isn’t really in the business of user-generated content. But the thinking is that an outright repeal of Section 230 could ripple well beyond online platforms, making other companies – such as web-hosting and data-storage firms – liable too.

HOW ARE ONLINE PLATFORMS POLICING QUESTIONABLE MATERIAL

Facebook, Twitter and other sites have recently improved their track record in taking down material by extremist groups like ISIS. But their rules are sometimes unclear or too lax. And they’re often unable to keep up with the amount of content flooding their platforms. Some 4.5 billion people now use the internet, compared with just 36 million in 1996. The amount of material online has grown exponentially since then. On YouTube alone, 300 hours of video are uploaded every minute.

There are other problems. YouTube’s algorithms, for example, suggest similar videos based on a user’s viewing history, so seemingly innocent videos of children playing in a swimming pool can be exploited by predators. The company says it removed more than 800,000 videos that violated child-safety policies in the first quarter alone, and that it has disabled comments and limited recommendations.

HOW WOULD CONGRESS CHANGE THE LAW

That’s the key question lawmakers focused on after it was discovered that Russia used social-media platforms to meddle in the 2016 U.S. presidential election.

Some are calling for carving out exceptions to Section 230 that leave internet platforms legally exposed on certain issues. Congress did that for sex-trafficking content last year, and a similar carve-out could, for instance, be applied to opioid sales. The issue can get murky, though. Senator Josh Hawley has introduced a bill that would require internet publishers with more than $500 million in annual global revenue to submit to external audits proving their content-removal policies are politically neutral.

WHAT WOULD HAPPEN IF SECTION 230 WERE ROLLED BACK

One option would be for companies to fall back on earlier court precedent and stop moderating content altogether. More likely, though, providers would crack down harder on what’s allowed on their platforms, meaning people might no longer be able to freely post blogs, comments and other material. In other words, internet sites would become highly curated.

Scrapping it altogether, though, would radically change the internet. Twitter may not be able to sustain its current business model. A site like Yelp could be sued out of existence, along with Wikipedia, Medium and Reddit. Tech giants wouldn’t be the only companies to suffer. Every online service with a chat room or comments section would also be subject to legal liability, from payment systems like Venmo to gaming platforms like Steam.

The one carve-out, for sex trafficking, has already had effects: Craigslist, for example, took down its personal ads afterward. That may not sound like a big change, but the more chinks are put in the old armor, the closer it gets to falling apart. Politicians no longer have patience with tech firms, so the companies may be better off supporting an IBM-like solution than risking a total gutting.

On Twitter https://twitter.com/GinaChon and https://twitter.com/jennifersaba

 

CONTEXT NEWS

- IBM is calling for lawmakers to revise Section 230 of the Communications Decency Act, which protects an “interactive computer service” from legal liability for third-party content on its platform.

- Ryan Hagemann, IBM government and regulatory affairs executive, said in a blog post on July 10 that a “reasonable care” standard should be added to the provision so that online companies can be held legally responsible when they fail to use reasonable care in moderating content – for example, by not quickly deleting child pornography.

- Some members of Congress, including Republican Senator Josh Hawley, have called for eliminating the liability protection in Section 230 for a variety of reasons, ranging from allegations of anti-conservative bias to foreign election-meddling.

 

(Editing by Antony Currie and Amanda Gomez) ((gina.chon@thomsonreuters.com; jennifer.saba@thomsonreuters.com; Reuters Messaging: gina.chon.thomsonreuters.com@reuters.net; jennifer.saba.thomsonreuters.com@reuters.net))