The penalties, which could include fines and blocked access, will be set out Monday in a government position paper that says the United Kingdom will make internet companies legally responsible for unlawful content and material that is damaging to individuals or the country.
The government said that an independent regulator would be created to enforce the new rules, which focus on removing content that incites violence, encourages suicide or constitutes cyber-bullying. Content related to terrorism and child abuse would face even stricter standards.
Companies would be required to "take reasonable and proportionate action" to address objectionable content on their platforms, the government said in a statement. A "code of practice" would include measures to minimize the spread "of misleading and harmful disinformation with dedicated fact checkers, particularly during election periods," it added.
Europe has taken a much more robust approach to tech regulation than the United States, confronting industry giants over competition issues, data protection, privacy and tax. Calls for stricter regulation have increased recently in the United Kingdom after social media was blamed for the suicide of a British teenager and Facebook failed to stop the live broadcast of a mass killing in New Zealand.
"For too long these companies have not done enough to protect users, especially children and young people, from harmful content," Prime Minister Theresa May said in a statement. "That is not good enough, and it is time to do things differently."
The United Kingdom is casting a wide net. The plan, which the government will continue to develop over the next 12 weeks before proposing as legislation, extends to any company that "allows users to share or discover user generated content or interact with each other."
That would include not only Twitter (TWTR) and Google's (GOOGL) video platform YouTube, but also popular internet message boards such as Reddit.
The companies would be required to have an easy and effective complaints function, where users would "receive timely, clear and transparent responses to their complaints." Social media firms would also be forced to publish annual reports on the amount of harmful content on their platforms and explain what they were doing to address the issue.
The government said that the new regulator would be empowered to block access to websites or apps that break the rules, thereby disrupting their business models. It also said, without providing further details, that individual senior managers could face civil fines and criminal liability.
Kent Walker, senior vice president at Google, wrote in a blog post earlier this year that the company was seeking to address illegal and harmful content by increasing transparency and working with regulators.
Facebook (FB) CEO Mark Zuckerberg said in March that governments and regulators should play a "more active role" online. But he said that a global approach was needed to ensure that "the Internet does not get fractured" and "entrepreneurs can build products that serve everyone."
"Lawmakers often tell me we have too much power over speech, and frankly I agree," he wrote in the Washington Post. "I've come to believe that we shouldn't make so many important decisions about speech on our own."
Rebecca Stimson, Facebook's head of UK public policy, said Monday that new rules should support innovation, the digital economy and freedom of speech. "These are complex issues to get right and we look forward to working with the [UK] government and parliament to ensure new regulations are effective," she said in a statement.
Twitter's UK head of public policy, Katy Minshall, said in a statement on Monday that the company is "deeply committed" to the safety of its users, citing what she described as "70 changes to our policies and processes last year."
"We look forward to engaging in the next steps of the process, and working to strike an appropriate balance between keeping users safe and preserving the open, free nature of the internet," Minshall said.