U.S. President Donald Trump sits before signing an executive order regarding social media companies in the Oval Office of the White House in Washington on May 28, 2020.
Jonathan Ernst | Reuters
The debate over social media content has reached a level of intensity in recent days that we could never have imagined. Earlier this week, President Trump took to Twitter to suggest that mail-in ballots would inevitably lead to voter fraud – claims that Twitter deemed misinformation. Twitter, which has long had a mechanism for flagging content from public figures while keeping it online, used it for the first time against the president’s tweets, noting that his claims about mail-in voting were misleading and appending a link to a page with more information on voting by mail.
The administration’s response was swift: an executive order on social media targeting internet companies’ content policies. The president’s new policy came under immediate attack from legal experts, with many scholars suggesting that, as a matter of law, the order was a mess in that it attempted to override Section 230 of the Communications Decency Act, which grants internet companies immunity for their content-removal decisions.
Attention to content policy issues only redoubled on Friday morning: Trump and the official White House account tweeted that “when the looting starts, the shooting starts,” referring to the protests over the killing of George Floyd – words for which Twitter once again took action against Trump, flagging the tweet on the grounds that it glorified violence. The scrutiny of content issues does not look likely to ease anytime soon.
Online misinformation, hate speech and violence
This intense scrutiny of content policy is indeed important. The world is rightly focused on adopting content policy reforms aimed at keeping offensive material off internet companies’ platforms. Recent events across Europe, Sri Lanka, India, Brazil and the United States – and, perhaps most remarkably, the genocidal conduct of Myanmar military officials – make clear that internet companies must do more to keep their platforms free of misinformation, hate speech, discriminatory content and incitements to violence.
But we must not forget that the discussion of content policy regulation is something of a black hole in our current political climate. There are two reasons for this. The first is political conflict, particularly in the United States, over how and to what extent we should maintain the national commitment to free expression on the forum of the leading internet platforms. Debates over First Amendment rights in the United States are contentious, with conservatives voicing deep concern over the platforms’ treatment of accounts belonging to far-right figures such as Richard Spencer, Jared Taylor and Laura Loomer, who spread white supremacist ideology through their tweets, posts and videos.
Second – and, I think, more critical – the standards applied to regulate hate speech, misinformation and other categories of offensive content will vary widely around the world. Cultural norms ranging from fairly liberal to ultraconservative exist between countries and within them. It will be a monumental challenge for civil society, national governments and international organizations to arrive at a set of standards that internet platforms should apply in perpetuity. This is a task that will require many hard discussions over many years – and even after all of those deliberations, we may have no clear path toward harmonizing global standards.
We must categorize the political discussions around hate speech, disinformation, terrorism and the like as content policy issues and deal with them independently of a second class of regulation: economic regulation that targets the business models of Silicon Valley’s internet companies, focused on privacy, transparency and competition.
Economic regulation versus content moderation
Admittedly, the two categories of future regulation – content policy and economic policy – are both vital and equally important. Society wants peace; but our internet platforms wreak havoc, spreading misinformation, fomenting hatred by amplifying the messages of white supremacists, and sowing violence by handing authoritarians a platform for racist conspiracies. Society wants equity; but our internet companies systematically exploit the individual, artificially and unjustly sap the dynamism of open markets, and make questionable decisions behind our backs.
But the public, politicians and regulators around the world have mainly focused on regulating content policy. The reason is understandable: politics and public perception fixate on the here and now. Even so, we cannot ignore the question of economic regulation – policies that target the corrosive business model of the leading internet companies.
We cannot allow our dismay at the Russian disinformation campaign, or at the industry’s misplaced judgments about what should and should not stay online, to outweigh the deeper concern: that it is the business model of the mainstream internet that generated these harms and sustains them. We cannot allow our deliberations over content policy regulation to distract us from these deeper problems. To treat them and contain them at their source, we cannot merely hack at the leaves of the weed. We must poison its roots.
While it is important to moderate content to limit discrimination, protect elections and save lives, these are largely administrative concerns that will ultimately be settled at the discretion of regional politics and culture. This is not an intellectual debate; drawing the lines of content acceptability is a determination of the collective attitudes of users in a given locality. In the meantime, internet companies can employ content policy frameworks to surface user concerns and reflect them in the governance of their platforms.
Mark Zuckerberg and the “arbiters of truth”
Consumer internet companies have decided that determining what constitutes offensive content is a responsibility that should be taken out of the industry’s hands. Consider, for example, Facebook CEO Mark Zuckerberg’s seemingly benevolent proclamation that he does not wish to be the arbiter of truth.
Did he say that for the good of humanity? Probably not: he does not want to be the arbiter of truth because he does not want that heavy responsibility resting on his shoulders and those of his company. Why should he take the blame for Russian activity on his platforms and its impact on the 2016 U.S. presidential election when we, as a society, cannot even determine what types of content should count as fake news? Whatever the negative externality, he wishes to pass off responsibility for such determinations.
But pass it to whom? That does not seem to matter to industry leaders, so long as it is a third party – an entity external to the company – that holds the public’s trust. This third party could be a government agency, a civil society organization, an industry consortium, a review board, or a nonprofit created exclusively to resolve questions about offensive content. In the industry’s view, the organization simply needs public authority and confidence within its local jurisdiction. It should be seen by platform users as the source of truth.
The industry knows that the many questions such arrangements for addressing content policy challenges would necessarily raise will take forever to work out: who should hold such authority over content policy, to what extent regional and national governments should be involved in decision-making, how to prevent political influence, and, perhaps most critically, just where to draw the line.
Consider the situation in the United States, where Democrats and Republicans cannot even bring themselves to adopt the commonsense policy advanced in the Honest Ads Act – which simply proposes transparency requirements for the provenance and distribution of digital political advertisements. If we cannot resolve that issue after four years of deliberation, it is unlikely that we will develop anytime soon the content policy standards that Twitter, Google, Snapchat, Facebook and Microsoft should follow.
The industry knows this. Its leaders are aware that heated debates over content policy will persist for a very long time given our political situation, and that as long as they do, we will focus less on the more fundamental problem of economic regulation.
Their biggest fear is economic regulation. They fear real standards of privacy, competition and transparency that would force changes in their business practices, because such regulations, if seriously designed to curb consumer exploitation, would seriously hamper their business models. That would jeopardize both their personal wealth and the interests of their shareholders. Any constraint on the business model would considerably reduce the companies’ profit margins; the extent of that reduction would depend only on the stringency of the regulatory standards adopted.
The leaders of the mainstream internet will quietly encourage public debate over fake news and hate speech; they will add fuel to these enlightening deliberations for as long as they can, distracting us from the subtler, more fundamental issues at the heart of the industry’s commercial regime. Facebook’s new oversight board is a perfect example: the board should have been designed by the company to address not only content policy violations but also the economic impact of the business itself. That is where society’s real demons reside.
All of which is to say that we must prioritize the question of economic regulation; let the theater of war over the adoption of a comprehensive privacy law be our fulcrum. Let us not die on the battlefield of content policy regulation, which will set off a series of global debates unlikely ever to yield a clear, unifying international standard, given differing political opinions even within countries like the United States. The more our attention is diverted to the problem of content policy, the less we will focus on curing society of the virus hiding beneath it – and the further its malevolence will spread.
Dipayan Ghosh is co-director of the Digital Platforms & Democracy Project at the Harvard Kennedy School. He was a privacy and public policy advisor at Facebook and, earlier, an economic advisor in the Obama White House. He is the author of the Brookings Institution’s forthcoming book on the future of technology, “Terms of Disservice.” This commentary was excerpted from the book and adapted for publication.