Media or Corporations? Social Media Governance Between Public and Commercial Rationales

Author(s):  
Daniela Stockmann

In public discussions of social media governance, corporations such as Google, Facebook, and Twitter are often first and foremost seen as providers of information and as media. However, social media companies’ business models aim to generate income by attracting a large, growing, and active user base and by collecting and monetising personal data. This has generated concerns with respect to hate speech, disinformation, and privacy. Over time, there has been a trend away from industry self-regulation towards a strengthening of national-level and European Union-level regulations, that is, from soft to hard law. Hence, moving beyond general corporate governance codes, governments are imposing more targeted regulations that recognise these firms’ profound societal importance and wide-reaching influence. The chapter reviews these developments, highlighting the tension between companies’ commercial and public rationales, critiques the current industry-specific regulatory framework, and raises potential policy alternatives.

2019
pp. 203
Author(s):  
Kent Roach

It is argued that neither the approach taken to terrorist speech in Bill C-51 nor that taken in Bill C-59 is satisfactory. A case study of the Othman Hamdan case, including his calls on the Internet for “lone wolves” “swiftly to activate,” is featured, along with the use of immigration law after his acquittal for counselling murder and other crimes. Hamdan’s acquittal suggests that the new Bill C-59 terrorist speech offence and take-down powers, based on counselling terrorism offences without specifying a particular terrorism offence, may not reach Hamdan’s Internet postings. One coherent response would be to repeal terrorist speech offences while making greater use of court-ordered take-downs of speech on the Internet and of programs to counter violent extremism. Another coherent response would be to criminalize the promotion and advocacy of terrorist activities (as opposed to terrorist offences in general in Bill C-51, or terrorism offences without identifying a specific terrorist offence in Bill C-59) and to provide for defences designed to protect fundamental freedoms, such as those under section 319(3) of the Criminal Code that apply to hate speech. Unfortunately, neither Bill C-51 nor Bill C-59 pursues either of these options. The result is that speech such as Hamdan’s will continue to be subject to the vagaries of take-downs by social media companies and immigration law.


Author(s):  
Şükrü Oktay Kılıç ◽  
Zeynep Genel

A handful of social media companies, with their shifting strategies to become hosts of all information available online, have significantly changed the news media landscape in recent years. Many news media companies across the world have gone through reorganizations in a bid to keep up with the new storytelling techniques, technologies, and tools introduced by social media companies. On the other hand, with their non-transparent algorithms favoring particular content formats and their lack of interest in developing solid business models for publishers, social media platforms have attracted widespread criticism from many academics and media practitioners. This chapter discusses the impact of social media on journalism, drawing on digital research into which storytelling types earn the three most-followed news outlets in Turkey the most engagement on Facebook.


Author(s):  
Ella Gorian

The object of this research is the relations arising from the implementation of artificial intelligence technologies. The subject of this research is the normative documents of Singapore that establish requirements for the development and application of artificial intelligence technologies. The article identifies the peculiarities of the Singaporean approach to regulating this sphere. It characterises the national initiative and the circle of actors involved in developing and implementing normative provisions on digital technologies. The author explores aspects of public-private partnership, defines the government’s role in regulation, and gives special attention to the protection of personal data used by artificial intelligence technologies. Positive practices that could be applied in the Russian strategy for the development of artificial intelligence are described. Singapore applies a self-regulation approach to the implementation of artificial intelligence technologies: the government plays the backbone role, sets common goals, and involves representatives of the private sector and the general public. Moreover, the government acts as a guarantor of the interests both of the private sector, by creating an attractive investment regime, and of citizens, by setting strict requirements for data usage and for control over artificial intelligence technologies. A distinguishing feature of the Singaporean approach is the designation of priority sectors of the economy and of instruments that ensure a systematic implementation of artificial intelligence.
Singapore efficiently uses its demographic and economic peculiarities to proliferate artificial intelligence technologies in the Asian region; its model of artificial intelligence governance, developed and successfully tested at the national level, has received worldwide recognition and application. Singapore’s transformation into an international centre of artificial intelligence is also driven by the improvement of its legal regime, with simultaneous facilitation in the sphere of intellectual property. These specificities should be taken into account by the Russian authors of the national strategy for the development of artificial intelligence.


Author(s):  
Jeffrey W. Howard

Social media are now central sites of democratic discourse among citizens. But are some contributions to social media too extreme to be permitted? This entry considers the permissibility of suppressing extreme speech on social media, such as terrorist propaganda and racist hate speech. It begins by considering the argument that such restrictions on speech would wrong democratic citizens, violating their freedom of expression. It proceeds to investigate the moral responsibilities of social media companies to suppress extreme speech, and whether these ought to be enforced through the law. Finally, it explores an alternative mechanism for combatting extreme speech on social media—counter-speech—and evaluates its prospects.


2021
Vol 9 (1)
pp. 56-71
Author(s):  
Balázs Bartóki-Gönczy

Social media platforms are mainly characterised by private regulation. However, their direct and indirect impact on society (fake news, hate speech, incitement to terrorism, data protection breaches, effects on the viability of professional journalism) has become such that the private regulatory mechanisms in place, often opaque, seem inadequate. In the present paper, I first address the problem of the legal classification of these services (media service provider vs. intermediary service provider), since the answer to this question is a prerequisite for any state intervention. I then present, with a critical approach, the regulatory initiatives at the EU and national level which might shape the future of ‘social media platform’ regulation.


Author(s):  
Molly K. Land

The internet would seem to be an ideal platform for fostering norm diversity. The very structure of the internet resists centralized governance, while the opportunities it provides for the “long tail” of expression means even voices with extremely small audiences can find a home. In reality, however, the governance of online speech looks much more monolithic. This is largely a result of private “lawmaking” activity by internet intermediaries. Increasingly, social media companies like Facebook and Twitter are developing what David Kaye, UN Special Rapporteur for the Promotion and Protection of the Right to Freedom of Opinion and Expression, has called “platform law.” Through a combination of community standards, contract, technological design, and case-specific practice, social media companies are developing “Facebook law” and “Twitter law,” displacing the laws of national jurisdictions. Using the example of content moderation, this chapter makes several contributions to the literature. First, it expands upon the idea of “platform law” to consider the broad array of mechanisms that companies use to control user behavior and mediate conflicts. Second, using human rights law as a foundation, the chapter makes the case for meaningful technological design choices that enable user autonomy. Users should be able to make explicit choices about who and what they want to hear online. It also frames user choice in terms of the right to hear, not the right to speak, as a way of navigating the tension presented by hate speech and human rights without resorting to platform law that sanitizes speech for everyone.


2019
Vol 5 (4)
pp. 205630511988169
Author(s):  
Ana Jorge

This article looks at the discourses of Instagram users about interrupting the use of social or digital media, through hashtags such as “socialmediadetox,” “offline,” or “disconnecttoreconnect.” We identified three predominant themes: posts announcing or recounting voluntary interruption, mostly as a positive experience associated with regaining control over time, social relationships, and one’s own well-being; posts actively campaigning for this type of disconnection and attempting to convert others; and disconnection as a lifestyle choice, or the marketing of products by association with the disconnection imaginary. These discourses reproduce other public discourses in asserting self-regulation of social media use as a social norm, whereby social media users are responsible for their own well-being and interruption is conveyed as a valid way to achieve that end. They also reveal how digital disconnection and interruption are increasingly reintegrated into social media as a lifestyle, in cynical and ironic ways, and commodified and co-opted by businesses that benefit from, and ultimately contribute to, the continued economic success of the platform. As Hesselberth, Karppi, and Fish have argued in relation to other forms of disconnection, discourses about Instagram interruptions are thus not transformative but restorative of the informational capitalism that social media are part of.


2019
Vol 72 (1)
pp. 1-16
Author(s):  
Alton Y.K. Chua ◽  
Snehasish Banerjee

Purpose
The purpose of this paper is to explore the use of community question answering sites (CQAs) on the topic of terrorism. Three research questions are investigated: what are the dominant themes reflected in terrorism-related questions? How do answer characteristics vary with question themes? How does users’ anonymity relate to question themes and answer characteristics?
Design/methodology/approach
Data include 300 questions that attracted 2,194 answers on the community question answering site Yahoo! Answers. Content analysis was employed.
Findings
The questions reflected the community’s information needs, ranging from the life of extremists to counter-terrorism policies. Answers were laden with negative emotions reflecting hate speech and Islamophobia, making claims that were rarely verifiable. Users who posted sensitive content generally remained anonymous.
Practical implications
This paper raises awareness of how CQAs are used to exchange information about sensitive topics such as terrorism. It calls for governments and law enforcement agencies to collaborate with major social media companies to develop a process for cross-platform blacklisting of users and content, as well as for identifying those who are vulnerable.
Originality/value
Theoretically, the paper contributes to the academic discourse on terrorism in CQAs by exploring the types of questions asked and the sorts of answers they attract. Methodologically, it serves to enrich the literature around terrorism and social media, which has hitherto mostly drawn data from Facebook and Twitter.


Significance
Facebook has indefinitely suspended Trump from its main platform and from Instagram, while Twitter has done so permanently, for his role in instigating the violence at the US Capitol on January 6. These developments spotlight the role of social media firms in spreading and tackling hate speech and disinformation, and their power to shut down public speech unilaterally.
Impacts
Democratic control of the White House and Congress offers social media companies a two-year window to secure softer regulation. The EU will push its new digital markets legislation with vigour following the events at the US Capitol. Hard-right social media will find new firms willing to host their servers, partly because their user numbers run to millions, not billions.


Author(s):  
Soraya Chemaly

The toxicity of online interactions presents unprecedented challenges to traditional free speech norms. The scope and amplification properties of the internet give new dimension and power to hate speech, rape and death threats, and denigrating, reputation-destroying commentary. Social media companies and internet platforms, all of which regulate speech through moderation processes every day, walk the fine line between censorship and free speech with every decision they make, and they make millions of such decisions a day. This chapter explores how a lack of diversity in the tech industry affects the design and regulation of products and, in so doing, disproportionately harms the free speech of traditionally marginalized people. During the past year there has been an explosion of research about, and public interest in, the tech industry’s persistent diversity problems. At the same time, the pervasiveness of online hate, harassment, and abuse has become evident. These problems come together on social media platforms that have institutionalized and automated the perspectives of privileged male experiences of speech and violence. The tech sector’s male dominance, and the sex segregation and hierarchies of its workforce, result in serious and harmful effects globally on women’s safety and free expression.

