Opinion / Research | 10 Apr 2019

Is Christchurch the Port Arthur moment for the social media giants?

The Australian Parliament has introduced stricter requirements for social media in the wake of the attack in New Zealand; however, the new law still leaves many important questions unanswered.

Armed police guard at the Al Noor mosque in Christchurch after it was officially reopened following the attack, on March 23, 2019. Photo Credit: Carl Court / Staff.

The Australian Parliament has introduced a new law that makes it a crime for social media services and internet providers to fail to “expeditiously” remove “abhorrent violent material”. This applies to audio-visual material of persons engaging in terrorism, murder, torture, rape and kidnapping.

This is an understandable response to the horror of the Christchurch attacks, and the way social media platforms were used to stream the attacks live to millions of people.  The question is, will these laws make Australia safer – as the gun buy-back did, in the wake of the Port Arthur tragedy? Or are they just about making us feel like we have a degree of control over the digital world, when the reality is very different?

Current Social Media Landscape

Until very recently, social media giants like Facebook have pushed back against any efforts to make them legally accountable for the content on their sites. They have argued that they are just platforms – simply providing the space, while the users provide the content.

This argument has enabled Facebook to maintain that the obligation to regulate content falls on the users themselves – Facebook will respond as soon as users alert it to abusive or hateful content. In other words, it will take hateful posts down – after the fact.

This argument has also deflected attention from the lack of transparency that we as a community have over the algorithms Facebook uses to curate the content that individuals see.

As Roger McNamee, author of Zucked, has argued, the logic that Facebook is a benign pin-up board in the online space breaks down when you consider the business model that Facebook is operating under and profiting from.

Data Harvester

First, Facebook is capturing the data that we put onto its platforms. It then takes the insights obtained from this data and sells them to generate revenue.

Facebook vacuums up enormous quantities of data from its users – data from content that you post or that you like or respond to, data from what you search for, and data from your messages to your friends using “private” messenger. Facebook puts this together with data from other applications you access through your Facebook login, like Uber and PayPal.

The result is an exquisitely detailed picture of you, built from this data and metadata. This picture is then used to sell targeted online advertising to marketers. The data is also sold through third-party data aggregators. This is how Facebook generated US$16.9 billion in revenue in the final quarter of 2018 alone.
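To make this concrete, here is a deliberately simplified sketch, in Python, of how scattered activity data might be folded into a single profile and turned into advertising segments. It is purely illustrative: the event types, weights and helper functions are hypothetical, and it does not describe Facebook's actual systems.

```python
# Purely illustrative: a toy profile-building pipeline. The field names and
# data sources are hypothetical stand-ins for the kinds of signals described
# above (posts, likes, searches, linked third-party apps).

from collections import Counter
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    user_id: str
    interests: Counter = field(default_factory=Counter)  # topic -> weight
    linked_apps: set = field(default_factory=set)         # e.g. ride-sharing, payments

def ingest_activity(profile: UserProfile, events: list[dict]) -> None:
    """Fold raw activity events (posts, likes, searches) into interest weights."""
    weights = {"post": 3, "like": 2, "search": 1}
    for event in events:
        profile.interests[event["topic"]] += weights.get(event["type"], 1)

def ingest_linked_app(profile: UserProfile, app_name: str) -> None:
    """Record a third-party app accessed via the platform login."""
    profile.linked_apps.add(app_name)

def advertising_segments(profile: UserProfile, top_n: int = 3) -> list[str]:
    """Turn the aggregated profile into coarse segments an advertiser could buy."""
    return [topic for topic, _ in profile.interests.most_common(top_n)]

# A handful of events is already enough to produce a targetable profile.
profile = UserProfile(user_id="u123")
ingest_activity(profile, [
    {"type": "like", "topic": "hunting"},
    {"type": "post", "topic": "hunting"},
    {"type": "search", "topic": "politics"},
])
ingest_linked_app(profile, "ride_sharing_app")
print(advertising_segments(profile))  # ['hunting', 'politics']
```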

Curator in chief

Second, far from just being an online pin-up board, Facebook is already heavily curating content for us, even before we log on. It does this through algorithms that determine what content we see, and when.

These algorithms are not simply technical pieces of code that are value-neutral. Facebook's algorithms operate in such a way that the advertising I see in my feed reflects my personal likes, dislikes and belief systems. Equally, the advertising that a white-supremacist, gun-owning Facebook user sees will reflect their likes, dislikes and belief systems. The alarming result is that each of our beliefs can be reinforced and amplified by the recommended content Facebook and other online platforms then serve to us. Because of this content bubble, it is easy for Facebook users (including the white-supremacist gun-owner) to believe everyone thinks the same way they do. In an increasingly polarised world, the implications of this blinkered perspective are deeply concerning.
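This feedback loop can be illustrated with a toy example. The sketch below is hypothetical and greatly simplified – no real platform works this simply – but it captures the dynamic described above: content is ranked by how strongly it overlaps with a user's past engagement, and every interaction deepens that overlap.

```python
# Purely illustrative: a toy engagement-based ranker, not any platform's real
# algorithm. Content similar to what a user already engages with is ranked
# higher, so existing views get amplified.

def score(item_topics: set[str], user_interests: dict[str, float]) -> float:
    """Score an item by how strongly it overlaps with the user's past engagement."""
    return sum(user_interests.get(topic, 0.0) for topic in item_topics)

def recommend(feed: list[dict], user_interests: dict[str, float], k: int = 3) -> list[dict]:
    """Return the k items most aligned with what the user already likes."""
    return sorted(feed, key=lambda item: score(item["topics"], user_interests), reverse=True)[:k]

def update_interests(user_interests: dict[str, float], clicked: dict) -> None:
    """Each click reinforces the topics of the clicked item -- the feedback loop."""
    for topic in clicked["topics"]:
        user_interests[topic] = user_interests.get(topic, 0.0) + 1.0

# Two users with different starting beliefs see different feeds, and every
# interaction pushes those feeds further apart.
feed = [
    {"id": 1, "topics": {"gun rights"}},
    {"id": 2, "topics": {"gun control"}},
    {"id": 3, "topics": {"gardening"}},
]
user_a = {"gun rights": 2.0}
user_b = {"gun control": 2.0}
print([item["id"] for item in recommend(feed, user_a)])  # item 1 ranked first
print([item["id"] for item in recommend(feed, user_b)])  # item 2 ranked first
```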

Legal fix or fiction?

The new Australian law deals with an extreme and very narrow set of circumstances – situations where social media or internet services are used to livestream abhorrent content. While this is an important and worthy focus, the changes were rushed through Parliament last week with no real discussion of how the new law will be enforced.

Even more concerning is that our parliamentarians are yet to pay attention to the other ways that social media platforms can contribute to, sustain and even amplify extremist views.

Unlike the many other complex systems that we interact with on a daily basis – banking, telecommunications, insurance, health care – social media platforms still operate without any system of governance, and this new law does not change that. There is no independent oversight. There is no equivalent safety standard for the companies that operate these platforms, nor any system of consumer protection for people using or affected by them.

Wanted: effective systems of governance and accountability

If we want to build online spaces that are safe, rather than safe havens for bigotry and hatred, we need to lay the foundations for governance and accountability. This will require us to return to first principles on a number of issues, including:

  • Who do we think should own the data trail that occurs each time we interact with the online world?
  • What system of checks and balances do we think is appropriate for the use, treatment and sale of data and metadata?
  • Who is responsible for ensuring that when this data is fed into algorithms, the outputs do not help light and sustain the fires of extremism? What is the equivalent of consumer safety standards for algorithms?
  • What are our rights as consumers and as people affected by the operations of these systems?

Regulation of complex technical systems is not easy, but it is also something we have experience in.  As a society, we regulate everything from power grids to telecommunications.  Over time, we have developed governance frameworks for these systems that balance complex sets of values and competing interests. This has been achieved in ways that enable innovation and commerce to continue, while balancing the rights of consumers with the broader interests of safety and accountability.

Given how ubiquitous and deeply embedded in our lives platforms like Facebook have become, it is high time to go back to first principles and consider what governance and regulation of digital systems and spaces we want and need as a community.

While tackling racism, bigotry and hatred is an incredibly complex task, we could start by looking for ways to ensure these views are starved of airtime and oxygen rather than allowed to flourish and grow online.   Considerations of safety and human rights must take centre stage in these discussions.

Editor's Note: In the last few days, the UK has proposed new laws that would establish a statutory duty of care, requiring companies to take more responsibility for the safety of their users and to tackle harm caused by content or activity on their services. This would be backed by an independent regulator: see the Online Harms White Paper.

by Fiona David
Fiona was appointed as the inaugural Research Chair of the Minderoo Foundation in 2018. In her previous role as Executive Director of Global Research for Minderoo's Walk Free Foundation, Fiona led the team that created the Global Slavery Index, from its first to its fourth edition. A lawyer and criminologist, Fiona has worked for more than twenty years at the intersection of crime, law reform and human rights.