Safeguard your business against defamatory content on social media

The rapid evolution of social media and various other digital platforms means legislation is regularly playing catch up. Liability and ownership of defamatory content on social media is one such emerging area of law.

Staff Writer 15 Mar 2021
4 mins

While most of us understand the risks of typing something hurtful or libellous on social media, a recent case shows how a simple emoji could also put you or your business at risk. And can administrators of Facebook pages be liable for defamatory posts made by third parties?

These questions were answered in a recent webinar hosted by our friends at HopgoodGanim Lawyers and presented by Samantha Clements and Brett Bolton.

In this article, I will be reviewing two case studies discussed in HopgoodGanim’s webinar.

Case Study 1: Think next time you emoji

In what might be the first time a court has had to determine the defamatory meaning of an emoji, a New South Wales court had the challenging task of determining whether the ‘zipper-mouth’ face emoji could have defamed the plaintiff, Zali Burrows.

The emoji in question was used in a Twitter thread about Ms Burrows’ alleged misconduct as a lawyer. The initial tweet suggested that Ms Burrows had been subject to disciplinary action for mishandling a matter in court. The defendant, Adam Houda, replied to the tweet with the zipper-mouth emoji 🤐.

The lawyer for Ms Burrows argued that the “zipper-mouth face” emoji was “worth a thousand words” and could be damaging to Ms Burrows’ reputation.

The court agreed that the emoji could cause an ordinary, reasonable social media user to make unfavourable assumptions about Ms Burrows.

This decision is particularly notable: it sets a precedent that emojis are capable of being defamatory and serves as a warning that emojis are not exempt from defamation law.

As emojis become an ever more common part of our communication, there is no doubt this area of law will continue to expand and play catch up 🏃.

HopgoodGanim’s Key Takeaways

  • A simple emoji is not simple after all
  • An emoji alone is enough to substantiate an action for defamation
  • Courts now say emojis can be just as damaging as words
  • Do not use emojis or memes where you are not sure of their meaning or where meaning is ambiguous. The test in defamation is not what you think it means or wanted it to mean, but what suggestions the words and images are capable of giving rise to.

Case Study 2: Dylan Voller

Backtracking four years, you may remember the case of Indigenous Northern Territory youth detainee Dylan Voller, whose mistreatment at the Don Dale youth detention centre led to a 2016 royal commission.

Dylan Voller became headline news, with articles about him appearing on the Facebook pages of news outlets such as The Sydney Morning Herald, The Australian, the Centralian Advocate, Sky News Australia and The Bolt Report. These posts attracted thousands of comments from the public, some of which Mr Voller found defamatory, leading him to sue the media companies.

[Image: Dylan Voller claims he continued to suffer distress and damage to his reputation. (ABC News: Steven Schubert)]

The Court concluded that the media companies had made it possible for the defamatory comments to become visible and to harm Mr Voller, and were therefore liable as publishers of those comments.

The Court also emphasised that the media companies had the capacity to moderate, and even hide, all comments before they became visible, but had failed to do so.

In a joint statement, News Corp, Nine and Sky said the court had shown Australian defamation law was “completely out of step with the realities of publishing in the digital age”.

New South Wales Supreme Court Justice Stephen Rothman noted that the media companies could have used a word filter, adding common words so that the majority of comments would have been automatically hidden pending review.

The Voller case led the Australian federal government to announce plans in late 2019 to make platforms such as Twitter and Facebook responsible for content posted by third parties, as part of a wide-ranging program of defamation law reform.

HopgoodGanim’s Key Takeaways

  • The administrator of every public Facebook page – whether a politician, business or court – can now be liable for third-party comments on that page.
  • Before posting, assess the nature and subject matter of the content and be cautious not to publish content that could be seen as inviting controversial comments.
  • There are two key approaches to moderating comments (a simple sketch of the second follows this list):
  1. Hiding: Monitor comments as they are posted and ‘hide’ those that contain potentially defamatory allegations. A hidden comment remains visible only to the person who wrote it and their friends (meaning they won’t know it has been hidden).
  2. Filtering: Facebook’s settings allow you to block comments containing certain words from appearing on your page. Such comments are automatically hidden and must be manually ‘unhidden’ before they appear publicly. This is the more proactive approach.
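
To make the pre-filtering idea concrete, here is a minimal Python sketch of keyword-based comment moderation. It is a generic illustration only, under assumptions of our own: the blocklist and function names are hypothetical, and this is not Facebook’s actual filtering tool.

```python
import re

# Hypothetical blocklist. In practice, adding very common words
# (as Justice Rothman suggested) would hold back almost every
# comment for manual review before it appears publicly.
BLOCKLIST = {"corrupt", "criminal", "liar"}

def should_hide(comment: str) -> bool:
    """Return True if the comment contains any blocklisted word."""
    words = re.findall(r"[a-z']+", comment.lower())
    return any(word in BLOCKLIST for word in words)

comments = [
    "Great article, thanks for sharing.",
    "This bloke is a liar and a criminal.",
]

for comment in comments:
    status = "hidden pending review" if should_hide(comment) else "visible"
    print(f"{status}: {comment}")
```

The design point is simply that the filter errs on the side of hiding: a comment caught by the list stays out of public view until a human decides to ‘unhide’ it.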

__________

Robyn is Cannings Purple’s Digital Content Officer. She has both theoretical knowledge and practical experience in content creation, social media management and videography/photography in the retail, tourism and education sectors. Robyn has a keen interest in finding new content formats, platforms and methods for storytelling.
