Significant online defamation damages in Canada — are online platforms immune?
Canadian courts have a reputation for awarding relatively modest damages in tort cases, especially when compared to our neighbours to the south. However, a recent BC Supreme Court case, Rook v. Halcrow, demonstrates that Canadian courts will award significant damages for online defamation. In that case, the defendant acted with malice in a protracted online campaign to defame a former lover on social media, and the court awarded damages in excess of $230,000. For businesses that host content online, this award raises the spectre of the liability that intermediaries face when their users post defamatory content on or via their platforms or services. Operators of online services should not assume that Canada protects intermediaries from tort claims (including defamation) just because other common law jurisdictions provide a shield against intermediary liability.
Potential intermediary liability for third-party content
The United States and the United Kingdom, for example, have legislated liability shields that protect intermediaries from claims brought by third parties who have been defamed on or via the intermediary’s platform or service. By contrast, Canadian courts have shown an increasing willingness to hold intermediaries responsible, whether in damages or by injunction, for the posts of their users.
In the United States, the Communications Decency Act (most notably section 230) insulates internet service providers from liability that could otherwise exist at common law, except in very limited circumstances. This, together with robust First Amendment protections and a strong free speech culture, has allowed online service providers (and, largely, courts) to take a very hands-off approach to content moderation, as providers face only a remote risk of liability for the defamatory or damaging posts of their users. It is worth noting that these broad protections are controversial and have become the target of critique from legislators and pundits alike in response to recent upswells of hate speech and connected real-world violence.
In the United Kingdom, the Defamation Act 2013 provides a defence to an operator of a website in a defamation action brought in respect of a third party’s post, if the operator shows that it was not the person who posted the statement. This defence, however, is defeated if the claimant can demonstrate that (a) it was not possible for the claimant to identify the poster, (b) the claimant gave the operator a notice of complaint, and (c) the operator failed to respond to the notice of complaint in accordance with the applicable regulations. The defence is also defeated if the claimant shows that the operator acted with malice in relation to the posting of the statement concerned.
If only Canada had such clear laws. Here, it is critical for operators of online platforms to understand that this issue remains largely unlegislated and left to the common law, which holds that a person will not be responsible, as a publisher, if the person’s sole participation in the publication of the defamatory material is merely their “innocent” involvement in the purely administrative or mechanical phases of publication. In practice, this defence is available only where (i) the service provider had no knowledge of the actual libel, (ii) there is no evidence that the service provider ought to have been aware of the alleged libel on its service, and (iii) the service provider was not negligent in failing to find out about the libel in question (Crookes v. Newton). In British Columbia, for example, it was held in Carter v. BC Federation of Foster Parents that a website operator may be an innocent disseminator where it has merely posted a link to another website without knowledge that a defamatory statement existed there.
As a result of the lack of a legislated liability shield and the increasing damage awards in this space, intermediaries must consider and implement clear policies on how to respond to takedown requests from users who claim to have been defamed, to ensure that they remain mere “innocent” disseminators and do not stray into culpability. This also involves careful consideration of how, and to what degree, the operator’s platform promotes, elevates or pushes content to its users. Unfortunately, intermediaries often do not have the information required to assess whether a given post is defamatory; nor do such companies generally want to be in the business of censoring content. Therefore, before being made aware of a court order or injunction regarding any defamatory posts, intermediaries are put in the difficult position of having to respond to potentially unsubstantiated takedown requests from users who claim to have been defamed on their platforms.
In Rook, the court granted an injunction restraining the defendant and other persons with knowledge of the order, wherever they are located in the world, from publishing any of the comments contained in the schedule attached to the judgment. It is worth noting that Canadian courts, since Google Inc. v. Equustek Solutions Inc., take the position that they have jurisdiction to grant worldwide injunctions when necessary to ensure the injunction’s effectiveness. In Equustek, the court acknowledged that the internet has no borders: its natural habitat is global and, in some instances, the objectives of an injunction can be attained only where it applies globally.
The concern for intermediaries with respect to injunctions lies in how the intermediary responds to notice of the injunction. It is well established in Canadian law that a non-party can be held in contempt for aiding and abetting a person in violating an injunction. While there is no positive duty for intermediaries to make themselves aware of injunctions that may affect the content on their platforms, they ought to take this risk into account when drafting their policies on defamatory posts and their threshold for removal.
Rook v. Halcrow
It is easy to underappreciate the incredible power that modern social media services like Twitter and Facebook provide. Not long ago, the average person’s biggest platform for expressing an opinion was to write to their local newspaper and hope that the message was deemed fit to print. That process took time and passed through several human filters before publication for the local community’s consumption; even so, opinions were published that were untrue and caused harm. Today the process is instantaneous and unfiltered, and the audience unlimited, magnifying the risk and potential harm of defamatory statements.
In Rook, the plaintiff, R, claimed damages for defamatory posts he alleged were made by the defendant, H, a former romantic partner. After an on-again, off-again relationship, R was alerted to defamatory statements appearing on various platforms, especially Instagram. A 53-page appendix to the judgment reveals the breathtaking scope and number of posts, which ranged in severity from accusing R of being heartless and uncaring to accusing him of sexual assault and of spreading sexually transmitted infections. H alleged that she did not publish the posts; however, she provided no evidence to support this assertion. The judge found clear and compelling evidence that H did, in fact, post the material, relying on expert evidence produced at trial, various text messages the parties exchanged about taking down the posts, the phraseology used in the posts, and the absence of evidence that anyone else had the motivation to make them (not to mention the intimate knowledge of R’s personal details used in the posts).
After briefly reviewing the elements of the tort of defamation, the judge concluded that the posts were defamatory in their literal meaning; accordingly, it was not necessary to resort to their inferential meanings chronicled in the appendix to the judgment. R’s ex-wife testified that she had read all of the postings, and from the number of comments and views on many of them, the judge concluded that the posts had been read by many and thus had been published. The judge then turned to the question of damages. After a review of the law on damages in defamation, the judge noted that the tort of defamation is designed to protect a person’s reputation, and damages therefore take into account injured feelings and anxiety. A court may award aggravated damages where there has been actual malice and the conduct in question is insulting, high-handed, spiteful, malicious or oppressive, exacerbating the plaintiff’s mental distress.
R had lived in the Vancouver area since 1992 and worked in mining and investment. As a director of public companies, his public reputation was found to be of enhanced importance. He testified to the anxiety the posts caused him, and how this was aggravated by the references to his ex-wife and daughter. The judge found that H had mounted a campaign against R that was relentless, extensive and motivated by malice, and awarded CAD$175,000 in general damages, CAD$25,000 in aggravated damages, and US$29,870 in compensation for the costs R incurred in engaging reputation consultants to remove the postings, plus costs (together making up the award in excess of $230,000 noted above). We note that arriving at this decision involved a review of other cases where significant damages for internet defamation were awarded:
- $115,000 in general, aggravated and punitive damages (Hee Creations Group Ltd. v. Chow);
- $300,000 in general and aggravated damages plus $100,000 in punitive damages (Magno v. Balita); and
- $115,000 and $106,000 in damages to the two plaintiffs (British Columbia Recreation and Parks Association v. Zakharia).
In each of these cases, as in Rook, a finding of malice affected the amount of damages awarded.
Takeaway
Rook v. Halcrow is a useful case for examining intermediary liability because the damages award is significant, driven by the malice behind the posts, the ease and repetitiveness of the communications, and the harm caused to the plaintiff. While spurned ex-lovers should certainly take note and consider their online behaviour carefully, Canada’s lack of legislation shielding intermediaries from liability for that same content also requires some thought on the part of those intermediaries. As online platforms continue to compete for the attention of users in an increasingly crowded marketplace of content, they must not assume that they can always hide behind intermediary immunity, particularly in Canada, where there is no such thing. Instead, they must develop clear policies and processes that best insulate them from increasing liability by ensuring that they remain mere “innocent” disseminators of third-party content.
By Ryan J. Black and Tyson Gratton, DLA Piper