Facebook breached PIPEDA, says Federal Court of Appeal

The Office of the Privacy Commissioner of Canada (OPC) investigated a complaint into the scraping of Facebook user data by the app “thisisyourdigitallife” (TYDL) and its subsequent selling of the data to Cambridge Analytica (CA) for psychographic modelling purposes between November 2013 and December 2015. The OPC made an application to the Federal Court of Canada (FCC) and argued that Facebook breached the Personal Information Protection and Electronic Documents Act (PIPEDA) because of its practice of sharing Facebook users’ personal information with third-party applications (apps) hosted on the Facebook platform.

The FCC dismissed the OPC’s application, holding that the OPC had not shown that Facebook failed to obtain meaningful consent from users for disclosure of their data, nor that Facebook failed to adequately safeguard user data. The OPC appealed. The Federal Court of Appeal (FCA) allowed the appeal, finding that the FCC erred in its analysis of both meaningful consent and safeguarding under PIPEDA. The FCA therefore concluded that Facebook breached PIPEDA’s requirement to obtain meaningful consent from users before disclosing their data and failed in its obligation to safeguard user data.

What happened?

Most people are familiar with the Cambridge Analytica scandal that Facebook was caught up in, but fewer know how it all started. It began when Facebook launched its Platform, which enabled third parties to build apps that could run on Facebook and be installed by users. An application programming interface, called the Graph API, allowed those third-party apps to receive user information. By 2013, 41 million apps were available on Facebook.

Facebook required third-party apps (apps) to agree to its Platform Policy and Terms of Service in order to access Platform. For example, one provision required apps to request only the user data necessary to operate the app, and to use friends’ data only in the context of the user’s experience on the app. Another required apps to have a privacy policy telling users what data the apps would use and how they would use or share that data.

Facebook admitted that it did not assess or verify the actual content of the apps’ privacy policies. In fact, it only verified that the hyperlink to an app’s privacy policy linked to a functioning web page.

In November 2013, Dr. Kogan, a Cambridge professor, launched the TYDL app on Platform. The app featured a personality quiz. Through Platform, Dr. Kogan was able to access the Facebook profile information of every user who installed TYDL, along with all of the information of each installing user’s Facebook friends.

Just 272 Canadian users installed TYDL, yet this enabled the disclosure of the data of over 600,000 Canadians. In December 2015, the media reported that user data obtained by TYDL had been sold to CA and a related entity, and that the data was used to develop psychographic models for targeting political messages at Facebook users in the lead-up to the 2016 American presidential election. Only at that point did Facebook remove TYDL from Platform. Even then, Facebook never notified affected users, and it did not ban Dr. Kogan or CA from Platform.

The OPC subsequently received a complaint raising concerns about Facebook’s compliance with PIPEDA and commenced an investigation.

What did the OPC find?

The OPC found that Facebook’s superficial and ineffective safeguards and consent mechanisms allowed the app, TYDL, to gain unauthorized access to the personal information of millions of Facebook users. Some of that information was subsequently used for political purposes.

Following its investigation, the OPC found the following:

  • Facebook failed to obtain valid and meaningful consent from installing users. Facebook relied on apps to obtain consent from users for its disclosures to those apps, but it could not show that TYDL actually obtained meaningful consent for its purposes (including political purposes), or that Facebook made reasonable efforts, in particular by reviewing privacy communications, to ensure that TYDL and apps in general were obtaining meaningful consent from users.
  • Facebook failed to obtain meaningful consent from friends of installing users. Facebook relied on overbroad and conflicting language in its privacy communications that was clearly insufficient to support meaningful consent. That language was presented to users, generally on registration, in relation to disclosures that could occur years later, to unknown apps for unknown purposes. Facebook further relied, unreasonably, on installing users to consent on behalf of each of their friends to the release of those friends’ information to an app, even though those friends had no knowledge of the disclosure.
  • Facebook had inadequate safeguards to protect user information. Facebook relied on contractual terms with apps to protect against unauthorized access to users’ information, but then put in place superficial, reactive, and thus ineffective monitoring to ensure compliance with those terms. Furthermore, Facebook was unable to provide evidence of enforcement actions taken in relation to privacy-related contraventions of those contractual requirements.
  • Facebook failed to be accountable for the user information under its control. Facebook did not take responsibility for giving real and meaningful effect to the privacy protection of its users. It abdicated its responsibility for the personal information under its control, effectively shifting that responsibility almost exclusively to users and apps. Facebook relied on overbroad consent language, and consent mechanisms that were not supported by meaningful implementation. Its purported safeguards with respect to privacy, and implementation of such safeguards, were superficial and did not adequately protect users’ personal information.

In a statement, the OPC said that Facebook’s refusal to act responsibly was deeply troubling given the vast amount of sensitive information people had entrusted to the company.

The OPC did not have the power to make an order declaring that Facebook violated Canadian privacy law or to impose an enforceable remedy, so it was forced to go to the FCC and ask for an enforceable order giving effect to its findings.

What did the FCC decide?

The FCC decided that the matter should be dismissed. In reaching that conclusion, the court reviewed Facebook’s data policy, terms of service, Platform policy, user controls, and the educational resources explaining privacy basics. It noted that Facebook had teams of employees dedicated to detecting, investigating and combating violations of Facebook’s policies, and that Facebook took about six million enforcement actions during the period in question, although Facebook did not provide the reasons for those actions.

The FCC also examined TYDL’s privacy policy, noting that it was unclear whether the policy was ever shown to users and that Facebook did not verify the contents of third-party policies.

At this point, the court stated that the purpose of PIPEDA was to balance two competing interests, and that it had to interpret PIPEDA in a flexible, common sense, and pragmatic manner. The question before the court was whether Facebook made reasonable efforts to ensure that users and users’ Facebook friends were advised of the purposes for which their information would be used by the apps.

The FCC held that:

  • The OPC failed to discharge its burden of establishing that Facebook breached PIPEDA by failing to obtain meaningful consent. The court ignored the OPC’s statistical evidence, insisted that there was no evidence before it, and concluded that the OPC had not shown a privacy violation.
  • Facebook’s safeguarding obligations ended once information was disclosed to the apps. There was insufficient evidence to conclude whether Facebook’s contractual agreements and enforcement policies constituted adequate safeguards (there was an “evidentiary vacuum”), so the OPC failed to discharge its burden of showing that it was inadequate for Facebook to rely on the good faith and honest execution of its contractual agreements with apps.
  • There was no need to address the issue of remedies that were sought by the OPC.

The FCC dismissed the application. The OPC appealed.

What did the FCA decide?

The FCA allowed the OPC’s appeal. That is, the court agreed with the OPC that the FCC made errors in its analysis when it sided with Facebook. The OPC argued that:

  • The FCC erred by setting the bar too low in its interpretation of meaningful consent, as it did not consider whether Facebook obtained meaningful consent in light of the fact that Facebook never even read TYDL’s privacy policy (which said nothing about political purposes).
  • The FCC erred by failing to distinguish between meaningful consent for installing users and meaningful consent for friends of installing users, despite the different consent processes and protections for these groups.
  • The FCC erred in determining meaningful consent by calling for subjective evidence of user experience, expert evidence, or evidence of what Facebook could have done differently, instead of applying an objective, user-focused reasonableness standard.
  • The FCC erred in failing to consider Facebook’s conduct before the personal information was disclosed (such as Facebook’s failure to review privacy policies of apps, even in the presence of privacy-related red flags). Also, the FCC should have treated this as prima facie evidence of Facebook’s failure to take appropriate steps to safeguard information and drawn further inferences from the evidence available, especially given the difficulties associated with showing that an organization failed to internally safeguard one’s personal information.
  • The FCC erred in finding that there was an “evidentiary vacuum” with respect to both the meaningful consent and safeguarding issues, as the record contained extensive and fulsome evidence of a breach of these obligations by Facebook.

The FCA found the following:

  • The FCC erred when it premised its conclusion exclusively or in large part on the absence of expert and subjective evidence, given that the inquiry was an objective one.
  • The FCC failed to inquire into the existence or adequacy of the consent given by friends of users who downloaded apps, separate from the installing users of those apps. As a result, the FCC did not ask itself the question required by PIPEDA: whether each user who had their data disclosed consented to that disclosure. These were overarching errors that permeated the analysis.
  • The FCC did not engage with the evidence that framed and informed the content of meaningful consent under clause 4.3 and section 6.1 of PIPEDA. Rather than turning to the implications of the evidence that was in fact before it with respect to the application of clause 4.3 and section 6.1, the court simply noted a paucity of material facts.
  • The FCC erred because there was considerable probative evidence before the court, including: the Terms of Service and Data Policy; the transcript of testimony by Facebook’s Chief Executive Officer, Mark Zuckerberg, that he imagined most people probably did not read or understand the entire Terms of Service or Data Policy; evidence that 46 percent of app developers had not read the Platform Policy or the Terms of Service since launching their apps; the fact that TYDL’s request for information went beyond what the app required to function, contrary to Facebook’s policies; and Facebook’s decision to allow TYDL to continue accessing installing users’ friends’ data for a year despite red flags regarding its non-compliance with Facebook’s policies.

Interestingly, the FCA commented that it was the responsibility of the FCC to define an objective, reasonable expectation of meaningful consent. It stated, “To decline to do so in the absence of subjective and expert evidence was an error.” Moreover, the FCA noted the curious double reasonableness requirement and stated, “If a reasonable individual were unable to understand how their information would be used or disclosed—as here—this ends the inquiry. An organization cannot exercise reasonable efforts while still seeking consent in a manner that is itself inherently unreasonable.”

Further, the FCA noted that the data policy offered only mundane examples of how apps could use user data and did not contemplate the kind of large-scale data scraping that occurred in this case. In particular, the FCA pointed out that the language in the policy was simply too broad to be effective.

The FCA also pointed out that the word “consent” had content, and in this case that content was legislatively prescribed: it included an understanding of the nature, purpose and consequences of the disclosure. The FCC had to ask whether a reasonable person would have understood that, in downloading a personality quiz, they were consenting to the risk that the app would scrape their data and the data of their friends, to be used in a manner contrary to Facebook’s own internal rules. The FCA stated, “Had the question been asked of the reasonable person, they could have made an informed decision.” The court also emphasized that other contextual evidentiary points supported this perspective of the reasonable person; for instance, the contractual context showed that these were consumer contracts of adhesion.

In terms of safeguarding, the FCA stated that the unauthorized disclosures in this situation were a direct result of Facebook’s policy and user design choices: Facebook invited millions of apps onto its platform and failed to adequately supervise them. The FCA found that the FCC “failed to engage with the relevant evidence on this point, and this was an error of law.”

Facebook did not review the apps’ privacy policies even though the apps were able to download users’ data and that of their friends. Facebook also did not act on TYDL’s request for unnecessary information, a red flag. The FCA stated, “Facebook’s failure to take action upon seeing red flags amounted to Facebook turning a blind eye to its obligation to adequately safeguard user data.” This was part of a larger pattern: Facebook never notified users about the scraping and selling of their data once it became aware of the practice, and it did not ban Dr. Kogan or CA from Platform.

The FCA also clarified that Facebook’s conduct after the disclosure to TYDL was irrelevant to the safeguarding principle itself, which dealt with an organization’s internal handling of data rather than its post-disclosure monitoring of data. Even so, Facebook’s post-disclosure actions contextually supported the finding that it did not take sufficient care to safeguard the data in its possession prior to disclosure.

The FCA also mentioned that Facebook was entitled to rely on the good faith performance of contracts, but only to a point. It was telling that Mark Zuckerberg admitted that it would be difficult to guarantee that there were no bad actors using its Platform. The FCA stated that it was incongruent to expect a bad actor to carry out a contract in good faith. Facebook therefore should have taken further measures to monitor third-party contractual compliance.

When it came to the balancing exercise under PIPEDA, the FCA highlighted that PIPEDA’s purpose, as set out in section 3, was to balance an individual’s right of privacy against an organization’s need to collect, use or disclose personal information. An organization had no inherent right to data, and its need had to be measured against the nature of the organization itself. There was, in other words, a critical difference between an individual’s right to privacy and a company’s need for data.

The FCA held that Facebook’s practices between 2013 and 2015 breached Principle 3, Principle 7, and section 6.1 of PIPEDA, and that a declaration should issue to that effect.

The FCA noted that the Federal Trade Commission in the United States fined Facebook $5 billion for its role in this scandal. It also observed, however, that time had passed and practices had evolved since the period in question, and that the events giving rise to this application took place a decade ago. The FCA stated, “The Court will not issue orders which would be of no force or effect.”

Therefore, the FCA allowed the OPC’s appeal with costs and declared that Facebook’s practices between 2013 and 2015 violated PIPEDA. The FCA indicated that the parties should settle the terms of a consent remedial order; failing that, they would have to make further submissions.

What can we take from this development?

As this case shows, organizations need to comply with PIPEDA’s consent and safeguarding provisions; it is not good enough to say that there are too many apps on a company’s platform and that it is too difficult to read those apps’ policies.

More specifically, in this case Facebook needed to have adequate policies of its own, and to review and monitor the policies of third-party apps to confirm compliance with Facebook’s policies. That is, it was important for Facebook (and the apps) to obtain meaningful consent from each user, both the users who installed the apps and those users’ friends. Also, safeguarding obligations are not discharged simply by disclosing information to apps under contractual terms; Facebook needed to adequately supervise the apps and ensure compliance with its policies.



