CyberLex

Insights on cybersecurity, privacy and data protection law

If you don’t got it, don’t flaunt it: FTC Issues Warnings to Companies Claiming APEC Privacy Certification

Posted in Privacy, Regulatory Compliance, Standards

The United States Federal Trade Commission (“FTC”) has issued warning letters to 28 companies claiming to be certified participants in the Asia-Pacific Economic Cooperation (“APEC”) Cross-Border Privacy Rules (“CBPR”) system. This is an important reminder for companies, including Canadian companies, that regulators take a keen interest in the use of international certifications.

Background

The APEC CBPR system provides a common standard for cross-border flows of personal information based on the APEC Privacy Framework. Companies that wish to self-certify as CBPR compliant must implement privacy policies and practices compliant with the CBPR program requirements and then obtain certification of this compliance from an APEC-recognized Accountability Agent.  Once an organization has been certified for participation in the CBPR system, these privacy policies and practices will become binding as to that participant and will be enforceable by an appropriate authority, such as a regulator, to ensure compliance. Participating countries include the United States, Japan and Mexico. Canada joined in 2015, meaning CBPR-certified Canadian companies can freely transmit personal information to other self-certified companies in these jurisdictions (and vice-versa). It is anticipated that the system will expand to eventually encompass the remaining 16 APEC member economies.

Warning Letters

The FTC warning letters target companies that claim to be APEC CBPR certified but have not presented evidence of taking the correct steps to obtain the certification. This leaves them open to an enforcement action based on the FTC’s authority over unfair or deceptive acts or practices:

We are writing because your website indicates that you represent that you participate in the [APEC CBPR]. However, our records indicate that your organization has not taken the requisite steps to be able to claim participation in the APEC CBPR system, such as undergoing a review by an APEC-recognized Accountability Agent. A company that falsely claims APEC CBPR system participation may be subject to an enforcement action based on the FTC’s deception authority under Section 5 of the Federal Trade Commission Act (“FTC Act”). Indeed, we have brought many cases against companies that we allege, among other things, have falsely claimed to participate in international privacy programs such as the APEC CBPR system, see generally https://www.ftc.gov/tips-advice/business-center/legalresources?type=case&field_consumer_protection_topics_tid=251.

We ask, therefore, that your organization (1) immediately remove from its website, privacy policy statement, and any other public documents all representations that could be construed as claiming APEC CBPR participation; and (2) contact us within 45 days at apec.cbpr@ftc.gov to inform us that you have done so.

The FTC did not release the names of the organizations to which it sent letters. This gives the organizations a chance to demonstrate compliance and revise their websites, and thereby avoid the reputational damage associated with being publicly cited by the regulator. However, the fact that the FTC publicized the issuance of the warning letters likely indicates that it views unsubstantiated certifications as an issue that needs to be addressed.

Lessons for Canadian Business

While the Canadian privacy regime generally benefits from having broad private-sector privacy legislation that permits transfers of personal information under specified conditions, companies may want to (or be required to) obtain certification in certain circumstances. If they do, they should keep in mind the following points:

1. Regulators all over the world, including in Canada, are more closely scrutinizing self-certification. The FTC letter is part of a broader crackdown on APEC CBPR certifications. In May 2016, the FTC reached a settlement with Very Incognito Technologies, Inc., doing business as Vipvape, based on allegations that Vipvape represented on its website that it was APEC CBPR certified when, according to the complaint, it was not.

The FTC’s action on the APEC CBPR self-certification program can be seen as part of a broader regulatory concern with deceptive or misleading attestations of compliance. False (or out-of-date) certifications were also an issue with the now-defunct US-EU Safe Harbor certification program, and in August of 2015, the FTC announced settlements with thirteen companies it charged had misled consumers by claiming they were certified members of the Safe Harbor framework.

Companies that are part of self-certification programs should be watchful of regulatory actions and be prepared to respond to requests for information from their regulators. It would be prudent to expect regulators in other jurisdictions, including Canada, the EU and Australia, to investigate self-certification systems more aggressively in future. The best way for a company to avoid trouble is straightforward: do not claim a certification your organization does not have (and ensure that any such certifications validly obtained have not expired).

2. Have your compliance monitored by a reputable organization. APEC CBPR sets out a certification process and ongoing requirements for becoming an “Accountability Agent”, which can certify that a company is meeting the standards required by APEC for cross-border flows of personal information. To become an Accountability Agent, an organization must first apply to the relevant authority where it intends to operate (such as the Office of the Privacy Commissioner in Canada or the FTC and Department of Commerce in the United States). Once the organization has obtained the approval of the relevant authority, its application is forwarded to the APEC Joint Oversight Panel for approval. The process is detailed, and it takes time.

Companies should be wary of organizations that claim to offer auditing and certification on the cheap. Consider asking questions about a would-be Accountability Agent’s experience with regulators in different jurisdictions, its technical capabilities, and whether any organization it has certified has ever experienced a privacy breach or regulatory investigation. The bona fides of an Accountability Agent may also be confirmed online.

3. Retaining records of audits is important, but… Companies being assessed for compliance with an international privacy framework will be asked by regulators to produce documentation of certification. No company wants to be in a situation where a regulator asks questions about its international certifications and the supporting documentation is unavailable, incomplete or out of date. A similar situation can arise if a company has entered into contracts in which it has represented that it holds valid certifications – the counterparty may ask for proof (either at the time of execution or during the life of the contract). As a result, companies will want to ensure that documentation supporting their certifications is updated regularly, stored securely, and can be produced in response to a regulatory inquiry.

However, retaining information about compliance with international privacy standards also comes with risk. Regulators are not the only ones interested in this information.  A company’s privacy audit results can be valuable evidence to opposing counsel in future litigation.  Companies should consider engaging their own counsel prior to undertaking an auditing or compliance process to ensure they are taking steps to protect privilege (if appropriate) and understand potential litigation risk.

Conclusion

The recent FTC warning letters are an important reminder that regulators are interested in privacy self-certification programs.  Prudent organizations should ensure their certifications are valid and up to date, and that they are prepared to respond to regulators if necessary.

*Arie van Wijngaarden is a JD/MBA student in McCarthy Tetrault’s Toronto office.

German Regulator Finds Banks’ Data Rules “impede non-bank competitors”

Posted in Big Data, European Union, Financial, FinTech
Kirsten Thompson

“Open Banking” is an emerging term in financial services / financial technology that refers, among other things, to the use of open application programming interfaces (“APIs“) to enable third-party developers to build applications and services around a financial institution. This requires a financial institution to throw open the doors to its customer data and allow it to be used by developers and other third party providers. Think of it as an app store for banks, where the apps allow consumers to compare rates, manage their accounts, obtain credit and make payments – all without having to actually engage a bank.

In Europe, this is set to become the norm in early 2018, thanks to the revised Payment Services Directive (“PSD2“) which was passed in January. PSD2 is designed to create a more level playing field for third party payment processors by making banks in Europe offer APIs that provide access to account information to third parties.

Some banks are embracing this, and see it as an opportunity to drive value in innovative new ways. Other banks are not as keen, and are taking steps to cut out the interlopers to preserve existing value and protect the customer relationship.

Long before there was a concept of “open banking”, similar products were available – products that rely not on the openness of banking but on the willingness of an account holder to share his or her login information. Users provide the user IDs and passwords for the financial accounts they want to consolidate, so that the aggregation service can access those accounts to gather their financial information (a process known as “screen scraping”). A single third-party web portal then displays the information, dashboard-style.
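Conceptually, screen scraping amounts to logging in with the customer’s own credentials and pattern-matching the bank’s human-facing web page rather than calling a structured API. The following is a minimal, entirely hypothetical sketch of that idea (the HTML and field names are invented, and no real aggregator or bank is depicted):

```python
import re

# Hypothetical HTML a bank's account page might render after the
# aggregator signs in with the customer's own user ID and password.
SAMPLE_ACCOUNT_PAGE = """
<html><body>
  <div class="account">Chequing ****1234</div>
  <div class="balance">$1,523.47</div>
</body></html>
"""

def scrape_balance(html: str) -> float:
    """Extract a balance by pattern-matching the rendered page.

    This is the essence of "screen scraping": the service parses HTML
    meant for humans instead of consuming a structured API, so any
    change to the page layout can silently break the integration.
    """
    match = re.search(r'class="balance">\$([\d,]+\.\d{2})', html)
    if match is None:
        raise ValueError("balance not found; page layout may have changed")
    return float(match.group(1).replace(",", ""))

print(scrape_balance(SAMPLE_ACCOUNT_PAGE))  # 1523.47
```

By contrast, an open-banking API would return structured data under a scoped, bank-issued token, so the customer never has to hand over a password – which is precisely the risk the credential-sharing model creates.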

Concern in Canada and the US

In March of 2011, the Financial Consumer Agency of Canada (“FCAC”) issued a statement, warning Canadians to be aware of the possible risks of disclosing their online banking and credit card information to financial aggregation services. Aside from the obvious data security and privacy risks, the FCAC cautioned that using such a service could also violate the terms and conditions of the account:

Consumers should be aware that if they disclose their online banking information to any other party, including financial aggregators, they may risk losing their protection against unauthorized transactions. Some financial institutions’ user agreements clearly state that users will be responsible for unauthorized transactions if they provide other parties, including financial aggregators, with their passwords and account information.

The FCAC reminded consumers it was their responsibility to manage their online banking and credit card credentials in accordance with the terms of their user agreements, as well as to review their user agreements and to understand their responsibilities thereunder.

In 2015, media reported that a number of US banks had cut off data to these financial aggregators, citing concern that the rising use of such sites will overload bank servers, on top of worries that customer data could potentially be vulnerable to hackers. The aggregators charged that the banks, facing increasing competition from these companies, were becoming too protective of their customer information.

Germany Finds Banks’ Data Rules Violate Competition Law

The German competition regulator has now weighed in, finding that rules set by the German Banking Industry Committee violate both German and European competition law by imposing “special conditions for online banking” that mean customers cannot use their PINs (personal identification numbers) and TANs (transaction authentication numbers) in non-bank payment systems.

This, said the German regulator, has “significantly impeded” the use of non-bank providers for online purchases, preventing people from using lower-priced alternatives.

The German Banking Industry Committee had cited security concerns as the basis of the rules but the German competition regulator (the Bundeskartellamt) dismissed this, saying that “the rules currently used cannot be considered as a necessary part of a consistent security concept of the banks and they impede non-bank competitors”.

Andreas Mundt, president of the Bundeskartellamt, said:

The online banking conditions of the German Banking Industry Committee hinder the offer of new and innovative services in the growing market for payment services in the e-commerce sector. In essence, it is about whether non-bank payment services can also use PINs and TANs. We have taken careful consideration of the justified interest of the banking industry that security in online banking has to be safeguarded. However, the rules currently used cannot be considered as a necessary part of a consistent security concept of the banks and they impede non-bank competitors.

The Bundeskartellamt has only declared certain specified clauses of the banks’ terms and conditions illegal, not the entire agreement. It also suspended the enforcement of its decision, meaning the parties are not under tight deadlines to change their course of action, although they must make the necessary changes. The Bundeskartellamt also noted that rules governing the activity of non-bank payment solution providers are currently undergoing a European legislative process.

EU-US Privacy Shield Adopted: Now What?

Posted in European Union, Legislation, Privacy
Keith Rose

On July 12, 2016, the European Commission formally issued its adequacy decision endorsing the EU-US Privacy Shield, following the approval of the deal by the Article 31 Committee on July 8.  Although the European adequacy decision has immediate effect, U.S. organizations will not be able to take advantage of the Privacy Shield until the U.S. Department of Commerce begins accepting self-certifications, on August 1.

Self-Certification

The Department of Commerce has issued guidance to companies wishing to self-certify under the Privacy Shield.  Only U.S. organizations subject to the jurisdiction of either the Federal Trade Commission or the Department of Transportation will be eligible for self-certification.  This will exclude some organizations, such as banks and telecommunications companies, which are outside the jurisdictions of those agencies.

Eligible organizations that wish to self-certify should carefully review the guidance as well as the seven framework principles and the sixteen supporting principles (the “Principles”) that they must commit to adhere to.  Although participation in the program is voluntary, once made, the commitment to adhere to the Principles will be enforceable under U.S. law.

Many of the Principles will be familiar to U.S. organizations that have previously participated in the former Safe Harbour regime, although they have now been elaborated in more detail, creating new compliance obligations.  There are some significant practical differences in the new model, including an obligation for organizations to provide access, at no cost to the individual, to an independent recourse mechanism, and stricter limitations on onward transfers to third parties (including service providers).

Organizations should be cautious about any representations that suggest compliance with Privacy Shield if the organization has not formally self-certified.  The FTC has recently issued a number of warning letters to organizations it alleges are claiming compliance with the APEC Cross-Border Privacy Rules system without actually meeting the certification requirements.  Moreover, the U.S. government has formally stated in a letter to the European Commission that it intends to actively police false claims of participation in the Privacy Shield program.

Legal Challenges Likely

Legal challenges to the Privacy Shield framework are probably inevitable.  For example, Max Schrems, the Austrian whose successful challenge invalidated the previous Safe Harbour regime (see our previous articles, here, here, and here) apparently intends to challenge the Privacy Shield as well.

The Article 29 Working Party had expressed some skepticism of a previous draft of the Privacy Shield.  The deal was then strengthened at the negotiating table to address concerns relating to bulk data collection, the independence of the Privacy Shield Ombudsperson mechanism for review of complaints about state access to personal information, and data retention.

Even after these enhancements, it is perhaps unclear whether the proposed Ombuds mechanism would qualify as a means of “redress”, as that concept has been described by the CJEU.  The terms of reference provide only that the Ombudsperson will “respond” to the complaint in one of two ways: either to confirm that the relevant safeguards provided by U.S. law were complied with or, if that is not the case, that the non-compliance has been remedied.  The Privacy Shield Ombudsperson will expressly not be permitted to report on any remedial action taken.  Nor will the mechanism involve any possibility of access to, rectification of, or erasure of, any personal data in the hands of any state actors.  As the Commission noted in the adequacy decision, these were explicit requirements set out by the CJEU in the Schrems decision.

In response, the new adequacy decision simply states that “The Commission’s assessment has confirmed that such legal remedies are provided for in the United States, including through the introduction of the Ombudsperson mechanism.”  [See para. 124.]

It remains to be seen whether the CJEU agrees with this assessment.  Until such a decision has been rendered, the Privacy Shield mechanism may offer less stability than most organizations would prefer.  Moreover, the mechanism will be subject to annual reviews and the obligations it imposes may be subject to further elaboration over time.

Alternatives to Privacy Shield

U.S. organizations which do not wish to, or are not eligible to, participate in the Privacy Shield self-certification program can instead continue to rely on other mechanisms recognized by European law, including Standard Contractual Clauses (although these are themselves currently subject to a challenge and reference to the CJEU) or Binding Corporate Rules.

GDPR on the Horizon

All of this must also be assessed in light of the new General Data Protection Regulation (GDPR), set to come into force in the EU in 2018.  The GDPR will impose significant new obligations on data processors (including some data processors located outside of the EU) including record keeping, data security, and breach notification obligations.  Non-European data processors who offer goods and services to individuals in the EU, or who monitor the behavior of individuals in the EU, may be directly liable for fines of up to €20 million or 4% of annual global revenues, whichever is greater.
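Because the GDPR’s top-tier ceiling is the greater of a fixed amount and a revenue percentage, exposure scales with an organization’s size. A minimal sketch of that calculation (the revenue figures in the examples are hypothetical):

```python
def gdpr_fine_ceiling(annual_global_revenue_eur: float) -> float:
    """Upper bound of a GDPR administrative fine for the most serious
    infringements: EUR 20 million or 4% of annual global revenue,
    whichever is greater (GDPR art. 83(5))."""
    return max(20_000_000.0, 0.04 * annual_global_revenue_eur)

# Hypothetical examples:
print(gdpr_fine_ceiling(100_000_000))    # smaller firm: fixed EUR 20M cap governs
print(gdpr_fine_ceiling(2_000_000_000))  # large firm: 4% of revenue = EUR 80M
```

Note that a lower tier (up to €10 million or 2%) applies to certain other infringements, so the ceiling above represents the worst case rather than every violation.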

Organizations will have to consider how they will respond to the new GDPR obligations whether or not they self-certify under the Privacy Shield.  Furthermore, the GDPR also tightens the rules by which the “adequacy” of foreign laws respecting the protection of personal information must be assessed.  This raises the spectre of further challenges to (or evolutions of) the Privacy Shield itself in the future.

Implications for Canadian Organizations

Canada’s privacy laws were endorsed as adequate in 2001 in a separate decision of the European Commission.  This decision was not directly affected by the Schrems decision and it remains in effect.

However, there has been some speculation that the Privacy Shield has effectively raised the bar and that Canada’s laws may be subject to new scrutiny.  The Canadian adequacy decision is scheduled to be reviewed as part of a larger review, which is not due until 2020, but a review could be triggered at any time by a direct challenge.

To date, there have been no suggestions of any particular changes to Canadian privacy legislation that might be considered to strengthen the case for a renewed adequacy decision.

However, Canadian organizations which store or process personal information about EU citizens may wish to consider how their practices might be assessed against the Principles articulated in the Privacy Shield agreement.

In any event, they will have to consider how the GDPR may apply to them and what changes that may require, particularly in light of the significant penalties that can be assessed under the new regulation.

As a result, Canadian organizations that deal with European data will need to pay close attention to the changing global compliance landscape and should expect that they will face new compliance challenges over the next 18-24 months.

Federal Privacy Commissioner Provides Submission on New Data Breach Notification and Reporting Regulations

Posted in Data Breach, Legislation, Privacy, Regulatory Compliance
Kirsten Thompson

The Office of the Privacy Commissioner of Canada (“OPC“) has provided its views on the data breach reporting and notification requirements that are soon to be prescribed by regulation under the Personal Information Protection and Electronic Documents Act, SC 2000, c 5 (“PIPEDA“).

On June 18, 2015, the Digital Privacy Act (also known as Bill S-4) received Royal Assent in Canada’s Parliament. Among other important changes, the Digital Privacy Act amended PIPEDA to require mandatory breach reporting to the OPC and notification of affected individuals, and introduced a record-keeping requirement (and fines for organizations that fail to meet either of these new requirements).

These new data breach requirements in PIPEDA will come into force once the Government passes regulations, and to that end, the Government has circulated a Discussion Paper and solicited comments.

The OPC has provided its Submission, and, as the OPC is the body charged with administering and ultimately enforcing the resulting regulations, its views are significant (although they are not determinative of the final form of the regulations).

When Organizations Will Need to Report

A challenge organizations face when dealing with a breach affecting personal information is whether to report the breach to the OPC. Reporting is currently voluntary, but the dilemma will not go away when it becomes mandatory – rather, the question will simply become how to determine whether the trigger (“real risk of significant harm”) has been met.

The OPC is of the view that the current set of factors enumerated in subsection 10.1(8) of PIPEDA is sufficient and that any further guidance on conducting a risk assessment could be provided by the OPC in due course. [1]

The Discussion Paper had also asked if encryption should provide a kind of “get out of jail free” card insofar as encrypted information that is lost or accessed would be presumed to present no or a low “real risk of significant harm”. The OPC was against equating encryption with a diminished risk of significant harm. This raises the question of why the OPC has regarded the use of encryption as an adequate security safeguard to be considered under Principle 4.7.3.

What the Report Should Look Like

The OPC is of the view that any new mandatory breach reports should be in written form (digital or paper) and require the following information:

  • Name of responsible organization;
  • Contact information of an individual who can answer questions on behalf of the organization;
  • Description of the known circumstances of the breach, including:
    • Estimated number of individuals affected by the breach;
    • Description of the personal information involved in the breach;
    • Date of the breach, if known, or alternatively estimated date or date range within which the breach is believed to have occurred;
    • A list of other organizations involved in the breach, including affiliates or third party processors;
  • An assessment of the risk of harm to individuals resulting from the breach;
  • A description of any steps planned or already taken to notify affected individuals, including:
    • date of notification or timing of planned notification;
    • whether notification has been or will be undertaken directly or indirectly and, when applicable, rationale for indirect notification;
    • a copy of the notification text or script;
  • A list or description of third party organizations that were notified of the breach, pursuant to s. 10.2(1) of PIPEDA, as well as Privacy Enforcement Authorities from other jurisdictions;
  • A description of mitigation measures that have been or will be undertaken to contain the breach and reduce or control the risk of harm to affected individuals;
  • A description of the organization’s relevant security safeguards, taking into consideration any improvements made or committed to, to protect against the risk of a similar breach reoccurring in the future.
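For compliance teams building internal breach-response tooling, the elements above map naturally onto a structured record. A minimal sketch follows; the field names are our own illustration, not anything prescribed by the OPC or the regulations:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BreachReport:
    """Illustrative record mirroring the OPC's proposed report elements.

    Field names are hypothetical; the actual regulations may prescribe
    different content and form.
    """
    organization_name: str
    contact: str                        # individual who can answer questions
    circumstances: str                  # known circumstances of the breach
    estimated_individuals_affected: int
    personal_info_description: str
    breach_date: Optional[str] = None   # known date, or estimated date range
    other_organizations: list = field(default_factory=list)  # affiliates, processors
    risk_assessment: str = ""           # assessed risk of harm to individuals
    notification_steps: str = ""        # how/when individuals were or will be notified
    third_parties_notified: list = field(default_factory=list)
    mitigation_measures: str = ""
    relevant_safeguards: str = ""       # description of security safeguards

# Hypothetical usage:
report = BreachReport(
    organization_name="Example Co.",
    contact="privacy@example.invalid",
    circumstances="Lost unencrypted laptop",
    estimated_individuals_affected=250,
    personal_info_description="Names and email addresses",
)
print(report.estimated_individuals_affected)  # 250
```

Capturing reports in a consistent structure also makes it easier to produce documentation on short notice in response to a regulatory inquiry, a point the OPC’s record-keeping proposals underscore.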

The information is not substantially different from that already required by Alberta, which already has a mandatory breach reporting regime, although the OPC’s proposed approach would require more detail. [2] Also, the proposal that organizations provide a “description of the organization’s relevant safeguards” is not found in the Alberta requirements and may give rise to privilege and litigation-risk issues. As well, organizations are likely to balk at disclosing this information because it potentially telegraphs an organization’s security strategy and vulnerabilities to bad actors. This is particularly true since this information is at risk of public disclosure via the Access to Information regime.

The OPC believes organizations should have an ongoing obligation to provide updates “as soon as feasible”, a requirement also not found in the Alberta requirements.

What Notification to Individuals and Third Parties Should Look Like

The OPC essentially adopts its own document, “Key Steps for Organizations in Responding to Privacy Breaches” and proposes that the regulations require the following elements be included in notifications to affected persons:

  • Description of the circumstances of the breach incident;
  • Date of the breach, if known, or alternatively estimated date or date range within which the breach is believed to have occurred;
  • Description of the personal information involved in the breach;
  • Description of the steps taken by the organization to control or reduce the harm;
  • Steps the individual can take to reduce the harm or further mitigate the risk of harm;
  • Contact information of an individual who can answer questions about the breach on behalf of the organization;
  • Information about right of recourse and complaint process under PIPEDA.

The OPC is of the view that direct notification should be required (e.g. direct communication with each affected individual) and that indirect notification (e.g. via newspaper ads, websites, etc.) should be allowed only with permission and only in certain circumstances. Organizations will be pleased to know that the OPC accepts that “prohibitive costs to the organization and [unreasonable interference] with its operations” constitute one of the circumstances in which the OPC would accept indirect notification. However, the OPC suggests that organizations must first “[demonstrate] that they may validly use indirect notifications”. It is unclear whether, to be “valid”, an organization will have to demonstrate, for instance, prohibitive costs or other criteria, or whether “validity” will be evaluated on the basis of the likelihood of the message effectively reaching the target demographic.

On this latter point, the OPC is of the view that indirect notification would need to be to the appropriate geographic market, be relevant to the product or service and the type of customer interaction, be for an appropriate length of time and in plain English, and, where appropriate, allow organizations to use third parties to conduct such notification.

With respect to the notification of third parties (potentially vendors, industry organizations, other organizations in that sector), the OPC has sensibly supported a permissive approach to notifying third parties, instead of a mandatory one.

What an Organization’s Record-Keeping Obligations Would Be

The OPC appears to regard the new record-keeping requirements (which require organizations to keep a record of all breaches of security safeguards) as a mechanism for general oversight.    

The OPC is of the view that such records should include “sufficient information to demonstrate compliance with PIPEDA’s new notification requirements and should contain sufficient information to enable the Office to effectively perform its oversight functions.” More significantly, “[t]he content of these records should also assist the OPC in understanding the process through which organizations determine whether or not to notify affected individuals.”

Relying on this, the OPC has proposed a detailed set of data elements for inclusion in records of breaches.

The OPC would like to see all such incidents documented and recorded on an individual, non-aggregated basis. For organizations such as financial institutions or large retailers, which face upwards of 200 threat incidents a week, this could be onerous.

With respect to retention, the OPC suggests that records be maintained for a period of five years from the date of creation of the record, after which records could be destroyed.

An Organization’s Obligations to Non-Canadians

The OPC notes that organizations subject to PIPEDA may collect personal information pertaining to individuals who reside outside of Canada (for instance, residents of the U.S.). As such, the OPC is of the view that the data breach notification and reporting requirements should consider the extent to which organizations may have to notify individuals outside of Canada who may be affected by a data breach undergone by an organization subject to PIPEDA. At a minimum, the OPC suggests that the regulations should require organizations to consider the breach notification laws of those jurisdictions, as well as any local notification requirements.
Future OPC Guidance

[1] Subsection 10.1(8) reads: “The factors that are relevant to determining whether a breach of security safeguards creates a real risk of significant harm to the individual include (a) the sensitivity of the personal information involved in the breach; (b) the probability that the personal information has been, is being or will be misused; and (c) any other prescribed factor.”

[2] Section 19 of the Personal Information Protection Act Regulation, Alta Reg 366/2003


Federal Court of Appeal Comments on New Tort of “Publicity Given To Private Life”, Overturns Certification Order

Posted in Class Actions, Data Breach
Emily MacKinnon

The Federal Court of Appeal has provided some guidance on the recently-recognized tort of intrusion upon seclusion and the as-yet-unrecognized tort of publicity given to private life.

In a class action decision largely reversing a Federal Court certification order, 2016 FCA 191, the Court of Appeal suggested that recognition of the tort of publicity given to private life may be just around the corner and provided some insight on what the test for such a tort might be. Notably, the Court held that the tort requires “publicity” of a broad scope—not merely to a small group. The Court also suggested that publication to persons with confidentiality obligations is not tortious.

The Court also reiterated that another privacy tort already established in Canadian law, intrusion upon seclusion, requires bad faith or reckless conduct, and that an isolated administrative error will not suffice. Companies can take comfort from the requirement for intentional conduct, but should remain cautious: while an “isolated” administrative error will not suffice, an adverse inference could potentially be drawn from a repeated error.

The facts: revealing return addresses

The class action was prompted by packages mailed to medical marijuana program registrants between November 12 and 15, 2013. Those packages were marked with a return address of “Marihuana Medical Access Program Health Canada”. Apart from these packages—called an “administrative error” by the Deputy Minister of Health Canada—mail to program registrants was marked with only “Health Canada” as the return address.

A class action launched by two recipients of the packages was certified on July 27, 2015, by Phelan J. of the Federal Court. The class action raised the recently-recognized tort of intrusion upon seclusion and the as-yet-unrecognized tort of publicity given to private life, among other causes of action.

In a unanimous decision authored by De Montigny J.A. released June 24, 2016, the Federal Court of Appeal largely overturned the certification order, leaving the plaintiffs with a class action certified only in negligence and breach of confidence. In holding that the plaintiffs failed to plead material facts to support the other causes of action, the Court of Appeal cast light on the elements of the nascent tort of publicity given to private life, and reaffirmed the intentionality requirement for intrusion upon seclusion.

Publicity given to private life: a broad scale required

The Court of Appeal’s decision not only imports the elements of the U.S. tort of publicity given to private life, but also relies on the U.S. interpretation of one of those elements.

Phelan J. initially certified the action for publicity given to private life—a tort as-yet-unrecognized in Canada, but with roots in the United States. The Court of Appeal agreed that the tort could exist, relying on the Ontario Court of Appeal’s reasoning in Jones v. Tsige, 2012 ONCA 32 (recognizing the tort of intrusion on seclusion, discussed elsewhere on this blog). But De Montigny J.A. overturned Phelan J.’s certification of this cause of action, holding that the plaintiffs had failed to plead material facts that could support it.

De Montigny J.A.’s analysis is striking because it not only relies on the elements of the American tort, but also on the kinds of facts that could meet those elements. The U.S. tort requires “publicity” of a private matter:

One who gives publicity to a matter concerning the private life of another is subject to liability to the other for invasion of his privacy, if the matter publicized is of a kind that

  • (a) would be highly offensive to a reasonable person, and
  • (b) is not of legitimate concern to the public.

The plaintiffs in this case pleaded publicity to Canada Post employees, friends and family of the program registrant, and persons in erroneous receipt of the packages. But De Montigny J.A. held, relying on U.S. law, that the “publicity” must be on a broader scale, and that the publicity pleaded in this case was insufficient. De Montigny J.A. also noted that Canada Post employees have obligations of confidentiality, suggesting—without providing authority for the suggestion—that disclosure to persons with confidentiality obligations is not tortious.

The Court’s reasons suggest that the scope of this nascent tort will be restricted. As it was following Tsige, however, it remains unclear who could be liable as a publisher of private information, and whether this tort could apply to mass privacy breaches.

Nevertheless, companies should be alert to prevent any unauthorized publication of private matters, as the Court’s decision suggests that recognition of the tort is merely waiting for the right set of facts.

Intrusion upon seclusion: not merely an isolated administrative error

De Montigny J.A.’s decision reiterates that the still-new tort of intrusion upon seclusion requires reckless or bad faith conduct: an isolated administrative error will not suffice.

Relying again on Tsige, De Montigny J.A. repeated that intrusion upon seclusion requires an allegation of bad faith or recklessness. He held that the plaintiffs had pleaded, at most, an “isolated administrative error”. Such an error, he held, is a far cry from the intentional and repeated conduct in Tsige and could not possibly support a cause of action for intrusion upon seclusion.

The Court’s emphasis on the requirement for recklessness or bad faith will be reassuring to those concerned about the potential scope of this new tort. But caution is still warranted. The Court took note of the difference in duration between the mis-labelled packages—four days—and the conduct in Tsige—four years—leaving open the possibility that a repeated error, even an administrative one, might be viewed in a different light.

 

Shareholder Derivative Lawsuit Against Target’s Directors and Officers Dismissed

Posted in Data Breach, Retailing
Diego Beltran

It has been an open question as to whether derivative claims brought by shareholders against officers and directors of a breached corporation would gain a foothold in the litigation environment. With the recent dismissal of such a claim in the Target case, it appears that these types of actions still face significant hurdles. 

The ruling issued on July 7, 2016 by U.S. District Judge Paul A. Magnuson in St. Paul, Minnesota granted the motion brought forward by the Special Litigation Committee (“SLC”) of the Board of Directors of Target Corporation and the other defendants to dismiss the consolidated derivative claims that had been filed against Target’s directors and officers. The ruling noted that the plaintiffs did not oppose the motion to dismiss, except to retain the right to seek legal fees and expenses.

Background

Target was sued in early 2014 by several shareholders following a massive 2013 data breach. Four of those claims were ultimately consolidated into the claim that was dismissed while another claim was stayed pending this result. The plaintiffs claimed that Target’s directors and officers failed to properly provide for and oversee an information security program and to give customers prompt and accurate information in disclosing the breach.

Target’s board established the SLC in June of 2014 and eventually expanded the SLC’s mandate to include all derivative lawsuits. Minnesota courts defer to a corporation’s SLC decision to dismiss a derivative action if the SLC demonstrates that it (i) possessed a disinterested independence and (ii) conducted a good faith investigation into the derivative allegations.

The SLC produced its report in March of 2016. The report described the SLC members (both disinterested and independent parties, neither of whom had ever served on Target’s Board of Directors, been employed by Target, or otherwise represented the company), its investigative methodology, and the factors it considered in making its determinations. The SLC concluded that “it would not be in Target’s best interests to pursue claims against the officers or directors identified in the Demand and derivative complaints, including those named in this action.”

This is not the first data breach derivative case to have been dismissed. In 2014, a New Jersey federal judge dismissed a derivative action against the directors and officers of Wyndham Worldwide Corp. The claims made against Wyndham in that case were similar to those made against Target, and, like Target’s board, the Wyndham board rejected the shareholder demands. The court in that case concluded that the plaintiff had failed to show that the Wyndham board’s refusal to investigate and remedy the hotel chain’s security protocols was a sign of bad faith.

Conclusion

While this case was dismissed, class actions by injured consumers, financial institutions and payment card networks are likely to follow a data breach. Boards need to remain vigilant about cybersecurity risk management, establishing processes and procedures to bolster the security of their systems and to respond to an attack. Insurance continues to be a topic of discussion, especially given the increasing costs of a breach. Target has reportedly incurred $291 million in cumulative data breach related expenses, partially offset by expected insurance recoveries of $90 million, for net cumulative expenses of $201 million.

Deadline for Privacy Consent Submissions Extended to July 31, 2016

Posted in Privacy
Kirsten Thompson

On May 11, 2016, Privacy Commissioner Daniel Therrien announced the Office of the Privacy Commissioner of Canada (“OPC”) would seek public input on the issue of how Canadians can give meaningful consent to the collection, use and disclosure of their personal information in an increasingly digital age. The OPC has released a discussion paper (“Report”) on considerations related to “enhancing” the consent model under the Personal Information Protection and Electronic Documents Act and a notice of consultation and call for submissions inviting all interested parties to answer specific questions related to the Report and also to provide any thoughts on issues raised. The deadline for submissions has just been extended to July 31, 2016.

For more on this, please see our previous post here.

Privacy Commissioner Seeks Public Input on Consent Model

Posted in Big Data, Internet of Things, Legislation, Privacy
Kirsten Thompson and Breanna Needham

On May 11, 2016, Privacy Commissioner Daniel Therrien announced the Office of the Privacy Commissioner of Canada (“OPC”) would seek public input on the issue of how Canadians can give meaningful consent to the collection, use and disclosure of their personal information in an increasingly digital age. The OPC has released a discussion paper (“Report”) on considerations related to “enhancing” the consent model under the Personal Information Protection and Electronic Documents Act (“PIPEDA”) and a notice of consultation and call for submissions inviting all interested parties to answer specific questions related to the Report and also to provide any thoughts on issues raised. The deadline for submissions is July 13, 2016.

The Report – An Overview

The Report considers the approaches taken by other jurisdictions to the issue of consent, including the EU General Data Protection Regulation (“GDPR”) reform initiative, which also recently included the initiation of a public consultation process, and the US approach, as governed by the Federal Trade Commission.

The Report also focuses on challenges that both businesses and individuals face when it comes to providing meaningful consent in an era of Big Data and the Internet of Things (“IoT”):

The consent model of personal information protection was conceived at a time when transactions had clearly defined moments at which information was exchanged. Whether an individual was interacting with a bank or making an insurance claim, transactions were often binary and for a discrete, or limited, purpose. They were often routine, predictable and transparent. Individuals generally knew the identity of the organizations they were dealing with, the information being collected, and how the information would be used…[N]ew technologies and business models have resulted in a fast-paced, dynamic environment where unprecedented amounts of personal information are collected by, and shared among, a myriad of often invisible players who use it for a host of purposes, both existing and not yet conceived of. Binary one-time consent is being increasingly challenged because it reflects a decision at a moment in time, under specific circumstances, and is tied to the original context for the decision, whereas that is not how many business models and technologies work anymore.

The Report goes on to offer several possible solutions to the problems in the current consent model and poses questions for reflection for the public consultation process.

The Suggested Changes

While noting that “[c]onsent should not be a burden for either individuals or organizations, nor should it pose a barrier to innovation and to the benefits of technological developments to individuals, organizations and society”, the OPC’s proposed “enhancements” to consent will likely cause concerns for business.

A great deal of the focus in the proposed reform revolves around creating processes that simplify complicated concepts such that individuals will be able to readily comprehend and appreciate the purposes to which their personal information may be put.

The proposed solutions are intended to address several specific challenges, including: making informed consent and information related to privacy preferences more readily comprehensible to individuals; creating “no-go zones” or “proceed with caution zones” to protect particularly vulnerable groups in high-risk sectors; devising accountability processes that include independent third parties; placing a greater emphasis on fairness and ethical balance with regard to the use of personal information; and stronger regulatory oversight of privacy protection that includes enforcement mechanisms that can be implemented for deterrence purposes.

Proposed Enhancements to Consent

The Report advocates for more transparent privacy policies and for privacy preferences that can be managed with greater ease, through the following mechanisms and considerations:

  • Greater transparency in privacy policies – through communicating privacy information at integral points in time to increase the ease with which a consumer can understand the flow of information and utilizing layered privacy policies that are simultaneously inclusive and intelligible.
  • Managing privacy preferences across services – through the use of an independent third party that screens and controls preferences and the related release of personal information.
  • Technology-specific safeguards – through built-in compliance mechanisms and broadly constructed recommendations for best practices, including comprehensive disclosure requirements to consumers both pre- and post-purchase.
  • Privacy as a default setting – whereby privacy is an inherently integrated component by default.

What this means to business remains to be seen. “Layered” privacy policies will, at a minimum, require most organizations to rewrite their current policies and add an additional layer of technological administration. The call for “dynamic, interactive data maps and infographics, or short videos” is unlikely to be met with enthusiasm by business, either. While the goal of transparency and readability is laudable, it is doubtful that consumers will spend any more time on these items than they do on existing text-based policies.

The use of an independent third party to manage privacy preferences across devices places the burden for doing so squarely on business. In this proposal, users would associate themselves with a standard set of privacy preference profiles offered by third parties and these third party websites would then vet apps and services based on the user’s privacy profile. It seems unlikely that these proposed third parties would offer this service for free.

Proposed Alternatives to Consent

The Report contemplates practicable alternatives to the traditional approach to consent, such as the de-identification of data and types of information that may not necessarily require consent, as well as the necessary changes to the applicable legislative framework that may be required for implementation.

  • De-identification – While the anonymization of information necessarily strips it of the contextual factors related to personal information that necessitate consent, the increasing sophistication of both data sets and the methods for analysis raises concerns about the value of this approach as a privacy protection mechanism.
  • “No-Go Zones” – Areas or zones of personal information of vulnerable groups whose data would be subject to a limited level of processing or potentially a complete prohibition.
  • Legitimate Business Interests – Situations in which personal data could be processed for a legitimate purpose that would no longer require consent unless another fundamental right necessarily required it.

Proposed Governance Considerations

The Report advocates for a greater level of accountability associated with ensuring the adequacy of privacy protections, encouraging transparency and assuring that best practices are being implemented consistently. This would include codes of practice that create transparent obligations and best-practice guidance; privacy trustmarks that give regulators an accountability mechanism for evaluating and designating organizations as compliant; and ethical assessments and autonomous organizations with specifically delineated goals focused on protecting the privacy of individuals.

Proposed Enforcement Models

While the Report considers situations in which self-regulation at both the industry and organization level may be appropriate, it also strongly suggests that there is a need for independent oversight, with accountability facilitated through fines and the ability to issue orders, as opposed to recommendations, in order to maximize effectiveness. While independence is seen as the cornerstone of any regulatory body in the future for ensuring privacy and meaningful consent, the Report focuses on a proactive compliance model that would serve a stronger deterrent purpose than that of the OPC as it exists today.

What Does this Mean for Businesses?

In the era of the IoT and Big Data, traditional conceptualizations of consent processes no longer necessarily apply. The OPC has expressed concerns about opaque consent processes that individuals don’t actually read or comprehend, and has indicated that the solution to this may include sector specific regulation on the collection and use of data as well as the associated consent processes utilized in obtaining personal information. Many businesses may need to both re-visit and re-word existing privacy policies and consent protocols in order to increase transparency, as well as the accessibility and intelligibility of the policies surrounding data and the purposes to which personal information will potentially be put.

Court Finds a Lesser Expectation of Privacy in Cameras than in Cell Phones and Computers

Posted in Internet of Things, Privacy, Social Media, Uncategorized
Joel Payne

Driven in part by advances in recording device technology such as wearable cameras and drone-mounted cameras, the trend of self-recording one’s life continues to grow.  The videos recorded on these devices are popular on social media and range from the mundane to the extreme.  Some even include criminal acts: illegally scaling structures and in some cases BASE jumping off of them, pushing cars and motorcycles to dangerous speeds, and all manner of other illegal acts that may endanger the performer and the public.  Given that filming one’s own crimes is a stupid thing to do, it is no surprise that courts are starting to see these videos introduced as evidence against the filmmaker/offender. However, some recording devices appear to attract a lesser expectation of privacy than others, based largely on judicial perceptions of predominant use.

Background

In R. v. Roy, 2016 ABPC 135, Judge H.M. Van Harten of the Provincial Court of Alberta made a ruling in a voir dire (an application in the course of a criminal trial to determine the admissibility of evidence) that contains an interesting discussion about individuals’ expectations of privacy in personal recording devices.  In this particular case, the device was a helmet-mounted GoPro camera.

The accused in this case, Mark Roy, and a friend were riding their motorcycles in Banff National Park in June 2014.  It is alleged that park wardens witnessed Roy and his friend driving badly and speeding.  One of the park wardens reported witnessing Roy popping a “wheelie”, which the warden considered to be “stunting” in violation of the Traffic Safety Act.  When wardens attempted to stop Roy and his friend, the pair allegedly refused to stop and evaded the wardens during a short pursuit but were ultimately apprehended later. During the apprehension, an RCMP constable noticed that Roy had a GoPro camera attached to his motorcycle helmet and demanded that Roy turn over the camera; Roy refused.  The constable then arrested Roy and seized the camera.

Decision

One of the issues in this decision was whether the GoPro had been unreasonably seized from Roy, contrary to s. 8 of the Charter of Rights and Freedoms.  The RCMP obtained a warrant before accessing the images on the camera, so this issue was limited to whether the constable’s decision to take the GoPro upon arrest was itself an unreasonable seizure.  Judge Van Harten had no trouble finding that the constable was justified in seizing the GoPro to preserve evidence incidental to the arrest.

The judge, however, went on (arguably in obiter) to discuss Roy’s reasonable expectation of privacy in the GoPro.  It is this part of the analysis that raises interesting questions about privacy expectations in personal recording devices.  The judge started by recognizing that (at para. 25):

[I]t’s well-known that people wear helmet mounted cameras to record their adventures be they skydivers, skiers, bungee-jumpers or, as in this case, motorcyclists. These recordings often find their way onto the Internet or become the subject of “reality TV” shows.

The judge then referred to the Supreme Court of Canada decision in R. v. Fearon, 2014 SCC 77, in which the Court created a new legal framework for permitting searches of cell phones incidental to arrest. The judge cited Fearon for the proposition that:

…the expectation of privacy in one’s personal digital devices is high, the level of expectation may vary depending on the type of device and the circumstances in which it is found.

Based on this proposition, the judge found:

The user of a helmet mounted camera who is under arrest in the circumstance, in which Roy found himself here, has a significantly lower expectation of privacy.

Should privacy expectations vary with the type of device?

The judge’s interpretation of Fearon is not entirely consistent with the Supreme Court of Canada’s analysis in that case.  Justice Cromwell, for the Fearon majority, affirmed the notion that a cell phone is not like a briefcase or a document (at para. 51).  Instead, the Court recognized that cell phones are essentially computers.  They hold an immense amount of data, which may include intimate details about a person’s life.  For these reasons the search of a cell phone may be a far more significant invasion of privacy than other searches incidental to arrest (for example, the search of someone’s pockets for physical items) (at para. 58).  The Court also noted two specific qualifications to its decision.  First, the particular capacity of a cell phone should not affect the analysis of the legality of a search.  A relatively unsophisticated cell phone should still be treated as the equivalent of a computer (at para. 52).  Second, the expectation of privacy is not affected by whether the cell phone is password protected or not (at para. 53).

While the type and nature of a device is undoubtedly part of the analysis of a person’s reasonable expectation of privacy in that device, the judge’s analysis in Roy seems to have assumed too readily that a GoPro is unlike a cell phone.  First, like cell phones, GoPro cameras have tremendous storage capacity.  It is irrelevant whether they are password protected or not.  And they are arguably as likely to contain intimate details of someone’s life as a cell phone—especially an unsophisticated cell phone.  While GoPro cameras are often mounted to record the use of a vehicle, that is not their only function and they are not permanently mounted.  A GoPro on a vehicle’s dash or on a helmet could easily contain video or images of someone’s children, a significant life event, an intimate encounter with another person, or routine work activities if used in the context of employment.

The device-specific, use-based approach to the analysis is unlikely to be helpful in the long term, either to law enforcement or individuals. There are a multitude of analogous recording devices on the market that come in all sorts of sizes and with a range of different functions and uses – many with network capacity that suggest an ability (but not a requirement) to publish or otherwise disclose the recorded information. A principled approach to recording devices, as suggested in Fearon, is more likely to result in a consistent and comprehensive legal framework.

Federal Agency Sanctioned for Private Company’s Actions (or, why there’s one less reality TV show on tonight)

Posted in Privacy, Privacy Act

The Office of the Privacy Commissioner of Canada (“OPC”) has found the Canada Border Services Agency (“CBSA”) responsible for the intrusive actions taken by reality TV producers – a private sector company that was responsible for obtaining and releasing the personal information of a detainee. While the OPC conceded that the collection of the detainee’s personal information was part and parcel of what the CBSA is permitted to do, it found that by allowing TV cameras to be present during that collection, the CBSA permitted a “real-time disclosure” of that personal information in violation of its obligations under the Privacy Act. This is an unusual, and expansive, understanding of the concept of “disclosure”.

Background

The media has recently reported that the hit reality television series, Border Security: Canada’s Front Line, will not be returning for a fourth season after the OPC recommended that the CBSA  end its participation in the program.

Border Security began airing in 2012 and drew an audience of several million Canadians. In short, the program captured encounters between CBSA officers and the public and showcased what happened when people tried to smuggle (among other things) Colorado marijuana, firearms, excess currency, and Chinese Peking duck into the country. It also highlighted situations where people attempted to enter Canada without the required documentation.

The Incident

On March 13, 2013, the show filmed the CBSA raiding a construction site in Vancouver, where officers found Oscar Mata Duran hiding. Officers proceeded to question Mr. Duran about his identity, immigration status, and employment. Mr. Duran had provided his verbal consent to be video recorded during this initial interrogation. Subsequently, Mr. Duran was processed at an immigration detention facility, where he was presented with a consent form in Spanish that would allow a private production company, Force Four Productions, to film his interactions with the CBSA during his time at the detention centre. Following his stay at the detention centre, Mr. Duran was deported to his home country, Mexico.

The British Columbia Civil Liberties Association ( “BCCLA”) subsequently filed a complaint on Mr. Duran’s behalf, alleging that the CBSA’s participation in the television program violated, among other things, the laws regarding disclosure of personal information by a government agency. The CBSA argued that the program educated the public in Canada and around the globe “about the CBSA’s contribution to keeping Canada safe and prosperous, and would demonstrate the challenges that CBSA officers face and the professionalism with which they carry out their mandate”.

The Law on Personal Information

The Privacy Act is legislation that recognizes a right to privacy by protecting Canadians’ personal information collected by the federal government. The Act applies to the federal public sector, which includes about 250 departments, agencies, and Crown corporations.

Section 3 of the Act defines personal information as information about an identifiable individual that is recorded in any form. Section 4 of the Act states that “no personal information shall be collected by a government institution unless it relates directly to an operating program or activity of the institution.”

Section 8 of the Act governs the rules regarding disclosure of personal information and provides that:

 Personal information under the control of a government institution shall not, without the consent of the individual to whom it relates, be disclosed by the institution except in accordance with this section.

Section 8(2) lists various circumstances where personal information may be disclosed, which includes when, “in the opinion of the head of the institution, the public interest in disclosure clearly outweighs any invasion of privacy that could result from the disclosure”.

The OPC’s Finding

By way of the Finding, the Privacy Commissioner applied sections 3, 4, and 8 of the Privacy Act in order to determine whether the CBSA violated federal law by failing to obtain Mr. Duran’s consent prior to disclosing his personal information. In the end, the Commissioner concluded that the CBSA violated the Privacy Act by engaging in the television program and disclosing people’s personal information in the process.

What makes this Finding particularly interesting is that the Commissioner essentially applied the Privacy Act to the CBSA due to the intrusive actions taken by Force Four Productions – a private sector company – which was responsible for obtaining and releasing Mr. Duran’s personal information. Normally, the Personal Information Protection and Electronic Documents Act (“PIPEDA”) applies to businesses and organizations in the private sector that collect, use, and store personal information. However, the Commissioner found that due to the CBSA’s contractual relationship with Force Four Productions, and the Agreement that governed this relationship, the private actor’s conduct could be imputed to the CBSA, thereby implicating the Privacy Act.

The Commissioner clearly stated that as a matter of principle, federal government institutions cannot contract out of their obligations under the Privacy Act. The Commissioner found that “the spirit and intent of the Act would be completely thwarted should federal government institutions have the authority to enter into agreements to facilitate the engagement of activities for which the institution itself may not be authorized.”

In this case, the CBSA and Force Four Productions had an Agreement whereby the CBSA would facilitate access to customs controlled areas to allow the production company to film the enforcement operations. There are two parts of the Agreement that provided the basis for the Commissioner’s overall Finding.

Firstly, the Commissioner found that the CBSA played an integral role “in providing the necessary conditions for filming to take place . . . and that the CBSA [had] substantial control over the collection of personal information by Force Four.” Secondly, the Commissioner found that the CBSA controlled the circumstances under which Force Four could film and maintained control over the footage. The agency also controlled when and how footage was collected, and had the right to review the footage; to comment on and approve the footage; to obtain an episode upon request; and to use and reproduce the footage for training purposes.

As a result, the Commissioner found it was not necessary to determine whether the CBSA actually participated in the collection of personal information itself. Rather, he found that the CBSA’s facilitation and control over the filming process “implicates the collection of personal information”, and therefore the CBSA had certain obligations under section 8 of the Privacy Act regarding any subsequent disclosure of that personal information (paras. 81-82):

However, the question of whether the CBSA can be said to be participating in the collection of personal information for the purpose of the Program is not determinative of our finding in this case. In our view, the CBSA is first collecting personal information in the context of its enforcement activities and thereby has a responsibility under the Act for any subsequent disclosure of the information that is collected for, or generated by, such activities.

Following our investigation, we are of the view that there is a real-time disclosure of personal information by the CBSA to Force Four [the producer] for the purpose of Filming the TV Program. Under section 8 of the Act, unless the individual otherwise provided consent, this personal information collected by the CBSA may only be disclosed for the purpose(s) for which it was obtained, for a consistent use with that purpose, or for one of the enumerated circumstances under section 8(2).

Lessons for Business Contracting with the Federal Public Sector

The Privacy Commissioner’s Finding raises a number of potential red flags for private individuals and businesses that contract with government institutions.

This case appears to suggest that when a private entity enters into an agreement with a federal government institution, and the collection of personal information is involved, the OPC may find the government actor to be in violation of the Privacy Act for actions the private entity took if:

  • the government actor provides the necessary conditions for the collection of personal information to take place;
  • the government actor has “substantial control” over the collection of personal information;
  • the government actor controls the circumstances under which the private actor can collect personal information; or
  • the government actor controls the personal information itself.

This could very well result in the end of a potentially very profitable contractual relationship.

It remains unclear whether this Finding will have any precedential value moving forward. However, individuals and businesses that work alongside the federal government would do well to exercise caution in their contractual relationships by first conducting a privacy assessment in order to determine how personal information will be collected, used, stored, and transmitted.

* Amanda Iarusso is a summer student in the Toronto office of McCarthy Tetrault.