CyberLex

CyberLex

Insights on cybersecurity, privacy and data protection law

Supreme Court Renders Landmark Privacy Decision in Royal Bank of Canada v. Trang

Posted in Privacy
Daniel G.C. Glover, Kirsten Thompson, Barry Sookman, Renee Reichelt, Charles Morgan

The Supreme Court of Canada released a landmark decision today giving important guidance on how Canada’s federal privacy law, the Personal Information Protection and Electronic Documents Act (“PIPEDA”), should be interpreted. In Royal Bank of Canada v. Trang, 2016 SCC 50, the Court ruled that courts can use their inherent jurisdiction to make orders permitting disclosures of personal information, including personal information contained in mortgage discharge statements. The Court also ruled that disclosures of personal information in such statements are permitted based on the implied consent of the mortgagor. Further, the Court held that while PIPEDA is consumer protection legislation for the digital age, it must be interpreted in a balanced way to facilitate the collection, use and disclosure of personal information by businesses.

Chatbots, Open Data and Sandboxes: Trending Topics from the 2016 Money20/20 Conference

Posted in AI and Machine Learning, Financial, FinTech, Mobile Payments, Privacy
Kirsten Thompson, Ana Badour, Matthew Flynn, Claire Gowdy

With more than 10,000 attendees, including over 3,000 companies from 75 countries, Money20/20 is the largest annual global event focusing on payments and financial services innovation. The 2016 conference in Las Vegas this October featured a packed agenda of talks by industry and thought leaders on a broad range of current and emerging Fintech issues, as well as an exhibition area featuring Fintech companies, investors, incubators, venture capitalists, consultants, regulators and lawyers. A team of McCarthy lawyers attended again this year and reports back on some of the hottest topics of the 2016 conference: machine learning and artificial intelligence (AI), open data and regulatory sandboxes/innovation hubs.

  1. Machine Learning and AI. One of the next ‘next things’ creating a buzz at the conference was the integration of machine learning and artificial intelligence tools into the provision of financial services. “What will banking be in two, three or four years? It’s going to be this,” asserted Michelle Moore, head of digital banking for Bank of America, as she introduced BofA’s new chatbot named Erica at Money20/20. Ms. Moore was not alone at Money20/20 in her obvious excitement about the promise of financial chatbots – tools that will permit a bank to interact with its customers through text messages or services such as Facebook Messenger and Amazon’s Echo. MasterCard also announced MasterCard KAI, a bot for banks that will put the company’s services on messaging platforms and enable consumers in the United States to inquire about their accounts, review banking history, monitor spending levels, learn about MasterCard cardholder benefits, receive contextual offers through integration with MasterCard Priceless experiences, and get help with financial literacy. Thinking Capital, the Montreal-based online lender to small businesses, has also launched a financial chatbot named Lucy, which it says was the first chatbot among Fintechs in North America.

    Chatbots are automated chat programs that use artificial intelligence to draw in data and translate it into understandable responses, akin to Siri, Apple’s interactive voice tool. Although the technology is not restricted to financial services, it’s clear that many financial industry stalwarts and startups alike are placing big bets on it. The customer-centric expression of this technological promise is that machine learning will, as described in the Money20/20 session Machine Learning & AI Powering Next Gen CX in Financial Services, “…raise the customer experience benchmark in financial services.” But it’s not all about the customer; another promising aspect of chatbots for financial institutions is reducing the cost of customer support.

    Notwithstanding the promise of the technology, as financial chatbots scale in number and functionality, the associated legal and regulatory issues will scale with them. For example, a text messaging platform may not be secure enough to handle the sensitive nature of consumers’ financial information, giving rise to privacy and consumer protection issues. Consumer protection and competition concerns may also arise where a chatbot is affiliated with a particular financial institution: the affiliation may prevent it from giving unbiased advice about financial products and providers, and may help the institution lock in customers by using – and hoarding – accumulated data.

  2. Open Data. Another major theme emerging from Money20/20 2016 was the concept of ‘open data’ and ‘open banking’. Open Banking is an emerging term in financial services / financial technology that refers to, among other things, the use of open application programming interfaces (or “APIs”) that enable third party developers to build applications and services for the financial institution. It is predicated on the principle that personal information of customers (such as account and transaction information) should, with consumer consent, be made freely available to third parties, who can then use this data to create new tools to increase consumer financial data access as well as competition among participants in the financial ecosystem. (A minimal sketch of what such an API call might look like appears at the end of this post.)

    The Director of the US Consumer Financial Protection Bureau (“CFPB”) made headlines when he endorsed open data in the financial context. In his Money20/20 remarks, the Director stated that the CFPB is “gravely concerned” that financial institutions are limiting or shutting off access to financial data, rather than “exploring ways to make sure that such access…is safe and secure.” He concluded with the point: “Let me state the matter as clearly as I can here: We believe consumers should be able to access this information and give their permission for third-party companies to access this information as well.” These comments reflect a similar push (mandated by legislation) in the EU (the adoption by the European Parliament of the revised Directive on Payment Services, or “PSD2”) and the UK (the Open Banking Working Group) to require Open Banking.

    The financial services sector has traditionally been a data-intensive industry, and the advent of ‘big data’ analytic techniques has created a new landscape for businesses based on data technologies: equity platforms based on crowdfunding, new platforms that match lenders with borrowers in innovative ways, data visualisation tools to follow companies, suppliers and clients, and a whole range of new payment systems based on mobile and cloud technologies. These transformative players include early innovators as well as established financial institutions which provide big data-related services that are shaking up the traditional financial markets.

  3. Regulatory Sandboxes, Innovation Hubs and the Fintech Charter. The issue of how regulators should respond to Fintech innovation, and to what extent they can encourage innovation, was the final hot topic of the conference. Regulatory approaches to Fintech were discussed in depth at a panel featuring the Director of the CFPB and the Head of Project Innovate from the UK’s Financial Conduct Authority (FCA).

    The Head of Project Innovate described the nature and status of the UK’s Project Innovate, which features a regulatory sandbox (a regulatory “safe space” in which qualifying businesses can test innovative products and services without immediately incurring all the normal regulatory consequences of engaging in such activity), as well as an advice unit and an innovation hub. The FCA accepted 24 applications to the regulatory sandbox (out of a total of 69) as part of the first cohort of applicants, and this first cohort is expected to begin testing shortly. More detail on the first cohort is available here.

    In the US, the CFPB has launched Project Catalyst, aimed at promoting consumer-friendly innovation. Project Catalyst involves the CFPB engaging with key stakeholders, coordinating with other government agencies, and running an “office hours” program of outreach to the Fintech community. In addition, the Office of the Comptroller of the Currency (OCC) has been considering the creation of a national limited-purpose Fintech charter. While the approach has drawn praise from some within the Fintech community, some state regulators and consumer protection groups have been critical of the national Fintech charter concept, suggesting that a federal charter would likely preempt state laws on interest rates and impair the ability of states to protect less sophisticated retail consumers. The OCC has also recently issued a white paper outlining its recommendations for a responsible innovation framework. However, the recent US election could impact these various initiatives, given statements made by President-Elect Donald Trump during the presidential campaign, in particular in respect of the role of the CFPB.

    In Canada, as in the US, legislative authority over financial services is divided between the federal government and the provinces (or states), and this mix of jurisdiction adds to the complexity of regulating and fostering Fintech in both countries. Notably, the Ontario Securities Commission (OSC) recently announced that it will be taking steps to help Fintech entities navigate the regulatory framework through its innovation hub, “OSC Launchpad”, which was unveiled on October 24, 2016. Read more about the OSC Launchpad here.
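For readers curious what an ‘open API’ interaction actually looks like, below is a minimal sketch (in Python) of a third-party application retrieving transactions from a bank’s account-information API after the customer has granted consent. The endpoint, token flow and field names are hypothetical illustrations loosely modelled on PSD2-style account-information services, not any particular bank’s real interface.

```python
import requests

# Hypothetical open-banking endpoint; real APIs vary by bank and are reachable
# only with a token issued through a customer-consent (OAuth 2.0) flow.
API_BASE = "https://api.examplebank.com/open-banking/v1"
ACCESS_TOKEN = "token-obtained-via-customer-consent"  # placeholder

def list_transactions(account_id: str) -> list:
    """Fetch recent transactions for an account the customer agreed to share."""
    response = requests.get(
        f"{API_BASE}/accounts/{account_id}/transactions",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()  # surface HTTP errors rather than bad data
    return response.json().get("transactions", [])

if __name__ == "__main__":
    for txn in list_transactions("acct-123"):
        print(txn.get("date"), txn.get("amount"), txn.get("description"))
```

The consent-scoped token is the crux of the model: it is what lets the customer, rather than the bank, decide which third parties see the data.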

For more information about our firm’s Fintech expertise, please see our Fintech group’s page.

Privilege and Privacy in the Context of Company Email: Recent Canada vs US Cases

Posted in E-Discovery
Krupa Kotecha

Peerenboom v Marvel Entertainment (2016 NY Slip Op 31957(U)) is a drama-driven case in which the New York County Supreme Court afforded Toronto businessman Harold Peerenboom the right to obtain the private emails of Isaac Perlmutter, the CEO of Marvel Entertainment Inc (“Marvel”). Perlmutter had claimed privilege over the emails; Peerenboom – who ultimately prevailed – argued that Perlmutter had sent them via his work email server and, in doing so, had waived privilege.

The Factual Background

The dispute between the two men centered on the management of the tennis club at the individuals’ exclusive compound in Palm Beach, Florida, where both men vacation over the winter. Following the dispute, anonymous letters defaming Peerenboom (and falsely accusing him of child molestation and murder) were sent to persons living and working at the luxury condominium complex where the two men reside.

Peerenboom commenced an action in the Circuit Court of Palm Beach County, Florida, alleging that Perlmutter and his wife were the persons responsible for sending the defamatory letters. Since Perlmutter allegedly used Marvel’s email server for his electronic communications (in his capacity as CEO of the company), Peerenboom issued subpoenas in the Florida action, addressed to Marvel, to obtain any communications sent or received by Perlmutter or his wife through Marvel’s email server that referred to Peerenboom and others involved in the dispute.

As Marvel’s principal office is in New York, Peerenboom thereafter commenced a proceeding against Marvel in New York County to enforce the subpoenas. Despite not being named as a party to the proceeding, Perlmutter submitted three separate motions for a protective order, alleging that the emails sought by Peerenboom were protected from disclosure by various privileges, including the attorney-client privilege, the work-product privilege, the common-interest privilege, a purported accountant-client privilege, a limited principal-agent privilege, and the marital privilege.

Peerenboom opposed the motions, contending that Perlmutter had waived all privileges by sending or receiving the emails on Marvel’s server. Marvel’s written computer usage handbook includes the following provision:

“[H]ardware, software, email, voicemail, intranet, and Internet access, computer files and programs—including any information that you create, send, receive, download, or store on Company assets—are Company property and [the Company] reserve[s] the right to monitor their use, where permitted by law to do so” [emphasis added].

The Decision

Justice Nancy Bannon, writing for the New York Supreme Court, held that Perlmutter had not satisfied the burden imposed on a person asserting any form of privilege, namely, showing that the information sought was immune from disclosure. Furthermore, the Court held that, “since privileges shield from disclosure pertinent information, and therefore constitute obstacles to the truth-finding process, they must be narrowly construed.”

In particular, Justice Bannon held that Perlmutter did not have a reasonable expectation of privacy in connection with electronic messages sent and received on Marvel’s server, and consequently waived the attorney-client and work-product privileges in connection with them. The Court agreed with Peerenboom that the use of a proprietary email system, subject to a computer usage policy such as that adopted by Marvel, constituted a waiver of any privilege that might otherwise attach to the communications.

Consequently, Perlmutter’s claims pertaining to attorney-client privilege and work-product privilege were dismissed.

Because Perlmutter had waived the attorney-client and work-product privileges with respect to all communications on Marvel’s server, the Court concluded that he must also have waived those privileges in connection with communications that may have been relayed through an intermediary.

The Court also rejected Perlmutter’s contention that the common-interest privilege prevented the disclosure of communications between himself and Stephen Raphael, who assisted him in financing the Florida litigation. For this privilege to apply, the communication sought to be protected must relate to actual or anticipated litigation. Perlmutter did not show that the communication he sought to protect was relevant to the matter, “in furtherance of a common legal interest”, or that he and Raphael had a reasonable expectation of confidentiality in respect of these communications.

With respect to marital privilege, the Court concluded that all electronic communications between Perlmutter and his wife on the Marvel server that were confidential in nature were protected by the marital privilege, unless knowingly shared. Conversely, all electronic communications between Perlmutter and his wife on the Marvel server that were not confidential in nature, but requested in the course of the litigation, were required to be turned over to Peerenboom.

The Takeaway

Peerenboom v Marvel is an informative case for companies with cross-border operations in the United States (and in particular, those with headquarters in New York County). It sheds light on instances where employees will not succeed in invoking privilege over communications sought in litigation, particularly where a company’s policy specifically excludes the possibility of privacy in such circumstances. It remains to be seen if the decision will be appealed.

The Peerenboom decision stands in sharp contrast to a similar recent decision of the Ontario Superior Court of Justice in Narusis v. Bullion Management Group (see our previous blog post here). In Narusis, the company had engaged a contractor to deliver certain services and, in this context, provided the contractor with a corporate e-mail account. The contractor used this corporate e-mail account to exchange e-mails with his lawyer about a legal dispute he had with the company.

On a motion claiming that Narusis had waived privilege over these emails, the court concluded on the facts that Narusis had not, implicitly or explicitly, waived his solicitor-client privilege over the e-mails. However, in its analysis, the court suggested that had Narusis been an employee (instead of a contractor), had he signed the company policy governing such things, or had personal e-mails been forbidden, the outcome might have been different.

The Court in Narusis also went on to distinguish an employee’s right to privacy from the protection afforded to communications under solicitor-client privilege, noting the two concepts are very different, and serve very different purposes.

Companies working across the border should be mindful of how courts in Canada and the US may interpret their policies and actions.

US Federal Regulators Propose Binding Rules to Enhance Banks’ Cybersecurity Practices

Posted in Cybersecurity
Taha Qureshi

On October 19, 2016, three US financial regulators – the Board of Governors of the Federal Reserve System, the Office of the Comptroller of the Currency and the Federal Deposit Insurance Corporation (collectively, the “Agencies”) – issued a joint Advance Notice of Proposed Rulemaking (“ANPR”) seeking comments from all stakeholders on enhanced cyber risk management standards. Historically, US regulators have provided non-mandatory cybersecurity best-practice guidelines for voluntary compliance by financial institutions, to ensure preparedness in the face of cyber threats. For the first time, the ANPR outlines proposals for minimum binding standards that would be applicable to some of the largest regulated institutions in the US – those with consolidated assets of US$50 billion or more on an enterprise-wide basis. These binding standards signal a shift in the approach of US regulators from lenient oversight to one that is more prescriptive.

Notably, the proposals would also apply to certain nonbank financial institutions (“NBFIs”), as well as to third-party service providers used by regulated financial institutions. NBFIs include non-licensed institutions that facilitate financial services, such as online brokerages, while third-party service providers include entities that provide payments processing, core banking and other financial technology services. The expansive scope of the ANPR further indicates the Agencies’ vision for more detailed regulation of the financial sector’s cybersecurity preparedness than in the past.

While the enhanced standards will be baseline minimums applicable to all covered entities, the ANPR proposes an additional, higher set of standards for those financial institutions with “sector-critical systems.” The Agencies define critical elements of the US financial system to include markets for commercial paper, corporate debt and equity, and US government bonds. Financial institutions that will be held to this additional set of standards are defined as those that play a role in critical markets with sufficient market share that their failure to settle their own or their customers’ material pending transactions by the end of the business day could present systemic risk.

The ANPR divides the minimum proposed standards into five categories:

Cyber-Risk Governance

  • Refers to maintaining a formal cyber risk strategy integrated into risk governance structures. The proposal would require the board of directors to develop a written, enterprise-wide cyber risk strategy, approve cyber risk appetite and tolerances, and oversee and hold senior management accountable for implementing policies; it would also require board members to have adequate cybersecurity expertise to credibly carry out this role.

Cyber-Risk Management

  • The proposal would require business units responsible for day-to-day operations to frequently assess the cyber risks associated with their activities, comply with the entity’s own cyber risk management framework, and report vulnerabilities and threats to senior management.
  • The proposal also recommends establishing an independent risk management function within the entity to analyze, respond to, and promptly escalate cyber risk issues at the enterprise level; an additional audit function is proposed to frequently evaluate the efficacy of established policies and protocols.

Internal Dependency Management

  • Refers to the cyber risks associated with an entity’s own business assets (e.g., insider threats, data storage policies and the use of legacy systems acquired through acquisitions); the internal dependency strategy would form part of the entity’s broader cyber risk management plan, minimizing risks from internal dependencies by keeping an inventory of, and mapping, all vulnerable assets to ensure monitoring and an adequate level of incident response.

External Dependency Management

  • Refers to cyber risks associated with an entity’s relationships with outside vendors, suppliers, customers and other service providers; as with internal dependency management, this requires awareness of all possible external risk sources, as well as defined policies to ensure effective monitoring and incident response.

Incident Response, Cyber Resilience and Situational Awareness

  • Refers to an entity’s ability to maintain critical functionality in the event of cyber security incidents or disruptions; the proposals require establishing recovery time objectives, as well as developing protocols for secure, immutable off-line preservation of critical records in the event of a significant cyber event.

Impact of Proposed US Regulations on Canadian Financial Institutions

In light of the proposed enhanced standards, Canadian financial institutions should carefully consider any potential consequences and liabilities arising out of recent or future acquisition activity in the US. The Agencies are considering applying the enhanced standards to the US operations of foreign banking organizations with total US assets of US$50 billion or more. Canadian financial institutions expanding their footprint in the US run the risk of becoming subject to these mandatory minimum standards in the future, even if their US asset base does not yet exceed the threshold.

The binding nature of the proposed US Regulations will also likely catch the attention of Canadian regulators. In 2013, the Office of the Superintendent of Financial Institutions (“OSFI”) introduced a voluntary Cybersecurity Self-Assessment Guideline (the “Guideline”) and allowed federally regulated financial institutions (“FRFIs”) to assess their own levels of preparedness and respond to any perceived gaps or weaknesses.

Notably, OSFI stated at the time that while it encouraged FRFIs to utilize the Guideline, it did not plan to establish specific guidance for the control and management of cyber risk. OSFI did however reserve the right to specifically request completion of the otherwise voluntary self-assessment, or emphasize certain best practices in future supervisory circumstances. Since 2013, OSFI has made no substantial changes to the Guideline, limiting its updates to improved guidance in light of an evolving understanding of the cybersecurity threat. (By way of contrast, the New York Department of Financial Services (“NYDFS”) recently announced its first State-level regulations for cybersecurity applicable to financial institutions – see our blog post here.)

The existing Guideline prescribed by OSFI touches upon most, if not all, of the key priority areas identified by the ANPR. As far as the difference between a voluntary and a mandatory system goes, the OSFI self-assessment template leaves it to FRFIs to devise mechanisms and methods of achieving the stated diligence goals, whereas the ANPR recommends specific methods and procedures.

Despite the differences in policy mechanisms between Canada and the US, it is not inconceivable that Canadian regulators could eventually shift towards binding standards, though there is nothing at present to indicate such a move is being considered. The risks posed by cyber threats to the financial system are evolving rapidly, and Canadian regulators may prefer to share the burden of ensuring steadfastness in the face of these risks with all participants in the financial system. The Bank of Canada has also touted the close similarities between the Canadian approach to cybersecurity policy guidance and stress testing and that of other jurisdictions such as the US and UK. If these and other countries embark on a trend of binding standards, Canadian regulators may follow suit to stay in line with accepted global regulatory thinking.

What If You “Lost” Your Fingerprint?

Posted in Authentication, Cybersecurity, FinTech
Diego Beltran, Meghan S. Bridges

Biometric authentication is becoming increasingly common. Smart phones and computers use it, banks have started to use it (in India, Yes Bank unveiled its iris scan-enabled point-of-sale solution; in the US, Bank of America allows fingerprint authentication to log onto its mobile banking app; in Canada, TD Bank uses voice recognition to identify users over the phone), and recently MasterCard began rolling out ‘selfie pay’, which allows users to authenticate online payments with their face at the point of sale.

Biometric authentication refers to the validation of a user’s identity by measuring physical or behavioral characteristics. Biometric samples may include fingerprints, retinal scans, palm scans, and facial and voice patterns.

Unlike a password, biometric data is unique, invariable and non-repudiable, allowing its owner to be identified uniquely and unambiguously. It is these same characteristics that make the secure storage of biometric data critical. If someone steals this data, they can steal a person’s identity – and, unlike a password or physical token, the victim cannot simply replace the stolen information with new data.

Biometrics are good security, but they are not impenetrable. A recent report highlights a number of ways malicious actors may circumvent biometric authentication and suggests possible countermeasures to such attacks. Although the report’s focus is on ATMs, the attacks illustrate some pitfalls to be avoided more generally.

This is not hypothetical: in September of this year, the U.S. Office of Personnel Management (OPM) revealed that fingerprint data belonging to nearly 6 million individuals was compromised in a recent cyberattack. The OPM had confirmed in July that personal information – including Social Security numbers, mental health records and financial histories – belonging to 21.5 million current, former and prospective government employees had been stolen by hackers. The agency’s September announcement thrust biometrics into the spotlight by revising its initial estimate of stolen fingerprint data from 1.1 million to 5.6 million, after an archived record of an additional 4.5 million fingerprint sets was uncovered.

The OPM has said it believes that, as of now, the ability to misuse fingerprint data is limited, but this could change over time as technology evolves. An interagency group made up of agents from the Federal Bureau of Investigation, the U.S. Department of Homeland Security, and other members of the intelligence community is reviewing the ways hackers may use the fingerprint data.

The report also highlights some of the ways hackers have been able to bypass biometric security, including:

  • Attacking biometric devices to intercept data as it is transmitted;
  • Using biometric data skimmers to obtain data at the point of input;
  • Extracting data from EMV-cards, after stealing or obtaining the cards by other means; and
  • Attacking biometric databases directly.

The advice and possible countermeasures discussed in the report include:

  • Encrypting data in transit;
  • Using anti-skimming devices;
  • Monitoring the black market; and
  • Using strong encryption for stored data (a minimal sketch of this last measure appears below).
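By way of illustration only, the following Python sketch shows one way the last countermeasure might look in practice: encrypting a biometric template with authenticated encryption before it is persisted. The cryptography library’s Fernet recipe is used here as a stand-in; the report does not prescribe any particular algorithm or library, and real deployments would keep the key in a hardware security module or key management service rather than in memory.

```python
from cryptography.fernet import Fernet

# Illustrative only: in production the key would live in an HSM or KMS,
# never alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

fingerprint_template = b"\x12\x34..."  # stand-in for an enrolled biometric template

# Encrypt before persisting; Fernet authenticates as well as encrypts,
# so tampering with the stored blob is detected on decryption.
stored_blob = cipher.encrypt(fingerprint_template)

# Decrypt only at the moment of comparison, then discard the plaintext.
assert cipher.decrypt(stored_blob) == fingerprint_template
```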

Given the implications of compromised customer biometric data, organizations will want to consider carefully how biometric data is stored and who within the organization will have access to it. The fewer people with access to customer biometric data, the better. Organizations should also consider whether their applications, their data, or both require this level of protection; if developing mobile apps, organizations should consider keeping biometric data securely stored on the device itself. These measures are not only prudent protections of biometric data, but also evidence to which a company might point in litigation when arguing that it was not negligent in storing and protecting private personal information. Since there is currently no Canadian case law about biometric data, whether this kind of argument will succeed remains to be seen.

The legal implications of the use of biometric data remain uncertain. One of the many questions raised by this report is: what is a consumer’s remedy for stolen biometric data? Generally, the law attempts to put a person in the position they would have occupied but for their injury (i.e., but for their identity being stolen, or but for their data being breached, etc.). But once a person’s fingerprint is stolen, they cannot grow a new fingerprint.

Some jurisdictions have passed laws creating a statutory right of action. For instance, Illinois’ Biometric Information Privacy Act  allows for a private right of action  that could expose businesses to possible civil liability. Texas has statutory provisions addressing biometric data, but only the Texas attorney general can bring an action to enforce the statute and collect a civil penalty.

How the law adapts to compensate a person for this seemingly irremediable loss, in light of well-established principles of compensation under contract and tort law, will be one of many interesting developments in the near future.

Seven Practical Lessons from CRTC’s First CASL Enforcement Decision

Posted in CASL
Keith Rose, Daniel G.C. Glover, Charles Morgan, Kirsten Thompson

Although CASL has been in force since July 1, 2014, the Canadian Radio-television and Telecommunications Commission (“CRTC”) has conducted its investigations and levied its penalties in a generally non-public manner. Until now, the CRTC’s Compliance and Enforcement branch had publicly commented on only one Notice of Violation (“NoV”) under CASL. We understand that an undisclosed number of other NoVs have been issued without public comment.

All other public CASL enforcement actions have taken the form of negotiated “undertakings” which are forms of settlements reached in confidential, closed door negotiations with the enforcement branch.  This atmosphere of secrecy has made it difficult for organizations across Canada to understand the law and assess their risks.

This has now changed, at least to a degree.  On October 26, the Commission issued its first written reasons in a decision under s. 25(1).

Background

The decision relates to a NoV apparently issued on January 30, 2015, to Blackstone Learning Corp.  As reported in the decision, the NoV related to some 385,668 email messages advertising educational and training services sent between July 9 and September 18, 2014, primarily targeting government employees.

Blackstone received a Notice to Produce (“NtP“) on November 7, 2014. (See our previous post for background on NtPs.)  Blackstone requested a review of the NtP on December 4, after the deadline for production had passed.  The Commission denied this request in a letter on January 22, 2015.  The NoV was issued approximately a week later.

Although an NoV sets out an Administrative Monetary Penalty (or “AMP”) for the alleged violation, these notices are not final determinations.  The person named in the NoV has the right, under s. 24(1), to respond to the notice with representations to the Commission, within 30 days.  Blackstone made such submissions on February 14.

In parallel with that, Blackstone also made an unsuccessful attempt to appeal the letter decision denying its request for review.  Blackstone sought leave to appeal to the Supreme Court of Canada, which was the wrong forum.  The right of appeal under s. 27(1) of CASL is to the Federal Court of Appeal.

Pursuant to s. 25(1), the Commission must decide, on a balance of probabilities, whether the person committed the alleged violation and, if so, whether or not to impose the AMP set out in the NoV.  The Commission has the power to vary the amount of the AMP, or to suspend it subject to conditions, or waive it altogether.

The Decision

The decision canvasses a number of issues and will be carefully parsed by practitioners.  However, a few practical points jump out for immediate consideration.

  1. The decision was issued nearly 20 months after the NoV. This is quite a long time.  However, CASL does not set any particular time frame for this process.  It remains to be seen whether this is typical, or whether decisions will come more quickly in the future.
  2. The Commission reduced Blackstone’s AMP from $640,000 assessed in the NoV to $50,000. This substantial gap (reducing the initially proposed AMP by 92%)  suggests that (a) the Enforcement branch may need to re-calibrate its approach; and (b) it is worthwhile for anyone with a similar NoV to exercise their right to contest the AMP, as the Commission has demonstrated a willingness to consider mitigating factors in assessing the level of an AMP.
  3. The decision counts the number of “violations” based on the number of “campaigns”, not the (much higher) number of individual messages sent. In this case, the NoV alleged 9 violations in respect of 385,668 emails. This approach theoretically lowers the potential upper level of liability under CASL, though with a fining power of up to $10M per violation, this may prove to be little solace for Canadian organizations.
  4. Even though the messages (which referred to and promoted training programs) did not discuss any specific commercial terms, the references to discounts and group rates were enough for the Commission to conclude the messages were “commercial electronic messages” subject to CASL.
  5. The Commission’s analysis of the conspicuous publication exception seems to add a condition which does not expressly appear in the statute. According to the decision, in order for this exception to apply, the address must be “published in such a manner that it is reasonable to infer consent to receive the type of message sent, in the circumstances”.  The statute does not refer to the manner of publication, except that it must be conspicuous, and the only express limitation on the type of message is that it must be “relevant to the person’s business, role, functions or duties in a business or official capacity”.

    The Commission’s analysis does little to clarify how this exception can be applied.  However, it does effectively confirm that any business relying on this exception will have the burden to prove that the circumstances of publication fit within the wording of the Act.  In the words of the decision:

Paragraph 10(9)(b) of the Act does not provide persons sending commercial electronic messages with a broad licence to contact any electronic address they find online; rather, it provides for circumstances in which consent can be implied by such publication, to be evaluated on a case-by-case basis.

  6. The Commission also sets out a series of principles for the assessment of AMPs that will undoubtedly be influential in its future determinations. In particular:
    • the amount of the penalty must be enough to promote changes in behaviour, but driving a person out of business would not promote compliance – it would preclude future compliant behaviour;
    • the volume of complaints received will be relevant to assessing the nature and scope of the violation(s), but so is the time period over which the messages were sent – 60 unique complaints (for nearly 400,000 messages, or a complaint rate of approximately 0.016%) was considered to be a significant volume, but the fact that the violations were limited to a two-month time span indicated a lower penalty;
    • unaudited financial statements can be acceptable evidence of ability to pay, and an AMP that would represent “several years’ worth” of revenues would be excessive; and
    • lack of cooperation during an investigation may result in a higher penalty, while evidence of efforts to comply (even if “not particularly robust”) suggest a lower penalty will be appropriate.
  7. Blackstone was given until November 25, 2016 to pay the assessed AMP – which also happens to be the deadline for it to file an appeal of the decision to the Federal Court of Appeal (or an application for leave to appeal, if on a question of fact). Interest of 3% above the average bank rate, compounded monthly, will be applied after that date (a worked illustration of this calculation appears below).
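To make the interest term concrete, here is the monthly-compounding arithmetic applied to the $50,000 AMP. The average bank rate moves over time, so the 0.5% figure below is an assumption for illustration, not the rate that would actually apply.

```python
# Assumed average bank rate of 0.5% (illustrative); interest accrues at
# 3% above that rate, compounded monthly, on the unpaid AMP.
principal = 50_000.00
annual_rate = 0.005 + 0.03          # assumed bank rate + 3% spread
monthly_rate = annual_rate / 12

months_overdue = 6                  # e.g., six months past the payment deadline
balance = principal * (1 + monthly_rate) ** months_overdue
print(f"Balance after {months_overdue} months: ${balance:,.2f}")  # ≈ $50,881.41
```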

Impacts of Artificial Intelligence Remain Grey Areas, says White House Report

Posted in AI and Machine Learning, Cybersecurity, Privacy
Douglas Judson

Earlier this month, the Executive Office of the President’s National Science and Technology Council (the “NSTC”) released a report entitled Preparing for the Future of Artificial Intelligence. The report surveys the current state of artificial intelligence (“AI”).

The NSTC foresees a future in which AI technologies play a growing role in society – opening up new economic opportunities and markets, and spurring innovation in the health, education, justice, energy, and environment sectors. The report cautions that the development of AI poses a number of challenges to conventional public policy, regulatory structures, and the data management protocols in place to protect privacy.

AI presents a unique policy challenge compared to prior innovation

AI enthusiasts are optimistic about the promise of machine learning to improve people’s lives by solving challenges and inefficiencies. And rightly so – driverless cars, unmanned aircraft, and machines assisting with medical treatment are just some of the AI technologies that have made headlines in recent months. The NSTC points, by analogy, to the transformative impact that advancements in mobile computing have had on industry, government, and how individuals carry out their day-to-day lives.

While prior conceptions of AI were mostly just examples of advanced computer programming, the current wave of AI is concerned with machines that can ‘think’ and ‘learn’. Coupled with new sources of big data and the capabilities of more powerful computers, improved machine learning algorithms are ushering in a new era of technology that “exhibits behaviour commonly thought of as requiring intelligence.”

For these reasons, AI tools are a different beast from prior tech innovations, and may not be governable by the same types of regulatory frameworks. The report indicates that any approach to regulating AI-enabled products should be informed by the aspects of risk that the addition of AI may reduce or augment. Where a risk falls within an existing policy regime, the report says that the policy exercise should begin by considering whether the existing regulatory framework adequately addresses the risk in question, or whether it needs to be adapted to better account for AI.

But the red tape must be reasonable too. The NSTC is conscious of the need to show leadership in AI innovation, and of the risk that cumbersome regulation could make the U.S. a laggard in this emerging field. The report indicates that where the regulatory responses to the addition of AI technologies threaten to increase compliance costs or slow the development of beneficial innovations, policymakers should consider how those policies could be adapted to lower costs and other barriers to innovation without compromising safety or market fairness.

AI Privacy Risks

It is little surprise that the focus on “intelligent” goods like driverless cars has caused much of the AI discussion to revolve around risks to physical safety. Yet privacy is another significant area of public concern implicated by AI developments. The NSTC report highlights these challenges.

AI may trigger privacy issues ranging from the mismanagement of personal information to the unfair allocation of public resources and inappropriate technology responses to new information. This is because AI technologies, by their nature, teach themselves and adapt their own analytical processes, which necessarily complicates compliance and adherence to best practices. As with any transformative force, there are no agreed methods for assessing the effects of AI in commercial and public service applications on human populations.

The report identifies a number of privacy risks arising from AI, including the following:

  • Because AI algorithms can be (or can become) opaque, it is difficult to trace or explain AI-based decision-making. The use of AI to make consequential decisions about people, often replacing decisions made by human actors and institutions, leads to concerns about how to ensure justice, fairness, and accountability—the same concerns voiced previously in the “Big Data” context.
  • The proliferation of AI may have the capacity to compromise fairness, particularly in the public service context. This is because AI may perpetuate bias and disadvantage for historically marginalized groups if the technology trains itself using a model that reflects past, biased decisions or statistics.
  • While AI may make for interesting applications in controlled or laboratory environments, the inherent unpredictability and inbuilt adaptability of AI mean that real-world applications may be much riskier. The use of AI to control physical-world equipment leads to concerns about safety, especially as systems are exposed to the full complexity of the human environment.

Cybersecurity Applications and Risks of AI

The Report points out that AI has important applications in cybersecurity, and is expected to play an increasing role for both defensive (reactive) measures and offensive (proactive) measures. For instance, automating what is now expert work, either partially or entirely, may enable strong security across a much broader range of systems and applications at dramatically lower cost, and may increase the agility of cyber defenses. Using AI may help maintain the rapid response required to detect and react to the landscape of ever evolving cyber threats. Future AI systems could perform predictive analytics to anticipate cyberattacks by generating dynamic threat models from available data sources that are voluminous, ever-changing, and often incomplete. AI may be the most effective approach to interpreting these data, proactively identifying vulnerabilities, and taking action to prevent or mitigate future attacks.
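As a toy illustration of the kind of predictive, anomaly-based defence the Report contemplates (and not a description of any specific system it discusses), the sketch below trains an unsupervised model on a baseline of normal network behaviour and then flags an outlying event. The features and thresholds are assumptions chosen for demonstration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Assumed per-session features: [requests_per_min, megabytes_out, failed_logins]
baseline = rng.normal(loc=[20.0, 1.0, 0.0], scale=[5.0, 0.3, 0.5], size=(500, 3))

# Fit on normal traffic only; the model learns what "usual" looks like.
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

suspicious = np.array([[400.0, 80.0, 12.0]])  # traffic burst plus failed logins
print(model.predict(suspicious))              # -1 flags the event as anomalous
```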

However, AI systems also have their own cybersecurity needs. The Report calls for AI-driven applications to implement sound cybersecurity controls to ensure integrity of data and functionality, protect privacy and confidentiality, and maintain availability.

Meanwhile, in Canada

In the same week that President Obama kicked off a “national conversation” about AI, it remains to be determined whether any Canadian governments are taking a concerted approach to AI and its policy needs.  A September 2016 report from Bloomberg indicated that Canada lags behind its peers in AI investment.

Federal ministries with the most relevant portfolios (such as Transport Canada, Innovation, Science and Economic Development, Global Affairs, and National Defence) may each be pressed to consider policy and regulations. Private sector development of Canadian AI has already begun,  with investors backing ambitious machine-learning and AI projects.

What is clear is that AI is coming, and it poses critical challenges to the existing state of privacy and personal information policies and practice. AI practitioners should keep abreast of these developments, and advance their AI applications without losing sight of data management principles – like transparency and accountability.

CJEU Finds that Dynamic IP Addresses are Personal Information

Posted in Cybersecurity, European Union, Privacy
Keith Rose

On October 19, the CJEU handed down its decision in the Breyer case.  The case, which arose from a complaint by Mr. Patrick Breyer, deals with whether dynamic IP addresses logged by a website are personal information protected by privacy law.

The CJEU concluded that the answer to this is yes, provided that it was both technically and legally possible for the website operator to obtain information that could link that address back to an individual.  The court expressly considered that the possibility of obtaining that information via a court order or a similar intervention by some other competent authority would meet this standard.

This decision has some significant implications both in Europe and in Canada.

First, and most obviously, it means that all websites subject to European data protection law need user consent, or some other legal justification, to log IP addresses.  Once the General Data Protection Regulation (“GDPR“) comes into effect in 2018, this will include non-European websites that offer goods or services in the EU, or that track user behaviour in the EU (for example, for advertising purposes).

However, in the long term, the more important consequence may be to make it much harder to avoid privacy law obligations by de-identifying data.  Effectively, the Breyer decision suggests that, if re-identifying information exists anywhere within the reach of a court order, that information must always be considered personal information in the hands of anyone who could possibly obtain such an order.
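For context, one common technique for de-identifying logged IP addresses is to zero out the host portion before storage (this is, for example, how IP-anonymization features in web analytics tools typically work). The Python sketch below shows the idea. On Breyer’s logic, even an address de-identified this way could remain personal information if data allowing re-identification exists somewhere within the reach of a court order.

```python
import ipaddress

def truncate_ip(ip: str) -> str:
    """Zero the host portion of an address before logging (IPv4 /24, IPv6 /48)."""
    prefix = 24 if ipaddress.ip_address(ip).version == 4 else 48
    network = ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
    return str(network.network_address)

print(truncate_ip("203.0.113.77"))  # -> 203.0.113.0
print(truncate_ip("2001:db8::1"))   # -> 2001:db8::
```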

In Canada, the applicable legal test is whether there is a “serious possibility that an individual could be identified through the use of that information, alone or in combination with other information.”  [See e.g. Gordon v. Canada (Health), 2008 FC 258.]  This test reflects the same concern about re-identification.

According to findings of the Privacy Commissioner of Canada, the test does not require that “someone would necessarily go to all lengths to actually do so”.  But no decision to date in Canada has gone so far as to say that a hypothetical option to obtain a court order would be enough by itself to satisfy the “serious possibility” standard.

For instance, in a 2011 decision, the Privacy Commissioner rejected the argument that Facebook collected personal information when it logged the IP addresses of non-members who visited third-party sites using its social plug-ins.  There was no evidence that Facebook had the capacity to link the IP address to an individual.

The logic of the Breyer decision could lead to the opposite result.

In the second part of the Breyer decision, the CJEU went on to conclude that a law permitting media websites to collect users’ personal information without consent in order to facilitate and charge for their services, but requiring them to dispose of that information after the content was viewed, was inconsistent with the European Data Protection Directive, because it was unduly prescriptive.  Article 7(f) of the Directive requires a broader balancing of interests.  The German law in question apparently failed to allow for the possibility that a website operator might have a legitimate interest in retaining the information for a longer period.

This leaves open the possibility that website operators (and others) can justify collection and retention of personal information about their users without consent based on such “legitimate interests”, which the CJEU suggests include the ability to defend against “cyber attacks”.  Presumably, enforcement of IP rights would also be considered a legitimate interest.  However, these interests must, according to the Directive, be balanced against the “fundamental rights and freedoms of the data subject”.

The CJEU has previously concluded that this balancing must be done on a case-by-case basis, rather than categorically, at least at the legislative level.  It is not clear how such an assessment could be applied to IP address logging in practice.  Nuanced balancing of interests is not usually a hallmark of scalable automated processes.  But resolving this question will have to wait for another day.

IIROC Issues Cybersecurity Report Cards to Dealer Firms

Posted in Cybersecurity, Regulatory Compliance

IIROC is providing all dealer member firms it regulates (Firms) with a confidential cybersecurity “report card” that will include:

  • an individual assessment of the Firm’s cybersecurity preparedness program
  • a comparison of the Firm’s cybersecurity practices against the industry and other Firms of similar size and business model
  • a list of cybersecurity areas to which the Firm should be giving priority attention.

The report cards were generated based on the results of an extensive assessment survey that Firms completed in June 2016. The survey responses were benchmarked against a National Institute of Standards and Technology cybersecurity framework that considers governance, threat prevention, threat detection, and threat response and recovery criteria.

IIROC is also using the June survey results to assess the adequacy of each Firm’s cybersecurity policies and procedures. Firms that are assessed as lagging their peers may face further regulatory scrutiny.

For more on this, see the full post by our colleagues on the Canadian Securities Regulatory Monitor.

FDIC Issues Proposed Guidance on Best Practices for Third-Party Lending: Implications for Canadian Banks and Lenders

Posted in Financial, FinTech, Payments
Ana Badour, D.J. Lynde, Dan Doliner

Background

The FDIC recently released for comment proposed guidance with respect to third-party lending (the “Proposed Guidance”). While subject to potential revision following the FDIC’s review of comments, the Proposed Guidance provides valuable insight into current regulatory trends relating to marketplace lending.

The Proposed Guidance defines “Third-Party Lending” as lending arrangements that rely on a third party to perform a significant aspect of the lending process, including: marketing; borrower solicitation; credit underwriting; loan pricing; loan origination; customer service; consumer disclosures; regulatory compliance; loan servicing; debt collection; and data collection (“Third-Party Lending”).

Third-Party Lending includes:

  1. Originating loans for third parties: a financial institution serving as the originator for another entity.
  2. Originating loans through, or with, third parties: a financial institution authorizing third parties to offer loans on the financial institution’s behalf.
  3. Originating loans through third-party platforms: a third party providing a nearly end-to-end lending platform for the financial institution’s use.

Third-party lending would therefore include partnerships between banks and non-bank lenders, such as online lenders and marketplace lenders.

Third-Party Lending arrangements may enable financial institutions to enhance lending services for their customers, including by offering credit products at lower costs. However, third-party lending arrangements present increased risks.

The FDIC views financial institutions (including their boards and management) as ultimately responsible for lending activities involving third parties. The Proposed Guidance sets forth safety and soundness and consumer compliance measures. These measures are intended to address risks related to Third-Party Lending and to increase financial institutions’ ability to comply with all applicable legal requirements.

While not directly applicable to Canadian financial institutions in their Canadian operations, the Proposed Guidance is instructive as it sets out a list of best practices for financial institutions entering into third-party lending arrangements. In addition, the Proposed Guidance could apply to a Canadian entity proposing to enter into a Third-Party Lending relationship with a US financial institution in the US market.

Financial Risks Arising from Third-Party Lending Relationships

Financial institutions dedicate significant resources to the development of lending operations, supervision, standardization and quality assurance. When engaging in Third-Party Lending, a financial institution becomes dependent on the third party’s own operational processes and standards, and loses some of its ability to control the lending process. Such dependency and reduced control expose the financial institution to various risks.

The FDIC lists certain key risks to consider prior to engaging a third party for the purpose of lending:

  1. Strategic Risk: inconsistencies between the business strategy of the financial institution and that of the third party.
  2. Operational Risk: integration with, and exposure to, a third party’s lending process creates operational complexity and reduces the financial institution’s operational control.
  3. Transaction Risk: transactions are processed by the third party in accordance with its own standards and protocols, which may be less comprehensive than the financial institution’s.
  4. Pipeline and Liquidity Risk: if loans originated through third parties are expected to be sold and the third party is unable to consummate the loans as agreed, pipeline risk – and the resulting liquidity and financial risk – arises.
  5. Model Risk: financial institutions may become exposed to risks relating to flaws in financial models developed and used by third parties.
  6. Credit Risk: third parties may apply inadequate credit risk management processes (e.g., underwriting, credit checks and assessment), which in turn may adversely impact financial institutions.
  7. Compliance Risk: in most cases, third parties will devote fewer resources (compared to financial institutions) to compliance assurance, exposing the financial institution to the risk of noncompliance (including with respect to consumer protection, bank secrecy and anti-money laundering requirements).

Mitigating Risks – The Third-Party Lending Risk Management Program

To reduce and manage the risks associated with Third-Party Lending, the FDIC recommends that financial institutions develop and adopt a risk management program defining the financial institution’s policies for all stages of a relationship with a third party:

  1. Pre-engagement due diligence: due diligence and assessment of potential risks, to take place prior to entering into a contract with a third party.
  2. Contract structuring: predefining contractual mechanisms that should be included in any agreement with a third party in order to protect the financial institution. Such contractual mechanisms should, for example, address:
    1. Control and supervision over third parties.
    2. Undertakings by third parties to implement policies required by the financial institution.
    3. The financial institution’s access to third parties’ data for the purpose of adequate audit and supervision.
    4. Adequate representations and indemnification undertakings by third parties.
  3. Review and oversight (during the engagement): procedures addressing all aspects of integrated lending operations with a third party, including:
    1. Predefined limits on the scope of the Third-Party Lending activity (including the types of loans and requirements for subprime products).
    2. Minimum performance standards that must be met for a third party to be engaged by the financial institution.
    3. Monitoring and reporting protocols (including with respect to third parties’ vendors).
    4. Standards and quality assurance processes for credit underwriting.
    5. A detailed process for consumer complaints, addressing reporting and timing for response.
    6. Appointment of a compliance officer and definition of the related resources, authorities and reporting obligations.
    7. Standards and protocols for credit underwriting and administration.
    8. Capital adequacy, loss recognition and liquidity.
    9. Bank Secrecy Act/anti-money laundering compliance.
    10. Standards for information technology and the protection of customers’ data.
    11. Occasional transaction testing by the financial institution.

Takeaways for Business

The Proposed Guidance provides insight into best practices from a regulatory perspective in respect of Third-Party Lending. As partnerships between banks and non-banks continue to increase in Canada, Canadian banks and lenders involved in Third-Party Lending may find the Proposed Guidance helpful.

For more information about our firm’s Fintech expertise, please see our Fintech group page.