CyberLex

Insights on cybersecurity, privacy and data protection law

Bank of Canada White Paper on Creating Digital Currency

Posted in Financial, FinTech
Shauvik Shah, Adrienne Ho

In November 2017, the Bank of Canada (“BoC”) released a white paper evaluating whether a central bank should issue a decentralized digital currency for general public use. The Central Bank Digital Currency (“CBDC”) model considered by the white paper would be different from the BoC’s Project Jasper (as summarized in our previous post) and from a system where the public has accounts at the central bank.

Characteristics of a Central Bank Digital Currency

In fact, the CBDC model proposed would be quite similar to cash, in that it:

  • Would be legal tender, denominated in a sovereign currency, and convertible at par value;
  • Would not incur fees when stored or distributed by a central bank;
  • Could be used at any time and by anyone who has access to the required underlying technology;
  • Would be susceptible to risk and loss;
  • Would be perfectly elastic with respect to its supply;
  • Would be distributed to the public via financial institutions (much like today’s bank notes) subject to any requirements such as anti-money laundering regulation;
  • Would operate on a distributed, bilateral payment network, with the finality and irrevocability of transactions determined by the underlying technology.

The benchmark CBDC model is non-interest bearing and allows holders to remain anonymous. However, the white paper also discusses the possibility of an interest-bearing CBDC (“I-CBDC”). Unlike cash, it would be difficult for the holder of I-CBDC to remain completely anonymous as the central bank, at least, would need to provide identifying information to authorities for tax purposes.

Incentives for Issuing CBDC

The white paper assesses six reasons why CBDC could be issued in addition to existing bank notes and central bank reserves but it does not consider the technological or potential reputational costs of doing so. The white paper also discusses how this analysis might differ where the CBDC is interest-bearing.

  1. Ensuring sufficient central bank money and preserving seigniorage

One potential concern is that, with an increasing shift away from cash towards alternative payment methods, there will be less central bank money available, and seigniorage (the difference between the value of money and the cost to produce it) may be threatened by a decline in the value of outstanding bank notes. A sufficiently large fall in seigniorage might force a central bank to seek government funding, in turn reducing its autonomy. The white paper concludes that neither concern is a compelling reason to issue CBDC: recent trends indicate that the total value of bank notes has generally not declined, and a central bank can use other tools, such as charging higher fees, to preserve its revenue streams.
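
To make the white paper’s definition of seigniorage concrete, here is a stylized per-note illustration (the figures are hypothetical, not drawn from the white paper):

$$
S = F - C, \qquad \text{e.g.}\quad S = \$20.00 - \$0.25 = \$19.75,
$$

where $F$ is the face value of a bank note and $C$ the cost of producing it. A shift away from cash shrinks the stock of notes outstanding and, with it, the base on which $S$ accrues.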

If both CBDC and bank notes were offered, overall seigniorage might increase due to a larger quantity of money in circulation. However, there can also be increased costs to a central bank when both options are provided. The overall effect of I-CBDC on seigniorage is unclear. While seigniorage is reduced due to the payment of interest, there might also be increased demand for I-CBDC. The white paper suggests the degree to which I-CBDC is taken up will depend on how well financial institutions can compete with it.

  2. Reducing the lower bound on interest rates and supporting unconventional monetary policy

The white paper concludes that trying to reduce real interest rates is not a compelling reason to introduce CBDC, as this can already be achieved by reducing the availability of cash, particularly larger-denomination notes. Reducing the volume of bank notes increases the cost of holding cash. The resulting more negative effective yield on cash pushes down the effective lower bound (“ELB”) on interest rates, so that real interest rates fall, in turn boosting economic growth. Introducing CBDC would actually put upward pressure on interest rates, as holding CBDC would make it easier to avoid negative interest rates.
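
One way to express the mechanism (a stylized relation for illustration, not a formula from the white paper): if $c$ denotes the annualized cost of storing, insuring and transporting bank notes, the policy rate can be pushed down only to roughly

$$
r_{\text{ELB}} \approx -c,
$$

since below that point holders are better off converting deposits into cash. Restricting the supply of large-denomination notes raises $c$ and therefore lowers the bound.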

I-CBDC may not be effective in this regard either: in a negative policy rate environment, I-CBDC holders may simply convert their funds into bank notes instead. This in turn would make it more difficult for the central bank to sustain negative rates below the ELB.

Supporting unconventional monetary policy was also found to be an unpersuasive rationale. In the rare case where central bank funds are transferred directly to the public to support quantitative easing, experience has shown that this can be done without CBDC. In fact, using CBDC could impede this monetary tool since, given CBDC’s anonymous nature, the funds might end up being held by non-residents.

  3. Reducing risk and improving financial stability

The effect of CBDC on financial stability is mixed. Where CBDC is non-interest bearing, there is unlikely to be a significant shift away from traditional instruments such as deposit accounts, since CBDC is still subject to risks like theft. The white paper suggests that financial institutions can effectively compete with I-CBDC as a store of value because banks can, for instance, offer enhanced financial services such as wealth management or engage in cost-cutting measures. Nonetheless, in times of economic stress there may be greater uptake of CBDC and I-CBDC, which are viewed as risk-free; the resulting shift away from traditional deposits might disrupt the financial system and increase volatility.

  4. Increasing contestability in payment systems

Although CBDC might increase contestability in the payments industry generally by allowing more financial institutions to access the central bank’s funds, it provides little benefit in the retail and large-value payment contexts. CBDC may be cheaper to use than cash in making retail payments and it might provide greater privacy in online transactions. But, given existing low-cost electronic payment methods, any contestability CBDC could provide is likely small. I-CBDC might provide greater contestability given its incremental benefit of paying interest, though its lack of anonymity may deter some users.

For large-value payments, the white paper suggests that the features of existing real-time gross settlement (“RTGS”) systems would make them preferable over using CBDC and I-CBDC. This is largely because where firms make large-value payments to each other, liquidity support is needed to help manage mismatches in payment flows. Current RTGS systems have mechanisms in place that offset payment orders with each other before funds are actually released, which reduces the need to provide liquidity. This, along with other factors such as the network effects of having many users, makes RTGS systems attractive.
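
The liquidity saving from offsetting payment orders can be seen in a toy example. The sketch below uses hypothetical figures and firm names to illustrate the netting idea only; it is not a model of any actual RTGS system.

```python
# Stylized illustration of payment netting: offsetting payment orders
# against one another before funds are released reduces the liquidity
# each participant must hold.
from collections import defaultdict

# Hypothetical payment orders among three firms: (payer, payee, amount)
orders = [("A", "B", 100), ("B", "A", 80), ("B", "C", 50), ("C", "A", 30)]

gross_outflow = defaultdict(int)  # liquidity needed if every order settles gross
net_position = defaultdict(int)   # position after offsetting inflows and outflows

for payer, payee, amount in orders:
    gross_outflow[payer] += amount
    net_position[payer] -= amount
    net_position[payee] += amount

total_gross = sum(gross_outflow.values())                    # 260
total_net = sum(-v for v in net_position.values() if v < 0)  # 30
print(f"Liquidity needed under gross settlement: {total_gross}")
print(f"Liquidity needed after netting:          {total_net}")
```

Here netting cuts the participants’ combined liquidity requirement from 260 to 30, which is why RTGS systems with offsetting mechanisms reduce the need for liquidity support.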

  5. Promoting financial inclusion

The white paper notes that CBDC is not necessary to promote financial inclusion (as evidenced by the use of M-PESA, a mobile phone-based money transfer, financing and microfinancing service, in Kenya) and in any case, this is not a concern for most advanced economies.

  6. Inhibiting criminal activity

Reducing larger-denomination bank notes might help impede criminal activity, but this does not, as the white paper explains, mean that CBDC should be introduced. CBDC’s anonymous nature could actually facilitate crime. Though this concern could be mitigated by limiting the amount of CBDC that can be held, the white paper notes that such restrictions would reduce the demand for CBDC, might decrease seigniorage, and would make CBDC more expensive to use. That said, I-CBDC can mitigate some of these concerns, as its transactions are not completely anonymous and its use could be restricted to those whose identity can be verified.

Conclusion

Overall, the white paper concludes that many of the reasons suggested for introducing a centrally-controlled digital currency with government oversight are not compelling enough to warrant issuing CBDC and I-CBDC. The white paper suggests that further research is required and that any issuance of CBDC should be done cautiously and incrementally.

For more information about our firm’s Fintech expertise, please see our Fintech group’s page.

 

Supreme Court of Canada Rules Text Messages Can Attract a Reasonable Expectation of Privacy

Posted in Privacy
Erin Chesney, Charlotte-Anne Malischewski

On December 8, 2017, the Supreme Court of Canada (“SCC”) released two decisions dealing with privacy interests in text messages: R v Marakah, 2017 SCC 59 and R v Jones, 2017 SCC 60. At issue in both cases was whether there is a reasonable expectation of privacy in text messages, even after they have been sent and received.

In Marakah, the accused was convicted of multiple firearm offences based on evidence of text messages sent by him but obtained by police from the recipient/co-accused’s phone. As the accused was not the owner of the device from which the text messages were obtained, standing became an issue. The Supreme Court granted Mr. Marakah standing and, based on the Court’s analysis of section 8 of the Charter of Rights and Freedoms (which grants everyone the right to be free from unreasonable search and seizure), decided there was a breach of the accused’s Charter rights and set aside his convictions.

In Jones, the accused was convicted of drug and firearm trafficking charges based on evidence found in text messages. The text messages were stored on the server of the service provider, Telus, and were seized by the police using a production order obtained under the Criminal Code. The SCC found that Mr. Jones had a reasonable expectation of privacy in the text messages stored by Telus and, therefore, standing under section 8 of the Charter to challenge the production order. However, in this case, the SCC found that the accused’s section 8 Charter right was not breached because the records of text messages stored on the service provider’s infrastructure were lawfully seized by means of a production order under the Criminal Code. The conviction of the accused was upheld.

The standing of the accused to assert section 8 rights had been an issue throughout the proceedings because, in each case, the text messages were in a physical location not under the control of the accused (in Marakah, they were on a co-accused’s phone; in Jones, on a service provider’s server). At the Ontario Court of Appeal, in both R v Marakah, 2016 ONCA 542 and R v Jones, 2016 ONCA 543, the accused were denied standing to argue whether there had been a breach of their section 8 Charter rights. A key element of the Court of Appeal’s reasons was its emphasis on control over the physical location of the messages as a decisive factor.

The SCC, however, made it clear that text messages themselves – regardless of their physical location – can attract a reasonable expectation of privacy and, therefore, can be protected against unreasonable search or seizure under section 8 of the Charter.

Text Messages are Electronic Conversations

Writing for the majority in Marakah, Chief Justice McLachlin adopted a broad, functional, and technologically neutral approach to characterizing the subject of the search.  She concluded that text messages are not only private communications, they are “electronic conversation[s],” which include “the existence of the conversation, the identities of the participants, the information shared, and any inferences about associations and activities drawn from it.”

Chief Justice McLachlin explained that text messages reveal a great deal of personal information and that preservation of a “zone of privacy” in which personal information is safe from state intrusion is the very heart of the purpose of section 8 of the Charter. She concluded that “it is reasonable to expect these private interactions — and not just the contents of a particular cell phone at a particular point in time — to remain private.”

Text Messages Can Attract Reasonable Expectations of Privacy

To claim protection under section 8 of the Charter, a claimant must first establish a reasonable expectation of privacy.  Writing for the majority in Marakah, Chief Justice McLachlin explained that whether someone has a reasonable expectation of privacy is a question that must be assessed in the totality of the circumstances and depends on:

  1. Whether the person has a direct interest in the subject matter of the search;
  2. Whether the person has a subjective expectation of privacy; and
  3. Whether that subjective expectation is objectively reasonable.

Control, she indicated, is but one of the factors to be considered in assessing the objective reasonableness of the expectation.  Unlike the Court of Appeal below, Chief Justice McLachlin did not find herself constrained by the property-centric notion of control that has dominated the jurisprudence.  Instead, she explains at paragraph 39 (citations omitted):

[c]ontrol must be analyzed in relation to the subject matter of the search: the electronic conversation. Individuals exercise meaningful control over the information they send by text message by making choices about how, when, and to whom they disclose the information. They “determine for themselves when, how, and to what extent information about them is communicated to others…

In Marakah, the application of this test meant that Mr. Marakah had a reasonable expectation of privacy in the text messages he had sent, which were found on his co-conspirator’s device.

In Jones, the application of this test meant that Mr. Jones had a reasonable expectation of privacy in the texts stored by the service provider.

Privacy Advocates Claim a Win

The decisions are being heralded by civil liberties and privacy advocates who believe decisions about reasonable expectations of privacy should be made based on principle, rather than on how the technology works.

The dissenting judges in Marakah stated that although text messaging is “clearly” private, they were concerned that granting standing in these circumstances would unduly diminish the role of control in the privacy analysis and expand the class of people who can bring a section 8 Charter challenge, adding to the complexity and length of criminal trials and placing even greater strain on an already overburdened criminal justice system.

These decisions are likely to have broad implications for privacy interests in Canada as they set the stage for how the Court will deal with informational privacy in the digital age, in the criminal law, and beyond.

The Supreme Court of Canada’s decision in R v. Marakah, 2017 SCC 59 is available here.

The Supreme Court of Canada’s decision in R v. Jones, 2017 SCC 60 is available here.

—-

McCarthy Tétrault represented one of the intervenors, the Canadian Civil Liberties Association (the “CCLA”), which made submissions largely accepted by the Court on the question of standing in informational privacy cases. The CCLA did not take a position with respect to the outcome for either party in these cases.

 

When Employees Go Rogue: Are Employers Vicariously Liable for the Privacy Breaches of Their Employees?

Posted in Class Actions, Data Breach, Privacy
Sara D.N. Babich

Although there has not yet been a definitive answer to this question in Canada, based on recent UK case law, it appears increasingly likely that, at least in some circumstances, the answer may be “yes”.

In Various Claimants v WM Morrisons Supermarket Plc (Rev 1), [2017] EWHC 3113 (QB) (“Morrisons”), the High Court held that the supermarket chain Morrisons was vicariously liable for the actions of an employee who leaked the payroll data of nearly 100,000 employees. The case is the first successful class action for a data breach in the UK.

More and more, Canadian courts and adjudicators are being asked to grapple with similar privacy issues, particularly in light of the privacy torts that have gained traction in some Canadian jurisdictions. Thus far, Canadian courts have not opined directly on whether vicarious liability may be extended to employers in respect of the privacy breaches of their employees, but the case law to date is consistent with the recent UK decision, which holds that the test for an employer’s vicarious liability for an employee’s privacy breach is the same as for any other wrongful act of an employee.

Current Canadian Law

In Ari v Insurance Corporation of British Columbia, 2015 BCCA 468 (“Ari”), the BC Court of Appeal considered whether certain portions of a proposed class action ought to have been struck. In that case, the claimants alleged, among other things, that the employee’s alleged breach of the Privacy Act, RSBC 1996, c 373, imposed vicarious liability on the employer.

The Court held that the Privacy Act did not exclude the imposition of vicarious liability on the employer and suggested that the principles of vicarious liability may be applied in the context of a breach of privacy by an employee just as they would to any other wrongful act of an employee.

However, since the Court in Ari was considering the test for striking out pleadings (specifically, whether it was plain and obvious that there was no reasonable claim in breach of privacy against the defendants), rather than evaluating the whole of the action on its merits, the case is not a definitive answer to the question of whether and when an employer is vicariously liable for the privacy breaches of its employees.

In Hynes v Western Regional Integrated Health Authority, 2014 NLTD(G) 137, the Supreme Court of Newfoundland and Labrador considered whether the proposed class action for a breach of the Privacy Act, RSNL 1990 c P-22 and for the tort of intrusion upon seclusion should be granted, partly on the basis of whether the employer could be vicariously liable for an employee’s wrongful breach of privacy.

The Court held that it was not plain and obvious that the assertion of vicarious liability would fail. The Court indicated that the issue of whether the employee’s acts were so connected to authorized acts to justify the imposition of vicarious liability (the test for imposing vicarious liability) must be resolved at trial. Therefore, the Court’s certification decision is not determinative of this issue.

In Bigstone v St Pierre, 2011 SKCA 34, this issue was argued before the Chambers judge on an application to strike pleadings, but on appeal vicarious liability was not considered and the claim was struck on the basis that insufficient material facts had been pleaded to support the cause of action.

The Morrisons Case

Morrisons may provide an inkling as to how Canadian courts may approach the issue of vicarious liability of employers for privacy breaches committed by employees.

In Morrisons, a group of claimants brought an Action for breach of the Data Protection Act 1998 (“DPA”), as well as at common law for the tort of misuse of private information and an equitable claim for breach of confidence against Morrisons. The claimants were employees of Morrisons who had had their personal information taken and published online by a disgruntled employee, Mr. Skelton. Mr. Skelton had been a Senior IT Auditor who had obtained access to the private information of the claimants in the course of collating the data for transmission to Morrisons’ auditors.

The claimants alleged both a direct breach of the DPA by Morrisons for failing to protect their data and that Morrisons was vicariously liable for the actions of its employee, Mr. Skelton.

Direct Liability

The Court held that Morrisons did not breach the DPA directly since it was not the “Data Controller” (as defined in the DPA) at the relevant time with respect to the data at issue. The specific acts complained of were those of a third party, Mr. Skelton, and not Morrisons.

The Court also considered whether Morrisons breached the DPA by failing to take appropriate measures to safeguard the data. Morrisons had put in place security systems which were generally considered by the Court to be adequate and appropriate.

The Court also assessed whether Morrisons ought to have done more to supervise Mr. Skelton. Although Morrisons could have taken additional measures to monitor Mr. Skelton and his work, the Court indicated that there is a level of additional supervision which is not only disproportionate to the risk but that may result in a claim by the employee being supervised that the measures are unfairly intrusive to his or her own rights.

Vicarious Liability

The Court then considered whether Morrisons was vicariously liable for the actions of Mr. Skelton. The Court held that vicarious liability was not excluded by the DPA and can be imposed where the circumstances so warrant. The Court found that the principles of vicarious liability of an employer for the acts of its employees do not change simply because the wrong complained of relates to a privacy breach as opposed to a different wrongful act of the employee.

Whether liability will be imposed depends on whether one of the two bases for liability in Bazley v Curry, [1999] 2 SCR 534 is met: (1) the employer authorized the acts, or (2) the unauthorized acts are so connected with the authorized acts that they may be regarded as modes of doing an authorized act. The Court also considered the policy rationales behind imposing vicarious liability in the circumstances.

In Morrisons, the Court found that “there was an unbroken thread that linked his work to the disclosure: what happened was a seamless and continuous sequence of events” even though the disclosure itself did not occur on a company computer or on company time. Dealing with sensitive confidential data was expressly part of Mr. Skelton’s role. His job was to receive and pass on data to a third party. The fact that the actual third party recipient of the data was unauthorized did not disengage the act from his employment.

The Court noted that cases where vicarious liability has been upheld are those “where the employee misused his position in a way which injured the claimant” and “it was just that the employer who selected him and put him in that position should be held responsible.” Further justification for imposing liability is that the employer has at least the theoretical right to control the employee’s actions and has the ability to protect itself by insuring against the liability.

In the result, Morrisons stands for the proposition that a company can be held liable to compensate affected individuals for loss (including non-pecuniary loss such as emotional distress) caused by a data breach, even where the breach was caused by an employee and there was no wrongdoing on the part of the company.

Importantly, the Court invited Morrisons to appeal its conclusion on vicarious liability, recognizing that imposing liability in the circumstances may have served to render the Court an accessory to Mr. Skelton’s criminal aim of punishing Morrisons for taking disciplinary action against him.

What it Means for Employers

Although there is not yet a definitive answer in Canada, this case and the preceding Canadian case law suggest that companies must consider carefully whom they place in trusted roles and, in addition to the systems they use to protect data, what measures they might take to guard against human risk, which the Court in Morrisons acknowledged can never be fully anticipated or prevented.

Location of Third-Party’s Server Housing Municipal Data Ordered Disclosed

Posted in Cybersecurity, FIPPA/MFIPPA
Eva Guo

Against the backdrop of terrorist attacks, alleged voter fraud and fake news, one would think that arguments invoking the security and integrity of the voting process would be compelling. However, on November 15, 2017, the BC Office of the Information and Privacy Commissioner (“OIPC”) rejected arguments along these lines and ordered the City of Vancouver (“City”) to disclose the physical location of the computer servers that stored voter data for the City’s municipal election.[1]

Pursuant to BC’s Freedom of Information and Protection of Privacy Act (“FIPPA”),[2] a journalist requested that the City disclose its contract with the company that provided voting software and voter data storage to the City and to other municipalities across Canada. The City partially complied with the request, disclosing the entirety of the contract except for the physical location of the computer servers and the names of their corporate operators. The City relied on section 15(1)(l) of FIPPA, which permits an exemption from disclosure based on the public body’s assessment that “disclosure could reasonably be expected to harm the security of any property or system, including a building, a vehicle, a computer system or a communications system”.

The OIPC applied the Supreme Court of Canada’s formulation of “reasonable expectation of probable harm” in Ontario v Ontario[3] as the appropriate standard of proof. Under that standard, the statutory language “could reasonably be expected to” requires a middle ground between that which is probable and that which is merely possible. The Supreme Court opined that: “An institution must provide evidence ‘well beyond’ or ‘considerably above’ a mere possibility of harm in order to reach that middle ground”.[4]

The City argued that voter data is “highly sensitive” and a target for criminal activity, and stolen voter data could be used to interfere with ongoing or future elections. Further, the City submitted affidavit evidence of the Chief Technology Officer (“CTO”) of the service provider, in which the CTO stated that: “These addresses have stringent physical security precautions but, for a dedicated attacker, knowledge of the address could provide additional means to initiate social engineering attacks focusing on employees at these facilities.”

The City also relied on two previous Orders holding that FIPPA’s section 15(1)(l) exemption applied to information that would allow or assist third parties in gaining unauthorized access to a computer system or weaken the security of a computer system. The OIPC distinguished these Orders from the current case, as neither dealt with the physical location of servers but rather with user IDs, passwords, network configurations, security settings and the like. In the end, the OIPC was not satisfied that disclosing the server locations would make unlawful access considerably more likely than a mere possibility.

FIPPA expressly provides in its section 2 that one of the purposes is to make public bodies more accountable to the public. OIPC, in its reasoning in this decision, reiterated the strong public interest in transparency in relation to contracts involving public services delivered by private contractors and reinforced its position that the risk of harm under section 15(1)(l) must be sufficient to outweigh that public interest (footnotes omitted):

There is a strong public interest in transparency in relation to contracts involving public services delivered by private contractors and the risk of harm under s. 15(1)(l) must be sufficient to outweigh that public interest. The City has not satisfied me that the security of the primary and backup server facilities or the server computer system itself could reasonably be expected to be harmed by disclosure of their location or the names of the companies which operate them. Therefore, I find the City is not authorized to refuse the applicant access to this information pursuant to s. 15(1)(l).

This Order will be of obvious concern to companies which contract with public sector bodies for the storage and processing of data, many of which rely on the secrecy of their physical operations as part of their overall IT security plan.

Companies should consider reviewing the terms of their contracts and work with counsel to take steps to decrease the likelihood that they will be adversely affected by requests such as the one at issue here.

For more information about our firm’s data expertise, please see our Cybersecurity, Privacy and Data Management Group’s page or see McCarthy Tétrault’s Cybersecurity Risk Management – A Practical Guide for Businesses.

 

[1] Order F17-54, 2017 BCIPC 59 (CanLII)

[2] RSBC 1996, c 165 (“FIPPA”)

[3] Ontario (Community Safety and Correctional Services) v. Ontario (Information and Privacy Commissioner), 2014 SCC 31 (CanLII) [Ontario v Ontario]

[4] Order F17-54, at para 10 citing Ontario v Ontario

In the Future, Everyone Will Have Their Personality Misappropriated for 15 Minutes

Posted in Privacy
Jade Buchanan

At the same time Andy Warhol was predicting the intense, short-lived “15 minutes of fame” that has now manifested as viral videos, legal scholars were pondering the implications of technology on our private lives.[1] While nobody came close to predicting that a social media website would be sued for using photos people voluntarily uploaded to promote products, legal remedies for “appropriation, for the defendant’s advantage, of the plaintiff’s name or likeness” were already emerging in the 1960s.

So what is the law in Canada now? Can you sell “Damn Daniel” fidget spinners? Or use Chewbacca Mom to promote your crowd-funded hover boards? What if the person’s “fame” is their 407 Twitter followers? The answer is usually going to be “not without their consent”, but the reason why is a little less clear.

The Law on Misappropriation of Personality in Canada’s Common Law Jurisdictions

In Canada’s common law jurisdictions, if someone misuses your likeness to promote a product, your remedy will depend on where you live and whether or not you are living at all.

Four common law provinces have legislation that makes invasion of privacy a cause of action: British Columbia, Manitoba, Newfoundland and Labrador and Saskatchewan (the “Privacy Acts”). All four prohibit the use of a likeness for advertising. For example, the BC Privacy Act states it as follows:

[3](2) It is a tort, actionable without proof of damage, for a person to use the name or [likeness, still or moving] of another for the purpose of advertising or promoting the sale of, or other trading in, property or services, unless that other, or a person entitled to consent on his or her behalf, consents to the use for that purpose.

Let’s call this “statutory misappropriation”.

Things get a little murkier when it comes to the common law. It is generally accepted that Canadian courts will recognize misappropriation of personality as a cause of action that is “proprietary in nature and the interest protected is that of the individual in the exclusive use of his own identity in so far as it is represented by his name, reputation, likeness or other value.”[2] Claims of misappropriation of personality have typically been advanced by famous people, such as CFL linebacker Bob Krouse and the estate of Glenn Gould (both failed). Damages have been tied to the royalties the celebrity in question would have received if they had consented to the use of their likeness.[3] That all said, the Ontario Court of Appeal has stated that “Ontario has already accepted the existence of a tort claim for appropriation of personality” in reference to the theoretical privacy tort of “appropriation, for the defendant’s advantage, of the plaintiff’s name or likeness”.[4] That decision suggests that misappropriation of personality even extends to the non-famous, although we cannot yet say this is definitive across all common law jurisdictions.

Do Personality Rights Survive Forever?

All of the Privacy Acts except Manitoba’s, which is silent on duration, state that the right to sue for invasion of privacy is extinguished by the death of the affected person.

The common law right to claim for misappropriation of personality likely survives death, but it is not clear for how long: at least one Canadian court has suggested the right survives death without specifying its duration.[5] That makes sense when you consider that part of the reason for the right is to give famous people the exclusive right to monetize their fame. Just like copyrights, they should be able to pass the economic value of their personality rights to their heirs.

Does Misappropriation of Personality Still Apply?

The Privacy Acts suggest that a person could sue for both misappropriation of personality and statutory misappropriation. Except in British Columbia, the Privacy Acts state that the rights they create do not derogate from any other rights of action or remedies otherwise available. While a court could find that the British Columbia Privacy Act is a full codification of misappropriation of personality, there is a decision from the Supreme Court of British Columbia that considered claims for misappropriation of personality and statutory misappropriation separately (but dismissed both because the individual was not actually identifiable).[6]

The co-existence of statutory and common law claims suggests that, while the dead cannot pursue claims of statutory misappropriation, common law misappropriation may still be available.

What are the Implications?

If you are going to include someone in your advertising or promotions (through their name, likeness, portrait, voice, caricature or otherwise) you need their consent. If you are unsure of whether or not what you are doing constitutes misappropriation, you need legal advice.

 

[1] William L. Prosser, “Privacy” (1960) 48 Calif L Rev 383.

[2] Joseph v. Daniels, 1986 CanLII 1106 (BC SC).

[3] Athans v. Canadian Adventure Camps Ltd. et al., 1977 CanLII 1255 (ON SC)

[4] Jones v. Tsige, 2012 ONCA 32.

[5] Gould Estate v Stoddart Publishing Co., 1996 CanLII 8209 (ON SC).

[6] Supra note 2.

Financial Stability Board Releases Report on Financial Stability Implications of Artificial Intelligence and Machine Learning

Posted in AI and Machine Learning, Big Data, Financial, FinTech
Brianne Paulin, Ana Badour, Kirsten Thompson, Carole Piovesan

On November 1, 2017, the Financial Stability Board (the “FSB”)[1] published its report on the market developments and financial stability implications of artificial intelligence (“AI”) and machine learning in financial services. The FSB noted that the use of AI and machine learning in financial services is growing rapidly and that the application of such technologies to financial services is still evolving.

“Use Cases” of AI and Machine Learning in the Financial Sector

The FSB identified current and potential types of use cases of AI and machine learning in financial services, including: “(i) customer-focused uses, (ii) operations-focused uses, (iii) uses for trading and portfolio management in financial markets, and (iv) uses by financial institutions for Regulatory Technology (“RegTech”) or by public authorities for supervision (“SupTech”).”

Customer-Focused Uses

The FSB found that “financial institutions and vendors are using AI and machine learning methods to assess credit quality, to price and market insurance contracts, and to automate client interaction.” In the insurance industry specifically, machine learning is being used to analyze big data, improve profitability, and increase the efficiency of claims and pricing processes. Global investment in InsurTech totaled $1.7 billion in 2016.

Such applications of AI and machine learning can increase market stability, as financial institutions have a greater ability to analyze big data to enhance their knowledge of trading patterns and better anticipate trades. The FSB warned, however, that because there is little data on how the market would react to increased use of AI and machine learning by market participants, such use could trigger a market shock. Market participants could be enticed to adopt these technologies if competitors applying AI and machine learning to customer-focused uses are increasing profits and outperforming them; this accelerating uptake could cause a market shock and bring instability to the market.

Operations-Focused Uses and Trading and Portfolio Management

Trading firms would be able to better assess market impacts and shifts in market behaviour, increasing market stability. An example of such use is ‘trading robots’ that can react to market changes. These robots can execute trades and assess the market impact of particular trades, allowing trading firms to collect more information and, in turn, modify their trading strategies. The FSB also identified back-testing as an area of growth for the use of AI and machine learning. Back-testing is important for banks in their assessment of risk models. AI would provide a greater understanding of shifts in market behaviour, which the FSB stated could reduce the underestimation of risks in such instances.

Uses of AI and Machine Learning by Financial Institutions

The FSB found that AI and machine learning are used by financial institutions for regulatory purposes and by authorities for supervision purposes. The RegTech market is expected to reach $6.45 billion by 2020. Several regulators around the globe are using AI and machine learning to facilitate regulatory compliance, such as applying AI and machine learning to the Know-Your-Customer process. In terms of SupTech, the report noted the implementation of AI and machine learning in various supervision functions by authorities, such as monetary policy assessments. A 2015 survey of central banks’ use of AI and machine learning, cited by the FSB, found that central banks anticipated using big data reported by third parties for economic forecasting and for other financial stability purposes.

Implications of AI and Machine Learning on Market Stability

The FSB warned that, though AI and machine learning could benefit market stability by reducing costs, increasing efficiency and increasing profitability for financial institutions, financial institutions must implement governance structures and maintain auditability to ensure that potential effects beyond the institutions’ balance sheets are understood. Such governance structures include ‘training’ to ensure that users understand the technologies and applications of AI and machine learning, and promoting algorithmic transparency and accountability so that decisions made by an algorithm, such as the credit score assigned to a particular customer, can be understood and explained.

Without sound governance structures, the application of AI and machine learning could increase the risk to financial institutions. The report noted that “beyond the staff operating these applications, key functions such as risk management and internal audit and the administrative management and supervisory body should be fit for controlling and managing the use of applications.”

The benefits of using AI and machine learning systems for consumers and investors could translate into lower costs of services and greater access to financial services. AI and machine learning could allow financial institutions to assess big data to tailor financial services to specific customers and investors. The FSB noted that proper governance structures must be in place to protect the privacy and data of both consumers and investors.

The FSB also raised concerns over the small number of third party providers of data in the financial system. Bank vulnerability could grow if the financial institutions rely on the same small number of third-party providers, using similar data and algorithms. On dependency, the FSB noted that “third-party dependencies and interconnections could have systemic effects if such a large firm were to face a major disruption or insolvency.” If financial institutions are unable to use big data from new sources, dependencies on previous data could develop, potentially leading to market shocks and bringing instability in the financial system.

This same concern was recently echoed by the Bank of Canada in its November 2017 Financial System Review, in which it said:

As financial services rely increasingly on information technology, there are growing operational risks from third-party service providers. Since providing services such as cloud computing, big data analytics and artificial intelligence requires a critical mass of users to remain cost-effective, global markets could become dominated by a few large technology firms. Higher industry concentration would raise systemic risks from operational disruptions and cyber attacks. Investments by service providers to avoid disruptions have benefits beyond the individual firm and can be considered a public good.

Legal and Ethical Issues

The FSB also provided an analysis of certain legal issues that arise in the use of AI and machine learning with big data, specifically in the context of data protection and data ownership rights. The FSB highlighted efforts in several jurisdictions to adopt guidelines for the protection of data ownership and privacy.[2] Some jurisdictions are also assessing whether consumers should have the ability to understand certain techniques used in the application of AI and machine learning to credit systems. Other issues that arise in the use of AI and machine learning with big data involve anti-discrimination laws and equal opportunity laws. The FSB noted that the use of AI and machine learning could lead to discriminatory practices and results, even without the inclusion of gender or racial information. Finally, liability issues could also arise, such as determining whether experts who rely on algorithms could be liable for their decisions.

Next Steps

The FSB noted that it will continue monitoring the uses of AI and machine learning in the financial markets, especially as the application of such technologies to the financial sector is growing.

AI and Financial Services at McCarthy Tétrault

In October 2017, McCarthy Tétrault released a White Paper on AI, “From Chatbots to Self-Driving Cars: The Legal Risks of Adopting Artificial Intelligence in Your Business”, in which we featured some preliminary research on AI in the financial services sector. In particular, we highlighted specific areas where we see the immediate incorporation of AI in financial services: investments and portfolio allocations, compliance and RegTech, and AI-powered chatbots.

[1] The FSB is an international body that monitors and makes recommendations about the global financial system. Its members include all G20 major economies (including Canada).

[2] In particular, the FSB referenced the OECD’s guidelines on the protection of privacy and cross-border uses and the European Union’s “General Data Protection Regulation” coming into force in 2018.

 

For more information about our firm’s Fintech expertise, please see our Fintech group’s page.

Canadian Competition Bureau Releases Fintech Report for Consultation

Posted in Big Data, FinTech, Open Banking
Jonathan Bitran, Ana Badour, Kirsten Thompson, Donald Houston, Michele F. Siu

On November 6, 2017, the Competition Bureau (Bureau) released a draft report on its market study into technology-led innovation in the Canadian financial services (Fintech) sector.[1] The Bureau has invited feedback from interested parties only until November 20, 2017, an uncharacteristically short comment period.

Financial services are an area of interest for the Bureau due to their significance to the Canadian economy and Canadian employment as well as the critical role they play in the daily lives of Canadians. While offering a variety of products and services across many financial service segments, Fintechs are typically internet-based and application-oriented, promising user-friendly, efficient consumer interfaces. The Bureau asserts that Fintech represents an opportunity to increase competition in Canada’s financial services sector, which in turn could potentially lead to lower prices and increased choice and convenience for consumers and small and medium-sized enterprises (SMEs). To that end, the report covers Fintech innovation in segments that directly impact consumers and SMEs: (i) payments and payment systems (e.g., mobile wallets), (ii) lending (e.g., crowdfunding), and (iii) investment dealing and advice (e.g., robo-advisors). The report specifically does not cover insurance, cryptocurrencies/blockchain, payday loans, loyalty programs, deposit-taking, accounting, auditing, tax preparation, large corporate, commercial or institutional investing and banking (e.g., pension fund management, mergers and acquisitions) or business-to-business financial services.

The report is intended as guidance for financial services sector regulators and policymakers. It is a dense report, but the Bureau’s core message is that regulation of Fintech is necessary to protect the safety, soundness and security of the financial system, but should not unnecessarily impede competition and innovation in financial services. Or as Goldilocks might say, regulation should be “just right”. In examining the regulatory barriers to entry in Fintech, the Bureau makes 11 key recommendations, summarized below, which are intended to modernize financial services regulation by reducing barriers to innovation and competition in order to encourage Fintech growth.

Bureau’s Recommendations For Pro-Competitive Financial Services Regulation

  1. Technologically-neutral. The Bureau asserts that regulation should be technology‑neutral and device‑agnostic to accommodate and encourage new (and yet‑to‑be developed) technologies. For example, requiring “wet” signatures (i.e., in person with a pen) prevents the use of new digital signature technology that also provides sufficient security.
  2. Principles-based. The Bureau asserts that regulation should be based on principles or expected outcomes and not strict rules on how to achieve the desired outcome. This is to allow for the implementation of new technologies, which might otherwise be barred by a prescriptive regime, while still protecting policy goals.
  3. Function-based. The Bureau asserts that regulation should be based on the functions carried out by an entity, not its identity (e.g., if a bank and a start-up are engaging in the same activity, they should face the same regulation with respect to that activity). This is to ensure that all entities have the same regulatory burden and consumers have the same protections when dealing with competing service providers.
  4. Proportional to risk. The Bureau asserts that regulation should be proportional to the risks that it aims to mitigate. Along with technology‑neutral, device-agnostic, principles‑based, and function‑based regulation, proportional regulation would level the playing field between Fintech entrants and incumbent service providers that offer the same types of services.
  5. National harmonization. The Bureau asserts that regulations should be harmonized across Canada. Although there has been improvement, a patchwork of provincial and federal regulations can make compliance unduly difficult and costly.
  6. Facilitate sectoral collaboration. The Bureau proposes that collaboration throughout the sector should be encouraged, including (i) among regulators to enable a unified approach, (ii) between the public and private sector to improve understanding of the latest services among regulators and of the regulatory framework among Fintech firms, and (iii) among industry participants to help bring more products and services to market (while avoiding anticompetitive collaborations). The UK, Australia, and Hong Kong currently facilitate such collaboration and the Bureau asserts that Canada should follow suit.
  7. Policy leadership. The Bureau proposes identifying a Fintech policy lead for Canada to facilitate Fintech development. The Fintech policy lead could then act as a gateway to other agencies, give Fintech firms a one‑stop resource and encourage investment in innovative businesses and technologies in the financial services sector.
  8. Facilitate access to core services. The Bureau supports promoting greater access to the financial sector’s core infrastructure and services to facilitate the development of Fintech services. Fintech firms often require access to core services (e.g., the payment system) in order to provide their services (e.g., bill payment app). Under the appropriate risk‑management frameworks, Fintech firms should be provided with access, so that regulation does not stifle useful services.
  9. Open banking. The Bureau supports embracing more “open” access to systems and data (also described as “open banking”). With appropriate customer consent and risk mitigation frameworks, the Bureau asserts that this will allow Fintech firms to access consumer banking information in order to develop bespoke price‑comparison tools and other applications that facilitate competitive switching by consumers. Looking abroad, the UK competition regulator has mandated the implementation of “open banking” (the Bureau does not have this authority). The Bureau has recognized the key role of data (specifically, big data) in Fintech and other sectors in its recently released draft paper for consultation, Big data and Innovation: Implications for competition policy in Canada (see our further comments on this paper). The comment period for this paper is open until November 17, 2017.
  10. Digital identification. The Bureau supports exploring the potential of digital identification for use in client identification processes. Digital identification could reduce the cost of customer acquisition (for new entrants and incumbent service providers), reduce the costs of switching for consumers and facilitate regulatory compliance where identity verification is needed.
  11. Continuing review. The Bureau supports continuing the frequent review of regulatory frameworks and the adaptation of regulation to changing market dynamics (e.g., consumer demand and advances in technology) to ensure they achieve their objectives in a way that does not unnecessarily inhibit competition.

The report comes in the context of a number of ongoing Fintech-related consultations and initiatives, including the recent announcement by the Government of Ontario that it would create a “regulatory super sandbox”, the launch earlier this year of regulatory sandboxes by the Canadian Securities Administrators, the modernization initiative of the Canadian payments system by Payments Canada, the federal consultation on the national retail payments oversight framework and the federal consultation on the federal financial sector framework.

The Bureau has clearly put significant thought and effort into this report. The impact it will have on financial services regulators and policymakers remains to be seen.

For more information about our Firm’s Competition and Fintech expertise, please see our Competition group’s and Fintech group’s pages.


[1] In May 2016 the Bureau announced it would launch this study. The Commissioner of Competition has emphasized the Bureau’s commitment to use its authority and jurisdiction to support Fintech innovation noting that “competitive intensity fosters innovation”. Earlier this year, the Bureau hosted industry stakeholders and federal and provincial regulators at a workshop to discuss the regulatory challenges faced by Fintech and possible approaches that could enhance the efficiency and effectiveness of Canada’s financial services sector.

U.S. Consumer Financial Protection Bureau Sets Out Principles for Consumer-Authorized Data Sharing and Aggregation

Posted in Big Data, FinTech, Open Banking
Kirsten Thompson

On October 18, 2017, the U.S. Consumer Financial Protection Bureau (“CFPB”) outlined the principles to be followed (“Principles”) when consumers authorize third party companies to access their financial data to provide certain financial products and services. These Principles will be of particular note to the Fintech sector, in which a significant number of companies incorporate into their business model some kind of aggregation or sharing of consumer financial information.

The CFPB refers to this as the “consumer-authorized data-sharing market” and has stated its two-fold goal: to “help foster the development of innovative financial products and services, increase competition in financial markets, and empower consumers to take greater control of their financial lives”, while at the same time ensuring protection for consumers “that provide, use, or aggregate consumer-authorized financial data”.

The Principles line up quite closely with the ten Fair Information Principles that underlie Canadian federal privacy legislation (PIPEDA). Absent (or diluted) from the CFPB Principles are the Fair Information Principles regarding “Limiting Use, Disclosure and Retention”, “Limiting Collection” and “Identifying Purposes”. The CFPB Principles also attempt to address many of the same issues that arise in the mandatory “Open Banking” regimes in the EU and the UK, but in a much less comprehensive manner.

Background

Under the Dodd-Frank Act, the CFPB was empowered to implement and enforce consumer financial law “for the purpose of ensuring that all consumers have access to markets for consumer financial products and services and that markets for consumer financial products and services are fair, transparent, and competitive.”[1] The CFPB was to exercise its authorities so that “markets for consumer financial products and services operate transparently and efficiently to facilitate access and innovation.”[2]

Increasingly, companies have been accessing consumer account data with consumers’ authorization and providing services to consumers using data from the consumers’ various financial accounts. Such “data aggregation”-based services include the provision of financial advice or financial management tools, the verification of accounts and transactions, the facilitation of underwriting or fraud-screening, and a range of other functions. This type of consumer-authorized data access and aggregation holds the promise of improved and innovative consumer financial products and services, enhanced control for consumers over their financial lives, and increased competition in the provision of financial services to consumers.

The CFPB’s interest in consumer data (and specifically Open Banking) was telegraphed by the Director of the CFPB in his remarks at the 2016 Money 20/20 conference, when he stated that the CFPB was “gravely concerned” that financial institutions were limiting or shutting off access to financial data, rather than “exploring ways to make sure that such access…is safe and secure.” (see our blog post on this here).

However, there are also challenges to this sharing of data – privacy, security and regulatory compliance being just a few. The CFPB notes that a range of industry stakeholders are working, through a variety of individual arrangements as well as broader industry initiatives, on agreements, systems, and standards for data access, aggregation, use, redistribution, and disposal. The CFPB believes, however, that consumer interests must be the priority of all stakeholders as the market for aggregation services develops.

The CFPB issued a Request for Information in 2016 to gather feedback from a wide range of stakeholders, including large and small banks and credit unions, their trade associations, aggregators, “fintech” firms, consumer advocates, and individual consumers.

The CFPB has now released its set of Consumer Protection Principles intended to reiterate the importance of consumer interests. They are, however, non-binding and not intended to alter, interpret, or otherwise provide guidance on existing statutes and regulations that apply.

1) Access

Consumers should be able, upon request, to obtain information in a timely manner about their ownership or use of a financial product or service from their product or service provider. Further, consumers should generally be able to authorize trusted third parties to obtain such information from account providers to use on behalf of consumers, for consumer benefit, and in a safe manner.

The CFPB expects that financial account agreements and terms of use will, among other things, “not seek to deter consumers from accessing or granting access to their account information.” Notably, “[a]ccess does not require consumers to share their account credentials with third parties”, which suggests that screen scraping mechanisms cannot be made mandatory.

2) Data Scope and Usability

The scope of data that can be consumer-authorized for access should be broad, according to the CFPB, and may include “any transaction, series of transactions, or other aspect of consumer usage; the terms of any account, such as a fee schedule; realized consumer costs, such as fees or interest paid; and realized consumer benefits, such as interest earned or rewards.” With this scope of information made available, consumers will be able to compare fees and the overall cost of banking at a particular company or institution.

3) Control and Informed Consent

The CFPB suggests that the authorized terms of access, storage, use, and disposal be fully and effectively disclosed to the consumer, understood by the consumer, not overly broad, and consistent with the consumer’s reasonable expectations in light of the product(s) or service(s) selected by the consumer. While no explanation accompanies the statement, the CFPB states that firms should take steps to ensure “[c]onsumers are not coerced into granting third-party access.”

Furthermore, consumers must be able to readily and simply revoke authorizations to access, use, or store data. Similarly, consumers should be able to require “third parties to delete personally identifiable information.”
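
By way of illustration only, a consent registry that supports ready revocation might be structured along the following lines; every name here is hypothetical, and a real system would also need to propagate revocation and deletion requests to any downstream recipients of the data.

```python
# A hypothetical consent registry supporting grant, revocation, and
# scope-based authorization checks. Names are illustrative only.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Consent:
    consumer_id: str
    third_party: str
    scopes: list[str]                  # e.g. ["transactions", "balances"]
    granted_at: datetime
    revoked_at: datetime | None = None

class ConsentRegistry:
    def __init__(self) -> None:
        self._records: list[Consent] = []

    def grant(self, consumer_id: str, third_party: str, scopes: list[str]) -> None:
        self._records.append(
            Consent(consumer_id, third_party, scopes, datetime.now(timezone.utc)))

    def revoke(self, consumer_id: str, third_party: str) -> None:
        # Revocations are recorded rather than deleted, preserving an audit trail.
        for rec in self._records:
            if rec.consumer_id == consumer_id and rec.third_party == third_party:
                rec.revoked_at = datetime.now(timezone.utc)

    def is_authorized(self, consumer_id: str, third_party: str, scope: str) -> bool:
        return any(rec.consumer_id == consumer_id
                   and rec.third_party == third_party
                   and scope in rec.scopes
                   and rec.revoked_at is None
                   for rec in self._records)
```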

4) Authorizing Payments

The CFPB reminds firms that authorized data access, in and of itself, is not payment authorization. A separate and distinct authorization to initiate payments must be obtained. Providers that access information and initiate payments may reasonably require consumers to supply both forms of authorization to obtain services.

5) Security

The sharing of information can raise security concerns and the CFPB advises that consumer data are to be maintained “in a manner and in formats that deter and protect against security breaches and prevent harm to consumers.” Login and other access credentials are to be secured and “all parties that access, store, transmit, or dispose of data use strong protections and effective processes to mitigate the risks of, detect, promptly respond to, and resolve and remedy data breaches, transmission errors, unauthorized access, and fraud”. Further, firms should transmit data only to third parties that also have such protections and processes.
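
As a purely illustrative example of the kind of control this contemplates, any retained access credentials can be stored as salted, memory-hard hashes rather than in recoverable form, so that a database breach does not expose usable passwords. The sketch below uses only Python’s standard library; the cost parameters are illustrative, not prescriptive.

```python
# Illustrative credential protection: salted scrypt hashing (Python 3.6+).
# Cost parameters (n, r, p) are examples, not recommendations.
import hashlib
import hmac
import os

def hash_credential(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique random salt per credential
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_credential(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```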

6) Access Transparency

Consumers should be informed of which of their authorized third parties are accessing or using information regarding their accounts. This can include the identity and security of each such party, the data they access, their use of such data, and the frequency at which they access the data.

7) Accuracy

Consumers should expect the data they access or authorize others to access or use to be accurate and current, and firms should provide reasonable means for consumers to dispute and resolve data inaccuracies, regardless of how or where the inaccuracies arise.

8) Ability to Dispute and Resolve Unauthorized Access

Consumers should also have reasonable and practical means to dispute and resolve instances of unauthorized access and data sharing, unauthorized payments conducted in connection with or as a result of either authorized or unauthorized data sharing access, and failures to comply with other obligations, including the terms of consumer authorizations. Interestingly, the CFPB advises that consumers “are not required to identify the party or parties who gained or enabled unauthorized access to receive appropriate remediation.”

9) Efficient and Effective Accountability Mechanisms

The CFPB advises that commercial participants should be accountable for the risks, harms, and costs they introduce to consumers. It is of the view that this helps align the interests of the commercial participants, and suggests such participants be “incentivized” and empowered to prevent, detect, and resolve unauthorized access and data sharing, unauthorized payments conducted in connection with or as a result of either authorized or unauthorized data sharing access, data inaccuracies, insecurity of data, and failures to comply with other obligations, including the terms of consumer authorizations.

Canada

The situation in Canada is not dissimilar, with various stakeholders and regulators on the one hand recognizing a need for innovation driven by consumer data access and on the other, the need to protect consumers and their data.

For instance, in March of 2011, the Financial Consumer Agency of Canada (“FCAC”) issued a statement warning Canadians to be aware of the possible risks of disclosing their online banking and credit card information to financial aggregation services. Aside from the obvious data security and privacy risks, the FCAC cautioned that using such a service could also violate the terms and conditions of the consumer’s account agreements (see our blog post on this here).

[1] 12 U.S.C. 5511(a).

[2] 12 U.S.C. 5511(b)(5).

For more information about our firm’s Fintech expertise, please see our Fintech group’s page.

Canadian Securities Administrators Issues Staff Notice providing Cybersecurity and Social Media Guidance

Posted in Cybersecurity
Kirsten ThompsonEriq Yu

On October 19, 2017, the Canadian Securities Administrators (“CSA”), representing provincial and territorial securities regulators, issued CSA Staff Notice 33-321 – Cyber Security and Social Media (the “Notice”). The Notice serves to publish the results of the CSA’s survey of cybersecurity and social media practices of registered firms dealing in securities, including those registered as investment fund managers, portfolio managers, and exempt market dealers.

The survey was the result of a CSA initiative following the release of CSA Staff Notice 11-332 – Cyber Security in September 2016 in which CSA announced its intention to determine the materiality of cybersecurity risks. Social media and its surrounding challenges for registered firms were previously discussed in the CSA’s Staff Notice 31-325 – Marketing Practices of Portfolio Managers in 2011.

Importantly, issues concerning cybersecurity gain new prominence with the release of this Notice. The Notice emphasizes that addressing the risks posed by cyber threats and the use of social media is required to comply with business obligations imposed by Section 11.1 of National Instrument 31-103 (“NI 31-103”), the Instrument that outlines registrant requirements and obligations. Specifically, Section 11.1 requires registered firms to “establish, maintain and apply policies and procedures that establish a system of controls and supervision sufficient to provide reasonable assurance that the firm and each individual acting on its behalf complies with securities legislation and manage the risks associated with its business in accordance with prudent business practices.”

Over Half of Registered Firms Experienced a Cyber Security Incident

Conducted between October 11, 2016 and November 4, 2016, the survey drew responses from 63% of the 1,000 firms invited to participate. Overall, the survey found that 51% of firms experienced a cybersecurity incident in 2016, including phishing (43%), malware incidents (18%), and fraudulent email attempts to transfer funds or securities (15%).

The survey questions focused on, among other things, cybersecurity incidents, policies, and incident response plans; social media policies and practices; due diligence to assess the cybersecurity practices of third-party vendors and service providers; encryption and backups; and the frequency of internal cyber risk assessments.

Cybersecurity Policies, Procedures and Training

Specifically, for the areas identified, the survey found that:

  • Only 57% of firms have specific policies and procedures to address the firm’s continued operation during a cybersecurity incident.
  • Only 56% of firms have policies and procedures for cybersecurity training for employees.
  • 9% of firms have no policies and procedures concerning cybersecurity at all.
  • 18% of firms do not provide cybersecurity-specific training to employees.

Guidance: The resulting CSA guidance indicates that all firms should have policies and procedures that address, among other things, the use of electronic communications; the use of firm-issued electronic devices; reporting cybersecurity incidents; and vetting third-party vendors and service providers. Training of employees on cyber risks, including the privacy risks associated with the collection, use, or disclosure of data, should take place with “sufficient frequency to remain current”, recognizing that training more frequently than annually may be necessary.

Cyber Risk Assessments

The survey found that most firms perform risk assessments at least annually to identify cyber threats. However, 14% of firms indicated that they do not conduct this type of assessment at all.

Guidance: In response, the CSA guidance indicates that firms should conduct a cyber risk assessment at least annually, including a review of the firm’s cybersecurity incident response plan to see whether changes are necessary. The risk assessment should include:

  • an inventory of the firm’s critical assets and confidential data, including what should reside on or be connected to the firm’s network and what is most important to protect;
  • an assessment of which areas of the firm’s operations are vulnerable to cyber threats, including internal vulnerabilities (e.g., employees) and external vulnerabilities (e.g., hackers, third-party service providers);
  • how cyber threats and vulnerabilities are identified;
  • the potential consequences of the types of cyber threats identified; and
  • the adequacy of the firm’s preventative controls and incident response plan, including an evaluation of whether changes are required.

Cybersecurity Incident Response Plans

On cybersecurity incident response plans, the survey results indicated that 66% of firms have an incident response plan that is tested at least annually. However, a quarter of firms surveyed had not tested their incident response plans at all.

Guidance: The CSA guidance stipulates that firms should have a written incident response plan, which should include:

  • who is responsible for communicating about the cyber security incident and who should be involved in the response to the incident;
  • a description of the different types of cyber attacks (e.g., malware infections, insider threats, cyber-enabled fraudulent wire transfers) that might be used against the firm;
  • procedures to stop the incident from continuing to inflict damage and the eradication or neutralization of the threat;
  • procedures focused on recovery of data;
  • procedures for investigation of the incident to determine the extent of the damage and to identify the cause of the incident so the firm’s systems can be modified to prevent another similar incident from occurring; and
  • identification of parties that should be notified and what information should be reported.

Due Diligence on Third Party Providers

Almost all firms surveyed indicated they engaged third-party vendors, consultants, or other service providers. Of these firms, a majority conduct due diligence on the cyber security practices of these third parties. However, the extent of the due diligence conducted and how it is documented vary greatly.

Guidance: The CSA Guidance states that firms should periodically evaluate the adequacy of their cyber security practices, including safeguards against cyber security incidents and the handling of such incidents by any third parties that have access to the firms’ systems and data. In addition, firms should limit the access of third-party vendors to their systems and data.

Written agreements with these outside parties should include provisions related to cyber threats, including a requirement that third parties notify firms of cyber security incidents resulting in unauthorized access to the firms’ networks or data, and a description of the third parties’ plans for responding to such incidents.

Where firms use cloud services, they should understand the security practices the cloud service provider has in place to safeguard against cyber threats and determine whether those practices are adequate. Firms that rely on a cloud service should have procedures in place in the event that data on the cloud is not accessible.

Data Protection

Encryption is one of the tools firms can use to protect their data and sensitive information from unauthorized access. However, the survey responses indicate that a sizeable number of firms do not use any encryption, relying instead on other methods of data protection, such as password-protected documents. In addition, almost all firms surveyed indicated they back up data, but the frequency of such back-ups varied.

Guidance: The CSA’s view is that encryption protects the confidentiality of information, as only authorized users can view the data. In addition to using encryption for all computers and other electronic devices, the CSA expects firms to require passwords to gain access to these devices and recommends that so-called “strong” passwords be required and changed periodically.
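
As one illustrative sketch of encryption at rest (not the CSA’s prescription), a sensitive file can be encrypted with the widely used third-party `cryptography` package for Python; the file name below is hypothetical, and key management, the hard part in practice, is out of scope here.

```python
# Illustrative encryption at rest using the `cryptography` package
# (pip install cryptography). In practice the key would be kept in an
# HSM or managed key vault, never stored beside the encrypted data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # base64-encoded 32-byte key
fernet = Fernet(key)

with open("client_records.csv", "rb") as f:   # hypothetical file
    ciphertext = fernet.encrypt(f.read())     # authenticated encryption

with open("client_records.csv.enc", "wb") as f:
    f.write(ciphertext)

# Only a holder of `key` can recover the plaintext:
plaintext = fernet.decrypt(ciphertext)
```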

Where firms provide portals for clients or other third parties for communication purposes or for accessing the firm’s data or systems, firms should ensure the access is secure and data is protected.

Firms are expected to back up their data and regularly test their back-up process. Also, when backing up data, firms should ensure that the data is backed up off-site to a secure server in case there is physical damage to the firms’ premises.
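
A minimal illustration of one way to test a back-up, assuming hypothetical file paths, is to compare a cryptographic checksum of the source data against a restored copy; a full test would of course also exercise the restore procedure itself.

```python
# Illustrative back-up verification: compare SHA-256 checksums of the
# source file and the restored off-site copy. Paths are hypothetical.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # stream large files
            h.update(chunk)
    return h.hexdigest()

def backup_is_intact(source: Path, restored: Path) -> bool:
    return sha256_of(source) == sha256_of(restored)

if __name__ == "__main__":
    ok = backup_is_intact(Path("books.db"), Path("/mnt/offsite/books.db"))
    print("back-up verified" if ok else "back-up mismatch - investigate")
```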

Cyber Insurance

A majority of firms (59%) do not have specific cyber security insurance and for those that do, the types of incidents and amounts that their policies cover vary widely.

Guidance: The CSA guidance states that firms should review their existing insurance policies (e.g., financial institution bonds) to identify which types of cyber security incidents, if any, are covered. For areas not covered by existing policies, firms should consider whether additional insurance should be obtained.

Social Media

The focus of this part of the Notice was on the fact that social media may be used as a vehicle to carry out cyber attacks. For example, social media sites may be used by attackers to launch targeted phishing emails or links on these sites may lead to websites that install malware.

For social media specifically, firms should review, supervise, retain, and have the ability to retrieve social media content. Policies and procedures on social media practices should cover:

  • the appropriate use of social media, including the use of social media for business purposes;
  • what content is permitted when using social media;
  • procedures for ensuring that social media content is current;
  • record keeping requirements for social media content; and
  • reviews and approvals of social media content, including evidence of such reviews and approvals.

In addition, given the ease with which information may be posted on social media platforms, the difficulty of removing information once posted and the need to respond in a timely manner to issues that may arise, the CSA states that firms should have appropriate approval and monitoring procedures for social media communications. This applies even if firms do not permit the use of social media for business purposes, because policies and procedures should be in place to monitor for unauthorized use.

Next Steps

The Notice advises that CSA staff will continue to review the cyber security and social media practices of firms through compliance reviews. It notes further that CSA staff will apply the information and guidance in this Notice when assessing how firms comply with their obligations to manage the risks associated with their business as set out in NI 31-103.

Firms registered to deal in securities are advised to adopt cybersecurity policies and procedures, including an incident response plan, to ensure compliance with registrant obligations under NI 31-103. The Notice underscores that cyber threats are ever-changing and preparedness and vigilance are key to ensure risk mitigation.
For more information, see McCarthy Tétrault’s Cybersecurity Risk Management – A Practical Guide for Businesses.

Basel Committee on Banking Supervision Issues Consultative Document Highlighting Implications of Fintech on Banks

Posted in AI and Machine Learning, Big Data, Cybersecurity, FinTech, Payments, Privacy
Brianne Paulin

On August 31, 2017, the Basel Committee on Banking Supervision (the “BCBS”) published a consultative document on the implications of Fintech for the financial sector. The consultative document was produced by the BCBS’s task force mandated to identify trends in Fintech developments and assess their implications for the financial sector.

Parts I and II of the consultative document provide an overview of current trends and developments in Fintech. The report assesses Fintech developments, presents forward-looking scenarios, and includes case studies illustrating individual risks and the potential impact of those scenarios on banks.

The main findings of the study are presented in Part III, summarized in 10 key observations and recommendations for banks and supervisors, which will be the focus of this blog post.

Key Observations and Recommendations: Implications for Banks and Banking Systems

  1. Banking risks may change over time with the emergence of new technologies.

Banks will need to adapt to new risks emanating from the introduction of new technologies in the financial sector without limiting potential benefits stemming from such technologies. Fintech innovations have the potential to benefit both the bank, by lowering banking costs, allowing for faster banking services and facilitating regulatory compliance, and consumers, by improving access to financial services, tailoring banking services to individual needs and allowing new competitors to join the market.

  2. Key risks for banks include “strategic risk, operational risk, cyber-risk and compliance risk.”[1]

Banks must implement appropriate risk management processes and governance structures to address new risks arising from innovative technologies, including operational risks, data protection and anti-money laundering (“AML”) risks. The report recommends the adoption of the Principles for sound management of operational risk (“PSMOR”)[2] to effectively respond to these risks.

  3. Emerging technologies bring benefits to the financial sector but also pose new risks for banks.

BCBS undertook an in-depth study of the impacts of three Fintech-enabling technologies on the banking industry: artificial intelligence/machine learning/advanced data analytics, distributed ledger technology and cloud computing. Banks will need to adapt their risk management frameworks to address these enabling technologies by implementing effective IT and operational risk controls.

  4. Banks increasingly outsource operational support for technology-based financial services to third parties but risks ultimately remain with the bank.

Banks will need to ensure that risk management plans are extended to any operations outsourced to a third party. This will require adapting operational risk management plans to third parties, including Fintech firms.

  5. Fintech innovations will require greater supervision and further cooperation with public authorities to ensure compliance with regulations, such as data privacy, AML and consumer protection.

The emergence of new enabling technologies in the banking sector provides an opportunity for bank supervisors to further cooperate with public authorities responsible for the oversight of the financial sector and Fintech. Cooperation will facilitate the identification of new risks and facilitate supervision of important risks, including consumer protection, data protection, competition and cyber-security.

  6. Fintech companies can operate across borders. International cooperation between banks and bank supervisors is essential.

BCBS noted that current Fintech firms mostly operate at a national level. However, the opportunities for cross-border services are plentiful, and as Fintech firms expand their operations, bank supervisors will need to cooperate internationally with their counterparts.

  7. Technology can bring important changes to traditional banking models. Supervision models need to be adapted to these emerging banking models.

Bank supervisors should ensure that staff are well equipped to deal with the changing technology. Staff should be trained to identify and monitor new and emerging risks associated with innovative technologies and new banking systems.

  8. Bank supervisors should harness emerging technologies, such as AI, to increase their efficiency in supervising Fintech-related risks.

Bank supervisors should determine how to use Fintech innovations to better supervise and monitor Fintech-related risks and new banking technologies.

  9. Current regulatory frameworks were adopted before the emergence of Fintech innovations. “This may create the risk of unintended regulatory gaps when new business models move critical banking activities outside regulated environments or, conversely, result in unintended barriers to entry for new business models and entrants.”

The BCBS recommends that supervisors review their regulatory frameworks to ensure that regulations protect consumers but do not create barriers to entry for Fintech firms. The BCBS found that many Fintech firms operate outside the realm of traditional banking, and thus, traditional regulatory approaches may not be appropriate for such firms. Regulatory barriers, however, could push Fintech firms to operate outside of the regulated financial industry, causing significant risks to consumers.

  10. Government authorities in some jurisdictions have partnered with Fintech firms to facilitate the use of financial technologies while ensuring adequate regulatory safeguards for financial stability.

The BCBS found that several government authorities have put in place initiatives to help Fintech companies navigate the regulatory requirements of the financial sector. Bank supervisors should monitor developments in other jurisdictions to learn and implement similar approaches, if appropriate.


[1] The report identifies these risks for both incumbent banks and new Fintech entrants into the financial industry.

[2] See: http://www.bis.org/publ/bcbs292.pdf


For more information about our firm’s Fintech expertise, please see our Fintech group’s page.