Insights on cybersecurity, privacy and data protection law

Artificial Intelligence: The Year in Review

Posted in AI and Machine Learning
Carole Piovesan, Kirsten Thompson, Suzie Cusson

By all accounts, “Maple Valley” is thriving.

2017 saw Canada, home to the second largest tech sector outside Silicon Valley,[i] solidify its position as a leader in the field of artificial intelligence (“AI”).

Based on available data to date, it is estimated that funding raised by Canadian AI companies in 2017 will exceed US$250 million, almost double the previous record high of US$143 million set in 2015.[ii] This healthy injection of private-sector funding has been accompanied by significant public investment. Notably, the 2017 federal budget provided for C$125 million in research and development funds earmarked for AI initiatives and nearly C$1 billion over five years to promote innovation superclusters.[iii]

Access to unprecedented levels of capital, a strong network of academic institutions, improving infrastructure and availability of talent facilitated by open immigration rules have fuelled the development of a burgeoning industry north of the border. Joining dozens of growing start-ups in AI cluster cities such as Toronto or Montreal, global tech giants such as Google, Facebook and Samsung have invested in or opened Canadian AI labs in 2017.[iv]

The regulatory landscape impacting AI continues to evolve both domestically and abroad. As we begin the new year, we pause to reflect on some of 2017’s most notable developments in AI and prepare for new trends to watch out for in 2018.

What we saw in 2017:

  • New federal support and plan for AI: Budget 2017 signalled the Government of Canada’s clear priority to foster innovation in a number of key industries, including AI. The Pan-Canadian Artificial Intelligence Strategy was launched in March 2017 with a view to increasing and supporting our national AI research community, connecting Canada’s major centres for AI development and leading the global discussion on the economic, ethical, policy and legal implications of advances in AI.[v] More broadly, the federal government announced funding for innovation superclusters, began a review of its business innovation initiatives to better align them with client needs, and established Innovation Canada, a new platform created to coordinate the public programs made available across several departments in support of Canada’s innovative companies, including AI companies.[vi]
  • A survey of AI policy, innovation and regulations abroad:
    • United States: In May 2017, the US Congress established the Artificial Intelligence Congressional Caucus, tasked with informing policy decisions with respect to AI and ensuring rapid growth and innovation in this industry. This signals American lawmakers’ continued support for developing a national AI policy roadmap despite the change in administration since the publication of the Obama-era White House National Artificial Intelligence Research and Development Strategic Plan.[vii] Additional policy discussions of note have emanated from the United States Government Accountability Office, which released a technology assessment of the Internet of Things respecting the status and implications of an increasingly connected world.[viii]
    • United Kingdom: In January 2017, the Science and Technology Committee of the UK House of Commons released its report on robotics and AI with recommendations for supporting AI research. Chief among its recommendations is the creation of a national council on AI with membership drawn from academia, industry and government to produce a government-backed “National Robotics and Autonomous Systems Strategy”.[ix] The Committee is holding ongoing consultations to better understand issues related to the proliferation of advanced AI systems. In fall 2017, the UK government announced it will invest £75 million into delivering recommendations from an independent review on AI, and will open a Centre for Data Ethics and Innovation.
    • European Union: In February 2017, the European Parliament passed a resolution titled “Civil Law Rules on Robotics”, pursuant to which the European Commission is to propose rules on robotics and AI, in order to fully exploit their economic potential and to guarantee a standard level of safety and security.[x] In October 2017, the European Parliament released the preliminary findings of its public consultation on robotics and AI. Meant to inform the EU’s position on ethical, economic, legal and social issues, the consultations revealed support for the creation of a central EU regulatory body on AI. The European Commission published a preliminary response to some of the European Parliament’s recommendations and has committed to working on an EU response on AI following its own public consultation and stakeholder dialogue.
    • Estonia: The small country of Estonia, with a population of around 1.3 million people, is outlining options for extending legal representative rights to AI.
    • China: The Chinese government announced an ambitious goal of becoming a leader in AI innovation and regulation by 2030. In December 2017, the Ministry of Industry and Information Technology published a document outlining its goals to lead in AI.
    • South Korea: In February 2017, the Ministry of Science, ICT and Future Planning released a plan to prepare South Korea for advancements in AI-enabled technologies. The plan identifies policy goals relating to workforce preparedness, education and social welfare as well as a number of targeted measures to manage the development of AI in South Korea. These reforms include an overhaul of the country’s Framework Act on National Informatization and the establishment of a Charter of Ethics for AI (by 2018) and protocols to guide developers and users.[xi]
  • Privacy continues to be a concern: In his remarks delivered in May 2017 at the IAPP Canada Privacy Symposium, the Privacy Commissioner of Canada emphasized the need to proactively engage with organizations that use AI and other cutting-edge innovations to ensure accountability in this rapidly evolving area.[xii] From an automotive industry-specific perspective, the Privacy Commissioner signed the Resolution on Data Protection in Automated and Connected Vehicles during the 39th International Conference of Data Protection and Privacy Commissioners in September 2017. This non-binding document acknowledges the rapid advancement of vehicle automation and connected vehicle technologies and expresses concern about the possible lack of available information, user choice, data control and valid consent mechanisms for vehicle owners, drivers, passengers, other road users and pedestrians.[xiii]
  • Competition law is a hot topic: In September 2017, the Competition Bureau released its consultation paper entitled “Big data and Innovation: Implications for competition policy in Canada”.[xiv] The report canvasses how the acquisition and use of Big Data, which is essential to AI systems, can raise competition issues. Competition and privacy issues affecting Big Data were considered by the Federal Court of Appeal in its decision in Toronto Real Estate Board v. Commissioner of Competition,[xv] released on December 1, 2017. The decision has profound implications as it relates to risks of non-compliance with the civil prohibitions against abuse of dominance under the Competition Act stemming from allegations of data monopoly.
  • Continued dialogue between non-governmental actors: The 2017 Asilomar Conference, involving some of the leading thinkers in AI, released a list of 23 principles ranging from research strategies to data rights to future issues including potential super-intelligence. These principles, which underscore ethical and societal values for beneficial AI development, were endorsed by a total of 3,814 signatories, including over 1,250 AI/Robotics Researchers and twice as many thought leaders such as Stephen Hawking and Elon Musk.[xvi]
  • AI in Fintech is an area of focus: A number of public consultations and reports in 2017 have centered on the opportunities and challenges related to the integration of AI in the delivery of financial services. Of note is the European Commission’s public consultation document on Fintech and the European Banking Authority’s response, published in March and June 2017, respectively.[xvii] Access to information, transparency, cybersecurity risk, market distortions caused by widespread automation and limited data portability are identified as areas of concern. Similar issues were raised by the Basel Committee on Banking Supervision in its August consultative document on Fintech[xviii] and by the Financial Stability Board in its November 2017 report on the implications of AI for financial services.[xix]

What to watch for in 2018:

  • The AI industry will continue to experience rapid growth: Building on the success of 2017, the AI industry in Canada and globally is expected to continue to experience rapid growth in 2018 and beyond. By 2020, the worldwide market for AI-related products is predicted to reach US$47 billion, up from US$8.0 billion in 2016.[xx] Industries such as healthcare and financial services, which have traditionally been particularly proactive in integrating AI technologies to improve business models in various capacities, will continue to drive the market. Over the coming years, AI is also likely to progressively make its mark in traditionally more conservative industries – including the legal practice – as parties become more exposed to transformative technologies and weigh the opportunities and challenges associated with the deployment of AI.
  • Expect developments in respect of privacy requirements: In late 2017, the Privacy Commissioner invited stakeholders across Canada to comment on draft guidelines in respect of consent and data practices under the Personal Information Protection and Electronic Documents Act (PIPEDA), which is currently under review. The final form of these guidance documents, likely to be released this year, will assist organizations relying on big data analytics and AI in ensuring that such uses are appropriately disclosed. Canadian businesses should also take note of the forthcoming entry into force of the European Union’s General Data Protection Regulation (“GDPR”) in May 2018. The GDPR will apply to any company offering goods or services to EU residents. Discrepancies between the GDPR and Canadian privacy law requirements may pose new challenges to businesses that come within the purview of both regimes.
  • Intersection between AI and other transformative technologies: The intersection between AI and other technologies, including blockchain, is gaining attention from theoretical and applied perspectives. We expect to see growth in the areas of AI and blockchain, quantum computing and Internet of Things (IoT), among others.
  • Strategic acquisitions and competition for talent: The acquisition of AI start-ups, particularly in our innovation hubs of Toronto, Montreal and Edmonton, is expected to heat up. The recent announcement of TD Bank’s acquisition of Layer 6, an AI start-up based in Toronto, is the first of what we expect to be many such deals this year. Not only will such acquisitions fuel innovation, but they will also propel the competition for talent in applied AI.

McCarthy Tétrault and AI

In this environment ripe for opportunities and challenges, legal counsel must keep up with AI to marry legislative compliance and the application of industry best practices in various jurisdictions with a practical knowledge of commercial and technical outcomes. McCarthy Tétrault, named Law Firm of the Year at the 3rd Annual Canadian Fintech & AI Awards presented by the Digital Finance Institute in November 2017, is thinking ahead to cross-disciplinary ways in which AI and the law will intersect, to provide comprehensive advice to our clients.

Our 2017 Transformative Technologies and the Law series explored the conceptual and practical legal issues arising from the integration of AI into organizational processes, operations and business models. Consult our first White Paper entitled “From Chatbots to Self-Driving Cars: The Legal Risks of Adopting Artificial Intelligence in Your Business”, published in October 2017. Our series of seminars will continue in spring 2018 with a focus on Managing Privacy Litigation Risk.

AI will also continue to feature prominently in our 2018 McCarthy Tétrault Advance Continuing Professional Development program, with seminars focusing on Open Banking, Cyber Insurance, Automated Cars as well as in the context of our 7th Annual Technology Summit.

For more on AI and the law, see our blogs on all things related to Fintech generally and AI and Machine Learning specifically.




[i] Jack Derricourt, “Understanding the legal risks of deploying AI in businesses – An interview with Carole Piovesan”, DX Journal (2 December 2017), online: <>.

[ii] PwC & CB Insights, MoneyTree Canada Report (Q3 2017), online: <>.

[iii] Government of Canada, “Innovation superclusters initiative (ISI): Program guide”, online: <>.

[iv] Denise Deveau, “Here’s what the AI map in Canada looks like (at least today)”, Financial Post (14 June 2017), online: <>.

[v] Canadian Institute for Advanced Research, “Pan-Canadian Artificial Intelligence Strategy Overview” (30 March 2017), online: <>.

[vi] Government of Canada, “Canada’s Innovation And Skills Plan”, online: <>; Government of Canada, “Innovation superclusters initiative (ISI): Program guide”, online: <>; Treasury Board of Canada Secretariat, News Release, “Government of Canada review will ensure that innovation and clean technology programs deliver results for Canada’s innovators” (6 September 2017), online: <>.

[vii] Congressman John Delaney, “Delaney Launches Bipartisan Artificial Intelligence (AI) Caucus for 115th Congress” (24 May 2017), online: <>.

[viii] United States Government Accountability Office, Technology Assessment: Internet of Things (May 2017), online: <>.

[ix] United Kingdom, House of Commons Science and Technology Committee, Robotics and artificial intelligence: Government Response to the Committee’s Fifth Report of Session 2016–17 (London: House of Commons, 2017), online: <>.

[x] EC, European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), online: <>.

[xi] South Korea Ministry of Science and ICT, “Long-term Service R&D Strategy Road-map and Investment Plan Announced ” (27 February 2017), online: <>.

[xii] Office of the Privacy Commissioner of Canada, “Course correction for improved outcomes for Canadians: Remarks at the IAPP Canada Privacy Symposium 2017” (17 May 2017), online: <>.

[xiii] International Data Protection and Privacy Commissioners Conference, “Resolution on Data Protection in Automated and Connected Vehicles” (25-29 September 2017), online: <>.

[xiv] Competition Bureau, Big data and Innovation: Implications for competition policy in Canada, online: <>.

[xv] Toronto Real Estate Board v Commissioner of Competition, 2017 FCA 236.

[xvi] Future of Life Institute, “A Principled AI Discussion in Asilomar” (17 January 2017), online: <>.

[xvii] EC, Consultation Document: Fintech: A More Competitive And Innovative European Financial Sector, online: <>; European Banking Authority, EBA Response, online: <>.

[xviii] Basel Committee on Banking Supervision, Sound Practices: Implications of fintech developments for banks and bank supervisors (August 2017), online: <>.

[xix] Financial Stability Board, Artificial intelligence and machine learning in financial services, online: <>.

[xx] International Data Corporation, “Worldwide Cognitive Systems and Artificial Intelligence Revenues Forecast to Surge Past $47 Billion in 2020, According to New IDC Spending Guide” (26 October 2016), online: <>.

Fintech Regulatory Developments: 2017 Year in Review

Posted in Big Data, Competition, European Union, Financial, FinTech, Legislation, Open Banking, Virtual Currency
Ana Badour, Heidi Gordon, Kirsten Thompson, Laure Fouin, Shane C. D'Souza, Shauvik Shah, Eriq Yu

As predicted in our 2016 year-end report, 2017 proved to be a busy year for Fintech in Canada, with a number of important regulatory developments. With the dawn of 2018, we look back to summarize some of 2017’s most notable Fintech regulatory developments in Canada, as well as developments to watch for in 2018.

What We Saw in 2017

  • Launch of CSA regulatory sandbox – On February 23, 2017, the Canadian Securities Administrators (“CSA”) announced the launch of a regulatory sandbox aimed at supporting Fintech businesses through a more tailored approach to regulation that balances innovation and appropriate investor protection. Following the launch of the CSA regulatory sandbox, the Ontario Securities Commission (“OSC”) announced that its OSC LaunchPad has been made a permanent fixture of the OSC. The Government of Ontario has also expressed its intent to launch a Regulatory Super Sandbox and an Ontario Fintech Accelerator Office. In addition, Québec’s Autorité des marchés financiers (“AMF”) created a new position (Director, Fintech and Innovation) to co-ordinate the work of its Fintech Working Group, which includes six major projects: (i) blockchain technology; (ii) mobile payment solutions and virtual currencies; (iii) fundraising platforms; (iv) automated insurance and investment tools; (v) Regtech; and (vi) Big Data and connected devices.
  • National retail payments framework review – On July 7, 2017, the Department of Finance Canada (“Finance Canada”) issued a consultation paper proposing a federal oversight framework for retail payments. The consultation paper proposed a functional approach to the regulation of retail payments in Canada, applicable to payment service providers, including many types of Fintech entities in the payment space as well as entities that are already regulated, such as banks and credit unions.
  • Federal financial sector framework review – On August 11, 2017, Finance Canada released its second consultation paper concerning the review of the federal financial sector framework, in connection with the 2019 Bank Act review. The consultation paper raised a number of Fintech-related matters, including whether there was a need to clarify the Fintech business powers of financial institutions, whether there was any need for amendments to the Bank Act to facilitate Fintech collaboration (such as by providing additional flexibility to financial institutions to make non-controlling investments in Fintech entities) and whether the bank entry and exit framework should be streamlined. In addition, the paper stated that Finance Canada would be examining the merits of open banking, including consideration of how other jurisdictions are implementing open banking and the potential benefits and risks for Canadians. Issues that are key elsewhere (and would likely be key in Canada) include privacy and data protection, data ownership, cybersecurity issues raised by an open or partially open environment (including consideration of uniform technical standards), and liability allocation. The paper also sought views on whether non-banks should be permitted to use the terms “bank”, “banker” and “banking” following OSFI’s issuance of Advisory 2017-01, which provides additional guidance on its interpretation of such terms.
  • Competition Bureau report – On November 6, 2017, the federal Competition Bureau released a draft report on its market study into technology-led innovation, namely Fintech innovation, in the Canadian financial services sector. The report is intended as guidance for financial services regulators and policymakers, with the key message that, while Fintech regulation is necessary to protect the safety, soundness and security of the financial system, regulation should not unnecessarily impede competition and innovation in financial services. The final report was issued on December 14, 2017.
  • Competition and Big Data – There were two significant developments dealing with Big Data during 2017. While of importance generally, they will be of particular note to those in the Fintech ecosystem as new and emerging business models seek to leverage and/or monetize large data sets and data analytics. The first development was the release, on September 18, 2017, of the Competition Bureau’s white paper for public consultation titled “Big data and Innovation: Implications for competition policy in Canada”. The white paper draws from the Competition Bureau’s recent abuse of dominance investigations involving big data considerations, and also considers US and European developments in order to identify challenges raised by big data. The second development was the much-anticipated decision of the Federal Court of Appeal in the Toronto Real Estate Board (“TREB”) case, in which the Court found TREB was abusing its dominant position. While raising numerous competition law issues, the TREB case fundamentally determined that an organization could be found to be engaged in an anti-competitive practice when it restricts access to data (in this case, home-sales data). Although dealing specifically with the data held by TREB in the GTA, the case is seen as precedent-setting as it has the potential to be a significant factor driving the opening up of data sources in other sectors, notably financial services.
  • Payments modernization – Payments Canada is in the process of its multiyear payments modernization project, which involves building a new core clearing and settlement system, establishing real-time payment capability, enhancing automated funds transfer (including transitioning to the ISO 20022 standard), aligning with global regulatory standards and modernizing the rules framework. Payments Canada released its Modernization Target State in December 2017, providing further detail on the project and outlining its plan to create a three-system end state consisting of (1) a new high-value system (Lynx) to replace the current Large Value Transfer System (LVTS), (2) a batch system (the Settlement Optimization Engine (SOE)) to replace the current Automated Clearing Settlement System (ACSS) and (3) a new real-time system (the Real-Time Rail (RTR)) for real-time delivery of low-value payments. All three systems will be built to support ISO 20022. Prudentially regulated financial institutions will have direct access to Lynx and the SOE, while both prudentially regulated financial institutions and non-financial institutions (such as payment service providers (PSPs)) who meet the necessary access criteria will have access to the RTR. The RTR will also be built to provide for overlay services through application programming interfaces (APIs).
  • Project Jasper – In June 2017, the Bank of Canada issued a report on the preliminary findings of Project Jasper, its initiative to learn more about the viability of distributed ledger technology (“DLT”) as the basis for a wholesale payment system. While the experiment revealed that DLT is not presently more beneficial than the current centralized system of wholesale payments, it uncovered other opportunities for the implementation of DLT within the financial industry. The preliminary findings of this report were expanded on September 29, 2017, when the Bank of Canada released a white paper outlining detailed findings from Project Jasper. The white paper noted that settlement finality and low operational risk were key challenges in the first and second phases of the DLT experiment. As of October 17, 2017, the Bank of Canada embarked on the third and final phase of the experiment to develop a proof of concept for the clearing and settling of securities. The results of this final phase are expected to be released at the Payments Canada Summit in May 2018.
  • Cryptocurrency securities developments 
    • Application of securities laws – The OSC issued a press release on March 8, 2017 advising stakeholders that Ontario securities law may apply to any use of DLT, such as blockchain, as part of financial products or service offerings. On August 24, 2017, the CSA weighed in on the applicability of Canadian securities laws to cryptocurrencies in its release of CSA Staff Notice 46-307 – Cryptocurrency Offerings. The effect of the Notice is to confirm the potential applicability of Canadian securities laws to cryptocurrencies and related trading and marketplace operations and to provide guidance on analyzing these requirements.
    • Registrations and exemptions – In British Columbia, the British Columbia Securities Commission (“BCSC”) announced on September 6, 2017 the first registration of an investment fund manager in Canada dedicated solely to cryptocurrency investments. The registrant, also registered as an exempt market dealer, will operate a bitcoin investment fund. In August 2017, Québec’s Autorité des marchés financiers approved an initial coin offering (“ICO” or “ITO”) by Impak Finance, making it the first such legal ICO authorized in Canada. Impak was admitted to the regulatory sandbox for two years and only allowed to sell to accredited investors. In October 2017, the OSC approved the first ICO open to retail investors, who can invest up to $2,500 in the tokens offered. The applicant, TokenFunder Inc. (“TokenFunder”), was a blockchain business seeking to offer a technology platform for businesses to raise capital through tokens, coins and other blockchain-based securities. The OSC granted exemptive relief based on TokenFunder agreeing to seek registration promptly following the ICO.
    • Cryptocurrency futures – Futures trading on Bitcoin was introduced on December 10, 2017 by the Chicago Board Options Exchange (Cboe), and the CME Group followed, bringing bitcoin, and cryptocurrencies in general, a step further toward the mainstream and bitcoin to a record high on December 18, 2017 of over $19,850. Regulators quickly reacted: (i) the U.S. Commodity Futures Trading Commission (CFTC) is requesting comments on certain questions related to its treatment of cryptocurrency transactions, including margin requirements; (ii) on December 11, 2017, the Investment Industry Regulatory Organization of Canada (IIROC) announced greater margin requirements for cryptocurrency futures contracts that trade on commodity futures exchanges; and (iii) on December 18, 2017, the CSA issued a warning to remind dealers and investors about the inherent risks associated with products linked to cryptocurrencies (including futures), such as bitcoin.
  • Australian cooperation agreement – The Australian Securities & Investments Commission (“ASIC”) announced in December a new Cooperation Agreement with the CSA (ASIC previously had entered into an agreement of this type with the OSC in 2016). The Cooperation Agreement will expand the existing framework for ASIC to share information with the CSA, allowing each regulator to share and learn from initiatives such as the ASIC Innovation Hub and the CSA regulatory sandbox.
  • Securities commission hackathons – Building on the success of the OSC hackathon in November 2016, which brought together members of the Fintech community to find solutions to regulatory problems arising in the area of RegTech, similar hackathons were held in 2017 in both Québec and BC.

What to Watch for in 2018

  • Amendments to the Proceeds of Crime (Money Laundering) and Terrorist Financing Act (the “PCMLTFA”) are expected to be introduced in 2018 in respect of, among other things, open loop prepaid cards and virtual currencies. In particular, the PCMLTFA was previously amended in 2014 to specifically extend the definition of “money services businesses” to include “persons dealing in virtual currencies”, but the regulations implementing this change remain outstanding, even as cryptocurrencies have become more popular.
  • Further developments and consultations may take place in respect of the national retail payments framework and the federal financial sector framework. We would also expect further developments on the provincial side, including with respect to the Ontario Regulatory Super Sandbox and an Ontario Fintech Accelerator Office.
  • We expect to see Fintech companies continue to explore admission to securities regulatory sandboxes in order to tap into the Canadian investor base.
  • In 2018 we may start to see the launch in Canada of blockchain applications that have been in the pilot phase, including in respect of loyalty points, cross-border payments and digital identity. Project Jasper’s Phase 3 results are also expected to be released at the Payments Canada Summit in May 2018.
  • We expect to see cryptocurrencies becoming more mainstream and being increasingly used as traditional financial assets, including in exchange traded funds (ETFs).
  • We expect Canadian securities regulators to increasingly (a) monitor ICOs that are, directly or indirectly, marketed or sold to Canadians without the appropriate registration or exemptive relief, (b) cooperate with securities regulators in other jurisdictions, and (c) initiate enforcement proceedings or seek emergency orders against individuals and companies associated with ICOs that are alleged to have violated applicable securities laws. For instance, in July 2017, Québec’s AMF obtained an asset freeze order against the principals of, and the companies associated with, “PlexCoin” and enjoined them from all investment-related activities targeting Québec residents. When the respondents continued to market and solicit investments in PlexCoin, Québec’s Superior Court declared them in contempt of court and issued a two-month prison sentence and a fine of $110,000. Then on December 1, 2017, the US Securities and Exchange Commission obtained an emergency freeze order and filed charges against many of the same PlexCoin respondents for allegedly violating U.S. securities laws and defrauding investors.
  • We may also see class actions in Canada in 2018 in respect of ICOs. In the past few months, class actions have been filed in the U.S. relating to the ICOs of Tezos, Centra and ATBCoin. ICO class actions can be expected to allege (a) that the token should be considered a security, (b) breaches of applicable securities laws, (c) misrepresentations and/or fraud relating to the marketing and sale of the token, regardless of whether the token is a security, (d) personal liability against the principals, advisors and promoters of the ICO, and (e) the culpability of any “deep pockets” associated with the ICO.
  • Leave to appeal to the Supreme Court of Canada in the TREB case will be sought, and given what is at stake (access to data), we expect to see a wide variety of organizations line up as intervenors if leave is granted. We also expect to see further and more rapid developments in open data (and open banking-like developments) in Canada.

For more information about our firm’s Fintech expertise, please see our Fintech group’s page.

Abuse of Dominance: Long-Awaited Federal Court of Appeal Decision Confirms Decision Against the Toronto Real Estate Board

Posted in Big Data, Competition, Privacy
Dominic TherienDonald HoustonStephanie St-Jean

In a long judicial saga which is now coming one step closer to an end, the Federal Court of Appeal (“FCA”) has confirmed the decision of the Competition Tribunal (the “Tribunal”), which had found that the Toronto Real Estate Board (“TREB”) was abusing its dominant position.

This decision creates an important precedent as it confirms that the burden of proof of the Commissioner of Competition (the “Commissioner”) in an abuse of dominance case can be met solely by adducing qualitative evidence (as opposed to quantitative evidence). It also confirms the Tribunal’s finding that the evidence was insufficient to support the privacy arguments raised by TREB, which, accordingly, could not be used as a legitimate business justification to preclude a finding of anticompetitive conduct. Finally, the FCA confirms the Tribunal’s decision to refuse to apply the exception based on the mere exercise of a copyright and, by doing so, raises questions on the scope of this exception.


In May 2012, the Commissioner brought an abuse of dominance application under subsection 79(1) of the Competition Act (the “Act”) against TREB, a trade association, on the basis that TREB restricted the manner in which its member real estate agents could disseminate information from the multiple listing service (the “MLS”) it controls. At the centre of this case were Virtual Office Websites (“VOWs”), which received limited information from the MLS on a data feed. TREB members who operated a VOW were also limited in how they could use the MLS information transmitted over the data feed (solely for display), and were prohibited from displaying certain information found on the MLS on their VOWs (the “Disputed Data”). The Commissioner alleged that these restrictions had the effect of substantially lessening competition in the supply of residential real estate brokerage services in the Greater Toronto Area.

On April 15, 2013, the Tribunal initially dismissed the Commissioner’s application, without considering the merits, on the basis that subsection 79(1) of the Act could not apply to TREB, a trade association, as it did not compete with its members. On February 3, 2014, the FCA ruled that the Tribunal erred in dismissing the case, and sent the Commissioner’s application back to the Tribunal for redetermination (to read our article on the FCA’s initial decision, please click here). Leave to appeal to the Supreme Court of Canada (the “SCC”) was denied.

Further to a redetermination hearing, the Tribunal granted the Commissioner’s application on April 27, 2016, and found that TREB had abused its dominance (to read our article on the Tribunal’s 2016 decision, please click here). TREB appealed the Tribunal’s decision and the case was heard by the FCA in December 2016. After much anticipation, on December 1, 2017, the FCA confirmed the Tribunal’s decision, dismissing the appeal and confirming most of the Tribunal’s findings. The saga is, however, not yet over as TREB has announced its intention to ask for leave to appeal to the SCC.

Burden of Proof in Abuse of Dominance Cases

The FCA confirmed the Tribunal’s finding that it was not necessary for the Commissioner to adduce quantifiable evidence, even when such evidence was available, in order to prove a substantial prevention or lessening of competition. Contrary to what was argued by TREB and the Canadian Real Estate Association (“CREA”), the FCA stated that the SCC (in Tervita Corp. v. Canada (Commissioner of Competition), 2015 SCC 3) had not made any pronouncement on the necessity for the Commissioner to quantify effects which can be quantified to establish a substantial lessening or prevention of competition in a merger case. Rather, the Commissioner’s obligation to quantify anti-competitive effects only applies when merging parties raise an efficiencies defence (under section 96). The FCA therefore felt that it had “no choice but to hold that the principle requiring quantification of quantifiable effects cannot be applied” to the abuse of dominance provisions. The FCA did note, however, that, had it been open to it to decide the issue afresh, it would have held that the Commissioner was required to adduce quantifiable evidence, if available, in an abuse of dominance case. The FCA also confirmed the Tribunal’s finding that it should not draw an adverse inference from the Commissioner’s decision not to adduce quantifiable evidence when such evidence was available, as this would be tantamount to saying that the Commissioner had a legal burden to adduce such evidence, which it does not.

TREB and CREA also argued that the Tribunal erred in relying on speculative qualitative evidence. The FCA found these arguments to be without merit. It re-emphasized the fact that the Tribunal understood the difference in nature between quantitative and qualitative evidence and that it recognized that it was more difficult for the Commissioner to prove his case on the basis of mostly qualitative evidence. The Tribunal, indeed, recognized explicitly in its decision in favour of the Commissioner that it had been a “close call” notably due to the fact that the Commissioner was relying solely on qualitative evidence, and made clear that not all abuse cases where only qualitative evidence is adduced would necessarily enable the Commissioner to meet his burden to prove a substantial lessening or prevention of competition. It was also noted in the Tribunal’s reasons, and re-emphasized by the FCA, that it would sometimes be inevitable in a case such as this one, which pertains mostly to innovation and dynamic competition, that the Commissioner would have to rely on qualitative evidence.

The FCA noted that, as TREB and CREA had not asked for leave to appeal on questions of fact, the assessment of evidence by the Tribunal could not be reviewed on appeal. It concluded that it was satisfied that the Tribunal had made no reviewable error in relying solely on qualitative evidence for its findings of anticompetitive effects and its ultimate conclusion that TREB’s practice regarding the Disputed Data was a practice which had the effect of preventing competition substantially in the relevant geographic market.

Privacy as a Legitimate Business Justification

In response to the Commissioner’s argument that the restrictions implemented by the VOW Policy and Rules were anticompetitive, TREB and CREA notably argued that privacy concerns were the business justification behind those restrictions. The FCA concluded that the Tribunal had assessed the evidence before it according to the correct principles. The Tribunal concluded that TREB’s VOW Policy and Rules were motivated by a desire to maintain control over the Disputed Data in an effort to forestall new forms of competition, and not by any efficiency, pro-competition, or genuine privacy concerns. It concluded that there was no evidence that TREB’s privacy policies received much, if any, consideration during the development of TREB’s VOW Policy and Rules. The FCA agreed with the Tribunal that the evidence was compelling.

Notwithstanding that the alleged privacy considerations could not justify the adoption of the VOW Policy and Rules, the FCA confirmed that privacy concerns could in any event have been a valid business justification if it had been established that the VOW Policy and Rules had to be put in place in order to comply with statutory or regulatory requirements. However, the FCA confirmed that TREB had to establish a factual and legal nexus between the VOW Policy and Rules and the requirements of the Personal Information Protection and Electronic Documents Act (“PIPEDA”).

In this case, the FCA concluded that the VOW Policy and Rules were not required by PIPEDA, as the consents clauses contained in the listing agreement, signed by house sellers and used by real estate agents, were sufficiently broad to allow the uses which were restricted by the VOW Policy and Rules. The FCA concluded that the Tribunal correctly interpreted the scope of the consents under the ordinary law of contract, as informed by the purpose and objectives of PIPEDA, and that there were no errors in the conclusion it reached. The FCA confirmed the Tribunal’s finding that the consents used were sufficiently specific to be compliant with PIPEDA in the electronic distribution of the Disputed Data on a VOW.

Limited Defence Based on the Mere Exercise of a Copyright

TREB argued that the adoption of its VOW Policy and Rules constituted the mere exercise of its copyright in the MLS database, and could therefore not be challenged under the abuse of dominance provisions in light of the exception found in subsection 79(5). The FCA, however, confirmed the Tribunal’s decision that there was no copyright in the MLS database (although it disagreed with the Tribunal’s reasoning in reaching this conclusion), and that the exception in subsection 79(5) could not be applied. While the FCA’s reasons on this last point are very succinct, the Tribunal’s reasons are more detailed. In brief, the Tribunal concluded that the adoption of the VOW Policy and Rules was not in this case “the mere exercise” of a copyright, because the restrictions under review had nothing to do with the decision whether or not to grant a licence, and were instead anticompetitive conditions attached to the exercise of the licence granted to all TREB members. While the FCA agreed with the Tribunal’s reasons, some statements in its decision seem to imply a very restrictive application of the intellectual property exception. Indeed, the FCA concluded that, in light of the determination that the VOW Policy and Rules were anticompetitive, subsection 79(5) of the Act precludes reliance on copyright as a defence to an anticompetitive act. This statement raises questions about the scope of the intellectual property defence, as it seems to imply that any determination that a practice is anticompetitive closes the door on the possibility of relying on this exemption.

SEC issues Cease and Desist Order for ICO and Statement on Cryptocurrencies

Posted in Financial, FinTech
Sonia StruthersShane C. D'SouzaArie van WijngaardenLaure FouinShauvik Shah

On December 11, 2017, Munchee Inc., a California-based developer of a restaurant app, shut down its initial coin offering (“ICO”) after the Securities and Exchange Commission (“SEC”) issued a Cease and Desist order. SEC Chairman Jay Clayton subsequently issued a statement highlighting the SEC’s general concerns with cryptocurrencies and ICOs. The order and Chairman Clayton’s statement shed new light on whether a token issued in the context of an ICO is a security.

Munchee ICO

Munchee wanted to raise US$15 million to fund the growth of its restaurant review app by selling MUN tokens to investors. In October 2017, the company published a white paper describing its plan to issue MUN tokens and how they would be used to develop the app. Investors could purchase tokens using Bitcoin or Ether during the token sale. Munchee estimated that 75% of the ICO proceeds would be used to hire employees, 15% would be used for app maintenance and 10% would be used for legal fees.

Munchee marketed MUN tokens as being “utility tokens” only. Once Munchee had built out its ecosystem, tokenholders could receive MUN tokens for writing restaurant reviews and advertise on the app by paying in tokens. Munchee’s marketing appeared to be consistent with the Simple Agreement for Future Tokens (SAFT) framework, under which “functional utility tokens” are more likely to pass the Howey test and not be considered securities.

However, the SEC issued a Cease and Desist order to prevent the Munchee token sale before tokens were delivered to investors. The SEC investigation concluded that contrary to Munchee’s representations, MUN tokens were in fact securities. The SEC’s reasoning highlights three areas of analysis applicable to ICOs:

  • The SEC’s analysis of whether a token is a security will follow the case law using a “flexible rather than a static” approach. Simply labelling something a utility token is not enough to definitively ensure that securities laws do not apply.
  • The SEC keyed in on Munchee’s emphasis in its white paper that the MUN tokens could rise in value and be traded on a secondary market. These representations suggested that investors would purchase the tokens with an expectation of profits. Significantly, Munchee’s token offering documents were translated into additional languages, which enabled the company to reach investors in areas where the app was not even available.
  • Munchee’s plan to create a MUN token ecosystem suggested to the SEC that investors were heavily dependent on the efforts of Munchee’s founders and staff for the tokens they purchased to increase in value. This meets an important Howey test prong for determining whether the tokens were an investment contract (that is, investors could expect profits from the efforts of the promoters) and therefore securities.

SEC Staff concluded that Munchee’s public token distribution violated federal securities laws. However, Munchee was able to escape further sanction by cooperating with SEC Staff and promptly returning money to investors who purchased tokens.

SEC Chairman’s Statement

The SEC Chairman’s statement on cryptocurrencies and ICOs demonstrates that SEC Staff are closely monitoring ICOs. The SEC Chairman’s statement is particularly concerned with the protection of “Main Street” investors. Potential investors are encouraged to ask questions about the token issuer, the uses of the funds, whether there are financial statements or other disclosure, whether the offering has been structured to be compliant with securities law, and what legal rights they will have in the event an ICO goes wrong. They are also reminded about the risks posed by cross-border ICOs as it may be difficult to recover funds which have been moved to a different jurisdiction with a different regulator.

The SEC is also concerned with how market professionals such as lawyers, consultants and broker dealers act in the cryptocurrency space. The SEC Chairman’s statement encourages these professionals to familiarize themselves with the SEC’s investigative report into ICOs as well as subsequent enforcement actions taken by the SEC. In the view of the Chairman, “by and large, the structures of initial coin offerings… involve the offer and sale of securities and directly implicate the securities registration requirements and other investor protection provisions of our federal securities laws.” Industry professionals should keep this in mind when advising on ICOs or token sales.


The SEC’s cease and desist order regarding the Munchee ICO and statement by SEC Chairman Clayton show that the SEC is highly attuned to the risks posed by ICOs. Canadian regulators are also closely monitoring developments in the ICO space, as discussed in our earlier posts regarding the applicability of Canadian securities laws to cryptocurrencies and the Ontario Securities Commission’s approval of the first ICO in Ontario.

More often than not, conducting an ICO will engage securities legislation. In Canada, a security includes an “investment contract”. In determining whether a coin/token is an investment contract, a four-prong test is applied, asking whether the coin/token involves: (i) an investment of money, (ii) in a common enterprise, (iii) with the expectation of profit, (iv) to come significantly from the efforts of others. Advertisement of a coin or token as a software product or utility token is not relevant in determining whether a coin or token constitutes a “security”.
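The four-prong test above is conjunctive: a coin/token is an investment contract only if every prong is satisfied. As a purely illustrative sketch (the function name and structure are this example’s own, not drawn from any statute or decision), the logic can be expressed as:

```python
# Illustrative only: the four-prong "investment contract" analysis as a
# boolean conjunction. A token is an investment contract only if ALL
# four prongs are met; labelling it a "utility token" changes nothing.

def is_investment_contract(investment_of_money: bool,
                           common_enterprise: bool,
                           expectation_of_profit: bool,
                           profit_from_efforts_of_others: bool) -> bool:
    """Return True only if all four prongs of the test are satisfied."""
    return (investment_of_money
            and common_enterprise
            and expectation_of_profit
            and profit_from_efforts_of_others)

# A token marketed for price appreciation and secondary-market trading,
# whose value depends on the promoter's efforts, meets all four prongs:
print(is_investment_contract(True, True, True, True))   # prints True

# A token failing even one prong (e.g. no reliance on others' efforts)
# is not an investment contract on this test:
print(is_investment_contract(True, True, True, False))  # prints False
```

The point of the sketch is simply that marketing labels do not enter the function at all; only the substantive prongs do.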

For more information about our firm’s Fintech expertise, please see our Fintech group’s page.

Bank of Canada White Paper on Creating Digital Currency

Posted in Financial, FinTech
Shauvik ShahAdrienne Ho

In November 2017, the Bank of Canada (“BoC”) released a white paper evaluating whether a central bank should issue a decentralized digital currency for general public use. The Central Bank Digital Currency (“CBDC”) model considered by the white paper would be different from the BoC’s Project Jasper (as summarized in our previous post) and from a system where the public has accounts at the central bank.

Characteristics of a Central Bank Digital Currency

In fact, the CBDC model proposed would be quite similar to cash, namely that it:

  • Would be legal tender, denominated in a sovereign currency, and convertible at par value;
  • Would not incur fees when stored or distributed by a central bank;
  • Could be used at any time and by anyone who has access to the required underlying technology;
  • Would be susceptible to risk and loss;
  • Would be perfectly elastic with respect to its supply;
  • Would be distributed to the public via financial institutions (much like today’s bank notes) subject to any requirements such as anti-money laundering regulation;
  • Would be on a distributed, bilateral payment network and the finality and irrevocability of transactions would be determined by the underlying technology.

The benchmark CBDC model is non-interest bearing and allows holders to remain anonymous. However, the white paper also discusses the possibility of an interest-bearing CBDC (“I-CBDC”). Unlike cash, it would be difficult for the holder of I-CBDC to remain completely anonymous as the central bank, at least, would need to provide identifying information to authorities for tax purposes.

Incentives for Issuing CBDC

The white paper assesses six reasons why CBDC could be issued in addition to existing bank notes and central bank reserves but it does not consider the technological or potential reputational costs of doing so. The white paper also discusses how this analysis might differ where the CBDC is interest-bearing.

  1. Ensuring sufficient central bank money and preserving seigniorage

One potential concern is that with an increased shift away from cash and towards alternate payment methods, there will not only be less central bank money available but there may also be a threat to seigniorage (the difference between the value of money and the cost to produce it) due to a decrease in the value of outstanding bank notes. A significantly large fall in seigniorage might mean a central bank may need to receive government funding, in turn reducing its autonomy. The white paper concludes that neither concern is a compelling reason to issue CBDC. Recent trends indicate that generally, the total value of bank notes has not in fact declined and a central bank can use other tools, such as charging higher fees, to preserve its revenue streams.
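The seigniorage definition above is simple arithmetic. A minimal sketch, with invented figures used purely for illustration:

```python
# Illustrative arithmetic only: seigniorage is the difference between
# the face value of money and the cost to produce it. The figures below
# are invented for the example, not taken from the white paper.

def seigniorage(face_value: float, production_cost: float) -> float:
    """Seigniorage earned on one unit of issued money."""
    return face_value - production_cost

# e.g. a $20 bank note costing $0.25 to produce yields $19.75:
print(seigniorage(20.00, 0.25))  # prints 19.75
```

On this definition, a decline in the value of outstanding bank notes shrinks the base on which seigniorage is earned, which is the concern the white paper addresses.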

If both CBDC and bank notes were offered, overall seigniorage might increase due to a larger quantity of money in circulation. However, there can also be increased costs to a central bank when both options are provided. The overall effect of I-CBDC on seigniorage is unclear. While seigniorage is reduced due to the payment of interest, there might also be increased demand for I-CBDC. The white paper suggests the degree to which I-CBDC is taken up will depend on how well financial institutions can compete with it.

  2. Reducing the lower bound on interest rates and supporting unconventional monetary policy

The white paper concludes that trying to reduce real interest rates is not a compelling reason to introduce CBDC as this can already be achieved by reducing the availability of cash, particularly in larger-denominated notes. Reducing the volume of bank notes increases the costs associated with holding cash. The resulting higher negative yield on cash pushes down the effective lower bound (“ELB”) on interest rates such that real interest rates fall, in turn boosting economic growth. Introducing CBDC would actually put upward pressure on interest rates as holding CBDC would make it easier to avoid negative interest rates.

I-CBDC may not be effective in this regard either. This is because in a negative policy rate environment, I-CBDC holders may just convert their funds into bank notes instead. This in turn would make it more difficult for the central bank to sustain negative rates below the ELB.

Supporting unconventional monetary policy was also found to be unpersuasive. In the rare case that, to support quantitative easing, central bank funds are transferred directly to the public, experience has shown that this can be done without CBDC. In fact, using CBDC could impede this monetary tool since the funds, given CBDC’s anonymous nature, might end up being held by non-residents.

  3. Reducing risk and improving financial stability

The effect of CBDC on financial stability is mixed. Where CBDC is non-interest bearing, there is unlikely to be a significant shift away from traditional instruments such as deposit accounts since CBDC is still subject to risks like theft. The white paper suggests that financial institutions can effectively compete with I-CBDC as a method to store value because banks can, for instance, offer enhanced financial services such as wealth management or engage in cost-cutting measures. Nonetheless, in times of economic stress, there may be greater uptake of CBDC and I-CBDC, which is viewed as risk-free; the shift away from traditional deposits might disrupt the financial system and increase volatility.

  4. Increasing contestability in payment systems

Although CBDC might increase contestability in the payments industry generally by allowing more financial institutions to access the central bank’s funds, it provides little benefit in the retail and large-value payment contexts. CBDC may be cheaper to use than cash in making retail payments and it might provide greater privacy in online transactions. But, given existing low-cost electronic payment methods, any contestability CBDC could provide is likely small. I-CBDC might provide greater contestability given its incremental benefit of paying interest, though its lack of anonymity may deter some users.

For large-value payments, the white paper suggests that the features of existing real-time gross settlement (“RTGS”) systems would make them preferable over using CBDC and I-CBDC. This is largely because where firms make large-value payments to each other, liquidity support is needed to help manage mismatches in payment flows. Current RTGS systems have mechanisms in place that offset payment orders with each other before funds are actually released, which reduces the need to provide liquidity. This, along with other reasons such as the network effects of having many users, make RTGS systems attractive.

  5. Promoting financial inclusion

The white paper notes that CBDC is not necessary to promote financial inclusion (as evidenced by the use of M-PESA, a mobile phone-based money transfer, financing and microfinancing service, in Kenya) and in any case, this is not a concern for most advanced economies.

  6. Inhibiting criminal activity

Reducing larger-denominated bank notes might help impede criminal activity, but this does not, as the white paper explains, mean that CBDC should be introduced. CBDC’s anonymous nature could actually help it facilitate crime. Though this concern can be mitigated by limiting the amount of CBDC that can be held, the white paper notes that such restrictions would reduce the demand for CBDC, may decrease seigniorage, and result in making CBDC more expensive to use. That said, I-CBDC can mitigate some of these concerns as transactions are not completely anonymous and use could be restricted to those whose identity can be verified.


Overall, the white paper concludes that many of the reasons suggested for introducing a centrally-controlled digital currency with government oversight are not compelling enough to warrant issuing CBDC and I-CBDC. The white paper suggests that further research is required and that any issuance of CBDC should be done cautiously and incrementally.

For more information about our firm’s Fintech expertise, please see our Fintech group’s page.


Supreme Court of Canada Rules Text Messages Can Attract a Reasonable Expectation of Privacy

Posted in Privacy
Erin ChesneyCharlotte-Anne Malischewski

On December 8, 2017, the Supreme Court of Canada (“SCC”) released two decisions dealing with privacy interests in text messages: R v Marakah, 2017 SCC 59 and R v Jones, 2017 SCC 60. At issue in both cases was whether there is a reasonable expectation of privacy in text messages, even after they have been sent and received.

In Marakah, the accused was convicted of multiple firearm offences based on evidence of text messages sent by him but obtained by police from the recipient/co-accused’s phone. As the accused was not the owner of the device from which the text messages were obtained, standing became an issue. The Supreme Court granted Mr. Marakah standing and, based on the Court’s analysis of section 8 of the Charter of Rights and Freedoms (which grants everyone the right to be free from unreasonable search and seizure), decided there was a breach of the accused’s Charter rights and set aside his convictions.

In Jones, the accused was convicted of drug and firearm trafficking charges based on evidence found in text messages. The text messages were stored on the server of an internet service provider and were seized by the police using a production order obtained under the Criminal Code. The SCC found that Mr. Jones had a reasonable expectation of privacy in the text messages stored by Telus and therefore, standing under section 8 of the Charter to challenge the production order. However, in this case, the SCC found that the accused’s section 8 Charter right was not breached because records of text messages stored on a service provider’s infrastructure were lawfully seized by means of a production order under the Criminal Code. The conviction of the accused was upheld.

The standing of the accused to assert section 8 rights had been an issue throughout the court saga as in each case, the text messages were in a physical location not under the control of the accused (in Marakah, they were on a co-accused’s phone; in Jones, they were on a service provider’s server). At the Ontario Court of Appeal, in both R v Marakah, 2016 ONCA 542 and R v Jones, 2016 ONCA 543, the accused were denied standing to argue whether there had been a breach of their section 8 Charter rights. A key element of the Court of Appeal’s reasons was its emphasis on control over the physical location of the message as a decisive factor.

The SCC, however, made it clear that text messages themselves – regardless of their physical location – can attract a reasonable expectation of privacy and, therefore, can be protected against unreasonable search or seizure under section 8 of the Charter.

Text Messages are Electronic Conversations

Writing for the majority in Marakah, Chief Justice McLachlin adopted a broad, functional, and technologically neutral approach to characterizing the subject of the search.  She concluded that text messages are not only private communications, they are “electronic conversation[s],” which include “the existence of the conversation, the identities of the participants, the information shared, and any inferences about associations and activities drawn from it.”

Chief Justice McLachlin explained that text messages reveal a great deal of personal information and that preservation of a “zone of privacy” in which personal information is safe from state intrusion is the very heart of the purpose of section 8  of the Charter. She concluded that “it is reasonable to expect these private interactions — and not just the contents of a particular cell phone at a particular point in time — to remain private.”

Text Messages Can Attract Reasonable Expectations of Privacy

To claim protection under section 8 of the Charter, a claimant must first establish a reasonable expectation of privacy.  Writing for the majority in Marakah, Chief Justice McLachlin explained that whether someone has a reasonable expectation of privacy is a question that must be assessed in the totality of the circumstances and depends on:

  1. Whether the person has a direct interest in the subject matter of the search;
  2. Whether the person has a subjective expectation of privacy; and
  3. Whether that subjective expectation is objectively reasonable.

Control, she indicated, is but one of the factors to be considered in assessing the objective reasonableness of the expectation.  Unlike the Court of Appeal below, Chief Justice McLachlin did not find herself constrained by the property-centric notion of control that has dominated the jurisprudence.  Instead, she explains at paragraph 39 (citations omitted):

[c]ontrol must be analyzed in relation to the subject matter of the search: the electronic conversation. Individuals exercise meaningful control over the information they send by text message by making choices about how, when, and to whom they disclose the information. They “determine for themselves when, how, and to what extent information about them is communicated to others…

In Marakah, the application of this test meant that Mr. Marakah had a reasonable expectation of privacy in the text messages he had sent that were stored on his co-conspirator’s device.

In Jones, the application of this test meant that Mr. Jones had a reasonable expectation of privacy in texts stored by the service provider.

Privacy Advocates Claim a Win

The decisions are being heralded by civil liberties and privacy advocates who believe decisions about reasonable expectations of privacy should be made based on principle, rather than on how the technology works.

The dissenting judges in Marakah stated that although text messaging is “clearly” private, they were concerned that granting standing in these circumstances would unduly diminish the role of control in the privacy analysis and expand the scope of people who can bring a section 8 Charter challenge, adding to the complexity and length of criminal trial proceedings and placing even greater strains on a criminal justice system that is already overburdened.

These decisions are likely to have broad implications for privacy interests in Canada as they set the stage for how the Court will deal with informational privacy in the digital age, in the criminal law, and beyond.

The Supreme Court of Canada’s decision in R v. Marakah, 2017 SCC 59 is available here.

The Supreme Court of Canada’s decision in R v. Jones, 2017 SCC 60 is available here.


McCarthy Tétrault represented one of the intervenors, the Canadian Civil Liberties Association (the “CCLA”), which made submissions largely accepted by the Court on the question of standing in informational privacy cases. The CCLA did not take a position with respect to the outcome for either party in these cases.


When Employees Go Rogue: Are Employers Vicariously Liable for the Privacy Breaches of Their Employees?

Posted in Class Actions, Data Breach, Privacy
Sara D.N. Babich

Although there has not yet been a definitive answer to this question in Canada, based on recent UK case law, it appears increasingly likely that, at least in some circumstances, the answer may be “yes”.

In Various Claimants v WM Morrisons Supermarket Plc, (Rev 1) [2017] EWHC 3113 (QB) (“Morrisons”), the High Court said that the supermarket chain Morrisons was vicariously liable for the actions of an employee, who leaked the payroll data of nearly 100,000 employees. The case is the first successful class action for a data breach in the UK.

More and more, Canadian courts and adjudicators have been asked to grapple with similar privacy issues, particularly in light of the privacy torts that have gained traction in some Canadian jurisdictions. Thus far, Canadian courts have not opined directly on the issue of whether vicarious liability may be extended to employers in respect of the privacy breaches of their employees, but the case law to date is consistent with the recent UK decision which holds that the test for vicarious liability of an employer for the wrongful acts of its employees is the same as it is for any other wrongful act of an employee.

Current Canadian Law

In Ari v Insurance Corporation of British Columbia, 2015 BCCA 468 (“Ari”) the BC Court of Appeal considered whether certain portions of a proposed class action ought to have been struck. In that case, the claimants alleged, among other things, that the employee’s alleged breach of the Privacy Act, RSBC 1996, c 373, imported vicarious liability on to the employer.

The Court held that the Privacy Act did not exclude the imposition of vicarious liability on the employer and suggested that the principles of vicarious liability may be applied in the context of a breach of privacy by an employee just as they would to any other wrongful act of an employee.

However, since the Court in Ari was considering the test for striking out pleadings (specifically whether it was plain and obvious that there is no reasonable claim in breach of privacy against the Defendants), rather than evaluating the whole of the Action on its merits, the case is not a definitive answer to the question of whether and when an employer is vicariously liable for the privacy breaches of its employees.

In Hynes v Western Regional Integrated Health Authority, 2014 NLTD(G) 137, the Supreme Court of Newfoundland and Labrador considered whether to certify a proposed class action for breach of the Privacy Act, RSNL 1990, c P-22 and for the tort of intrusion upon seclusion, partly on the basis of whether the employer could be vicariously liable for an employee’s wrongful breach of privacy.

The Court held that it was not plain and obvious that the assertion of vicarious liability would fail. The Court indicated that whether the employee’s acts were so connected with authorized acts as to justify the imposition of vicarious liability (the test for imposing vicarious liability) must be resolved at trial. The certification decision is therefore not determinative of the issue.

In Bigstone v St Pierre, 2011 SKCA 34, this issue was argued before the Chambers judge on an application to strike pleadings, but on appeal vicarious liability was not considered and the claim was struck on the basis that insufficient material facts had been pleaded to support the cause of action.

The Morrisons Case

Morrisons may provide an inkling as to how Canadian courts may approach the issue of vicarious liability of employers for privacy breaches committed by employees.

In Morrisons, a group of claimants brought an action against Morrisons for breach of the Data Protection Act 1998 (“DPA”), as well as at common law for the tort of misuse of private information and in equity for breach of confidence. The claimants were employees of Morrisons whose personal information had been taken and published online by a disgruntled employee, Mr. Skelton. Mr. Skelton was a Senior IT Auditor who had obtained access to the claimants’ private information in the course of collating the data for transmission to Morrisons’ auditors.

The claimants alleged both a direct breach of the DPA by Morrisons for failing to protect their data and that Morrisons was vicariously liable for the actions of its employee, Mr. Skelton.

Direct Liability

The Court held that Morrisons did not breach the DPA directly since it was not the “Data Controller” (as defined in the DPA) at the relevant time with respect to the data at issue. The specific acts complained of were those of a third party, Mr. Skelton, and not Morrisons.

The Court also considered whether Morrisons breached the DPA by failing to take appropriate measures to safeguard the data. Morrisons had put in place security systems which were generally considered by the Court to be adequate and appropriate.

The Court also assessed whether Morrisons ought to have done more to supervise Mr. Skelton. Although Morrisons could have taken additional measures to monitor Mr. Skelton and his work, the Court indicated that there is a level of additional supervision which is not only disproportionate to the risk but that may result in a claim by the employee being supervised that the measures are unfairly intrusive to his or her own rights.

Vicarious Liability

The Court then considered whether Morrisons was vicariously liable for the actions of Mr. Skelton. The Court held that vicarious liability was not excluded by the DPA and can be imposed where the circumstances so warrant. The Court found that the principles of vicarious liability of an employer for the acts of its employees do not change simply because the wrong complained of relates to a privacy breach as opposed to a different wrongful act of the employee.

Whether liability will be imposed depends on whether one of the two bases for liability in Bazley v Curry, [1999] 2 SCR 534 is met, specifically, whether (1) the employer has authorized the acts, or (2) the unauthorized acts are so connected with the authorized acts that they may be regarded as modes of doing an authorized act. The Court also considered the policy rationales behind imposing vicarious liability in the circumstances.

In Morrisons, the Court found that “there was an unbroken thread that linked his work to the disclosure: what happened was a seamless and continuous sequence of events” even though the disclosure itself did not occur on a company computer or on company time. Dealing with sensitive confidential data was expressly part of Mr. Skelton’s role. His job was to receive and pass on data to a third party. The fact that the actual third party recipient of the data was unauthorized did not disengage the act from his employment.

The Court noted that cases where vicarious liability has been upheld are those “where the employee misused his position in a way which injured the claimant” and “it was just that the employer who selected him and put him in that position should be held responsible.” Further justification for imposing liability is that the employer has at least the theoretical right to control the employee’s actions and has the ability to protect itself by insuring against the liability.

In the result, Morrisons stands for the proposition that a company can be held liable to compensate affected individuals for loss (including non-pecuniary loss such as emotional distress) caused by a data breach, even where the breach was caused by an employee and there was no wrongdoing on the part of the company.

Importantly, the Court invited Morrisons to appeal the conclusion on vicarious liability, recognizing that imposing liability in the circumstances may have furthered Mr. Skelton’s criminal aims (namely, punishing Morrisons for taking disciplinary action against him), effectively rendering the Court an accessory to those aims.

What it Means for Employers

Although there is not yet a definitive answer in Canada, this case and the preceding Canadian case law suggest that companies must consider carefully whom they place in trusted roles and, in addition to the systems they use to protect data, what measures they might take to guard against human risk, which the Court in Morrisons acknowledged can never be fully anticipated or prevented.

Location of Third-Party’s Server Housing Municipal Data Ordered Disclosed

Posted in Cybersecurity, FIPPA/MFIPPA
Eva Guo

Against the backdrop of terrorist attacks, alleged voter fraud and fake news, one would think that arguments grounded in the security and integrity of the voting process would be compelling. However, on November 15, 2017, the BC Office of the Information and Privacy Commissioner (“OIPC”) rejected arguments along these lines and ordered the City of Vancouver (“City”) to disclose the physical location of the computer servers that stored voter data for the City’s municipal election.[1]

Pursuant to BC’s Freedom of Information and Protection of Privacy Act (“FIPPA”),[2] a journalist asked the City to disclose its contract with the company that provided voting software and voter data storage to the City and to other municipalities across Canada. The City partially complied with the request, disclosing the entirety of the contract except for the physical location of the computer servers and the names of their corporate operators. The City relied on section 15(1)(l) of FIPPA, which permits an exemption from disclosure based on the public body’s assessment that “disclosure could reasonably be expected to harm the security of any property or system, including a building, a vehicle, a computer system or a communications system”.

The OIPC applied the Supreme Court of Canada’s formulation of “reasonable expectation of probable harm” in Ontario v Ontario[3] as the appropriate standard of proof. On that standard, the statutory language “could reasonably be expected to” requires a middle ground between that which is probable and that which is merely possible. The Supreme Court opined that: “An institution must provide evidence ‘well beyond’ or ‘considerably above’ a mere possibility of harm in order to reach that middle ground”.[4]

The City argued that voter data is “highly sensitive” and a target for criminal activity, and stolen voter data could be used to interfere with ongoing or future elections. Further, the City submitted affidavit evidence of the Chief Technology Officer (“CTO”) of the service provider, in which the CTO stated that: “These addresses have stringent physical security precautions but, for a dedicated attacker, knowledge of the address could provide additional means to initiate social engineering attacks focusing on employees at these facilities.”

The City also relied on two previous Orders holding that FIPPA’s section 15(1)(l) exemption applied to information that would allow or assist third parties to gain unauthorized access to a computer system or weaken the security of a computer system. The OIPC distinguished these Orders from the case at hand because neither dealt with the physical location of servers; rather, they concerned user IDs, passwords, network configuration, security settings and the like. In the end, the OIPC was not satisfied that disclosing the server locations would make unlawful access considerably more likely than a mere possibility.

FIPPA expressly provides in section 2 that one of its purposes is to make public bodies more accountable to the public. In its reasoning in this decision, the OIPC reiterated the strong public interest in transparency in relation to contracts involving public services delivered by private contractors and reinforced its position that the risk of harm under section 15(1)(l) must be sufficient to outweigh that public interest (footnotes omitted):

There is a strong public interest in transparency in relation to contracts involving public services delivered by private contractors and the risk of harm under s. 15(1)(l) must be sufficient to outweigh that public interest. The City has not satisfied me that the security of the primary and backup server facilities or the server computer system itself could reasonably be expected to be harmed by disclosure of their location or the names of the companies which operate them. Therefore, I find the City is not authorized to refuse the applicant access to this information pursuant to s. 15(1)(l).

This Order will be of obvious concern to companies which contract with public sector bodies for the storage and processing of data, many of which rely on the secrecy of their physical operations as part of their overall IT security plan.

Companies should consider reviewing the terms of their contracts, and work with counsel to take steps to decrease the likelihood that they will be adversely affected by requests such as the one here.

For more information about our firm’s data expertise, please see our Cybersecurity, Privacy and Data Management Group’s page or see McCarthy Tétrault’s Cybersecurity Risk Management – A Practical Guide for Businesses.


[1] Order F17-54, 2017 BCIPC 59 (CanLII)

[2] RSBC 1996, c 165 (“FIPPA”)

[3] Ontario (Community Safety and Correctional Services) v. Ontario (Information and Privacy Commissioner), 2014 SCC 31 (CanLII) [Ontario v Ontario]

[4] Order F17-54, at para 10 citing Ontario v Ontario

In the Future, Everyone Will Have Their Personality Misappropriated for 15 Minutes

Posted in Privacy
Jade Buchanan

At the same time Andy Warhol was predicting the intense, short-lived “15 minutes of fame” that has now manifested as viral videos, legal scholars were pondering the implications of technology on our private lives.[1] While nobody came close to predicting that a social media website would be sued for using photos people voluntarily uploaded to promote products, legal remedies for “appropriation, for the defendant’s advantage, of the plaintiff’s name or likeness” were already emerging in the 1960s.

So what is the law in Canada now? Can you sell “Damn Daniel” fidget spinners? Or use Chewbacca Mom to promote your crowd-funded hover boards? What if the person’s “fame” is their 407 Twitter followers? The answer is usually going to be “not without their consent”, but the reason why is a little less clear.

The Law on Misappropriation of Personality in Canada’s Common Law Jurisdictions

In Canada’s common law jurisdictions, if someone misuses your likeness to promote a product, your remedy will depend on where you live and whether or not you are living at all.

Four common law provinces have legislation that makes invasion of privacy a cause of action: British Columbia, Manitoba, Newfoundland and Labrador and Saskatchewan (the “Privacy Acts”). All four prohibit the use of a likeness for advertising. For example, the BC Privacy Act states it as follows:

[3](2) It is a tort, actionable without proof of damage, for a person to use the name or [likeness, still or moving] of another for the purpose of advertising or promoting the sale of, or other trading in, property or services, unless that other, or a person entitled to consent on his or her behalf, consents to the use for that purpose.

Let’s call this “statutory misappropriation”.

Things get a little murkier when it comes to the common law. It is generally accepted that Canadian courts will recognize misappropriation of personality as a cause of action that is “proprietary in nature and the interest protected is that of the individual in the exclusive use of his own identity in so far as it is represented by his name, reputation, likeness or other value.”[2] Claims of misappropriation of personality have typically been advanced by famous people, such as CFL linebacker Bob Krouse and the estate of Glenn Gould (both failed). Damages have been tied to the royalties the celebrity in question would have received if they had consented to the use of their likeness.[3] That all said, the Ontario Court of Appeal has stated that “Ontario has already accepted the existence of a tort claim for appropriation of personality” in reference to the theoretical privacy tort of “appropriation, for the defendant’s advantage, of the plaintiff’s name or likeness”.[4] That decision suggests that misappropriation of personality even extends to the non-famous, although we cannot yet say this is definitive across all common law jurisdictions.

Do Personality Rights Survive Forever?

All of the Privacy Acts state that the right to sue for invasion of privacy is extinguished by the death of the affected person, except for the Manitoba Privacy Act, which is silent on duration.

A claim for misappropriation of personality likely survives death, but it is not clear for how long: at least one Canadian court has suggested that the right survives death without specifying its duration.[5] That makes sense when you consider that part of the reason for the right is to give famous people the exclusive right to monetize their fame. Just like copyright, they should be able to pass the economic value of their personality rights to their heirs.

Does Misappropriation of Personality Still Apply?

The Privacy Acts suggest that a person could sue for both misappropriation of personality and statutory misappropriation. Except for British Columbia’s, the Privacy Acts state that the rights under the legislation do not derogate from any other rights of action or remedies otherwise available. While a court could find that the British Columbia Privacy Act is a full codification of misappropriation of personality, there is a decision of the Supreme Court of British Columbia that considered claims for misappropriation of personality and statutory misappropriation separately (but dismissed both because the individual was not actually identifiable).[6]

The co-existence of statutory and common law claims suggests that, while the dead cannot pursue claims of statutory misappropriation, common law misappropriation may still be available.

What are the Implications?

If you are going to include someone in your advertising or promotions (through their name, likeness, portrait, voice, caricature or otherwise) you need their consent. If you are unsure of whether or not what you are doing constitutes misappropriation, you need legal advice.


[1] William L. Prosser, “Privacy” (August 1960) 48:3 Calif L Rev.

[2] Joseph v. Daniels, 1986 CanLII 1106 (BC SC).

[3] Athans v. Canadian Adventure Camps Ltd. et al., 1977 CanLII 1255 (ON SC)

[4] Jones v. Tsige, 2012 ONCA 32.

[5] Gould Estate v Stoddart Publishing Co., 1996 CanLII 8209 (ON SC).

[6] Supra note 2.

Financial Stability Board Releases Report on Financial Stability Implications of Artificial Intelligence and Machine Learning

Posted in AI and Machine Learning, Big Data, Financial, FinTech
Brianne PaulinAna BadourKirsten ThompsonCarole Piovesan

On November 1, 2017, the Financial Stability Board (the “FSB”)[1] published its report on the market developments and financial stability implications of artificial intelligence (“AI”) and machine learning in financial services. The FSB noted that the use of AI and machine learning in financial services is growing rapidly and that the application of such technologies to financial services is still evolving.

“Use Cases” of AI and Machine Learning in the Financial Sector

The FSB identified current and potential types of use cases of AI and machine learning in financial services, including: “(i) customer-focused uses, (ii) operations-focused uses, (iii) uses for trading and portfolio management in financial markets, and (iv) uses by financial institutions for Regulatory Technology (“RegTech”) or by public authorities for supervision (“SupTech”).”

Customer-Focused Uses

The FSB found that “financial institutions and vendors are using AI and machine learning methods to assess credit quality, to price and market insurance contracts, and to automate client interaction.” In the insurance industry specifically, machine learning is being used to analyze big data, improve profitability, and increase the efficiency of claims and pricing processes. Global investment in InsurTech totaled $1.7 billion in 2016.

Such applications of AI and machine learning can increase market stability, as financial institutions have a greater ability to analyze big data to enhance their knowledge of trading patterns and better anticipate trades. The FSB warned, however, that because there is little data on how the market would react to increased use of AI and machine learning by market participants, a market shock could occur. Market participants could be enticed to adopt such technologies if their competitors, by applying AI and machine learning to customer-focused uses, are increasing profits and outperforming them. This increased use by market participants could cause a market shock and bring instability to the market.

Operations-Focused Uses and Trading and Portfolio Management

Trading firms would be able to better assess market impacts and shifts in market behaviour, increasing market stability. An example of such use is ‘trading robots’ that can react to market changes. These ‘trading robots’ can execute trades and assess their market impact, allowing trading firms to collect more information and, in turn, modify their trading strategies. The FSB also identified back-testing as an area of growth for the use of AI and machine learning. Back-testing is important for banks in their assessment of risk models. AI would provide a greater understanding of shifts in market behaviour, which the FSB stated could reduce the underestimation of risks in such instances.

Uses of AI and Machine Learning by Financial Institutions

The FSB found that AI and machine learning are used by financial institutions for regulatory purposes and by authorities for supervision purposes. The RegTech market is expected to reach $6.45 billion by 2020. Several regulators around the globe are using AI and machine learning to facilitate regulatory compliance, such as by applying them to the Know-Your-Customer process. In terms of SupTech, the report noted the implementation of AI and machine learning in various supervisory functions, such as monetary policy assessments. A 2015 survey of central banks’ use of AI and machine learning, cited by the FSB, found that central banks anticipated using big data reported by third parties for economic forecasting and other financial stability purposes.

Implications of AI and Machine Learning on Market Stability

The FSB warned that, though AI and machine learning would benefit market stability by reducing costs, increasing efficiency and increasing profitability for financial institutions, financial institutions must implement governance structures and maintain auditability to ensure that potential effects beyond the institutions’ balance sheets are understood. Governance structures include ‘training’ to ensure that users understand the technologies and applications of AI and machine learning, and promoting algorithmic transparency and accountability so that decisions made by an algorithm, such as the credit score assigned to a particular customer, can be understood and explained.

Without sound governance structures, the application of AI and machine learning could increase the risk to financial institutions. The report noted that “beyond the staff operating these applications, key functions such as risk management and internal audit and the administrative management and supervisory body should be fit for controlling and managing the use of applications.”

The benefits of using AI and machine learning systems for consumers and investors could translate into lower costs of services and greater access to financial services. AI and machine learning could allow financial institutions to assess big data to tailor financial services to specific customers and investors. The FSB noted that proper governance structures must be in place to protect the privacy and data of both consumers and investors.

The FSB also raised concerns over the small number of third-party providers of data in the financial system. Bank vulnerability could grow if financial institutions rely on the same small number of third-party providers using similar data and algorithms. On dependency, the FSB noted that “third-party dependencies and interconnections could have systemic effects if such a large firm were to face a major disruption or insolvency.” If financial institutions are unable to use big data from new sources, dependencies on previous data could develop, potentially leading to market shocks and bringing instability to the financial system.

This same concern was recently echoed by the Bank of Canada in its November 2017 Financial System Review, in which it said:

As financial services rely increasingly on information technology, there are growing operational risks from third-party service providers. Since providing services such as cloud computing, big data analytics and artificial intelligence requires a critical mass of users to remain cost-effective, global markets could become dominated by a few large technology firms. Higher industry concentration would raise systemic risks from operational disruptions and cyber attacks. Investments by service providers to avoid disruptions have benefits beyond the individual firm and can be considered a public good.

Legal and Ethical Issues

The FSB also provided an analysis on certain legal issues that arise in the use of AI and machine learning with big data, specifically in the context of data protection and data ownership rights. The FSB highlighted the efforts in several jurisdictions to adopt guidelines for the protection of data ownership and privacy.[2] Some jurisdictions are also assessing whether consumers should have the ability to understand certain techniques used in the application of AI and machine learning to credit systems. Other issues that arise in the use of AI and machine learning with big data include anti-discrimination laws and equal opportunity laws. The FSB noted that the use of AI and machine learning could lead to discriminatory practices and results, even without the inclusion of gender or racial information. Finally, liability issues could also arise, such as determining whether experts who rely on algorithms could be liable for their decisions.

Next Steps

The FSB noted that it will continue monitoring the uses of AI and machine learning in the financial markets, especially as the application of such technologies to the financial sector is growing.

AI and Financial Services at McCarthy Tétrault

In October 2017, McCarthy Tétrault released a White Paper on AI, “From Chatbots to Self-Driving Cars: The Legal Risks of Adopting Artificial Intelligence in Your Business”, in which we featured some preliminary research on AI in the financial services sector. In particular, we highlighted specific areas where we see the immediate incorporation of AI in financial services: investments and portfolio allocations, compliance and RegTech, and AI-powered chatbots.

[1] The FSB is an international body that monitors and makes recommendations about the global financial system. Its members include all G20 major economies (including Canada).

[2] In particular, the FSB referenced the OECD’s guidelines on the protection of privacy and cross-border uses and the European Union’s “General Data Protection Regulation” coming into force in 2018.


For more information about our firm’s Fintech expertise, please see our Fintech group’s page.