Insights on cybersecurity, privacy and data protection law

Canadian Competition Bureau Releases Fintech Report for Consultation

Posted in Big Data, FinTech, Open Banking
Jonathan Bitran, Ana Badour, Kirsten Thompson, Donald Houston, Michele F. Siu

On November 6, 2017, the Competition Bureau (Bureau) released a draft report on its market study into technology-led innovation in the Canadian financial services (Fintech) sector.[1] The Bureau has invited feedback from interested parties only until November 20, 2017, an uncharacteristically short comment period.

Financial services are an area of interest for the Bureau due to their significance to the Canadian economy and Canadian employment as well as the critical role they play in the daily lives of Canadians. While offering a variety of products and services across many financial service segments, Fintechs are typically internet-based and application-oriented, promising user-friendly, efficient consumer interfaces. The Bureau asserts that Fintech represents an opportunity to increase competition in Canada’s financial services sector, which in turn could potentially lead to lower prices and increased choice and convenience for consumers and small and medium-sized enterprises (SMEs). To that end, the report covers Fintech innovation in segments that directly impact consumers and SMEs: (i) payments and payment systems (e.g., mobile wallets), (ii) lending (e.g., crowdfunding), and (iii) investment dealing and advice (e.g., robo-advisors). The report specifically does not cover insurance, cryptocurrencies/blockchain, payday loans, loyalty programs, deposit-taking, accounting, auditing, tax preparation, large corporate, commercial or institutional investing and banking (e.g., pension fund management, mergers and acquisitions) or business-to-business financial services.

The report is intended as guidance for financial services sector regulators and policymakers. It is a dense report, but the Bureau’s core message is that regulation of Fintech is necessary to protect the safety, soundness and security of the financial system, but should not unnecessarily impede competition and innovation in financial services. Or as Goldilocks might say, regulation should be “just right”. In examining the regulatory barriers to entry in Fintech, the Bureau makes 11 key recommendations, summarized below, which are intended to modernize financial services regulation by reducing barriers to innovation and competition in order to encourage Fintech growth.

Bureau’s Recommendations For Pro-Competitive Financial Services Regulation

  1. Technology-neutral. The Bureau asserts that regulation should be technology‑neutral and device‑agnostic to accommodate and encourage new (and yet‑to‑be developed) technologies. For example, requiring “wet” signatures (i.e., in person with a pen) prevents the use of new digital signature technology that also provides sufficient security.
  2. Principles-based. The Bureau asserts that regulation should be based on principles or expected outcomes and not strict rules on how to achieve the desired outcome. This is to allow for the implementation of new technologies, which might otherwise be barred by a prescriptive regime, while still protecting policy goals.
  3. Function-based. The Bureau asserts that regulation should be based on the functions carried out by an entity, not its identity (e.g., if a bank and a start-up are engaging in the same activity, they should face the same regulation with respect to that activity). This is to ensure that all entities have the same regulatory burden and consumers have the same protections when dealing with competing service providers.
  4. Proportional to risk. The Bureau asserts that regulation should be proportional to the risks that it aims to mitigate. Along with technology‑neutral, device-agnostic, principles‑based, and function‑based regulation, proportional regulation would level the playing field between Fintech entrants and incumbent service providers that offer the same types of services.
  5. National harmonization. The Bureau asserts that regulations should be harmonized across Canada. Although there has been improvement, a patchwork of provincial and federal regulations can make compliance unduly difficult and costly.
  6. Facilitate sectoral collaboration. The Bureau proposes that collaboration throughout the sector should be encouraged, including (i) among regulators to enable a unified approach, (ii) between the public and private sector to improve understanding of the latest services among regulators and of the regulatory framework among Fintech firms, and (iii) among industry participants to help bring more products and services to market (while avoiding anticompetitive collaborations). The UK, Australia, and Hong Kong currently facilitate such collaboration and the Bureau asserts that Canada should follow suit.
  7. Policy leadership. The Bureau proposes identifying a Fintech policy lead for Canada to facilitate Fintech development. The Fintech policy lead can then act as a gateway to other agencies, give Fintech firms a one‑stop resource and encourage investment in innovative businesses and technologies in the financial services sector.
  8. Facilitate access to core services. The Bureau supports promoting greater access to the financial sector’s core infrastructure and services to facilitate the development of Fintech services. Fintech firms often require access to core services (e.g., the payment system) in order to provide their services (e.g., bill payment app). Under the appropriate risk‑management frameworks, Fintech firms should be provided with access, so that regulation does not stifle useful services.
  9. Open banking. The Bureau supports embracing more “open” access to systems and data (also described as “open banking”). With appropriate customer consent and risk mitigation frameworks, the Bureau asserts that this will allow Fintech firms to access consumer banking information in order to develop bespoke price‑comparison tools and other applications that facilitate competitive switching by consumers. Looking abroad, the UK competition regulator has mandated the implementation of “open banking” (the Bureau does not have this authority). The Bureau has recognized the key role of data (specifically, big data) in Fintech and other sectors in its recently released draft paper for consultation, Big data and Innovation: Implications for competition policy in Canada (see our further comments on this paper). The comment period for this paper is open until November 17, 2017.
  10. Digital identification. The Bureau supports exploring the potential of digital identification for use in client identification processes. Digital identification could reduce the cost of customer acquisition (for new entrants and incumbent service providers), reduce the costs of switching for consumers and facilitate regulatory compliance where identity verification is needed.
  11. Continuing review. The Bureau supports continuing the frequent review of regulatory frameworks and the adaptation of regulation to changing market dynamics (e.g., consumer demand and advances in technology) to ensure they achieve their objectives in a way that does not unnecessarily inhibit competition.

The report comes in the context of a number of ongoing Fintech-related consultations and initiatives, including the recent announcement by the Government of Ontario that it would create a “regulatory super sandbox”, the launch earlier this year of regulatory sandboxes by the Canadian Securities Administrators, the modernization initiative of the Canadian payments system by Payments Canada, the federal consultation on the national retail payments oversight framework and the federal consultation on the federal financial sector framework.

The Bureau has clearly put significant thought and effort into this report. The impact it will have on financial services regulators and policymakers remains to be seen.

For more information about our Firm’s Competition and Fintech expertise, please see our Competition group’s and Fintech group’s pages.

[1] In May 2016 the Bureau announced it would launch this study. The Commissioner of Competition has emphasized the Bureau’s commitment to use its authority and jurisdiction to support Fintech innovation noting that “competitive intensity fosters innovation”. Earlier this year, the Bureau hosted industry stakeholders and federal and provincial regulators at a workshop to discuss the regulatory challenges faced by Fintech and possible approaches that could enhance the efficiency and effectiveness of Canada’s financial services sector.

U.S. Consumer Financial Protection Bureau Sets Out Principles for Consumer-Authorized Data Sharing and Aggregation

Posted in Big Data, FinTech, Open Banking
Kirsten Thompson

On October 18th, 2017 the U.S. Consumer Financial Protection Bureau (“CFPB”) outlined the principles to be followed (“Principles”) when consumers authorize third party companies to access their financial data to provide certain financial products and services. These principles will be of particular note to the Fintech sector, in which a significant number of companies incorporate into their business model some kind of aggregation or sharing of consumer financial information.

The CFPB refers to this as the “consumer-authorized data-sharing market” and has stated its two-fold goal as intending to “help foster the development of innovative financial products and services, increase competition in financial markets, and empower consumers to take greater control of their financial lives”, while at the same time ensuring protection for consumers “that provide, use, or aggregate consumer-authorized financial data”.

The Principles line up quite closely with the ten Fair Information Principles that underlie Canadian federal privacy legislation (PIPEDA). Absent from (or diluted in) the CFPB Principles are the Fair Information Principles regarding “Limiting Use, Disclosure and Retention”, “Limiting Collection” and “Identifying Purpose”. The CFPB Principles also attempt to address many of the same issues that arise in the mandatory “Open Banking” regimes in the EU and the UK, but in a much less comprehensive manner.


Under the Dodd-Frank Act, the CFPB was empowered to implement and enforce consumer financial law “for the purpose of ensuring that all consumers have access to markets for consumer financial products and services and that markets for consumer financial products and services are fair, transparent, and competitive.”[1] The CFPB was to exercise its authorities so that “markets for consumer financial products and services operate transparently and efficiently to facilitate access and innovation.”[2]

Increasingly, companies have been accessing consumer account data with consumers’ authorization and providing services to consumers using data from the consumers’ various financial accounts. Such “data aggregation”-based services include the provision of financial advice or financial management tools, the verification of accounts and transactions, the facilitation of underwriting or fraud-screening, and a range of other functions. This type of consumer-authorized data access and aggregation holds the promise of improved and innovative consumer financial products and services, enhanced control for consumers over their financial lives, and increased competition in the provision of financial services to consumers.

The CFPB’s interest in consumer data (and specifically Open Banking) was telegraphed by the Director of the CFPB in his remarks at the 2016 Money 20/20 conference, when he stated that the CFPB was “gravely concerned” that financial institutions were limiting or shutting off access to financial data, rather than “exploring ways to make sure that such access…is safe and secure” (see our blog post on this here).

However, there are also challenges to this sharing of data – privacy, security and regulatory compliance being just a few. The CFPB notes that a range of industry stakeholders are working, through a variety of individual arrangements as well as broader industry initiatives, on agreements, systems, and standards for data access, aggregation, use, redistribution, and disposal. However, the CFPB believes that consumer interests must be the priority of all stakeholders as the aggregation services-related market develops.

The CFPB issued a Request for Information in 2016 to gather feedback from a wide range of stakeholders, including large and small banks and credit unions, their trade associations, aggregators, “fintech” firms, consumer advocates, and individual consumers.

The CFPB has now released its set of Consumer Protection Principles intended to reiterate the importance of consumer interests. They are, however, non-binding and not intended to alter, interpret, or otherwise provide guidance on existing statutes and regulations that apply.

1) Access

Consumers should be able, upon request, to obtain information in a timely manner about their ownership or use of a financial product or service from their product or service provider. Further, consumers should generally be able to authorize trusted third parties to obtain such information from account providers to use on behalf of consumers, for consumer benefit, and in a safe manner.

The CFPB expects that financial account agreements and terms of use will, among other things, “not seek to deter consumers from accessing or granting access to their account information.” Notably, “[a]ccess does not require consumers to share their account credentials with third parties”, which suggests that screen scraping mechanisms cannot be made mandatory.

2) Data Scope and Usability

The scope of data that can be consumer-authorized for access should be broad, according to the CFPB, and may include “any transaction, series of transactions, or other aspect of consumer usage; the terms of any account, such as a fee schedule; realized consumer costs, such as fees or interest paid; and realized consumer benefits, such as interest earned or rewards.” With this scope of information made available, consumers will be able to compare fees and the cost of banking at a particular company or institution.

3) Control and Informed Consent

The CFPB suggests that authorized terms of access, storage, use, and disposal be fully and effectively disclosed to the consumer, understood by the consumer, not overly broad, and consistent with the consumer’s reasonable expectations in light of the product(s) or service(s) selected by the consumer. While no explanation accompanies the statement, the CFPB states that firms should take steps to ensure “[c]onsumers are not coerced into granting third-party access.”

Furthermore, consumers must be able to readily and simply revoke authorizations to access, use, or store data. Similarly, consumers should be able to require “third parties to delete personally identifiable information.”

4) Authorizing Payments

The CFPB reminds firms that authorized data access, in and of itself, is not payment authorization. A separate and distinct authorization to initiate payments must be obtained. Providers that access information and initiate payments may reasonably require consumers to supply both forms of authorization to obtain services.

5) Security

The sharing of information can raise security concerns and the CFPB advises that consumer data are to be maintained “in a manner and in formats that deter and protect against security breaches and prevent harm to consumers.” Login and other access credentials are to be secured and “all parties that access, store, transmit, or dispose of data use strong protections and effective processes to mitigate the risks of, detect, promptly respond to, and resolve and remedy data breaches, transmission errors, unauthorized access, and fraud”. Further, firms should transmit data only to third parties that also have such protections and processes.

6) Access Transparency

Consumers should be informed of which of their authorized third parties are accessing or using information regarding their accounts. This can include the identity and security of each such party, the data they access, their use of such data, and the frequency at which they access the data.

7) Accuracy

Consumers should expect the data they access or authorize others to access or use to be accurate and current, and consumers should have reasonable means to dispute and resolve data inaccuracies, regardless of how or where inaccuracies arise.

8) Ability to Dispute and Resolve Unauthorized Access

Consumers should also have reasonable and practical means to dispute and resolve instances of unauthorized access and data sharing, unauthorized payments conducted in connection with or as a result of either authorized or unauthorized data sharing access, and failures to comply with other obligations, including the terms of consumer authorizations. Interestingly, the CFPB advises that consumers “are not required to identify the party or parties who gained or enabled unauthorized access to receive appropriate remediation.”

9) Efficient and Effective Accountability Mechanisms

The CFPB advises that commercial participants should be accountable for the risks, harms, and costs they introduce to consumers. It is of the view that this helps align the interests of the commercial participants, and suggests such participants be “incentivized” and empowered to prevent, detect, and resolve unauthorized access and data sharing, unauthorized payments conducted in connection with or as a result of either authorized or unauthorized data sharing access, data inaccuracies, insecurity of data, and failures to comply with other obligations, including the terms of consumer authorizations.


The situation in Canada is not dissimilar, with various stakeholders and regulators on the one hand recognizing a need for innovation driven by consumer data access and on the other, the need to protect consumers and their data.

For instance, in March of 2011, the Financial Consumer Agency of Canada (“FCAC”) issued a statement, warning Canadians to be aware of the possible risks of disclosing their online banking and credit card information to financial aggregation services. Aside from the obvious data security and privacy risks, the FCAC cautioned that using such a service could also violate the terms and conditions of their account agreements (see our blog post on this here).

[1] 12 U.S.C. 5511(a).

[2] 12 U.S.C. 5511(b)(5).

For more information about our firm’s Fintech expertise, please see our Fintech group’s page.

Canadian Securities Administrators Issues Staff Notice providing Cybersecurity and Social Media Guidance

Posted in Cybersecurity
Kirsten Thompson, Eriq Yu

On October 19, 2017, the Canadian Securities Administrators (“CSA”), representing provincial and territorial securities regulators, issued CSA Staff Notice 33-321 – Cyber Security and Social Media (the “Notice”). The Notice serves to publish the results of the CSA’s survey of cybersecurity and social media practices of registered firms dealing in securities, including those registered as investment fund managers, portfolio managers, and exempt market dealers.

The survey was the result of a CSA initiative following the release of CSA Staff Notice 11-332 – Cyber Security in September 2016, in which the CSA announced its intention to determine the materiality of cybersecurity risks. Social media and its surrounding challenges for registered firms were previously discussed in the CSA’s Staff Notice 31-325 – Marketing Practices of Portfolio Managers in 2011.

Importantly, issues concerning cybersecurity gain new prominence with the release of this Notice. The Notice emphasizes that addressing the risks posed by cyber threats and the use of social media is required to comply with business obligations imposed by Section 11.1 of National Instrument 31-103 (“NI 31-103”), the Instrument that outlines registrant requirements and obligations. Specifically, Section 11.1 requires registered firms to “establish, maintain and apply policies and procedures that establish a system of controls and supervision sufficient to provide reasonable assurance that the firm and each individual acting on its behalf complies with securities legislation and manage the risks associated with its business in accordance with prudent business practices.”

Over Half of Registered Firms Experienced a Cyber Security Incident

Conducted between October 11, 2016 and November 4, 2016, the survey received responses from 63% of the 1,000 firms invited to participate. Overall, the survey found that 51% of firms experienced a cybersecurity incident in 2016, including phishing (43%), malware incidents (18%), and fraudulent email attempts to transfer funds or securities (15%).

The survey questions focused, among other areas, on cybersecurity incidents, policies, and incident response plans; social media policies and practices; due diligence to assess the cybersecurity practices of third-party vendors and service providers; encryption and backups; and the frequency of internal cyber risk assessments.

Cybersecurity Policies, Procedures and Training

Specifically, for the areas identified, the survey found that:

  • Only 57% of firms have specific policies and procedures to address the firm’s continued operation during a cybersecurity incident.
  • Only 56% of firms have policies and procedures for cybersecurity training for employees.
  • 9% of firms have no policies and procedures concerning cybersecurity at all.
  • 18% of firms do not provide cybersecurity-specific training to employees.

Guidance: The resulting CSA guidance indicates that all firms should have policies and procedures that address, among other things, the use of electronic communications; the use of firm-issued electronic devices; reporting cybersecurity incidents; and vetting third-party vendors and service providers. Training of employees on cyber risks, including the privacy risks associated with the collection, use, or disclosure of data, should take place with “sufficient frequency to remain current”, recognizing that training more frequently than annually may be necessary.

Cyber Risk Assessments

The survey found that most firms perform risk assessments at least annually to identify cyber threats. However, 14% of firms indicated that they do not conduct this type of assessment at all.

Guidance: In response, the CSA guidance indicates that firms should conduct a cyber risk assessment at least annually, including a review of the firm’s cybersecurity incident response plan to see whether changes are necessary. The risk assessment should include:

  • an inventory of the firm’s critical assets and confidential data, including what should reside on or be connected to the firm’s network and what is most important to protect;
  • what areas of the firm’s operations are vulnerable to cyber threats, including internal vulnerabilities (e.g., employees) and external vulnerabilities (e.g., hackers, third-party service providers);
  • how cyber threats and vulnerabilities are identified;
  • potential consequences of the types of cyber threats identified; and
  • adequacy of the firm’s preventative controls and incident response plan, including evaluating whether changes are required to such a plan.

Cybersecurity Incident Response Plans

On cybersecurity incident response plans, the survey results indicated that 66% of firms have an incident response plan that is tested at least annually. However, a quarter of firms surveyed had not tested their incident response plans at all.

Guidance: The CSA guidance stipulates that firms should have a written incident response plan, which should include:

  • who is responsible for communicating about the cyber security incident and who should be involved in the response to the incident;
  • a description of the different types of cyber attacks (e.g., malware infections, insider threats, cyber-enabled fraudulent wire transfers) that might be used against the firm;
  • procedures to stop the incident from continuing to inflict damage and the eradication or neutralization of the threat;
  • procedures focused on recovery of data;
  • procedures for investigation of the incident to determine the extent of the damage and to identify the cause of the incident so the firm’s systems can be modified to prevent another similar incident from occurring; and
  • identification of parties that should be notified and what information should be reported.

Due Diligence on Third Party Providers

Almost all firms surveyed indicated they engaged third-party vendors, consultants, or other service providers. Of these firms, a majority conduct due diligence on the cyber security practices of these third parties. However, the extent of the due diligence conducted and how it is documented vary greatly.

Guidance: The CSA Guidance states that firms should periodically evaluate the adequacy of their cyber security practices, including safeguards against cyber security incidents and the handling of such incidents by any third parties that have access to the firms’ systems and data. In addition, firms should limit the access of third-party vendors to their systems and data.

Written agreements with these outside parties should include provisions related to cyber threats, including a requirement by third parties to notify firms of cyber security incidents resulting in unauthorized access to the firms’ networks or data and the response plans of the third parties to counter these incidents.

Where firms use cloud services, they should understand the security practices that the cloud service provider has to safeguard from cyber threats and determine whether the practices are adequate. Firms that rely on a cloud service should have procedures in place in the event that data on the cloud is not accessible.

Data Protection

Encryption is one of the tools firms can use to protect their data and sensitive information from unauthorized access. However, the survey responses indicate a sizeable number of firms do not use any encryption or rely on other methods of data protection, such as password-protected documents. In addition, almost all firms surveyed indicated they back up data, but the frequency of such backups varied.

Guidance: The CSA’s view is that encryption protects the confidentiality of information, as only authorized users can view the data. In addition to using encryption for all computers and other electronic devices, the CSA expects firms to require passwords to gain access to these devices and recommends that so-called “strong” passwords be required and changed with some frequency.

Where firms provide portals for clients or other third parties for communication purposes or for accessing the firm’s data or systems, firms should ensure the access is secure and data is protected.

Firms are expected to back up their data and regularly test their back-up process. Also, when backing up data, firms should ensure that the data is backed up off-site to a secure server in case there is physical damage to the firms’ premises.

Cyber Insurance

A majority of firms (59%) do not have specific cyber security insurance and for those that do, the types of incidents and amounts that their policies cover vary widely.

Guidance: The CSA guidance states that firms should review their existing insurance policies (e.g., financial institution bonds) to identify which types of cyber security incidents, if any, are covered. For areas not covered by existing policies, firms should consider whether additional insurance should be obtained.

Social Media

The focus of this part of the Notice was on the fact that social media may be used as a vehicle to carry out cyber attacks. For example, social media sites may be used by attackers to launch targeted phishing emails or links on these sites may lead to websites that install malware.

For social media specifically, firms should review, supervise, retain, and have the ability to retrieve social media content. Policies and procedures on social media practices should cover:

  • the appropriate use of social media, including the use of social media for business purposes;
  • what content is permitted when using social media;
  • procedures for ensuring that social media content is current;
  • record keeping requirements for social media content; and
  • reviews and approvals of social media content, including evidence of such reviews and approvals.

In addition, given the ease with which information may be posted on social media platforms, the difficulty of removing information once posted and the need to respond in a timely manner to issues that may arise, the CSA states that firms should have appropriate approval and monitoring procedures for social media communications. This applies even if firms do not permit the use of social media for business purposes, because policies and procedures should be in place to monitor for unauthorized use.

Next Steps

The Notice advises that CSA staff will continue to review the cyber security and social media practices of firms through compliance reviews. It notes further that CSA staff will apply the information and guidance in this Notice when assessing how firms comply with their obligations to manage the risks associated with their business as set out in NI 31-103.

Firms registered to deal in securities are advised to adopt cybersecurity policies and procedures, including an incident response plan, to ensure compliance with registrant obligations under NI 31-103. The Notice underscores that cyber threats are ever-changing and preparedness and vigilance are key to ensure risk mitigation.
For more information, see McCarthy Tétrault’s Cybersecurity Risk Management – A Practical Guide for Businesses.

Basel Committee on Banking Supervision Issues Consultative Document Highlighting Implications of Fintech on Banks

Posted in AI and Machine Learning, Big Data, Cybersecurity, FinTech, Payments, Privacy
Brianne Paulin

On August 31, 2017, the Basel Committee on Banking Supervision (the “BCBS”) published a consultative document on the implications of Fintech for the financial sector. The consultative document was produced by the BCBS’s task force mandated with identifying trends in Fintech developments and assessing the implications of those developments for the financial sector.

Parts I and II of the consultative document provide an overview of current trends and developments in Fintech. The report assesses Fintech developments, presents forward-looking scenarios, and includes case studies to better present individual risks and the potential impact of the forward-looking scenarios on banks.

The main findings of the study are presented in Part III, summarized in 10 key observations and recommendations for banks and supervisors, which will be the focus of this blog post.

Key Observations and Recommendations: Implications for Banks and Banking Systems

  1. Banking risks may change over time with the emergence of new technologies.

Banks will need to adapt to new risks emanating from the introduction of new technologies in the financial sector without limiting potential benefits stemming from such technologies. Fintech innovations have the potential to benefit both the bank, by lowering banking costs, allowing for faster banking services and facilitating regulatory compliance, and consumers, by improving access to financial services, tailoring banking services to individual needs and allowing new competitors to join the market.

  2. Key risks for banks include “strategic risk, operational risk, cyber-risk and compliance risk.”[1]

Banks must implement appropriate risk management processes and governance structures to address new risks arising from innovative technologies, including operational risks, data protection and anti-money laundering (“AML”) risks. The report recommends the adoption of the Principles for sound management of operational risk (“PSMOR”)[2] to effectively respond to these risks.

  3. Emerging technologies bring benefits to the financial sector but also pose new risks for banks.

The BCBS undertook an in-depth study of the impact of three Fintech-enabling technologies on the banking industry: artificial intelligence/machine learning/advanced data analytics, distributed ledger technology, and cloud computing. Banks will need to adapt their risk management plans to these enabling technologies by implementing effective IT and risk management processes.

  4. Banks increasingly outsource operational support for technology-based financial services to third parties but risks ultimately remain with the bank.

Banks will need to ensure that risk management plans extend to any operations outsourced to a third party. This will require adapting operational risk management plans to cover third parties, including Fintech firms.

  5. Fintech innovations will require greater supervision and further cooperation with public authorities to ensure compliance with regulations such as data privacy, AML and consumer protection.

The emergence of new enabling technologies in the banking sector is an opportunity for bank supervisors to cooperate further with the public authorities responsible for oversight of the financial sector and Fintech. Cooperation will help identify new risks and facilitate supervision of key areas, including consumer protection, data protection, competition and cyber-security.

  6. Fintech companies can operate across borders. International cooperation between banks and bank supervisors is essential.

The BCBS noted that current Fintech firms mostly operate at a national level. However, the opportunities for cross-border services are plentiful, and as Fintech firms expand their operations, bank supervisors will need to cooperate internationally with their counterparts.

  7. Technology can bring important changes to traditional banking models. Supervision models need to be adapted to these emerging banking models.

Bank supervisors should ensure that staff are well equipped to deal with the changing technology. Staff should be trained to identify and monitor new and emerging risks associated with innovative technologies and new banking systems.

  8. Bank supervisors should harness emerging technologies, such as AI, to increase their efficiency in responding to Fintech-related risks.

Bank supervisors should determine how to use Fintech innovations to better supervise and monitor Fintech-related risks and new banking technologies.

  9. Current regulatory frameworks were adopted before the emergence of Fintech innovations. “This may create the risk of unintended regulatory gaps when new business models move critical banking activities outside regulated environments or, conversely, result in unintended barriers to entry for new business models and entrants.”

The BCBS recommends that supervisors review their regulatory frameworks to ensure that regulations protect consumers without creating barriers to entry for Fintech firms. The BCBS found that many Fintech firms operate outside the realm of traditional banking, so traditional regulatory approaches may not be appropriate for them. Regulatory barriers, however, could push Fintech firms to operate outside the regulated financial industry, creating significant risks for consumers.

  10. Government authorities in some jurisdictions have partnered with Fintech firms to facilitate the use of financial technologies while ensuring adequate regulatory safeguards for financial stability.

The BCBS found that several government authorities have put in place initiatives to help Fintech companies navigate the regulatory requirements of the financial sector. Bank supervisors should monitor developments in other jurisdictions to learn and implement similar approaches, if appropriate.


[1] The report identifies these risks for both incumbent banks and new Fintech entrants into the financial industry.

[2] See the BCBS’s Principles for the Sound Management of Operational Risk.


For more information about our firm’s Fintech expertise, please see our Fintech group’s page.

Project Jasper Update: White Paper Release and Phase 3 Announcement

Posted in Authentication, Blockchain, Financial, FinTech, Identity, Payments, Privacy
Andrea Schneider

Project Jasper is a joint experiment by the Bank of Canada, Payments Canada and R3 to test the viability and feasibility of using Distributed Ledger Technology (“DLT”) as the basis for wholesale interbank payment settlement. The project was launched in March 2016 and has completed two phases. Phase 1 of Project Jasper employed the Ethereum platform as the basis for the DLT, while Phase 2 employed the custom-designed R3 Corda platform. In June 2017, the Bank of Canada issued a report on its preliminary findings from Project Jasper, which were summarized in our previous article. On September 29, 2017, the Bank of Canada, Payments Canada and R3 released a white paper outlining their detailed findings from Project Jasper. This article builds on our previous coverage based on the white paper’s findings and discusses the next steps for Project Jasper.

Key Merits and Considerations of Project Jasper

End-to-End Settlement

Project Jasper was premised on the idea that payment settlement is the final leg of most economic transactions, but also that other areas of the contract chain have the potential to be supported by DLT. For example, “smart contracts” can be used to codify the terms and conditions of an agreement and can be automatically executed once certain conditions are met. Based on the experience from Project Jasper, the primary benefit of a DLT interbank cash payment platform would be “end-to-end” settlement, meaning that the DLT arrangements for payment settlement would be aligned with other DLT arrangements within the same economic contract.
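As a rough illustration of the conditional-execution idea behind smart contracts (our own sketch, not drawn from the white paper; all class and field names are hypothetical), a delivery-versus-payment contract that settles automatically once both legs are confirmed might look like this:

```python
# Hypothetical sketch of a delivery-versus-payment "smart contract":
# settlement executes automatically only once both coded conditions are met.
from dataclasses import dataclass


@dataclass
class SettlementContract:
    buyer: str
    seller: str
    amount: int                    # payment leg, e.g. in digital settlement assets
    asset_delivered: bool = False  # condition 1: securities leg confirmed
    payment_escrowed: bool = False # condition 2: cash leg funded
    settled: bool = False

    def confirm_delivery(self) -> None:
        self.asset_delivered = True
        self._try_settle()

    def escrow_payment(self) -> None:
        self.payment_escrowed = True
        self._try_settle()

    def _try_settle(self) -> None:
        # The "end-to-end" property: both legs settle together, atomically.
        if self.asset_delivered and self.payment_escrowed and not self.settled:
            self.settled = True


contract = SettlementContract(buyer="Bank A", seller="Bank B", amount=100)
contract.escrow_payment()
assert not contract.settled   # one leg alone does not trigger settlement
contract.confirm_delivery()
assert contract.settled       # both conditions met: settlement executes
```

The point of the sketch is simply that neither party's leg completes unless the other's does, which is what aligning the payment with the rest of the contract chain buys you.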

Settlement Risk

Principle 8 of the Principles for Financial Market Infrastructures (“PFMIs”) requires that settlement be final and irrevocable. Settlement finality in Phase 1 was “probabilistic” because, under a proof-of-work consensus, a recorded payment could later fail to remain in the blockchain. To address this and improve settlement finality, Phase 2 introduced a notary node managed by a trusted third party. The Bank of Canada served as the notary node and was responsible for confirming the uniqueness of a transaction to avoid double spending. The second requirement under the PFMIs is that there be a full and irreversible transfer of an underlying claim in central bank money. To meet this, Project Jasper created a digital depository receipt (“DDR”) as a digital settlement asset, which represented a claim to central bank deposits. The strength of the legal basis for settlement finality remains to be tested.
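To make the notary's uniqueness check concrete, here is a minimal sketch (our own illustration, not Corda's actual API) of a notary that finalizes a transaction only if none of the inputs it consumes has already been spent:

```python
# Hypothetical sketch of a notary node's uniqueness check: a transaction is
# notarised only if none of its inputs has been consumed by an earlier one.
class NotaryNode:
    def __init__(self) -> None:
        self.spent_inputs: set[str] = set()  # every input already consumed

    def notarise(self, tx_id: str, inputs: list[str]) -> bool:
        """Record the inputs and return True if the transaction is unique;
        reject it as a double spend otherwise."""
        if any(i in self.spent_inputs for i in inputs):
            return False                     # an input was already spent
        self.spent_inputs.update(inputs)
        return True


notary = NotaryNode()
assert notary.notarise("tx1", ["ddr-001", "ddr-002"]) is True
assert notary.notarise("tx2", ["ddr-002"]) is False  # double spend rejected
```

Because one trusted party keeps the authoritative spent-input set, finality is deterministic rather than probabilistic: once the notary accepts a transaction, no competing transaction over the same inputs can ever be accepted.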

Operational Resilience and Efficiency

Project Jasper used a “permissioned” DLT, meaning that only those approved could use the exchange. This allows regulation of users and allows consensus to be achieved more quickly than with a public ledger. DLT solutions can also reduce the number of errors and duplications compared to the incumbent, manual systems in Canada because parties are required to reach a consensus before a transaction is posted. However, due to the limited implementation of the project, it is difficult to assess whether DLT is more operationally efficient than the current system.

The white paper also considered the operational resiliency of Project Jasper and noted the following:

Scalability

Phase 1: The maximum processing capacity was 14 transactions per second, which is similar to incumbent systems, meaning there are constraints for future volume increases.

Phase 2: There is capacity for volume increases, in part because only the transacting parties, a supervisory node and the notary node are required to validate and record transactions (vs. the requirement for majority consensus in Phase 1).

Availability and Cost

Phase 1: The proof-of-work consensus allows for high availability at a lower cost. This is because of the sharing of databases across all participants in the proof-of-work consensus and the back up of ledgers by all participants.

Phase 2: To increase data privacy, each participant had a proprietary ledger. This creates challenges for data replication across the network.

Single Point of Failure

Phase 1: The consensus protocol requires agreement of a majority of R3 members, meaning there could not be a single point of failure.

However, this does not eliminate the need for participants to back up their data. Due to the confidentiality of the information, in the event of a failure, participants would be unlikely to share data.

Phase 2: Both the notary and supervisory nodes are needed for consensus, therefore increasing the risk of a single point of failure. To mitigate this risk, participants will need to back up their data.

Potential Applications & Benefits of DLT to the Payments Industry  

Reduction of Disputes and Errors

A single payment or file transfer can involve many participants, and therefore may be recorded by multiple financial institutions. This can lead to errors and duplication, and inevitably, disputes. DLT technology requires multiple parties to reach an agreement on the legitimacy of a transaction before it can be posted. While this is a recognized benefit of DLT, the overall operational efficiency of this benefit compared to the incumbent system has not been measured.

Improved Back-Office Efficiency

After the parties reach a consensus, a single record of the transaction is recorded, eliminating the need for separate internal record keeping by each party. Project Jasper found that DLT is not necessarily more efficient at the domestic level than the current LVTS system; however, the analysis did not account for the back-office work that individual financial institutions might avoid by using DLT. Significant resources are expended in back-office reconciliations, so there may be substantial cost savings that have not yet been measured.

Regulatory Compliance

DLT has the potential to assist with regulatory compliance, particularly with anti-money laundering (“AML”) and anti-terrorism financing (“ATF”) regulations for cross-border transactions where counterparty risk can run high. In the current system, false positives in relation to AML/ATF are a problem as they can take weeks or months to resolve. DLT has the potential to allow for easier reconciliation of such payments in order to legitimize a transaction because of the trusted ledger created. These benefits could extend to other regulatory compliance as well.

Transparency vs. Privacy

In the traditional clearing and settlement process, there is a central database. The DLT used in Phase 2 allows for privacy between the financial institutions, with each only able to view its own proprietary ledger. However, those with the supervisory or notary nodes can view all transactions and therefore have the ability to monitor and perform the traditional function of a central database. In Phase 2, the Bank of Canada held the supervisory and notary nodes.

Improved Automation through use of Smart Contracts

As previously discussed, significant benefits can be obtained where DLT can be used for end-to-end settlement through the use of smart contracts. The solution system created by Project Jasper could be the basis upon which other DLT platforms can be built for a variety of transactions, such as the settlement of financial asset transactions, managing syndicated loans, and supporting trade finance.

Conclusions from Phase 1 and Phase 2

The key conclusion from Phases 1 and 2 of Project Jasper is that DLT platforms employing a “proof-of-work” consensus protocol, as used in Phase 1, do not deliver the required settlement finality and low operational risk. While Phase 2 improved settlement finality, scalability and privacy, it did not adequately address operational risk requirements. Further evaluation and enhancements will be needed to satisfy the PFMIs. On a global scale, the white paper recommends focusing on developing protocols for interoperability between DLT platforms.

Overall, Project Jasper is an example of the benefits of collaboration within the payments industry. Such collaboration is particularly well suited to the concentrated Canadian financial industry, and it is being extended for Phase 3.

Phase 3

On October 17, 2017, Payments Canada, the Bank of Canada and TMX announced the third phase of Project Jasper. This phase will build on the first two and develop a proof of concept for the clearing and settlement of securities. Phase 3 will explore an end-to-end settlement process by integrating the securities and payment infrastructures and settling multiple assets on the same ledger. The objectives of this phase are to reduce the cost of securities transactions, increase efficiency, and reduce settlement risk. The results are expected to be released at the Payments Canada Summit in May 2018.

For more information about our firm’s Fintech expertise, please see our Fintech group’s page.

Here We Go Again: Schrems 2 Puts the Model Clauses for Transfer of EU Personal Data in Doubt

Posted in European Union, Privacy
Keith Rose

On October 3, 2017, the High Court of Ireland rendered a decision in The Data Protection Commissioner v. Facebook Ireland Limited & anor, [2017] IEHC 545.  This decision, which could well be labeled Schrems 2, is effectively a sequel to the original Schrems decision, based on the same underlying facts and issues.  In this most recent decision, the High Court has granted a request from the Irish Data Protection Commissioner (“DPC”) for a reference to the Court of Justice of the European Union (“CJEU”) for a ruling on the validity of the so-called “Model Clauses” (or “Standard Contractual Clauses”) for transfer of EU personal data to the US.  In so doing, it has set in motion a potentially drastic shake-up of the existing order for export of EU personal data, which could ultimately have far broader consequences than the first Schrems decision.

Background

Under EU law, an organization may only transfer “personal data” about an individual to a non-EU country for processing if the destination country “ensures an adequate level of protection”.  The European Commission has the authority to make a determination of whether the protections afforded to personal data in a given third country are or are not “adequate” in this regard.

In some cases “adequacy” decisions apply broadly.  In the case of Canada, for example, the Commission concluded that Canadian privacy laws were sufficiently similar to European laws that they were inherently adequate.[1]  But the US has a very different legal regime in this regard.  As a result, the Commission has taken a more circumstantial approach, considering incremental measures that can be applied by the exporting and importing organizations.

The Commission has recognized three bases for lawful transfer of EU personal data to the US:

  • A voluntary arrangement, originally known as “Safe Harbour”, by which U.S. organizations self-certify compliance with certain privacy principles;
  • Standardized contractual commitments between the data controller and data processor, based on approved “Model Clauses”; and
  • Similar commitments adopted in binding non-contractual rules applicable within a corporate group (so-called “Binding Corporate Rules”).

In the wake of the 2013 Snowden revelations about US data surveillance programs, Austrian law student Max Schrems brought a complaint against Facebook in Ireland, arguing that Facebook’s transfer of his personal information to the US was unlawful under both Irish and EU law.  This case was eventually referred to the CJEU, which struck down the Safe Harbour regime.  (See previous posts detailing this decision and its fallout here, here, here, and here.)

Following this decision, Facebook purported to rely on contractual commitments as the basis for its transfer of personal data to the US.  Mr. Schrems renewed and reformulated his original complaint, alleging both that Facebook’s specific contracts did not meet the obligations of EU law and that, in any case, the contracts could not provide adequate protection where national laws of the third country would override them.

The Decision

The fundamental issue before the Irish High Court was whether to refer the Commission’s decisions on the adequacy of the Model Clauses to the CJEU.  The decision is long and complex.  It canvasses a number of threshold issues before engaging in a methodical assessment of the law applicable to US state access to personal data in the hands of data processors, for national security purposes.

The court’s principal findings and conclusions include the following.

  • The exclusion from the EU directive of data processing for national security purposes did not put the entire matter outside of the competence of the CJEU: the court concluded that the existing jurisprudence clearly contemplated that US national security surveillance programs were open to scrutiny and challenge under EU law.
  • The Commission’s Privacy Shield decision did not close the subject. On the contrary, the first Schrems decision made it clear that national data protection authorities and courts had an obligation to refer “well founded” doubts as to the validity of a Commission decision to the CJEU for a preliminary ruling.
  • The adequacy of the Model Clauses cannot be assessed in a vacuum. If there are fundamental inadequacies in US laws, from the perspective of EU law, the Model Clauses cannot compensate for them because they cannot bind the sovereign authority of the US and its agencies.
  • Many of the statutory protections and remedies that would apply to US persons are not available to EU citizens who are not US citizens or residents.
  • The legal effect of the Trump administration’s executive order directing agencies to ensure that their privacy policies exclude persons who are not US citizens or lawful permanent residents from the protections of the Privacy Act is uncertain; however, it signals a change in policy from the previous administration, which had expanded administrative protections for non-US personal information.
  • There are “a variety of very significant barriers to individual EU citizens obtaining any remedy for unlawful processing of their personal data by US intelligence agencies”. In particular, under US case law, an objectively reasonable likelihood that one has been subjected to surveillance is not sufficient to establish legal standing.  Actual evidence that one has been the subject of a secret surveillance program will necessarily be difficult to come by.
  • The right to an effective remedy under Article 47 of the Charter of Fundamental Rights of the European Union had to be considered in a systematic way, without a threshold need to prove a specific violation of some other Charter right.
  • On this fundamental point, the court’s conclusion was damning: “To my mind the arguments of the DPC that the laws – and indeed the practices – of the United States do not respect the essence of the right to an effective remedy before an independent tribunal as guaranteed by Article 47 of the Charter, which applies to the data of all EU data subjects transferred to the United States, are well founded.” [See para. 298.]
  • Furthermore, the introduction of the Ombudsperson mechanism established by the US as part of the negotiations leading to the adoption of the Privacy Shield program did not fill the gap. The court had significant concerns about the independence of this office and, in any case, it could not offer any remedy to the individual concerned.  Indeed, it could not even confirm whether or not the individual had been subject to any electronic surveillance.


Implications

While not entirely unexpected, this decision may be a game-changer, potentially even more significant than the first Schrems decision.  If confirmed by the CJEU, the logic of the High Court’s analysis of US and EU law carries far beyond Facebook’s data processing agreement, or even the Model Clauses themselves.  The High Court’s interpretation and application of Article 47 of the EU Charter makes it hard to imagine that any of the recognized bases for lawful transfer of EU personal data to the US could survive without fundamental changes to US law, changes the US has already rejected under a political climate that was more open to international cooperation.  While the original Schrems decision only affected the Safe Harbour regime, this decision may pull out all of the legs of the stool at once.

The High Court has not yet determined the precise questions that will be referred to the CJEU.  All of the parties had requested the opportunity to make further submissions on that point in the event that the court determined to make a reference and the court has agreed to hear those submissions.  Once the reference is made, it will likely be about two years before the CJEU renders a decision.  During that time, the GDPR will come into force, increasing the substantive divide between EU and US privacy law.

Furthermore, the US is by no means the only country with secretive national security programs that are largely shielded from public oversight or individual accountability.  If the CJEU confirms that Article 47 of the EU Charter requires individual remedies for EU data subjects against foreign national security agencies, as a precondition for any transfer of personal data, the practical consequences will be dramatic.

[1] This assessment is currently under review.  Some have questioned whether it will remain valid, particularly after the General Data Protection Regulation (“GDPR”) comes into force in May 2018.

Europeans Express Positive Views on AI and Robotics: Report on Preliminary Results from Public Consultations

Posted in AI and Machine Learning, Big Data, European Union, Privacy
Carole Piovesan

On October 6, 2017, the European Parliament released the preliminary findings of its public consultation on robotics and artificial intelligence. The consultation received 298 responses reflecting public perceptions of the risks and benefits of AI technology. According to the EU Committee website, the results of the consultation will inform the Parliament’s position on ethical, economic, legal and social issues arising in the area of robotics and artificial intelligence for civil use.

Among the key findings was strong support for a central EU regulatory body, in part to protect “EU values” (especially data protection, privacy and ethics) and to address significant public concern about data protection.

Background

The European Parliament’s Committee on Legal Affairs set up a working group in 2015 with the aim of drawing up “European” civil law rules regarding robots and artificial intelligence. While the European Commission has the right to initiate laws, the Parliament is able to draft a motion for resolution, which if passed, can prompt the Commission to create a proposal for legislation.

The Parliament passed a resolution on February 16, 2017 titled “Civil Law Rules on Robotics”, asking the Commission to propose rules on robotics and artificial intelligence, in order to fully exploit their economic potential and to guarantee a standard level of safety and security.

The goal of the Parliament seemed to be to place the EU at the forefront of developing regulation for artificial intelligence and robots. Part of the reason for this was to ensure that human rights and ethical concerns are protected and that EU values (especially data protection, privacy and ethics) were paramount.

The Parliament also proposed a Charter on Robotics (annexed to the resolution), comprising a code of ethical conduct for robotics engineers, a code for research ethics committees, and licences for designers and users.

The resolution called on the European Commission to propose legislation on various topics including:

  • General principles concerning the development of robotics and artificial intelligence for civil use – for example by creating a classification system for robots (see para. 1);
  • Research and innovation guidelines (see paras. 6-9);
  • Ethical principles (see paras. 10-14);
  • Creating a “European Agency for Robotics and Artificial Intelligence” (see paras. 15-17);
  • Intellectual property rights and the flow of data (see paras. 18-21);
  • Standardization, safety and security – for example by harmonising technical standards (see paras. 22-23);
  • Autonomous means of transportation (see paras. 24-30);
  • Creating a specific legal status for robots in the long run, in order to establish who is liable if they cause damage;
  • Environmental impact (see paras. 47-48); and,
  • Liability related to robots[1] – for example, to clarify liability issues for self-driving cars (see paras. 49-59), and to create a mandatory insurance scheme and a supplementary fund to ensure that victims of accidents caused by driverless cars are compensated (see para. 57).

In May 2017, the European Commission published a preliminary response to some of Parliament’s recommendations. While the Commission agreed with many of Parliament’s suggestions, it has not made any proposals on the issues yet.

Overall, in its response the Commission agreed with the Parliament that there is a “need for legal certainty as to the allocation of liability” in the context of new technologies. To this end, the Commission “intends to work with the European Parliament and the Member States on an EU response.”

The Commission noted that it awaits the response of the Parliament’s public consultation, and that it will conduct its own public consultation and stakeholder dialogue on the issues.

Results of Public Consultation

The preliminary results of the Parliament’s public consultation were released on October 6, 2017. A PowerPoint summarizing the results is available here. The public consultations were open to all EU citizens and consisted of one general public survey and one survey targeted to a “specialized” audience. The trends emerging from the consultations showed:

  • the vast majority of respondents have positive views on robotics and AI developments but want careful management of the technology;
  • despite the positive attitude towards the technology, the majority of respondents are concerned about privacy interests and the possible threat of AI and robotics to humanity;
  • 90% of respondents support public regulation of robotics and AI with only 6% against regulation and 4% noted as “other”;
  • reasons given in support of public regulation include:
    • avoid abuse by industry;
    • need to address concerns about ethics, human rights, data protection and privacy;
    • need to set common standards for industry to have certainty; and,
    • consumer protection.
  • reasons given against public regulation include:
    • too soon to regulate emerging technology;
    • harms competitiveness;
    • hinders innovation and creativity; and,
    • general skepticism with regulation.
  • 96% of respondents agree that international regulation of AI and robotics is desirable as well;
  • the top four reasons in support of EU-wide regulation of AI and robotics are:
    • data protection;
    • values and principles;
    • liability rules; and,
    • EU competitiveness.
  • public opinion regarding sectors in urgent need of EU-wide regulation is almost equally shared between (a) autonomous vehicles; (b) medical robots; (c) care robots; (d) drones; and, (e) human repair and enhancement.

A summary report of the findings of the public consultation will be publicly available in due course.

Interestingly, European public opinion appears to be much more positive towards automation technologies than U.S. public opinion, based on the results of a recently released report by the Pew Research Center. The Center surveyed 4,135 U.S. adults between May 1 and 15, 2017, and found that “Americans generally express more worry than enthusiasm when asked about these automation technologies.” A summary of the report is available here.


[1] EP resolution of 16 February 2017, paras. 49-59.

McCarthy Tétrault Event: Big Data Seminar – October 18th, 2017

Posted in Big Data, Competition, Privacy

The second part of McCarthy Tétrault’s Transformative Technologies Series explores the asset that underpins many of today’s transformative technologies: big data.

This seminar will provide an overview of some of the pressing legal questions businesses are facing as big data takes centre stage. Businesses are increasingly harnessing big data in ways that drive innovation and quality improvements across a range of industries.

With Canada’s federal privacy legislation currently under review and the Competition Bureau’s release on September 18, 2017 of its consultation paper “Competition Bureau – Big data and Innovation”, data is not only a driver of innovation; it can also present legal and regulatory challenges, both to businesses and regulators.

Topics to be covered during this session are:

  • Privacy: How can companies be sure consumer consent is valid for big data applications, both those in use and those that won’t be known until sometime in the future? Does aggregation solve privacy problems? Does de-identification? How can businesses fulfil transparency and accountability obligations to customers when dealing with big data? How does a business working with a third-party provider (e.g., a cloud services or data analytics provider) demonstrate a “comparable level of protection”? With an evolving global privacy landscape (the General Data Protection Regulation (GDPR) comes into force in May 2018), what are the potential directions for Canada?
  • Competition: The growth of the digital economy means the rise of business models based on “Big Data”. The use of big data by companies for the development of products and services can generate substantial efficiency and productivity gains (e.g., improving decision-making, refining consumer segmentation and targeting). However, the acquisition and use of Big Data can raise competition issues, including allegations of abuse of dominance and even criminal cartel activity. Competition and privacy issues associated with Big Data may appear to conflict, and are currently before the Federal Court of Appeal in the TREB case. Find out how competition laws impact – and are likely to impact in the future – companies’ Big Data activities.
  • Managing Data: To be useful, data must be processed. This means organizations must find data in their systems (or from other sources), manage it appropriately, standardize it so it can be processed, refine it so it achieves the ends anticipated, monitor the outputs, and decide what will and will not be shared and with whom. Organizations face challenges at each step along the way, and there are better (and worse!) ways to approach them. Technical missteps can result in legal and regulatory issues.

Our speakers are:

  • Paul Johnson, T.D. MacDonald Chair in Industrial Economics from the Competition Bureau of Canada
  • Kirsten Thompson from McCarthy Tétrault 
  • Izabella Gabowicz, COO of Sensibill

We look forward to welcoming you! Interested in attending?  Please contact us at


Wednesday, October 18, 2017

11:30 a.m. (EST) – Registration and Lunch
12:00 p.m. (EST) – 1:30 p.m. (EST) – Seminar

Toronto Office and Online

*Note: For those participants who cannot join us in person, we are offering this program via webinar. If you are interested in this alternative, please select the appropriate option during the online registration process. All instructions and information on how to access the webinar will be forwarded a few days before the event.

This program qualifies for up to 1.5 hours of eligible educational activity or CPD/MCE credit under the mandatory education regimes in British Columbia, Ontario and Québec.

Drones, Trains and Automobiles: Clear(er) Skies Ahead for Drone Operators in Canada

Posted in UAVs
Shane Lamond

Drone operators are (almost) cleared for takeoff in urban centres again as Transport Canada proposes a new regulatory regime aiming to balance innovation with public safety and easy-to-follow rules with flexibility.

The new regulations – for which public comment is open until October 13 – adopt a risk-based approach to managing the use of unmanned aircraft systems based on the weight of the unmanned aircraft (UA), the operating environment, and the complexity of the operation.

Businesses currently using drone technology, especially those in rural areas, will see increased predictability as ad hoc applications under the existing Special Flight Operations Certificate (SFOC) regime are replaced with Canada-wide standards. However, more adventurous and demanding applications, for example those using UAs heavier than 25 kg or operating beyond visual line of sight, will still require an SFOC.

The current regulations

Transport Canada has identified three issues associated with the rapidly growing UA industry and its current regulations: (1) the overarching safety issue; (2) lack of regulatory predictability; and (3) a significant administrative burden arising from the application for and granting of SFOCs.

Current regulations distinguish between recreational and commercial purposes in determining whether and to what extent the government will require registration with Transport Canada (for drones between 1 kg and 25 kg used for work or research) or the possession of an SFOC (work or research UAs weighing more than 25 kg, or recreational UAs weighing more than 35 kg). All operators are currently required to follow rules applicable to the weight class and operating environment of their UA, as well as obeying criminal and nuisance laws and observing air safety rules.

Combined with strict limitations on the physical proximity of UAs to vehicles, vessels and the public, existing regulations effectively prohibit the operation of UAs in urban areas and impose an onerous certification scheme on both commercial operators and the government.

The proposed regulations aim to strike the right balance: supporting innovation and the increased use of drones whilst ensuring public safety.

What’s being proposed?

The proposed regime uses a risk-based approach to managing pilots and operators by dividing UAs into five classes. The distinctions are based on weight and operating environment, and decidedly eschew any commercial/recreational distinction on the grounds that the risks posed are identical in both scenarios.

As a regulatory foundation, all UAs heavier than 250g will have a minimum age requirement for operators (as low as 14 years old), as well as mandatory possession of liability insurance and the satisfactory completion of a basic knowledge test. All operators will be required to label their devices with contact information. Transport Canada’s infographic illustrates the gradual application of more onerous demands on operators as both weight and proximity to built-up areas increases.

The most significant distinction lies in the requirement that UAs in urban environments (complex operations) will require a pilot permit specific to small drones and must meet design standards yet to be confirmed. In contrast, the same UA piloted in rural areas (limited operations) will face significantly less demanding rules, requiring only that the operator be at least 16 years old and have passed a basic knowledge test. Each class must also adhere to minimum operating distances from certain people, events, buildings and air spaces, depending on the operating environment.

For commercial operators in rural areas especially, the new regime will ensure an even application of standards nation-wide whilst those seeking to operate in urban areas will have to display the requisite level of skill and knowledge to operate within built-up areas with increased risk of damage to people and property. The movement away from SFOCs is a win for all operators otherwise subject to the administrative burden and application costs.

For UAs heavier than 25 kg, those operated beyond visual line of sight, and any other use that cannot comply with the proposed regulatory provisions (think competitive racing), an SFOC is still required.
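For readers tracking how the proposed tiers fit together, the rules described above can be sketched as a simple decision function. This is an illustrative simplification only: the thresholds (250 g, 25 kg) and headline requirements come from the proposal as summarized here, but the category names and function shape are assumptions, not Transport Canada's.

```python
def requirements(weight_kg, environment="rural", bvlos=False):
    """Simplified sketch of the proposed risk-based tiers (not official)."""
    # Micro drones (250 g or less) fall below the regulatory foundation.
    if weight_kg <= 0.25:
        return ["no special requirements under the proposed regime"]
    # Over 25 kg or beyond visual line of sight still needs an SFOC.
    if weight_kg > 25 or bvlos:
        return ["Special Flight Operations Certificate (SFOC)"]
    # Baseline for all UAs over 250 g.
    reqs = ["minimum operator age", "liability insurance",
            "basic knowledge test", "contact-information label"]
    # Urban ("complex") operations layer on a permit and design standards.
    if environment == "urban":
        reqs += ["small-drone pilot permit", "design standards (to be confirmed)"]
    return reqs

# e.g., a 2 kg drone flown in a city triggers the complex-operations rules
print(requirements(2, environment="urban"))
```

The point of the sketch is the structure of the regime: requirements accumulate with weight and proximity to built-up areas, rather than turning on whether the flight is commercial or recreational.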

How the new regime will improve outcomes

Despite the new regulations coming with a $61 million price tag for government and private users, Transport Canada considers the regime a net benefit given the reduced risk of accidents involving manned aircraft. By instituting a minimum age to operate UAs over 250 g, and by further requiring the completion of knowledge tests and licensing requirements commensurate with the level of risk, operators will at least conform to a basic standard of knowledge and skill.

Businesses will benefit from greater certainty, a fairer application of standards country-wide and increased operability within urban areas. However, they will face increased operating costs in obtaining insurance coverage, and it remains to be seen just how exacting the design requirements will be for UAs operating in urban environments.

Estonian Blockchain-Based ID Card Security Flaw Raises Issues About Identity

Posted in Cybersecurity, Data Breach, Identity
Kirsten ThompsonEriq Yu

On August 30, 2017, an international team of security researchers notified the Estonian government of a security vulnerability affecting the digital use of Estonian ID cards issued to around half of the Estonian population. With some 750,000 ID cards affected in a population of 1.3 million, the Estonian Information System Authority (RIA) has taken measures to restrict some of the ID card’s security features until a permanent solution is found.

While there is no sign of unauthorized use to date (the vulnerability appears to have been theoretical), its discovery comes as Estonia continues to advance its national “e-Estonia” initiative to bring its citizens into a digital ecosystem of public and private services built upon the security and authentication provided by the Estonian ID card.

Blockchain and Identity

The e-Estonia initiative is notable for its technological innovation that currently makes Estonia a preeminent use case of blockchain technology and public-key cryptography in the delivery of government services. However, as this event shows, cybersecurity and privacy considerations must remain at the forefront of centralized security and authentication, especially in the case of multi-use identification cards.

Since 2013, Estonian government registers have paired cryptographic hash functions with distributed ledger technology, allowing the Estonian government to guarantee the integrity of its various records.

The ID card unifies access to a host of services. Citizens can order prescriptions, vote, bank online, review school records, apply for state benefits, file their tax return, submit planning applications, upload their will, apply to serve in the armed forces, and fulfil around 3,000 other functions. Business owners can use the ID card to file their annual reports, issue shareholder documents, apply for licences, and so on. Government officials can use the ID card to encrypt documents, review and approve permits, contracts and applications, and submit information requests to law enforcement agencies.

Digital authentication is convenient and saves both time and money for government, business and public services. However, in order to function effectively, it is critical for the government to know its records are the right records, and that they have not been altered. The underlying technology in the Estonian ID card is blockchain, which records every piece of data with proof of time, identity and authenticity – providing a verifiable guarantee that data has not been tampered with.

This immutable ledger identity was thought to be highly secure, and was even believed to be unbreakable. However, the reported vulnerability is notable because of the increase in computing power in recent years: a few years ago, exploiting such a vulnerability would have been significantly more expensive, and thus less likely, than it is today.

Identity Cards and Identity In Canada

Canada does not have a national identity card. Canadians (and others with appropriate residency status) are issued a Social Insurance Number, which is used for certain permitted purposes, but the SIN card itself is not an identity document; the cards were phased out in 2014, in part because of creep in the scope of their use and the lack of security features on the card.

The Office of the Privacy Commissioner of Canada has opposed the use of a national identity card in Canada. The provinces have dabbled with various “enhanced” driver’s licences and other types of cards, with varying success and varying levels of resistance.

British Columbia and Manitoba have both moved towards a multi-use identification card, with significant privacy implications for individuals and businesses. The provinces of Quebec, Manitoba, Ontario and British Columbia have negotiated a Memorandum of Understanding with Citizenship and Immigration Canada and the Canada Border Services Agency to implement their provincial “enhanced driver’s licence” programs. For example, Ontario’s “enhanced driver’s licence” serves as an identity document and permits travel between Canada and the United States of America when travelling by road or water. Currently, the programs are voluntary.

More recently, the Digital Identity and Authentication Council of Canada (DIACC) spearheaded the creation of a national digital identity ecosystem, the Pan-Canadian Trust Framework (PCTF), which would enable digital identity and, by extension, facilitate trustworthy digital transactions. The trust framework would define and standardise processes and practices, and specify data protection policies that government agencies, banks, telecommunication companies, health care providers, and businesses agree to follow with regard to information assurance practices.

The PCTF is backed by a public-private consortium that includes the governments of Ontario, British Columbia, Saskatchewan, and New Brunswick, along with Canada’s leading banks, telecom companies, and universities. It has been reported that the digital identity supercluster bid was able to raise $185 million of private sector investment for use over five years in just four weeks. If selected to move on to the second phase of the initiative, it will need to raise $250 million, the target for matchable funds set by the federal government.

Integrated identity products save time and money and can lead to increased security on a transaction-by-transaction basis. However, the consistent concern has been that while standalone services with discrete databases naturally limit the information accessible to intruders in the wake of a data breach, a data incident involving a multi-use identification card that permits access to a host of services could result in wide-ranging damage. Governments and businesses alike are well advised to maintain a cybersecurity incident response plan to limit data loss and organizational disruption. Integrated identity documents have the potential to create disruption not only for the public issuers of such documents, but also for the businesses that rely on them. Businesses and governments embracing new technologies (or reviewing older technologies) should be aware of the need to “future-proof” their investments.

For more information, see McCarthy Tétrault’s Cybersecurity Risk Management – A Practical Guide for Businesses