CyberLex

Insights on cybersecurity, privacy and data protection law

In the Future, Everyone Will Have Their Personality Misappropriated for 15 Minutes

Posted in Privacy
Jade Buchanan

At the same time Andy Warhol was predicting the intense, short-lived “15 minutes of fame” that has now manifested as viral videos, legal scholars were pondering the implications of technology on our private lives.[1] While nobody came close to predicting that a social media website would be sued for using photos people voluntarily uploaded to promote products, legal remedies for “appropriation, for the defendant’s advantage, of the plaintiff’s name or likeness” were already emerging in the 1960s.

So what is the law in Canada now? Can you sell “Damn Daniel” fidget spinners? Or use Chewbacca Mom to promote your crowd-funded hover boards? What if the person’s “fame” is their 407 Twitter followers? The answer is usually going to be “not without their consent”, but the reason why is a little less clear.

The Law on Misappropriation of Personality in Canada’s Common Law Jurisdictions

In Canada’s common law jurisdictions, if someone misuses your likeness to promote a product, your remedy will depend on where you live and whether or not you are living at all.

Four common law provinces have legislation that makes invasion of privacy a cause of action: British Columbia, Manitoba, Newfoundland and Labrador, and Saskatchewan (the “Privacy Acts”). All four prohibit the use of a likeness for advertising. For example, the BC Privacy Act states it as follows:

[3](2) It is a tort, actionable without proof of damage, for a person to use the name or [likeness, still or moving] of another for the purpose of advertising or promoting the sale of, or other trading in, property or services, unless that other, or a person entitled to consent on his or her behalf, consents to the use for that purpose.

Let’s call this “statutory misappropriation”.

Things get a little murkier when it comes to the common law. It is generally accepted that Canadian courts will recognize misappropriation of personality as a cause of action that is “proprietary in nature and the interest protected is that of the individual in the exclusive use of his own identity in so far as it is represented by his name, reputation, likeness or other value.”[2] Claims of misappropriation of personality have typically been advanced by famous people, such as CFL linebacker Bob Krouse and the estate of Glenn Gould (both failed). Damages have been tied to the royalties the celebrity in question would have received if they had consented to the use of their likeness.[3] That all said, the Ontario Court of Appeal has stated that “Ontario has already accepted the existence of a tort claim for appropriation of personality” in reference to the theoretical privacy tort of “appropriation, for the defendant’s advantage, of the plaintiff’s name or likeness”.[4] That decision suggests that misappropriation of personality even extends to the non-famous, although we cannot yet say this is definitive across all common law jurisdictions.

Do Personality Rights Survive Forever?

All of the Privacy Acts state that the right to sue for invasion of privacy is extinguished by the death of the affected person, except for the Manitoba Privacy Act, which is silent on duration.

A claim for misappropriation of personality likely survives death, although it is not clear for how long: at least one Canadian court has suggested the right survives death without specifying its duration.[5] That makes sense when you consider that part of the reason for the right is to give famous people the exclusive right to monetize their fame. Just like copyrights, they should be able to pass the economic value of their personality rights to their heirs.

Does Misappropriation of Personality Still Apply?

The Privacy Acts suggest that a person could sue for both misappropriation of personality and statutory misappropriation. Except for British Columbia, the Privacy Acts state that the rights under the legislation do not derogate from any other right of action or remedy otherwise available. While a court could find that the British Columbia Privacy Act is a full codification of misappropriation of personality, there is a decision from the Supreme Court of British Columbia that considered claims for misappropriation of personality and statutory misappropriation separately (but dismissed both because the individual was not actually identifiable).[6]

The co-existence of statutory and common law claims suggests that, while the dead cannot pursue claims of statutory misappropriation, common law misappropriation may still be available.

What are the Implications?

If you are going to include someone in your advertising or promotions (through their name, likeness, portrait, voice, caricature or otherwise) you need their consent. If you are unsure of whether or not what you are doing constitutes misappropriation, you need legal advice.

 

[1] William L. Prosser, “Privacy”, 48 Calif. L. Rev. 383 (1960).

[2] Joseph v. Daniels, 1986 CanLII 1106 (BC SC).

[3] Athans v. Canadian Adventure Camps Ltd. et al., 1977 CanLII 1255 (ON SC).

[4] Jones v. Tsige, 2012 ONCA 32.

[5] Gould Estate v Stoddart Publishing Co., 1996 CanLII 8209 (ON SC).

[6] Supra note 2.

Financial Stability Board Releases Report on Financial Stability Implications of Artificial Intelligence and Machine Learning

Posted in AI and Machine Learning, Big Data, Financial, FinTech
Brianne Paulin, Ana Badour, Kirsten Thompson, Carole Piovesan

On November 1, 2017, the Financial Stability Board (the “FSB”)[1] published its report on the market developments and financial stability implications of artificial intelligence (“AI”) and machine learning in financial services. The FSB noted that the use of AI and machine learning in financial services is growing rapidly and that the application of such technologies to financial services is still evolving.

“Use Cases” of AI and Machine Learning in the Financial Sector

The FSB identified current and potential types of use cases of AI and machine learning in financial services, including: “(i) customer-focused uses, (ii) operations-focused uses, (iii) uses for trading and portfolio management in financial markets, and (iv) uses by financial institutions for Regulatory Technology (“RegTech”) or by public authorities for supervision (“SupTech”).”

Customer-Focused Uses

The FSB found that “financial institutions and vendors are using AI and machine learning methods to assess credit quality, to price and market insurance contracts, and to automate client interaction.” Specifically, in the insurance industry, machine learning is being used to analyze big data, improve profitability, and increase the efficiency of claims and pricing processes. Global investment in InsurTech totaled $1.7 billion in 2016.

Such application of AI and machine learning can increase market stability as financial institutions gain a greater ability to analyze big data, enhance their knowledge of trading patterns and better anticipate trades. The FSB warned, however, that because there is little data on how the market would react to increased use of AI and machine learning by market participants, a market shock could occur. Market participants could be enticed to adopt such technologies if competitors that apply AI and machine learning to customer-focused uses are increasing profits and outperforming them; this increased use could itself cause a market shock and bring instability to the market.

Operations-Focused Uses and Trading and Portfolio Management

Trading firms would be able to better assess market impacts and shifts in market behaviour, increasing market stability. An example of such use is ‘trading robots’ that can react to market changes. These robots can execute trades and assess the market impact of certain trades, allowing trading firms to collect more information and, in turn, refine their trading strategies. The FSB also identified back-testing as an area of growth for the use of AI and machine learning. Back-testing is important for banks in their assessment of risk models. AI would provide a greater understanding of shifts in market behaviour, and the FSB stated that this could reduce the underestimation of risks in such instances.

Uses of AI and Machine Learning by Financial Institutions

The FSB found that AI and machine learning are used by financial institutions for regulatory purposes and by authorities for supervision purposes. The RegTech market is expected to reach $6.45 billion by 2020. Several regulators around the globe are using AI and machine learning to facilitate regulatory compliance, such as applying AI and machine learning to the Know-Your-Customer process. In terms of SupTech, the report noted the implementation of AI and machine learning in various supervision functions by authorities, such as monetary policy assessments. A 2015 survey of central banks’ use of AI and machine learning, cited by the FSB, found that central banks anticipated using big data reported by third parties for economic forecasting and for other financial stability purposes.

Implications of AI and Machine Learning on Market Stability

The FSB warned that, though AI and machine learning would benefit market stability by reducing costs, increasing efficiency and increasing profitability for financial institutions, financial institutions must implement governance structures and maintain auditability to ensure that potential effects beyond the institutions’ balance sheets are understood. Governance structures include ‘training’ users so that they understand the technologies and applications of AI and machine learning, and promoting algorithmic transparency and accountability so that decisions made by an algorithm, such as the credit score assigned to a particular customer, can be understood and explained.
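
To make the FSB’s transparency point concrete, here is a minimal, hypothetical sketch (the feature names and weights are invented for illustration, and real credit models are far more complex) of how a simple linear credit-scoring model can be decomposed into per-feature contributions so that a decision can be explained to the affected customer:

```python
# Hypothetical illustration of algorithmic transparency: a linear
# credit-scoring model whose output can be decomposed feature by feature.
# Feature names and weights are invented for this example.

WEIGHTS = {"income": 0.4, "years_at_job": 0.2, "missed_payments": -0.5}
BIAS = 0.1

def score(applicant: dict) -> float:
    """Return the raw credit score for an applicant."""
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant: dict) -> dict:
    """Per-feature contributions; together with the bias they sum to the score."""
    return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}

applicant = {"income": 1.2, "years_at_job": 0.5, "missed_payments": 2.0}
print(score(applicant))    # about -0.32
print(explain(applicant))  # shows missed_payments drove the low score
```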

Without sound governance structures, the application of AI and machine learning could increase the risk to financial institutions. The report noted that “beyond the staff operating these applications, key functions such as risk management and internal audit and the administrative management and supervisory body should be fit for controlling and managing the use of applications.”

The benefits of using AI and machine learning systems for consumers and investors could translate into lower costs of services and greater access to financial services. AI and machine learning could allow financial institutions to assess big data to tailor financial services to specific customers and investors. The FSB noted that proper governance structures must be in place to protect the privacy and data of both consumers and investors.

The FSB also raised concerns over the small number of third party providers of data in the financial system. Bank vulnerability could grow if the financial institutions rely on the same small number of third-party providers, using similar data and algorithms. On dependency, the FSB noted that “third-party dependencies and interconnections could have systemic effects if such a large firm were to face a major disruption or insolvency.” If financial institutions are unable to use big data from new sources, dependencies on previous data could develop, potentially leading to market shocks and bringing instability in the financial system.

This same concern was recently echoed by the Bank of Canada in its November 2017 Financial System Review, in which it said:

As financial services rely increasingly on information technology, there are growing operational risks from third-party service providers. Since providing services such as cloud computing, big data analytics and artificial intelligence requires a critical mass of users to remain cost-effective, global markets could become dominated by a few large technology firms. Higher industry concentration would raise systemic risks from operational disruptions and cyber attacks. Investments by service providers to avoid disruptions have benefits beyond the individual firm and can be considered a public good.

Legal and Ethical Issues

The FSB also provided an analysis of certain legal issues that arise in the use of AI and machine learning with big data, specifically in the context of data protection and data ownership rights. The FSB highlighted the efforts in several jurisdictions to adopt guidelines for the protection of data ownership and privacy.[2] Some jurisdictions are also assessing whether consumers should have the ability to understand certain techniques used in the application of AI and machine learning to credit systems. Other issues that arise in the use of AI and machine learning with big data include anti-discrimination laws and equal opportunity laws. The FSB noted that the use of AI and machine learning could lead to discriminatory practices and results, even without the inclusion of gender or racial information. Finally, liability issues could also arise, such as determining whether experts who rely on algorithms could be liable for their decisions.

Next Steps

The FSB noted that it will continue monitoring the uses of AI and machine learning in the financial markets, especially as the application of such technologies to the financial sector is growing.

AI and Financial Services at McCarthy Tétrault

In October 2017, McCarthy Tétrault released a White Paper on AI, “From Chatbots to Self-Driving Cars: The Legal Risks of Adopting Artificial Intelligence in Your Business”, in which we featured some preliminary research on AI in the financial services sector. In particular, we highlighted specific areas where we see the immediate incorporation of AI in financial services: investments and portfolio allocation, compliance and RegTech, and AI-powered chatbots.

[1] The FSB is an international body that monitors and makes recommendations about the global financial system. Its members include all G20 major economies (including Canada).

[2] In particular, the FSB referenced the OECD’s guidelines on the protection of privacy and cross-border uses and the European Union’s “General Data Protection Regulation” coming into force in 2018.

 

For more information about our firm’s Fintech expertise, please see our Fintech group’s page.

Canadian Competition Bureau Releases Fintech Report for Consultation

Posted in Big Data, FinTech, Open Banking
Jonathan Bitran, Ana Badour, Kirsten Thompson, Donald Houston, Michele F. Siu

On November 6, 2017, the Competition Bureau (Bureau) released a draft report on its market study into technology-led innovation in the Canadian financial services (Fintech) sector.[1] The Bureau has invited feedback from interested parties only until November 20, 2017, an uncharacteristically short comment period.

Financial services are an area of interest for the Bureau due to their significance to the Canadian economy and Canadian employment as well as the critical role they play in the daily lives of Canadians. While offering a variety of products and services across many financial service segments, Fintechs are typically internet-based and application-oriented, promising user-friendly, efficient consumer interfaces. The Bureau asserts that Fintech represents an opportunity to increase competition in Canada’s financial services sector, which in turn could potentially lead to lower prices and increased choice and convenience for consumers and small and medium-sized enterprises (SMEs). To that end, the report covers Fintech innovation in segments that directly impact consumers and SMEs: (i) payments and payment systems (e.g., mobile wallets), (ii) lending (e.g., crowdfunding), and (iii) investment dealing and advice (e.g., robo-advisors). The report specifically does not cover insurance, cryptocurrencies/blockchain, payday loans, loyalty programs, deposit-taking, accounting, auditing, tax preparation, large corporate, commercial or institutional investing and banking (e.g., pension fund management, mergers and acquisitions) or business-to-business financial services.

The report is intended as guidance for financial services sector regulators and policymakers. It is a dense report, but the Bureau’s core message is that regulation of Fintech is necessary to protect the safety, soundness and security of the financial system, but should not unnecessarily impede competition and innovation in financial services. Or as Goldilocks might say, regulation should be “just right”. In examining the regulatory barriers to entry in Fintech, the Bureau makes 11 key recommendations, summarized below, which are intended to modernize financial services regulation by reducing barriers to innovation and competition in order to encourage Fintech growth.

Bureau’s Recommendations For Pro-Competitive Financial Services Regulation

  1. Technologically-neutral. The Bureau asserts that regulation should be technology‑neutral and device‑agnostic to accommodate and encourage new (and yet‑to‑be developed) technologies. For example, requiring “wet” signatures (i.e., in person with a pen) prevents the use of new digital signature technology that also provides sufficient security.
  2. Principles-based. The Bureau asserts that regulation should be based on principles or expected outcomes and not strict rules on how to achieve the desired outcome. This is to allow for the implementation of new technologies, which might otherwise be barred by a prescriptive regime, while still protecting policy goals.
  3. Function-based. The Bureau asserts that regulation should be based on the functions carried out by an entity, not its identity (e.g., if a bank and a start-up are engaging in the same activity, they should face the same regulation with respect to that activity). This is to ensure that all entities have the same regulatory burden and consumers have the same protections when dealing with competing service providers.
  4. Proportional to risk. The Bureau asserts that regulation should be proportional to the risks that it aims to mitigate. Along with technology‑neutral, device-agnostic, principles‑based, and function‑based regulation, proportional regulation would level the playing field between Fintech entrants and incumbent service providers that offer the same types of services.
  5. National harmonization. The Bureau asserts that regulations should be harmonized across Canada. Although there has been improvement, a patchwork of provincial and federal regulations can make compliance unduly difficult and costly.
  6. Facilitate sectoral collaboration. The Bureau proposes that collaboration throughout the sector should be encouraged, including (i) among regulators to enable a unified approach, (ii) between the public and private sector to improve understanding of the latest services among regulators and of the regulatory framework among Fintech firms, and (iii) among industry participants to help bring more products and services to market (while avoiding anticompetitive collaborations). The UK, Australia, and Hong Kong currently facilitate such collaboration and the Bureau asserts that Canada should follow suit.
  7. Policy leadership. The Bureau proposes that a Fintech policy lead for Canada to facilitate Fintech development should be identified. The Fintech policy lead can then act as a gateway to other agencies, give Fintech firms a one‑stop resource and encourage investment in innovative businesses and technologies in the financial services sector.
  8. Facilitate access to core services. The Bureau supports promoting greater access to the financial sector’s core infrastructure and services to facilitate the development of Fintech services. Fintech firms often require access to core services (e.g., the payment system) in order to provide their services (e.g., bill payment app). Under the appropriate risk‑management frameworks, Fintech firms should be provided with access, so that regulation does not stifle useful services.
  9. Open banking. The Bureau supports embracing more “open” access to systems and data (also described as “open banking”). With appropriate customer consent and risk mitigation frameworks, the Bureau asserts that this will allow Fintech firms to access consumer banking information in order to develop bespoke price‑comparison tools and other applications that facilitate competitive switching by consumers. Looking abroad, the UK competition regulator has mandated the implementation of “open banking” (the Bureau does not have this authority). The Bureau has recognized the key role of data (specifically, big data) in Fintech and other sectors in its recently released draft paper for consultation, Big data and Innovation: Implications for competition policy in Canada (see our further comments on this paper). The comment period for this paper is open until November 17, 2017.
  10. Digital identification. The Bureau supports exploring the potential of digital identification for use in client identification processes. Digital identification could reduce the cost of customer acquisition (for new entrants and incumbent service providers), reduce the costs of switching for consumers and facilitate regulatory compliance where identity verification is needed.
  11. Continuing review. The Bureau supports continuing the frequent review of regulatory frameworks and the adaptation of regulation to changing market dynamics (e.g., consumer demand and advances in technology) to ensure they achieve their objectives in a way that does not unnecessarily inhibit competition.

The report comes in the context of a number of ongoing Fintech-related consultations and initiatives, including the recent announcement by the Government of Ontario that it would create a “regulatory super sandbox”, the launch earlier this year of regulatory sandboxes by the Canadian Securities Administrators, the modernization initiative of the Canadian payments system by Payments Canada, the federal consultation on the national retail payments oversight framework and the federal consultation on the federal financial sector framework.

The Bureau has clearly put significant thought and effort into this report. The impact it will have on financial services regulators and policymakers remains to be seen.

For more information about our Firm’s Competition and Fintech expertise, please see our Competition group’s and Fintech group’s pages.


[1] In May 2016 the Bureau announced it would launch this study. The Commissioner of Competition has emphasized the Bureau’s commitment to use its authority and jurisdiction to support Fintech innovation noting that “competitive intensity fosters innovation”. Earlier this year, the Bureau hosted industry stakeholders and federal and provincial regulators at a workshop to discuss the regulatory challenges faced by Fintech and possible approaches that could enhance the efficiency and effectiveness of Canada’s financial services sector.

U.S. Consumer Financial Protection Bureau Sets Out Principles for Consumer-Authorized Data Sharing and Aggregation

Posted in Big Data, FinTech, Open Banking
Kirsten Thompson

On October 18, 2017, the U.S. Consumer Financial Protection Bureau (“CFPB”) outlined the principles to be followed (the “Principles”) when consumers authorize third-party companies to access their financial data to provide certain financial products and services. These Principles will be of particular note to the Fintech sector, in which a significant number of companies incorporate some form of aggregation or sharing of consumer financial information into their business model.

The CFPB refers to this as the “consumer-authorized data-sharing market” and has stated its two-fold goal as intending to “help foster the development of innovative financial products and services, increase competition in financial markets, and empower consumers to take greater control of their financial lives”, while at the same time ensuring protection for consumers “that provide, use, or aggregate consumer-authorized financial data”.

The Principles line up quite closely with the ten Fair Information Principles that underlie Canadian federal privacy legislation (PIPEDA). Absent (or diluted) from the CFPB Principles are the Fair Information Principles regarding “Limiting Use, Disclosure and Retention”, “Limiting Collection” and “Identifying Purposes”. The CFPB Principles also attempt to address many of the same issues that arise in the mandatory “Open Banking” regime in the EU and the UK, but in a much less comprehensive manner.

Background

Under the Dodd-Frank Act, the CFPB was empowered to implement and enforce consumer financial law “for the purpose of ensuring that all consumers have access to markets for consumer financial products and services and that markets for consumer financial products and services are fair, transparent, and competitive.”[1] The CFPB was to exercise its authorities so that “markets for consumer financial products and services operate transparently and efficiently to facilitate access and innovation.”[2]

Increasingly, companies have been accessing consumer account data with consumers’ authorization and providing services to consumers using data from the consumers’ various financial accounts. Such “data aggregation”-based services include the provision of financial advice or financial management tools, the verification of accounts and transactions, the facilitation of underwriting or fraud-screening, and a range of other functions. This type of consumer-authorized data access and aggregation holds the promise of improved and innovative consumer financial products and services, enhanced control for consumers over their financial lives, and increased competition in the provision of financial services to consumers.

The CFPB’s interest in consumer data (and specifically Open Banking) was telegraphed by the Director of the CFPB in his remarks at the 2016 Money 20/20 conference, when he stated that the CFPB was “gravely concerned” that financial institutions were limiting or shutting off access to financial data, rather than “exploring ways to make sure that such access…is safe and secure.” (see our blog post on this here).

However, there are also challenges to this sharing of data – privacy, security and regulatory compliance being just a few. The CFPB notes that a range of industry stakeholders are working, through a variety of individual arrangements as well as broader industry initiatives, on agreements, systems, and standards for data access, aggregation, use, redistribution, and disposal. However, the CFPB believes that consumer interests must be the priority of all stakeholders as the aggregation services-related market develops.

The CFPB issued a Request for Information in 2016 to gather feedback from a wide range of stakeholders, including large and small banks and credit unions, their trade associations, aggregators, “fintech” firms, consumer advocates, and individual consumers.

The CFPB has now released its set of Consumer Protection Principles intended to reiterate the importance of consumer interests. They are, however, non-binding and not intended to alter, interpret, or otherwise provide guidance on existing statutes and regulations that apply.

1) Access

Consumers should be able, upon request, to obtain information in a timely manner about their ownership or use of a financial product or service from their product or service provider. Further, consumers should generally be able to authorize trusted third parties to obtain such information from account providers to use on behalf of consumers, for consumer benefit, and in a safe manner.

The CFPB expects that financial account agreements and terms of use will, among other things, “not seek to deter consumers from accessing or granting access to their account information.” Notably, “[a]ccess does not require consumers to share their account credentials with third parties”, which suggests that screen scraping mechanisms cannot be made mandatory.

2) Data Scope and Usability

The scope of data that can be consumer-authorized for access should be broad, according to the CFPB, and may include “any transaction, series of transactions, or other aspect of consumer usage; the terms of any account, such as a fee schedule; realized consumer costs, such as fees or interest paid; and realized consumer benefits, such as interest earned or rewards.” With this scope of information made available, consumers will be able to compare fees and the overall cost of banking at a particular company or institution.

3) Control and Informed Consent

The CFPB suggests that authorized terms of access, storage, use, and disposal be fully and effectively disclosed to the consumer, understood by the consumer, not overly broad, and consistent with the consumer’s reasonable expectations in light of the product(s) or service(s) selected by the consumer. While no explanation accompanies the statement, the CFPB states that firms should take steps to ensure “[c]onsumers are not coerced into granting third-party access.”

Furthermore, consumers must be able to readily and simply revoke authorizations to access, use, or store data. Similarly, consumers should be able to require “third parties to delete personally identifiable information.”

4) Authorizing Payments

The CFPB reminds firms that authorized data access, in and of itself, is not payment authorization. A separate and distinct authorization to initiate payments must be obtained. Providers that access information and initiate payments may reasonably require consumers to supply both forms of authorization to obtain services.

5) Security

The sharing of information can raise security concerns and the CFPB advises that consumer data are to be maintained “in a manner and in formats that deter and protect against security breaches and prevent harm to consumers.” Login and other access credentials are to be secured and “all parties that access, store, transmit, or dispose of data use strong protections and effective processes to mitigate the risks of, detect, promptly respond to, and resolve and remedy data breaches, transmission errors, unauthorized access, and fraud”. Further, firms should transmit data only to third parties that also have such protections and processes.

6) Access Transparency

Consumers should be informed of which of their authorized third parties are accessing or using information regarding their accounts. This can include the identity and security of each such party, the data they access, their use of such data, and the frequency at which they access the data.

7) Accuracy

Consumers should expect the data they access or authorize others to access or use to be accurate and current, and firms should have reasonable means to dispute and resolve data inaccuracies, regardless of how or where inaccuracies arise.

8) Ability to Dispute and Resolve Unauthorized Access

Consumers should also have reasonable and practical means to dispute and resolve instances of unauthorized access and data sharing, unauthorized payments conducted in connection with or as a result of either authorized or unauthorized data sharing access, and failures to comply with other obligations, including the terms of consumer authorizations. Interestingly, the CFPB advises that consumers “are not required to identify the party or parties who gained or enabled unauthorized access to receive appropriate remediation.”

9) Efficient and Effective Accountability Mechanisms

The CFPB advises that commercial participants should be accountable for the risks, harms, and costs they introduce to consumers. It is of the view that this helps align the interests of the commercial participants, and suggests such participants be “incentivized” and empowered to prevent, detect, and resolve unauthorized access and data sharing, unauthorized payments conducted in connection with or as a result of either authorized or unauthorized data sharing access, data inaccuracies, insecurity of data, and failures to comply with other obligations, including the terms of consumer authorizations.

Canada

The situation in Canada is not dissimilar, with various stakeholders and regulators on the one hand recognizing a need for innovation driven by consumer data access and on the other, the need to protect consumers and their data.

For instance, in March 2011, the Financial Consumer Agency of Canada (“FCAC”) issued a statement warning Canadians to be aware of the possible risks of disclosing their online banking and credit card information to financial aggregation services. Aside from the obvious data security and privacy risks, the FCAC cautioned that using such a service could also violate the terms and conditions of consumers’ account agreements (see our blog post on this here).

[1] 12 U.S.C. 5511(a).

[2] 12 U.S.C. 5511(b)(5).

For more information about our firm’s Fintech expertise, please see our Fintech group’s page.

Canadian Securities Administrators Issues Staff Notice providing Cybersecurity and Social Media Guidance

Posted in Cybersecurity
Kirsten Thompson, Eriq Yu

On October 19, 2017, the Canadian Securities Administrators (“CSA”), representing provincial and territorial securities regulators, issued CSA Staff Notice 33-321 – Cyber Security and Social Media (the “Notice”). The Notice serves to publish the results of the CSA’s survey of cybersecurity and social media practices of registered firms dealing in securities, including those registered as investment fund managers, portfolio managers, and exempt market dealers.

The survey was the result of a CSA initiative following the release of CSA Staff Notice 11-332 – Cyber Security in September 2016 in which CSA announced its intention to determine the materiality of cybersecurity risks. Social media and its surrounding challenges for registered firms were previously discussed in the CSA’s Staff Notice 31-325 – Marketing Practices of Portfolio Managers in 2011.

Importantly, issues concerning cybersecurity gain new prominence with the release of this Notice. The Notice emphasizes that addressing the risks posed by cyber threats and the use of social media is required to comply with business obligations imposed by Section 11.1 of National Instrument 31-103 (“NI 31-103”), the Instrument that outlines registrant requirements and obligations. Specifically, Section 11.1 requires registered firms to “establish, maintain and apply policies and procedures that establish a system of controls and supervision sufficient to provide reasonable assurance that the firm and each individual acting on its behalf complies with securities legislation and manage the risks associated with its business in accordance with prudent business practices.”

Over Half of Registered Firms Experienced a Cyber Security Incident

Conducted between October 11, 2016 and November 4, 2016, the survey drew responses from 63% of the 1,000 firms invited to participate. Overall, the survey found that 51% of firms experienced a cybersecurity incident in 2016, including phishing (43%), malware incidents (18%), and fraudulent email attempts to transfer funds or securities (15%).

The survey questions focused, among other things, on the areas of cybersecurity incidents, policies, and incident response plans; social media policies and practices; due diligence to assess the cybersecurity practices of third-party vendors and service providers; encryption and backups; and the frequency of internal cyber risk assessments.

Cybersecurity Policies, Procedures and Training

Specifically, for the areas identified, the survey found that:

  • Only 57% of firms have specific policies and procedures to address the firm’s continued operation during a cybersecurity incident.
  • Only 56% of firms have policies and procedures for cybersecurity training for employees.
  • 9% of firms have no policies and procedures concerning cybersecurity at all.
  • 18% of firms do not provide cybersecurity-specific training to employees.

Guidance: The resulting CSA guidance indicates that all firms should have policies and procedures that address, among others, the use of electronic communications; the use of firm-issued electronic devices; reporting cybersecurity incidents; and vetting third-party vendors and service providers. Training of employees on cyber risks, including the privacy risks associated with the collection, use, or disclosure of data, should take place with “sufficient frequency to remain current”, with a recognition that training more frequent than on an annual basis may be necessary.

Cyber Risk Assessments

The Survey found that most firms perform risk assessments at least annually to identify cyber threats. However, 14% of firms indicated that they do not conduct this type of assessment at all.

Guidance: In response, the CSA guidance indicates that firms should conduct a cyber risk assessment at least annually, including a review of the firm’s cybersecurity incident response plan to see whether changes are necessary. The risk assessment should include:

  • an inventory of the firm’s critical assets and confidential data, including what should reside on or be connected to the firm’s network and what is most important to protect;
  • what areas of the firm’s operations are vulnerable to cyber threats, including internal vulnerabilities (e.g., employees) and external vulnerabilities (e.g., hackers, third-party service providers);
  • how cyber threats and vulnerabilities are identified;
  • potential consequences of the types of cyber threats identified; and
  • adequacy of the firm’s preventative controls and incident response plan, including evaluating whether changes are required to such a plan.

Cybersecurity Incident Response Plans

On cybersecurity incident response plans, the Survey results indicated that 66% of firms have an incident response plan that is tested at least annually. However, a quarter of firms surveyed had not tested their incident response plans at all.

Guidance: The CSA guidance stipulates that firms should have a written incident response plan, which should include:

  • who is responsible for communicating about the cyber security incident and who should be involved in the response to the incident;
  • a description of the different types of cyber attacks (e.g., malware infections, insider threats, cyber-enabled fraudulent wire transfers) that might be used against the firm;
  • procedures to stop the incident from continuing to inflict damage and the eradication or neutralization of the threat;
  • procedures focused on recovery of data;
  • procedures for investigation of the incident to determine the extent of the damage and to identify the cause of the incident so the firm’s systems can be modified to prevent another similar incident from occurring; and
  • identification of parties that should be notified and what information should be reported.

Due Diligence on Third Party Providers

Almost all firms surveyed indicated they engaged third-party vendors, consultants, or other service providers. Of these firms, a majority conduct due diligence on the cyber security practices of these third parties. However, the extent of the due diligence conducted and how it is documented vary greatly.

Guidance: The CSA Guidance states that firms should periodically evaluate the adequacy of their cyber security practices, including safeguards against cyber security incidents and the handling of such incidents by any third parties that have access to the firms’ systems and data. In addition, firms should limit the access of third-party vendors to their systems and data.

Written agreements with these outside parties should include provisions related to cyber threats, including a requirement by third parties to notify firms of cyber security incidents resulting in unauthorized access to the firms’ networks or data and the response plans of the third parties to counter these incidents.

Where firms use cloud services, they should understand the security practices that the cloud service provider has to safeguard from cyber threats and determine whether the practices are adequate. Firms that rely on a cloud service should have procedures in place in the event that data on the cloud is not accessible.

Data Protection

Encryption is one of the tools firms can use to protect their data and sensitive information from unauthorized access. However, the survey responses indicate a sizeable number of firms do not use any encryption or rely on other methods of data protection, such as password protected documents. In addition, almost all firms surveyed indicated they back up data, but the frequency of such back ups varied.

Guidance: The CSA’s view is that encryption protects the confidentiality of information because only authorized users can view the data. In addition to using encryption for all computers and other electronic devices, the CSA expects firms to require passwords to gain access to these devices, and recommends that so-called “strong” passwords be required and changed with some frequency.
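
By way of illustration only (a generic Python sketch using the open-source cryptography package; the CSA does not prescribe any particular tool), encrypting data at rest means that only holders of the key can recover the information:

```python
# Minimal sketch of encrypting client data at rest, using the open-source
# "cryptography" package's Fernet (AES-based) scheme. Key management
# (secure storage, rotation) is the hard part in practice and is omitted here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # would live in a secure key store
cipher = Fernet(key)

record = b"client: Jane Doe, account: 12345"
token = cipher.encrypt(record)       # ciphertext is safe to store or back up

assert cipher.decrypt(token) == record   # only key holders can recover it
```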

Where firms provide portals for clients or other third parties for communication purposes or for accessing the firm’s data or systems, firms should ensure the access is secure and data is protected.

Firms are expected to back up their data and regularly test their back-up process. Also, when backing up data, firms should ensure that the data is backed up off-site to a secure server in case there is physical damage to the firms’ premises.

Cyber Insurance

A majority of firms (59%) do not have specific cyber security insurance and for those that do, the types of incidents and amounts that their policies cover vary widely.

Guidance: The CSA guidance states that firms should review their existing insurance policies (e.g., financial institution bonds) to identify which types of cyber security incidents, if any, are covered. For areas not covered by existing policies, firms should consider whether additional insurance should be obtained.

Social Media

The focus of this part of the Notice was on the fact that social media may be used as a vehicle to carry out cyber attacks. For example, social media sites may be used by attackers to launch targeted phishing emails or links on these sites may lead to websites that install malware.

For social media specifically, firms should review, supervise, retain, and have the ability to retrieve social media content. Policies and procedures on social media practices should cover:

  • the appropriate use of social media, including the use of social media for business purposes;
  • what content is permitted when using social media;
  • procedures for ensuring that social media content is current;
  • record keeping requirements for social media content; and
  • reviews and approvals of social media content, including evidence of such reviews and approvals.

In addition, given the ease with which information may be posted on social media platforms, the difficulty of removing information once posted and the need to respond in a timely manner to issues that may arise, the CSA states that firms should have appropriate approval and monitoring procedures for social media communications. This applies even if firms do not permit the use of social media for business purposes, because policies and procedures should be in place to monitor for unauthorized use.

Next Steps

The Notice advises that CSA staff will continue to review the cyber security and social media practices of firms through compliance reviews. It notes further that CSA staff will apply the information and guidance in this Notice when assessing how firms comply with their obligations to manage the risks associated with their business as set out in NI 31-103.

Firms registered to deal in securities are advised to adopt cybersecurity policies and procedures, including an incident response plan, to ensure compliance with registrant obligations under NI 31-103. The Notice underscores that cyber threats are ever-changing and preparedness and vigilance are key to ensure risk mitigation.
For more information, see McCarthy Tétrault’s Cybersecurity Risk Management – A Practical Guide for Businesses.

Basel Committee on Banking Supervision Issues Consultative Document Highlighting Implications of Fintech on Banks

Posted in AI and Machine Learning, Big Data, Cybersecurity, FinTech, Payments, Privacy
Brianne Paulin

On August 31, 2017, the Basel Committee on Banking Supervision (the “BCBS”) published a consultative document on the implications of Fintech for the financial sector. The consultative document was produced by the BCBS’s task force mandated with identifying trends in Fintech developments and assessing the implications of those developments for the financial sector.

Parts I and II of the consultative document provide an overview of current trends and developments in Fintech. The report assesses Fintech developments, presents forward-looking scenarios, and includes case studies to better present individual risks and the potential impact of the forward-looking scenarios on banks.

The main findings of the study are presented in Part III, summarized in 10 key observations and recommendations for banks and supervisors, which will be the focus of this blog post.

Key Observations and Recommendations: Implications for Banks and Banking Systems

  1. Banking risks may change over time with the emergence of new technologies.

Banks will need to adapt to new risks emanating from the introduction of new technologies in the financial sector without limiting potential benefits stemming from such technologies. Fintech innovations have the potential to benefit both the bank, by lowering banking costs, allowing for faster banking services and facilitating regulatory compliance, and consumers, by improving access to financial services, tailoring banking services to individual needs and allowing new competitors to join the market.

  2. Key risks for banks include “strategic risk, operational risk, cyber-risk and compliance risk.”[1]

Banks must implement appropriate risk management processes and governance structures to address new risks arising from innovative technologies, including operational risks, data protection and anti-money laundering (“AML”) risks. The report recommends the adoption of the Principles for sound management of operational risk (“PSMOR”)[2] to effectively respond to these risks.

  3. Emerging technologies bring benefits to the financial sector but also pose new risks for banks.

BCBS undertook an in-depth study of the impacts of three Fintech-enabling technologies on the banking industry: artificial intelligence/machine learning/advanced data analytics, distributed ledger technology and cloud computing. Banks will need to adapt risk management plans to address such enabling technologies by implementing effective IT and risk management plans.

  4. Banks increasingly outsource operational support for technology-based financial services to third parties but risks ultimately remain with the bank.

Banks will need to ensure that risk management plans are extended to any operations outsourced to a third party. This will require adapting operational risk management plans to third parties, including Fintech firms.

  5. Fintech innovations will require greater supervision and further cooperation with public authorities to ensure compliance with regulations, such as data privacy, AML and consumer protection.

The emergence of new enabling technologies in the banking sector provides an opportunity for bank supervisors to further cooperate with public authorities responsible for the oversight of the financial sector and Fintech. Cooperation will facilitate the identification of new risks and facilitate supervision of important risks, including consumer protection, data protection, competition and cyber-security.

  6. Fintech companies can operate across borders. International cooperation between banks and bank supervisors is essential.

The BCBS noted that Fintech firms currently operate mostly at a national level. However, the opportunities for cross-border services are plentiful, and as Fintech firms expand their operations, bank supervisors will need to ensure a level of international cooperation with other bank supervisors.

  7. Technology can bring important changes to traditional banking models. Supervision models need to be adapted to these emerging banking models.

Bank supervisors should ensure that staff are well equipped to deal with the changing technology. Staff should be trained to identify and monitor new and emerging risks associated with innovative technologies and new banking systems.

  8. Banks should harness emerging technologies, such as AI, to increase their efficiency in responding to Fintech-related risks.

Bank supervisors should determine how to use Fintech innovations to better supervise and monitor Fintech-related risks and new banking technologies.

  9. Current regulatory frameworks were adopted before the emergence of Fintech innovations. “This may create the risk of unintended regulatory gaps when new business models move critical banking activities outside regulated environments or, conversely, result in unintended barriers to entry for new business models and entrants.”

The BCBS recommends that supervisors review their regulatory frameworks to ensure that regulations protect consumers but do not create barriers to entry for Fintech firms. The BCBS found that many Fintech firms operate outside the realm of traditional banking, and thus, traditional regulatory approaches may not be appropriate for such firms. Regulatory barriers, however, could push Fintech firms to operate outside of the regulated financial industry, causing significant risks to consumers.

  10. Government authorities in some jurisdictions have partnered with Fintech firms to facilitate the use of financial technologies while ensuring adequate regulatory safeguards for financial stability.

The BCBS found that several government authorities have put in place initiatives to help Fintech companies navigate the regulatory requirements of the financial sector. Bank supervisors should monitor developments in other jurisdictions to learn and implement similar approaches, if appropriate.

 

[1] The report identifies these risks for both incumbent banks and new Fintech entrants into the financial industry.

[2] See: http://www.bis.org/publ/bcbs292.pdf

 

For more information about our firm’s Fintech expertise, please see our Fintech group’s page.

Project Jasper Update: White Paper Release and Phase 3 Announcement

Posted in Authentication, Blockchain, Financial, FinTech, Identity, Payments, Privacy
Andrea Schneider

Project Jasper is an experiment being done by the Bank of Canada, Payments Canada and R3 to test the viability and feasibility of using Distributed Ledger Technology (“DLT”) as the basis for wholesale interbank payment settlements. This project was launched in March 2016 and has completed two phases. Phase 1 of Project Jasper employed the Ethereum platform as the basis for the DLT, while Phase 2 employed the custom-designed R3 Corda platform. In June 2017, the Bank of Canada issued a report on its preliminary findings from Project Jasper, which were summarized in our previous article. On September 29, 2017, the Bank of Canada, Payments Canada, and R3 released a white paper outlining their detailed findings from Project Jasper. This article elaborates on our previous article based on the findings from the white paper and discusses the next steps for Project Jasper.

Key Merits and Considerations of Project Jasper

End-to-End Settlement

Project Jasper was premised on the idea that payment settlement is the final leg of most economic transactions, but also that other areas of the contract chain have the potential to be supported by DLT. For example, “smart contracts” can be used to codify the terms and conditions of an agreement and can be automatically executed once certain conditions are met. Based on the experience from Project Jasper, the primary benefit of a DLT interbank cash payment platform would be an “end-to-end” settlement, meaning that the DLT arrangements for payment settlement would be aligned with other DLT arrangements within the same economic contract.
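
As a rough illustration of the concept (a generic Python sketch, not the actual Corda or Ethereum code used in Project Jasper), a smart contract pairs the terms of a deal with code that settles automatically once the agreed conditions are verified:

```python
# Toy sketch of a "smart contract": the terms of a deal plus code that
# executes the settlement automatically once the agreed conditions are met.
# Illustrative only; this is not how Jasper's Corda contracts are written.
from dataclasses import dataclass

@dataclass
class DeliveryVsPayment:
    buyer: str
    seller: str
    price: int          # amount of the cash leg
    asset_id: str       # the security being delivered

    def settle(self, asset_delivered: bool, cash_escrowed: int) -> str:
        # Both legs must be in place; otherwise nothing moves (atomicity).
        if asset_delivered and cash_escrowed >= self.price:
            return f"{self.asset_id} -> {self.buyer}; {self.price} -> {self.seller}"
        return "settlement not triggered"

dvp = DeliveryVsPayment("Bank A", "Bank B", price=100, asset_id="BOND-1")
print(dvp.settle(asset_delivered=True, cash_escrowed=100))
```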

Settlement Risk

Principle 8 of the Principles for Financial Market Infrastructures (“PFMIs”) requires that a settlement must be final and irrevocable. Settlement finality in Phase 1 was “probabilistic” because, under a proof-of-work consensus, a payment recorded in the blockchain could later fail to remain in it. To address this and improve settlement finality, Phase 2 introduced a notary node to be managed by a trusted third party. The Bank of Canada served as the notary node and was responsible for confirming the uniqueness of a transaction to avoid double spending. The second requirement under the PFMIs is that there be a full and irreversible transfer of an underlying claim in central bank money. To meet this, Project Jasper created a digital depository receipt (“DDR”) as a digital settlement asset, which represented a claim to central bank deposits. The strength of the legal basis for settlement finality remains to be tested.
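
To illustrate the notary’s role in simplified form (Corda’s actual uniqueness service is considerably more involved than this sketch), the notary records which transaction first consumed each input state and rejects any later transaction that attempts to spend the same state:

```python
# Simplified sketch of a notary node preventing double spends: it records
# which transaction consumed each input state and rejects re-spends.
# Illustrative only; Corda's real uniqueness service is more involved.
class Notary:
    def __init__(self):
        self._consumed = {}   # input state id -> id of consuming transaction

    def notarise(self, tx_id: str, input_states: list[str]) -> bool:
        conflicts = [s for s in input_states if s in self._consumed]
        if conflicts:
            return False      # at least one input was already spent
        for s in input_states:
            self._consumed[s] = tx_id
        return True

notary = Notary()
assert notary.notarise("tx1", ["ddr-001"]) is True
assert notary.notarise("tx2", ["ddr-001"]) is False   # double spend blocked
```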

Operational Resilience and Efficiency

Project Jasper used a “permissioned” DLT, meaning that only those approved could use the exchange. This allows regulation of users and allows consensus to be achieved more quickly than with a public ledger. DLT solutions can also reduce the number of errors and duplications compared to the incumbent, manual systems in Canada because parties are required to reach a consensus before a transaction is posted. However, due to the limited implementation of the project, it is difficult to assess whether DLT is more operationally efficient than the current system.

The white paper also considered the operational resiliency of Project Jasper and noted the following:

Capacity

Phase 1: The maximum processing capacity was 14 transactions per second, which is similar to incumbent systems, meaning there are constraints for future volume increases.

Phase 2: There is capacity for volume increases, in part because only the transacting parties, a supervisory node and the notary node are required to validate and record transactions (vs. the requirement for majority consensus in Phase 1).

Availability and Cost

Phase 1: The proof-of-work consensus allows for high availability at a lower cost. This is because of the sharing of databases across all participants in the proof-of-work consensus and the backing up of ledgers by all participants.

Phase 2: To increase data privacy, each participant had a proprietary ledger. This creates challenges for data replication across the network.

Risk

Phase 1: The consensus protocol requires agreement of a majority of R3 members, meaning there could not be a single point of failure.

However, this does not eliminate the need for participants to back up their data. Due to the confidentiality of the information, in the event of a failure, participants would be unlikely to share data.

Phase 2: Both the notary and supervisory nodes are needed for consensus, therefore increasing the risk of a single point of failure. To mitigate this risk, participants will need to back up their data.


Potential Applications & Benefits of DLT to the Payments Industry

Reduction of Disputes and Errors

A single payment or file transfer can involve many participants, and therefore may be recorded by multiple financial institutions. This can lead to errors, duplication and, inevitably, disputes. DLT requires multiple parties to reach an agreement on the legitimacy of a transaction before it can be posted. While this is a recognized benefit of DLT, the overall operational efficiency of this benefit compared to the incumbent system has not been measured.

Improved Back-Office Efficiency

After the parties reach a consensus, a single record of the transaction is made, eliminating the need for internal record keeping by each party. Project Jasper found that DLT is not necessarily more efficient on a domestic level than the current LVTS system; however, the analysis did not account for the back-office work that might be avoided by individual financial institutions if DLT is used. Significant resources are expended in back-office reconciliations, so there may be significant cost savings that have not yet been considered.

Regulatory Compliance

DLT has the potential to assist with regulatory compliance, particularly with anti-money laundering (“AML”) and anti-terrorism financing (“ATF”) regulations for cross-border transactions where counterparty risk can run high. In the current system, false positives in relation to AML/ATF are a problem as they can take weeks or months to resolve. DLT has the potential to allow for easier reconciliation of such payments in order to legitimize a transaction because of the trusted ledger created. These benefits could extend to other regulatory compliance as well.

Transparency vs. Privacy

In the traditional clearing and settlement process, there is a central database. The DLT used in Phase 2 allows for privacy between the financial institutions, each of which can view only its own proprietary ledger. However, the holders of the supervisory and notary nodes can view all transactions, and therefore can monitor activity and perform the traditional function of a central database. In Phase 2, the Bank of Canada held the supervisory and notary nodes.
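
A toy sketch of this visibility split, using hypothetical names and data: ordinary participants see only transactions they are party to, while the supervisory role sees everything.

```python
# Hypothetical sketch of the Phase 2 visibility split: participants see only
# their own transactions; the supervisory/notary role sees all of them.

ALL_TXS = [
    {"from": "Bank A", "to": "Bank B", "amount": 100_000},
    {"from": "Bank B", "to": "Bank C", "amount": 200_000},
]

def visible_transactions(viewer, is_supervisor=False):
    """Return the slice of the ledger a given node is permitted to see."""
    if is_supervisor:
        return list(ALL_TXS)  # central-bank-style oversight of all activity
    return [tx for tx in ALL_TXS if viewer in (tx["from"], tx["to"])]

print(visible_transactions("Bank A"))                # Bank A's own deals only
print(visible_transactions("Bank of Canada", True))  # the full ledger
```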

Improved Automation through use of Smart Contracts

As previously discussed, significant benefits can be obtained where DLT is used for end-to-end settlement through smart contracts. The system created by Project Jasper could be the basis upon which other DLT platforms are built for a variety of transactions, such as settling financial asset transactions, managing syndicated loans and supporting trade finance.
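
As a rough illustration of why smart contracts matter for end-to-end settlement, consider delivery-versus-payment: the cash leg and the securities leg either both settle or neither does. The sketch below is a hypothetical simplification of that logic, not Jasper code or any real smart contract language.

```python
# Hypothetical delivery-versus-payment logic of the kind a smart contract
# could encode: both legs of a trade settle atomically, or neither does.

def settle_dvp(cash, securities, buyer, seller, price, quantity, symbol):
    """Swap cash for securities atomically; raise (changing nothing)
    if either party cannot perform."""
    if cash.get(buyer, 0) < price:
        raise ValueError("buyer lacks funds: neither leg settles")
    if securities.get(seller, {}).get(symbol, 0) < quantity:
        raise ValueError("seller lacks securities: neither leg settles")
    # Both preconditions hold, so both legs execute together, removing the
    # risk that one party pays but never receives the asset.
    cash[buyer] -= price
    cash[seller] = cash.get(seller, 0) + price
    securities[seller][symbol] -= quantity
    buyer_holdings = securities.setdefault(buyer, {})
    buyer_holdings[symbol] = buyer_holdings.get(symbol, 0) + quantity

cash = {"Buyer Bank": 1_000_000, "Seller Bank": 0}
securities = {"Seller Bank": {"XYZ": 10_000}}
settle_dvp(cash, securities, "Buyer Bank", "Seller Bank",
           price=750_000, quantity=10_000, symbol="XYZ")
print(cash, securities)
```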

Conclusions from Phase 1 and Phase 2

The key conclusion from Phases 1 and 2 of Project Jasper is that DLT platforms employing a “proof-of-work” consensus protocol, as used in Phase 1, do not deliver the required settlement finality and low operational risk. While Phase 2 delivered improvements in settlement finality, scalability and privacy, it did not adequately address operational risk requirements. Further evaluation and enhancements will be needed to satisfy the PFMIs. On a global scale, the white paper recommends that the focus be on developing protocols for interoperability between DLT platforms.

Overall, Project Jasper is an example of the benefits of collaboration within the payments industry. The concentrated Canadian financial industry is particularly conducive to such collaboration, which is now being extended into Phase 3.

Phase 3

On October 17, 2017, Payments Canada, the Bank of Canada and TMX announced the third phase of Project Jasper. This phase will build on the first two and develop a proof of concept for the clearing and settlement of securities. Phase 3 will explore an end-to-end settlement process by integrating the securities and payments infrastructure, including the ability to settle multiple assets on the same ledger. The objectives of this phase are to reduce the cost of securities transactions, increase efficiency and reduce settlement risk. The results are expected to be released at the Payments Canada Summit in May 2018.

For more information about our firm’s Fintech expertise, please see our Fintech group’s page.

Here We Go Again: Schrems 2 Puts the Model Clauses for Transfer of EU Personal Data in Doubt

Posted in European Union, Privacy
Keith Rose

On October 3, 2017, the High Court of Ireland rendered a decision in The Data Protection Commissioner v. Facebook Ireland Limited & anor, [2017] IEHC 545.  This decision, which could well be labeled Schrems 2,  is effectively a sequel to the original Schrems decision, based on the same underlying facts and issues.  In this most recent decision, the High Court has granted a request from the Irish Data Protection Commissioner (“DPC”) for a reference to the CJEU for a ruling on the validity of the so-called “Model Clauses” (or “Standard Contractual Clauses”) for transfer of EU personal data to the US.  In so doing, it has set in motion a potentially drastic shake-up of the existing order for export of EU personal data, which could ultimately have far broader consequences than the first Schrems decision.

Background

Under EU law, an organization may only transfer “personal data” about an individual to a non-EU country for processing if the destination country “ensures an adequate level of protection”.  The European Commission has the authority to make a determination of whether the protections afforded to personal data in a given third country are or are not “adequate” in this regard.

In some cases “adequacy” decisions apply broadly.  In the case of Canada, for example, the Commission concluded that Canadian privacy laws were sufficiently similar to European laws to be inherently adequate.[1]  But the US has a very different legal regime in this regard.  As a result, the Commission has taken a more case-specific approach, considering incremental measures that can be applied by the exporting and importing organizations.

The Commission has recognized three bases for lawful transfer of EU personal data to the US:

  • A voluntary arrangement, originally known as “Safe Harbour”, by which U.S. organizations self-certify compliance with certain privacy principles;
  • Standardized contractual commitments between the data controller and data processor, based on approved “Model Clauses”; and
  • Similar commitments adopted in binding non-contractual rules applicable within a corporate group (so-called “Binding Corporate Rules”).

In the wake of the 2013 Snowden revelations about US data surveillance programs, Austrian law student Max Schrems brought a complaint against Facebook in Ireland, arguing that Facebook’s transfer of his personal information to the US was unlawful under both Irish and EU law.  This case was eventually referred to the Court of Justice of the European Union (“CJEU”), which struck down the Safe Harbour regime.  (See previous posts detailing this decision and its fallout here, here, here, and here.)

Following this decision, Facebook purported to rely on contractual commitments as the basis for its transfer of personal data to the US.  Mr. Schrems renewed and reformulated his original complaint, alleging both that Facebook’s specific contracts did not meet the obligations of EU law and that, in any case, the contracts could not provide adequate protection where national laws of the third country would override them.

The Decision

The fundamental issue before the Irish High Court was whether to refer the Commission’s decisions on the adequacy of the Model Clauses to the CJEU.  The decision is long and complex.  It canvasses a number of threshold issues before engaging in a methodical assessment of the law applicable to US state access to personal data in the hands of data processors, for national security purposes.

The court’s principal findings and conclusions include the following.

  • The exclusion from the EU directive of data processing for national security purposes did not put the entire matter outside of the competence of the CJEU: the court concluded that the existing jurisprudence clearly contemplated that US national security surveillance programs were open to scrutiny and challenge under EU law.
  • The Commission’s Privacy Shield decision did not close the subject. On the contrary, the first Schrems decision made it clear that national data protection authorities and courts had an obligation to refer “well founded” doubts as to the validity of a Commission decision to the CJEU for a preliminary ruling.
  • The adequacy of the Model Clauses cannot be assessed in a vacuum. If there are fundamental inadequacies in US laws, from the perspective of EU law, the Model Clauses cannot compensate for them because they cannot bind the sovereign authority of the US and its agencies.
  • Many of the statutory protections and remedies that would apply to US persons are not available to EU citizens who are not US citizens or residents.
  • The legal effect of the Trump administration’s executive order directing agencies to ensure that their privacy policies exclude persons who were not US citizens or lawful permanent residents from the protections of the Privacy Act is uncertain; however, it signals a change in policy from the previous administration, which had expanded administrative protections for non-US personal information.
  • There are “a variety of very significant barriers to individual EU citizens obtaining any remedy for unlawful processing of their personal data by US intelligence agencies”. In particular, under US case law, an objectively reasonable likelihood that one has been subjected to surveillance is not sufficient to establish legal standing.  Actual evidence that one has been the subject of a secret surveillance program will necessarily be difficult to come by.
  • The right to an effective remedy under Article 47 of the Charter of Fundamental Rights of the European Union had to be considered in a systematic way, without a threshold need to prove a specific violation of some other Charter right.
  • On this fundamental point, the court’s conclusion was damning: “To my mind the arguments of the DPC that the laws – and indeed the practices – of the United States do not respect the essence of the right to an effective remedy before an independent tribunal as guaranteed by Article 47 of the Charter, which applies to the data of all EU data subjects transferred to the United States, are well founded.” [See para. 298.]
  • Furthermore, the Ombudsperson mechanism established by the US as part of the negotiations leading to the adoption of the Privacy Shield program did not fill the gap. The court had significant concerns about the independence of this office and, in any case, it could not offer any remedy to the individual concerned.  Indeed, it could not even confirm whether or not the individual had been subject to any electronic surveillance.

Implications

While not entirely unexpected, this decision may well be a game-changer, and could easily turn out to be even more significant than the first Schrems decision.  If confirmed by the CJEU, the logic of the High Court’s analysis of US and EU law carries far beyond Facebook’s data processing agreement, or even the Model Clauses themselves.  The High Court’s interpretation and application of Article 47 of the EU Charter makes it hard to imagine that any of the recognized bases for lawful transfer of EU personal data to the US could survive without fundamental changes to US law, changes the US has already rejected under a political climate that was more open to international cooperation.  While the original Schrems decision affected only the Safe Harbour regime, this decision may pull all of the legs out from under the stool at once.

The High Court has not yet determined the precise questions that will be referred to the CJEU.  All of the parties had requested the opportunity to make further submissions on that point in the event that the court determined to make a reference and the court has agreed to hear those submissions.  Once the reference is made, it will likely be about two years before the CJEU renders a decision.  During that time, the GDPR will come into force, increasing the substantive divide between EU and US privacy law.

Furthermore, the US is by no means the only country with secretive national security programs that are largely shielded from public oversight or individual accountability.  If the CJEU confirms that Article 47 of the EU Charter requires individual remedies for EU data subjects against foreign national security agencies as a precondition for any transfer of personal data, the practical consequences will be dramatic.

[1] This assessment is currently under review.  Some have questioned whether it will remain valid, particularly after the General Data Protection Regulation (“GDPR”) comes into force in May 2018.

Europeans Express Positive Views on AI and Robotics: Report on Preliminary Results from Public Consultations

Posted in AI and Machine Learning, Big Data, European Union, Privacy
Carole Piovesan and Paulina Bogdanova

On October 6, 2017, the European Parliament released its preliminary findings on its public consultation on robotics and artificial intelligence. The consultations resulted in 298 responses reflecting public perceptions about the risks and benefits of AI technology. According to the EU Committee website, the results of the consultation will inform the Parliament’s position on ethical, economic, legal, and social issues arising in the area of robotics and artificial intelligence for civil use.

Among the key findings was strong support for a central EU regulatory body, in part to protect “EU values” (especially data protection, privacy and ethics) and to address significant public concern regarding data protection.

Background

The European Parliament’s Committee on Legal Affairs set up a working group in 2015 with the aim of drawing up “European” civil law rules regarding robots and artificial intelligence. While the European Commission has the right to initiate legislation, the Parliament can draft a motion for a resolution which, if passed, can prompt the Commission to bring forward a legislative proposal.

The Parliament passed a resolution on February 16, 2017 titled “Civil Law Rules on Robotics”, asking the Commission to propose rules on robotics and artificial intelligence, in order to fully exploit their economic potential and to guarantee a standard level of safety and security.

The Parliament’s goal appears to be to place the EU at the forefront of developing regulation for artificial intelligence and robots, in part to ensure that human rights and ethical concerns are protected and that EU values (especially data protection, privacy and ethics) remain paramount.

The Parliament also proposed a Charter on Robotics, annexed to the resolution, comprising a code of ethical conduct for robotics engineers, a code for research ethics committees, and licences for designers and users.

The resolution called on the European Commission to propose legislation on various topics including:

  • General principles concerning the development of robotics and artificial intelligence for civil use – for example by creating a classification system for robots (see para. 1);
  • Research and innovation guidelines (see paras. 6-9);
  • Ethical principles (see paras. 10-14);
  • Creating a “European Agency for Robotics and Artificial Intelligence” (see paras. 15-17);
  • Intellectual property rights and the flow of data (see paras. 18-21);
  • Standardization, safety and security – for example by harmonising technical standards (see paras. 22-23);
  • Autonomous means of transportation (see paras. 24-30);
  • Creating a specific legal status for robots in the long run, in order to establish who is liable if they cause damage;
  • Environmental impact (see paras. 47-48); and,
  • Liability related to robots[1] – for example, to clarify liability issues for self-driving cars (see paras. 49-59), and to create a mandatory insurance scheme and a supplementary fund to ensure that victims of accidents caused by driverless cars are compensated (see para. 57).

In May 2017, the European Commission published a preliminary response to some of Parliament’s recommendations. While the Commission agreed with many of Parliament’s suggestions, it has not yet made any legislative proposals on these issues.

Overall, the Commission agreed with the Parliament that there is a “need for legal certainty as to the allocation of liability” in the context of new technologies. To this end, the Commission “intends to work with the European Parliament and the Member States on an EU response.”

The Commission noted that it awaits the results of the Parliament’s public consultation, and that it will conduct its own public consultation and stakeholder dialogue on these issues.

Results of Public Consultation

The preliminary results of the Parliament’s public consultation were released on October 6, 2017. A PowerPoint summarizing the results is available here. The public consultations were open to all EU citizens and consisted of one general public survey and one survey targeted to a “specialized” audience. The trends emerging from the consultations showed:

  • the vast majority of respondents have positive views on robotics and AI developments but want careful management of the technology;
  • despite the positive attitude towards the technology, the majority of respondents are concerned about privacy interests and the possible threat of AI and robotics to humanity;
  • 90% of respondents support public regulation of robotics and AI with only 6% against regulation and 4% noted as “other”;
  • reasons given in support of public regulation include:
    • avoid abuse by industry;
    • need to address concerns about ethics, human rights, data protection and privacy;
    • need to set common standards for industry to have certainty; and,
    • consumer protection.
  • reasons given against public regulation include:
    • too soon to regulate emerging technology;
    • harms competitiveness;
    • hinders innovation and creativity; and,
    • general skepticism with regulation.
  • 96% of respondents agree that international regulation of AI and robotics is desirable as well;
  • the top four reasons in support of EU-wide regulation of AI and robotics are:
    • data protection;
    • values and principles;
    • liability rules; and,
    • EU competitiveness.
  • public opinion on which sectors are in urgent need of EU-wide regulation is almost equally divided among (a) autonomous vehicles; (b) medical robots; (c) care robots; (d) drones; and (e) human repair and enhancement.

A summary report of the findings of the public consultation will be publicly available in due course.

Interestingly, European public opinion appears to be much more positive towards automation technologies than U.S. public opinion, based on the results of a recently released report by the Pew Research Center. The Center surveyed 4,135 U.S. adults between May 1 and May 15, 2017, and found that “Americans generally express more worry than enthusiasm when asked about these automation technologies.” A summary of the report is available here.

______________

[1] European Parliament resolution of 16 February 2017, paras. 49-59.

McCarthy Tétrault Event: Big Data Seminar – October 18th, 2017

Posted in Big Data, Competition, Privacy

The second part of McCarthy Tétrault’s Transformative Technologies Series explores the asset that underpins many of today’s transformative technologies: big data.

This seminar will provide an overview of some of the pressing legal questions businesses are facing as big data takes centre stage. Businesses are increasingly harnessing big data in ways that drive innovation and quality improvements across a range of industries.

With Canada’s federal privacy legislation currently under review and the Competition Bureau’s September 18, 2017 release of its consultation paper “Competition Bureau – Big data and Innovation”, data is not only a driver of innovation; it can also present legal and regulatory challenges, both for businesses and for regulators.

Topics to be covered during this session are:

  • Privacy: How can companies be sure consumer consent is valid for big data applications, both those in use today and those that won’t be known until sometime in the future? Does aggregation solve privacy problems? Does de-identification? How can businesses fulfil transparency and accountability obligations to customers when dealing with big data? How does a business working with a third-party provider (e.g., a cloud services or data analytics provider) demonstrate a “comparable level of protection”? With an evolving global privacy landscape (the General Data Protection Regulation (GDPR) comes into force in May 2018), what are the potential directions for Canada?
  • Competition: The growth of the digital economy means the rise of business models based on “Big Data”. The use of big data by companies to develop products and services can generate substantial efficiency and productivity gains (e.g., improving decision-making and refining consumer segmentation and targeting). However, the acquisition and use of Big Data can raise competition issues, including allegations of abuse of dominance and even criminal cartel activity. Competition and privacy issues associated with Big Data may appear to conflict, and are currently before the Federal Court of Appeal in the TREB case. Find out how competition laws impact – and are likely to impact in the future – companies’ Big Data activities.
  • Managing Data: To be useful, data must be processed. This means organizations must find data in their systems (or from other sources), manage it appropriately, standardize it so it can be processed, refine it so it achieves the ends anticipated, monitor the outputs, and make decisions about what will and will not be shared, and with whom. Organizations face challenges at each step along the way, and there are better (and worse!) ways to approach them. Technical missteps can result in legal and regulatory issues.

Our speakers are:

  • Paul Johnson, T.D. MacDonald Chair in Industrial Economics from the Competition Bureau of Canada
  • Kirsten Thompson from McCarthy Tétrault 
  • Izabella Gabowicz, COO of Sensibill

We look forward to welcoming you! Interested in attending? Please contact us at clientevents@mccarthy.ca.

Date: 
Wednesday, October 18, 2017

Time: 
11:30 a.m. (EST) – Registration and Lunch
12:00 p.m. (EST) – 1:30 p.m. (EST) – Seminar

Location:
Toronto Office and Online

*Note: For those participants who cannot join us in person, we are offering this program via webinar. If you are interested in this alternative, please select the appropriate option during the online registration process. All instructions and information on how to access the webinar will be forwarded a few days before the event.

This program qualifies for up to 1.5 hours of eligible educational activity or CPD/MCE credit under the mandatory education regimes in British Columbia, Ontario and Québec.