CyberLex

Insights on cybersecurity, privacy and data protection law

Few “likes” for Facebook Forum Selection Clause: Supreme Court Finds “Strong Cause” to Not Enforce Forum Selection Clause

Posted in Class Actions, Privacy, Privacy Act, Social Media, Uncategorized
Jade Buchanan and Miranda Lam

Electronic terms of service govern billions of relationships worldwide, whether a user is joining a social media service, shopping online or accessing a blog. In each case, a binding contract is formed, the terms of which are usually set out in the website’s “terms of service”. But when a contract is made over the internet and there is later a dispute, whose law governs? What is the “forum” for the resolution of the dispute? What if the contract expressly designates a specific jurisdiction as the appropriate “forum”? In Douez v Facebook, Inc. (“Douez”), the Supreme Court of Canada refused to uphold the forum selection clause contained in Facebook, Inc.’s terms of service.

Background

The case involves Facebook, Inc. (“Facebook”) and the representative plaintiff in a proposed class action, Ms Deborah Douez. When Ms Douez joined and continued using Facebook, she agreed to terms of service which included a commitment to bring any claim against Facebook exclusively in Santa Clara County, California.

Ms Douez’ dispute with Facebook started when she found her name and image being used in Facebook’s “Sponsored Stories” product. She initiated proceedings under BC’s Class Proceedings Act with a proposed class of the approximately 1.8 million British Columbians who appeared in Sponsored Stories. The claim was based on Section 3(2) of BC’s Privacy Act:

(2) It is a tort, actionable without proof of damage, for a person to use the name or portrait of another for the purpose of advertising or promoting the sale of, or other trading in, property or services, unless that other, or a person entitled to consent on his or her behalf, consents to the use for that purpose.

Facebook brought a preliminary motion to dismiss the claim, citing the forum selection clause, which read as follows:

You will resolve any claim, cause of action or dispute (claim) you have with us arising out of or relating to this Statement or Facebook exclusively in a state or federal court located in Santa Clara County. The laws of the State of California will govern this Statement, as well as any claim that might arise between you and us, without regard to conflict of law provisions. You agree to submit to the personal jurisdiction of the courts located in Santa Clara County, California for purpose of litigating all such claims.

Facebook obtained a favourable decision from the British Columbia Court of Appeal. Ms Douez appealed to the Supreme Court of Canada.

Summary of the Majority Decision

A narrow 4-3 majority of the Court found that Facebook could not rely on its forum selection clause.

The Court did unanimously affirm that forum selection clauses should continue to be considered under the test established in Z.I. Pompey Industrie v ECU-Line N.V., 2003 SCC 27 (“Pompey”). The Pompey test involves two steps. First, the party seeking to rely on a forum selection clause must prove that it is clear, valid and enforceable as a matter of contract law. Second, once the forum selection clause is accepted as valid, the party asking the Court to not enforce the clause needs to show a “strong cause” for doing so based on “all the circumstances.”

The Court’s consensus ended at Pompey. Three members of the Court, Justices Karakatsanis, Wagner and Gascon, decided that Facebook had satisfied the first step of Pompey and that the forum selection clause was valid. However, they found Ms Douez had shown a strong cause for not enforcing the clause.

The strong cause was based on two main factors. The first was the power imbalance inherent in a unilaterally imposed contract (known as a contract of adhesion) between an individual consumer and one of the largest companies in the world. This power imbalance was heightened by the fact that “unlike a standard retail transaction, there are few comparable alternatives to Facebook.”

The second was the “quasi-constitutional” nature of the Privacy Act, which is intended to protect the privacy rights of individuals. The decision explained the importance of adjudicating constitutional and quasi-constitutional rights in Canada:

Canadian courts have a greater interest in adjudicating cases impinging on constitutional and quasi-constitutional rights because these rights play an essential role in a free and democratic society and embody key Canadian values. There is an inherent public good in Canadian courts deciding these types of claims. Through adjudication, courts establish norms and interpret the rights enjoyed by all Canadians.

In addition to the power imbalance and the quasi-constitutional nature of privacy legislation, the three Justices cited two additional factors. First, it was in the interest of justice for the case to be adjudicated in BC, where the Privacy Act would be enforced and the Court would be well-positioned to understand the intention of the Legislature. The decision also cited the “comparative expense and inconvenience” of litigating the claim in California rather than BC, which again favoured a strong cause.

A strong cause was not even required for Justice Abella, who wrote a separate decision that ultimately “broke the tie” amongst the seven justices and allowed Ms Douez’ appeal to succeed. She found that Facebook had not met the first Pompey step of showing the clause to be enforceable as a matter of contract law. Justice Abella concluded that the forum selection clause was void, relying on public policy, inequality of bargaining power and unconscionability.

In a dissenting opinion, Chief Justice McLachlin and Justices Moldaver and Côté were prepared to enforce the forum selection clause, finding that Ms Douez had not shown a strong cause.

Impact for Businesses

  • Forum selection clauses are still enforceable, even if they are not a silver bullet against being brought into litigation in unexpected places. Had Ms Douez been advancing a claim that did not impinge on “constitutional and quasi-constitutional rights” like those engaged by the Privacy Act, the forum selection clause may have been upheld. Indeed, six of the seven Supreme Court Justices found the clause itself valid; enforcement failed only because a “strong cause” was shown in this instance.
  • When engaging with personal information, consulting local privacy counsel is a must. Privacy legislation varies from province to province and failing to appreciate even slight differences can result in class action claims.

Impact on the Future of Internet Law

The only thing that can be said for certain is that the interaction of the internet and the law is likely to produce more decisions like Douez. In fact, the Supreme Court just released Google Inc. v Equustek Solutions Inc. et al., which addresses if and when a Canadian court can order a search engine to delist certain websites globally.

Further, Douez is unlikely to be the last word on the specific issue of forum selection clauses. The Pompey test may open future debates about “strong cause” in the context of consumer contracts. The opinions of the divided Court in Douez could be used to provide supporting arguments for both sides in a situation where the facts are just slightly different.

Lastly, this decision is just the end of the first chapter of the Douez saga. Facebook’s preliminary motion was rejected but the class action has yet to be certified, so there is more internet law to come.

“Not There Yet”: Bank of Canada Experiments with Blockchain Wholesale Payment System

Posted in FinTech
Maureen Gillis and Alexandru Trusca

The Bank of Canada has issued a report on Project Jasper, its recently completed experiment testing the viability of distributed ledger technology (DLT) as the basis for a wholesale payment system. The experiment was a combined effort by the Bank of Canada and Payments Canada, along with Bank of Montreal, Canadian Imperial Bank of Commerce, HSBC, National Bank of Canada, Royal Bank of Canada, Scotiabank and TD Canada Trust. The experiment revealed that such technology is not more beneficial, at least for now, than the current centralized system of wholesale payments. However, the successful proof-of-concept highlighted best practices for wide-scale public/private cooperation and uncovered other opportunities for the implementation of the technology within the financial industry.

Bank of Canada and Project Jasper

The Bank of Canada embarked on Project Jasper to learn more about the feasibility, benefits and challenges of using DLT as the basis for a wholesale interbank payment system. These systems are crucial mechanisms for the financial industry that allow large financial institutions to process payments to each other as well as to and from central banks. Canada’s wholesale payment system, the Large Value Transfer System (LVTS), is operated by Payments Canada and processes an average of $175 billion in payments each business day. Despite the large sums processed, LVTS payments are relatively simple and thus presented a reasonable starting point for practical testing of a DLT system.

Project Jasper involved the building and testing of a simulated wholesale payment system using a DLT-based settlement asset. The experiment’s dual objectives were to evaluate whether a test system could meet international standards for systemically important payments infrastructure, including the Principles for Financial Market Infrastructure (PFMIs), and to collaborate with the private sector on a practical DLT application.

In the first phase, the project team built a settlement capability on an Ethereum platform and demonstrated that the settlement asset could be exchanged between participants. The second phase was built on a Corda platform[i] and introduced a liquidity saving mechanism (LSM) in which only the net difference between transactions actually settles, mirroring the LSM function in the existing centralized system. Project Jasper required the creation of a novel LSM designed specifically for a distributed ledger, believed to be the first of its kind. The second phase also used a consensus system that allowed the Bank of Canada to serve as a notary with access to the entire ledger, allowing it to verify the funds involved in transactions. Relying on these two custom features built within the Corda platform, the Bank of Canada, along with Payments Canada, ran a set of simulated transactions.
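
To make the netting idea concrete, here is a minimal sketch of the kind of bilateral netting an LSM performs. It is illustrative only: the bank names, amounts and data structures are hypothetical, and Project Jasper’s actual Corda-based mechanism is considerably more sophisticated.

```python
# Illustrative bilateral netting, the core idea behind a liquidity saving
# mechanism (LSM): offsetting queued payments cancel out, and only the net
# difference between each pair of participants actually settles.
from collections import defaultdict

# Hypothetical queued payments: (payer, payee, amount)
queued_payments = [
    ("Bank A", "Bank B", 100),
    ("Bank B", "Bank A", 80),
    ("Bank A", "Bank C", 50),
    ("Bank C", "Bank A", 50),
]

net = defaultdict(float)
for payer, payee, amount in queued_payments:
    pair = tuple(sorted((payer, payee)))
    # Track each flow in a canonical direction so opposing payments offset.
    net[pair] += amount if (payer, payee) == pair else -amount

for (x, y), balance in net.items():
    if balance > 0:
        print(f"{x} settles {balance:g} to {y}")
    elif balance < 0:
        print(f"{y} settles {-balance:g} to {x}")
    else:
        print(f"{x} and {y} fully offset; nothing settles")

gross = sum(amount for _, _, amount in queued_payments)
print(f"Gross value queued: {gross}; value settled: {sum(abs(b) for b in net.values()):g}")
```

In this toy example, 280 in gross payments settles with only 20 in actual transfers, which is the liquidity saving the netting provides.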

Experiment Assessment

While the experiment validated that a DLT-based wholesale payments system can likely satisfy the PFMIs relating to credit risk, liquidity risk and settlement risk, other areas, such as settlement finality, operational risk, and access and participation requirements, remain of concern. The highest operational risks relate to resilience:

  1. Continued back-up and security needs: The project demonstrated that the core of a DLT-based wholesale payments platform can deliver high availability at a low cost. However, once additional technology components, such as digital keys, identity and system access management—all important elements of Project Jasper, but currently based on centralized models—are added to the system, the typical single-point-of-failure challenges faced by existing centralized systems re-emerge. This vital information must be backed up and secured to ensure it is not lost, mishandled or abused, much as under the current security measures of centralized systems. Up against the highly efficient existing system, the high cost of initial design of the DLT system suggested that the bulk of any cost savings from this kind of DLT system would arise from a reduction in bank reconciliation efforts, not from the core system.
  2. Difficult balance between privacy and transparency: While Project Jasper partitioned data in such a way as to create a significant amount of privacy for transactions, doing so undermined data replication across the network, a key feature and advantage of DLT: each participant’s node had access to only a subset of data, introducing a point of failure at each node. More robust data verification requires wider sharing of information. The balance required between transparency and privacy poses a fundamental question about the viability of the system for such uses once its core and defining feature is limited.
  3. Settlement risk: Principle 8 of the PFMIs requires settlement finality. Defining the conditions under which a transfer in the wholesale payments system is considered irrevocable and unconditional is central to the system’s operation and involves both operational and legal components. Phase 1 of Project Jasper underlined some of the challenges the use of Ethereum poses for settlement finality, as its proof-of-work (PoW) consensus mechanism is probabilistic: although settlement becomes increasingly certain as a transaction becomes progressively more immutable over time, there is always a small possibility that a payment could be reversed (a simplified numerical illustration follows this list). The use of the Corda platform and the notarial function of the Bank of Canada have potentially introduced an element of irrevocability in Phase 2 of Project Jasper, but stress testing would be required to confirm that settlement risk had been adequately addressed.
  4. Potential for restricted DLT systems to create a single point of failure: The notary consensus system implemented in Phase 2 of Project Jasper, while important for verification, also creates a single point of failure: an event such as an outage at the Bank of Canada would prevent the processing of any payments. Activities such as permissioning of nodes and establishment of operational standards continue to require significant centralization. Given these considerations, it was concluded that restricted distributed ledger schemes such as Project Jasper may decrease operational resilience or incur more expense when compared against current centralized systems.

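As a simplified numerical illustration of the probabilistic settlement finality described in item 3 above, the sketch below follows the gambler’s-ruin analysis in the original Bitcoin whitepaper. The specific numbers are assumptions chosen for illustration, and fuller estimates also account for the attacker’s expected progress during the confirmation period.

```python
# Why proof-of-work settlement is probabilistic: with an attacker holding a
# fraction q of total hash power (honest share p = 1 - q, q < p), the chance
# the attacker ever overturns a payment buried z blocks deep is (q/p)**z.
def reversal_probability(q: float, z: int) -> float:
    p = 1.0 - q
    if q >= p:
        return 1.0  # a majority attacker can always rewrite history
    return (q / p) ** z

# The probability shrinks geometrically with depth but never reaches zero,
# which is exactly the settlement-finality concern raised in Phase 1.
for z in (1, 6, 12):
    print(f"{z} block(s) deep: reversal probability {reversal_probability(0.1, z):.2e}")
```
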
Conclusions

Despite Project Jasper highlighting significant limitations to the use of DLT within the wholesale payment space, it still proved valuable in the eyes of the stakeholders involved. The participants, including public-sector and private-sector partners, stated that they learned a great deal about the technical aspects of DLT, discovered best practices for wide-scale cooperation and uncovered insights into other paths that may be explored to help reap the benefits of the technology through other Fintech innovations. One key insight from Project Jasper is that cost savings or efficiency gains can be obtained “if a DLT-based core interbank payment system can serve as the basis for other DLT systems to improve clearing and settlement across a range of financial assets”, such as stocks, bonds and derivatives (assets that today sit in more decentralized systems with long settlement times), by combining different elements on the same ledger so that they interact with the wider financial market infrastructure.

Notably, this proof-of-concept exercise excluded many governance and legal considerations of traditional wholesale payment systems, including anti-money laundering requirements, suggesting that a true production system could have significant additional complexity to address.

Commenting on the experiment, Carolyn Wilkins, senior deputy governor of the Bank of Canada, and Gerry Gaetz, president of Payments Canada, concluded that, measured against the need for interbank systems to be safe, secure, efficient and resilient, and to meet all international standards, “DLT-based platforms are just not there yet.” Consequently, they indicated that near-term modernization of Canada’s payments system will not involve distributed ledgers. Nonetheless, it will involve wide-scale innovation and collaboration across many public and private parties, the benefits of which were also demonstrated in Project Jasper.

[i] R3 has created an open-source distributed ledger platform named Corda that is designed to record, manage and automate legal agreements between businesses. Two important differentiators from traditional blockchains include the ability to support various consensus mechanisms and the ability to restrict access to the data within agreements to those explicitly entitled to it.

For more information about our firm’s Fintech expertise, please see our Fintech group’s page.

A Glimpse into a Tangled Future: Implications of an Increasingly Connected World

Posted in AI and Machine Learning, Big Data, Internet of Things
Kevin Stenner

Looking forward to living in a house that reduces your workload by mowing your lawn? What about having your front door beam you photographs of everyone your adolescent children let into your home while you are at work? Or, even better, a door that will only open for certain people at specific times during the week?  As the Internet of Things (IoT) continues to expand into every nook of daily life, these “advances” are not only the way of the future – they are the way of the present.

In response to this proliferation of the IoT and IoT devices, the United States Government Accountability Office recently released a technology assessment of the IoT (the “Report”) respecting the status and implications of an increasingly connected world.  The Report highlighted the benefits of the IoT’s rapid emergence.  However, it also made sure to stress the challenges presented by a future where our refrigerators can provide a summary of our late night snacking habits to our insurance companies, or worse, our personal trainers.

The Benefits of Living in a Connected World

There is no shortage of benefits that can be derived from IoT devices.  Some of these benefits are obvious; imagine a surgeon operating on you through smart glasses that overlay digital aids onto the physical world.  Some benefits are less obvious, such as cow monitoring devices used by ranchers to determine when cows are in their optimal breeding cycle.

There can be little debate that the benefits of IoT devices are seemingly endless. Consumers benefit from wearables, networked electronic homes and collision detection systems in vehicles.[i] Industry benefits through optimized operations, and the public sector benefits through improved management of service delivery.[ii]

The Downside to the IoT

However, despite the significant benefits of IoT devices, there are also real dangers associated with this increased connectivity and, more importantly, there seems to be little consensus on how to regulate the IoT moving forward.

Information Security Challenges

As the Report identifies, the rapid adoption of IoT devices into everyday situations has the potential to bring the effects of a device’s poor security into homes, industries and communities.[iii] The risk is that unauthorized individuals and organizations can gain access to IoT devices for malicious purposes.[iv] Furthermore, this risk is exacerbated because many IoT devices were built without anticipating the threats associated with internet connectivity.[v] As an example, researchers found that they could remotely gain control over a vehicle’s steering and brakes through wireless communication.

Although numerous agencies have issued extensive guidelines in respect of protecting IoT devices, there is no standard for the implementation of these guidelines and there is no consensus on how to deal with the associated risks.[vi]

For example, the Federal Trade Commission recommends that companies prioritize and build security into their devices.  However, the risk is that by implementing access controls and security measures, the functionality and flexibility of IoT devices could be affected.[vii]

As an additional security feature, the National Institute of Standards and Technology and AT&T recommend that consumers take steps to ensure that their IoT devices are updated with the most current software upgrades.[viii] Although this suggestion is practical, it is based on the assumption that an IoT device can easily be updated, that the update will increase the security of the device and that the consumer will ultimately install the update. This suggestion also raises an interesting question as to who would be responsible for any damage caused by a rogue vehicle if the owner had failed to install a software upgrade that may have prevented the vehicle from being wirelessly hijacked.

Privacy Challenges      

Other major hurdles for the developers of IoT devices are to ensure: i) that the devices do not inappropriately collect or misuse personal information; ii) that suitable methods for notifying customers about how data will be used are developed; and iii) that a consumer’s consent is obtained for the collection and use of personal data.[ix]  As an example, in many cases IoT devices collect information through sensors that are embedded in everyday items and that record data while an individual is unaware that data is being recorded.[x]  Despite this constant monitoring, many of these IoT devices do not seek consent or do not have the means to seek consent.  In addition, even if an IoT device requested consent, would consumers take the time to properly review and understand the consent that they were providing?

There are also concerns that information harvested from IoT devices can be used for a variety of purposes unrelated to the consumer’s use of the device and that this information could ultimately be linked with other harvested information to provide a detailed profile of an individual’s habits.[xi] Accordingly, experts suggest that data harvested from IoT devices should be de-identified. However, not only is there no standard process by which data can be de-identified, but the de-identification must also be done in such a manner that the information cannot be re-identified.[xii]

Final Word

Although the Report does not provide a solution on how to manage the proliferation of IoT devices, it does highlight the fact that in the United States there is no single “federal agency that has overall regulatory responsibility for the IoT”.[xiii] Canada has a more centralized privacy regime and in that respect has an advantage (and may provide more certainty to businesses), but the IoT involves more than just privacy.

As IoT devices continue to become cheaper and move into all facets of life, governments in Canada will need to determine if and how to get involved.

Based on the Report, it would seem that one of the first areas government may look at is the adoption of guidelines to ensure that IoT devices are built to minimum security standards. The threshold question of whether this is approached as a regulatory initiative, a framework document, or in partnership with a third-party standards body and/or industry would need to be answered.

Furthermore, concerns regarding consent, data harvesting and the de-identification of personal information – concerns central to IoT devices – were front and centre in the recent hearings on the review of Canadian privacy legislation (PIPEDA). While IoT devices and manufacturers may not be regulated specifically, it is likely that coming amendments to privacy laws will impact those in the IoT ecosystem.

[i] The Report at pp 16-19.

[ii] The Report at Appendix II.

[iii] The Report at p 26.

[iv] The Report at p 26.

[v] The Report at p 28.

[vi] The Report at p 27.

[vii] The Report at p 28.

[viii] The Report at pp 29-30.

[ix] The Report at p 31.

[x] The Report at p 33.

[xi] The Report at p 35.

[xii] The Report at p 35.

[xiii] The Report at p 55.

Lawyers need to keep up with AI

Posted in AI and Machine Learning, Big Data
Carole Piovesan

For decades, novelists, scientists, mathematicians, futurists and science fiction enthusiasts have imagined what an automated society might look like. Artificial intelligence, or AI, is rapidly evolving, and the society we could only once imagine may be on the brink of becoming our new reality.

Simply, and generally, AI refers to the ability of a computer system to complete increasingly complex tasks or solve increasingly complex problems in a manner similar to intelligent human behaviour. Examples range from IBM’s Watson system that, in 2011, won a game of Jeopardy! against two former winners to emerging technologies fuelling the development of driverless cars.

AI is expected to have a profound impact on society, whereby intelligent systems will be able to make independent decisions that will have a direct effect on human lives. As a result, some countries are considering whether intelligent systems should be considered “electronic persons” at law, with all the rights and responsibilities that come with personhood. Among the questions related to AI with which the legal profession is starting to grapple: Should we create an independent regulatory body to govern AI systems? Are our existing industry-specific regulatory regimes good enough? Do we need new or more regulation to prevent harm and assign fault?

While we are at least a few steps away from mass AI integration in society, there is an immediate ethical, legal, economic and political discussion that must accompany AI innovation. Legal and ethical questions concerning AI systems are broad and deep, engaging issues related to liability for harm, appropriate use of data for training these systems and IP protections, among many others.

Governments around the world are mobilizing along these lines. The Japanese government announced in 2015 a “New Robot Strategy,” which has strengthened collaboration in this area between industry, the government and academia.

Late last year, the United Kingdom created a parliamentary group — the All Party Parliamentary Group on Artificial Intelligence — mandated to explore the impact and implications of artificial intelligence, including machine learning. Also late last year, under the Obama administration, the White House released the reports “Artificial Intelligence, Automation, and the Economy” and “Preparing for the Future of Artificial Intelligence.” The reports consider the challenge for policymakers in updating, strengthening and adapting policies to respond to the economic effects of AI.

In February 2017, the European Parliament approved a report of its Legal Affairs Committee calling for the review of draft legislation to clarify liability issues, especially for driverless cars. It also called for consideration of creating a specific legal status for robots, in order to establish who is liable if they cause damage.

Most recently, the Canadian federal government announced substantial investments in a Pan-Canadian Artificial Intelligence Strategy. These investments seek to bolster Canada’s technical expertise and to attract and retain sophisticated talent.

Lawyers can play a valuable role in shaping and informing discussion about the regulatory regime needed to ensure responsible innovation.

Ajay Agrawal, Founder of the Creative Destruction Lab and Peter Munk Professor of Entrepreneurship at the University of Toronto’s Rotman School of Management, says Canada has a leadership advantage in three areas — research, supporting the AI startup ecosystem and policy development. The issue of policy development is notable for at least two reasons. First, one of the factors affecting mass adoption of AI creations, especially in highly regulated industries, is going to be the regulatory environment. According to Agrawal, jurisdictions with greater regulatory maturity will be better placed to attract all aspects of a particular industry. For instance, an advanced regulatory environment for driverless cars is more likely to attract other components of the industry (for example, innovations such as tolling or parking).

Second, policy leadership plays to our technical strength in AI. We are home to AI pioneers who continue to push the boundaries of AI evolution. We can lead by leveraging our technical strengths to inform serious and thoughtful policy debate about issues in AI that are likely to impact people in Canada and around the world.

Having recently spoken with several Canadian AI innovators and entrepreneurs, I have identified two schools of thought on the issue of regulating AI. The first is based on the premise that regulation is bad for innovation. Entrepreneurs who share this view don’t want the field of AI to be defined too soon and certainly not by non-technical people. Among their concerns are the beliefs that bad policy creates bad technology, regulation kills innovation and regulation is premature because we don’t yet have a clear idea of what it is we would be regulating.

The other school of thought seeks to protect against potentially harmful creations that could poison the well for other AI entrepreneurs. Subscribers to this view believe that Canada should act now to promote existing standards and guidelines — or, where necessary, create new standards — to ensure a basic respect for the general principle of do no harm. Policy clarity should coalesce in particular around data collection and use for AI training.

Canada, home to sophisticated academic research, technical expertise and entrepreneurial talent, can and should lead in policy thought on AI. Our startups, established companies and universities all need to talk to each other and be involved in the pressing debate about the nature and scope of societal issues resulting from AI.

As lawyers, we need to invest in understanding the technology to be able to effectively contribute to these ethical and legal discussions with all key stakeholders. The law is often criticized for trailing technology by decades. Given the pace of AI innovation and its potential implications, we can’t afford to do that here.

This post first appeared as a Speaker’s Corner feature in the June 5, 2017 edition of The Law Times.

Why Autonomous Vehicle Providers Should Consider Their Stance on Privacy

Posted in Connected Cars, Privacy
Brandon Mattalo and Kosta Kalogiros

Autonomous vehicles are coming fast. It is now believed that autonomous vehicles will be widely available to consumers by 2020. Many futurists predict that one day owning and driving a car will be a hobby, much like horseback riding, and that most consumers will simply press a button on their mobile devices to have a car transport them to and from various destinations. While the societal, infrastructural and safety benefits of such a world are obvious, the privacy implications associated with these innovations are rarely discussed — certainly not as much as they should be.

According to a report from Intel, autonomous vehicles may generate over 4 terabytes of data each day — based on the current average use of non-autonomous cars at an hour and a half of driving a day. In other words, in an hour and a half of driving, an autonomous vehicle will produce on the order of three thousand times the data an average internet-using consumer produces in a single day. While data generation at this scale is not yet the norm, current vehicles already generate and collect a great deal of personal information, making discussions around privacy timely and important.
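
As a back-of-the-envelope check of the scale involved, the snippet below divides the cited vehicle figure by a per-user baseline. The baseline is an assumption: a figure of roughly 1.5 GB per average internet user per day is commonly quoted alongside Intel’s 4-terabyte projection.

```python
# Rough scale comparison between one autonomous vehicle's daily data output
# and an average internet user's daily data footprint. The per-user baseline
# is an assumed figure, not taken from the Intel report itself.
VEHICLE_GB_PER_DAY = 4_000      # ~4 TB over ~1.5 hours of driving
AVG_USER_GB_PER_DAY = 1.5       # assumed daily footprint of an average user

multiple = VEHICLE_GB_PER_DAY / AVG_USER_GB_PER_DAY
print(f"One vehicle-day of driving is roughly {multiple:,.0f} user-days of data")
# => roughly 2,667, i.e. on the order of three thousand people's daily data
```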

With innovation racing ahead at break-neck speeds, policy makers will have to work doubly hard to ensure privacy policies and protections are in place as quickly as possible. This is especially so considering data privacy was one of the top 5 consumer concerns (according to a study out of the University of Michigan Transportation Research Institute). To ensure that autonomous vehicles are adopted, consumers will need to trust their cars (or car services), in respect of both safety and privacy.

It was refreshing, therefore, when the Privacy Commissioner of Canada, Daniel Therrien, appeared before the Senate Committee on Transportation and Communications (“TRCM”) on March 28, 2017, to discuss these exact issues.

Modern cars are more than simply vehicles […] [t]hey have become smartphones on wheels.

Mr. Therrien’s statement is not far from the truth. By connecting your phone to your car, or by using the onboard infotainment systems, your car can collect and store information about your location, address books, calls, text messages and musical preferences.

As with any innovative technology, the adoption rate of the first-movers will dictate the market leaders for the foreseeable future. While one of the main barriers to adopting autonomous vehicles is the safety of the vehicles, another may be the car providers’ approach to consumer privacy. While companies are often thought to be unwelcoming of regulation, Mr. Therrien offered a fresh perspective when he suggested that privacy regulations may help increase the adoption rate of autonomous vehicles by alleviating consumers’ privacy concerns altogether.

Mr. Therrien may have a point. If one of the larger barriers to adopting autonomous vehicles is privacy, then autonomous vehicle companies should explore strategies that embrace privacy regulations, not dismiss them. Additionally, even in the absence of privacy regulations, autonomous vehicle companies should at least conduct an analysis of whether privacy can be used as a differentiator to increase the adoption rate of their vehicles.

In terms of Canadian policy, Mr. Therrien suggests three main areas that regulators should focus on.

First, Mr. Therrien suggests that consumers need to understand who to contact regarding their privacy concerns. Is it the car manufacturer, the car dealer from whom they purchased the car, the smartphone developer, the application developer, or some third party who owns the information? The interplay among the various companies makes it difficult for the average consumer to identify the right point of contact. By providing clear guidance, regulators can make consumers feel more comfortable using autonomous vehicles.

Second, Mr. Therrien suggests that regulators should create standards for wiping consumer data when a car is resold, returned or simply re-rented to another driver. Having standard practices will ensure that any subsequent user cannot illicitly collect and disclose private information pertaining to previous drivers.

Lastly, Mr. Therrien suggests that regulators need to ensure that the collection, use and disclosure of private information continues to be communicated to consumers in a clear and understandable way, so that they have a real choice in consenting to services that are not essential to the proper functioning of their car. The days when companies could hide their privacy policies behind a façade of legalese are over. Instead, it is better to draft privacy policies that consumers can actually understand. This level of transparency and openness can differentiate autonomous car providers and build a consumer base that trusts the car provider. That, in turn, can help increase adoption rates in what is likely to become a very competitive landscape.

If, as Mr. Therrien and we suggest, specific privacy regulations for autonomous vehicles can help increase the adoption rate of autonomous vehicles and move the industry forward, companies in the space have every incentive to get involved in the legislative process and to embrace privacy legislation as a means to expedite the market-readiness of their products.

Regardless of whether regulators choose to create new privacy regulations, individual autonomous vehicle providers can fine-tune their stance on privacy to differentiate and increase the adoption rate of their products or services.

The importance of privacy policies is increasing as consumers become more informed about their privacy rights. In a recent example, Evernote changed its privacy policy to include a term stating that “you cannot opt out of [Evernote] employees looking at your content”. While the clause was intended to allow Evernote to improve its machine-learning analysis, the company immediately had to go into damage control after the change was spotted by a consumer and users started complaining.

In short, autonomous car providers need to pay attention to their stance on privacy. They should not be afraid to embrace legislation that is intended to protect consumer privacy, since it may help increase the adoption rate of autonomous vehicles. Even if regulators do not implement more stringent privacy regulations for autonomous vehicles, car providers can use their stance on privacy to aid in becoming a market leader. After all, it may not be a stretch to suggest that the winners and losers in the new autonomous vehicle industry may, in part, be dictated by the companies’ privacy policies.

Lawful Access: The Privacy Commissioner Reiterates its Position

Posted in Criminal, Legislation, Privacy
Marissa Caldwell

One of the challenging aspects of PIPEDA in recent years has been the new section 7(3)(c.1)(ii), which permits organizations to disclose personal information to a government institution that has requested the disclosure for the purpose of law enforcement and has stated its “lawful authority” for the request. Organizations faced with such a request almost always ask the same question: what constitutes “lawful authority”?

Background on Lawful Access

On April 5, 2017, Patricia Kosseim, Senior General Counsel and Director General, Legal Services, Policy, Research and Technology Analysis for the Office of the Privacy Commissioner of Canada (the “OPC”), gave testimony before the Quebec Commission of Inquiry on the protection of confidential media sources. Premier Philippe Couillard had announced the inquiry after Montreal and provincial police admitted they had collected data from the cellphones of several journalists. The inquiry’s mandate includes identifying best practices to protect the confidentiality of journalistic sources, and the commissioners must report back with recommendations by March 1, 2018.

Ms. Kosseim was asked by the Commission’s counsel to provide an overview of the legislation protecting privacy in Canada and to answer questions about lawful access issues from a federal perspective. She took the opportunity to present a clear view of the OPC’s position on how lawful access, as articulated in section 7(3) of PIPEDA, should be addressed. Of particular interest is how this position differs from the position taken by the federal government in recent years.

Section 7(3)(c.1)(ii) permits disclosure of personal information to a government institution that has requested the disclosure for the purpose of law enforcement and has stated its “lawful authority” for the request. As the Supreme Court of Canada put it in R v. Spencer, “[t]he reference to ‘lawful authority’ in s. 7(3)(c.1)(ii) must mean something other than a ‘subpoena or search warrant.’” The Court went on to suggest that “‘[l]awful authority’ may include several things. It may refer to the common law authority of the police to ask questions relating to matters that are not subject to a reasonable expectation of privacy. It may refer to the authority of police to conduct warrantless searches under exigent circumstances or where authorized by a reasonable law.”

While this decision did clarify that s. 7(3) ought not to be used to indiscriminately justify police search and seizure powers, it provided little by way of concrete examples of what the section does authorize.

Parliament’s passing of Bill C-13 further complicated the matter, as it added to the Criminal Code new lawful access powers relating to transmission data.

In her remarks, Ms. Kosseim reiterated that, in the OPC’s intervener submissions to the SCC in R v. Spencer, in its advice to Parliament on Bill C-13 and in other government consultations, the OPC has warned against lowering the thresholds for authorizing new investigative powers, and has noted the lack of accountability placed on the use of those powers and the absence of conditions specific to their use.

OPC Position on Lawful Access

Ms. Kosseim went on to reiterate the position that the Privacy Commissioner of Canada, Daniel Therrien, has taken on the subject. Commissioner Therrien has been vocal about the impact of these surveillance powers and has suggested that the following be done:

  • lawful access powers should be limited, not extended – particularly in light of the privacy risks, which are much greater than the much-used innocuous analogy to telephone directories would lead people to believe;
  • the crucial role of judges in the process of authorizing investigative powers should be maintained in order to ensure the necessary independence of police forces, which will better ensure the protection of people’s most basic rights; and
  • Parliament should consider legislating more specifically on the prerequisites for lawful access, as well as grant judges the ability to attach conditions, such as: the protection of citizens who are not targeted (but still captured) by these measures; the period for which the information can be retained; and the intentional destruction of non-pertinent data.

It is clear that the OPC would like to see the lawful access rights of government institutions, including police, be limited, clearly articulated, and supervised by the judiciary. Canadians have the right to be secure against unreasonable search and seizure under the Charter and have the right to have their personal information protected under PIPEDA. These rights must be balanced with the reality that circumstances will arise when personal information will need to be disclosed for purposes such as public safety.

This would also provide clarity to businesses which, faced with a non-judicially-supervised PIPEDA “lawful authority” request for personal information, find themselves having to make their own determination as to whether the government agency requesting access has a sufficiently bona fide reason for doing so. Businesses are, quite understandably, generally reluctant to be put in that position lest they later be the target of a privacy breach lawsuit or class action by the individual or individuals whose personal information they disclosed.

It will be interesting to see if the OPC, and other interested stakeholders, can motivate Parliament to re-evaluate and clarify the powers under section 7(3) or if Parliament will simply wait for a case to come forward that challenges the scope of what exactly lawful access is.

Defending a Lawsuit is Not a “Commercial Activity” Under Privacy Legislation

Posted in Privacy
Krupa Kotecha

In a case dating back to 2016 but only recently published, the Office of the Privacy Commissioner of Canada has ruled that the collection and use of a plaintiff’s personal information for the purpose of defending against a civil lawsuit is not a “commercial activity” and, as such, the Personal Information Protection and Electronic Documents Act, SC 2000, c 5 (“PIPEDA”) does not apply.

The complaint at issue arose from a motor vehicle accident involving two parties, following which the plaintiff commenced civil litigation against the defendant. A psychiatric assessment was carried out by an independent psychiatrist retained on behalf of the defendant’s insurance company. The plaintiff requested access to the personal information gathered by the psychiatrist. Although the psychiatrist provided the plaintiff with some information, including a redacted copy of the psychiatric report, the plaintiff filed a complaint with the federal Privacy Commissioner on the basis that he believed there was additional information to which he was entitled. The plaintiff was also concerned about the accuracy of the personal information held by the psychiatrist.

The Privacy Commissioner determined that the psychiatrist’s collection and use of the plaintiff’s personal information did not fall within the term “commercial activity”, as defined in section 2 of PIPEDA. Despite the existence of commercial activity between the defendant and the insurance company (the defendant being its insured), the Privacy Commissioner held that there was no commercial activity between the plaintiff and the defendant, and likewise none between the plaintiff and the defendant’s insurance company or its outside medical expert. The issue of access to the psychiatrist’s records was therefore beyond the scope of the Privacy Commissioner’s jurisdiction.

The decision provides further guidance on the types of activities that will, or will not, be held to fall within the Privacy Commissioner’s purview.

Lenovo and Superfish: Proposed Class Action Proceeds on Privacy Tort and Statutes

Posted in Cybersecurity, Internet of Things, Privacy
Carole Piovesan

A recent privacy decision regarding pre-installed software on laptops may have implications for companies operating not only in the traditional hardware space, but for those companies venturing into the burgeoning “Internet of Things” ecosystem. In short, an Ontario court declined to strike the common law and statutory privacy claims, suggesting that courts are at least willing to entertain such claims in the context of manufactured devices.

Background

Lenovo has faced several privacy-related lawsuits in Canada and the United States following its sale of laptop computers preloaded with Superfish software. VisualDiscovery (VD), the Superfish adware program at issue, tracks a user’s Web searches and browsing activity to place targeted ads on sites visited by the user.

In Canada, a nationwide proposed class action has been commenced by plaintiff Daniel Bennett, a lawyer from St. John’s, Newfoundland (see Bennett v Lenovo, 2017 ONSC 1082). Mr. Bennett purchased a new laptop from Lenovo’s website and later discovered that it contained the VD adware program.

Mr. Bennett alleges in the Statement of Claim that the adware program not only affects a computer’s performance but, crucially, “intercepts the user’s secure internet connections and scans the user’s web traffic to inject unauthorized advertisements into the user’s web browser without the user’s knowledge or consent”. He further alleges that the adware program “allows hackers … to collect … bank credentials, passwords and other highly sensitive information” including “confidential personal and financial information.”

Mr. Bennett advances the following claims against Lenovo on behalf of the proposed class: (1) breach of the implied condition of merchantability; (2) intrusion upon seclusion; (3) breach of provincial privacy legislation; and, (4) breach of contract. Mr. Bennett initially pled negligence as well but subsequently withdrew that claim.

In February 2017, Lenovo brought a motion to strike the Statement of Claim on the basis that it was plain and obvious the four claims could not succeed. The motion was heard by Justice Edward Belobaba who struck only one of the four claims.

The Decision

(1)  Breach of the implied condition of merchantability

Mr. Bennett alleges that the security risks and performance problems caused by the adware program render the computer “not of merchantable quality” or, simply, defective. The legal context for this claim is that consumer protection legislation establishes “implied conditions of fitness for purpose and merchantability” that cannot be modified or varied.[1] The question, then, is what constitutes “merchantable”?

Lenovo argued that under Canadian law a product with multiple uses, such as Mr. Bennett’s computer (word processing, storing data, accessing the internet, etc.), is “merchantable” if it can be reasonably used, even with the alleged defect (i.e. the adware program), for at least one of those purposes, such as off-line word processing. Mr. Bennett responded that the various purposes listed by Lenovo are not “multiple purposes” but illustrations of the laptop’s overriding single purpose: to engage in electronic communications that are expected to remain private.

Justice Belobaba refused to strike this claim on the basis that the law on implied condition of merchantability in respect of computers is still unsettled. His Honour stated:

It is enough for me to find that it is not at all plain and obvious under Canadian law that a laptop that cannot be used on-line because of a hidden defect that has compromised the user’s privacy, and can only be used off-line for word processing, is nonetheless merchantable. As Professor Fridman notes, “If the test for unmerchantability [is] that the article is fit for no use, few goods would be unmerchantable because use can always be found for goods at a price.” Further, it is not plain and obvious that a reasonable computer user today would ever agree to purchase and use an affected laptop, knowing about the security risks created by the VD adware program, without insisting on a substantial reduction in the purchase price.

(2)  Intrusion upon seclusion

Intrusion upon seclusion was recognized as a new privacy tort by the Ontario Court of Appeal in Jones v. Tsige. The tort is established when: (i) the defendant’s conduct is intentional or reckless; (ii) the defendant has invaded, without lawful justification, the plaintiff’s private affairs or concerns; and (iii) a reasonable person would regard the invasion as highly offensive, causing distress, humiliation or anguish. Proof of actual loss is not required.

Mr. Bennett claims that the mere act of implanting the adware program onto his laptop without his prior knowledge and consent was an intrusion on his privacy. The adware program allows private information to be sent to unknown servers, thereby compromising the security of a user’s personal information. These security vulnerabilities facilitate a hacker’s ability to intercept a user’s internet connection and access private data such as passwords.

Justice Belobaba found that the first two elements of the tort had been properly pled and were viable on the facts as stated in the Statement of Claim. The third element of distress was not pled but was reasonably inferred in the circumstances. His Honour held that the tort of intrusion upon seclusion is still evolving and its scope and content have not yet been fully determined. He also refused to strike this claim.

(3)  Provincial privacy laws

Mr. Bennett advances a claim of breach of privacy laws in British Columbia, Saskatchewan, Manitoba, and Newfoundland and Labrador. Lenovo argued that there is no pleading of actual violation of privacy and no allegation that any confidential information was actually hacked and appropriated. Accordingly, argued Lenovo, these statutory claims were certain to fail.

Justice Belobaba rejected Lenovo’s argument on the basis that unauthorized access to private information is itself a concern, even without proof of actual removal or theft of information. Each of the four provincial statutes declares in essence that the unlawful violation of another’s privacy is an actionable tort, without proof of loss.

His Honour stated that the scope and content of the provincial privacy laws in question are still evolving. He refused to strike this claim as well.

(4) Breach of contract

The only claim struck by Justice Belobaba was the claim for breach of contract. Mr. Bennett pleads the existence of an implied term in the sales agreement that the Lenovo laptops would be free of any defects and at the very least would not have pre-installed software that exposed class members to significant security risks.

Justice Belobaba stated that the case law is clear that a term will not be implied if it is inconsistent or otherwise conflicts with an express provision in the agreement. In this case, the sales agreement, which was viewable online when Mr. Bennett purchased his laptop on Lenovo’s website and “clicked” his acceptance, made clear in Article 5.2 that the installed software was being sold “without warranties or conditions of any kind.”

Conclusion

It has been reported that a partial settlement may have been reached with Superfish, in a U.S. class action against both defendants. The settlement reportedly includes Superfish’s cooperation with the plaintiffs by disclosing over 2.8 million additional files and providing Superfish witnesses for a potential trial.

The Canadian proposed class action is very much in its infancy. It remains to be seen how the class action will evolve in Canada.

[1]       Sections 9(2) and (3) of the Consumer Protection Act stipulate that the implied conditions and warranties applicable to goods sold under the Sale of Goods Act are also applicable to goods sold under a consumer agreement (in this case, the Lenovo sales agreement). These implied conditions and warranties cannot be varied or waived.

Health Record Snooping Nets Hefty Fine

Posted in PHIPA
Sara D.N. Babich

In a recent case out of Goderich, Ontario, a $20,000 fine, the highest of its kind in Canada, was handed out for a health privacy violation.

Between September 9, 2014 and March 5, 2015, a Master of Social Work student on placement with a family health team accessed, without authorization, the personal health information of 139 individuals, including family, friends and local politicians. After pleading guilty to wilfully accessing the personal health information of five individuals, the student was ordered to pay $25,000 in total: a $20,000 fine and a $5,000 victim surcharge.

The Information and Privacy Commissioner of Ontario (the “IPC”) recently reported that this was the fourth person convicted under the Personal Health Information Protection Act (“PHIPA”). Under s. 72 of PHIPA, it is an offence to wilfully collect, use or disclose personal health information in contravention of the Act. This and the other offences enumerated in s. 72(1) are punishable by a fine of up to $100,000 for individuals and $500,000 for institutions. The $20,000 fine imposed in this most recent case is far from the upper limit in PHIPA, but it signals an increasing willingness to hand out hefty fines for violations.

From the news release issued by the IPC (available here), it is apparent that deterrence of this type of snooping into the private medical affairs of individuals is being treated seriously and is seen as a necessary safeguard to maintain patient confidence in the health care system.

Unauthorized access to private health records is an ongoing issue for health care organizations, and one with an increasing impact on individuals and the organizations they work for, as evidenced by the Goderich case. Given the responsibility of organizations to ensure that private health records remain protected, and the potential institutional fines associated with breaches of the relevant privacy legislation, it is incumbent on health care and related organizations to ensure that their employees are properly trained and fully aware of the implications of a privacy breach, even where there is no malicious intent. It is also imperative that everyone who has access to these private records, including staff, students, volunteers and interns, is fully apprised of their obligations and the consequences of breaches, including snooping.

There is similar legislation in other provinces which provides for serious monetary penalties for breaching health privacy. In British Columbia, a breach of the E-Health (Personal Health Information Access and Protection of Privacy) Act, SBC 2008, c 38 could net a fine of up to $200,000. Alberta and Manitoba legislation authorizes fines of up to $50,000 for improper access and disclosure of health information (Health Information Act, RSA 2000, c H-5; Personal Health Information Act, CCSM c P33.5). A breach of Saskatchewan’s Health Information Protection Act, SS 1999, c H-0.021 could carry a fine of up to $50,000 for individuals and $500,000 for corporations, with an added penalty of one year imprisonment on summary conviction. Other Canadian jurisdictions authorize fines ranging from $10,000 to $50,000 for individual offenders, and some carry additional imprisonment penalties.

In addition to the fines that could be issued for health legislation violations, some provinces also allow claimants to advance court actions for invasion of privacy torts. In Ontario, the courts have expressly acknowledged that the PHIPA contemplates other proceedings in relation to personal health information. The Ontario Court of Appeal has stated that the PHIPA is well-suited to deal with systemic issues while recourse for individual wrongs can be found in the recently recognized privacy torts (see Hopkins v Kay, 2015 ONCA 112). In Manitoba, there is also dual recourse to privacy legislation and tort actions (see the comments of Monnin JA in Grant v Winnipeg Regional Health Authority et al, 2015 MBCA 44).

Notably, British Columbia has declined to recognize the privacy torts of intrusion upon seclusion and public disclosure of embarrassing private facts since the BC Privacy Act “covers the field” (see Ladas v Apple, 2014 BCSC 1821 at para 76).  Alberta courts have also indicated that an action for breach of privacy relating to information in the control of an organization must proceed before the Commissioner appointed under the Personal Information Protection Act, SA 2003, c P-6.5 before recourse may be had to the courts (see Martin v General Teamsters, Local Union No 362, 2011 ABQB 412 at paras 45-48).

Goldilocks and the Interactive Bear: The Privacy Nightmare

Posted in Cybersecurity, Internet of Things, Privacy
Anaïs Galpin and Camille Marceau

A Wake-up Call: The Rise and Demise of Hello Barbie

Once upon a time, which happened to be around March 2015, Mattel introduced Hello Barbie, the world’s first “interactive doll”. With the press of a single button, the doll would record and process its user’s voice and respond to the question or statement recorded. The interactive doll appeared to be a dream come true for children and parents alike: for the former, an ever-present friend with whom to babble and play; for the latter, someone to provide answers and explanations to the incessant curiosity of their child, granting them a little respite. How could this not be a miracle?

However, soon after the release of Hello Barbie, cybersecurity commentators warned against the potential privacy risks of the interactive doll, and of “connected toys” generally. As reported in a previous blog post, in November 2015, VTech, a Hong Kong supplier of children’s connected learning toys, was hacked, compromising the personal data of over 6.5 million child profiles. VTech fixed the breach and amended its terms of use to warn against the risk of data piracy, and that was that.

Following the publicity around the incident, and VTech’s quick fix of the situation, interactive dolls and their engineers and makers largely vanished from the headlines. Presumably, toy manufacturers, and parents, had learned their lesson on the privacy risks that come along with connected toys.

The Comeback of Interactive Toys and Dolls: A Messy Affair

History tends to repeat itself, however, and this story is no exception. CloudPets, essentially an app that allows parents and friends to record and send messages to a connected CloudPet stuffed animal from anywhere in the world, suffered a similar incident. In what was reported to be the result of a lapse of security, private conversations between family members could be overheard via a listening device installed in the kids’ teddy bear.

In addition, the personal data of over 821,000 users and owners of CloudPets was reportedly discovered to be easily accessible online. How easy was it really, you ask? Too easy, apparently, since it was reported that an unidentified number of individuals managed to hack the database and personal accounts and recover sensitive data by using brute force. According to reports, the database storing the personal data was protected by neither a firewall nor a password, and the personal accounts of users and owners used overly simplistic passwords and usernames such as “123456”, “qwerty” and “password”.

Another interactive toy also made the news in early 2017. The My Friend Cayla doll was declared an “illegal espionage apparatus” by Germany’s Federal Network Agency in February 2017, as it was deemed to be a surveillance device disguised as another object, which cannot legally be manufactured, sold or possessed in Germany. Access to the doll was unsecured: any hacker within 15 metres could connect to it via Bluetooth and interfere with the messages it received and sent. The doll can no longer be sold in Germany, and owners were ordered, at the very least, to disable its “smart” feature.

Moving Forward: How to Compromise between Companionship and Cybersecurity

Two lessons can be learned from these attempts to provide children with the companionship of a virtual friend.

First, there seems to be a higher expectation of privacy for children, expressed both in the calls to boycott Hello Barbie following the 2015 incident and in Germany’s strict application of its espionage rules. The interactive dolls described above are not significantly different in their purpose and functioning from the Siris (Apple) and Alexas (Amazon) of this world: both record, process and store voices and personal data in order to provide companionship and on-the-spot information to their owners and users. They differ greatly, however, in their targeted audience: one is aimed at adults, while the other is for children, who are generally regarded as vulnerable.

In this regard, the Office of the Privacy Commissioner of Canada (“OPC”) made this distinction clear in its guide to collecting personal data from children, published in December 2015, stating: “the [OPC] has consistently viewed personal information relating to youth and children as being of particular sensitivity, especially the younger they are, and that any collection, use or disclosure of such information must be done with this in mind (if at all)”. Keeping this warning in mind, the OPC’s first tip is to limit, or avoid altogether, the collection of personal information. Other tips touch on the retention of data and ways to obtain proper consent.

Second, while some first attempts to provide children with interactive toys have resulted in significant missteps, interactive toys are here to stay, as evidenced by their comeback following the Hello Barbie incident. Toy-makers must therefore find a way to manufacture a toy that satisfies Papa Bear, Mama Bear and Baby Bear’s wants and needs.

For more information, please see McCarthy Tétrault’s guide on cybersecurity risk management, which is available here.

Camille Marceau is an articling student in McCarthy Tétrault’s Montreal Office.