CyberLex

Insights on cybersecurity, privacy and data protection law

Lawyers need to keep up with AI

Posted in AI and Machine Learning, Big Data
Carole Piovesan

For decades, novelists, scientists, mathematicians, futurists and science fiction enthusiasts have imagined what an automated society might look like. Artificial intelligence, or AI, is rapidly evolving, and the society we could only once imagine may be on the brink of becoming our new reality.

Simply, and generally, AI refers to the ability of a computer system to complete increasingly complex tasks or solve increasingly complex problems in a manner similar to intelligent human behaviour. Examples range from IBM’s Watson system that, in 2011, won a game of Jeopardy! against two former winners, to emerging technologies fuelling the development of driverless cars.

AI is expected to have a profound impact on society, whereby intelligent systems will be able to make independent decisions that will have a direct effect on human lives. As a result, some countries are considering whether intelligent systems should be considered “electronic persons” at law, with all the rights and responsibilities that come with personhood. Among the questions related to AI with which the legal profession is starting to grapple: Should we create an independent regulatory body to govern AI systems? Are our existing industry-specific regulatory regimes good enough? Do we need new or more regulation to prevent harm and assign fault?

While we are at least a few steps away from mass AI integration in society, there is an immediate ethical, legal, economic and political discussion that must accompany AI innovation. Legal and ethical questions concerning AI systems are broad and deep, engaging issues related to liability for harm, appropriate use of data for training these systems and IP protections, among many others.

Governments around the world are mobilizing along these lines. In 2015, the Japanese government announced a “New Robot Strategy,” which has strengthened collaboration in this area between industry, the government and academia.

Late last year, the United Kingdom created a parliamentary group — the All Party Parliamentary Group on Artificial Intelligence — mandated to explore the impact and implications of artificial intelligence, including machine learning. Also late last year, under the Obama administration, the White House released two reports, “Artificial Intelligence, Automation, and the Economy” and “Preparing for the Future of Artificial Intelligence.” The reports consider the challenge for policymakers in updating, strengthening and adapting policies to respond to the economic effects of AI.

In February 2017, the European Parliament approved a report of its Legal Affairs Committee calling for the review of draft legislation to clarify liability issues, especially for driverless cars. It also called for consideration of creating a specific legal status for robots, in order to establish who is liable if they cause damage.

Most recently, the Canadian federal government announced substantial investments in a Pan-Canadian Artificial Intelligence Strategy. These investments seek to bolster Canada’s technical expertise and to attract and maintain sophisticated talent.

Lawyers can play a valuable role in shaping and informing discussion about the regulatory regime needed to ensure responsible innovation.

Ajay Agrawal, Founder of the Creative Destruction Lab and Peter Munk Professor of Entrepreneurship at the University of Toronto’s Rotman School of Management, says Canada has a leadership advantage in three areas — research, supporting the AI startup ecosystem and policy development. The issue of policy development is notable for at least two reasons. First, one of the factors affecting mass adoption of AI creations, especially in highly regulated industries, is going to be the regulatory environment. According to Agrawal, jurisdictions with greater regulatory maturity will be better placed to attract all aspects of a particular industry. For instance, an advanced regulatory environment for driverless cars is more likely to attract other components of the industry (for example, innovations such as tolling or parking).

Second, policy leadership plays to our technical strength in AI. We are home to AI pioneers who continue to push the boundaries of AI evolution. We can lead by leveraging our technical strengths to inform serious and thoughtful policy debate about issues in AI that are likely to impact people in Canada and around the world.

Having recently spoken with several Canadian AI innovators and entrepreneurs, I have identified two schools of thought on the issue of regulating AI. The first is based on the premise that regulation is bad for innovation. Entrepreneurs who share this view don’t want the field of AI to be defined too soon and certainly not by non-technical people. Among their concerns are the beliefs that bad policy creates bad technology, regulation kills innovation and regulation is premature because we don’t yet have a clear idea of what it is we would be regulating.

The other school of thought seeks to protect against potentially harmful creations that can spoil the well for other AI entrepreneurs. Subscribers to this view believe that Canada should act now to promote existing standards and guidelines — or, where necessary, create new standards — to ensure a basic respect for the general principle of do no harm. Policy clarity should coalesce in particular around data collection and use for AI training.

Canada, home to sophisticated academic research, technical expertise and entrepreneurial talent, can and should lead in policy thought on AI. Our startups, established companies and universities all need to talk to each other and be involved in the pressing debate about the nature and scope of societal issues resulting from AI.

As lawyers, we need to invest in understanding the technology to be able to effectively contribute to these ethical and legal discussions with all key stakeholders. The law is often criticized for trailing technology by decades. Given the pace of AI innovation and its potential implications, we can’t afford to do that here.

This post first appeared as a Speaker’s Corner feature in the June 5, 2017 edition of The Law Times.

Why Autonomous Vehicle Providers Should Consider Their Stance on Privacy

Posted in Connected Cars, Privacy
Brandon Mattalo and Kosta Kalogiros

Autonomous vehicles are coming fast. It is now believed that autonomous vehicles will be widely available to consumers by 2020. Many futurists predict that one day owning and driving a car will be a hobby, much like horseback riding, and that most consumers will simply press a button on their mobile devices to have a car transport them to and from various destinations. While the societal, infrastructural and safety benefits of such a world are obvious, the privacy implications associated with these innovations are rarely discussed, or at least not as much as they should be.

According to a report from Intel, autonomous vehicles may generate over 4 terabytes of data each day, based on the current average use of non-autonomous cars of roughly an hour and a half of driving per day. In other words, an autonomous vehicle will produce, in an hour and a half, over three thousand times the data an average internet-using consumer produces in a single day. While data generation at this scale is not yet the norm, current vehicles already generate and collect a great deal of personal information, making discussions around privacy timely and important.
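For a rough sense of the arithmetic behind that comparison, the sketch below assumes an average consumer figure of roughly 1.3 GB of data per day; the 4 TB figure comes from the Intel report cited above, while the per-consumer number is our own illustrative assumption rather than a figure from the report.

    # Illustrative arithmetic only: the 4 TB/day figure is from the Intel report
    # cited above; the per-consumer figure is an assumption used for illustration.
    car_data_gb_per_day = 4_000       # roughly 4 TB generated in about 1.5 hours of driving
    consumer_data_gb_per_day = 1.3    # assumed average daily data per internet user

    ratio = car_data_gb_per_day / consumer_data_gb_per_day
    print(f"Roughly {ratio:,.0f} times the average consumer's daily data")  # ~3,077x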

With innovation racing ahead at breakneck speed, policymakers will have to work doubly hard to ensure privacy policies and protections are in place as quickly as possible. This is especially so considering data privacy was one of the top 5 consumer concerns (according to a study out of the University of Michigan Transportation Institute). To ensure that autonomous vehicles are adopted, consumers will need to trust their cars (or car services), in respect of both safety and the protection of their privacy.

It was refreshing, therefore, when the Privacy Commissioner of Canada, Daniel Therrien, appeared before the Standing Senate Committee on Transport and Communications (“TRCM”) on March 28, 2017, to discuss these exact issues.

Modern cars are more than simply vehicles […] [t]hey have become smartphones on wheels.

Mr. Therrien’s statement is not far from the truth. By connecting your phone to your car, or by using the onboard infotainment systems, your car can collect and store information about your location, address books, calls, text messages and musical preferences.

As with any innovative technology, the adoption rate of the first-movers will dictate the market leaders for the foreseeable future. While one of the main barriers to adopting autonomous vehicles is the safety of the vehicles, another may be the car providers’ approach to consumer privacy. While companies are often thought of as being wary of overzealous regulation, Mr. Therrien offered a fresh perspective when he suggested that privacy regulations may help increase the adoption rate of autonomous vehicles by alleviating consumers’ privacy concerns altogether.

Mr. Therrien may have a point. If one of the larger barriers to adopting autonomous vehicles is privacy, then autonomous vehicle companies should explore strategies that embrace privacy regulations, not dismiss them. Additionally, even in the absence of privacy regulations, autonomous vehicle companies should at least conduct an analysis of whether privacy can be used as a differentiator to increase the adoption rate of their vehicles.

In terms of Canadian policy, Mr. Therrien suggests three main areas that regulators should focus on.

First, Mr. Therrien suggests that consumers need to understand whom to contact regarding their privacy concerns. Is it the car manufacturer, the car dealer from whom they purchased the car, the smartphone developer, the application developer, or some third party who owns the information? The interplay between all of the various companies makes it difficult for the average consumer to understand whom they need to contact regarding their privacy concerns. Clear guidance on this point will help consumers feel more comfortable using autonomous vehicles.

Second, Mr. Therrien suggests that regulators should create standards for wiping consumer data when a car is resold, returned or simply re-rented to another driver. Having standard practices will ensure that any subsequent user cannot illicitly collect and disclose private information pertaining to previous drivers.

Lastly, Mr. Therrien suggests that regulators need to ensure that the collection, use and disclosure of private information continues to be communicated to consumers in a clear and understandable way, so that they have a real choice in providing consent to services that are not essential to the proper functioning of their car. The days when companies could hide their privacy policies behind a façade of legalese are over. Instead, companies should draft privacy policies that consumers can actually understand. This level of transparency and openness can differentiate autonomous car providers and build a consumer base that trusts the car provider. This can help increase adoption rates in what is likely to become a very competitive landscape.

If, as Mr. Therrien and we suggest, specific privacy regulations for autonomous vehicles can help increase the adoption rate of autonomous vehicles and move the industry forward, companies in the space have every incentive to get involved in the legislative process and to embrace privacy legislation as a means to expedite the market-readiness of their products.

Regardless of whether regulators choose to create new privacy regulations, individual autonomous vehicle providers can fine-tune their stance on privacy to differentiate and increase the adoption rate of their products or services.

The importance of privacy policies is increasing as consumers become more informed about their privacy rights. In a recent example, Evernote changed its privacy policy to include a term stating that “you cannot opt out of [Evernote] employees looking at your content”. While this clause was intended to allow Evernote to improve its machine-learning analysis, the company immediately had to go into damage control after the change was spotted by a consumer and users started complaining.

In short, autonomous car providers need to pay attention to their stance on privacy. They should not be afraid to embrace legislation that is intended to protect consumer privacy, since it may help increase the adoption rate of autonomous vehicles. Even if regulators do not implement more stringent privacy regulations for autonomous vehicles, car providers can use their stance on privacy to aid in becoming a market leader. After all, it may not be a stretch to suggest that the winners and losers in the new autonomous vehicle industry may, in part, be dictated by the companies’ privacy policies.

Lawful Access: The Privacy Commissioner Reiterates Its Position

Posted in Criminal, Legislation, Privacy
Marissa Caldwell

One of the challenging aspects of PIPEDA in recent years has been the new section 7(3)(c.1)(ii), which permits organizations to disclose personal information to a government institution that has requested the disclosure for the purpose of law enforcement and has stated its “lawful authority” for the request. Organizations faced with such a request almost always ask the same question: What constitutes “lawful authority”?

Background on Lawful Access

On April 5, 2017, Patricia Kosseim, Senior General Counsel and Director General, Legal Services, Policy, Research and Technology Analysis for the Office of the Privacy Commissioner of Canada (the “OPC”), gave testimony before the Quebec Commission of Inquiry on protection of confidential media sources. Premier Philippe Couillard had announced an inquiry would be held after Montreal and provincial police admitted they collected data from the cellphones of several journalists. The inquiry’s mandate includes identifying best practices to protect the confidentiality of journalistic sources, and the commissioners must report back with recommendations by March 1, 2018.

At the request of the Commission’s counsel, Ms. Kosseim provided an overview of the legislation for protecting privacy in Canada and answered questions about lawful access issues from a federal perspective. Ms. Kosseim took the opportunity to present a clear view of the OPC’s position on how lawful access, as articulated in section 7(3) of PIPEDA, should be addressed. Of particular interest is how this position differs from the position taken by the federal government in recent years.

Section 7(3)(c.1)(ii) permits disclosure of personal information to a government institution that has requested the disclosure for the purpose of law enforcement and has stated its “lawful authority” for the request. As the Supreme Court of Canada articulated in R v. Spencer, “[t]he reference to ‘lawful authority’ in s. 7(3)(c.1)(ii) must mean something other than a ‘subpoena or search warrant.’” The Court went on to suggest that “‘[l]awful authority’ may include several things. It may refer to the common law authority of the police to ask questions relating to matters that are not subject to a reasonable expectation of privacy. It may refer to the authority of police to conduct warrantless searches under exigent circumstances or where authorized by a reasonable law.”

While this decision did clarify that s. 7(3) ought not to be used to indiscriminately justify police search and seizure powers, it provided little by way of concrete examples of what the section does authorize.

Parliament’s passing of Bill C-13 further complicated the matter, as it added to the Criminal Code new lawful access powers relating to transmission data.

In her remarks, Ms. Kosseim reiterated that in the OPC’s intervener submissions to the SCC in R v. Spencer, in its advice to Parliament on Bill C-13 and in other government consultations, the OPC has warned against lowering the thresholds for authorizing new investigative powers, and has noted the lack of accountability placed on the use of those powers and the absence of conditions specific to their use.

OPC Position on Lawful Access

Ms. Kosseim went on to reiterate the position that the Privacy Commissioner of Canada, Daniel Therrien, has taken on the subject. Commissioner Therrien has been vocal about the impact of these surveillance powers and has suggested that the following be done:

  • lawful access powers should be limited, not extended – particularly in light of the privacy risks, which are much greater than the much-used innocuous analogy to telephone directories would lead people to believe;
  • the crucial role of judges in the process of authorizing investigative powers should be maintained in order to ensure the necessary independence of police forces, which will better ensure the protection of people’s most basic rights; and
  • Parliament should consider legislating more specifically on the prerequisites for lawful access, as well as grant judges the ability to attach conditions, such as: the protection of citizens who are not targeted (but still captured) by these measures; the period for which the information can be retained; and the intentional destruction of non-pertinent data.

It is clear that the OPC would like to see the lawful access rights of government institutions, including police, be limited, clearly articulated, and supervised by the judiciary. Canadians have the right to be secure against unreasonable search and seizure under the Charter and have the right to have their personal information protected under PIPEDA. These rights must be balanced with the reality that circumstances will arise when personal information will need to be disclosed for purposes such as public safety.

This would also provide clarity to businesses which, faced with a non-judicially-supervised PIPEDA “lawful authority” request for personal information, find themselves having to make their own determination as to whether the government agency requesting access has a sufficiently bona fide reason for doing so. Businesses are, quite understandably, generally reluctant to be put in that position lest they later be the target of a privacy breach lawsuit or class action by the individual or individuals whose personal information they disclosed.

It will be interesting to see if the OPC, and other interested stakeholders, can motivate Parliament to re-evaluate and clarify the powers under section 7(3) or if Parliament will simply wait for a case to come forward that challenges the scope of what exactly lawful access is.

Defending a Lawsuit is Not a “Commercial Activity” Under Privacy Legislation

Posted in Privacy
Krupa Kotecha

In a case dating back to 2016 but only recently published, the Office of the Privacy Commissioner of Canada has ruled that the collection and use of a plaintiff’s personal information for the purpose of defending against a civil lawsuit is not a “commercial activity” and, as such, the Personal Information Protection and Electronic Documents Act, SC 2000, c 5 (“PIPEDA”) does not apply.

The complaint at issue arose as a result of a motor vehicle accident involving two parties, following which the plaintiff commenced civil litigation proceedings against the defendant. A psychiatric assessment was carried out by an independent psychiatrist, retained on behalf of the defendant’s insurance company. The plaintiff requested access to the personal information gathered by the psychiatrist. Although the psychiatrist did provide the plaintiff with some information, including a redacted copy of the psychiatric report, the plaintiff filed a complaint with the federal Privacy Commissioner on the basis that he believed there was additional information to which he was entitled. The plaintiff was also concerned about the accuracy of the personal information held by the psychiatrist.

The Privacy Commissioner determined that the psychiatrist’s collection and use of the plaintiff’s personal information did not fall within the term “commercial activity”, as defined in section 2 of PIPEDA. Despite the existence of commercial activity between the defendant and the insurance company, as a result of the defendant being insured by the insurance company, the Privacy Commissioner held that there was no commercial activity between the plaintiff and the defendant, and likewise, no commercial activity between the plaintiff and the defendant insurance company or its outside medical expert. The issue of access to the psychiatrist’s records was therefore beyond the scope of the Privacy Commissioner’s jurisdiction.

The decision provides further guidance on the types of activities that will, or will not, be held to fall within the Privacy Commissioner’s purview.

Lenovo and Superfish: Proposed Class Action Proceeds on Privacy Tort and Statutes

Posted in Cybersecurity, Internet of Things, Privacy
Carole Piovesan

A recent privacy decision regarding pre-installed software on laptops may have implications for companies operating not only in the traditional hardware space, but for those companies venturing into the burgeoning “Internet of Things” ecosystem. In short, an Ontario court declined to strike the common law and statutory privacy claims, suggesting that courts are at least willing to entertain such claims in the context of manufactured devices.

Background

Lenovo has faced several privacy-related lawsuits in Canada and the United States following its sale of laptop computers preloaded with Superfish’s VisualDiscovery (VD) adware, a program that tracks a user’s Web searches and browsing activity to place targeted ads on sites visited by the user.

In Canada, a nationwide proposed class action has been commenced by plaintiff Daniel Bennett, a lawyer from St. John’s, Newfoundland (see Bennett v Lenovo, 2017 ONSC 1082). Mr. Bennett recently purchased a new laptop from Lenovo’s website, which he later discovered contained the VD adware program.

Mr. Bennett alleges in the Statement of Claim that the adware program not only affects a computer’s performance but, crucially, “intercepts the user’s secure internet connections and scans the user’s web traffic to inject unauthorized advertisements into the user’s web browser without the user’s knowledge or consent”. He further alleges that the adware program “allows hackers … to collect … bank credentials, passwords and other highly sensitive information” including “confidential personal and financial information.”

Mr. Bennett advances the following claims against Lenovo on behalf of the proposed class: (1) breach of the implied condition of merchantability; (2) intrusion upon seclusion; (3) breach of provincial privacy legislation; and, (4) breach of contract. Mr. Bennett initially pled negligence as well but subsequently withdrew that claim.

In February 2017, Lenovo brought a motion to strike the Statement of Claim on the basis that it was plain and obvious the four claims could not succeed. The motion was heard by Justice Edward Belobaba who struck only one of the four claims.

The Decision

(1)  Breach of the implied condition of merchantability

Mr. Bennett alleges that the security risks and performance problems caused by the adware program render the computer “not of merchantable quality” or, simply, defective. The legal context for this claim is that consumer protection legislation establishes “implied conditions of fitness for purpose and merchantability” that cannot be modified or varied.[1] The question thus is what constitutes “merchantable”?

Lenovo argued that under Canadian law a product with multiple uses, such as Mr. Bennett’s computer (word processing, storing data, accessing the internet, etc.), is “merchantable” if it can be reasonably used, even with the alleged defect (i.e. the adware program), for at least one of those purposes, such as off-line word processing. Mr. Bennett countered that the various purposes listed by Lenovo are not “multiple purposes” but illustrations of the laptop’s overriding single purpose: to engage in electronic communications that are expected to remain private.

Justice Belobaba refused to strike this claim on the basis that the law on implied condition of merchantability in respect of computers is still unsettled. His Honour stated:

It is enough for me to find that it is not at all plain and obvious under Canadian law that a laptop that cannot be used on-line because of a hidden defect that has compromised the user’s privacy, and can only be used off-line for word processing, is nonetheless merchantable. As Professor Fridman notes, “If the test for unmerchantability [is] that the article is fit for no use, few goods would be unmerchantable because use can always be found for goods at a price.” Further, it is not plain and obvious that a reasonable computer user today would ever agree to purchase and use an affected laptop, knowing about the security risks created by the VD adware program, without insisting on a substantial reduction in the purchase price.

(2)  Intrusion upon seclusion

Intrusion upon seclusion was recognized as a new privacy tort by the Ontario Court of Appeal in Jones v. Tsige. Intrusion upon seclusion is established when: (i) the defendant’s conduct is intentional or reckless; (ii) the defendant has invaded, without lawful justification, the plaintiff’s private affairs or concerns; and (iii) a reasonable person would regard the invasion as highly offensive, causing distress, humiliation or anguish. Proof of actual loss is not required.

Mr. Bennett claims that the mere act of implanting the adware program onto his laptop without his prior knowledge and consent was an intrusion on his privacy. The adware program allows private information to be sent to unknown servers, thereby compromising the security of a user’s personal information. The vulnerabilities in security facilitate a hacker’s ability to intercept a user’s internet connection and access private data such as passwords.

Justice Belobaba found that the first two elements of the tort had been properly pled and were viable on the facts as stated in the Statement of Claim. The third element, distress, was not pled but could reasonably be inferred in the circumstances. His Honour held that the tort of intrusion upon seclusion is still evolving and its scope and content have not yet been fully determined. He also refused to strike this claim.

(3)  Provincial privacy laws

Mr. Bennett advances a claim of breach of privacy laws in British Columbia, Saskatchewan, Manitoba, and Newfoundland and Labrador. Lenovo argued that there is no pleading of actual violation of privacy and no allegation that any confidential information was actually hacked and appropriated. Accordingly, argued Lenovo, these statutory claims were certain to fail.

Justice Belobaba rejected Lenovo’s argument on the basis that unauthorized access to private information is itself a concern, even without proof of actual removal or theft of information. Each of the four provincial statutes declares in essence that the unlawful violation of another’s privacy is an actionable tort, without proof of loss.

His Honour stated that the scope and content of the provincial privacy laws in question are still evolving. He refused to strike this claim as well.

(4) Breach of contract

The only claim struck by Justice Belobaba was the claim for breach of contract. Mr. Bennett pleads the existence of an implied term in the sales agreement that the Lenovo laptops would be free of any defects and at the very least would not have pre-installed software that exposed class members to significant security risks.

Justice Belobaba stated that the case law is clear that a term will not be implied if it is inconsistent or otherwise conflicts with an express provision in the agreement. In this case, the sales agreement, which was viewable on-line when Mr. Bennett purchased his laptop on Lenovo’s website and “clicked” his acceptance, made clear in Article 5.2 that the installed software was being sold “without warranties or conditions of any kind.”

Conclusion

It has been reported that a partial settlement may have been reached with Superfish in a U.S. class action brought against both Superfish and Lenovo. The settlement reportedly includes Superfish’s cooperation with the plaintiffs by disclosing over 2.8 million additional files and providing Superfish witnesses for a potential trial.

The Canadian proposed class action is very much in its infancy. It remains to be seen how the class action will evolve in Canada.

[1]       Sections 9(2) and (3) of the Consumer Protection Act stipulate that the implied conditions and warranties applicable to goods sold under the Sale of Goods Act are also applicable to goods sold under a consumer agreement (in this case, the Lenovo sales agreement). These implied conditions and warranties cannot be varied or waived.

Health Record Snooping Nets Hefty Fine

Posted in PHIPA
Sara D.N. Babich

In a recent case out of Goderich, Ontario, a $20,000 fine, the highest of its kind in Canada, was handed out for a health privacy violation.

Between September 9, 2014 and March 5, 2015, a Master of Social Work student accessed the personal health information of 139 individuals, including family, friends and local politicians, among others, without authorization while on placement with a family health team. After pleading guilty to wilfully accessing the personal health information of five individuals, the student was ordered to pay a total of $25,000, comprising a $20,000 fine and a $5,000 victim surcharge.

The Information and Privacy Commissioner of Ontario (the “IPC”) recently reported that this was the fourth person convicted under the Personal Health Information Protection Act (“PHIPA”). Under s. 72 of the PHIPA, it is an offence to wilfully collect, use, or disclose personal health information in contravention of the Act. This and the other offences enumerated in s. 72(1) of the PHIPA are punishable by a fine of up to $100,000 for individuals and $500,000 for institutions. The $20,000 fine imposed in this most recent case is far from the upper limit in the PHIPA, but it signals an increasing willingness to hand out hefty fines for violations.

From the news release issued by the IPC (available here), it is apparent that deterrence of this type of snooping into the private medical affairs of individuals is being treated seriously and is seen as a necessary safeguard to maintain patient confidence in the health care system.

Unauthorized access to private health records is an ongoing issue for health care organizations, one which has had an increasing impact on individuals and the organizations they work for, as evidenced by the Goderich case. Given the responsibility of organizations to ensure that private health records remain protected, and the potential institutional fines associated with breaches of the relevant privacy legislation, it is incumbent on health care and related organizations to ensure that their employees are properly trained and are fully aware of the implications of a privacy breach, even if there is no malicious intent. It is also imperative that everyone who has access to these private records, including staff, students, volunteers and interns, is fully apprised of their obligations and the consequences for breaches, including snooping.

There is similar legislation in other provinces which provides for serious monetary penalties for breaching health privacy. In British Columbia, a breach of the E-Health (Personal Health Information Access and Protection of Privacy) Act, SBC 2008, c 38 could net a fine of up to $200,000. Alberta and Manitoba legislation authorizes fines of up to $50,000 for improper access and disclosure of health information (Health Information Act, RSA 2000, c H-5; Personal Health Information Act, CCSM c P33.5). A breach of Saskatchewan’s Health Information Protection Act, SS 1999, c H-0.021 could carry a fine of up to $50,000 for individuals and $500,000 for corporations, with an added penalty of one year imprisonment on summary conviction. Other Canadian jurisdictions authorize fines ranging from $10,000 to $50,000 for individual offenders, and some carry additional imprisonment penalties.

In addition to the fines that could be issued for health legislation violations, some provinces also allow claimants to advance court actions for invasion of privacy torts. In Ontario, the courts have expressly acknowledged that the PHIPA contemplates other proceedings in relation to personal health information. The Ontario Court of Appeal has stated that the PHIPA is well-suited to deal with systemic issues while recourse for individual wrongs can be found in the recently recognized privacy torts (see Hopkins v Kay, 2015 ONCA 112). In Manitoba, there is also dual recourse to privacy legislation and tort actions (see the comments of Monnin JA in Grant v Winnipeg Regional Health Authority et al, 2015 MBCA 44).

Notably, British Columbia has declined to recognize the privacy torts of intrusion upon seclusion and public disclosure of embarrassing private facts since the BC Privacy Act “covers the field” (see Ladas v Apple, 2014 BCSC 1821 at para 76).  Alberta courts have also indicated that an action for breach of privacy relating to information in the control of an organization must proceed before the Commissioner appointed under the Personal Information Protection Act, SA 2003, c P-6.5 before recourse may be had to the courts (see Martin v General Teamsters, Local Union No 362, 2011 ABQB 412 at paras 45-48).

Goldilocks and the Interactive Bear: The Privacy Nightmare

Posted in Cybersecurity, Internet of Things, Privacy
Anaïs Galpin and Camille Marceau

A Wake-up Call: The Rise and Demise of Hello Barbie

Once upon a time, which happened to be around March 2015, Mattel introduced Hello Barbie, the world’s first “interactive doll”. With the press of a single button, the user’s voice was recorded and processed, and Hello Barbie would respond to the question or statement recorded. The interactive doll appeared to be a dream come true for children and parents alike: for the former, an ever-present friend with whom to babble and play, and, for the latter, someone to provide answers and explanations to the incessant curiosity of their child, granting them a little respite. How could this not be a miracle?

However, soon after the release of Hello Barbie, cybersecurity commentators warned against the potential privacy risks of the interactive doll, and of “connected toys” generally. As reported in a previous blog post, in November 2015, VTech, a Hong Kong supplier of children’s connected learning toys, was hacked, compromising the personal data of over 6.5 million child profiles. VTech fixed the breach and amended its terms of use to warn against the risk of data piracy, and that was that.

Following the publicity around the incident, and VTech’s quick fix of the situation, interactive dolls and their engineers and makers largely vanished from the headlines. Presumably, toy manufacturers, and parents, had learned their lesson on the privacy risks that come along with connected toys.

The Comeback of Interactive Toys and Dolls: A Messy Affair

History tends to repeat itself, however, and this story is no exception. CloudPets, essentially an app that allows parents and friends to record and send messages to a connected CloudPet stuffed animal from anywhere in the world, suffered a similar incident. In what was reported to be the result of a lapse of security, private conversations between family members could be overheard via a listening device installed in the kids’ teddy bear.

In addition, the personal data of over 821,000 users and owners of CloudPets was reportedly discovered to be easily accessible online. How easy was it really, you ask? Too easy, apparently, since it was reported that an unidentified number of individuals managed to hack the database and personal accounts and recover sensitive data by using brute force. The database storing the personal data was, according to reports, protected by neither a firewall nor a password, and the personal accounts of the users and owners used overly simplistic passwords and usernames such as “123456”, “qwerty” and “password”.
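To illustrate why such passwords offer essentially no protection, here is a minimal Python sketch of a dictionary attack. It is purely illustrative, uses a toy hashing scheme of our own choosing, and is not based on CloudPets’ actual systems or on any details beyond the public reports cited above.

    # Toy illustration only: why passwords like "123456" or "qwerty" fall
    # to a dictionary attack in a handful of guesses, even if stored hashed.
    import hashlib

    def toy_hash(password: str) -> str:
        # Stand-in for whatever hashing scheme a service might use.
        return hashlib.sha256(password.encode()).hexdigest()

    # An attacker only needs a short list of the most common passwords.
    common_passwords = ["123456", "password", "qwerty", "111111", "abc123"]

    stolen_hash = toy_hash("qwerty")  # pretend this came from a leaked database

    for attempt, guess in enumerate(common_passwords, start=1):
        if toy_hash(guess) == stolen_hash:
            print(f"Cracked after {attempt} guesses: {guess}")
            break

The point is simply that when the underlying secret sits at the top of every attacker’s word list, the rest of the security stack has very little left to protect.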

Another interactive toy also made the news in early 2017. The My Friend Cayla doll was declared an “illegal espionage apparatus” by Germany’s Federal Network Agency in February 2017, as it was deemed to be a surveillance device disguised as another object, which cannot legally be manufactured, sold or possessed in Germany. Access to the doll was unsecured, and any hacker within 15 meters of it could connect via Bluetooth and interfere with messages received and sent by the doll. The doll can no longer be sold in Germany, and owners were ordered to disable its “smart” feature at the very least.

Moving Forward: How to Compromise between Companionship and Cybersecurity

Two lessons can be learned from these three attempts to provide children with the companionship of a virtual friend.

First, there seems to be a higher expectation of privacy for children, which has been expressed through calls to boycott Hello Barbie following the 2015 incident, as well as Germany’s strict application of its espionage rules. The interactive dolls described above are not significantly different in their purpose and functioning from the Siris (Apple) and Alexas (Amazon) of this world: both record, process and store voices and personal data in order to provide companionship and on-the-spot information to their owners and users. However, they differ greatly in their targeted audience: one is aimed at adults, while the other is for children, who are generally regarded as vulnerable.

In this regard, the Office of the Privacy Commissioner of Canada (“OPC”) made this distinction clear in its guide to collecting personal data from children, published in December 2015, stating: “the [OPC] has consistently viewed personal information relating to youth and children as being of particular sensitivity, especially the younger they are, and that any collection, use or disclosure of such information must be done with this in mind (if at all)”. Keeping this warning in mind, the OPC’s first tip is to limit, or avoid altogether, the collection of personal information. Other tips touch on the retention of data and ways to obtain proper consent.

Second, while some first attempts to provide children with interactive toys have resulted in significant missteps, interactive toys are here to stay, as evidenced by their comeback following the Hello Barbie incident. Toy-makers must therefore find a way to manufacture a toy that satisfies Papa Bear, Mama Bear and Baby Bear’s wants and needs.

For more information, please see McCarthy Tétrault’s guide on cybersecurity risk management, which is available here.

Camille Marceau is an articling student in McCarthy Tétrault’s Montreal Office.

U.S. Consumer Protection Regulator Consults on Use of Alternative Credit Data

Posted in Big Data, Financial, FinTech
Ana Badour, D.J. Lynde, Taha Qureshi and Kirsten Thompson

On February 16, 2017, the U.S. Consumer Financial Protection Bureau (the “CFPB”) issued a Request for Information (“RFI”) seeking feedback from the public on the use or potential use of alternative data and methodologies to assess consumer credit risk, thereby expanding access to credit for credit invisible segments of the population. Presently, most lenders make credit decisions using modeling techniques that rely on “traditional data” such as loan history, credit limits, debt repayment history, and information relating to tax liens and bankruptcy. Lenders use this information to assess creditworthiness of consumers to determine the likelihood of whether a consumer will default or become delinquent on a loan within a given period of time.

However, in the U.S., approximately 45 million Americans do not have access to credit as a result of not having a credit file with any credit bureau, or of having credit files that are too limited or stale to generate a reliable score (“credit invisible consumers”). The CFPB is seeking feedback on whether “alternative data”, such as payment information on phone bills, rent, insurance and utilities, as well as checking account transaction information and social media activity, could provide an innovative solution to this problem. Enhanced data points could potentially address the information asymmetry that currently prevents lenders from serving “credit invisible consumers”.

In Canada, privacy legislation and other laws place restrictions on the types of personal information and other information that can be used in making credit decisions; the role of consent is critical. In contrast, the U.S. lacks a comprehensive privacy regime and has not examined credit risk assessment from that perspective.

Through its RFI, the CFPB is seeking information from interested parties on the development and adoption of innovations relating to the application of alternative data, and current and potential consumer benefits and risks associated with such application. The following is a summary of some of the risks and benefits identified by the CFPB:

Benefits

  • Greater Access to Credit: Consumers without a traditional loan repayment history could substitute such data points with regular bill payments for cell phones, utilities or rent. This information could in some cases prove sufficient for a lender to assess the creditworthiness of consumers and perhaps deem them to be viable credit risks.
  • Improved Credit Prediction: Access to more practical and nuanced information about a consumer’s financial behaviour, pattern and context could allow lenders to identify trends in the consumer’s profile. Social media job status changes could perhaps help identify those individuals with low credit scores who have surmounted previous financial obstacles, and have a much better future credit outlook than the credit score snapshot would suggest.
  • Profitability, Cost Savings and Convenience: Currently, many lenders forego providing credit to consumers with poor credit scores or non-existent credit history. With better data, lenders can market their products and services to more consumers, thereby increasing revenues and profits.

Risks

  • Privacy: As is the case with big data generally, the CFPB has identified privacy issues as one of the primary risks associated with the use of such alternative data. Most forms of alternative data include information that can reveal fairly intimate details about an individual, such as social media activity, behaviour and location patterns. Lender access to such data would likely need to be regulated and protected explicitly at the legislative level.
  • Data Quality: Some types of alternative data may be prone to greater rates of error due to the potential that the quality standards required to be met for their original purpose are less rigorous relative to those applied to traditional data destined to be used in the credit approval process. Incomplete, inconsistent or inaccurate data could have a detrimental impact on lenders’ ability to correctly and fairly assess a consumer’s credit viability.
  •  Discrimination: Greater access to information also introduces the potential for discrimination. Machine learning algorithms may predict a consumer’s likelihood of default, but could correlate such probabilities with race, sex, ethnicity, religion, national origin, marital status, age or some other basis protected by law. Using alternative data as a proxy for identification of certain sub-groups of the population could be a violation of anti-discrimination laws.
  •  Unintended Consequences and Control: The CFPB has expressed concern that use of alternative data could have unintended negative consequences for some consumers. For example, frequent moves and address changes by members of the military could create a false impression of overall instability. Alternative data could also include information about consumers that is beyond their control. Such data would make it difficult for consumers to improve their credit profile and thereby harden barriers to economic and social mobility.

Canadian Perspective

The Canadian regulatory landscape already addresses many of the risks identified in the CFPB RFI. Provincial credit reporting legislation governs the use of traditional credit data and includes a number of safeguards intended to protect consumers. For example, an entity that, for profit, furnishes credit information or personal information, or both, pertaining to a consumer to a third party for the purposes of that third party’s credit assessment is required to be licensed and regulated as a credit reporting agency in Ontario.

In addition, under such legislation, consumers have certain rights, including the right to be notified when a credit application they have made has been refused based on their credit score (otherwise known as the requirement to provide “adverse action letters” to credit applicants), and the ability to access, review and correct their credit report (for example if the credit report includes incorrect information as a result of identity theft or error).

However, the potential use by lenders of non-traditional credit data which consumers may not be aware of, or able to access and correct, could lead to similar data quality concerns in Canada as identified above in the U.S. It is worth noting that, to the extent the non-traditional data points are “personal information”, Canadian consumers would have a right under privacy legislation to access and/or correct any such information.

The privacy and discrimination concerns outlined above in the U.S. have, in large part, been addressed in Canada through human rights legislation and privacy laws, although the advent of Big Data and analytics techniques (including the use of aggregate and anonymous personal information) is making once-clear regulatory boundaries significantly murkier. The Office of the Privacy Commissioner of Canada (“OPC”) recognized this in its recent Discussion Paper Exploring Potential Enhancements to Consent under the Personal Information Protection and Electronic Documents Act, where it noted the challenges of obtaining meaningful, valid consent in a Big Data world. The OPC suggested that some of the Big Data concerns could potentially be addressed by sectoral “codes of practice” (and observed that in Australia, credit reporting bureaus can develop codes of practice which are then registered by the commissioner there).

The OPC has also explored the idea of legislating “no-go zones” of data – in short, prohibiting the collection, use or disclosure of personal information in certain circumstances. These could be based on a variety of criteria, such as the sensitivity of the data, the nature of the proposed use or disclosure, or vulnerabilities associated with the group whose data is being processed. Alternative means of assessing credit risk, and the attendant concerns about the sensitivity of this information and the potential for discriminatory impacts, suggest that this type of use may attract future regulatory scrutiny.

For more information about our firm’s Fintech expertise, please see our Fintech group‘s page.

In New CASL Case, CRTC Sends $15,000 Message

Posted in CASL
Jade Buchanan

The biggest changes to CASL since CASL are on the horizon but the Canadian Radio-Television and Telecommunications Commission (“CRTC”) just showed us that it still cares about the little things. All it took was complaints about 58 emails – fewer emails than many people receive in a day – for the CRTC to impose an administrative monetary penalty (“AMP”) of $15,000.

Background

The CRTC just released a Compliance and Enforcement Decision (the “Decision”) regarding the ill-fated email campaigns of one Mr. William Rapanos. In a rare act of true irony, Mr Rapanos sent a series of unsolicited emails advertising his business of designing, printing and distributing paper flyer advertisements.

After receiving complaints between July 8, 2014 and October 16, 2014, the CRTC investigated and, on April 22, 2016, sent Mr Rapanos a Notice of Violation (“NoV”). The NoV cited 10 violations related to Mr Rapanos’ commercial electronic messages (“CEMs”), which violated CASL in almost every way an email can, including by being sent without the recipients’ consent and by not including an unsubscribe mechanism. Mr Rapanos ran three separate campaigns, committing a total of ten violations across them. The NoV also assessed an AMP of $15,000.

Mr Rapanos exercised his right to respond to the NoV, arguing that because his wi-fi connection was unsecured, an unknown person had sent the offending emails, that he had been “potentially the victim of a personal vendetta or of identity theft” and that the CRTC had not proven “beyond a reasonable doubt” that he sent the CEMs.

The Decision

Mr Rapanos was not successful. The CRTC was satisfied – on the balance of probabilities (the actual legal test that applied, and not “reasonable doubt”) – that Mr Rapanos had committed all of the violations listed in the NoV and maintained the $15,000 penalty.

Unfortunately, Mr Rapanos’ case does not lend itself to clarifying CASL’s more pressing ambiguities (such as the “6(6)” issue and the handling of consents obtained prior to CASL coming into force). The emails were unambiguous violations.

The Decision does contain some useful guidance on how to manage investigations and how to minimize AMPs. The Decision is particularly notable for maintaining the AMP imposed by the NoV. This differs from the recent Blackstone decision, which decreased the AMP from $640,000 to $50,000. Here are some of the lessons from the Decision:

  1. Documenting compliance efforts can help reduce an AMP. As in Blackstone, the CRTC considered indicators of self-correction (a factor not explicitly listed in CASL). Mr Rapanos showed an unwillingness to self-correct, including by running a fourth CASL-violating email campaign after learning the CRTC was taking action. The CRTC used this to further justify the quantum of the AMP. It is probable that Mr Rapanos could have helped lower the AMP by demonstrating self-correction, or at least a willingness to correct. In fact, evidence of self-correction – before and after the investigation began – did justify a lower AMP in Blackstone.
  2. The CRTC found a fine of $1,500 per violation to be reasonable. This is a helpful benchmark for future AMPs against individuals. For comparison, the per-violation penalty for a small business in Blackstone was roughly $5,500. However, due to the high volume of emails in Blackstone (385,668), Mr Rapanos is paying dramatically more on a per-email basis: $259 per email compared to Blackstone’s 13 cents per email (see the short calculation sketched after this list).
  3. The Decision further clarifies the calculation of violations. The number of emails did not factor into the number of violations (even though it may have affected the AMP). Just as in Blackstone, the CRTC considered the number of email campaigns. However, unlike Blackstone, Mr Rapanos committed multiple violations in each email campaign by violating three to four different provisions of CASL.
  4. Violators must produce evidence if they want the CRTC to consider their ability to pay. Mr Rapanos claimed “he never had a career due to health issues and that he and his wife subsist solely on social assistance.” The CRTC, faced with a sympathetic offender, noted they had “taken into account” Mr. Rapanos’ submissions on the AMP, but ultimately disregarded this claim because Mr Rapanos did not provide supporting evidence.
  5. The CRTC will use its power to compel evidence liberally. Notices for production were issued to Mr Rapanos (twice), his wife, the owner of the house where he resided, the host of his website domain and both of the companies that provided him with cell phone services (and that is just the notices that were mentioned in the Decision).
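As a quick sanity check on the per-violation and per-email figures mentioned in point 2 above, here is the underlying arithmetic; the dollar amounts, violation counts and email counts come from the two decisions discussed in this post, while the division itself is simply our own back-of-the-envelope calculation.

    # Back-of-the-envelope check of the AMP figures cited above.
    rapanos_amp, rapanos_violations, rapanos_emails = 15_000, 10, 58
    blackstone_amp, blackstone_emails = 50_000, 385_668

    print(rapanos_amp / rapanos_violations)    # 1500.0  -> $1,500 per violation
    print(rapanos_amp / rapanos_emails)        # ~258.6  -> roughly $259 per email
    print(blackstone_amp / blackstone_emails)  # ~0.13   -> roughly 13 cents per email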

U.S. Federal Insurance Office Issues Report Addressing InsurTech and Traditional Insurance

Posted in Big Data, Cybersecurity, Discrimination, FinTech
Dan Doliner, Zachary Masoud, Shauvik Shah and Kirsten Thompson

The Federal Insurance Office, U.S. Department of the Treasury (“FIO”) released its first annual Report on Protection of Consumers and Access to Insurance (the “Report”). The Report reviews developments and concerns relating to five insurance issues: technology; environmental hazards; fairness in insurance practices; fairness in state insurance standards; and retirement and related issues. The Report identifies options available to consumers, industry, and state and federal policymakers to address certain noteworthy gaps in protection for insurance consumers.

Of note are the Report’s observations on technology (Section II of the Report), and the manner in which technology issues (such as big data and cybersecurity) affect both traditional insurance companies and innovative InsurTech companies.

Big Data

The Report notes that the use of big data holds promise for both insurers and consumers, as it facilitates innovation and modernization in insurance product design, distribution, and delivery. However, the Report identifies some of the concerns regarding the use of big data by insurers in the U.S., specifically in respect of the risks for consumers. “Big data” is defined in the Report as the “ability to gather large volumes of data, often from multiple sources, [which] produce[s] new kinds of observations, measurements and predictions.” Big data can accumulate consolidated information which is gleaned from different sources, such as information collected from GPS devices, mobile phones, internet searches, social media, public record, surveys, and more.

Big data supports insurers’ analysis and development of premium pricing based on “risk classification” for insurance products, by increasing the number of variables that can be assessed. At the same time, big data also enables insurers to practice “price optimization”, in which data about an individual, such as shopping habits or pricing tolerance, is used to set premiums for an individual consumer. This practice may lead to individuals paying different premiums for similar policies. Certain states have restricted price optimization.

Insurance companies in the U.S. are increasingly using data brokers, which purchase, sell, collect and analyse big data, and develop related products (for example, by integrating data from social media). Data brokers do not have a direct relationship with the individuals from whom the data originates, which can raise privacy and transparency concerns (see publication in this regard by the U.S. Federal Trade Commission (“FTC”), here, and by the Privacy Commissioner of Canada, here).

The FIO called on state insurance regulators to specifically enforce state and federal legal requirements that are applicable to use of big data by insurers. The Report further highlights significant legislative and regulatory gaps in the U.S.

Cyber Risk

Insurance companies maintain vast databases of personal information regarding past, present and potential consumers. The use of big data, as well as increased use of outsourcing by insurers, can lead to significantly larger cybersecurity risk.

U.S. state insurance regulators have been cooperating on several regulatory initiatives in order to set a mandatory cybersecurity standard for insurance companies. These initiatives include the Cyber Security Task Force and the Insurance Data Security Model Law (IDSML), which is not yet finalized. Of special note are the Cybersecurity Requirements for Financial Services Companies, issued by the New York State Department of Financial Services (“DFS”), which came into effect on March 1, 2017.

The FIO encourages insurers to adopt cybersecurity strategies based on best-practices guidelines, such as the Framework for Improving Critical Infrastructure Cybersecurity, published by the National Institute of Standards and Technology. At the same time, the FIO is pressing state regulators to promote increased cybersecurity and data protection awareness among insurance companies, through new legislation and regulations, cybersecurity training and hiring, and frequent cybersecurity examinations.

Conclusion

Protection of insurance consumers is critical to the functioning of a stable and fair insurance marketplace. Technology and big data, in particular, have enabled growth and development of traditional insurance products. In fact, big data is often at the core of InsurTech products. Regulators are catching up with such advancements and have identified what they see as regulatory gaps. Insurers that sell insurance through traditional and technology-enabled channels should consider anticipated regulatory requirements when developing new products, and be prepared to address new regulatory standards.

Insurers in Canada operate in a different legal and regulatory landscape. The use of big data in Canada is subject to human rights legislation, privacy legislation and personal health information legislation. However, Canadian insurers that operate in the U.S., and Canadian companies which provide services to U.S. insurance companies, may wish to consider the trend in the U.S. of increased enforcement and regulation with respect to big data and cybersecurity.

For more information about our firm’s Fintech expertise, please see our Fintech group page. Information about McCarthy Tétrault’s Cybersecurity, Privacy and Data Management Group is available here. Please visit our firm’s 2017 edition of the Cybersecurity Risk Management Guide.