
Regulation of Deepfakes in Canada: Is Criminal Liability the Right Answer?

This article was written by Arvin Khodayari, 4L. This is up to date as of January 2020.


“The rise of synthetic media and deepfakes is forcing us towards an important and unsettling realization: our historical belief that video and audio are reliable records of reality is no longer tenable.”[1]


Giorgio Patrini, Founder, CEO, and Chief Scientist of Deeptrace


A. Introduction

In June 2019, a video of Facebook’s CEO, Mark Zuckerberg, was published on Instagram, in which he appears to seamlessly state the following:

“I wish I could keep telling you that our mission in life is connecting people, but it isn’t. We just want to predict your future behaviours. Spectre showed me how to manipulate you into sharing intimate data about yourself and all those you love for free. The more you express yourself, the more we own you.”[2]

While the audiovisual material seemed to portray the actual CEO, the excerpt was entirely computer-generated. Yet, at first glance, one could not tell. Similarly, in March 2019, a U.K. energy firm fell victim to an unconventional cyberattack.[3] Using synthetic audio that mimicked the voice, accent, and speaking style of the parent company’s CEO, the cybercriminals convinced the CEO of the U.K. firm to comply with a request to wire $243,000.[4]

These examples illustrate the dramatic increase over the last year in the use of computer-generated audio and video deepfakes as weapons against individuals, corporations, and politicians. Since December 2018, the number of deepfakes online has increased by 100%,[5] leading to various harms, from the sabotage and exploitation of private individuals to the undermining of journalism, the manipulation of elections, and the distortion of democratic discourse.[6] This growing presence and its ensuing harms have raised the question of how the law should evolve to regulate, limit, or even ban the use of deepfakes. This paper aims to consider whether the criminalization of deepfakes is the optimal means of regulation and, if so, what the nature of such legislation should be. Part I of this paper will briefly survey the current state of deepfakes, while Part II will analyze the benefits of criminalizing such technology as opposed to expanding civil remedies. Finally, Part III will address the nature of such a potential criminalization. To do so, this paper will study different American federal and state bills, and ultimately discuss how Canada should implement such a criminalization.

B. Evolution and Commoditization of Deepfake Technology

The term “deepfake” is a popularized label—originating from an anonymous user on the online platform Reddit—for technologies that alter images and, more recently, audio.[7] More precisely, deepfake technology “leverages machine-learning algorithms to insert faces and voices into video and audio recordings of actual people and enables the creation of realistic impersonations.”[8] This allows the creation of audio and video “of real people saying and doing things they never said or did.”[9] Such synthetic audiovisual media can be created with different deep-learning techniques, but the most recent and most popular method is the Generative Adversarial Network (GAN), prized for its realistic outputs and, most importantly, for its simplified process.[10] In fact, there are now various computer applications and services that offer deepfake creation tools to a non-technical audience.[11] Such an increase in accessibility has led to the commoditization and democratization of the technology.[12]
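To make the adversarial training dynamic behind GANs concrete, below is a minimal sketch of a GAN training loop in Python using PyTorch. It is an illustration under assumed conditions only: the network sizes, hyperparameters, and the random stand-in for “real” data are invented for the example, and actual deepfake pipelines are far more elaborate.

```python
# Minimal GAN sketch: a generator learns to produce samples that a
# discriminator cannot distinguish from "real" data. Illustrative only.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM, BATCH = 16, 64, 32  # assumed sizes for illustration

# Generator: maps random noise to a synthetic sample (e.g., an image patch).
G = nn.Sequential(nn.Linear(LATENT_DIM, 128), nn.ReLU(), nn.Linear(128, DATA_DIM))
# Discriminator: outputs a logit scoring how "real" a sample looks.
D = nn.Sequential(nn.Linear(DATA_DIM, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()
real_batch = torch.randn(BATCH, DATA_DIM)  # stand-in for real training media

for step in range(1000):
    # 1) Train the discriminator to separate real samples from generated ones.
    fake_batch = G(torch.randn(BATCH, LATENT_DIM)).detach()
    d_loss = (loss_fn(D(real_batch), torch.ones(BATCH, 1)) +
              loss_fn(D(fake_batch), torch.zeros(BATCH, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator into scoring fakes as real.
    g_loss = loss_fn(D(G(torch.randn(BATCH, LATENT_DIM))), torch.ones(BATCH, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The two networks improve in tandem: as the discriminator becomes better at spotting fakes, the generator is pushed toward ever more realistic outputs, which is precisely what makes the resulting media so convincing.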

According to a recent study conducted by Deeptrace, there were 14,678 deepfake videos online as of September 2019.[13] An overwhelming majority of these videos (96% of them) are of pornographic content.[14] This content had accumulated a total of 134,364,438 views as of February 2018.[15] These pornographic deepfakes predominantly target female subjects, especially from the entertainment sector (e.g., actresses and musicians).[16] This reality must be given serious consideration in tailoring any potential criminal legislation.

While pornographic content constitutes the majority today, another looming risk is the rise of political deepfakes.[17] Imagine a video surfacing the night before an election that appears to show a politician participating in a drug trade. While the video may be proven fictitious following extensive analysis, the election will have already taken place. This has been characterized by Marco Rubio, a member of the U.S. Senate Select Committee on Intelligence, as the “next wave of attack against American and Western democracies.” Several scholars agree that political deepfakes will be prevalent and problematic during the 2020 American presidential election.[18] Nevertheless, the aim of this paper is not to dive in depth into the technology behind deepfakes and its various harms, but rather to consider the legal measures to counter its consequences. The former has already been thoroughly analyzed by several excellent legal scholars.[19]

C. Benefits of Criminalization Over the Expansion of Civil Remedies

Before discussing the nature of a potential criminalization of deepfakes, one must first ask whether Canada ought to criminalize deepfakes in certain instances. Most importantly, is such a criminalization more advantageous than an expansion of civil remedies? Only if the answer to both questions is in the affirmative should one examine what a potential legislation would look like.

1. The Challenge of Attribution

Civil rights of action suffer from certain legal impediments, specific to deepfakes, that criminal liability may solve, notably the challenge of attribution of liability. In fact, “[c]ivil liability cannot make a useful contribution to ameliorating the harms caused by deep fakes if plaintiffs cannot tie them to their creators.”[20] This challenge is exacerbated by the prevalence of anonymity in online publications,[21] which will likely be the case for deepfakes intended to harm individuals. Beyond the question of anonymity, knowing who published the deepfake does not necessarily solve the problem: there is the added difficulty of determining whether that individual was the actual creator of the deepfake.[22] Take the example of the Mark Zuckerberg deepfake raised in the introduction. Is the Instagram publisher the creator of the deepfake, or merely an individual who genuinely believes they are sharing accurate and authentic information? While the identification of the creator or distributor of a harmful deepfake may be beyond the practical reach of private plaintiffs, law enforcement entities benefit from “much greater investigative capacities in addition to the ability to seek extradition.”[23]

2. Deterrence as the Main Tool Against Harms of Deepfakes

Another significant difference between civil and criminal liability for deepfakes is the extent of their deterrence.[24] In fact, due to the cost, length, and impediments of civil suits such as attribution, a lawsuit against a producer of a deepfake is unlikely to take place; only a limited group of individuals will be able to seek and recover damages successfully.[25] This, in turn, minimizes the deterrent effect of civil liability. The same is not true of criminal penalties. As stated by Danielle Keats Citron and Robert Chesney, while “being judgment proof might spare someone from fear of civil suit […] it is no protection from being sent to prison and bearing the other consequences of criminal conviction.”[26] A high level of deterrence is crucial and should be the main focus of any attempt to minimize the harm of deepfakes. In fact, while damages can help make the victim whole, they will not keep the video off the internet, where it can continuously be distributed to others.[27] Therefore, any measure against deepfakes should aim to have the most significant impact before publication, after which the harm becomes practically unstoppable. The optimal way to do so is to deter the producers of deepfakes through criminal liability.

While civil remedies can be modernized or expanded through statutory torts specific to deepfakes or through the adoption of privacy torts such as the tort of false light, the realities and systemic harms remain the same. To allow the regulation of deepfakes to rest on civil suits would only passively encourage their proliferation by diminishing the level of deterrence. This is not to say that criminalization is without weaknesses. In fact, law enforcement has previously had difficulties pursuing other forms of online abuse, in part because of a lack of training in the relevant laws. Consequently, while a wide range of deepfakes might warrant criminal charges, some have argued that only the most extreme cases will attract the attention of law enforcement.[28] Nevertheless, criminalization seems better suited than an expansion of civil remedies for the aforementioned reasons.

D. Nature of a Potential Criminalization

1. Existing Canadian Laws Applicable to Deepfakes

Currently, Canada does not have any laws explicitly criminalizing the production or distribution of deepfakes. However, it may be argued that certain laws, such as the Canada Elections Act (the “CEA”) and the Criminal Code, are transposable to deepfakes. Yet their scope of application is limited to such specific instances that their impact on the issue would be quasi-nonexistent.

In the case of the CEA, section 481(1) states that an individual is guilty of an offence if they publish any material, “regardless of its form,” that purports to be distributed by a political party, candidate, or prospective candidate during an election period.[29] Since section 481(1) applies to “any material, regardless of its form,” it could conceivably apply to deepfakes. Certain practitioners and legal scholars have endorsed this interpretation. In fact, Pablo Jorge Tseng, an associate at McMillan LLP, stated before the Canadian House of Commons that “[w]hile such provisions are not specifically targeted at deepfake videos, such videos may very well fall within the scope of this section.”[30] It can thus be said that the CEA proactively addresses the deepfake threat to the electoral process.[31] It is worth noting that section 481 of the CEA contains an exception for parody and satire.[32]

Unfortunately, the CEA only applies to the election of members to the House of Commons.[33] Therefore, section 481 would apply to neither provincial nor municipal elections. While most provinces have their own election acts, none casts a net as wide as the CEA. For example, while the CEA deals with “any material, regardless of its form,” most provincial acts only target the publication or distribution of a “false statement”[34] or “misleading information.”[35] An exception can be found in section 556(4) of the Quebec Election Act, which imposes liability on “every person who knowingly spreads false news of the withdrawal of a candidate.”[36] Even if a court were to interpret false news as encompassing deepfakes, this specific section would only apply to deepfakes depicting the “withdrawal of a candidate.”[37]

On the other hand, another relevant provision is section 162.1 of the Criminal Code, which prohibits non-consensual pornography. Section 162.1 penalizes “everyone who knowingly publishes, distributes, transmits, sells, makes available or advertises an intimate image of a person knowing that the person depicted in the image did not give their consent to that conduct, or being reckless as to whether or not that person gave their consent to that conduct.”[38] This section of the Criminal Code does not have a requirement of intent or motive.[39]

It can be argued that the term “depiction” in the statute can apply to more than a real-life representation of a given thing.[40] The term “depiction” is also used in the non-consensual pornography statutes of thirty-one American states, including Pennsylvania, where “visual depiction” includes “computer image.”[41] However, the Canadian statute requires “a visual recording of a person made by any means including a photographic, film or video recording.”[42] Additionally, the British Columbia Court of Appeal stated that the purpose of this section of the Criminal Code was “to protect one’s privacy interest in intimate images sent to and entrusted with another.”[43] This supports the non-applicability of section 162.1 to deepfakes.

It is evident that the above-mentioned laws are not ideally suited to confront the harms of deepfakes and are limited to very exceptional circumstances. Section 162.1 of the Criminal Code would likely not apply to deepfakes, and the CEA only deals with federal elections and only applies during election periods.

2. American Initiatives and Lessons Learned

In the past year, several bills and acts have been introduced, at both the federal and state levels, proposing different means of government regulation of the production and publication of deepfakes. We will examine two legislative proposals: the DEEPFAKES Accountability Act[44] and California Bill AB-1280, Crimes: Deceptive Recordings.[45] The two propose very different approaches. The former only aims to criminalize deepfakes that are not identified as such, whereas the latter criminalizes the production and distribution of specific categories of deepfakes, whether or not they are identified as such.

The DEEPFAKES Accountability Act (the “Act”) was introduced at around the same time as U.S. senators Marco Rubio and Mark Warner sent letters to eleven social media companies, including Facebook, Twitter, and YouTube, urging them to develop industry standards for sharing, removing and archiving synthetic content.[46]

The Act employs the term “advanced technological false personation record” (an “ATFPR”) which is defined as “any deepfake,” which is in turn defined as:

any video recording, motion-picture film, sound recording, electronic image, or photograph, or any technological representation of speech or conduct substantially derivative thereof—

(A) which appears to authentically depict any speech or conduct of a person who did not in fact engage in such speech or conduct; and

(B) the production of which was substantially dependent upon technical means, rather than the ability of another person to physically or verbally impersonate such person.[47]

The Act then criminalizes the production of an ATFPR without proper disclosure as to the falsity of the production.[48] In fact, the Act only criminalizes an ATFPR that lacks both a “clearly articulated verbal statement that identifies the record as containing altered audio and visual elements” and “an unobscured written statement in clearly readable text appearing at the bottom of the image throughout the duration of the visual element that identifies the record as containing altered audio and visual elements.”[49] The production must also contain an “embedded digital watermark clearly identifying such record as containing altered audio or visual elements.”[50]

The Act also requires an additional intent element. In fact, the Act only penalizes individuals who fail to fulfill the requirements mentioned above and who act either with the intent to humiliate or otherwise harass a person with sexual content of a visual nature, or with the intent to cause violent or political strife.[51] The penalty for violating these requirements is a maximum of five years of imprisonment.[52]
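To make the structure of this two-part test concrete, the following short Python sketch models the liability logic as described in this paper: a producer risks liability only where the disclosure requirements are not met and one of the enumerated intents is present. The class, field, and function names are hypothetical and paraphrase this paper’s reading of the Act, not the statutory text.

```python
# Hypothetical model of the DEEPFAKES Accountability Act's liability test
# as summarized above; field names are illustrative, not statutory language.
from dataclasses import dataclass

@dataclass
class ATFPR:
    has_verbal_disclosure: bool    # spoken statement identifying the alteration
    has_written_disclosure: bool   # on-screen text throughout the visual element
    has_digital_watermark: bool    # embedded watermark flagging altered content
    intends_sexual_humiliation_or_harassment: bool
    intends_violent_or_political_strife: bool

def potentially_liable(record: ATFPR) -> bool:
    """Liability requires BOTH a disclosure failure AND a prohibited intent."""
    compliant = (record.has_verbal_disclosure
                 and record.has_written_disclosure
                 and record.has_digital_watermark)
    harmful_intent = (record.intends_sexual_humiliation_or_harassment
                      or record.intends_violent_or_political_strife)
    return (not compliant) and harmful_intent
```

On this reading, a fully disclosed deepfake escapes liability regardless of intent, which is exactly the gap criticized below.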

In sum, the Act only aims to criminalize deepfakes that fail to meet the disclosure requirements and are produced with the requisite intent. It does not criminalize the actual production of deepfakes based on the nature of their content. The deepfake itself remains permitted as long as it carries the prescribed disclosing elements. Consequently, none of the 14,678 deepfake videos online as of September 2019 would be criminalized as long as they met the identification requirement. Similarly, it would seem that a producer of non-consensual deepfake pornography who identified the content as an ATFPR and fulfilled all other disclosure requirements would avoid criminal liability under this Act.

The Act also creates certain exceptions, some of which are worth discussing. The Act states that the disclosure and watermark requirements do not apply to any ATFPR that has “not been substantially digitally modified,”[53] nor to those where “a reasonable person would not mistake the falsified material activity for actual material activity of the exhibited living person.”[54] These are very arbitrary thresholds and are bound to create uncertainty in their application. Finally, the requirements under the Act do not apply to any ATFPR “produced by an officer or employee of the United States, or under the authority thereof, in furtherance of public safety or national security.”[55] This exception would allow the use of deepfake technology by the President of the United States, or indeed any federal officer or employee, in “furtherance of public safety or national security.” These broad concepts seem to provide much leeway. Moreover, if the aim of a deepfake is the “furtherance of public safety or national security,” its producer already fails to meet the intent required under the Act for criminal liability. Why, then, should there be an exception clause for officers and employees of the United States?

On the other hand, California Bill AB-1280 (the “Bill”) goes a step further. The Bill aims to add a specific section to the Penal Code of California dealing with the production of specific types of deepfakes.[56] More precisely, it aims to criminalize the production and distribution of deepfakes that “[depict] an individual personally engaging in sexual conduct” and deepfakes intended to “coerce or deceive any voter into voting for or against a candidate or measure in that election.”[57] The Bill explicitly targets two types of deepfakes: non-consensual pornography and political deepfakes. In the case of the latter, there is a subjective requirement, as the producer must intend for the deepfake to coerce or deceive voters.

In the Bill, deepfake is defined as “any audio or visual media in an electronic format, including any motion-picture film, video recording, or sound recording that is created or altered in a manner that it would falsely appear to a reasonable observer to be an authentic record of the actual speech or conduct of the individual depicted in the recording.”[58] The penalty for an offence under this Bill is a maximum fine of $10,000 and one-year imprisonment.[59]

The Bill was introduced following the approval of two other bills by the Governor of California, AB-602 and AB-730, which impose civil liability for the distribution of non-consensual pornographic deepfakes and of deepfakes used to deceive voters, respectively.[60] The Bill goes one step further in criminalizing this behaviour, which seems to support the view that civil liability specific to deepfakes is insufficient in and of itself.

What these two legislative initiatives demonstrate is that there is significant disparity and a lack of consensus on the possible avenues for criminalization in response to the harms of deepfakes.

3. Guidelines for Regulation in Canada

Based on the current state of deepfake technology and the legislative efforts in the United States, what would the ideal legislation in the Canadian context be?

3.1 Adequate Definition of a “Deepfake”

One of the essential elements is defining the legal concept of a deepfake. On the one hand, the concept must be narrow enough to avoid constitutional challenges and arbitrary interpretation. On the other, it should be inclusive enough to avoid becoming technologically obsolete in the near future. For example, tying the definition to artificial intelligence would render it under-inclusive, as other methods of creating deepfakes may arise.[61] That is why some have argued that the term “computer-generated” should be used instead.[62] The definition in the DEEPFAKES Accountability Act, “the production of which was substantially dependent upon technical means, rather than the ability of another person to physically or verbally impersonate such person,”[63] is another viable option that would respond to these concerns. Others, such as Danielle Citron, have suggested simpler definitions, such as “a video appearing to be authentic but that is created from other images, videos, or audio.”[64] However, this fails to encompass purely audio deepfakes, which, as demonstrated by the example of the U.K. energy firm in the introduction, may be as harmful as audiovisual deepfakes.

An optimal term that falls between the lengthy definition used in the DEEPFAKES Accountability Act and the one proposed by Citron is that used in the now-expired New York Assembly Bill A08155: “digital replica.” It is defined as a “computer-generated or electronic reproduction of a living or deceased individual’s likeness or voice that realistically depicts the likeness or voice of the individual being portrayed.”[65]

3.2 Characterization of a “Malicious” Deepfake

Determining what constitutes a malicious deepfake is perhaps the biggest challenge in any deepfake criminalization effort. While they may cause harm, deepfakes can also have beneficial applications.[66]

In both American bills, the line is drawn through the intent of the producer; each requires a specific intent. This is the right approach and has been endorsed by legal scholars: an element “gauging intent would be useful in order to differentiate between beneficial and harmful deepfakes.”[67] This standard could be informed by factors such as “whether the creator profited from the video, on what platforms and how often the creator posted the video, and more context-specific clues about why the video was created.”[68] Obviously, such an analysis would require a heavy emphasis on the facts of each situation.[69] For example, the intent requirement would permit the production of deepfakes such as the one used by Disney to include the late actor Peter Cushing in the 2016 film “Rogue One: A Star Wars Story.”[70]

However, the DEEPFAKES Accountability Act combines the requisite intent with an objective element of malicious content: the requirement that the deepfake exhibit a “material activity.” This is defined as “any falsified speech, conduct, or depiction which causes, or a reasonable person would recognize has a tendency to cause perceptible individual or societal harm, including misrepresentation, reputational damage, embarrassment, harassment, financial losses, the incitement of violence, the alteration of a public policy debate or election, or the furtherance of any unlawful act” (emphasis added).[71] This adds an objective threshold to the actus reus by requiring that the production of a deepfake cause “perceptible individual or societal harm.”[72] Such an objective standard and the accompanying illustrations would be beneficial, as they would allow individuals to understand where the threshold for a malicious deepfake rests. However, unlike under the DEEPFAKES Accountability Act, the determination of liability should not depend on whether the producer identifies the deepfake as such. Instead, such identification should be considered when determining whether the producer had the requisite intent for criminalization. To allow the identification of the deepfake to absolve the producer from criminal liability would undermine the objective of criminalization.

While the elements mentioned above might be right for general deepfakes, the case should be different for pornographic deepfakes. It has been suggested by American scholars that “[a]n ideal federal statute would prohibit the online publication of [pornographic] deepfakes and would not require an intent to harm.”[73] This should be the case in Canada as well.

Such an approach is consistent with section 162.1 of the Criminal Code, which establishes that “the motives of an accused are irrelevant” for the distribution of non-consensual pornography.[74] At a time when there may be no discernible difference between an actual intimate image and a computer-generated one, why should the intent requirement be any different? The harm is identical to the online publication of actual non-consensual pornography.[75] This can be demonstrated by the unfortunate case of the journalist Rana Ayyub. After she campaigned for justice for a rape victim, a pornographic deepfake of her was produced, circulated, and ultimately viewed more than 40,000 times.[76] Less than 48 hours after its publication, her online accounts were flooded with screenshots of the video, and she received multiple death threats.[77] This reached a level where the United Nations Human Rights Council issued a public statement confirming the risk to her safety.[78] All of this ensued from a fake depiction of her. How is this level of harm any different from that caused by actual non-consensual pornography? California Bill AB-1280, studied above, has also opted for the absence of an intent requirement for pornographic deepfakes.[79]

3.3 Exception for Satire and Issues of Legitimate Public Concern

While the requirement of a specific intent contributes to striking a balance between the malicious and beneficial uses of the technology, an exception for satire and issues of legitimate public concern would reinforce it further. Such an exception serves two main purposes. First, it would further reduce the likelihood that the legislation infringes the rights guaranteed by the Canadian Charter of Rights and Freedoms (the “Charter”), or help ensure that it only minimally impairs those rights. Second, it would ensure that the criminalization does “not quell beneficial uses of deepfakes.”[80] An example of a deepfake that could fall within this exception is the one produced by the Malaria No More campaign, showing David Beckham delivering an appeal to end malaria in nine languages.[81]

3.4 Compliance with the Canadian Charter of Rights and Freedoms

Another challenge of criminalizing certain deepfakes is ensuring compliance with section 2(b) of the Charter: the freedom of thought, belief, opinion and expression.[82] The first question to ask is whether deepfakes are protected by section 2(b) of the Charter. One can answer this question by drawing an analogy with false statements. In its purest form, a deepfake is the most compelling false statement in a computer-generated format. The Supreme Court of Canada has previously concluded that “the deliberate publication of statements known to be false, which convey meaning in a non-violent form, falls within the scope of section 2(b) of the Charter.”[83] The Court refused to qualify a lie as an illegitimate form of expression, as this would require courts “to depart from its view that the content of a statement should not determine whether it falls within section 2(b).”

While this may have been the case until now, some American scholars have argued that deepfakes should lose the protection of freedom of expression in certain instances. While the extent and nature of the right to freedom of expression differ between the United States and Canada, the rationale of this argument still applies in the Canadian context. It has been argued that when “false statements do not merely state false facts, but are also given in a form that carries with it indicia for reliability (such as a falsified newspaper or video or audio tape), the government should have greater power to regulate than it typically has to regulate false words.”[84] One of the fundamental pillars of freedom of expression is the pursuit of truth through the marketplace of ideas.[85] The rationale behind this marketplace of ideas is that “the free flow of ideas is the best way to get the truth.”[86] In fact, the “best test of truth is the power of the thought to get itself accepted in the competition of the market.”[87] However, what does one test a doctored reality against when both video and audio evidence can be just as unreliable, and verbal reports can “easily be designed to endorse false facts”?[88] This danger is exacerbated by the phenomenon of disbelief by default. Individuals can rely on the existence of deepfakes to “call into question the veracity of real videos [or audio] in order to undermine credibility and cast doubt,” further eroding trust in journalism.[89] Take the example of the now-infamous Donald Trump Access Hollywood audio tape. While Trump has repeatedly claimed that the tape was fake,[90] the existence of deepfakes can add unwarranted credibility to such excuses. While this may be a drastic and dystopian point of view, it is something that we are forced to consider.

Nevertheless, even if deepfakes are protected under section 2(b) of the Charter, it seems consistent with the jurisprudence that the criminalization of certain types of deepfakes would be justified under section 1 of the Charter. While the aim of this paper is not to complete an exhaustive analysis of such a constitutional justification, it is worthwhile to address the required elements in comparison with the Supreme Court of Canada’s finding that statutes criminalizing defamatory libel were justified.[91]

The first requirement under section 1 of the Charter is that the legislation must be prescribed by law. This requires that the legislation provide “an intelligible standard according to which the judiciary must do its work.”[92] It must also allow individuals to understand what the law is and expects of them, as well as guide legal debate about how to apply the law to a particular set of facts.[93] The proposed deepfake legislation provides for both an objective determination of a malicious deepfake and a requisite intent for all but pornographic deepfakes. This combination of an objective and a subjective element would provide a clear standard of behaviour.

Following such a finding, the legislation must meet the requirements detailed by the Supreme Court of Canada in R v Oakes: a sufficiently important objective, a rational connection between the measure and the objective, minimal impairment of the right, and proportionality between the measure’s effects and its objective.[94] In the case of the criminal statutes for defamatory libel, the Supreme Court of Canada found in R v Lucas that such criminalization met all of these requirements. That finding is an illustrative example that shares several similarities with the criminalization of deepfakes.

Sufficiently Important Objective

The objective of criminal statutes criminalizing defamatory libel is the protection of an individual’s reputation from wilful and false attacks. The Court found that such protection “recognizes both the innate dignity of the individual and the integral link between reputation and the fruitful participation of an individual in Canadian society” and was therefore a pressing objective.[95] The objective behind the criminal liability for specific deepfakes is substantially similar. The proposed legislation has an added objective of protecting the integrity of the democratic process. Therefore, the proposed criminalization of deepfakes would likely satisfy this requirement.

Rational Connection to the Objective

The Supreme Court then found that since the statute was narrowly defined, caught only the most odious offenders, and offered substantial protection by deterring those very offenders, the measure was rationally connected to its objective.[96] Similarly, the proposed deepfake legislation is narrowly defined, as it only targets producers of deepfakes who satisfy the objective requirement of malicious content and the subjective requirement of malicious intent. This is rationally connected to the objective of protecting an individual’s reputation from wilful and false attacks.

Minimal Impairment

Furthermore, the Court concluded that the requirement of an intent to defame ensured that the legislation only minimally impaired the freedom of expression of the accused.[97] Similarly, the criminalization proposed above requires a specific intent by the producer of a non-pornographic deepfake as well as the falsity of the content. These mens rea elements would ensure that section 2(b) is minimally impaired, as the Court concluded in R v Lucas.

In the case of pornographic deepfakes, it was suggested above that criminalization should not require intent. This is not incompatible with the Charter. In fact, section 162.1 of the Criminal Code establishes that the motives of the accused are irrelevant for non-consensual pornography.[98] Since the harms of deepfake pornography are identical to those of authentic non-consensual pornography, its criminalization without an intent requirement should be equally permissible.

Proportionality

Finally, the Court found that defamatory libel was far removed from the core values of freedom of expression and thus merited minimal protection under the Charter. Consequently, its deleterious effects were proportional to the objectives of the statute. Malicious deepfakes, whether they aim to cause harm, humiliation, or political strife, are comparably inimical to the core values of freedom of expression, as they intend to mislead the public. Therefore, the effects of their criminalization would likely be proportionate to its objectives and constitute a justifiable limit on the rights conferred by section 2(b) of the Charter.

While an exhaustive analysis would be required, the example of R v Lucas illustrates the likelihood that carefully drafted legislation criminalizing deepfakes would be justified under section 1 of the Charter.

E. Conclusion

This paper aimed to consider whether criminal liability for the production of specific deepfakes is an optimal means of regulation in Canada and, if so, what the nature of such liability should be. It has demonstrated that while such criminalization may face constitutional challenges, carefully tailored legislation may be justified and may be an effective means of reducing the harms caused by the proliferation of malicious deepfakes. American legislative initiatives have given great insight into the nature and extent of such criminalization. However, while such legal responses might be effective, they are not the only solutions to malicious deepfakes. Legal scholars have discussed alternatives such as technological responses, market responses, and coercive responses such as military action.[99] Others have defended measures such as the promotion of media literacy, recognition of the vital role of legitimate journalism, and robust fact-checking organizations.[100] While there is no consensus, all agree that measures must be taken. The dangers of deepfakes are especially acute where distrust of certain individuals or communities already exists, and deepfakes may leverage our confirmation biases to edge their way into the national discourse.[101] Until any of these measures are taken, “democracies will have to accept an uncomfortable truth: in order to survive the threat of deepfakes, they are going to have to learn how to live with lies.”[102]

[1] Henry Ajder et al, “The State of Deepfakes: Landscape, Threats, and Impact” (2019), online (pdf): Deeptrace <https://deeptracelabs.com/resources/>.
[2] Bill_posters_uk, “I wish I could...” (2019), online: Instagram <https://www.instagram.com/p/BypkGIvFfGZ/?utm_source=ig_embed>.
[3] Catherine Stupp, “Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case” (2019), online: The Wall Street Journal <https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402>.
[4] Ajder et al, supra note 1 at 14.
[5] Ibid at 1.
[6] Robert Chesney & Danielle Keats Citron, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security” (2019, forthcoming) 107 Cal L Rev 1 at 16-23, 27.
[7] Benjamin Goggin, “From porn to 'Game of Thrones': How deepfakes and realistic-looking fake videos hit it big” (23 June 2019), online: Business Insider <https://www.businessinsider.com/deepfakes-explained-the-rise-of-fake-realistic-videos-online-2019-6>.
[8] Citron & Chesney, supra note 6 at 22.
[9] Ibid.
[10] Ajder et al, supra note 1 at 13.
[11] Ibid at 5.
[12] Ibid at 3.
[13] Ibid at 1.
[14] Ibid.
[15] Ibid.
[16] Ibid at 2.
[17] Holly Kathleen Hall, "Deepfake Videos: When Seeing Isn't Believing" (2018) 27:1 Catholic U J of L & Technology 51 at 59.
[18] Ibid.
[19] Citron & Chesney, supra note 6 at 16-29; Hall, supra note 17 at 56-61.
[20] Citron & Chesney, supra note 6 at 42.
[21] Elizabeth Caldera, "Reject the Evidence of Your Eyes and Ears: Deepfakes and the Law of Virtual Replicants" (2019) 50:1 Seton Hall L Rev 177 at 191.
[22] Citron & Chesney, supra note 6 at 42.
[23] Ibid.
[24] Ibid.
[25] Douglas Harris, "Deepfakes: False Pornography Is Here and the Law Cannot Protect You" (2018-2019) 17 Duke L & Tech Rev 99 at 123.
[26] Citron & Chesney, supra note 6 at 42.
[27] Harris, supra note 25 at 118.
[28] Citron & Chesney, supra note 6 at 42.
[29] Canada Elections Act, SC 2000, c 9, s 481.
[30] House of Commons, Standing Committee on Access to Information, Privacy and Ethics [ETHI], Evidence, 1st Session, 42nd Parliament, 16 October 2018, 1110 (Mr. Pablo Jorge Tseng, Associate, McMillan LLP, as an individual).
[31] B J Siekierski, “Deep Fakes: What Can Be Done About Synthetic Audio and Video?” (2019), online: Library of Parliament <https://lop.parl.ca/sites/PublicWebsite/default/en_CA/ResearchPublications/201911E#a3>.
[32] Canada Elections Act, supra note 29, s 481.
[33] Ibid, Preamble, para 1.
[34] The Elections Act, CCSM, c E30, s 181(2).
[35] Election Act, RSBC 1996, c 106, s 266(1)(a).
[36] Election Act, CQLR, c E-3.3, s 556(4).
[37] Ibid.
[38] Criminal Code, RSC 1985, c C-46, s 162.1(1).
[39] Ibid, s 162.1(4)(b).
[40] Harris, supra note 25 at 122.
[41] Ibid.
[42] Criminal Code, supra note 38, s 162.1(2).
[43] R v Craig, 2016 BCCA 154 at para 124.
[44] US, Bill HR 3230, Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act of 2019, 116th Cong, 2019.
[45] US, AB 1280, An act to add Section 644 to the Penal Code, relating to crimes, and making an appropriation therefor, 2019-20, Reg Sess, Cal, 2019.
[46] Adrian Croft, “From Porn to Scams, Deepfakes Are Becoming a Big Racket—And That’s Unnerving Business Leaders and Lawmakers” (2019), online: Fortune <https://fortune.com/2019/10/07/porn-to-scams-deepfakes-big-racket-unnerving-business-leaders-and-lawmakers/>.
[47] Bill HR 3230, supra note 44, § 1041(n)(3).
[48] Ibid, § 1041(a).
[49] Ibid, § 1041(c).
[50] Ibid, § 1041(b).
[51] Ibid, § 1041(f)(1)(A).
[52] Ibid, § 1041(f)(1).
[53] Ibid, § 1041(j)(1)(C).
[54] Ibid, § 1041(j)(1)(E).
[55] Ibid, § 1041(j)(1)(F).
[56] AB 1280, supra note 45, s 1.
[57] Ibid.
[58] Ibid.
[59] Ibid.
[60] Matthew F Ferraro, “Deepfake Legislation: A Nationwide Survey—State and Federal Lawmakers Consider Legislation to Regulate Manipulated Media” (2019) at 9-11, online (pdf): WilmerHale <https://www.wilmerhale.com/en/insights/client-alerts/20190925-deepfake-legislation-a-nationwide-survey>.
[61] Caldera, supra note 21 at 198.
[62] Harris, supra note 25 at 124.
[63] Bill HR 3230, supra note 44, § 1041(n)(3)(B).
[64] Caldera, supra note 21 at 3-4.
[65] US, AB A08155-B, An Act to amend the civil rights law, in relation to the right of privacy and the right of publicity; and to amend the civil practice law and rules, in relation to the timeliness of commencement of an action for violation of the right of publicity, 2017-18, Reg Sess, NY, 2017, s 50(2).
[66] Caldera, supra note 21 at 199.
[67] Ibid.
[68] Ibid.
[69] Ibid.
[70] Donie O'Sullivan, “When seeing is no longer believing” (2019), online: CNN <https://www.cnn.com/interactive/2019/01/business/pentagons-race-against-deepfakes/>.
[71] Bill HR 3230, supra note 44, § 1041(n)(2).
[72] Ibid.
[73] Harris, supra note 25 at 124.
[74] Criminal Code, supra note 38, s 162.1(4)(b).
[75] Harris, supra note 25 at 124.
[76] “I was vomiting: Journalist Rana Ayyub reveals horrifying account of deepfake porn plot” (21 November 2018), online: India Today <https://www.indiatoday.in/trending-news/story/journalist-rana-ayyub-deepfake-porn-1393423-2018-11-21>.
[77] Danielle Citron, “How deepfakes undermine truth and threaten democracy” (2019), online: TED <https://www.ted.com/talks/danielle_citron_how_deepfakes_undermine_truth_and_threaten_democracy/footnotes#t-181199>.
[78] Ibid.
[79] AB 1280, supra note 45, s 1.
[80] Caldera, supra note 21 at 199-200.
[81] Isobel Asher Hamilton, “The CEO behind a David Beckham deepfake video thinks we will have totally convincing digital humans in 3 years” (27 April 2019), online: Business Insider <https://www.businessinsider.com/ceo-of-ai-startup-synthesia-thinks-well-have-photorealistic-digital-humans-in-3-years-2019-4>.
[82] Canadian Charter of Rights and Freedoms, s 2(b), Part I of the Constitution Act, 1982, being Schedule B to the Canada Act 1982 (UK), 1982, c 11.
[83] R v Zundel, [1992] 2 SCR 731 at 735 [Zundel].
[84] Marc Jonathan Blitz, "Lies, Line Drawing, and Deep Fake News" (2018) 71:1 Okla L Rev 59 at 110.
[85] Ford v Quebec (Attorney General), [1988] 2 SCR 712 at para 56.
[86] Robert J Sharpe & Kent Roach, The Charter of Rights and Freedoms (Toronto: Irwin Law, 2017) at 167.
[87] Ibid, citing Abrams v United States, 250 US 616 at 630 (1919).
[88] Blitz, supra note 84 at 110.
[89] Tom Van de Weghe, “Six lessons from my deepfakes research at Stanford” (2019), online: Medium <https://medium.com/jsk-class-of-2019/six-lessons-from-my-deepfake-research-at-stanford-1666594a8e50>.
[90] David Richardson, “Trump Still Wants You to Think the Access Hollywood Tape Is Fake” (9 April 2018), online: Observer <https://observer.com/2018/09/trump-still-wants-you-to-think-the-access-hollywood-tape-is-fake/>.
[91] R v Lucas, 1998 CanLII 815, [1998] 1 SCR 439 [Lucas].
[92] Irwin Toy Ltd v Quebec (Attorney General), [1989] 1 SCR 927 at 983.
[93] R v Nova Scotia Pharmaceutical Society, [1992] SCJ No 67, [1992] 2 SCR 606.
[94] R v Oakes, [1986] 1 SCR 103 [Oakes].
[95] Lucas, supra note 91 at para 48.
[96] Ibid at para 54.
[97] Ibid at paras 67-68.
[98] Criminal Code, supra note 38, s 162.1(4)(b).
[99] Citron & Chesney, supra note 6 at 1.
[100] Hall, supra note 17 at 75-76.
[101] Mary Anne Franks & Ari Ezra Waldman, "Sex, Lies, and Videotape: Deep Fakes and Free Speech Delusions" (2019) 78:4 Md L Rev 892 at 896.
[102] Robert Chesney & Danielle Citron, "Deepfakes and the New Disinformation War: The Coming Age of Post-Truth Geopolitics" (2019) 98:1 Foreign Affairs 147 at 155.

