
University of Otago Law Theses and Dissertations



Edmond, Jack --- "Potential responses to the threat of 'fake news' in a digitalised media environment" [2018] UOtaLawTD 9


Potential responses to the threat of ‘fake news’ in a digitalised media environment

Jack Edmond

A dissertation in partial fulfilment of the degree of Bachelor of Laws (with Honours) at the University of Otago, Dunedin, New Zealand

October 2018

Acknowledgements

To my supervisor Professor Paul Roth, thank you for your help and patience throughout the year.

To Professor Colin Gavaghan for your insight and feedback.

To my flatmates for their friendship and support, particularly Raffie, Jonny and Zac for putting up with diss chat all year.

Finally, I would like to say thank you to my parents; I wouldn’t be where I am today without your love and support.


Part 1:

I. Introduction

The Oxford Dictionaries Word of the Year for 2016 was “post-truth”, fitting in a year in which ‘fake news’ stories were ubiquitous.1 For example, a BuzzFeed report observed that false election-related stories were viewed more on Facebook than election stories generated by generally trusted media such as the New York Times.2 The emergence of the internet has effectively removed all barriers to entry into the media market, creating what can be described as the “new media”.

The ‘new media’ has had significant societal consequences, both positive and negative. In theory, the increase in media plurality that has accompanied the ‘new media’ should strengthen democracy, as people can make the informed decisions that voting requires. However, alongside facilitating the democratic process, the media has the potential to subvert it through unfair, selective, misleading, or completely false reporting.3 Inherent within the media is the power to undermine the democratic process and cause serious reputational, emotional, and financial harm.4 The ‘new media’ has seen consumers transition from receiving their news through traditional means, such as newspapers and broadcasting, to social media: one poll indicates that 62 per cent of United States adults receive their news from social media.5 Following the 2016 United States Election, ‘fake news’ has become a commonly used term, and has been recognised as a serious threat to an informed democracy.6 Thus, it is imperative that ways of mitigating this risk are assessed.

This dissertation will assess potential responses to ‘fake news’ and evaluate whether New Zealand law is fit for purpose in the current technological and media climate. In particular, it will assess whether a legal response is appropriate. Although it is unlikely that New Zealand is in imminent danger of a foreign state-driven disinformation campaign, it remains conceivable. With Russia currently being accused of launching online attacks on Western democracies, it seems prudent to take a proactive rather than a reactive approach.

1 Oxford Dictionary “Word of the Year 2016 is...” English Oxford Living Dictionaries <https://en.oxforddictionaries.com/word-of-the-year/word-of-the-year-2016>.

2 Craig Silverman “This Analysis Shows How Viral Fake Election News Stories Outperformed Real News on Facebook” (16 November 2016) BuzzFeed News <www.buzzfeednews.com/article/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook>.

3 Law Commission The News Media Meets ‘New Media’ (NZLC R128, 2013) at 27.

4 Ibid.

5 Angela Moon “Two-thirds of American adults get news from social media: survey” (9 September 2017) Reuters <www.reuters.com/article/us-usa-internet-socialmedia/two-thirds-of-american-adults-get-news-from-social-media-survey-idUSKCN1BJ2A8>.

6 Michael Safi “Fake news: an insidious trend that's fast becoming a global problem” (2 December 2016) The Guardian <https://www.theguardian.com/media/2016/dec/02/fake-news-facebook-us-election-around-the-world>.

The problem of ‘fake news’ has been prominent in some overseas jurisdictions. Accordingly, most of the research and discussion in this dissertation will be based on developments in France, Germany, the United Kingdom, and the United States.

This dissertation will be divided into two parts. The first part will provide a background to ‘fake news’, exploring its history, and assessing the harms it can cause. It will then attempt to define the term. The second part will evaluate different responses to the issue, and attempt to establish the appropriate response. The overarching challenge will be protecting freedom of expression, while trying to uphold the integrity of New Zealand’s democratic system.

II. A background to ‘fake news’

  1. A history of fake news

‘Fake news’ is not new; misinformation, lies and propaganda existed long before the advent of the modern press. The term has been employed to describe several different activities. First, there is ‘news’ presented for commercial purposes to entice advertisers and readers, often described as ‘clickbait’. Second, there is news disseminated for political purposes, or in other words propaganda. Third, there is ‘news’ which is not fabricated, but is highly inaccurate due to deliberate bias or a lack of journalistic standards. Finally, the term has been used by people in power to de-legitimise information that is unfavourable to them.7

History contains countless examples of ‘fake news’.8 French researcher Francois-Bernard Huyghe has traced ‘fake news’ to the Egyptian pharaohs in 1274 BC. According to Huyghe, Ramses II claimed victory over the Hittites in the Battle of Kadesh, a victory celebrated in bas-reliefs and Egyptian texts. Huyghe claims the battle was actually a “semi-defeat”; the real triumph was “that of propaganda, of the sculptors and scribes”.9 Propaganda and ‘fake news’ have also been prevalent during wartime, where biased and misleading content has been employed as a powerful weapon, used by nations to dehumanise and fuel hatred towards the enemy. This tactic was especially prominent during the Second World War. Professor Philip M. Taylor, of the University of Leeds, stated that World War Two “witnessed the greatest propaganda battle in the history of warfare”. He highlighted the distribution of leaflets, which were employed to plant doubt and fear in the minds of enemy soldiers while boosting the morale of allies.10 Another famous example of ‘news’ for political purposes is the Zinoviev Letter, which was published by the Daily Mail four days before the British general election of 1924, intending to damage the Labour Party’s chances. The letter was a forgery, designed by two members of a Russian monarchist organisation to sour relations between the Soviet Union and the United Kingdom.11

More recently, there has been an increase in the prevalence of foreign state-backed disinformation campaigns. The Cambridge Analytica affair is a topical example of a campaign directed at influencing a democratic outcome. Cambridge Analytica scraped data from 50 million people, using the information gathered to target people with personalised advertising campaigns aimed at influencing the 2016 United States Election.12 Approximately 32,000 voters were paid $2-3 to take a detailed personality and political survey that required them to log in through Facebook. The app collected data such as likes and personal information from the test-taker’s Facebook account, as well as their friends’ data, amounting to over 50 million

7 Professor Julian Petley “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”, at 4.

8 Ibid, at 5.

9 “Before Trump, the long history of fake news” (15 July 2018) The Star Online <www.thestar.com.my/news/world/2018/07/15/before-trump-the-long-history-of-fake-news/>.

10 Dr Rod Oakland “Words at War – Leaflets and Newspapers in World War Two” (26 August 2012) <www.psywar.org/newspapers.php>.

11 Petley, above n 7, at 7.

12 Alex Hern “Cambridge Analytica: how did it turn clicks into votes?” (6 May 2018) The Guardian <www.theguardian.com/news/2018/may/06/cambridge-analytica-how-turn-clicks-into-votes-christopher-wylie>.

people’s Facebook data. The personality survey results were combined with Facebook data such as likes to identify psychological patterns. Algorithms then combined the data with other sources such as voter records to create “a superior set of records”, with hundreds of data points per person. This allowed Cambridge Analytica to develop targeted personalised advertising campaigns.13 The true effect this process had on the election is unclear, although it appears this kind of activity certainly has the capacity to significantly disrupt a democratic process if misused.

Furthermore, a University of Oxford report provided evidence of formally-organised social media disinformation campaigns in 48 countries.14 Notably, Russia has been accused of facilitating the dissemination of ‘fake news’ to destabilise Western democracies. During the 2016 United States Election, Russia placed over 3000 adverts on Facebook and Instagram, promoting 120 Facebook pages that reached 126 million Americans.15 Russia also used sophisticated techniques, targeting customised audiences to strengthen extreme voices in campaigns. This was achieved through targeting topical issues such as race relations and immigration.16 There have also been allegations of Russian influence in the success of the Vote Leave campaign in the EU Referendum. Indeed, in a six-month period in 2016, Russian news providers ‘Russia Today’ and ‘Sputnik’ published 261 media articles with anti-EU content, attempting to influence the EU Referendum. The news providers had more reach on Twitter regarding anti-EU content than both Vote Leave and Leave.EU during the Referendum.17 Another investigation found that 156,252 Russian accounts tweeted about #Brexit, and that they posted over 45,000 Brexit messages in the last 48 hours of the campaign, most of them encouraging people to vote for Brexit.18 It would be concerning if these messages contained falsities, as disproving falsities within 48 hours of polling day can be difficult.

  2. The current digital climate

Although ‘fake news’ has always existed, the advent of the internet has caused a technological shift that has changed the way it can be viewed and shared. Digital platforms such as Google and Facebook have enabled ‘fake news’ to spread at unprecedented rates, and advertising has enabled its commercialisation. Facebook and Google hold an effective duopoly over the digital advertising market, accounting for 90% of the growth in United Kingdom digital advertising in 2016, and 85% of United States digital advertising

13 Ibid.

14 Samantha Bradshaw and Philip N. Howard “Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation” Computational Propaganda Research Project, Oxford Internet Institute (July 2018) <http://comprop.oii.ox.ac.uk/research/cybertroops2018/>.

15 Olivia Solon “Russia-backed Facebook posts ‘reached 126m Americans’ during US election” (31 October 2017) The Guardian <www.theguardian.com/technology/2017/oct/30/facebook-russia-fake-accounts-126-million>.

16 Letter from Rebecca Stimson (Facebook) to Damian Collins, Chair, Digital, Culture, Media and Sport Committee (8 June 2018).

17 “Putin’s Brexit? The Influence of Kremlin media and bots during the 2016 UK EU referendum” (10 February 2018) 89up <http://89up.org/russia-report>.

18 Kate Holton “Russian Twitter accounts promoted Brexit ahead of EU referendum” (15 November 2017) Reuters <www.reuters.com/article/us-britain-eu-russia/russian-twitter-accounts-promoted-brexit-ahead-of-eu-referendum-times-newspaper-idUSKBN1DF0ZR>.

revenues.19 Indeed, ‘Google’ is so prominent it is now often used as a verb, and according to the Pew Research Centre 44% of United States adults get their news on Facebook.20

It is clear the current media climate has been heavily influenced by the proliferation of social media and ‘fake news’. The extraordinary growth of advertising revenues for Facebook and Google has been accompanied by a proportionate decline in the revenue of old media such as newspapers. Enders Analysis observed how the business model of national news organisations has been destabilised by the diversion of advertising revenue away from print in favour of Google and Facebook.21 Since 2012, news organisations have lost £31 in print advertising for every £1 gained in digital advertising.22 Google and Facebook are absorbing much of the growth in the digital advertising market, meaning traditional media are facing constrained resources.

The two platforms operate using an advertising-based business model. The layout of Facebook is carefully designed, allowing the company to ‘control consumption’.23 Tools such as the ‘like’ button, and the ability to share posts and articles with friends, allow an exponential spread of information to occur. To maximise user engagement, it is in Facebook’s interest to make it easy to ‘like’ or ‘share’ content. This often results in people sharing articles they have not read, meaning they have not assessed their accuracy. Furthermore, the dissemination of ‘fake news’ is exacerbated by the algorithms platforms use to control which content a user is exposed to.24 Social media companies are increasingly engaged in editorial behaviour, as they can choose which news stories are shown. It has been argued this creates a ‘filter bubble’ or ‘echo chamber’ effect, where people are only exposed to content matching their current worldview, as it aligns with what they have liked and shared historically.25 Indeed, Facebook has stated that its algorithms do have a significant ‘filter bubble’ effect.26 The ‘filter bubble’ effect is concerning as it means people are less equipped to critically evaluate content. Readers will struggle to determine the veracity of journalism as they are being denied a balanced portrayal of events.

Furthermore, it is arguable Facebook has not placed enough emphasis on ensuring the origin of a news story is transparent.27 There is no branding difference on the Facebook news feed between ‘fake news’ sites and respected journalists, as all the content is formatted in its own

19 Enders Analysis “UK digital ad forecast 2016-2018: Strong but uneven growth” (November 2016) Enders Analysis <www.endersanalysis.com/content/publication/uk-digital-ad-forecast-2016-2018-strong-uneven-growth>.

20 Jeffrey Gottfried and Elisa Shearer “News Use Across Social Media Platforms 2016” (26 May 2016) Pew Research Centre Journalism and Media <www.journalism.org/2016/05/26/news-use-across-social-media-platforms-2016/>.

21 Enders Analysis “News brands: Rise of membership as advertising stalls” (February 2017) Enders Analysis <www.endersanalysis.com/content/publication/news-brands-rise-membership-advertising-stalls>.

22 Ibid.

23 “Manifestos and Monopolies” (21 February 2017) Stratechery <https://stratechery.com/2017/manifestos-and-monopolies/>.

24 Dr. Ansgar Koene, Horizon Digital Economy Research Institute, University of Nottingham “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017” at 16.

25 Ibid.

26 Eytan Bakshy and Solomon Messing “Exposure to Ideologically Diverse News and Opinion, Future Research” (24 May 2015) Solomon Messing <https://solomonmessing.wordpress.com/2015/05/24/exposure-to-ideologically-diverse-news-and-opinion-future-research/>.

27 Mark Sweney “Facebook’s rise as news source hits publishers’ revenues” (15 June 2016) The Guardian <www.theguardian.com/media/2016/jun/15/facebooks-news-publishers-reuters-institute-for-the-study-of-journalism>.

style.28 Facebook emphasises the person who shared the post over the original publisher. Therefore, people are relying on the trustworthiness of their friend, and often disregard the reliability of the source.29 This can lead to a proliferation of material lacking credibility, made purely for financial or political gain. Historically, a media brand would serve as a yardstick against which people could assess a source’s validity. However, the way Facebook operates its News Feed has made this difficult.

The aforementioned properties, coupled with the incentive of advertising revenue, appear to disproportionately stimulate the dissemination of ‘fake news’ compared with legitimate journalism. Indeed, in the final three months of the United States election, the top-performing ‘fake news’ stories were more popular than the top stories from 19 established news outlets combined. Interestingly, the investigation found that 17 of the top 20 ‘fake news’ stories were pro-Trump or against Hillary Clinton, indicating the potential of ‘fake news’ to influence democratic outcomes.30 This trend can be explained by the sensational effect of ‘fake news’. ‘Fake news’ demands attention, enticing people to ‘like’ or ‘share’ content. This causes journalists to feel compelled to exaggerate and embellish stories to maintain their audience, leading them to disregard commonly accepted journalistic standards.31

Quality journalism is an essential counter-measure to ‘fake news’. The current digital advertising climate operates to disadvantage such journalism, as journalists who invest in verification are competing, often in a race against time, with sensationalised news that disregards the truth.32 News organisations are now increasingly dependent on digital platforms. Hence, even seemingly minor changes to the algorithms of platforms can significantly influence user engagement with certain material. Further, Facebook has a purely commercial motive. Its algorithm will likely be tasked with the sole objective of engaging users and supplying them with content they want to see. Thus, given the popularity of sensationalised media, there is often no obligation or incentive to promote quality journalism.33

  3. The threat ‘fake news’ poses to democracy

John Stuart Mill’s harm principle states that “the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others.”34 Thus, it appears that before responses to ‘fake news’ can be considered, specific harms should be identified.

Democracy is dependent on open political debate, and reasoned political discourse is impossible without a shared basis of facts. According to Professor Leighton Andrews, although opinions are important, and facts must be disputed or interpreted to assist different arguments, “the trust that underpins democratic decision-making, including the outcomes of elections, requires some consensus on what is generally regarded to be ‘true’”.35 For example, reasoned

28 Alex Hern “Facebook doesn’t need to ban fake news to fight it” (25 November 2016) The Guardian <www.theguardian.com/technology/2016/nov/25/facebook-fake-news-fight-mark-zuckerberg>.

29 Ibid.

30 Silverman, above n 2.

31 Koene, above n 24, at 12.

32 Ibid, at 12.

33 Ibid, at 12.

34 Clay Calvert and Austin Vining Filtering Fake News Through a Lens of Supreme Court Observations and Adages (L. Rev. 153, North Carolina Law Review Association, 2017), at I.

35 Stephan Lewandowsky, James Ladyman and Jason Reifler “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.

economic debate requires agreement on the rate of inflation and unemployment. Contrary to the public interest, ‘fake news’ subverts trust in statistics and official data, creating a difficult environment for political and economic debate to flourish.

Further, the propagation of conspiracy theories through ‘fake news’ poses a significant threat of harm. Sander van der Linden, a scholar of psychology conducting research at Yale University, has observed that although it is relatively clear that NASA did not fake the moon landing, and that global warming is not a hoax, there is “a committed subculture of conspiracy theorists” that argue the opposite.36 Many experts have dismissed these beliefs as held by a small portion of society out of touch with reality. However, while not held by the majority, there is still cause for concern. Indeed, a report conducted by the Yale Program on Climate Change Communication found that 13% of Americans believe global warming is a hoax.37 Moreover, hours after the Boston Marathon bombing, many suggested the attack was an inside job, and that the event was a hoax.38 Although questioning and holding the government to account is essential for democracy, spreading misleading information can cause great harm. In a 2013 study, Karen Douglas and Daniel Jolley observed that people who received information stating global warming was a hoax were more reluctant to engage politically or to make behavioural changes, such as reducing their carbon footprint.39 These findings are alarming in the context of ‘fake news’, as they support the notion that even seemingly trivial publications supporting conspiracy theories have the capacity to reduce participation in democratic processes by instilling mistrust in institutions. This diverts attention from significant political and social issues.40

  4. Does ‘fake news’ actually pose a threat to democracy?

There are those who deny ‘fake news’ poses a threat to democracy. It is arguable our exposure to ‘fake news’ is a temporary phenomenon, and that people would not be in such a frenzy had Hillary Clinton been elected. Lord Blencathra described ‘fake news’ as “an over hyped American concern exaggerated by the United States mainstream media who were very strong and partisan supporters of Hillary Clinton.”41 He claims these supporters fail to recognise Hillary Clinton lost due to personal failings and campaign errors.42 Indeed, ‘fake news’ is a convenient excuse for the surprise loss of an election. Further, a Stanford study has allegedly disproved claims that ‘fake news’ shifted the election in Trump’s favour. Gentzkow, an economics professor at Stanford, said “A reader of our study could very reasonably say, based on our set of facts, that it is unlikely that fake news swayed the election.”43 Blencathra observed that following this report most claims about ‘fake news’ swinging the election dissipated.44

36 Sander van der Linden “What a Hoax” (October 2013) Scientific American Mind <https://scholar.princeton.edu/sites/default/files/slinden/files/conspiracyvanderlinden.pdf> at 41.

37 Anthony Leiserowitz, Edward Maibach, Connie Roser-Renouf, Seth Rosenthal and Mathew Cutler Climate Change in the American Mind (5 July 2017) <http://climatecommunication.yale.edu/publications/climate-change-american-mind-may-2017/2/>.

38 van der Linden, above n 36, at 41.

39 Karen Douglas and Daniel Jolley “The Social Consequences of Conspiracism: Exposure to Conspiracy Theories Decreases Intentions to Engage in Politics and to Reduce One’s Carbon Footprint” (4 January 2013) British Journal of Psychology.

40 van der Linden, above n 36.

41 The Rt. Hon the Lord Blencathra “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017” at 4.

42 Ibid, at 4.

43 Kristen Crawford “Stanford study examines fake news and the 2016 presidential election” Stanford News <https://news.stanford.edu/2017/01/18/stanford-study-examines-fake-news-2016-presidential-election/>.

44 Blencathra, above n 41, at 5.

Facebook CEO Mark Zuckerberg said he thought it was a “pretty crazy idea” that Facebook had influenced the outcome of the United States election.45 However, Facebook’s own marketing suggests otherwise. A page run by Facebook highlighting the role of Facebook data in the election of United States Senator Pat Toomey stated that the “made-for-Facebook creative strategy was an essential component to Senator Pat Toomey’s re-election, as the senator won by less than 100,000 votes out of 6,000,000 cast”.46 The company used data to target users likely to vote for Toomey.47 Additionally, Zuckerberg has since indicated he regrets dismissing concerns that Facebook had a role in the 2016 United States Election.48 Thus, it seems ‘fake news’ clearly has the capacity to undermine democracy.

It is arguable that those who like and share ‘fake news’ are those who are already predisposed and committed to a certain viewpoint, and thus it has no influence. However, this appears to overly simplify politics as it characterises people as either right or left wing. It discounts the role of reasoned political discourse, and the potential for legitimate information to sway political opinion. By further committing those who are predisposed to a certain view, we are limiting the potential for those people to have their views changed by exposure to legitimate political debate.

  5. An overview

It is conceded that ‘fake news’ may not have swung the United States election, and that interest in the subject has been sparked by Trump’s victory and fear of Russian influence in Western democracy. Nevertheless, although it may be difficult to prove, ‘fake news’ clearly has the capacity to undermine democratic processes. Thus, it seems we cannot continue to ignore the impact ‘fake news’ could have on our democracy. If ‘fake news’ became commonplace in New Zealand, it would likely be to the detriment of our media system. This is especially likely in a digital environment that makes it difficult to finance quality journalism. If digital platforms continue to not just tolerate ‘fake news’, but privilege it, our media landscape may be completely undermined. It is arguable that even if ‘fake news’ is not currently undermining democratic processes, this may be because a real news competitor remains. Alarmingly, this may not be true if digital platforms continue as they are.49

45 Olivia Solon “Facebook's fake news: Mark Zuckerberg rejects 'crazy idea' that it swayed voters” (11 November 2016) The Guardian <www.theguardian.com/technology/2016/nov/10/facebook-fake-news-us-election-mark-zuckerberg-donald-trump>.

46 Amanda Bloom “The best content to influence voters” Facebook Business <https://www.facebook.com/business/success/toomey-for-senate>.

47 Ibid.

48 Sam Levin “Mark Zuckerberg: I regret ridiculing fears over Facebook's effect on election” (28 September 2017) The Guardian <www.theguardian.com/technology/2017/sep/27/mark-zuckerberg-facebook-2016-election-fake-news>.

49 News Media Association “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.

III. Defining ‘fake news’

  1. The need to define ‘fake news’

Given criticisms of ‘fake news’ and the need to regulate it, there is a real need to define the term in order to eliminate vagueness. Dave Itzkoff stated in the New York Times that “the phrase ‘fake news’ has now been used so liberally, it’s meaningless.”50 An overly broad definition can lead to the term being misused by politicians to undermine journalism that is unfavourable to them, and has the potential to weaken trust in professional journalism generally.51 For example, in the United States, President Trump has accused media bodies from CNN to the New York Times of spreading ‘fake news’.52 An assistant to Trump stated the White House “will continue using the term ‘fake news’” until the media understands that its “monumental desire” to attack the President is wrong.53 It is this carefree and liberal use of the term that is concerning, as it attempts to deter legitimate reporting and fact-finding. It suggests that journalism becomes ‘fake’ where it challenges an elected politician, resembling an authoritarian approach to the media. A liberal use of the term is particularly troubling for independent news providers that lack the brand recognition of established outlets and are thus more vulnerable to attacks on the integrity of journalism.54 Indeed, the weaponisation of the term ‘fake news’ to justify political attacks on journalism may pose a greater danger than ‘fake news’ itself.55

There are three types of information that arguably cannot be considered ‘fake news’: predictions, opinions, and satire. Given that predictions and opinions are subjective, it seems they cannot be deemed false. However, difficulty arises when considering predictions and opinions that are based on objectively false information. To exclude such content from the definition may allow similar harms to persist. For example, an opinion or prediction piece on a proposed referendum on gun ownership that draws on false statistics of annual gun-related deaths would cause harm by misleading people in their choice, disrupting a democratic process. Notably, the Defamation Act 1992 provides that ‘honest opinion’ is only a defence where the opinion is based on facts not materially different from the truth,56 and the opinion is genuine.57 Thus, it would be consistent with defamation law to categorise an opinion that is not held in good faith, and that relies on falsities, as ‘fake news’. Further, as in the context of defamation, an opinion or prediction can have the same harmful effects as other ‘fake news’. Although it has the capacity to cause harm, satire should not be considered ‘fake news’ as it does not purport to be serious, and is thus not ‘news’. The humorous context should be sufficient to displace notions of ‘fake news’.

  2. The proposed definition

The definition proposed in this dissertation is: ‘Material presented through any medium that purports by both appearance and content to be real news to the reasonable reader, and

50 Calvert and Vining, above n 34, at I.

51 News Media Association, above n 49, at 1.

52 The Independent Monitor for the Press (IMPRESS) “Submission to the Digital, Culture, Media & Sport Committee Fake News inquiry 2017” at 52.

53 Ibid.

54 Ibid, at 56.

55 Ibid, at 59.

56 Defamation Act 1992, s 11(a).

57 Section 10(1).

knowingly and intentionally presents information with the capacity to cause harm as true for the purpose of misleading the reader where it is false’.58 This definition attempts to be as narrow as possible, to avoid loss of meaning and to prevent the term being used as a weapon against mainstream media.

  3. The proposed definition explained

The first element of this definition is that it covers ‘material presented through any medium’. This is because it is imperative that a definition retains some malleability to allow for technological development. It is uncertain what future mediums will be available to disseminate ‘fake news’. The potential for fake videos and fake photos to be distributed is especially concerning. It is plausible that technology could arise that allows people to make convincing videos and photos depicting a politician doing or saying something they did not. This type of ‘fake news’ would often be very difficult to refute, especially if published days before an election.

Secondly, the material must ‘purport by both appearance and content to be real news to the reasonable reader’.59 The definition should focus on the noun, ‘news’, as much as the adjective, ‘fake’. It is important that the definition avoids covering content that does not attempt to be news.60 It seems undesirable that anyone could come under its scope for posting a status on Facebook, or sharing information through social media. It would be prudent to target the creators of such material, who intentionally present it to appear as real news. Otherwise, the scope of the definition may be too wide, and people will become hesitant to engage in political discussion. This requirement would include material such as fake videos to the extent they were considered ‘news’. What constitutes ‘news’ may cause some uncertainty. The Shorter Oxford English Dictionary defines ‘news’ as “newly received or noteworthy information about recent events”.61 This definition is broad, as there is no qualitative requirement attached; information can merely be ‘newly received’. Importing a ‘reasonable reader’ standard will help narrow the definition of what is ‘news’, and will enable both the appearance and content of the material to be evaluated.

It is also important that information which is false but inadvertently included in material, or information that is truthful but exaggerated, is not considered ‘fake news’. Thus, material must be ‘knowingly and intentionally’ presented as ‘true for the purpose of misleading the reader’.62 All journalists are prone to human error, and we cannot label a simple mistake as ‘fake news’. To do so would unduly infringe on freedom of expression and could chill democratic debate, as it would deter journalists from reporting in many cases.

An element of importance, or a de minimis standard, is worth considering. Thus, the material should have the ‘capacity to cause harm’. This is because there will be cases where the social cost of applying the law to a violation will outweigh the benefits.63 De minimis non curat lex is commonly translated as “the law does not concern itself with trifles.”64 Applying a de minimis standard will allow reference to the size and type of the harm, the cost of adjudication, the purpose of the rule or statute in question, the effect of adjudication on the rights of third parties, and the intent of the infringer.65 Evaluating the importance of material would be beneficial, as it applies an additional standard that helps further narrow the definition of ‘fake news’, reducing concerns of curbing freedom of expression. Nevertheless, there may be concern that allowing such subjectivity unduly widens the scope of judicial power, increasing the potential and incentive for people to try to eliminate content for personal gain. Indeed, the subjectivity and fact-intensive nature of de minimis determinations will create inevitable controversy. However, in many cases the costs incurred by a prospective plaintiff are less than the total cost to society, meaning it may be worth bringing a case individually, but not from a social perspective. Thus, a de minimis standard will help reduce concerns regarding freedom of expression, while also reducing the social costs associated with adjudicating such claims.

58 This definition is derived from Calvert and Vining, above n 48, at I, and The Independent Monitor for the Press, above n 50, at 29.

59 Calvert and Vining, above n 34, at I.

60 Ibid.

61 Shorter Oxford English Dictionary (6th ed, Oxford University Press, Oxford, 2007) at 1919.

62 Calvert and Vining, above n 34, at I.

63 Andrew Inest “A Theory of De Minimis and a Proposal for Its Application in Copyright” (2006) 21 Berkeley Technology Law Journal 957.

Lastly, the standard against which falsity should be measured is contentious. Ideally, if material could be determined to be ‘objectively verifiable as false’, this would avoid labelling legitimate content as ‘fake news’. However, labelling content as true or false is difficult, as there is often no settled yardstick against which truth can be measured. Without empirical evidence, it is difficult to state that a proposition is unequivocally true or false. A standard of ‘objectively verifiable as false’ may therefore be too difficult to attain, and too narrow. Additionally, the interpretation of facts can involve subjectivity. An ‘objectively verifiable’ standard would impose scientific certainty, whereas legal certainty may be more appropriate. Two legal standards are possible: the high criminal standard of ‘proof beyond all reasonable doubt’, and the civil standard of ‘the balance of probabilities’. When dealing with freedom of expression, it may be preferable to apply the higher standard, namely ‘beyond all reasonable doubt’. This would not require scientific or empirical proof, but would set a high bar. It would protect free speech, while allowing material to be defined as ‘fake news’ where necessary.

  1. Conclusion

It is submitted that the above definition strikes a good balance between allowing people to freely produce and share material, and deterring the intentional deception of the public.

64 Bryan Garner (ed) Black’s Law Dictionary (10th ed, Thomson Reuters, St. Paul, 2014) at 524.

65 Inest, above n 63, at 951.

Part 2

I. Legislative action in other jurisdictions

Legislative change is controversial as it risks inhibiting democratic debate and the exchange of ideas. It is arguable that the truth will triumph in the marketplace of ideas. Further, freedom of expression becomes especially important in an electoral context, and it is imperative that people are free to challenge those seeking power. Nonetheless, freedom of expression is misused where this discourse shifts from genuine political debate to deceitful propaganda. A fundamental tenet of democratic debate is that it is based on truth rather than falsehood.66 Indeed, there is arguably no value in allowing completely fabricated arguments to influence such an important aspect of society.

Historically, the United States has followed a libertarian philosophy that prioritises freedom of speech, and has thus been hesitant to legislate against ‘fake news’. Conversely, European nations have taken a more proactive approach, provoked by widespread concern regarding Russian influence in western political processes. Both France and Germany have proposed drastic legislative change to combat ‘fake news’.67 There has also been considerable discussion in the United Kingdom regarding legislative action. The United Kingdom’s Digital, Culture, Media and Sport Select Committee has conducted an inquiry and asked for submissions on the matter. The British government has also set up what has been described as a ‘fake news unit’, while Italy has an online service where people can report ‘fake’ articles.68 The ‘fake news unit’, or “National Security Communications Team”, is inter alia charged with “combatting disinformation by state actors and others”.69 The European Union is also developing a “code of practice” to provide guidelines for social media companies.70 This section will examine the responses of France and Germany, and assess their applicability to New Zealand’s legal system.

  1. France’s law against ‘fake news’

A law aiming to prevent ‘fake news’ passed a vote in the French Parliament in June 2018. The law, supported by President Emmanuel Macron, attempts to prevent extremist groups and Russian state-driven media companies from influencing French democracy with ‘fake news’.71 It empowers the independent broadcast authority to unilaterally suspend the licence of any “foreign-influenced media organisation” during a national electoral campaign.72

66 Dr Karol Lasok QC “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017” at 40.

67 “Macron vows new law to combat fake news with Russian meddling in mind” (4 January 2018) The Local <www.thelocal.fr/20180104/france-announces-law-to-combat-fake-news-with-russian-meddling-in-mind>.

68 “Could France's fake news law be used to silence critics?” (4 June 2018) The Local <www.thelocal.fr/20180604/could-frances-fake-news-law-be-used-to-silence-critics>.

69 “Government announces anti-fake news unit” (23 January 2018) BBC News <www.bbc.com/news/uk-politics-42791218>.

70 Ibid.

71 Zachary Young “French Parliament passes law against ‘fake news’” (4 July 2018) Politico <www.politico.eu/article/french-parliament-passes-law-against-fake-news/>.

72 Pascal-Emmanuel Gobry “France’s ‘Fake news’ Law Won’t Work” (15 February 2018) Bloomberg <www.bloomberg.com/view/articles/2018-02-14/fake-news-france-s-proposed-law-won-t-work>.

The law requires social media platforms to make available the identities of advertisers who pay to spread content, along with the amounts they are paying.73 It also forces platforms to allow users to flag stories they think are false.74 The platform must then notify the authorities and make public the action it is taking. Further, during an election period, anyone can ask a judge to order the removal of false content where it is “massively and artificially spread”, with the judge required to rule within 48 hours. The ban on ‘fake news’ will apply during the three months before an election.75

  2. Criticism of the law

The law has attracted considerable criticism, with many deeming it a threat to democracy and free speech. Macron has defended the law, stating it is necessary to stop the circulation of “lies made up to tarnish political officials, personalities, public figures, journalists.”76 He has also stated: “Democracy is about true choices and rational decisions. The corruption of information is an attempt to corrode the very spirit of our democracies.”77 The bill states that the law responds to “the recent electoral climate” in countries such as the United States and the United Kingdom, where Russian influence in democratic processes is suspected. However, Marine Le Pen, leader of the far-right National Rally, said it crossed a line between “a good-faith attempt to prevent misinformation” and “indirect censorship”, and that it would “infantilize” French people.78 Further, Jean-Luc Mélenchon, of the hard-left France Unbowed, called it “a crude attempt to control information.”79 Predictably, the media have also condemned the law for the threat it poses to freedom of expression, and for the confusion it has caused.80

Notably, anyone who finds a false story will be empowered to sue under an emergency procedure known as “référé” to have it removed where it is “massively and artificially spread”. This requires a judge to decide within 48 hours whether content should be taken down.81 A Bloomberg article observed two plausible outcomes. First, judges will likely face an onslaught of claims, and may simply surrender and use the law’s narrow scope to throw out all complaints. Second, judges working under emergency conditions may make mistakes, removing legitimate speech.82 This is concerning, as many of the biggest scandals started as internet-driven rumours that may have appeared fake initially. The article also identified an alarming absurdity: suppose a damaging rumour about a candidate is quashed under the new law, the candidate then wins the election by a small margin, and on appeal it becomes apparent that the rumour was true. What would be the response here? This is a very plausible scenario, and one that would present profound difficulty.83

73 Ibid.

74 Ciara Nugent “France is Voting on a Law Banning Fake News. Here’s How it Could Work” (7 June 2018) Time <http://time.com/5304611/france-fake-news-law-macron/>.

75 Ibid.

76 Ibid.

77 Jonathan Turley “Embracing Macron’s war against ‘fake news’ could kill free speech” (26 April 2018) The Hill <http://thehill.com/opinion/civil-rights/385009-embracing-macrons-war-against-fake-news-could-kill-free-speech>.

78 Nugent, above n 74.

79 Ibid.

80 Ibid.

81 Gobry, above n 72.

82 Ibid.

83 Ibid.

Moreover, the law appears to have significant practical flaws, stemming from the fact that in many cases it will take months to determine the veracity of claims.84 An example of this is the offshore bank account allegations against Macron.85 Judges may struggle to verify information in 48 hours. Also of note is that the East Stratcom Task Force compiled a hall of shame of 3,800 news articles it labelled as ‘fake news’.86 The East Stratcom Task Force was set up in 2015 by the European Union to address false and misleading stories about the European Union, following Russia’s hybrid war campaign in Ukraine. It is based in Brussels, has 14 staff, and has recently received a significant budget increase, with £1.1M from the European Union budget awarded for the 2018-20 period.87 An article that called Ukraine “an oligarch state with no independent media”, and discussed the country’s terrible World War II record against Polish Jews, made the list. The article was labelled as ‘fake’ even though it was based on a lecture given by a journalist who had spent time in Ukraine, and expresses a view shared by many.88 This highlights the dangers associated with trying to categorise content as true or false. Any such endeavour will encounter difficulties, which may have significant consequences. Thus, this law is likely to create significant uncertainty in an area where caution is advised, given the importance of the media and free speech.

Reassuringly, the offence is narrowly drafted to protect free speech. For material to be considered ‘fake’, it must be proven that the author knowingly and maliciously lied, which is a high evidentiary bar. It must also be established that the news “disturbed or could have disturbed public peace.” The government has ensured that only material that is viral and “manifestly false” will be removed.89 Indeed, a strong burden of proof helps blunt such criticism. A requirement that content be proven beyond all reasonable doubt to be false would be prudent, as reflected in this dissertation’s definition. Setting a standard of proof which is too high will make timely determinations more difficult, but it seems necessary to protect freedom of speech.

  3. Applicability to New Zealand

Considering its practical flaws alongside its potential to infringe on freedom of expression, this law is unsatisfactory. It seems France may have overstepped, placing freedom of speech, and thus democracy, at risk. However, France should be commended for taking affirmative action. Indeed, it is no small matter for someone to be misrepresented online, and to have that misrepresentation shared with millions of people. Internet platforms have been complicit for too long, and some response is necessary. Nevertheless, given the flaws mentioned above, this approach would be inappropriate for New Zealand. A less extreme, more balanced approach would be more prudent.

84 Nugent, above n 74.

85 Jon Henley “Emmanuel Macron files complaint over Le Pen debate ‘defamation’” (4 May 2017) The Guardian <www.theguardian.com/world/2017/may/04/emmanuel-macron-files-complaint-over-marine-le-pen-debate-remark>.

86 Jonathan Turley “Europe’s Invasive Species: We Should Keep Macron’s Oak And Send Back His Speech Limits” (29 April 2018) Jonathan Turley <https://jonathanturley.org/2018/04/29/europes-invasive-species-we-should-keep-macrons-oak-and-send-back-his-speech-limits/comment-page-1/>.

87 Jennifer Rankin “EU anti-propaganda unit gets €1m a year to counter Russian fake news” (25 November 2017) The Guardian <www.theguardian.com/world/2017/nov/25/eu-anti-propaganda-unit-gets-1m-a-year-to-counter-russian-fake-news>.

88 Turley, above n 86.

89 The Local, above n 68.

  4. Germany’s law against ‘fake news’

Like France, Germany is concerned about Russian interference in its democracy. Although Russia has denied the allegations, German officials have accused Russia of attempting to manipulate German media to weaken voter trust in moderate mainstream government under Chancellor Angela Merkel, and to stimulate divisions within the EU so that it drops sanctions against Moscow.90 An example of such misinformation is the case of a German-Russian girl whom Russian media alleged was kidnapped and raped by migrants in Berlin. This claim was refuted by German authorities, who stated that a medical examination proved she had not been raped.91 This misinformation allegedly stems from Russian support for eurosceptic anti-immigrant parties in Germany.92

  5. The Network Enforcement Act

Germany has enacted a law called the Network Enforcement Act, or NetzDG. The law forces social media companies to police the proliferation of illegal content on their platforms, and is aimed at crimes that pose a danger to the existence of a free, open and democratic society.93 Passing this law made Germany the first European Union member to impose liability on social networks for the removal and blocking of illegal content within specific timeframes. The law does not create a pro-active monitoring obligation, but requires platforms to have an easily recognisable, directly accessible and permanently available procedure for submitting complaints.94 Failing to act within 24 hours for “obviously illegal content”, and within seven days for “anything requiring longer consideration”, can result in fines of up to 50 million euros.95 The Act applies to social networks, which are broadly defined in s 1 as “internet platforms which are designed to enable users to share any content with other users or to make such content available to the public.” It does not create any new categories of illegal content. Section 1(3) outlines the illegal content that must be removed, referencing over 20 sections of the German Criminal Code. These include broader crimes such as defamation, incitement to hatred, and dissemination of depictions of violence, as well as provisions such as treasonous forgery and breach of the public peace by threatening to commit offences.96 These are offences which could encompass ‘fake news’, and thus the Act is a considerable step towards increasing platform liability in the ‘fake news’ context.

  6. Criticism of the law

The law has faced an outcry from those who see it as an affront to freedom of expression, amounting to censorship. Indeed, the European Digital Rights initiative has stated it “would strongly oppose making private, profit-oriented companies into arbiters of truth and the legislators, judges, juries and executioners of our freedom of communication.”97 Significantly, there is no sanction against platforms that erase legitimate communication, and thus there is a risk that an excessive amount of communication will be deleted, curbing political debate.98 This concern could be mitigated by Parliament providing a legal framework outlining when, how and why a social media platform should limit free speech. In this regard, the NetzDG provides a list of communications that are illegal.99 However, there will invariably be uncertainty in the law, with many ambiguous cases. Alternatively, a trusted third party could be used to advise platforms on the legality of content. However, this may also be controversial, and would be administratively difficult, requiring significant resources.

90 Andreas Rinke and Andrea Shalal “Germany alarmed about potential Russian interference in election: spy chief” (16 November 2016) Reuters <www.reuters.com/article/us-germany-election-russia/germany-alarmed-about-potential-russian-interference-in-election-spy-chief-idUSKBN13B14O?il=0>.

91 Ibid.

92 Ibid.

93 Professor Vincent Cunnane and Dr Niall Corcoran “Proceedings of the 5th European Conference on Social Media” (paper presented at Limerick Institute of Technology, Ireland, 21-22 June 2018).

94 Ibid.

95 Ibid.

96 Ibid.

97 Lewis Sanders “Is criminalizing fake news the way forward?” (14 December 2016) DW <www.dw.com/en/is-criminalizing-fake-news-the-way-forward/a-36768028>.

Moreover, like France’s law, NetzDG has been criticised for imposing an unreasonably short time limit. According to Berlin-based copyright and media lawyer Dominik Hoech, “In many cases, it just isn’t possible to identify within such a short timeframe what’s ‘obviously illegal’”. He stated, “Sometimes we spend months in court arguing over questions of ‘true or false.’”100 Jacques Pezet of Correctiv, a German non-profit investigative newsroom that works with Facebook to investigate potentially false news stories, said his team often cannot determine the veracity of content within a week. Pezet stated that he does not think the 24-hour or seven-day timeframe should influence their work.101

  7. Applicability to New Zealand

Like France’s law on ‘fake news’, Germany’s NetzDG is rife with imperfections. It seems absurd that profit-oriented companies such as Facebook should become the arbiters of what is illegal or true. It is undesirable to task platforms with removing content, as this may cause the inadvertent removal of legitimate content, curbing political debate. There are also similar flaws regarding the time limit imposed. Thus, this is unlikely to be an appropriate response for New Zealand.

98 Cunnane and Corcoran, above n 93.

99 Ibid.

100 Josie Le Blond “In Germany, a Battle Against Fake News Stumbles into Legal Controversy” (25 May 2017) Coda <https://codastory.com/disinformation-crisis/information-war/in-germany-a-battle-against-fake-news-stumbles-into-legal-controversy>.

101 Ibid.

II. New legislation based on the Harmful Digital Communications Act

The problem of ‘fake news’ can be analogised with online harassment. Like harassment, ‘fake news’ has always been part of society. However, the internet and the ease with which material can be disseminated have given it a more anonymous and frightening quality. Like online harassment, the harms associated with ‘fake news’ are exacerbated by its anonymity, its ease of dissemination, the victim’s helplessness - whether the victim be society or the individual the content relates to - and its relative permanence unless remedied.102 Thus, it is arguable that a response similar to that adopted against online harassment in the form of the Harmful Digital Communications Act 2015 (HDCA) should be instituted against ‘fake news’. It is proposed that new legislation be drafted with the aim of addressing the harms that can result from the dissemination of ‘fake news’ online. The HDCA could be a model upon which the proposed legislation is based.

  1. Arguments in favour of using the HDCA as a framework for alternative legislation

Section 3 of the HDCA states that the purpose of the Act is to ‘deter, prevent, and mitigate harm caused to individuals by digital communications’,103 and to ‘provide victims of harmful digital communications with a quick and efficient means of redress’.104 This purpose aligns well with the issue of ‘fake news’, which likewise causes harm through digital communications and requires quick and efficient redress.

The HDCA operates by applying several “communication principles”, which broadly cover communications that are judged “indecent”, “false” or “used to harass an individual”, as well as a catalogue of other transgressions.105 Among these is the principle that a digital communication should not make a “false allegation”.106 Thus, ‘fake news’ which makes a false allegation about someone would likely come under the Act. However, “harm” requires “serious emotional distress”, which is a difficult hurdle to overcome.107 ‘Fake news’ such as the Pizzagate scandal in the United States, where Hillary Clinton was accused of running a child sex ring, may fall under the criminal offence provided by the Act. However, other harmful ‘fake news’ articles, including political content based on fabricated information, would likely fall outside the scope of the Act. It is also likely Clinton would have had at least one other form of redress available to her, namely defamation.

The HDCA has several procedural elements which make it an ideal framework to tackle ‘fake news’. The Act allows an ‘approved agency’, currently Netsafe, to help reach an outcome through non-coercive means, which is considerably more efficient than litigation. In the context of ‘fake news’, this is valuable, as significant harm can be caused before a remedy can be obtained in court.108 Further, s 17 of the Act provides that the District Court or High Court may appoint a technical adviser to assist it in determining an application for an order under s 19. Thus, a specialist ‘fake news’ technical adviser could be employed. This would help review and dismiss trivial claims, streamlining the process. However, it would require significant resources, and as ‘fake news’ is not currently a widespread issue in New Zealand, it is unlikely to be fiscally justifiable. Additionally, s 18 allows the court to grant an interim order pending determination of an application for an order under s 19. Although this could stifle free speech, it would help quickly remove content so that further damage cannot occur while the case is being deliberated. An interim order would be especially valuable in the days immediately prior to an election or referendum, the outcome of which cannot be given back.

102 Stephanie Panzic “Legislating for E-Manners – Deficiencies and unintended consequences of the Harmful Digital Communications Bill” (October 2014) Internet NZ <https://internetnz.nz/blog/legislating-e-manners---deficiencies-and-unintended-consequences-harmful-digital-communications>.

103 Harmful Digital Communications Act 2015, s 3(a).

104 Section 3(b).

105 Section 6.

106 Section 6(1)(6).

107 Section 4.

108 Section 8.

  2. Necessary departures from the Act

Several departures from the HDCA would be required for the proposed legislation to adequately address ‘fake news’. First, the communication principles could be extended to include ‘false statements that pose a serious risk to the integrity of democracy’. This addition would help raise awareness of the influence ‘fake news’ can have on society, and would allow people to complain where they think false information is being spread with the potential to undermine the integrity of our democracy. This would be valuable during election periods and referendums. Secondly, while ‘fake news’ can pose a threat to individuals, it invariably affects society generally where it undermines democratic processes. For the proposed Act to be effective, it will need to deal with harm to both society and individuals. Thus, the terms ‘victim’ and ‘individual’ will need to be changed so that harm to ‘society generally’ is included.

  3. A regulatory body to deal with complaints

Netsafe is the body tasked with handling complaints under the HDCA.109 Its role is to receive and assess complaints about harm caused to individuals by digital communications, investigate complaints, and use advice, negotiation, mediation, and persuasion to resolve them.110 Netsafe can refer cases to the District Court if the problem cannot be solved. The Court can make an order to remove content, or impose penalties.111 It is unlikely that Netsafe is currently equipped to determine the truth or falsity of stories and effectively deal with ‘fake news’. However, with additional funding its role could be extended to do so. Alternatively, the Press Council could be utilised, as it has a relatively wide scope, which includes resolving complaints regarding digital media.112 The Press Council is also more specialised in news media than Netsafe. However, it is worth noting that Netsafe and the Press Council do not have any enforcement powers, and thus may have difficulty addressing the more serious purveyors of ‘fake news’. To acquire enforcement powers, a body would require statutory backing. The Broadcasting Standards Authority (BSA) can order a broadcaster to publish a statement,113 pay costs to the Crown or a complainant,114 or refrain from broadcasting or advertising.115 However, the BSA only covers television and radio broadcasting, or online material that has been broadcast, and thus in most cases could not apply to ‘fake news’.116 Parliament could award statutory backing to a body to deal with ‘fake news’, enabling that body to have a practical effect in difficult cases. However, this would be a significant and controversial step. Nevertheless, it seems that in less difficult cases a body such as Netsafe or the Press Council could be effective, helping reach a quick and efficient outcome through less coercive means. Furthermore, if guidelines were developed for the District Court to assess claims about ‘fake news’, a body like Netsafe or the Press Council could refer difficult cases to the Court where appropriate. Thus, although it may require significant funding, it seems Netsafe or the Press Council’s role could be extended to include reducing the harms associated with ‘fake news’.

109 Ministry of Business, Innovation & Employment “Complaints Agency” Consumer Protection <www.consumerprotection.govt.nz/general-help/laws-policies/online-safety/harmful-digital-communications-act/>.

110 Section 8(1)(c).

111 Section 8(5).

112 New Zealand Media Council “Statement of Principles” <www.mediacouncil.org.nz/principles>.

113 Broadcasting Act 1989, s 13(1)(a).

114 Section 13(2)(b).

115 Section 13(1)(b).

  4. The penalty

Under the HDCA, the courts can impose a maximum sentence of two years’ imprisonment, a $50,000 fine,117 or a $200,000 fine for bodies corporate.118 The penalties at stake are substantial, and the “communication principles” are broad. Indeed, the penalties may be overly draconian in the context of ‘fake news’, giving rise to concerns regarding freedom of expression. It would be undesirable for the media to constantly fear reprimand for releasing a false statement. However, the criminal offence under the HDCA requires intent to cause harm.119 Thus, in the context of ‘fake news’, a similar requirement of ‘intent to disrupt a democratic process’ would help mitigate the fear of inadvertently making an error, reducing concerns regarding freedom of expression.

  5. The requirement of harm

As mentioned above, the HDCA deems “harm” to be “serious emotional distress”, which is a high standard.120 This will need to be changed in the ‘fake news’ context. A better standard for harm may be digital communications that cause ‘significant disruption of a democratic process’. This dissertation’s definition of ‘fake news’ could assist in determining whether content should fall within the ambit of the proposed Act. Nevertheless, imposing criminal liability may be difficult, given the risk of harm is likely to come from overseas sources such as Russia. Moreover, it is unlikely that the typical suspects for disseminating ‘fake news’ would see New Zealand law as a disincentive. Additionally, the anonymity of the internet may make it difficult to find such individuals. However, it is possible to identify a potential defendant by applying to the Court for a Norwich Pharmacal order,121 which requires a third party who is ‘mixed up’ in wrongdoing to disclose the identity of the wrongdoer. The application of this order is widespread, and it can be sought in various legal scenarios.122 Thus, criminal liability is worth discussing, although it may be more prudent to focus primarily on the civil remedies available under the Act.

116 Broadcasting Standards Authority “Introduction” BSA <https://bsa.govt.nz/standards/introduction>.

117 Section 22(3)(a).

118 Section 22(3)(b).

119 Section 22(1)(a).

120 Section 4.

121 The term is derived from Norwich Pharmacal v Commissioners of Customs and Excise [1973] UKHL 6; [1974] AC 133.

122 Brett Wilson “Norwich Pharmacal Orders – Identifying the Anonymous” Brett Wilson LLP <www.brettwilson.co.uk/services/defamation-privacy-online-harassment/norwich-pharmacal-orders-identifying-the-anonymous/>.

  6. The civil remedies

The HDCA’s civil remedies are what make the Act such an attractive prospect when addressing ‘fake news’. The court’s ability to impose take-down orders, an apology, publication of a correction, and a right of reply could prove invaluable.123 Further, releasing the identity of the author of the ‘fake news’ would help uncover the bias behind the information.124 Finally, the power to have material removed would help mitigate the risk of further harm being caused. Although these responses infringe on freedom of speech, they would be beneficial in many circumstances, and if implemented with the right guiding principles, would likely see a positive result.

  7. Arguments against using the HDCA as a framework for legislative change

Although the Act requires the ‘Approved Agency’ and the District Court to act consistently with the rights and freedoms contained in the New Zealand Bill of Rights Act 1990, the proposed provision could stifle political debate.125 This is because political commentators may abstain due to the fear of being held liable for making an error. However, incorporating a requirement that there be ‘intent to disrupt a democratic process’ would likely help mitigate this fear. Further, given the recent prevalence of ‘fake news’, and the effect it can have on democratic processes, a response is necessary. In the HDCA context, the prevalence of online bullying meant action was required, and Parliament should be commended for taking a proactive step in the right direction. Indeed, the 2016 United States Election, the Cambridge Analytica affair, and the European Union referendum are examples of what can happen if preventative measures against ‘fake news’ are not adopted.

Furthermore, the safe harbour provision also arguably mitigates concerns of curbing democratic debate.126 Section 23 provides protection against civil or criminal proceedings for an online host if they follow the process provided in s 24. Section 24 provides that the content host must notify the author of the complaint as soon as practicable and no later than 48 hours after receiving the complaint. The host must then provide the author with a copy of the complaint, and must notify them that they may submit a counter-notice to the host within 48 hours. Additionally, s 24(2)(b) may help solve the anonymity problem as it provides that where a host is unable to contact an author after taking reasonable steps to do so, the host must take down or disable the specific content. This would assist in removing content created by trolls and bots, helping ensure that digital content is made by legitimate and identifiable people. An absurdity may arise where legitimate content is removed because people are busy and uncontactable. Nevertheless, it would also be absurd if people could ignore correspondence to ensure illegitimate content remains. Thus, the safe harbour provision appears to mitigate free speech concerns.

The breadth of the Act also raises floodgates concerns. If the floodgates were opened to an array of claims, this could significantly increase the workload of the judiciary, and of bodies such as Netsafe or the Press Council. This may require unwanted expansion which may not be fiscally justifiable. This fear is exacerbated by the fact that there is no threshold for

123 Section 19(1).

124 Section 19(2)(b).

125 Section 19(6).

126 Section 23.

complaining under the safe harbour provision, meaning online content hosts must facilitate the complaint and counter-notice procedure, regardless of the complaint’s merit.127 Although the Agency has discretion to decline to investigate trivial claims, there is no real disincentive against making them. This gives rise to concerns over Russian bots or other internet trolls barraging content hosts with claims. This may cause content hosts to simply remove content rather than endure the administrative burden of determining the validity of a vast number of claims, weakening the marketplace of ideas.128

Lastly, it is arguable that the proposed provision will lack practical effect, as action will often be taken too late. Efficiency is paramount when considering ‘fake news’. This is because the more content proliferates, the more people will come to believe it as it becomes embedded within social media ‘filter bubbles’. Further, the HDCA has been subjected to widespread criticism, with claims its scope is too wide, and that it is inconsistent with off-line law.129 It is arguable the distinction between off-line and on-line liability is justifiable due to the different harms that occur on the internet. In the context of ‘fake news’, a digital post can reach millions of people in a matter of hours, compared with posters, or print newspapers, which are unlikely to have such a wide reach. The exponential dissemination that can occur online can have significant consequences. Thus, a specific online approach as taken in the HDCA is prudent in the current digital climate.

  1. Conclusion

There are several aspects of the HDCA that make it a useful framework for creating a similar, yet distinct, piece of legislation to deal with the harms caused by ‘fake news’. Although the proposed law may spark criticism, and it may be difficult to find the right regulatory body, it could be a step in the right direction. It is not a piecemeal approach, and if New Zealand were to encounter ‘fake news’ during its next election, it would likely serve to protect the integrity of its democracy. With a range of communication principles to consider, and the safe harbour provisions, it seems the effect on freedom of expression, and floodgates concerns, can be minimised. Hence, it is proposed that the New Zealand government implement such a law.

127 Ibid.

128 Ibid.

129 Daniel Gambitsis “The Unintended Consequences of the Harmful Digital Communications Act” Equal Justice Project <http://equaljusticeproject.co.nz/2015/07/cross-examination-the-unintended-consequences-of- the-harmful-digital-communications-act/> .

III. Reform to Electoral law

Reform to electoral law has been discussed as a means to mitigate the effects of ‘fake news’. In Britain, the Electoral Commission has urged such reform.130 This was triggered by several online political campaign scandals, including the Cambridge Analytica affair and the Leave campaign. Sir John Holmes, chair of the Electoral Commission, stated: “Urgent action must be taken by the United Kingdom’s governments to ensure that the tools used to regulate political campaigning online continue to be fit for purpose in a digital age”.131 It appears prudent to assess current electoral law in New Zealand, and evaluate the recommendations posed by the Commission.

  1. Electoral law in New Zealand

In contrast to the United Kingdom, New Zealand’s electoral system appears to provide for considerable transparency in advertising, while also dealing with overseas influence in elections. There have been suggestions in the United Kingdom to reform electoral law so that all digital political campaign material states who paid for it, bringing online advertisements in line with physical leaflets and advertisements. This is not required in New Zealand, as it is already an illegal practice to publish an “election advertisement” without including a statement setting out the name and address of the advertisement’s “promoter”.132 Further, online and offline advertising is treated equally, as the medium in which an advertisement is presented is not relevant.133 Thus, it appears transparency in advertising is not an issue. Moreover, there was concern about the influence of foreign organisations and individuals.134 However, New Zealand law also addresses overseas influence in our elections. A third party wishing to spend more than a set amount on election advertising is first required to register with the Electoral Commission. There are also restrictions on the sources of donations to a political party. An “overseas person” can donate a maximum of $1500 to a given political party per electoral cycle.135 If a party receives a donation from an overseas person that exceeds $1500, the party must return all additional money to the donor, or give it to the Electoral Commission.136 Thus, while our electoral law could completely ban overseas influence, and could increase the penalties associated with breach of electoral rules, it does not appear that this is warranted. Hence, it seems prudent to assess alternative options for reform.

  1. Principles of electoral law

Given the role electoral law plays in upholding democracy, it is important that it develops alongside the changing technological climate. Democracy has a two-fold role: it has a factual role in determining what substantive solution to adopt for a contentious issue, and it has a legitimating role in enabling that solution to be accepted, and regarded as binding on all members of society. 137 Andrew Geddis has stated that “if an election is going to provide a

130 Jim Waterson “UK democracy under threat and reform is urgent, says electoral regulator” (26 June 2018) The Guardian <www.theguardian.com/politics/2018/jun/26/uk-democracy-under-threat-and-reform-is-urgent-says-electoral-regulator>.

131 Ibid.

132 Electoral Act 1993, s 204F.

133 Section 3A(1)(a).

134 Waterson, above n 130.

135 Section 207K.

136 Section 207K(2A).

137 A Geddis Electoral Law in New Zealand: Practice and Policy (2nd ed, LexisNexis, Wellington, 2014) at 10.

legitimate way of deciding who will wield public decision-making power in society, each elector’s vote must represent a freely made choice between the various contestants”.138 Unless we can be confident that an election result is representative of each individual’s genuine assessment of the electoral participants, that result does not tell us with confidence to whom the population wants to allocate public decision-making power.139 If the election process fails to provide this information, it has failed its most basic function of playing a legitimating role in enabling a solution to be accepted.140 The overarching objective of the electoral system is to promote the legitimacy, integrity and efficiency of New Zealand’s democracy. Limits on freedom of expression and political participation should be focused and proportional. There will often be conflict between upholding the integrity of the system, and achieving efficiency and participation.141 ‘Fake news’ has the capacity to undermine the integrity of the electoral system by subverting this legitimating role. It exposes voters to undue influence by misrepresenting, and often embellishing, certain aspects of the election. Thus, ‘fake news’ threatens to undermine the electoral system by disrupting legitimate debate, and interfering with individuals’ access to legitimate information.

  1. Potential for reform

To uphold the legitimating role of democracy, it may be necessary to assess the potential for reform to New Zealand electoral law. Following the 1999 election, there were concerns that candidates had achieved last-minute publicity by making sensationalist claims the day before polling day which were later shown to be false.142 Parliament responded by creating s 199A of the Electoral Act 1993, which provides that publishing false statements to influence voters is a corrupt practice. There is a high threshold to meet this test. The published statement must be knowingly false in a material particular;143 be made on election day or the two days prior;144 and be made with the intention of influencing the vote of any elector.145 The limited time-period was designed to balance concerns regarding freedom of expression with the concern that votes will be cast based on false information that the “marketplace of ideas” does not have time to expose.146 Being convicted of a corrupt practice has serious implications. It is punishable by up to two years’ imprisonment and/or a fine of up to $40,000.147 A person found guilty of a corrupt practice offence cannot vote,148 and cannot be an MP or stand for Parliament for 3 years.149

138 Ibid, at 117.

139 Ibid.

140 Ibid.

141 Chris Hubscher “Regulatory Impact Statement – Electoral Amendment Bill: Advance Voting ‘Buffer Zones’ & Prohibition on False Statements to Influence Voters” Ministry of Justice (22 June 2016) <www.justice.govt.nz/assets/Documents/Publications/electoral-amendment-bill-advance-voting-buffer-zones-and-prohibition-on-false-statements-to-influence-voters.pdf> at 3.

142 Justice and Electoral Committee, Report on the Inquiry into the 1999 General Election, I.7C, 2001, p 91 as cited in A Geddis Electoral Law in New Zealand: Practice and Policy (2nd ed, LexisNexis, Wellington, 2014) at 125.

143 Section 199A(1)(a).

144 Section 199A(3)(a).

145 Section 199A(1).

146 Justice and Electoral Committee, Report on the Inquiry into the 1999 General Election, I.7C, 2001, p 92 as cited in A Geddis Electoral Law in New Zealand: Practice and Policy (2nd ed, LexisNexis, Wellington, 2014) at 125.

147 Section 224(1).

148 Section 80(1)(e).

149 Section 47(1).

  1. Arguments in favour of reform

Although it may pose a risk of restricting freedom of expression, it would be prudent to consider whether the time frame requirement of this provision could be extended. The policy objective of s 199A is to deter deliberately false statements where they cannot be adequately addressed through other complaints processes, or through rebuttal or media scrutiny.150 It is arguable that in the current digital climate, where ‘fake news’ stories can go viral within hours, and people are enclosed in their own personal ‘filter bubbles’, such content cannot be adequately addressed by the marketplace of ideas.

Further, although freedom of expression is essential to political debate, it is counter-productive to allow politicians and people engaging in political discussion to tell overt lies. This is because reasoned political discourse is impossible without a shared basis of facts. However, there is a risk of punishing hyperbole, which is arguably an inherent aspect of politics. Indeed, both Donald Trump and Hillary Clinton would be guilty many times over if such a provision were introduced in the United States. Nevertheless, the question remains: why not? Why do we allow figures in such powerful and senior positions to lie to the public they are bound to serve? We should be holding these people to the highest standard.

Psychological research suggests people tend to be reluctant to correct previously believed facts. Daniel Kahneman and Amos Tversky conducted a study demonstrating that people are prone to ‘psychological anchoring’.151 This is the tendency to rely too heavily on one trait or piece of information when making decisions, placing irrational emphasis on the first piece of information one receives.152 This was demonstrated by spinning a wheel with the numbers 1 through 100, and then asking subjects to estimate whether the percentage of United Nations memberships accounted for by African countries was higher or lower than the number on the wheel. The study found the anchoring value of the number was significant, with those landing on 10 estimating 25%, and those landing on 60 estimating 45%.153 Additionally, Elizabeth Kolbert cited a Stanford study which found that once an opinion is formed, people tend to abandon rationality and stick to that opinion, which can make it very difficult to refute false claims.154 The study noted that “impressions are remarkably perseverant”.155 An example of this theory in practice is the Leave campaign sticking with its claim that Britain’s net payment to the European Union was £350 million. Although this claim was disproven, and now has very few supporters, the Leave campaign utilised it well.156 The above studies suggest that people will take an initial figure, and even if it is proven false they will fail to sufficiently abandon it. This incentivises outrageous overestimates in the hope that people will assume the real figure is correlated to some degree. This is dangerous throughout an election period or during a

150 Hubscher, above n 141, at 47.

151 Ryan McElhany “The Effects of Anchoring Bias on Human Behavior” (23 May 2016) Thought Hub

<www.sagu.edu/thoughthub/the-affects-of-anchoring-bias-on-human-behavior>.

152 Ibid.

153 Ibid.

154 Elizabeth Kolbert “Why Facts Don’t Change Our Minds” (27 February 2017) The New Yorker

<www.newyorker.com/magazine/2017/02/27/why-facts-dont-change-our-minds?mbid=social_facebook>.

155 Ibid.

156 John Lichfield “Boris Johnson’s £350m Claim is Devious and Bogus. Here’s why” (18 September 2017) The Guardian <www.theguardian.com/commentisfree/2017/sep/18/boris-johnson-350-million-claim-bogus-foreign- secretary>.

referendum as it could swing a large portion of voters who only discover the truth after they have voted, or simply remain misinformed. It suggests that in the current technological climate, where stories can go viral within hours, more time may be required to refute false claims. Thus, it is proposed that the prohibited period be extended to 7 days to address these concerns.

  1. Arguments against reform

As hyperbole and false statements are seemingly inherent within politics, reform may risk a proliferation of complaints. This raises concern over the administrative burden which could arise due to the provision, conflicting with the principle of efficiency. Another concern is that people could be found liable for inadvertently distributing ‘fake news’. It is important that the provision targets the right people to reduce fears of curbing freedom of expression. The definition of ‘fake news’ discussed earlier in this dissertation may achieve this. Applying the proposed definition would mean material would have to be ‘knowingly’ and ‘intentionally’ presented as true for the purpose of misleading the reader. This would help mitigate concerns of people being held liable for merely sharing content they find on social media, and inadvertently distributing ‘fake news’. Notably, this would be inconsistent with defamation law, where a person who publishes defamatory content may be liable regardless of their intention.157 What matters is not what the defendant intended the words to convey, but rather what they do convey to a reasonable reader or listener.158 Nevertheless, given the proposed legislation is in an electoral context, it appears a higher standard should be set to avoid unduly infringing on freedom of expression. Thus, inconsistencies with defamation law appear reasonable. It will help remedy the intended mischief, as it appears the law should be aimed at penalising the creators of content, rather than those who share it unintentionally. The element of ‘knowledge’ and ‘intent’ would also help quell concerns of punishing inadvertent hyperbole, reducing floodgates concerns.

Extending the scope of the prohibition period may discourage the publishing of false statements, reducing the risk of voters being unduly influenced. However, due to the serious penalties associated with breach of s 199A, the penalties would have to be reduced for this proposal to be a proportionate response. Otherwise, there is a risk this provision could discourage genuine political commentary. Thus, it is submitted that this be enacted as a new offence, rather than an amendment of s 199A. Section 199A could remain, as it addresses more serious conduct, and thus its penalties remain relevant. By creating a similar but new provision, and reducing the penalty, the offence could appease concerns regarding freedom of expression, while still providing some disincentive against ‘fake news’.

  1. Winston Peters v Electoral Commission

It is important to note that Parliament considered a similar extension of the section in 2016. In Winston Peters v Electoral Commission, the High Court held that statements first published more than 2 days prior to election day would be subject to s 199A if they remained passively where they were originally published.159 This judgment widened the scope of s 199A considerably, as there was no requirement that there be any act to republish, promote, or share the statement within the prohibited period. In 2016 Parliament considered this judgment, and intentionally amended s 199A, reversing the precedent set in Peters. The amendment reversed the decision through making it clear the prohibition does not apply if the statement was

157 E Hulton & Co v Jones [1909] UKLawRpAC 57; [1910] AC 20.

158 Rubber Improvement Ltd v Daily Telegraph Ltd [1964] AC 234 at [258].

159 Peters v Electoral Commission [2016] NZHC 394 at [51].

published prior to the prohibited period. Parliament was seemingly concerned freedom of expression would be unduly restricted if the provision included material published before the two-day period. This is a reasonable concern given the days immediately prior to an election are a critical time for political discussion, and play a significant role in informing voters. Thus, to restrict or disincentivise free speech in this period also threatens democracy. Further, in most cases it is likely there will be time to debate the veracity of material, and time to counter it through legitimate media. Nevertheless, the proposed amendment can be distinguished from the precedent set in Peters. This is because while it widens the scope of material that can fall under the provision, material published before the 7-day prohibited period that passively remains would still fall outside the provision’s scope. Allowing this defence and reducing the penalty will help ensure that the offence, the sanction, and the restriction on freedom of expression are proportional to the objectives.

  1. Conclusion:

Hence, one’s view on the proposed reform to s 199A will likely depend on the extent to which one agrees that more speech is the answer to harmful speech, or whether one subscribes to the behavioural theory discussed above. If the behavioural psychology is accepted, more speech is ineffective overall as a countermeasure. It appears an extension of the prohibited period, alongside a reduction of the penalty, would be a proportional response to ‘fake news’. This is because we should not allow people to tell overt lies immediately before an election, especially where the current digital climate has made it increasingly difficult to refute false statements.

IV. Advertising regulation

A significant element of the debate about ‘fake news’ is the extent to which it is funded by digital advertising, and what steps can be taken to ensure advertising revenue does not inadvertently flow to ‘fake news’ providers. Advertising is an essential element of the business model of almost every website on the internet. Stricter regulation of advertising would directly influence the prevalence of ‘fake news’. It would have an especially large effect on ‘fake news’ made purely for commercial purposes. Standards and self-regulation developed by the mainstream advertising industry could help address the issue of ‘fake news’, allowing buyers to take steps to ensure their advertising spending does not inadvertently flow to ‘fake news’ services.160

  1. The current regulatory framework

The Advertising Standards Authority (ASA) is a self-funded comprehensive advertising standards body in New Zealand. The ASA allows users to complain about the content and placement of advertisements.161 It operates to ensure that advertising is socially responsible and truthful.162 However, it is a voluntary body, and does not specifically focus on the misplacement of advertising. Thus, it is arguably inadequate in the context of ‘fake news’. Many of the purveyors of ‘fake news’ will not join the ASA as their malicious agenda is unlikely to align with the body’s purpose. Thus, this section will assess the response of a jurisdiction that better deals with the issue.

  1. The United Kingdom’s response

It has been observed that similar challenges regarding advertising revenue arise from sites hosting copyright-infringing content.163 In response, the City of London Police Intellectual Property Crime Unit (PIPCU), working with rights-holders and the advertising industry, developed a framework to identify sites whose primary purpose is the infringement of copyright, and compile them in an ‘Infringing Website List’ (IWL).164 The list is made available to people buying advertising slots to inform their buying decisions and can serve as a ‘blacklist’. An IWL could be a potential answer to the commercialisation of ‘fake news’. However, it does pose a risk to free speech by inadvertently cutting off advertising revenue to legitimate websites, causing them to shut down. Nevertheless, it is arguably justifiable: the effect on freedom of speech is indirect, as it merely restricts where advertisements can be placed. Although any infringement on free speech is undesirable, an IWL still allows websites to exist, but tries to limit their advertising revenue where they produce harmful content. Thus, it is likely the benefits associated with deterring harmful content will outweigh the effect on freedom of expression.

160 Internet Advertising Bureau “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.

161 Advertising Standards Authority “Our Jurisdiction” Advertising Standards Authority <http://www.asa.co.nz/about-us/our-jurisdiction/> .

162 Advertising Standards Authority “From the Chair” Advertising Standards Authority <http://www.asa.co.nz/about-us/from-the-chair/> .

163 Internet Advertising Bureau, above n 160, at 16.

164 Ibid.

Indeed, the approach to copyright in the United Kingdom can help inform our approach to regulating advertising to counter ‘fake news’. For an IWL to work, an independent and authoritative third party responsible for identifying ‘fake news’ websites must be established.165 However, identifying ‘fake news’ websites will often prove difficult, as the truth operates within a blurred spectrum. It can take considerable time and effort to determine the veracity of content. Thus, to allow buyers to make informed decisions about purchasing digital advertising, a robust and independent process for identifying ‘fake news’ services is required.166 It would need to be objective, and able to distinguish between disinformation which is intentionally false or inaccurate, and misinformation that is unintentionally false.167 This has been demonstrated to be a difficult process. For example, the European Union has encountered criticism regarding the approach of a counter-propaganda unit it formed called East Stratcom.168 East Stratcom runs a website, ‘Disinformation Review’, which was formed to identify false statements about the European Union, and counter these with the truth. Due to translation errors, the unit erroneously named three Dutch websites as purveyors of ‘fake news’.169 This highlights the danger of ‘fake news’ regulation, and any attempt to label content as ‘fake news’. There will invariably be errors which will infringe on freedom of expression. It is arguable that eliminating websites is an intolerable infringement on free speech. Many important stories and breakthroughs have stemmed from a single source. For example, if stories such as the Cambridge Analytica scandal were missed because a website had been blacklisted, this would be a considerable loss for society. However, applying a utilitarian approach, if such a unit makes these classification errors infrequently, the overall benefit to society is still likely to outweigh the cost.

  1. A three-pronged test to identify ‘fake news’

Lord Blencathra has proposed three tests to identify and control ‘fake news’.170 They are derived from Edward Lucas, an Economist journalist, writing in The Times.171 He stated that Facebook and Google can expose ‘fake news’, and that internet giants have a responsibility to warn people that they are going to encounter ‘fake news’.172 The first test is whether a site has “real-world contact details”. A street address, phone number and named individuals responsible for the content all indicate good faith, while their absence suggests a likelihood of illegitimacy. Secondly, it should be assessed whether the internet registration data, available from a simple ‘whois’ search, is public and real. While people may be privacy conscious, and can hide behind digital anonymity, these hosts should not be afforded the same credibility as those willing to account for their content. Thirdly, how a website handles its mistakes is important.173 Apologies, corrections, clarifications, letters of rebuttal and complaint are all practices that suggest a website is not a deliberate purveyor of ‘fake news’. If such standards are absent, this is arguably a sign of illegitimacy or ‘fake news’.174

165 Internet Advertising Bureau, above n 160.

166 Ibid.

167 Internet Advertising Bureau, above n 160, at 18.

168 Jennifer Rankin “Tech firms could face new EU regulations over fake news” (24 April 2018) The Guardian <https://www.theguardian.com/media/2018/apr/24/eu-to-warn-social-media-firms-over-fake-news-and-data-mining>.

169 Ibid.

170 Blencathra, above n 41, at 92.

171 Edward Lucas “Facebook and Google can expose fake news” (6 February 2017) The Times <https://www.thetimes.co.uk/article/facebook-and-google-can-expose-fake-news-k7stxnrpq>.

172 Ibid.

173 Ibid.

174 Ibid.

Applying these three tests may help create an IWL that could reduce the prevalence of ‘fake news’ websites. It would do this by minimising the risk that advertisements will inadvertently appear on websites that post harmful content. Further, the objectivity of these tests is appealing as they do not require the state or another body to become an arbiter of truth. Nevertheless, the tests may be difficult to apply. An important concern is whether there is enough incentive for advertising buyers to avoid websites on the IWL. One incentive would be to maintain brand integrity, and to uphold wider societal trust in advertising. However, the extent to which this incentive would manifest itself is unclear. Nonetheless, the IWL has the potential to become an authoritative source of information for the advertising industry if a trusted independent party, such as Netsafe or the Press Council, determines which websites are added to the list. This third party would need to adopt a robust test like the one mentioned above. In the United Kingdom, the PIPCU is the third party that determines which sites are added to the list.175 Thus, if the appropriate third party were adopted, the IWL could still have a positive impact, and would be a step in the right direction.

  1. Voluntary action from digital platforms

Additionally, there may be scope to apply pressure to Google and other digital platforms to apply the three tests to websites, or educate users on how to apply the three tests. This would ensure users are more informed, and would likely apply pressure to websites to uphold good practices. Historically, these companies have made similar editorial and moral decisions. For example, Google has re-configured its algorithm so that googling “free child porn” links to stories about people being arrested and jailed for sharing child abuse images.176 Thus, there does not seem to be any reason why they could not implement these tests when linking users to other websites, or educate people on using the tests to assess the legitimacy of content. Furthermore, applying these tests would not directly infringe on free speech, or amount to censorship, as people would still be able to construct such websites and visit them. People would merely be visiting these websites knowing “it has no real-world presence, hides its origins and never apologises or corrects mistakes”.177 We should not stop people consuming ‘fake news’ if they want to, but we can try to deter them from doing so.

  1. The Digital Trading Standards Group

The United Kingdom also has a self-regulatory group called the Digital Trading Standards Group (DTSG).178 The DTSG is purely a United Kingdom initiative, although there could be considerable benefit gained through implementing a similar organisation in New Zealand. It was established in 2012 as a cross-industry initiative to develop guidelines aimed at reducing the risk of digital advertising misplacement; upholding brand safety; and protecting the integrity of digital advertising.179 While it is a voluntary body like the ASA, the ASA has a more general purpose.180 The purpose of the ASA is to ‘seek to maintain at all times and in all media a proper and generally acceptable standard of advertising and to ensure

175 Internet Advertising Bureau, above n 160, at 17.

176 Nate Hoffelder “Google Blocks Child Porn from Search Results – Replaces it with Google-Generated Spam” (18 November 2013) The Digital Reader <https://the-digital-reader.com/2013/11/18/google-blocks-child-porn-search-results-replaces-google-generated-spam/>.

177 Blencathra, above n 41, at 94.

178 Internet Advertising Bureau, above n 160.

179 Joint Industry Committee for Web Standards “Brand Safety” JICWEBS

<https://jicwebs.org/standards/brand-safety/>.

180 Advertising Standards Authority, above n 162.

that advertising is not misleading or deceptive, either by statement or by implication’. The DTSG is more focused on brand safety and the risk of advertising misplacement, rather than general advertising standards. By signing up to the DTSG’s Good Practice Principles, businesses commit to minimising the risk of advertising misplacement, and to injecting greater transparency into the digital display advertising market.181 The Principles commit a buyer and seller to agree where they want their advertising to appear, and how this will be managed. The Principles ensure each signatory business has its advertising misplacement minimisation policies, verified by the Joint Industry Committee for Web Standards (JICWEBS), approved by an independent third party.182 Businesses deemed compliant, after review of the verification provider’s report, will be awarded a seal to show the market they are complying with the agreed standard.183

  1. The positive role advertising plays in society

Although digital advertising helps fund ‘fake news’, it also plays a positive role in society by making non-publicly funded content widely available to citizens for little or no cost.184 It funds social networking, webmail, price comparison sites, and other useful online tools. Digital advertising has become the fastest growing marketing medium in the United Kingdom. It has also been a key driving force for innovation and creativity, supporting media pluralism at an unparalleled scale.185 As journalism is funded by advertising and plays such an integral role in facilitating democratic debate, we must be wary of regulation: it is essential that people can access information from a range of diverse and trusted sources.186 Thus, any regulation of digital advertising will be controversial. The above approach arguably mitigates these concerns, as it involves a voluntary body. It also has only an indirect effect on freedom of expression: it does not prohibit websites from the internet, it merely makes their existence less viable.

  1. Conclusion:

The responses mentioned above are unlikely to eliminate the issue entirely, because considerable concern lies with social media platforms and malicious actors, and ensuring they become signatories of the DTSG may be difficult. It is also unlikely people would refrain from advertising on big social media platforms given the substantial traffic such websites attract. Nevertheless, implementing an IWL and introducing a body like the DTSG would achieve a positive result, as it would increase awareness of advertising misplacement. It could be achieved while the ASA focuses on regulating advertising more generally. If such a body were established, it is imperative that the entire market be reflected in its composition: the ‘buy’ and ‘sell’ sides both need to be equally represented.187 It is also imperative that the agency’s principles be drafted in a technologically neutral way. This will provide flexibility for inevitable technological development and innovation in the digital advertising market.188 The organisation would help mitigate the effect advertising has in perpetuating ‘fake news’. Moreover, it would do so while posing a limited threat to freedom of expression. Therefore, it would be a prudent minimum response.

181 Joint Industry Committee for Web Standards, above n 179.

182 Ibid.

183 Ibid.

184 Internet Advertising Bureau, above n 160, at 4.

185 Internet Advertising Bureau, above n 160, at 5.

186 Ibid.

187 Internet Advertising Bureau, above n 160.

188 Ibid.

V. Data Protection

The Cambridge Analytica affair has served to illustrate the dangers posed by digital platforms, prompting many to reconsider a hands-off approach to media regulation. It emerged that the company mined data from 50 million Facebook users to secretly target people with advertisements during the 2016 United States election.189 The use of personal data by digital platforms has the capacity to be influential in an election. Companies such as Cambridge Analytica divide people into categories and micro-target users by gathering a range of personal data, including past purchases, petitions signed, sites visited, and news sources or advertisements clicked on.190 Targets are encouraged to like pages, follow accounts, and share information. They are then observed to see how they interact with material they are shown online, and the content is modified accordingly.191 Micro-targeting is arguably crucial to the success of ‘fake news’. Often, if content were disseminated too widely it would be refuted or produce a backlash. It is essential that the ‘right’ people see the content. Thus, data protection law offers a good opportunity to indirectly tackle ‘fake news’. This section will assess the European Union’s response to the issue and recommend that New Zealand adopt a similar response.

  1. The GDPR

Regulation (EU) 2016/679 (General Data Protection Regulation) (GDPR) came into force on 25 May 2018, and appears to be an important step towards positive change.192 Article 1(2) of the GDPR protects privacy by giving users control over their personal data. Article 4(1) broadly defines ‘personal data’ as “any information relating to an identified or identifiable person”. The regulation covers the collection of any personal data from a device in Europe, and thus applies to all the major internet platforms and to other entities outside the EU that obtain personal data from Europe.193 It makes it easier for people to find out what information companies hold about them, and imposes tough penalties on companies that do not keep data secure. It also requires much clearer consent from individuals providing their information. Companies using data that reveals political or philosophical views must obtain explicit consent for each specific use and for each entity that will receive the personal data.194 Further, companies cannot make use of their services contingent on users opting in.195 This will make it more difficult to efficiently disseminate ‘fake news’ by micro-targeting users. Companies that breach the legislation can face fines of up to 4 per cent of their annual worldwide turnover or 20 million euros, whichever is greater.196 The GDPR will likely see a shift of power away from third-party data brokers such as Cambridge Analytica and ad-tech intermediaries, towards the major platforms, which will be collecting the consent of users.197 Indeed, Cambridge Analytica has filed for bankruptcy since the Facebook scandal.198 If implemented correctly, the European Union’s data protection requirements for transparency, data minimisation and purpose limitation could greatly mitigate the type of harms that can be caused by ‘fake news’, as the Cambridge Analytica affair made evident.

189 Carole Cadwalladr and Emma Graham-Harrison “Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach” (17 March 2018) The Guardian <www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election>.

190 Karen Kornbluh “Could Europe's New Data Protection Regulation Curb Online Disinformation?” (20 February 2018) Council on Foreign Relations <www.cfr.org/blog/could-europes-new-data-protection-regulation-curb-online-disinformation>.

191 Ibid.

192 Nathan Gardels “The ‘techlash’ is shaping the next phase of the digital revolution” (27 April 2018) The Washington Post <www.washingtonpost.com/news/theworldpost/wp/2018/04/27/general-data-protection-regulation/?noredirect=on&utm_term=.f4d673589259>.

193 GDPR art 3(1).

194 Article 9(1).

195 Article 7(4).

196 Article 83(5).

Facebook has stated that it will go beyond compliance and apply the rules of the GDPR outside the European Union.199 However, Facebook would remain within its rights to stop doing so at any point. Further, Emily Sharpe, a privacy and public policy manager at Facebook, said that while some aspects of the GDPR were expected to apply globally, other regulatory steps regarding users’ consent, particularly around the collection of sensitive data such as political or religious affiliation, were expected to apply only within the European Union.200 Moreover, Facebook’s decision to move 1.5 billion users away from the scope of its GDPR-covered Irish headquarters to the more lenient regime in the United States suggests the company is not completely willing to comply.201

  1. Argument for similar regulation in New Zealand

New Zealand may benefit from implementing regulation like the GDPR, as it would help address the risk of data being used to exploit people’s vulnerabilities and to systematically target people with ‘fake news’. It would do so without limiting freedom of expression, or requiring a judge or platform to become an arbiter of truth. Because such regulation makes it harder to target certain groups of people, many ‘fake news’ distributors will suffer financially, as they will be unable to reach the ‘right’ recipients. Indeed, Facebook’s deputy head of privacy has described the GDPR as the biggest change for the company since it was founded.202 However, specific groups of television viewers have been targeted on the basis of their presumed personality traits for decades. As Cecilia Bonefeld-Dahl observed, this form of targeted advertising is not illegal, and thus there is a risk that we are overreacting.203 It may be that recent scandals and the current political climate are causing us to consider disproportionate responses. It is possible that this is merely a blemish in history; a phase that will dwindle. Thus, regulation or legislative change could be premature.

It is also unclear whether the law will have any practical effect, because people tend to ignore procedural questions such as whether they consent to their data being used. Indeed, a survey found that people do not read the long list of terms and conditions; forty-three per cent said there is no point reading them as companies will do what they want regardless.204 This is something the GDPR should address, and is perhaps something New Zealand could investigate if it were to adopt similar legislation. The tendency is exacerbated where tech companies intentionally make processes easy to use but difficult to understand. For example, Facebook has outlined that it has updated its terms to better explain its service, promising it will be easier to control personal data privacy and security settings. This is followed by a request to review and accept. Most people will likely accept without reviewing: the “I agree” button is large, and it is conveniently located where the cursor still hovers from the previous screen.205 The alternative is a cumbersome tour through a combination of pages to navigate the new settings.206 Hence, Facebook’s ability to manipulate the design of its platform for its own benefit may render the new legislation futile. Such rights are useless unless the public can exercise them. The introduction of new codes of practice for design and consent, obliging technology companies to make data protection easily understandable, has been suggested.207

197 Kornbluh, above n 190.

198 Nicholas Confessore and Mathew Rosenberg “Cambridge Analytica to File for Bankruptcy After Misuse of Facebook Data” (2 May 2018) The New York Times <www.nytimes.com/2018/05/02/us/politics/cambridge-analytica-shut-down.html>.

199 Erin Egan and Ashlie Beringer “Complying With New Privacy Laws and Offering New Privacy Protections to Everyone, No Matter Where You Live” (17 April 2018) Newsroom <newsroom.fb.com/news/2018/04/new-privacy-protections/>.

200 Mark Scott “Zuckerberg: Facebook will apply EU data privacy standards globally” (4 May 2018) Politico <www.politico.eu/article/zuckerberg-facebook-eu-data-will-apply-privacy-standards-globally/>.

201 “Facebook to exclude billions from European privacy laws” (19 April 2018) BBC News <www.bbc.co.uk/news/technology-43822184?ocid=socialflow_twitter>.

202 Kornbluh, above n 190.

203 Rankin, above n 168.

204 Rachel Coldicutt “Data protection laws are useless if most of us can't locate the information we're agreeing to” (25 April 2018) Independent <www.independent.co.uk/voices/data-protection-gdpr-facebook-cambridge-analytica-legislation-a8320381.html>.

  1. Conclusion:

Implementing legislation similar to the GDPR would make it more difficult for people to micro-target certain groups in society and attempt to manipulate them with ‘fake news’. It would restrict the way companies such as Cambridge Analytica use and manipulate data, and New Zealand would thus be taking a step in the right direction in terms of controlling the dissemination of ‘fake news’.

205 Ibid.

206 Ibid.

207 Ibid.

VI. Non-legal responses to ‘fake news’

  1. Education

There is a strong argument that the answer to ‘fake news’ is countering it with quality journalism. The difficulty lies in ensuring the public can identify legitimate journalism. Enhancing media literacy through education would allow the public to become more discerning consumers of information, applying pressure on journalists to maintain high standards. Education is often considered a preferable remedy to regulation or legislation as it does not curb freedom of expression. This seems particularly so in the context of ‘fake news’, as the truth can operate within a blurred spectrum, and the interpretation of facts is often subjective.208 A Stanford study tested the ability of United States primary, middle and high school children to distinguish between advertising, sponsored content and real articles on online media. It found that, out of 203 middle school students, 80 per cent believed native advertisements were real news stories. The study concluded that “Overall, young people’s ability to reason about the information on the Internet can be summed up in one word: bleak.”209 Thus, although it is unclear whether New Zealand shares this lack of media literacy, education on how to identify quality content would be a prudent response to ‘fake news’.

It is essential that schools start adapting their curriculum to recognise the dangers of the modern digital climate.210 Technology now plays an unprecedented role in our lives, and thus it is important our education reflects this development. In the current digital climate, there are few aspects of a child’s development that are not influenced by the digital world. Hence, ICT and digital issues are not subjects that should be on the periphery of the curriculum. They should become core areas of study, especially during high school as this is the period where young adults are preparing to engage in democratic processes.211 Government policy could be used to implement changes to the public-school curriculum, making media literacy a core element of the educational system.212 Further, it has been suggested that media literacy education should be extended beyond school. Many adults are not sufficiently literate in the art of interpreting media. Adults may be less media literate as they lack the digital savviness associated with growing up in our current digital world. Thus, implementing initiatives for both adults and children would likely see positive results.

  1. A voluntary response from digital platforms

Digital platforms themselves may be in the best position to respond to the issue of ‘fake news’. Facebook and Google have already taken some action to mitigate the effects of ‘fake news’, but the issue remains.213 While changes to algorithms have been suggested, there is also a risk of digital platforms taking on an inappropriate role as arbiters of truth. It would be undesirable for Facebook and Google to take on a role that resembles censorship.214 Taking small steps, such as displaying the actual brands of original media in the Facebook news feed or in Google search results, would make information about a source’s legitimacy more easily available. A levy on the profits of digital platforms could also be used to help fund media literacy education. Given the advertising revenue generated by such companies, they are certainly in a position, and arguably have a duty, to contribute to resolving the issue.

208 The British and Irish Law, Education and Technology Association “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.

209 Stanford History Education Group “Evaluating Information: The Cornerstone of Civic Online Reasoning” (Executive Summary 2016), at 4.

210 Dr Sandra Leaton Gray and Professor Andy Phippen “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”, at 5.

211 Ibid, at 9.

212 Dennis Reineck “Revisiting media and information literacy in the fake news age” (4 May 2017) DW <https://www.dw.com/en/revisiting-media-and-information-literacy-in-the-fake-news-age/a-38700873>.

213 Emily Peck “Facebook is Finally Taking Action Against Fake News” (15 December 2016) Huffington Post <www.huffingtonpost.com/entry/facebook-fake-news_us_5852c248e4b02edd41160c78>.

  1. Fact-checking websites

Facebook has announced its willingness to work with fact-checking websites to address ‘fake news’.215 It has also initiated a Journalism Project designed to strengthen the relationship between publishers and the platform through outreach and through measures to recognise and reward real news.216 In 2017 Facebook started educating journalists from around the world on best practices so that they can distribute content on Facebook, engaging and building audiences around their reporting.217 Facebook has also promoted media literacy, working with the News Literacy Project to produce a series of public service ads to raise awareness of the issue. Further, Facebook has increased its efforts to curb ‘fake news’. It has improved the platform so that people can report ‘fake news’ more easily, and is attempting to disrupt the financial incentives associated with spamming. Moreover, it has launched a program to work with third-party fact-checking organisations that are signatories of Poynter’s International Fact-Checking Code of Principles to identify ‘fake news’ on Facebook.218

The International Fact-Checking Network’s (IFCN) code of principles is a series of commitments made by organisations to promote excellence in fact-checking.219 The IFCN is a body that monitors and approves fact-checking organisations. It performs a range of functions, including checking whether viral stories or claims made by public figures are true. It does this by promoting five commitments: non-partisanship and fairness; transparency of sources; transparency of funding and organisation; transparency of methodology; and an open and honest corrections policy.220 The IFCN has seven counsellors from around the world who engage in fact-checking in their own countries and regions. This group of journalism and media experts acts as the first filter for applications received. Signatories are encouraged to publish a badge showing they are members. If such a network develops a good reputation, there will be a strong incentive to join, and the network could play an influential role in reducing ‘fake news’. Increasing the number of fact-checking organisations is a prudent response, as it reduces the prevalence of ‘fake news’ while having a negligible effect on freedom of expression.

214 Professor Leighton Andrews “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.

215 Peck, above n 213.

216 Fidji Simo “Introducing: The Facebook Journalism Project” (11 January 2017) Facebook for Media <www.facebook.com/facebookmedia/blog/introducing-the-facebook-journalism-project>.

217 Ibid.

218 Ibid.

219 International Fact-Checking Network “Commit to transparency – sign up for the International Fact-Checking Network’s code of principles” <https://ifcncodeofprinciples.poynter.org>.

220 Ibid.

  1. Conclusion:

Increased education and voluntary action from digital platforms are preferable responses to legislative or regulatory action, as they have the capacity to reduce ‘fake news’ without unduly restricting freedom of expression. However, in the short term a successful response to ‘fake news’ will likely require more than voluntary action by major digital platforms, because there are significant financial disincentives for such platforms to act, and any voluntary efforts are therefore likely to have a limited impact. The current digital advertising market is undeniably dominated by Facebook and Google, and many of the media organisations that could have been relied on to counter ‘fake news’ have been badly affected by a sudden decrease in advertising revenue. Thus, while education and fact-checking websites may be a long-term answer, some legal action is required to address ‘fake news’ in the short term.

VII. Conclusion:

‘Fake news’ is a legitimate concern for an informed democracy. Discourse surrounding potential responses to ‘fake news’ is therefore vital to protect New Zealand from internal or external influences on its democracy. ‘Fake news’ is a difficult issue to address as any effective response will invariably pose risks of restricting freedom of expression. It is proposed that a combination of the responses discussed in this dissertation is required to effectively combat ‘fake news’.

Although it is likely to be effective, the legislative action taken by France and Germany appears to unduly infringe on freedom of expression. A less intrusive response would be reforming electoral law so that a provision resembling s 199A is introduced with a seven-day prohibited period. This would reflect the challenges associated with the modern digital climate. It would strike a good balance between efficacy and free speech, as it would only apply during the seven days before an election or referendum. However, this response would neglect the harms that occur outside the seven-day period, and thus legislation derived using the HDCA as a framework may help address those harms. Such legislation risks restricting free speech, but if the penalties were reduced and an appropriate third party were appointed to deal with the issue, it would likely have a positive impact.

Unfortunately, we are in a digital environment that rewards the distribution of content over its creation, and thus legitimate journalism risks being overtaken by sensationalised journalism aimed at generating traffic. Reducing the misplacement of advertising on ‘fake news’ websites would help mitigate this risk, as it will help incentivise quality journalism. Thus, it is recommended that an Infringing Websites List be established, and that the steps outlined by Lord Blencathra be used to inform that list. Additionally, a body like the DTSG could be introduced to take a more focused approach to reducing advertising misplacement, helping reduce the financial incentive to disseminate ‘fake news’.

Moreover, implementing a data protection response like the GDPR may also be effective, as it will inhibit the ability of organisations such as Cambridge Analytica to micro-target specific audiences, taking advantage of their vulnerabilities and beliefs. It would help increase public awareness of personal data, and how it is being used to further a political agenda and make a substantial profit. It would help reduce the risk of incidents such as the Cambridge Analytica affair occurring in New Zealand, protecting our democracy from external influence.

New Zealand law does not appear fit for purpose in the current digital climate, leaving our democracy at risk of internal and external influence through ‘fake news’. In the short term, therefore, legal action seems necessary. If implemented alongside legal initiatives, non-legal responses such as enhancing media literacy through education and increasing the quality and prevalence of fact-checking websites could have a positive impact in the long term. Ideally, we will reach a point where these non-legal responses are sufficient. This is likely to occur in a society that promotes a high level of media literacy and has a digital environment that incentivises quality journalism. Following the recommendations above will help facilitate that environment, reducing the effect of ‘fake news’ in the short and long term, while balancing the risk of curbing freedom of expression.

Bibliography

  1. Cases
  1. New Zealand

E Hulton & Co v Jones [1909] UKLawRpAC 57; [1910] AC 20.

Rubber Improvement Ltd v Daily Telegraph Ltd [1964] AC 234.

Peters v Electoral Commission [2016] NZHC 394.

  1. United Kingdom

Norwich Pharmacal v Commissioners of Customs and Excise [1973] UKHL 6; [1974] AC 133.

  1. Legislation

Broadcasting Act 1989.

Defamation Act 1992.

Electoral Act 1993.
Harmful Digital Communications Act 2015.

Regulation (EU) 2016/679 (General Data Protection Regulation).

  1. Books

Shorter Oxford English Dictionary (6th ed, Oxford University Press, Oxford, 2007).

Bryan Garner (ed) Black’s Law Dictionary (10th ed, Thomson Reuters, St. Paul, 2014).

A Geddis Electoral Law in New Zealand: Practice and Policy (2nd ed, LexisNexis, Wellington, 2014).

M Puppis Media Governance: A New Concept for the Analysis of Media Policy and Regulation

(Volume 3, issue 2, 2010).

  1. Journal articles

Clay Calvert and Austin Vining “Filtering Fake News Through a Lens of Supreme Court Observations and Adages” (2017) 16 First Amend L Rev 153.

Karen Douglas and Daniel Jolley “The Social Consequences of Conspiracism: Exposure to Conspiracy Theories Decreases Intentions to Engage in Politics and to Reduce One’s Carbon Footprint” (4 January 2013) British Journal of Psychology.

Andrew Inest “A Theory of De Minimis and a Proposal for Its Application in Copyright” (2006) 21 Issue 2 Article 6 Berkeley Technology Law Journal 957.

Lyrissa Barnett Lidsky “Where’s the Harm?: Free Speech and the Regulation of Lies” (2008) Wash & Lee L Rev 65.

Lee Royster “Fake News: Potential Solution to the Online Epidemic” (2017) 96 N C L Rev 270.

Emma M Savino “Fake News: No One Is Liable, and That Is a Problem” (2017) 65 Buff L Rev 1101.

  1. Parliamentary and government materials

Law Commission The News Media Meets ‘New Media’ (NZLC R128, 2013) at 27.

  1. Submissions

Professor Leighton Andrews “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.

BBC “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.

Electoral Commission “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.

British and Irish Law, Education and Technology Association “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.

Gavin Devine, CEO of Newgate Communications “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.

Guardian News & Media “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.

Dr Michael Holland “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.

The Rt. Hon the Lord Blencathra “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.


Facebook “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.

Dr Sandra Leaton Gray and Professor Andy Phippen “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.

The Independent Monitor for the Press (IMPRESS) “Submission to the Digital, Culture, Media & Sport Committee Fake News inquiry 2017” at 52.

Internet Advertising Bureau “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.

Dr. Ansgar Koene, Horizon Digital Economy Research Institute, University of Nottingham “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.

Dr Karol Lasok QC “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.

Stephan Lewandowsky and James Ladyman and Professor Jason Reifler “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.

Ofcom “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.

News Media Association “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.

News Media Alliance “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.

Research Libraries UK “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.

Royal Statistical Society “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.

Professor Julian Petley “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.

Press Association “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.

Voice of the Listener & Viewer Ltd “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.

Dr Alison Wakefield, Institute of Criminal Justice Studies, University of Portsmouth “Submission to the Digital, Culture, Media & Sport Committee Fake news inquiry 2017”.

  1. Reports

Professor Vincent Cunnane and Dr Niall Corcoran “Proceedings of the 5th European Conference on Social Media” (paper presented at Limerick Institute of Technology Ireland 21- 22 June 2018).

Justice and Electoral Committee, Report on the Inquiry into the 1999 General Election, I.7C, 2001.

Stanford History Education Group “Evaluating Information: The Cornerstone of Civic Online Reasoning” (Executive Summary 2016).

European Commission “A multi-dimensional approach to disinformation” Report of the independent High level Group on fake news and online disinformation (Executive Summary 2018).

  1. Official government websites

Advertising Standards Authority “Our Jurisdiction” Advertising Standards Authority <http://www.asa.co.nz/about-us/our-jurisdiction/> .

Advertising Standards Authority “From the Chair” Advertising Standards Authority <http://www.asa.co.nz/about-us/from-the-chair/> .

Broadcasting Standards Authority “Introduction” BSA <https://bsa.govt.nz/standards/introduction>.

Chris Hubscher “Regulatory Impact Statement – Electoral Amendment Bill: Advance Voting ‘Buffer Zones’ & Prohibition on False Statements to Influence Voters” Ministry of Justice (22 June 2016) <www.justice.govt.nz/assets/Documents/Publications/electoral-amendment-bill-advance-voting-buffer-zones-and-prohibition-on-false-statements-to-influence-voters.pdf>.

Joint Industry Committee for Web Standards “Brand Safety” JICWEBS

<https://jicwebs.org/standards/brand-safety/>.

Ministry of Business, Innovation & Employment “Complaints Agency” Consumer Protection <www.consumerprotection.govt.nz/general-help/laws-policies/online-safety/harmful-digital-communications-act/>.

New Zealand Media Council “Statement of Principles”

<www.mediacouncil.org.nz/principles>.

Online Media Standards Authority “OMSA Members Join the New Zealand Press Council” (20 December 2016) OMSA Media Release <http://omsa.co.nz/wp-content/uploads/OMSA-Release-OMSA-Members-Join-Press-Council-20-December-2016.pdf>.

  1. Internet resources

Eytan Bakshy and Solomon Messing “Exposure to ideologically Diverse News and Opinion Future Research” (24 May 2015) Solomon Messing

<https://solomonmessing.wordpress.com/2015/05/24/exposure-to-ideologically-diverse-news-and-opinion-future-research/>.

BBC “Government announces anti-fake news unit” (23 January 2018) BBC News

<www.bbc.com/news/uk-politics-42791218>.

“Before Trump, the long history of fake news” (15 July 2018) The Star Online

<www.thestar.com.my/news/world/2018/07/15/before-trump-the-long-history-of-fake-news/>.

Amanda Bloom “The best content to influence voters” Facebook Business

<https://www.facebook.com/business/success/toomey-for-senate>.

Samantha Bradshaw and Philip N. Howard “Challenging Truth and Trust: a global inventory of organized social media manipulation” Computational Propaganda Research Project, Oxford Internet Institute (July 2018) <http://comprop.oii.ox.ac.uk/research/cybertroops2018/> .

Carole Cadwalladr and Emma Graham-Harrison “Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach” (17 March 2018) The Guardian

<www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election>.

Rachel Coldicutt “Data protection laws are useless if most of us can't locate the information we're agreeing to” (25 April 2018) Independent

<www.independent.co.uk/voices/data-protection-gdpr-facebook-cambridge-analytica-legislation-a8320381.html>.

Nicholas Confessore and Mathew Rosenberg “Cambridge Analytica to File for Bankruptcy After Misuse of Facebook Data” (2 May 2018) The New York Times

<www.nytimes.com/2018/05/02/us/politics/cambridge-analytica-shut-down.html>.

“Could France's fake news law be used to silence critics?” (4 June 2018) The Local

<www.thelocal.fr/20180604/could-frances-fake-news-law-be-used-to-silence-critics>.

Kristen Crawford “Stanford study examines fake news and the 2016 presidential election” (18 January 2017) Stanford News <https://news.stanford.edu/2017/01/18/stanford-study-examines-fake-news-2016-presidential-election/>.

Erin Egan and Ashlie Beringer “Complying With New Privacy Laws and Offering New Privacy Protections to Everyone, No Matter Where You Live” (17 April 2018) Facebook Newsroom

<newsroom.fb.com/news/2018/04/new-privacy-protections/>.

Enders Analysis “UK digital ad forecast 2016-2018: Strong but uneven growth” (November 2016) Enders Analysis <www.endersanalysis.com/content/publication/uk-digital-ad-forecast-2016-2018-strong-uneven-growth>.

Enders Analysis “News brands: Rise of membership as advertising stalls” (February 2017) Enders Analysis <www.endersanalysis.com/content/publication/news-brands-rise-membership-advertising-stalls>.

“Facebook to exclude billions from European privacy laws” (19 April 2018) BBC News

<www.bbc.co.uk/news/technology-43822184>.

Daniel Gambitsis “The Unintended Consequences of the Harmful Digital Communications Act” Equal Justice Project <http://equaljusticeproject.co.nz/2015/07/cross-examination-the-unintended-consequences-of-the-harmful-digital-communications-act/>.

Nathan Gardels “The ‘techlash’ is shaping the next phase of the digital revolution” (27 April 2018) The Washington Post

<www.washingtonpost.com/news/theworldpost/wp/2018/04/27/general-data-protection-regulation/>.

Pascal-Emmanuel Gobry “France’s ‘Fake News’ Law Won’t Work” (15 February 2018) Bloomberg <www.bloomberg.com/view/articles/2018-02-14/fake-news-france-s-proposed-law-won-t-work>.

Jeffrey Gottfried and Elisa Shearer “News Use Across Social Media Platforms 2016” (26 May 2016) Pew Research Center Journalism and Media <www.journalism.org/2016/05/26/news-use-across-social-media-platforms-2016/>.

Jon Henley “Emmanuel Macron files complaint over Le Pen debate ‘defamation’” (4 May 2017) The Guardian <www.theguardian.com/world/2017/may/04/emmanuel-macron-files-complaint-over-marine-le-pen-debate-remark>.

Alex Hern “Facebook doesn’t need to ban fake news to fight it” (25 November 2016) The Guardian <www.theguardian.com/technology/2016/nov/25/facebook-fake-news-fight-mark-zuckerberg>.

Nate Hoffelder “Google Blocks Child Porn From Search Results – Replaces it with Google-Generated Spam” (18 November 2013) The Digital Reader <https://the-digital-reader.com/2013/11/18/google-blocks-child-porn-search-results-replaces-google-generated-spam/>.

Kate Holton “Russian Twitter accounts promoted Brexit ahead of EU referendum” (15 November 2017) Reuters <www.reuters.com/article/us-britain-eu-russia/russian-twitter-accounts-promoted-brexit-ahead-of-eu-referendum-times-newspaper-idUSKBN1DF0ZR>.

“Manifestos and Monopolies” (21 February 2017) Stratechery

<https://stratechery.com/2017/manifestos-and-monopolies/>.

International Fact-Checking Network “Commit to transparency – sign up for the International Fact-Checking Network’s code of principles” <https://ifcncodeofprinciples.poynter.org>.

Karen Kornbluh “Could Europe's New Data Protection Regulation Curb Online Disinformation?” (20 February 2018) Council on Foreign Relations <www.cfr.org/blog/could-europes-new-data-protection-regulation-curb-online-disinformation>.

Elizabeth Kolbert “Why Facts Don’t Change Our Minds” (27 February 2017) The New Yorker

<www.newyorker.com/magazine/2017/02/27/why-facts-dont-change-our-minds>.

Anthony Leiserowitz, Edward Maibach, Connie Roser-Renouf, Seth Rosenthal and Matthew Cutler “Climate Change in the American Mind: May 2017” (5 July 2017) Yale Program on Climate Change Communication <http://climatecommunication.yale.edu/publications/climate-change-american-mind-may-2017/2/>.

Sam Levin “Mark Zuckerberg: I regret ridiculing fears over Facebook's effect on election” (28 September 2017) The Guardian <www.theguardian.com/technology/2017/sep/27/mark-zuckerberg-facebook-2016-election-fake-news>.

John Lichfield “Boris Johnson’s £350m Claim is Devious and Bogus. Here’s why” (18 September 2017) The Guardian <www.theguardian.com/commentisfree/2017/sep/18/boris-johnson-350-million-claim-bogus-foreign-secretary>.

Sander van der Linden “What a Hoax” (October 2013) Scientific American Mind

<https://scholar.princeton.edu/sites/default/files/slinden/files/conspiracyvanderlinden.pdf>.

Edward Lucas “Facebook and Google can expose fake news” (6 February 2017) The Times <https://www.thetimes.co.uk/article/facebook-and-google-can-expose-fake-news-k7stxnrpq>.

“Macron vows new law to combat fake news with Russian meddling in mind” (4 January 2018) The Local

<www.thelocal.fr/20180104/france-announces-law-to-combat-fake-news-with-russian-meddling-in-mind>.

Ryan McElhany “The Effects of Anchoring Bias on Human Behavior” (23 May 2016) Thought Hub <www.sagu.edu/thoughthub/the-affects-of-anchoring-bias-on-human-behavior>.

Angela Moon “Two-thirds of American adults get news from social media: survey” (9 September 2017) Reuters <www.reuters.com/article/us-usa-internet-socialmedia/two-thirds-of-american-adults-get-news-from-social-media-survey-idUSKCN1BJ2A8>.

Dr Rod Oakland “Words at War – Leaflets and Newspapers in World War Two” (26 August 2012) <www.psywar.org/newspapers.php>.

Oxford Dictionary “Word of the Year 2016 is...” English Oxford Living Dictionaries

<https://en.oxforddictionaries.com/word-of-the-year/word-of-the-year-2016>.

Jonathan Turley “Europe’s Invasive Species: We Should Keep Macron’s Oak And Send Back His Speech Limits” (29 April 2018) Jonathan Turley

<https://jonathanturley.org/2018/04/29/europes-invasive-species-we-should-keep-macrons-oak-and-send-back-his-speech-limits/comment-page-1/>.

Jennifer Rankin “EU anti-propaganda unit gets €1m a year to counter Russian fake news” (25 November 2017) The Guardian <www.theguardian.com/world/2017/nov/25/eu-anti-propaganda-unit-gets-1m-a-year-to-counter-russian-fake-news>.

Jennifer Rankin “Tech firms could face new EU regulations over fake news” (24 April 2018) The Guardian <https://www.theguardian.com/media/2018/apr/24/eu-to-warn-social-media-firms-over-fake-news-and-data-mining>.

Dennis Reineck “Revisiting media and information literacy in the fake news age” (4 May 2017) DW <https://www.dw.com/en/revisiting-media-and-information-literacy-in-the-fake-news-age/a-38700873>.

Andreas Rinke and Andrea Shalal “Germany alarmed about potential Russian interference in election: spy chief” (16 November 2016) Reuters <www.reuters.com/article/us-germany-election-russia/germany-alarmed-about-potential-russian-interference-in-election-spy-chief-idUSKBN13B14O>.

Michael Safi “Fake news: an insidious trend that's fast becoming a global problem” (2 December 2016) The Guardian <https://www.theguardian.com/media/2016/dec/02/fake-news-facebook-us-election-around-the-world>.

Lewis Sanders “Is criminalizing fake news the way forward?” (14 December 2016) DW

<www.dw.com/en/is-criminalizing-fake-news-the-way-forward/a-36768028>.

Mark Scott “Zuckerberg: Facebook will apply EU data privacy standards globally” (4 May 2018) Politico <www.politico.eu/article/zuckerberg-facebook-eu-data-will-apply-privacy-standards-globally/>.

Craig Silverman “This Analysis Shows How Viral Fake Election News Stories Outperformed Real News on Facebook” (16 November 2016) BuzzFeed News

<www.buzzfeednews.com/article/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook>.

Olivia Solon “Facebook's fake news: Mark Zuckerberg rejects 'crazy idea' that it swayed voters” (11 November 2016) The Guardian

<www.theguardian.com/technology/2016/nov/10/facebook-fake-news-us-election-mark-zuckerberg-donald-trump>.

Fidji Simo “Introducing: The Facebook Journalism Project” (11 January 2017) Facebook for Media <www.facebook.com/facebookmedia/blog/introducing-the-facebook-journalism-project>.

Olivia Solon “Russia-backed Facebook posts ‘reached 126m Americans’ during US election” (31 October 2017) The Guardian <www.theguardian.com/technology/2017/oct/30/facebook-russia-fake-accounts-126-million>.

Mark Sweney “Facebook’s rise as news source hits publishers’ revenues” (15 June 2016) The Guardian <www.theguardian.com/media/2016/jun/15/facebooks-news-publishers-reuters-institute-for-the-study-of-journalism>.

Stephanie Panzic “Legislating for E-Manners – Deficiencies and unintended consequences of the Harmful Digital Communications Bill” (October 2014) InternetNZ

<https://internetnz.nz/blog/legislating-e-manners-–-deficiencies-and-unintended-consequences-harmful-digital-communications>.

Emily Peck “Facebook is Finally Taking Action Against Fake News” (15 December 2016) Huffington Post <www.huffingtonpost.com/entry/facebook-fake-news_us_5852c248e4b02edd41160c78>.

“Putin’s Brexit? The Influence of Kremlin media and bots during the 2016 UK EU referendum” (10 February 2018) 89up <http://89up.org/russia-report>.

Jim Waterson “UK democracy under threat and reform is urgent, says electoral regulator” (26 June 2018) The Guardian <www.theguardian.com/politics/2018/jun/26/uk-democracy-under-threat-and-reform-is-urgent-says-electoral-regulator>.

Brett Wilson “Norwich Pharmacal Orders – Identifying the Anonymous” Brett Wilson LLP

<www.brettwilson.co.uk/services/defamation-privacy-online-harassment/norwich-pharmacal-orders-identifying-the-anonymous/>.

Zachary Young “French Parliament passes law against ‘fake news’” (4 July 2018) Politico

<www.politico.eu/article/french-parliament-passes-law-against-fake-news/>.

  2. Other resources

Letter from Rebecca Stimson (Facebook) to Damian Collins, Chair, Digital, Culture, Media and Sport Committee (8 June 2018).
