
THE LEGAL CHALLENGES OF ALGORITHMIC MANAGEMENT IN THE WORKPLACE

William McKenzie

A dissertation submitted in partial fulfilment of the degree of Bachelor of Laws (with Honours) at the University of Otago – Te Whare Wānanga o Otāgo

October 2020

ACKNOWLEDGEMENTS

To my supervisor Dawn Duncan, for your invaluable feedback and guidance throughout the year;

To Colin Gavaghan, for sparking my interest in the legal aspects of artificial intelligence and emerging technologies;

And to my family and friends that had to listen to me talk about this dissertation all year;

Thank you.

CONTENTS

Introduction

Employers have long been searching for ways to increase worker productivity and workplace efficiency. In the 1890s, Frederick Taylor pioneered a new management technique called “scientific management” that swept through America’s factories.1 Taylor aimed to increase worker productivity by breaking work into smaller tasks and telling workers not only what to do, but exactly how to do it in the most efficient way.2 Managers stalked the factory floors with stopwatches, and workers were given a slip of paper each morning that would tell them how well they had performed the day before.3 The recent rise of artificial intelligence and, more specifically, machine learning algorithms has created the potential for Taylor’s ideas to be implemented on a grand scale. Stopwatches and performance slips are being replaced with technological solutions that promise gains in workplace productivity and efficiency like never before. This new concept of “algorithmic management” poses unprecedented risks to the rights of workers across virtually all fields of work.

This dissertation aims to examine some of the challenges that arise when the current law is used to address these risks posed by algorithmic management. My inquiry will focus on three distinct examples of algorithmic management currently in use: gig-working platforms, hiring/recruitment algorithms and productivity/performance management algorithms. Focusing on these three examples allows this dissertation to explore the effects of algorithmic management not only on employers and employees, but also on non-standard workers and job seekers.

I will begin in Chapter I by defining “algorithmic management” and identifying some of the broad categories of challenge it can pose. Chapter II will then focus on gig-working platforms and how their algorithmic management techniques create legal challenges when attempting to classify workers as either employees or contractors. This chapter will examine these challenges within the context of Uber drivers, as their employment status has been subject to considerable legal debate. Chapter III will then examine hiring/recruitment algorithms and the impact that “algorithmic discrimination” could have on job seekers in New Zealand. The discrimination provisions in the Employment Relations Act (the ERA) and Human Rights Act were not drafted with algorithmic discrimination in mind.4 Consequently, legal challenges arise when attempting to use the current law to address this novel form of discrimination.

1 Sarah O’Connor “When your boss is an algorithm” Financial Times (online ed, 8 September 2016).

2 Ifeoma Ajunwa, Kate Crawford and Jason Schultz “Limitless Worker Surveillance” (2017) 105 Cal L Rev 735 at 737; and O’Connor, above n 1.

3 O’Connor, above n 1.

4 Employment Relations Act 2000 [ERA], ss 103 and 104; and Human Rights Act 1993 [HRA], ss 22 and 65.

Chapter IV will focus on the justifiability of employers using productivity/performance management algorithms to assist in making significant workplace decisions. Many of these decisions could greatly affect the lives of employees and may not be justifiable under the personal grievance provisions of the ERA.5 Finally, Chapter V will examine the data collection and surveillance concerns that arise under all three of the examples of algorithmic management considered in this paper. If our current Privacy Act proves inadequate to deal with these concerns, then the European Union’s General Data Protection Regulation (GDPR) may serve as some inspiration for how privacy law can be tailored to directly address algorithmic technologies.6

Chapter I: What is Algorithmic Management?

A: Definition

The term “algorithmic management” was initially used to describe the way in which gig-working platforms use sophisticated algorithms to allocate, optimise and evaluate work.7 Möhlmann and Zalmanson are among the many academics who use the term in this context. They identify five defining characteristics of algorithmic management: (1) constant tracking of workers’ behaviour; (2) constant performance evaluation of workers; (3) automatic implementation of decisions without the need for human intervention; (4) lack of worker interaction with humans; and (5) low management transparency.8

While these characteristics are typical of algorithmic management, they also result in a very restrictive definition that would exclude many of the algorithmic practices currently being implemented in the workplace. This paper aims to examine the use of algorithms in both the gig economy and in traditional workplaces, so a wider definition must be formulated.

I am defining algorithmic management broadly as a human resource management technique that uses data-driven algorithms to make automated or semi-automated decisions in the workplace. These systems vary in complexity but will typically rely on both extensive workplace data collection and a form of sophisticated artificial intelligence known as “machine learning”. In the next section I will clarify this definition by identifying three different real-world examples of algorithmic management that this paper will focus on.

5 ERA, ss 103 and 103A.

6 Privacy Act 2020; and Regulation (EU) 2016/679 General Data Protection Regulation [2016] OJ L119.

7 Min Kyung Lee and others “Working with Machines: The Impact of Algorithmic and Data-Driven Management on Human Workers” (paper presented to the Annual ACM Conference on Human Factors in Computing Systems, 2015).

8 Mareike Möhlmann and Lior Zalmanson “Hands on the Wheel: Navigating Algorithmic Management and Uber Drivers’ Autonomy” (paper presented to the International Conference on Information Systems, December 2017) at 4–5.

B: Three Distinct Examples of Algorithmic Management

1: Gig-working platforms

Gig-working platforms, such as Uber, provide arguably the most immediate example of algorithmic management currently in use. Often, gig-workers are managed and directed entirely by a smartphone app that uses algorithms to make its decisions. For instance, Uber drivers are not subject to the kind of human management you would expect in a typical working environment. Instead, the app performs critical management tasks such as assigning jobs, setting pay rates, managing performance and even suspending or terminating drivers from the platform.9 This form of algorithmic management has raised many concerns, particularly in relation to accountability, that will be assessed in the next chapter.10

2: Hiring algorithms

Another example of algorithmic management is the use of algorithms to assist in making automated or semi-automated hiring decisions.11 These algorithms take input data about job candidates and then use various performance indicators to predict which candidates are “better”.12 This input data can come from many different sources including candidates’ resumes, their social media and internet footprints, or even data gathered from specially developed games.13 Unlike gig-working platforms, these algorithmic tools will usually still involve a human manager, as the algorithm serves merely to assist them rather than taking on the management role in its entirety. An example of one such service is Infor Talent Science, which uses 24 behavioural characteristics to create a data-driven predictive model that can “identify the best candidates”.14 Further examples of hiring algorithms, and some of the discriminatory challenges they pose, will be examined in Chapter III.

3: Performance and productivity algorithms

The final example of algorithmic management considered in this paper is the use of algorithms to assess the productivity or performance of employees. These algorithms are used in both blue

9 See: Alex Rosenblat and Luke Stark “Algorithmic Labor and Information Asymmetries: A Case Study of Uber’s Drivers” (2016) 10 International Journal of Communication 3758.

10 See: Charlotte S Alexander and Elizabeth Tippett “The Hacking of Employment Law” (2017) Vol 82 No 4 Missouri Law Review 974 at 1003–1013.

11 See: Pauline Kim “Data-Driven Discrimination at Work” (2017) Vol 48 William & Mary Law Review 857 at 862–863.

12 Stephanie Bornstein “Antidiscriminatory Algorithms” (2018) Vol 70 No 2 Alabama Law Review 519 at 530–533.

13 Kim, above n 11, at 861-863; and Bornstein, above n 12, at 531.

14 Infor “Infor Talent Science” <www.infor.com/products/talent-science>; and Bornstein, above n 12, at 531.

and white-collar occupations throughout the world and draw on vast amounts of employee data collected through extensive workplace surveillance.15 After this data is analysed, the algorithm can produce an assessment of performance or productivity which the employer can then use to influence various management decisions such as promoting, demoting, shift scheduling or even dismissing.

One example of this software comes from Californian company Percolata, which provides machine learning algorithms to assist employers in making scheduling decisions in the retail sector.16 Percolata tracks and predicts customer foot traffic through the use of electronic sensors, then combines this data with employees’ sales data to assess the “true productivity” of employees.17 The algorithm supposedly learns things such as which employees work better when assigned together and at what times.18 This analysis is then used to produce a schedule containing the optimal mix of workers to maximise sales, with the “better” workers generally being awarded more hours.19 There are many concerns with allowing an algorithm to make, or influence, these decisions as they can have significant effects on the livelihood of employees. The justifiability of relying on algorithmic advice in the workplace will be considered in more depth in Chapter IV.
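To make the mechanics described above more concrete, the following Python sketch shows how a scheduling tool of this general kind might rank staff for shifts by combining predicted foot traffic with each employee’s historical sales rate. It is a minimal illustration only: the shift names, figures and scoring rule are all invented, and it does not reproduce Percolata’s actual model.

```python
# Hedged sketch only: a toy illustration of how a retail scheduling algorithm of
# the kind described above might rank staff for shifts. The shifts, figures and
# scoring rule are invented for illustration; this is not Percolata's method.

# Predicted visitors per shift (hypothetical output of a foot-traffic model).
predicted_foot_traffic = {"Mon 9-12": 40, "Mon 12-3": 95, "Mon 3-6": 60}

# "True productivity" proxy: historical sales per visitor served (hypothetical).
sales_per_visitor = {"Aroha": 3.1, "Ben": 2.4, "Chen": 2.9, "Dita": 1.8}

def rank_staff_for_shift(shift, staff_rates, top_n=2):
    """Rank staff by expected sales for one shift and return the top_n picks."""
    expected_sales = {
        name: rate * predicted_foot_traffic[shift]
        for name, rate in staff_rates.items()
    }
    return sorted(expected_sales, key=expected_sales.get, reverse=True)[:top_n]

for shift in predicted_foot_traffic:
    print(shift, "->", rank_staff_for_shift(shift, sales_per_visitor))

# Workers with lower measured rates (here "Dita") are never rostered, which is
# the "better workers are awarded more hours" dynamic discussed above.
```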

C: Four Broad Areas of Challenge

All three examples of algorithmic management considered throughout this paper pose challenges in four broad areas. These areas of challenge were identified by researchers for the Data & Society Research Institute and form the basis of many of the legal problems that will be examined in the upcoming chapters of this dissertation.20

1: Transparency

The lack of transparency associated with machine learning algorithms creates many novel legal challenges and is arguably the most difficult issue in relation to algorithmic management. Machine learning algorithms are notoriously opaque, taking potentially thousands of data

15 Valerio De Stefano Negotiating the algorithm: Automation, artificial intelligence and labour protection (International Labour Office, Employment Working Paper No 246, 2018) at 8.

16 Percolata (2020) <www.percolata.com>.

17 O’Connor, above n 1.

18 O’Connor, above n 1.

19 O’Connor, above n 1.

20 Alexandra Mateescu and Aiha Nguyen “Explainer: Algorithmic Management in the Workplace” (Data & Society Research Institute, February 2019) at 13–14.

points into their analysis and uncovering relationships that can be very complex.21 This often results in the inner workings of the algorithm being completely incomprehensible to humans, which then creates legal problems when attempting to challenge or explain an algorithm’s decision.22 Some specific transparency challenges will be considered in more depth in Chapters III and IV.

2: Discrimination and bias

Advocates for the use of algorithmic management claim that it results in fairer decisions than traditional management as it replaces biased human managers with “neutral” data-driven analysis.23 However, many have begun to question this proposition as it is becoming increasingly clear that algorithms do not always behave as “neutral” decision makers and can be swayed by biases similar to those of human managers.24 The way in which this “algorithmic discrimination” can arise within hiring algorithms, as well as the potential effect it can have on job seekers, will be considered further in Chapter III.

3: Control and surveillance

At its core, algorithmic management relies heavily upon various forms of data collection in order to create and utilise predictive models. This data collection occurs in every example of algorithmic management examined throughout this paper. For example: GPS data about Uber drivers’ movements are collected by the app and can influence management decisions; hiring algorithms often collect applicants’ social media and internet footprint data; and performance/productivity algorithms rely upon data collected in the workplace such as email tracking or audio and video recording.25 This extensive data collection, and the surveillance that enables it, causes many concerns regarding both worker privacy and control. These issues will arise throughout each chapter of this paper, but Chapter V will focus specifically on whether New Zealand’s privacy law can reduce these concerns.

21 See generally: Colin Gavaghan and others Government Use of Artificial Intelligence in New Zealand (New Zealand Law Foundation, 2019) at 42; and Kim, above n 11, at 881.

22 Lillian Edwards and Michael Veale “Slave to the Algorithm? Why a ‘Right to an Explanation’ is Probably Not the Remedy You are Looking For” (2017) Vol 16 Duke Law and Technology Review 18 at 26.

23 Kim, above n 11, at 860.

24 See: Kim, above n 11; Bornstein, above n 12; Gavaghan and others, above n 21, at 43; Dave Heatley “Biased Algorithms – a good or bad thing?” (October 2019) New Zealand Productivity Commission <https://www.productivity.govt.nz/futureworknzblog>; and Solon Barocas and Andrew Selbst “Big Data’s Disparate Impact” (2016) 104 California Law Review 671.

25 Mateescu and Nguyen, above n 20, at 5; Kim, above n 11, at 861; and De Stefano, above n 15, at 8.

4: Accountability

The final broad area of challenge relates to accountability. There are concerns that algorithms may allow employers to make unfair or even discriminatory workplace decisions without being held accountable for doing so. In this sense, algorithms may distance employers from their business decisions and provide a justification for decisions that would otherwise be frowned upon. Furthermore, some have even suggested that algorithms may allow employers to evade certain aspects of the law entirely.26 This issue will arise in Chapter II in relation to the legal status of gig-workers, Chapter III in relation to “masking” discrimination and again in Chapter IV in relation to the justifiability of algorithmic decision-making.

Chapter II: The Employment Status of Gig-Workers

The first legal challenge posed by algorithmic management arises when attempting to define gig-workers as either “employees” or “independent contractors”. This chapter will examine this challenge in the context of Uber drivers (“drivers”), as they are arguably the most widely known example of algorithmically managed gig-workers in New Zealand and their employment status has been subject to significant legal discussion both locally and in other jurisdictions.27

A: The Difficulties of Categorising Gig-Workers

1: The ERA approach to defining “employees”

The framework for determining whether a worker is an “employee” is contained in s 6 of the ERA, which states that the court or Authority must determine the “real nature” of the relationship between the parties.28 The court or Authority must also consider all relevant matters in making this determination, including the intention of the parties, but must not treat statements of intention as being determinative.29 This provision largely leaves the matter of determination up to the court and, subsequently, the Supreme Court has confirmed four

26 See: Alexander and Tippett, above n 10.

27 See: Arachchige v Rasier New Zealand Ltd [2020] NZEmpC 35; Uber BV and others v Aslam and others [2018] EWCA Civ 2748, [2019] 3 All ER (CA); O’Connor v Uber Technologies Inc 82 F Supp 3d 1133, 80 Cal (ND Cal 11 March 2015); Berwick v Uber Technologies Inc, California UGC-15-546378 (Cal Super Ct 21 September 2015); Razak v Uber Techs Inc 951 F.3d 137 (3d Cir Pa 3 March 2020); and Fair Work Ombudsman “Uber Australia investigation finalized” (7 June 2019) <https://www.fairwork.gov.au/about-us/news-and-media- releases/2019-media-releases/june-2019/20190607-uber-media-release>;

28 Section 6(2).

29 Section 6(3).

different common law tests that aid in determining the “real nature” of a working relationship.30 These tests include: the intention of the parties, which can often be determined by examining the terms of the contract;31 the control test, which examines how much control the worker is placed under;32 the integration test, which examines whether the worker is “part and parcel” of the business and whether the work performed is fundamental to that business;33 and the fundamental test, which considers the economic reality of the working relationship.34

2: Applying the common law tests to Uber drivers

Currently, drivers in New Zealand are treated as independent contractors, meaning that they are not “employees” under s 6 and are consequently not subject to the rights and protections of employment law. These rights and protections include things such as minimum wage, holiday pay, sick leave and the ability to take a personal grievance.35

Despite this, drivers do not fit comfortably into either the employee or contractor category when applying the common law tests. For instance, under the integration test, drivers provide their own vehicle, petrol and cell phone, which suggests that they are contractors. However, their work is fundamental to Uber’s business model, and not merely supplementary, which suggests that they are employees.

Similar contradictions arise under the fundamental test, as drivers pay their own tax and insurance and are paid on a per-trip basis, which again suggests that they are contractors. However, drivers lack the ability to set their own fees or bargain for a higher fee,36 nor can they subcontract their work or employ their own staff, which leans in favour of employment.

The strongest factor in favour of employment, however, arises under the control test. Drivers are not subject to the mechanisms of control that would typically be seen in a traditional employment relationship, such as set working hours, time and location, or availability for work. Instead, they are subject to novel forms of “soft control” facilitated by the use of algorithmic

30 Bryson v Three-Foot-Six Ltd [2005] NZSC 34 at [32].

31 Southern Taxis Ltd v Labour Inspector [2020] NZEmpC 63 at [73].

32 Clark v Northland Hunt Inc [2006] NZEmpC 119; (2006) 4 NZELR 23 (EmpC) at [30].

33 Challenge Realty Ltd v Commissioner of Inland Revenue [1990] 3 NZLR 42 (CA) at 65.

34 New Zealand Productivity Commission Technological change and the future of work: Final report (2020) at 87; and Employment New Zealand “Contractor versus Employee” (2020)

<https://www.employment.govt.nz/starting-employment/who-is-an-employee/difference-between-a-self- employed-contractor-and-an-employee/>.

35 Leota v Parcel Express Ltd [2020] NZEmpC 61 at [2].

36 Rosenblat and Stark, above n 9, at 3762–3763.

management.37 One of these controls arises from information asymmetries, as the Uber app prevents drivers from viewing the fare information or destination of a job before accepting it.38 Drivers are also only given around 15 seconds to accept or reject a job, and are subject to temporary or permanent suspension from the platform if they refuse or cancel too many jobs.39 This effectively removes the driver’s ability to refuse trips which are undesirable or unprofitable, and allows Uber to exert control over which jobs the driver performs.

Another form of soft control is achieved through Uber’s algorithmically determined “surge pricing” feature, which encourages drivers to move to specific geographical locations in order to obtain higher advertised fares.40 However, if drivers move to these “surge” zones, they can still receive job requests from lower-paying passengers outside of the zone, and risk being penalised for prioritising the more profitable surge jobs.41 This essentially allows Uber to prevent drivers from rejecting lower paid work in favour of higher paid work, again reducing the control that drivers have over their own working arrangements. Drivers that are attempting to log off the platform may also receive algorithmically generated messages encouraging them to stay online due to high demand, surge pricing or net earnings goals, essentially providing a financial incentive to continue working in certain areas.42 These “surge” zones and automatic nudges allow Uber to exert indirect control over the working time and location of their drivers in a manner that benefits the company.43

Customer reviews are also used by Uber as an indirect method of control, as drivers must maintain an average rating of around 4.6/5 or risk being terminated from the platform.44 Uber sends messages to drivers advising them that certain behaviours will achieve generally higher customer reviews, without explicitly telling drivers that they must behave in this manner.45 This allows Uber to indirectly control the workplace behaviour of drivers and promote a standardised experience for passengers, which again contradicts the claim that drivers are self-employed independent contractors.
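The automated character of these controls can be illustrated with a short sketch. The following Python fragment is purely hypothetical: the rating floor of roughly 4.6 reflects the figure reported in the literature cited above, but the cancellation threshold and the decision rule itself are assumptions made for illustration, not Uber’s actual logic.

```python
# Minimal sketch, not Uber's actual logic: how automated "soft control" rules of
# the kind described above could be expressed in code. The ~4.6 rating floor is
# the figure reported in the cited literature; the other threshold and the rule
# itself are assumptions made purely for illustration.

RATING_FLOOR = 4.6
MAX_CANCELLATION_RATE = 0.10  # hypothetical

def platform_action(avg_rating, cancellation_rate):
    """Return an automated management decision with no human in the loop."""
    if avg_rating < RATING_FLOOR:
        return "deactivate pending review"
    if cancellation_rate > MAX_CANCELLATION_RATE:
        return "temporary suspension warning"
    return "no action"

print(platform_action(avg_rating=4.55, cancellation_rate=0.02))  # deactivate pending review
print(platform_action(avg_rating=4.80, cancellation_rate=0.15))  # temporary suspension warning
```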

37 Rosenblat and Stark, above n 9, at 3761; and Jeremias Adams-Prassl “What if Your Boss Was an Algorithm? The Rise of Artificial Intelligence at Work” (2019) Vol 41 Comparative Labor Law & Policy Journal 123, at 144–145.

38 Rosenblat and Stark, above n 9, at 3762; Uber BV and others v Aslam and others, above n 27, at [12]; and Mateescu and Nguyen, above n 20, at 6.

39 Rosenblat and Stark, above n 9, at 3762; and Uber BV and others v Aslam and others, above n 27, at [21].

40 Rosenblat and Stark, above n 9, at 3765–3768; Mateescu and Nguyen, above n 20, at 6; and James Duggan and others “Algorithmic management and app-work in the gig economy: A research agenda for employment relations and HRM” (2020) 30 Hum Resour Manag J 114 at 120.

41 Rosenblat and Stark, above n 9, at 3766.

42 At 3767–3768.

43 See: Mateescu and Nguyen, above n 20, at 6.

44 Rosenblat and Stark, above n 9, at 3774.

45 At 3775; and Duggan, above n 40, at 120.

The contradictions that arise when applying these common law tests, specifically in relation to algorithmic control, highlight the difficulty of categorising Uber drivers under our current employee/contractor distinction. If drivers were truly independent contractors, then they would not be subject to the algorithmic controls of the Uber app, and would be free to set their own fees and accept only the jobs they wanted to perform. However, if they were truly employees, then they would generally have some obligation to log into the app and continue working.

3: Concerns with the current approach

The most immediate concern raised by the current classification of Uber drivers as contractors is the potential for unfair working conditions. As mentioned earlier, drivers are not currently subject to the protections of employment law, meaning, amongst other things, that they are not guaranteed a minimum wage nor can they take a personal grievance if they are unfairly disadvantaged, suspended or terminated from the Uber platform. This allows Uber to change the pay rate of drivers, often at the driver’s disadvantage, without any repercussions other than potentially turning some drivers away from the platform. Often, when Uber makes one of these pay cuts, they justify it by claiming that it actually increases the amount of money that drivers receive by supposedly increasing ride demand or re-balancing the rate for factors such as being stuck in traffic.46 However, many drivers contend that these rate adjustments ultimately benefit Uber and the customers, leaving drivers to work longer or specific hours in order to receive the same pay as pre-adjustment rates.47

Unfairness can also arise as a result of driver performance being based upon customer reviews. Often, these reviews will be based upon things that are outside of the driver’s control, thus unfairly exposing them to potential automated disciplinary actions such as temporary or permanent suspension. For instance, a driver may receive a negative review due to slow traffic, road works or even blatant customer discrimination.48

Due to their status as contractors, drivers are unable to legally challenge any unfair changes to their pay rate or any suspension based upon unfair or discriminatory reviews.49 This disadvantage is supposedly justified by the non-financial benefit of flexibility and autonomy to choose when and where to work.50 However, for those who use Uber as their main source of

46 Rosenblat and Stark, above n 9, at 3764; and Lana Andelane “Uber’s new Auckland pricing trial criticised for ‘ripping off’ drivers” (25 July 2019) Newshub <https://www.newshub.co.nz/home/new-zealand/2019/07/exclusive-uber-s-new-auckland-pricing-trial-criticised-for-ripping-off-drivers.html>.

47 Rosenblat and Stark at 3764; and Andelane, above n 46.

48 See: Duggan, above n 40, at 127.

49 Note that employees are protected from unjustifiable action or dismissal under ERA s 103.

50 Rosenblat and Stark, above n 9, at 3761; and Duggan, above n 40, at 124.

income, the soft controls exercised by Uber make this supposed “flexibility” a fallacy. In reality, drivers must work “for long hours and at peak times” in order to obtain sufficient earnings and to maintain their customer ratings.51

Another major concern with the current classification of Uber drivers as independent contractors is in relation to accountability. There is a concern that the use of algorithmic management to control and assign work in the gig-economy allows companies such as Uber to classify their workers as contractors and thus avoid the traditional costs and obligations placed on employers.52 This is, in part, due to the lack of human oversight on the Uber app, which creates the illusion that workers are their “own bosses” and can work on their own terms when, in fact, they are subject to “soft” algorithmic controls that seriously limit their flexibility. Furthermore, many of the managerial tasks on the Uber app are splintered among passengers, drivers and the app itself, meaning that it can be difficult to identify the appropriate working relationship under our current understanding of employment.53 For example, performance reviews are performed by the passengers which can lead to automated disciplinary procedures, working hours are decided by the drivers themselves and work assignment is handled by the underlying algorithm in the Uber app. This managerial splintering allows Uber to claim that they are nothing more than a “technology platform” that acts as a “neutral intermediary” connecting passengers to third party drivers.54 However, Uber’s business model ultimately relies upon selling rides rather than technology. As one US District Judge put it — “Uber is no more a “technology company” than... John Deere is a “technology company” because it uses computers and robots to manufacture lawn mowers”.55 If the law in New Zealand continues to allow Uber to evade accountability and act as merely a “technology platform” with no obligations to their workers, then there is a risk that more companies may attempt to avoid employment law using similar algorithmic measures. As software innovations enable non-standard working arrangements to become increasingly common, this could result in employment law ultimately losing its relevance to modern workers.56

51 Duggan, above n 40, at 125.

52 See: Alexander and Tippett, above n 10, at 1004.

53 See: Alexander and Tippett, above n 10, at 1004–1008.

54 Uber “Uber B.V Terms and Conditions – New Zealand” (10 June 2020) <https://www.uber.com/legal/en/document/?name=general-terms-of-use&country=new-zealand&lang=en>; and Rosenblat and Stark, above n 9, at 3761.

55 O’Connor v Uber Technologies Inc, above n 27.

56 See: Alexander and Tippett, above n 10, at 1012–1013.

4: Recent developments in New Zealand

Earlier this year, a former Uber driver sought a declaration from the Employment Court that he was an employee of Uber rather than an independent contractor. The plaintiff, Mr Arachchige, sought the declaration so that he could pursue a personal grievance over what he claimed was an unjustifiable dismissal from the platform.57 While the Employment Court’s judgment has not yet been delivered, there have been some other recent developments throughout the year that suggest that Mr Arachchige may be successful.

One of these developments came from the case of Southern Taxis v Labour Inspector, where the Employment Court found that a group of commission taxi drivers were actually employees rather than contractors.58 Some of the Court’s reasoning is based upon factual circumstances closely resembling the relationship between Uber and their drivers. For instance, the commission drivers were only allocated jobs through a dispatcher and had no real ability to decline these jobs.59 This is comparable to the way that the Uber app assigns trips and punishes drivers who decline. The fares were also prescribed by the company with the driver possessing no ability to negotiate the fare with customers, again resembling Uber’s fare policy.60 The Court also found that the commission drivers were fundamental to the business, as the company required drivers in order to operate an economic taxi business.61

Another recent case that appears promising for Uber drivers is Leota v Parcel Express Ltd.62 In this case, the Employment Court decided that a courier driver, who shared many factual similarities with Uber drivers, was an employee rather than an independent contractor.63 One of the main reasons the Court reached this conclusion was the “significant degree of direction and control” that was exercised over the plaintiff’s work. As already mentioned, Uber exercises significant “soft controls” over the work performed by their drivers. Another key factor was the plaintiff’s inability to grow his own business, or to take any customers with him when he left the company.64 Again, this is a restriction that applies equally to Uber drivers, as they are unable to grow their business beyond the jobs that are algorithmically allocated to them through the app.

57 Arachchige v Rasier New Zealand Ltd, above n 27, at [1].

58 Southern Taxis Ltd v Labour Inspector, above n 31, at [124].

59 At [88].

60 At [89].

61 At [98].

62 Leota v Parcel Express Ltd, above n 35.

63 At [71].

64 At [61].

While the Court also considered many other factors in their analysis in both cases, their reasoning on these specific points suggests that the Arachchige case may be ruled in favour of the plaintiff. Clearly, this could have huge implications for Uber’s business in New Zealand, as drivers would suddenly be considered employees and would be subject to the protections of employment law. This solution, while quelling many of the concerns associated with the current approach to drivers in New Zealand, could also destroy Uber’s “work as you want” model that many casual drivers may rely upon. The rest of this chapter will consider whether there is a more desirable solution that sufficiently balances workers’ interests with the ability for Uber’s business model of “flexibility” to operate effectively.

B: Potential Solutions

1: Intermediate category of “worker”

In the UK, there is a third category of “worker” that falls in-between employees and contractors.65 Individuals within this category are subject to some, but not all, of the protections afforded under employment law. For instance, they are entitled to things such as minimum wage, protections against unlawful wage deductions, and minimum statutory holidays and rest breaks, but not things such as protection against unfair dismissals.66

Recently, the UK Court of Appeal upheld the finding that Uber drivers are employed by Uber under a worker contract while they have their app switched on and are willing to accept assignments in the area.67 Their reasoning was largely centred around the “soft control” mechanisms mentioned throughout this chapter.68 This ruling, while currently being appealed to the Supreme Court,69 is considered a landmark case for affording fairer working conditions for gig-workers in the UK. Creating a third category such as this in New Zealand could potentially solve the challenge of classifying gig-workers, as it could afford some fairer minimum rights to workers while preserving the flexibility of the “work as you want” model that many casual gig-workers rely upon.

However, some commentators have expressed concerns that the creation of a worker category would only serve to further complicate employment law in New Zealand, as new case law

65 Employment Rights Act 1996 (UK), s 230(3); and UK Government “Employment Status” <https://www.gov.uk/employment-status/worker#:~:text=A%20person%20is%20generally%20classed,a%20contract%20or%20future%20work>.

66 UK Government, above n 65; and Productivity Commission, above n 34, at 90.

67 Uber BV and others v Aslam and others, above n 27, at [103].

68 At [96].

69 The Supreme Court “Uber BV and others (appellants) v Aslam and others (Respondents)” (2020)

<https://www.supremecourt.uk/cases/uksc-2019-0029.html>.

would need to be developed in order to distinguish this new category from employees and contractors.70 This could effectively do nothing more than shift the uncertainty to a new category and fail to prevent extensive litigation between workers and gig-working platforms.71 There is also the concern that creating such a category could create a barrier for individuals seeking to obtain the full rights of employment law.72 Both gig-workers and traditional contractors may end up settling for the lesser rights as “workers” under a new category when, in fact, they may have an arguable case for classification as employees. Furthermore, the New Zealand Productivity Commission has also expressed concern that such a category may actually make some forms of gig-work uneconomical and thus reduce “opportunities for work and value creation”.73

While an extensive discussion of this proposal is outside the scope of this dissertation, the concerns identified in this chapter should be sufficient to deter policymakers from implementing a third category of “worker” in New Zealand.

2: “Safe harbour” solution

Currently, gig-working firms such as Uber are unable to give increased benefits and support to their workers without risking being classified as employers. This is due to the integration test, which points in favour of an employment relationship if workers are given benefits traditionally afforded to employees, as this can be evidence of the worker being “part and parcel” of the organisation.74 Uber themselves even acknowledged this risk in their submissions to the Productivity Commission, stating that they would like to offer “more support and benefits” to drivers but are currently unable to due to the “binary construct of employment law” that risks labelling them as employers and ultimately undermining the flexibility of the Uber platform.75

One solution proposed by both the Productivity Commission and Uber is the implementation of a “safe harbour” regime.76 This could be based upon the “social charter” model implemented

70 Productivity Commission, above n 34, at 89.

71 Valerio De Stefano “The Rise of the ‘Just-in-Time Workforce’: On-Demand Work, Crowd Work and Labour Protection in the ‘Gig-Economy’” (2016) International Labour Office Conditions of Work and Employment Series No 71 at 19.

72 See: De Stefano, above n 71, at 20–21.

73 Productivity Commission, above n 34, at 89.

74 Productivity Commission, above n 34, at 88; and see Southern Taxis Ltd v Labour Inspector, above n 31, at [95].

75 Uber Technological change and the future of work – Submission on the Productivity Commission’s Issues Paper (June 2019) <https://www.productivity.govt.nz/assets/Submission-Documents/bc03eb38e0/Sub-027-Uber.pdf> at 4.

76 Productivity Commission, above n 34, at 88–89; and Uber, above n 75, at 5.

in France, allowing firms to apply to the Ministry of Business, Innovation and Employment (MBIE) to seek clarification that their workers are independent contractors.77 If a firm received this certification from MBIE, it would allow them to offer fairer working conditions with increased benefits and support without risking the classification of employment. In order to obtain this certification, the Productivity Commission suggests that firms would need to meet some minimum specified criteria, including: non-exclusivity, allowing workers to freely enter or leave the platform and work for other platforms; fair and transparent termination processes with the ability for appeal; clear communication of changes in conditions or prices; the ability for dialogue between workers and firms; robust health and safety practices; and development opportunities and protections, including things such as parental leave and insurance.78

The potential benefits to gig-workers of implementing such a regime would be two-fold. Firstly, it would allow them to receive extra benefits and support while retaining the more flexible contractor status that many casual gig-workers and platforms rely upon. Secondly, it would encourage firms to meet the minimum “safe harbour” criteria in order to prevent any challenge to their legal employer status, again creating fairer working conditions for gig-workers. While this regime could take considerable time and resources to establish in New Zealand, the potential benefits should encourage policymakers to at least explore implementing a “safe harbour” solution to the unfair working conditions of gig-workers.

The viability of implementing either of the solutions identified in this paper ultimately rests upon the upcoming decision of the Employment Court in the Arachchige case. If the court rules in favour of Uber drivers, then it may set a precedent for the employee status of other types of gig-workers, thus making a “safe harbour” regime redundant. However, this would also risk destroying the “flexibility” of gig-working business models, potentially making them uneconomical and ultimately reducing work opportunities in New Zealand. Regardless of the solution taken, it is clear that our traditional approach to employment classification needs to evolve in response to algorithmic management if modern workers are to receive fair working conditions.

77 Productivity Commission, above n 34, at 88; and Uber, above n 75, at 5.

78 Productivity Commission, above n 34, at 88.

Chapter III: Algorithmic Discrimination in the Hiring Process

The next legal challenge of algorithmic management arises from the use of discriminatory hiring/recruitment algorithms to assess job candidates. This chapter will examine how “algorithmic discrimination” can occur within these algorithms, as well as the potential effect on jobseekers if the current law is not re-worked to address this novel form of bias.

A: The Use of Algorithms in the Hiring Process

The rise of machine learning algorithms in conjunction with so-called “big data”79 has enabled employers to increasingly automate decisions about who they should hire. Often referred to as “people analytics”, this new data-driven hiring practice aims to assist employers in the recruitment process by predicting which candidates are “better” than others.80

As mentioned in Chapter I, these algorithms can use many different types of data about candidates in order to make their predictions.81 This data can range from things such as candidates’ resumes, social media data, or even data obtained from specially developed tests or games.82 Software company Entelo provides an example of a model that uses information obtained from candidates’ social media and internet footprints. Their software has been described as “follow[ing] the digital footprint of your candidates with social and professional information aggregated from over 50 sites across the web”.83 These AI-based hiring algorithms are not exclusive to North America, as New Zealand-based company QJumpers also offers recruitment software that “automatically scours publicly available data, such as networking sites, social media, company websites and blogs, to identify candidates for your job”.84 The information obtained from this automatic data collection, such as credentials, location, skills, education or experience, is then prioritised to “produce a ranked list of potential candidates”.85 While QJumpers is likely not as advanced as the machine learning prediction algorithms in North America, it still highlights the fact that employers in New Zealand are increasingly seeking to automate their hiring practices.
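As a rough illustration of how such a “ranked list of potential candidates” might be produced, the following Python sketch scores candidates with a simple weighted sum over whichever attributes a developer has chosen to include. The candidates, attributes and weights are hypothetical and are not drawn from QJumpers, Entelo or any vendor’s actual model.

```python
# Illustrative sketch only: one way a "ranked list of potential candidates" could
# be produced from aggregated attributes. The candidates, attributes and weights
# are hypothetical and are not drawn from any vendor's actual model.

candidates = [
    {"name": "A", "years_experience": 6, "skills_matched": 4, "distance_km": 5},
    {"name": "B", "years_experience": 2, "skills_matched": 5, "distance_km": 30},
    {"name": "C", "years_experience": 9, "skills_matched": 2, "distance_km": 12},
]

# Every modelling choice below (which attributes count, and how much) is a human one.
weights = {"years_experience": 1.0, "skills_matched": 2.0, "distance_km": -0.1}

def score(candidate):
    """Weighted sum over whichever attributes the developer chose to include."""
    return sum(weight * candidate[attr] for attr, weight in weights.items())

for c in sorted(candidates, key=score, reverse=True):
    print(c["name"], round(score(c), 1))
```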

79 Defined as “extremely large data sets that may be analysed computationally to reveal patterns, trends, and associations, especially relating to human behaviour and interactions” (Oxford Dictionary)

80 Kim, above n 11, at 860.

81 See the discussion in Chapter I about hiring algorithms.

82 Kim, above n 11, at 861-863; and Bornstein, above n 12, at 531.

83 HR.com “Entelo Platform” <www.hr.com/buyersguide/product/view/entelo_entelo_platform> as cited in Bornstein, above n 12, at 531.

84 QJumpers Recruitment Software “Why QJumpers” <www.qjumpers.co.nz/why-qjumpers>.

85 QJumpers, above n 84.

Advocates of this new algorithmic hiring software claim that it not only increases efficiency within the hiring process, but also increases diversity.86 The claim is that since these algorithms replace potentially biased human decision-making with objective data-driven analysis, they reduce the possibility for discrimination within the hiring process.87 For example, Infor Talent Science aims to “help organisations build diverse teams”88 and Entelo takes this a step further with their bold promise to “eradicate unconscious bias” within the hiring process.89 Likewise, Australian-based recruitment software provider JobAdder echoes these sentiments with their promise of providing “the data [needed] to implement real change to move the needle when it comes to diversity”.90

Despite these alleged diversity increases, there is a growing concern amongst academics that these sorts of hiring algorithms could in fact result in further discrimination rather than eradicating it.91 The fear is that the unconscious bias exhibited by human managers will merely be replaced by data-driven algorithmic bias that could cause difficulties for our current legal framework.92 This has been described by Ifeoma Ajunwa as “a legal paradox” wherein the algorithms intended to prevent discrimination actually end up causing it.93 The next section of this chapter will examine the ways in which this algorithmic discrimination can arise, and the potential effects it can have on individuals.

B: The Threat of Algorithmic Discrimination and Bias

1: Sources of discrimination and bias

The main discriminatory threat posed by hiring algorithms has been referred to as “classification bias”.94 Pauline Kim defines this as “the use of classification schemes that have the effect of exacerbating inequality or disadvantage along lines of race, sex, or other protected characteristics”.95 In the context of hiring algorithms, this classification bias could arise when an employer uses an algorithm to classify candidates as either “good” or “bad” based on a

86 Bornstein, above n 12, at 532.

87 Kim, above n 11, at 869; and see Alex Miller “Want Less Biased Decisions? Use Algorithms.” Harvard Business Review (online ed, 26 July 2018).

88 Infor “Infor Talent Science” <www.infor.com/products/talent-science>.

89 Entelo “Entelo Diversity” <https://www.entelo.com/products/platform/diversity/>.

90 Job Adder “Recruitment Analytics” <www.jobadder.com/recruitment-analytics>.

91 See: Kim, above n 11; Bornstein, above n 12; Gavaghan and others, above n 21, at 43; Heatley, above n 24; and Barocas and Selbst, above n 24.

92 Barocas and Selbst, above n 24; and Kim, above n 11.

93 Ifeoma Ajunwa “The Paradox of Automation as Anti-Bias Intervention” (2020 Forthcoming) 41 Cardozo L Rev <www.ssrn.com>.

94 Kim, above n 11, at 890.

95 At 890 – 891.

number of different input variables.96 Given that these algorithms operate based on “objective” data, it may not be immediately clear how they could result in discriminatory or biased outcomes.97 However, as Solon Barocas and Andrew Selbst identified in their article on the subject, there is a taxonomy of different ways in which this data-driven bias can arise in practice.98

The first way in which bias or discrimination can arise within these algorithms is when the “target variable” is being defined.99 While in some situations this is straightforward, it proves to be more challenging when defining what makes a “good” employee. For example, an algorithm that detects spam emails will generally not have trouble defining the target variable, as any given email is simply either spam or not spam.100 However, when attempting to define what makes a “good” employee, the options are not merely binary, and there can be a multitude of different factors that come into play. For instance, the developer of a hiring algorithm must choose between predicted factors such as higher sales figures or longer job tenure when seeking to define what makes an employee “good” or “bad”.101 This process of choosing which variables to target can risk reintroducing human biases into the algorithm, as different choices may have an adverse impact on protected groups of individuals.102 For example, if a hiring algorithm made decisions based on predicted job tenure, and the job turnover rate of Māori was systematically higher than that of other groups, then the algorithm would behave in a manner that disadvantaged Māori.
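A small simulation can make this point concrete. The following Python sketch fabricates two groups of candidates who perform identically on sales but differ systematically in tenure; the only thing that changes between the two runs is the developer’s choice of target variable. All figures are invented for illustration and are not a claim about the actual turnover rates of any group.

```python
# Hedged illustration only: how the choice of target variable alone can produce
# the disparate outcome described above. All data is fabricated; it is not a
# claim about actual tenure or turnover rates for any group.
import random

random.seed(1)

def candidate(group):
    # Both groups are equally "good" on sales, but group B has systematically
    # shorter tenure (for structural reasons unrelated to merit).
    sales = random.gauss(100, 10)
    tenure_months = random.gauss(36 if group == "A" else 24, 6)
    return {"group": group, "sales": sales, "tenure": tenure_months}

pool = [candidate("A") for _ in range(500)] + [candidate("B") for _ in range(500)]

def hire_rate_by_group(target, threshold):
    """Share of each group selected when the algorithm screens on `target`."""
    hired = [c for c in pool if c[target] >= threshold]
    return {g: round(sum(c["group"] == g for c in hired) / 500, 2) for g in ("A", "B")}

print("target = sales :", hire_rate_by_group("sales", 100))   # roughly equal hire rates
print("target = tenure:", hire_rate_by_group("tenure", 30))   # group B selected far less often
```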

A similar instance of algorithmic bias can also occur during the process of “feature selection” when the developer chooses which attributes to include in the algorithmic analysis.103 As Barocas and Selbst explain, this choice of attributes can have “serious implications for the treatment of protected classes”.104 If there are certain attributes that account for some variation within a protected group of individuals, and these attributes are not considered in the analysis, then the algorithm risks making broad and incorrect generalisations about members of the group.105 This same concern can also arise when seemingly discriminatory attributes, like race

96 Recall the discussion in the previous section about the types of data used by these algorithms.

97 See: Ajunwa, above n 93, at 13 – 14 for general discussion about the problem of “data objectivity.”

98 Barocas and Selbst, above n 24, at 677 – 693.

99 At 677 – 680.

100 At 678.

101 At 679.

102 At 680.

103 At 688 – 690; and Kim, above n 11, at 877.

104 Barocas and Selbst, above n 24, at 688.

105 At 688.

or sex, are excluded from the analysis. The resulting bias is known as “omitted variable bias”.106 Pauline Kim exemplifies this risk with a hypothetical model that considers military history as an indicator of job performance.107 In this example, military history is highly correlated to positive work performance amongst African Americans, but highly correlated to negative work performance amongst white workers.108 If the developer of this hypothetical model has chosen not to include race as an attribute, then the model might make the incorrect and broad generalisation that all workers with a military history are likely to be worse performers when, in fact, the opposite is true for African Americans.109 The resulting model would therefore disadvantage African American jobseekers with a military history. If, however, the developer had chosen race as an attribute, then the model would have realised that the correlation between poor work performance and military history only existed for white workers. This example shows how the process of attribute selection can often result in discriminatory outcomes.
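Kim’s hypothetical can be reproduced numerically. In the following Python sketch, every number is fabricated purely to illustrate omitted variable bias: pooling the two groups makes military history look like a negative signal, while splitting by race reveals that the relationship is reversed for one group.

```python
# Toy reconstruction of Kim's hypothetical as described above. Every number is
# invented purely to illustrate omitted variable bias; it reflects no real data.
import random

random.seed(2)

def worker(race, military):
    # Hypothetical: military history predicts better performance for African
    # American workers and worse performance for white workers.
    base = 50
    if military:
        base += 15 if race == "African American" else -15
    return {"race": race, "military": military, "performance": random.gauss(base, 5)}

data = []
for race, n in (("African American", 100), ("white", 400)):   # white workers are the majority
    for military in (True, False):
        data += [worker(race, military) for _ in range(n)]

def avg_perf(rows):
    return round(sum(w["performance"] for w in rows) / len(rows), 1)

vets = [w for w in data if w["military"]]
non_vets = [w for w in data if not w["military"]]

# Model WITHOUT race: pooled data makes military history look like a bad signal.
print("pooled: vets", avg_perf(vets), "vs non-vets", avg_perf(non_vets))

# Model WITH race: within each group the relationship reverses for one of them.
for r in ("African American", "white"):
    print(r, ": vets", avg_perf([w for w in vets if w["race"] == r]),
          "vs non-vets", avg_perf([w for w in non_vets if w["race"] == r]))
```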

Algorithmic bias also frequently occurs as a result of the “training data” that is used to teach the model.110 If this training data is non-representative of a group in society, then the resulting model could behave in a discriminatory manner.111 For example, Amazon recently scrapped one of their hiring algorithms after they realised that the model was blatantly discriminating against women.112 The algorithm had been trained on resumes submitted to the company over a 10-year period which, due to the male-dominated tech industry, were mostly from men.113 The algorithm then learned to prefer male candidates over women, and actually penalised resumes containing words relating to women.114 This discrimination arose from a combination of both the training data being non-representative of women, and from the historical lack of women within the technology industry. Similar algorithmic discrimination can also arise when the training data incorporates historical biased decisions or judgments.115 As Barocas and Selbst explain, when training data is itself skewed by bias, the resulting algorithm will “produce results that are at best unreliable and at worst discriminatory”.116 For instance, if a model is trained on prior hiring decisions made by a discriminatory employer, then the resulting

106 Kim, above n 11, at 878.

107 At 879.

108 Note that this is simply a hypothetical example based on no statistical analysis.

109 Kim, above n 11, at 879.

110 Barocas and Selbst, above n 24, at 680 – 687.

111 Heatley, above n 24.

112 Jeffrey Dastin “Amazon scraps secret AI recruiting tool that showed bias against women” Reuters Technology News (online ed, San Francisco, 10 October 2018).

113 Dastin, above n 112.

114 Dastin, above n 112.

115 Barocas and Selbst, above n 24, at 682.

116 At 684.

algorithm is likely to reflect those biases. This is because the algorithm might observe, for example, that fewer women or racial minorities have historically been hired for a specific role and conclude that they are therefore less suitable candidates.117
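The mechanism can be shown in miniature. The following Python sketch is a deliberately crude, hypothetical stand-in for the Amazon example: it “trains” on fabricated historical hiring decisions that were themselves biased, and the learned term scores inherit that bias.

```python
# Hedged sketch only: why a model trained on historically skewed decisions will
# reproduce that skew. The "resumes" and past outcomes are fabricated and grossly
# simplified; this is the mechanism in miniature, not Amazon's actual system.
import random
from collections import defaultdict

random.seed(3)

def historical_example():
    gender = random.choice(["m"] * 8 + ["f"] * 2)                 # male-dominated history
    resume = {"python", "degree"} | ({"womens_chess_club"} if gender == "f" else set())
    hired = random.random() < (0.5 if gender == "m" else 0.2)     # biased past decisions
    return resume, hired

history = [historical_example() for _ in range(5000)]

# "Training": score each resume term by the historical hire rate of resumes containing it.
seen, hired_with = defaultdict(int), defaultdict(int)
for resume, hired in history:
    for term in resume:
        seen[term] += 1
        hired_with[term] += hired

term_score = {t: round(hired_with[t] / seen[t], 2) for t in seen}
print(term_score)
# The term appearing mainly on women's resumes inherits the lower historical hire
# rate, so the learned scores penalise it, the pattern reported in the Amazon case.
```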

Another potential source of algorithmic bias occurs when seemingly neutral attributes act as “proxies” for protected characteristics such as race or sex.118 For example, Cathy O’Neil describes a hiring algorithm used by Xerox which discovered that the distance between candidates’ homes and the workplace was a predictor of job tenure.119 If the neighbourhoods surrounding the workplace were predominantly white, then the algorithm could have a racially disproportionate impact despite not directly using race as an input characteristic.120 The ability for attributes to serve as proxies in this manner, combined with the possibility of omitted variable bias, means that simply removing protected characteristics from the algorithm will not necessarily prevent discriminatory outcomes from occurring.
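The proxy problem can also be demonstrated with fabricated data. In the following Python sketch, race is never given to the selection rule; filtering on distance alone still skews the shortlist because, in this invented city, neighbourhood composition is correlated with race.

```python
# Illustrative sketch of the proxy problem with fabricated data: race never enters
# the rule, yet selecting on "distance from the workplace" still skews the result
# because neighbourhood composition is (in this invented city) correlated with race.
import random

random.seed(4)

def applicant():
    near = random.random() < 0.5                                   # lives near the office?
    race = "white" if random.random() < (0.8 if near else 0.3) else "minority"
    distance_km = random.uniform(1, 8) if near else random.uniform(8, 40)
    return {"race": race, "distance_km": distance_km}

pool = [applicant() for _ in range(10000)]
shortlist = [a for a in pool if a["distance_km"] < 8]              # "neutral" rule, no race used

def minority_share(rows):
    return round(sum(a["race"] == "minority" for a in rows) / len(rows), 2)

print("minority share of applicant pool:", minority_share(pool))        # ~0.45
print("minority share of shortlist     :", minority_share(shortlist))   # ~0.20
```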

2: The novel impact of algorithmic discrimination

Many legal scholars have suggested that algorithmic discrimination is not a “novel topic of legal inquiry” because it is not substantially different from biased human decision-making.121 However, this section seeks to disprove that notion and provide some examples of the novel impacts that algorithmic discrimination can have when compared to its human counterpart.

While it is true that human managers will inevitably make biased hiring decisions, the potential adverse reach of one biased human manager pales in comparison to an algorithm that could unfairly prevent thousands of individuals from obtaining employment.122

An often-mentioned example that illustrates this wide adverse algorithmic reach is the potential for “blackballing”.123 Cathy O’Neil tells the story of a young college student who was continuously rejected from minimum wage jobs because a personality test used in the hiring process identified that he had prior mental health issues.124 Since every company that he applied to was using the same personality test, he was effectively “blackballed” from finding a low-

117 Barocas and Selbst, above n 24, at 682.

118 Kim, above n 11, at 877; Alexander and Tippett above n 10, at 993; and Barocas and Selbst, above n 24, at 691 – 692.

119 Cathy O’Neil Weapons of Math Destruction (1st ed, eBook ed, Crown Publishers, New York, 2016) at chapter 6.

120 See: Kim, above n 11, at 863.

121 Ajunwa, above n 93, at 4.

122 At 8.

123 See: Alexander and Tippett at 994; and O’Neil, above n 119.

124 O’Neil, above n 119.

paying job.125 While this example dealt with personality tests rather than algorithms, it highlights the potential adverse impact that can occur to individuals when multiple companies utilise the same data-driven hiring practices. If, for example, a particular hiring algorithm was being used by a large number of companies (or different algorithms that were sensitive to similar characteristics), it could result in some individual candidates being “borderline unemployable”.126 Given the potential for algorithmic discrimination, this blackballing could occur at a disproportionately higher rate amongst members of protected groups. The negative effect is also increased by algorithmic opacity, as candidates may be unaware of the specific factors that have led to them being algorithmically blackballed or may not even realise that they have been blackballed at all. It would also be incredibly hard for an individual candidate to prove that they had, in fact, been algorithmically blackballed if they sought to take legal action. The difficulties with applying discrimination law to these algorithms will be discussed further in the next section.

Discriminatory algorithms can also have similar adverse impacts upon entire classes of individuals due to the possibility of “feedback looping”.127 Consider a hypothetical hiring algorithm that discriminates against women. This algorithm may disproportionately exclude women from its pool of “good” candidates due, in part, to the historical hiring practices of a company. Since this algorithm will inevitably result in the employer hiring fewer women, the historical hiring practices will not change, and women will continue to be underrepresented as employees in this company. Therefore, the algorithm’s finding that women were not appropriate candidates will be reinforced, and a “feedback loop” will occur. These feedback loops can occur in many different algorithmic situations and ultimately serve to “reinforce a cycle of bias” against people of a protected group.128
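A toy simulation illustrates how such a loop sustains itself. In the following Python sketch the model’s only “feature” is how common each gender is among past hires, a crude, assumed stand-in for a model that prefers profiles resembling existing staff; all numbers are invented.

```python
# Toy simulation of the feedback loop sketched above, using invented numbers. The
# model's only "feature" is how common each gender is among past hires, a crude,
# assumed stand-in for a model that prefers profiles resembling existing staff.
import random

random.seed(5)

past_hires = ["m"] * 90 + ["f"] * 10           # historically male-dominated workforce
applicants_per_round = [("m", 50), ("f", 50)]  # equally qualified applicant pools

for year in range(1, 6):
    female_share = past_hires.count("f") / len(past_hires)
    for gender, n in applicants_per_round:
        # Hiring probability tracks how familiar that gender is in the training data.
        p = female_share if gender == "f" else 1 - female_share
        past_hires += [gender for _ in range(n) if random.random() < p]
    print(f"year {year}: women make up {past_hires.count('f') / len(past_hires):.0%} of all hires")
```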

C: Applying New Zealand’s Discrimination Law

1: An overview of the law

New Zealand’s approach to workplace discrimination is set out in both the Employment Relations Act (the ERA) and the Human Rights Act (the HRA).129 However, the ERA only

125 O’Neil, above n 119.

126 Alexander and Tippett, above n 10, at 994.

127 See: O’Neil, above n 119; and Kim, above n 11, at 882.

128 Kim, above n 11, at 882.

129 ERA, ss 103 and 104; and HRA, ss 22 and 65.

applies to “employees”130 and not jobseekers.131 Due to this chapter’s focus on jobseekers and discriminatory hiring algorithms, the discrimination provisions of the HRA will be considered instead.132

Firstly, the HRA sets out a list of prohibited grounds of discrimination, which includes things such as race, sex, religion, disability and employment status.133 However, the main provision of relevance here is s 22, which makes it unlawful for an employer to refuse work to applicants, or offer less favourable working conditions, based upon any of the prohibited grounds.134 It also makes it unlawful for anyone concerned with “procuring employment for other persons” to treat those persons differently from others in the same circumstances by reason of any of the prohibited grounds.135 Clearly, both of these provisions are aimed at preventing employers or job-searching agencies from directly discriminating against potential applicants. However, in the context of algorithms, the employer is often treating applicants differently based upon a data-driven analysis, rather than directly due to any of the prohibited grounds. Therefore, algorithmic discrimination would, in most cases, not be direct enough to fall under these provisions alone.

This is where the “indirect discrimination” provision in the HRA becomes useful.136 Indirect discrimination is defined as any “conduct, practice, requirement, or condition, that is not apparently in contravention of any provision... [and] has the effect of treating a person... differently on one of the prohibited grounds”.137 This provision also provides a “good reason” defence for indirect discrimination.138

It seems that, in theory, there is nothing preventing this provision from applying to cases of algorithmic discrimination. If an employer is relying on a hiring algorithm (a practice) and that algorithm has the effect of treating an applicant differently based upon a prohibited ground, then the employer would be in contravention of these discrimination provisions unless they could establish a good reason for it.139

130 ERA, s 6.

131 Note that jobseekers are not “persons intending to work” under s 6 unless they have accepted work as an employee (s 5).

132 Note that the discrimination provisions in the HRA are, in any event, very similar to those in the ERA.

133 Section 21; and also see ERA, s 105.

134 Section 22(1)(a) and (b).

135 Section 22(2).

136 The ERA also contains reference to indirect discrimination in s 104.

137 HRA, s 65.

138 Section 65.

139 Sections 21, 22 and 65.

2: Practical challenges with the current law

This discrimination framework in the HRA appears, on the surface, to be quite capable of applying to algorithmic discrimination. However, there are practical challenges with applying this law that make it an inadequate legal response to the threat posed by algorithms. Most of these legal challenges arise from a mix of transparency and accountability issues.140

As explained by Pauline Kim, algorithmic hiring decisions "typically involve opaque decision processes, rest on unexplained correlations, and lack clearly articulated employer justifications".141 Given that the onus is on the plaintiff to prove that they have been discriminated against,142 this algorithmic opacity could create serious hurdles for jobseekers who have been subject to algorithmic discrimination. These individuals could find it incredibly difficult to detect when they have been discriminated against and, even if they suspected it, they would have an even harder time proving it. The inner workings of these algorithms are often "incomprehensible to humans", so the individual, their lawyer and the courts would all struggle to decipher whether or not the algorithmic decision was fair or based upon some discriminatory classification bias.143 In some cases, the algorithm itself may even be proprietary information, meaning that the applicant would be unable to prove any suspected bias.144 The opaque nature of algorithms, combined with the plaintiff carrying the evidential burden, results in a discrimination framework that makes it very difficult for job applicants to detect breaches of, or enforce, their rights in relation to algorithms.

Another concerning aspect of the current law is the possibility for employers to algorithmically "mask" their discriminatory behaviour and thus evade liability due to these transparency challenges.145 Barocas and Selbst describe "masking" as occurring when an individual exploits the potentially discriminatory mechanisms of algorithms (training data, proxies, feature/input selection etc.) to intentionally discriminate against groups of individuals. The use of algorithms "conceals the fact that the decision makers determined and considered the individual's class membership".146 This, again, makes it a lot harder for job applicants to detect discrimination and enforce their rights. Employers effectively have the opportunity to take advantage of the evidentiary challenges of algorithmic discrimination, allowing them to intentionally discriminate against job applicants without risk of legal accountability.
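The "proxy" mechanism that makes masking possible can be shown with a deliberately simple sketch. The records, suburbs and groups below are invented; the sketch assumes, hypothetically, that suburb is strongly correlated with a protected characteristic, so a facially neutral rule built on suburb alone still tracks group membership:

    # Hypothetical applicant records: the protected attribute ("group") is never
    # given to the model, but "suburb" happens to correlate strongly with it.
    applicants = [
        {"suburb": "A", "group": "X", "hired_historically": 1},
        {"suburb": "A", "group": "X", "hired_historically": 1},
        {"suburb": "A", "group": "Y", "hired_historically": 1},
        {"suburb": "B", "group": "Y", "hired_historically": 0},
        {"suburb": "B", "group": "Y", "hired_historically": 0},
        {"suburb": "B", "group": "X", "hired_historically": 0},
    ]

    # A "neutral" rule learned from historical outcomes, using suburb only.
    hire_rate_by_suburb = {}
    for s in {a["suburb"] for a in applicants}:
        rows = [a for a in applicants if a["suburb"] == s]
        hire_rate_by_suburb[s] = sum(a["hired_historically"] for a in rows) / len(rows)

    def model_recommends(applicant):
        # Recommend only applicants from suburbs with a high historical hire rate.
        return hire_rate_by_suburb[applicant["suburb"]] >= 0.5

    # The recommendation rate differs sharply by protected group even though the
    # protected attribute was never an input: the proxy has done the work.
    for g in ("X", "Y"):
        members = [a for a in applicants if a["group"] == g]
        rate = sum(model_recommends(a) for a in members) / len(members)
        print(f"Group {g}: recommended {rate:.0%} of the time")

An employer engaged in masking could construct or select such proxies deliberately, while pointing to the apparently neutral inputs as evidence of fairness.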

140 Recall the broad challenges outlined in Chapter I.

141 Kim, above n 11, at 907.

142 McClelland v Schindler Lifts NZ Ltd [2015] NZHRRT 45 at [85].

143 See: Edwards and Veale, above n 22, at 26.

144 Kim, above n 11, at 921; and see State v Loomis 371 Wis 2d 235 (Wis 13 July 2016) at [46].

145 Barocas and Selbst, above n 24, at 692–694.

146 At 693.

There is also uncertainty surrounding how the "good reason" defence to indirect discrimination would apply in relation to algorithms.147 Some scholars are concerned that the mere existence of a statistical correlation may satisfy a "good reason" to use an algorithmic model, giving employers a defence to using such models even if they result in unfair or discriminatory outcomes.148 However, given the High Court's historically strict approach towards the good reason defence in New Zealand,149 it seems likely that employers would be required to prove more than mere statistical correlation in order for a discriminatory algorithm to be justifiable. The defence typically involves an employer showing that their practice was "necessary" rather than just "convenient".150 However, what this means in relation to algorithms is not immediately clear. Does this mean that the use of a hiring algorithm, at all, must be necessary? Or that the input variables of the algorithm must be necessary? Or, possibly, that the "target variable" must be necessary? This is more of a legal uncertainty than a challenge, but it is still something that either the courts or legislators will be faced with determining in the future.

D: Potential Solutions

1: Rethinking the current approach to antidiscrimination law

As suggested by Pauline Kim, workplace hiring algorithms call for a “fundamental rethinking [of] antidiscrimination doctrine”.151 One of the ways in which this “rethinking” could take place is in relation to the burden of proof. As noted earlier, the evidentiary burden placed upon plaintiffs can be incredibly hard, if not near impossible, to satisfy when proving or detecting algorithmic discrimination. Ifeoma Ajunwa suggests that this evidentiary challenge be solved by creating a new category of discrimination called “discrimination per se”.152 According to Ajunwa, this new category would “entirely shift the burden of proof from plaintiff to defendant”.153 Therefore, if a job applicant can assert that a hiring algorithm has the potential to be discriminatory (by using “proxy” variables, for example) then the onus would shift to the employer to prove that the algorithm is not discriminatory.

147 HRA, s 65.

148 See: Kim, above n 11, at 921.

149 See: Proceedings Commissioner v Air New Zealand Ltd [1987] NZEOT 1; (1988) 7 NZAR 462; and Northern Regional Health Authority v Human Rights Commission (1997) 4 HRNZ 37.

150 Proceedings Commissioner v Air New Zealand Ltd, above n 149; and Northern Regional Health Authority v Human Rights Commission, above n 149.

151 Kim, above n 11, at 865.

152 Ajunwa, above n 93, at 44–50.

153 At 45.

This would remedy a lot of the transparency issues that plaintiffs are faced with, but would also bring some challenges of its own. Firstly, policymakers will inevitably face difficulties in deciding when such a category of discrimination should apply and, given the burden it places on employers, this is not a decision that would be taken lightly.154 Secondly, this new category of discrimination is arguably too strict on employers and could prevent hiring algorithms from being used in New Zealand. It may be just as difficult for employers to prove that their algorithms are non-discriminatory as it is for applicants to prove that they are discriminatory. This could discourage employers from using hiring algorithms, even if they were operating in a fair manner that actually reduced bias and discrimination.155

Ultimately, however, this new discrimination category would encourage developers and employers to ensure that their hiring algorithms were non-discriminatory before implementing them, and transparent enough to prove as much. These benefits alone make the policy at least worth considering, or discussing, in New Zealand.

2: A regulatory regime for algorithms

Another proposed solution to algorithmic discrimination is implementing a regulatory regime that governs the use of algorithms.156 Under this solution, a regulatory body would be set up, possibly consisting of both legal and computer science experts, that would pre-approve algorithms before they could be used in New Zealand. Software developers would submit their algorithms to this body, which would then check for various things, such as the possibility of discrimination, before deciding whether to approve them. This would take any legal liability away from employers that were using approved algorithms and would instead give plaintiffs a complaint or appeal process through the regulatory body.

This sounds like a decent solution in theory, although there would certainly be challenges implementing such a regime in practice. Firstly, it would no doubt take considerable time, support and resources to establish such a regime. There may be other solutions that are just as effective, but less resource-intensive. Secondly, it may not even be possible for a body of experts to confidently approve algorithms, given their opaque and unintelligible nature. An in-depth examination of this solution is outside the scope of this dissertation; however, it is

154 Ajunwa, above n 93, at 49.

155 Recall that the justification for these algorithms is that they replaced biased humans with neutral data.

156 For discussion in an American context see: Andrew Tutt “An FDA for Algorithms” (2017) 69 Admin L Rev 83.

certainly something worth considering for policymakers when approaching algorithmic discrimination in New Zealand.

Chapter IV: The Justifiability of Algorithmic Decision-Making in the Workplace

While the previous two chapters have examined legal challenges affecting non-standard workers and jobseekers respectively, this chapter focuses instead on standard employees and the influence of productivity/performance management algorithms in workplace decision-making. Specifically, this chapter will examine how algorithmically influenced decisions can affect employees and whether any challenges arise when attempting to justify these decisions under our current employment law.

A: The Use of Algorithms for Management Decisions in the Workplace

As mentioned in Chapter I, it is becoming increasingly common for employers to utilise algorithms that are aimed at predicting or assessing workplace productivity and performance. These algorithms take a vast amount of input data gathered from workplace monitoring of things such as keystrokes, emails, internet use, audio and video surveillance and GPS tracking.157 Often, employers will also utilise other technologies such as wearable devices or mobile tracking apps in order to gather similar data.158 This data is then utilised by machine learning algorithms that can make both predictions and assessments of worker productivity or performance.

For example, an algorithm may use these variables to predict the performance of a worker at different times of the day, or when paired with other individual workers.159 Alternatively, the algorithm may simply combine these variables in order to produce a productivity/performance assessment or "score" for an individual worker. One such algorithm is the "Trigger-Task-Time algorithm" developed by Boston-based start-up company Enaible.160 This algorithm uses a combination of employee monitoring and predictive machine learning technology to produce a "productivity score" for workers between 0 and 100.161

157 Stefano, above n 20, at 8.

158 See: Ifeoma Ajunwa “Algorithms at Work: Productivity Monitoring Applications and Wearable Technology as the New Data-Centric Research Agenda for Employment and Labor Law” (September 2018) 63 St. Louis U LJ 21.

159 Recall Percolata example from Chapter I.

160 Will Douglas Heaven “This startup is using AI to give workers a productivity score” MIT Technology Review (online ed, 4 June 2020).

161 Heaven, above n 160.

According to Enaible, the algorithm “simultaneously factors in complexity, sequence, internal and external factors, patterns, time of day, and duration” in order to produce these productivity scores.162
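Enaible does not publish how its score is actually calculated, so the following Python sketch is entirely hypothetical. It simply illustrates the general idea of combining several normalised monitoring variables into a single 0–100 "productivity score" using fixed weights; the feature names and weights are invented:

    # Entirely hypothetical weights and features; real products do not disclose
    # how their productivity scores are computed.
    WEIGHTS = {
        "tasks_completed_per_hour": 0.4,
        "share_of_time_on_work_apps": 0.3,
        "emails_handled_per_hour": 0.2,
        "avg_response_delay": -0.1,       # slower responses reduce the score
    }

    def productivity_score(features):
        """Combine normalised (0-1) monitoring features into a 0-100 score."""
        raw = sum(WEIGHTS[name] * features[name] for name in WEIGHTS)
        return round(max(0.0, min(1.0, raw)) * 100, 1)

    worker_day = {
        "tasks_completed_per_hour": 0.7,
        "share_of_time_on_work_apps": 0.9,
        "emails_handled_per_hour": 0.5,
        "avg_response_delay": 0.4,
    }
    print(productivity_score(worker_day))   # 61.0

A real system would derive its weightings through machine learning rather than fixing them by hand, which is precisely what makes the resulting scores difficult for employers and employees to interpret.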

Regardless of the way in which a specific algorithm operates, it is clear that these tools can have an impact upon the decision-making processes of employers. Things such as workplace restructuring, job replacement, job description changes, rostering, promoting, demoting and even dismissing can now be at least semi-automated through the use of these algorithms.163 Enaible even offers another algorithm alongside its productivity-scoring software, called "Leadership Recommender", that is said to provide AI-powered leadership recommendations to employers.164 This prospect of algorithmic decision-making has led to many concerns about the welfare of the workers who are subject to these decisions. As Phoebe Moore describes, the use of performance management algorithms to inform decisions about employees could expose them to "heightened structural, physical and psychological risks and stress".165 Again, a lot of these concerns arise from both transparency and accountability issues. Workers will be unable to ensure that decisions are made fairly, honestly, and accurately if they are made on the advice of an inaccessible and incomprehensible algorithm.166 There is also a concern that employers could justify their decisions on the basis of this inexplicable algorithmic reasoning, effectively deferring workplace accountability to the algorithm.

These concerns raise the question of whether, and when, employers should be able to rely upon these algorithms when making decisions that impact upon employees. The upcoming sections will assess whether or not the personal grievance provisions in the ERA sufficiently address this legal challenge of algorithmic justifiability.

B: An Overview of the ERA’s Personal Grievance Provisions

1: Unjustifiable action & dismissal

Section 103(b) of the ERA states that it is a personal grievance if one or more conditions of an employee’s employment are affected to their disadvantage by an “unjustifiable action” of the employer.167 Likewise, s 103(a) also makes it a personal grievance if an employee is

162 Enaible Home Page <www.enaible.io>.

163 See: Phoebe V Moore “The Mirror for (Artificial) Intelligence: In Whose Reflection?” (2019) Vol 41 Comparative Labor Law & Policy Journal 47 at 59.

164 Enaible, above n 162.

165 Moore, above n 163, at 59.

166 At 59.

167 ERA, s 103(b).

"unjustifiably dismissed".168 These specific personal grievances are the two most relevant to performance/productivity management algorithms. If an employer dismisses or disadvantages an employee on the basis of algorithmic advice, and the decision is not "justifiable", then the employer could potentially face a personal grievance claim.

2: “Disadvantage” in relation to algorithms

There are many situations imaginable where an algorithmic decision may result in some sort of “disadvantage” for an employee. However, whether or not they would be cause for a personal grievance depends upon both the particular facts of the relationship, and the particular employment agreement.

For a personal grievance of unjustifiable action to be successful, the action must have disadvantaged the employee by affecting a condition of their employment.169 The courts have typically taken a broad approach to this, with the concept of a "condition" being interpreted as "all the rights, benefits and obligations" arising out of the employment relationship.170 This means that an algorithmically influenced "action" may be subject to a personal grievance even if it does not affect a condition expressly included within the employment contract.171

One situation in which an algorithmically influenced action may be cause for a personal grievance is in relation to scheduling. Recall that workplace algorithms, such as Percolata, are often used to make scheduling decisions, with "better" employees often being given more hours. The Employment Court has, in the past, determined that a reduction in hours can be cause for a personal grievance of unjustifiable action.172 This means that, in some situations, an employer who follows a scheduling algorithm's advice to reduce a specific employee's hours may be subject to a personal grievance claim. Some other algorithmic decisions that could also potentially be covered by s 103(b) are employee demotion173 or workplace restructuring.174 If an employer followed a performance management algorithm's advice to demote or dismiss an employee, or to restructure the workplace in a way that negatively impacted upon an employee, then they could also be subject to a personal grievance claim.

168 Section 103(a).

169 Section 103(b).

170 Tranz Rail Ltd v Rail & Maritime Transport Union (Inc) [1999] NZCA 63; [1999] 1 ERNZ 460 (CA) at [26].

171 See: ANZ National Bank Ltd v Doidge [2005] NZEmpC 77; [2005] ERNZ 518 at [45].

172 See: Mana Coach Services Ltd v Huxford EmpC Wellington WC16/99.

173 See: New Zealand (with exceptions) Shipwrights etc Union v GN Hale & Son Ltd [1991] NZEmpC 102; [1991] 3 ERNZ 931 (EmpC).

174 See: Opai v Commissioner of Police [2020] NZERA 147 at [105].

The situations described above are merely examples out of a plethora of algorithmic actions that could be covered by the “unjustifiable action” grievance. Ultimately, whether or not an action has negatively affected a right, condition, or obligation of employment will depend upon the particular facts of the case, and the associated employment agreement.

3: The s 103A test for justifiability

Now that we have established that algorithmically influenced actions may, in some cases, result in a disadvantage to an employee, the next step is to consider when these actions will be “unjustifiable”. The test for justifiability is contained within s 103A of the ERA, and ultimately rests upon whether the employer’s actions “were what a fair and reasonable employer could have done in the circumstances”.175 When applying this objective test to the particular facts of a case,176 the court must also take into account a list of mandatory considerations.177 These mandatory considerations are concerned with procedural fairness matters such as ensuring that the employer has conducted a sufficient investigation,178 has raised their concerns with the employee,179 has given them a chance to respond to these concerns180 and has considered their response.181 Typically a defect in one of these procedural factors will result in a decision being unjustifiable unless the defect was minor and did not result in the employee being treated unfairly.182

This test for justifiability ultimately depends upon the individual facts of each case, and it is thus inappropriate to determine whether algorithmic decisions in general would be "justifiable" or "unjustifiable". Despite this, however, there are many attributes of algorithmic decision-making that could make the procedural requirements in s 103A very difficult for employers to satisfy. These procedural difficulties could result in decisions arising from performance/productivity algorithms rarely, if ever, being justifiable in practice.

C: The procedural difficulties in relation to algorithmic decision-making

A lot of these procedural difficulties, much like many other challenges discussed throughout this paper, arise as a result of algorithmic opacity and a lack of interpretability. Often, employers may

175 ERA, s 103A(2).

176 Section 103A(1).

177 Section 103A(3).

178 Section 103A(3)(a).

179 Section 103A(3)(b).

180 Section 103A(3)(c).

181 Section 103A(3)(d).

182 Section 103A(5).

find themselves utilising workplace algorithms without a clear understanding of how those algorithms operate or reach their conclusions. For instance, if an employer was utilising productivity management software, such as Enaible, they may understand what sort of data the software is gathering from employees, but not how this data is interpreted and weighted by the algorithm. An employer that then attempted to use the algorithmically produced "productivity score" as the basis for an action or dismissal against an employee may face difficulties when attempting to follow the procedural matters set out in s 103A.

1: Investigation

The first challenge an employer might face in a situation like this would be in relation to conducting a sufficient investigation into the "allegations" against the employee.183 The "allegations" in these algorithmic situations could be things such as an employee having a low productivity/performance score,184 performing better at different times of the day (leading to shift/hour changes)185 or being better suited for a different role within the workplace. An investigation is an important aspect of procedural fairness in these situations, as it ensures that employers are not blindly following an algorithm's advice. When an algorithm makes such an allegation or recommendation, the question of what an employer would need to "investigate" is relatively uncertain. Would an employer be required to look into the inner workings of the algorithm to discover how, or why, a specific result has been reached? Or would a general understanding of the input data and theoretical workings of an algorithm be sufficient? Given that the court must have regard to "the resources available to the employer",186 it seems unlikely that an employer would be expected to investigate the technical inner workings of an algorithm as, due to algorithmic opacity, this may not even be possible.187

Instead of focusing on the workings of an algorithm, an employer might instead be expected to conduct their own investigation into the worker’s performance to assess whether or not the algorithmic allegations are true. However, this can also pose challenges as often algorithmic advice or assessments will be based upon a combination of factors that may not be easily discernible to a human manager. As Edwards and Veale note, the types of data that influence machine learning decisions may “lack any convenient or clear human interpretation in the first

183 ERA, s 103A(3)(a).

184 Enaible produces a “score” whereas other algorithms may use a different assessment.

185 Recall that scheduling algorithms such as Percolata may predict these things.

186 ERA, s 103A(3)(a).

187 See: Edwards and Veale, above n 22, at 26.

place”.188 For example, when assessing an employee’s productivity or performance, an algorithm could use vast amounts of abstract data such as how long an employee spends on different tasks, their mouse and keyboard interactions with those tasks, tone of voice analysis and quality of emails sent. Often, these assessments will not be based solely on typical performance or productivity indicators such as sales or billable hours. This means that a human manager attempting to investigate productivity or performance allegations against an employee may have serious difficulties identifying the factors that have led to the algorithm’s advice. To the human eye, there may appear to be nothing wrong with the employee’s performance or productivity despite the algorithm claiming otherwise. It is currently a legal uncertainty as to what would be required for a sufficient “investigation” into algorithmic advice and how, in practice, an employer would be capable of fulfilling this procedural requirement.

2: Raising concerns with employees

Similar challenges arise in relation to the other procedural requirement of raising concerns with employees.189 In Peng v Drapac, the Employment Relations Authority stated that if an employer seeks to justify dismissal on the basis of poor performance, "clear and precise" warnings must be given about the employee's shortcomings and the improvements sought.190 Likewise, in a different case, the Employment Relations Authority found that a dismissal for poor performance was unjustifiable because the warnings given to the employee were "general in nature" and "no specific performance concerns were documented".191 As discussed earlier, employers may face difficulties in determining what specific factors have led to an algorithm's advice due to both the complex inner workings of machine learning algorithms and the vast amounts of abstract input data involved. In these situations, it would be very difficult, if not impossible, for an employer to give "clear and precise" warnings about why the algorithm has determined the employee's performance to be lacking, and what actions they could take to improve it. Simply notifying an employee of their low algorithmic productivity score,192 and informing them of the types of data gathered by the algorithm, would likely be too "general in nature" to satisfy the procedural requirements.

188 Edwards and Veale, above n 22, at 59.

189 ERA, s 103A(3)(b).

190 Peng v Drapac Ltd ERA Auckland AA525/10 at [25].

191 Chow v TDA Immigration and Student Services Ltd [2012] NZERA Auckland 177 at [14].

192 If an employer was using an algorithm similar to Enaible.

3: Allowing employees to respond to concerns

Employers are also expected to give employees a chance to respond to any concerns raised by the warning.193 The Employment Relations Authority has suggested that this includes giving an employee a "clear understanding" of what the employee would have to do to "improve to the required standard".194 Essentially, an employee must be able to respond to the concerns by improving their performance accordingly. Again, this would be a very difficult requirement to satisfy in some algorithmic situations, as the employer may not know, specifically, what the employee needs to do in order to improve their productivity or performance as assessed by the algorithm. In effect, an employee could be warned about their algorithmic assessment and be left with no insight as to what specific changes they could make to improve their treatment by the algorithm.

In these situations, the defects in process would not be mere minor defects, nor would the resulting treatment of the employee be fair.195 The lack of a sufficient investigation or warning, or of an opportunity for the employee to respond, undermines the entire disciplinary and decision-making process that is expected from employers. These procedural challenges highlight the difficulties that employers would face when seeking to justify decisions or dismissals influenced by some productivity/performance management algorithms. In many cases, machine learning algorithms will simply not be transparent or explainable enough to satisfy the procedural requirements in s 103A, and thus any decision made on the basis of their analysis would be unjustifiable and susceptible to a personal grievance claim.

D: Is the Current Approach to Justifiability Desirable?

As s 103A was enacted well before the rise of algorithmic management, the ERA's current focus on procedural fairness was clearly not designed with the prospect of algorithmic decision-making in mind. The increasing use of algorithms in the workplace may result in an evolution of the way in which employment, and particularly management, is viewed. This raises the question of whether the current approach to justifiability is too strict when applied to algorithms and, subsequently, whether the law should be updated to better accommodate employers seeking to rely upon algorithmic analysis in their decision-making.

193 ERA, s 103A(3)(c).

194 Chow v TDA Immigration and Student Services Ltd, above n 191, at [16].

195 ERA, s 103A(5).

1: Negatives with the current approach

The procedural challenges and subsequent personal grievance claims that employers may face when using algorithms under the current approach could potentially halt the deployment or development of algorithmic management tools in New Zealand. As algorithms become more advanced and their use in workplaces across the world more widespread, our approach to justifiability could result in New Zealand’s workplaces lagging behind those in other parts of the world.

This would also forfeit many of the supposed benefits to workplace fairness and productivity that these algorithms are said to have. Software companies such as Enaible claim that their productivity algorithms allow the most deserving employees to be rewarded, while also encouraging increased productivity in order to receive those rewards.196 Much like with hiring algorithms, these new management tools are said to replace biased human decision-making with objective data. Instead of picking subjective favourites, managers will instead be empowered to make decisions based upon the actual performance of their employees.197 Our current emphasis on procedural fairness means that both employers and employees may be unable to reap these benefits even if the algorithm is operating in a manner that is both fair and accurate. As the nature of employment and management evolves, a fundamental rethinking of the way in which an action is deemed to be "fair and reasonable" may be required in order to fully secure the benefits of algorithmic management.

2: Positives with the current approach

However, despite these supposed benefits, there are also serious concerns that performance and productivity algorithms could actually result in unfairness to some individuals and reduce employer accountability.198 The current emphasis on procedural fairness quells some of these concerns by ensuring that employers are unable to escape liability for algorithmically influenced decisions. Instead of being able to justify decisions or dismissals by merely pointing to algorithmic advice, employers are expected to sufficiently understand that advice to the extent that they can provide employees with a fair process.

Not only does this promote accountability, but it also encourages software developers to find ways to create algorithms that are sufficiently transparent or explainable in order to allow for

196 Heaven, above n 160.

197 O’Connor, above n 1.

198 Moore, above n 163, at 59.

this process. If unexplainable algorithms cannot be used without fear of a personal grievance claim, then employers will likely avoid them and the market will adjust accordingly.

Ultimately, the supposed reduction in workplace bias that could occur as a result of performance and productivity algorithms should not come at the cost of procedural fairness. Even if the inner workings of an algorithm are operating in a manner that is fair and accurate, it would be unfair to subject an employee to a decision made by that algorithm without giving them the opportunity to respond or the information to enforce their rights.

Given that algorithmic management is a developing technology and its associated benefits and risks in practice are still relatively unknown, the ERA’s restrictive approach to justifiability is desirable. An approach that is more tailored to allow for the benefits of algorithmic decision- making may be worth considering in future but, for now, the primary focus should be on ensuring that employees are protected from the potential risks associated with algorithmic management.

Chapter V: Algorithmic Data Collection, Surveillance & Privacy Law

This chapter will examine how New Zealand’s privacy law responds to the legal challenge of algorithmic data collection and surveillance. Unlike the previous chapters, which have each focused on a particular challenge arising from one example of algorithmic management, this privacy challenge arises under all three examples and affects non-standard workers, jobseekers and employees.

A: Algorithmic Data Collection

All three examples of algorithmic management considered throughout this paper rely upon extensive data collection to train and utilise predictive models. For instance, gig-working platforms often collect information such as GPS data about workers in order to “assign, optimise, and evaluate” their work via algorithms.199 Hiring/recruitment software, as mentioned in Chapter III, will often collect data about candidates’ social media or internet footprints.200 Likewise, productivity/performance management algorithms, as discussed in the last chapter, rely upon data ranging from employees’ emails and internet usage to data collected via sociometric devices such as tone of voice analysis.201

199 Lee and others, above n 7; and Mateescu and Nguyen, above n 20, at 3.

200 Kim, above n 11, at 861; Bornstein, above n 12, at 531; HR.com, above n 83; and QJumpers, above n 84.

201 Moore, above n 163, at 59; and Janine Berg "Protecting Workers in the Digital Age: Technology, Outsourcing and the Growing Precariousness of Work" (2019) Vol 41 Comparative Labor Law & Policy Journal 69 at 79.

Naturally, this increased data collection has raised many concerns about workplace surveillance and its potential negative effect on workers. Individuals should not only be concerned about the decisional outcomes of algorithms (i.e. discrimination, justification or fairness) but also about the privacy implications that arise from the surveillance and collection of algorithmic input data. As described by Ajunwa, Crawford and Schultz, the erosion of technological and economic constraints on employers presents the opportunity for truly “limitless worker surveillance”.202

This unprecedented level of surveillance poses risks to worker freedom, privacy, autonomy and even safety.203 If workers feel that they are being constantly spied on by employers, or are pressured to meet certain electronically monitored targets, then they may be more likely to take health and safety risks that could negatively affect their physical wellbeing.204 Increased workplace surveillance could also create incentives to "beat the system", resulting in workers breaking workplace rules, or even the law, and putting themselves and their colleagues in physical or legal danger.205 However, it is not only physical wellbeing that is at risk here, as electronic monitoring is also likely to increase stress and fear amongst workers, leading to potential repercussions for mental wellbeing.206 The increased control that this monitoring provides to employers over how workers perform their jobs, especially those on gig-working platforms or in warehouse jobs,207 could exacerbate these mental health risks, as workers may feel that they lack any autonomy or individuality within the workplace.

Due to these potential risks, it is imperative that New Zealand has sufficiently strong privacy laws to respond to the novel and intrusive forms of surveillance that algorithmic management relies upon. Not only could strong privacy law deal with these issues of surveillance and control, it could also quell many of the wider concerns relating to algorithmic management such as discrimination or accountability. If an employer was unable to lawfully collect or use the data that is required by hiring and performance management algorithms, then those algorithms would be ineffective, and their associated risks could be reduced.

202 Ajunwa and others, above n 2, at 109.

203 See: Antonio Aloisi and Elena Gramano “Artificial Intelligence Is Watching You at Work. Digital Surveillance, Employee Monitoring and Regulatory Issues in the EU Context” (2019) Vol 41 Special Issue of Comparative Labor Law & Policy Journal 95 at 106.

204 Ajunwa and others, above n 2, at 110; and Moore, above n 163, at 59.

205 Ajunwa and others, above n 2, at 110.

206 At 110.

207 Berg, above n 201, at 81–82.

B: The Privacy Act

1: An overview of the Act

New Zealand's legislative framework for privacy law is contained within the Privacy Act 2020,208 which sets out 13 different "privacy principles" regarding the use, collection, disclosure, access, correction and storage of "personal information".209 While all of these principles are important aspects of our privacy law, I will focus specifically on principles 1–4, as they are the most relevant to algorithmic data collection in the workplace.

Principle 1 is arguably the most important, as it provides that an “agency”210 can only collect personal information if it is necessary for a lawful purpose connected with a function or activity of that agency.211 Principle 2 then states that personal information must be collected from the individual concerned212 and principle 3 sets out some information that must be given to these individuals upon collection, such as the purpose for collection and recipients of that information.213 Principle 4 is also particularly relevant to algorithmic data collection, as it states that personal information must be collected legally and in a manner that is fair and not unreasonably intrusive.214

2: Practical challenges in relation to workers

These privacy principles appear, in theory, to provide extensive protection to the privacy rights of both workers and prospective workers. However, there are some practical challenges that arise when applying these principles to algorithmic management that severely limit their effectiveness in the workplace.

The first challenge arises from s 21, which requires the Privacy Commissioner to balance privacy with other interests such as the ability for businesses to “achieve their objectives efficiently”.215 One of the main justifications for implementing algorithmic management is business efficiency, as hiring algorithms are intended to increase efficiency by reducing the amount of time that management spends screening resumes and finding talented candidates,

208 This will replace the Privacy Act 1993 when it comes into force later this year. See: Feilidh Dwyer "New Privacy Act to commence on 1 December" (18 March 2020) Office of the Privacy Commissioner <https://privacy.org.nz/blog/new-privacy-act-to-commence-on-1-november/>.

209 Privacy Act 2020, s 22; formerly Privacy Act 1993, s 6.

210 Note that “agency” is given a wide interpretation here and would include an employer or prospective employer. See: Section 8; and Office of the Privacy Commissioner “What is an agency?” (2013)

<https://privacy.org.nz/further-resources/knowledge-base/view/512?t=224753_309547>.

211 Section 22, Information privacy principle 1.

212 Section 22, Information privacy principle 2.

213 Principle 3.

214 Principle 4.

215 Section 21(a)(ii).

while productivity/performance management algorithms boast efficiency increases for both management and workers.216 This results in a balancing exercise that could permit the business efficiency increases of algorithmic management to “trump workers’ privacy interests” and “operate in a manner that is less strict on employers”.217

The next challenge arises from the historically wide approach towards the "necessity" of collection under principle 1. This necessity requirement has been interpreted by the courts as "reasonably necessary"218 and, according to Paul Roth, the Privacy Commissioner will also generally take a "wide view" towards the necessity of collection.219 This provides employers with a low threshold for necessity that is not difficult for them to satisfy,220 and potentially permits more types of algorithmic data collection in the workplace. However, this necessity requirement may be strengthened in the 2020 Act, as principle 1 now states that if the purpose for which information is collected "does not require" it, then the agency cannot require the collection of that information.221 This addition may create a higher threshold for employers seeking to collect information; however, the Privacy Commissioner would still need to balance other considerations under s 21, so it is unclear how much difference it will make in practice.

C: Applying the Privacy Act to Different Examples of Algorithmic Data Collection

This section will examine the application of the Privacy Act to various forms of algorithmic data collection. Definitive answers cannot be provided, as it ultimately depends on the Privacy Commissioner's discretion in each case; however, past decisions involving less advanced forms of data collection may provide some insight.

1: Is algorithmic data “personal” information?

The Privacy Act only applies to "personal information"222 which is defined as "information about an identifiable individual".223 Case law from both the High Court and the Human Rights Review Tribunal suggests that "identifiable" does not necessarily mean that the individual can be identified by the information itself, and something else, such as an index number, can make

216 See: Esther Kaplan “The Spy Who Fired Me: The human costs of workplace monitoring” Harpers Magazine

(online ed, March 2015).

217 Paul Roth “Privacy Law Reform in New Zealand: Will it Touch the Workplace?” (2016) Vol 41 No 2 New Zealand Journal of Employment Relations 36 at 39.

218 Lehmann v Canwest Radiowords Limited [2006] NZHRRT 35 at [51].

219 Roth, above n 217, at 40.

220 At 41.

221 Section 22, Information privacy principle 1(2).

222 Privacy Act 2020, s 22.

223 Section 7.

the information identifiable.224 In the context of algorithmic management, non-identifiable information such as internet history or keystrokes will usually be combined with an individual’s name or ID number in order to assess them on an individual level. Therefore, most information collected for algorithmic management, despite not being inherently identifiable, will still be “personal information” for the purposes of the Privacy Act.
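A minimal sketch of this point, using invented staff IDs and figures: monitoring data that looks anonymous on its own becomes information about an identifiable individual as soon as it can be joined to something like a staff directory:

    # Keystroke counts keyed only to an internal staff ID appear anonymous...
    keystroke_log = {"EMP-0423": 18250, "EMP-0511": 9120}

    # ...but the employer also holds a directory linking those IDs to people.
    staff_directory = {"EMP-0423": "J. Smith", "EMP-0511": "A. Brown"}

    # Joining the two makes every monitoring record identifiable, and therefore
    # "personal information" in the sense discussed above.
    identified = {staff_directory[emp_id]: count for emp_id, count in keystroke_log.items()}
    print(identified)   # {'J. Smith': 18250, 'A. Brown': 9120}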

2: Audio or video recordings of employees and prospective employees

Performance or productivity management algorithms often collect and analyse audio or video recordings of employees that are gathered in increasingly sophisticated ways. For example, analytics company Humanyze now offers a wearable sociometric badge that tracks and records the frequency and duration of employees’ interactions throughout the day.225 Similarly, other sociometric badges can also utilise microphones to evaluate things such as an employee’s tone of voice and emotional state.226 This sort of information can be useful for algorithmically assessing performance or productivity, specifically in sales jobs where an employee’s performance can be assessed in part by their verbal interactions with customers. Performance/productivity algorithms may also utilise workplace video recordings (such as from an employee’s webcam) in order to gather data about an employee’s workplace practices or even to assess an employee’s mood using facial recognition software.227

Similarly, audio and video analysis can also be used in the hiring process as it is becoming increasingly popular among large companies to film and analyse job interviews using software such as HireVue.228 This type of software uses both audio and video footage from the interview, along with artificial intelligence, to judge candidates on verbal and non-verbal cues.229 Whether this sort of information would be legally collectable under the Privacy Act ultimately depends upon the employer’s purpose for collection.230 Recently, the Privacy Commissioner found that constant audio recording in NZ Post vehicles was in breach of privacy principle 1 because the Commissioner was not convinced that audio recording was necessary for “safety

224 See: Tapiki and Eru v New Zealand Parole Board [2019] NZHRRT 5 at [61]; and Sievwrights v Apostolakis

HC Wellington CIV-2005-485-527 at [17]–[18].

225 Note that the content of these interactions is not recorded. See: Humanyze “Privacy by Design”

<www.humanyze.com/data-privacy>; and Adams-Prassl, above n 37, at 14.

226 Matthew Bodie and others "The Law and Policy of People Analytics" (2017) 88 U Colo L Rev 961 at 971.

227 Ulrich Leicht-Deobald and others "The Challenges of Algorithm-Based HR Decision-Making for Personal Integrity" (2019) 160 J Bus Ethics 377 at 379.

228 Moore, above n 163, at 59–60; and see HireVue “Pre-Employment Assessments”

<www.hirevue.com/products/assessments>.

229 Moore, above n 163, at 59–60.

230 Privacy Act 2020, s 22 information privacy principle 1.

purposes".231 The Commissioner's reasoning in this case was that audio recordings would neither prevent accidents from occurring nor lead to changes in safety policies.232 However, an employer using audio or video recordings for algorithmic purposes would likely justify collection on another basis, such as making fairer and more accurate management decisions, assessing worker performance, increasing management efficiency or improving workplace productivity. Unlike in the NZ Post case, the employer could likely prove that the collection of audio and video recordings did, in fact, assist with algorithmic analysis that enabled one of these purposes. The likelihood of satisfying principle 1 is again increased by both the wide interpretation of "necessary" and the Commissioner's duty to balance privacy with other concerns such as business efficiency.

The audio recording of NZ Post workers was also found to be in breach of principle 4, with the Commissioner stating that it would be "unsettling... and unreasonably intrusive" to constantly record audio within the vehicles.233 However, this was primarily because the content of the drivers' daily interactions was being recorded, which affected the privacy and dignity of both the drivers and those whom they interacted with while working.234 Unlike these traditional forms of audio/video recording, algorithmic management is usually concerned with "metadata" rather than the actual content of recordings. This metadata can be thought of as "data about other data" and, in the context of audio and video, could include things such as tone of voice analysis, the length and number of conversations, facial expressions and eye movements.235 Unlike in the NZ Post case, audio/video surveillance for algorithmic purposes will often not intrude into the personal affairs of workers because it is merely collecting and measuring these types of metadata without retaining the original recording. It is unclear whether this would affect the application of the Privacy Act in practice; however, the Commissioner's reasoning in the NZ Post case suggests that the collection of metadata may be considered less intrusive under principle 4. Therefore, the Privacy Act is unlikely to provide much protection against algorithmic audio or video recording in the workplace.
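The distinction between content and metadata can be made concrete with a short sketch. It shows one plausible way a monitoring tool might retain only derived measurements (when a conversation occurred, how long it lasted, a crude loudness figure standing in for tone analysis) while discarding the recording itself; the function and measurements are hypothetical and far simpler than real sociometric software:

    from dataclasses import dataclass

    @dataclass
    class ConversationMetadata:
        started_at_s: float    # seconds since the start of the shift
        duration_s: float
        mean_volume: float     # crude stand-in for tone-of-voice analysis

    def summarise_and_discard(audio_samples, started_at_s, sample_rate=16000):
        """Return metadata about a conversation; the raw audio is never stored."""
        duration_s = len(audio_samples) / sample_rate
        mean_volume = sum(abs(s) for s in audio_samples) / max(len(audio_samples), 1)
        # Only the derived figures leave this function; the samples are dropped.
        return ConversationMetadata(started_at_s, duration_s, mean_volume)

    # Toy "recording": a few fake samples standing in for a real audio capture.
    fake_samples = [0.1, -0.2, 0.05, 0.3] * 4000
    print(summarise_and_discard(fake_samples, started_at_s=3600.0))

Whether retaining only such derived figures is genuinely less intrusive than keeping the recording is, of course, the very question the Commissioner would need to confront.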

3: Data gathered from workplace computers

As discussed throughout this paper, algorithmic management will often track workplace computers or devices for information about employees’ emails, internet usage, keystrokes,

231 Case Note 289943 [2018] NZPriv Cmr 5.

232 Case Note 289943, above n 231.

233 Case Note 289943, above n 231.

234 Case Note 289943, above n 231.

235 See: Adams-Prassl, above n 37, at 16.

mouse movements or other interactions with their computer. According to a recent survey from the American Management Association, approximately two-thirds of US companies already track their employees' internet use, 43 percent monitor their emails and 45 percent log their keystrokes.236 While the statistics in New Zealand are unclear, this sort of data collection is only going to increase as new algorithmic management software continues to provide more ways in which this data can be utilised and assessed.

In the past, the collection of email and keystroke data from work computers for the purposes of an employment investigation was found by the Commissioner to comply with the Privacy Act.237 The purpose of such an investigation is to examine concerns about employee behaviour that may breach their obligations to their employer. Most employers would expect that their employees are using their computers for work, rather than personal activity; therefore, measuring the quantity or quality of the actual "work" an employee is doing via an algorithm that collects data about keystrokes, internet use or emails may be a justifiable policy under privacy principle 1. Again, the wide interpretation of "necessary" and the duty for the Commissioner to balance other interests mean that employers will usually be able to satisfy this privacy principle.

Regardless, there are questions surrounding whether or not this type of data collection is even subject to the Privacy Act at all. According to Paul Roth, there are “no privacy rights per se” in respect of workplace email or internet use and any challenge to this collection is usually dealt with by justifiability under employment law.238 There is also a technical argument that the employer is not “collecting” this sort of data, as workplace emails and internet history are already “held” in the employer’s computer system.239

Even if this was found to be “collection” for the purposes of the Privacy Act, it also seems unlikely that it would breach principle 4 for being “unreasonably intrusive” as, again, algorithmic data collection is usually concerned with metadata. Instead of allowing employers to see the potentially personal content of emails, internet use or keystrokes, algorithmic data collection will often be concerned with analysing things such as the number or timing of emails sent and received, the work-relatedness of internet use, the number of keys pressed in different time periods, or common phrases used in emails or other applications. The Privacy Commissioner has previously found keystroke collection to be allowable under the Privacy Act

236 Berg, above n 201, at 79.

237 Case Note 229558 [2012] NZPrivCmr 1.

238 Roth, above n 217, at 47.

239 Roth, above n 217, at 47; and see Privacy Act 2020, s 7 – definition of “collect” excludes unsolicited information.

even when it gave the employer direct access to an employee’s passwords for personal email accounts.240 Therefore, it seems likely that the less intrusive collection of similar metadata in the workplace would also be permitted under the Privacy Act.
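To illustrate the kind of metadata involved, the sketch below aggregates hypothetical timestamped monitoring events into hourly counts of keystrokes and emails, without ever storing what was typed or written. The event format and figures are invented:

    from collections import Counter
    from datetime import datetime

    # Hypothetical monitoring events: (timestamp, event type), with no content.
    events = [
        ("2020-10-01T09:03:11", "keystroke"),
        ("2020-10-01T09:03:12", "keystroke"),
        ("2020-10-01T09:47:02", "email_sent"),
        ("2020-10-01T10:15:40", "keystroke"),
        ("2020-10-01T10:16:05", "email_sent"),
    ]

    per_hour = Counter()
    for ts, kind in events:
        hour = datetime.fromisoformat(ts).strftime("%H:00")
        per_hour[(hour, kind)] += 1

    # The output is aggregate metadata only, e.g. two keystrokes in the 09:00 hour.
    for (hour, kind), n in sorted(per_hour.items()):
        print(f"{hour}  {kind:<10} {n}")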

4: Social media or internet footprint data

As discussed in Chapter III, hiring software, such as Entelo or QJumpers, often collects social media or internet footprint data about prospective job candidates in order to algorithmically analyse their potential. Whether or not this would be permissible under the Privacy Act again depends upon whether it is necessary for a lawful purpose241 and whether the collection is lawful and not unreasonably intrusive.242

Recently, it was held by the Human Rights Review Tribunal that collecting a former employee's social media data can be in breach of the Privacy Act.243 However, in this case the data was collected from a private social media post that the employer did not have direct access to, and was used to discredit the former employee with potential future employers. Unlike in this case, an employer seeking to collect social media or internet footprint data about job candidates may argue that it is necessary for the lawful purpose of assessing the candidate's suitability for the job and ultimately increasing efficiency within the hiring process. It is unclear whether or not the Privacy Commissioner would find social media data to be relevant or necessary for this process; however, given the duty to balance business efficiency with privacy rights, it seems likely that it could be permitted under principle 1.

The permissibility of collecting social media and internet footprint data about potential candidates is also seemingly increased when the Privacy Commissioner's approach to pre-employment personality testing is examined. In one case, a 200-question personality test was permitted because the "collection of some information about a prospective employee's personality and attitudes appeared to be a 'lawful purpose' connected with the employer's function".244 The Commissioner did not assess the intrusiveness of the test in this case, or its relevance to the particular position that the candidate was applying for.245 This reasoning suggests that pre-employment collection of social media and internet footprint data would be permissible under the Privacy Act, as it is effectively seeking to assess the "personality and

240 Case Note 229558, above n 237. Note that actually using these passwords to collect information from the personal accounts was in breach of the Privacy Act.

241 Information privacy principle 1.

242 Information privacy principle 4.

243 See: Hammond v Credit Union Baywide [2015] NZHRRT 6.

244 Case Note 2418 [1999] NZPrivCmr 6.

245 Roth, above n 217, at 41.

attitudes” of potential employees and is arguably less intrusive than a 200-question test. It is also likely that the employer would not be required to collect the information directly from the candidate,246 nor inform them of the collection,247 because social media and internet footprint data is usually publicly available.248

D: The European Union’s General Data Protection Regulation

Given the Privacy Commissioner’s past approaches to workplace privacy, it seems unlikely that the Privacy Act will be effective in protecting employees or jobseekers from many types of algorithmic data collection. Unlike our Privacy Act, the European Union’s General Data Protection Regulation (GDPR) has many provisions tailored specifically at newer forms of algorithmic data collection and use. Some of these provisions may provide useful insight for New Zealand policymakers into how privacy law can be aimed specifically at protecting individuals from the threat of algorithmic management. As this paper is focused primarily on legal challenges, rather than solutions, I will only consider two of the more relevant GDPR provisions.

1: The right to an explanation

One of the GDPR provisions targeted at algorithmic management is Article 15(1)(h), which gives individuals the right to know if their data is being used for automated decision-making or "profiling".249 This provision also gives individuals the right to "meaningful information about the logic involved" with such processing,250 which is sometimes referred to as the "right to an explanation".251 It is worth noting that "profiling" is defined in the GDPR as any form of automated processing used to evaluate "certain personal aspects relating to a natural person" including, specifically, their "performance at work".252 This is important because many of the hiring and performance/productivity algorithms considered throughout this paper would fall under this definition of "profiling", as they are essentially evaluating an individual's performance, or future performance, at work.

246 Information privacy principle 2.

247 Information privacy principle 3.

248 There is an exception in principle 2(2)(d) for “publicly available” information.

249 GDPR, above n 10, Article 15(1)(h).

250 Article 15(1)(h).

251 See: Edwards and Veale, above n 22.

252 GDPR, above n 10, Article 4(4).

This "right to an explanation" arguably already exists for employees in the workplace in New Zealand, as the ERA's justifiability provision253 focuses on procedural fairness, requiring that employees be provided with, amongst other things, sufficiently clear warnings.254 However, replicating the GDPR's "right to an explanation" in New Zealand could be beneficial for other groups impacted by algorithmic management that are not subject to the full protections of employment law, such as jobseekers or gig-workers. Currently, these groups have no express rights to any algorithmic explanation beyond the Privacy Act's principle 3, which only requires individuals to be informed of the "purpose" of data collection.255 It would also encourage software developers, and employers, to develop and use algorithms which were sufficiently explainable as to allow for "meaningful logic" to be communicated.
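What "meaningful information about the logic involved" might look like can be sketched for the simplest possible case: a transparent, hypothetical linear scoring model whose per-factor contributions can be reported back to the individual. The factors and weights are invented, and the point of the sketch is that this kind of breakdown is exactly what opaque models cannot readily provide:

    # A deliberately transparent, hypothetical scoring model: each factor's
    # contribution to the final score can be reported to the individual.
    weights = {"years_experience": 2.0, "skills_test": 5.0, "late_submissions": -1.5}
    applicant = {"years_experience": 3, "skills_test": 7, "late_submissions": 2}

    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = sum(contributions.values())

    print(f"Overall score: {score}")
    for factor, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {factor}: contributed {value:+.1f}")
    # A deep or ensemble model offers no such per-factor breakdown, which is why
    # a statutory right to an explanation may be hard to satisfy in practice.

For the machine learning systems discussed in this paper, any such "explanation" would typically have to be approximated after the fact, and may bear little relation to what the model actually did.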

However, as discussed throughout Chapter IV, it is questionable whether or not it is even possible to obtain a meaningful explanation from many types of machine learning algorithms, as their performance often comes "at the expense of internal interpretability".256 This right to an explanation also risks creating what Edwards and Veale refer to as the "transparency fallacy", wherein an explanation is given but individuals are either too "time-poor, resource-poor, [or] lacking in necessary expertise" to actually make meaningful use of the explanation.257 Edwards and Veale argue that in many cases an explanation is an ineffective remedy as, often, individuals would much rather that the automated profiling, decision or action had never occurred in the first place.258 Despite these criticisms, the inclusion of some explanatory rights under the Privacy Act in relation to automated decision-making or profiling would still be an improvement over privacy principle 3.

2: Further rights regarding automated decision-making and profiling

The GDPR also gives individuals the right not to be subject to legal, or otherwise significant, decisions based solely on automated processing or profiling unless the individual gives explicit consent.259 However, both the Data Protection Working Party260 and the Greek Data Protection

253 ERA, s 103A.

254 Refer back to Chapter IV for further discussion on procedural fairness requirements.

255 Privacy Act 2020, s22 Information privacy principle 3.

256 Edwards and Veale, above n 22, at 64.

257 At 67.

258 At 42.

259 GDPR, above n 10, Article 22.

260 Article 29 Data Protection Working Party “Opinion 2/2017 on data processing at work” (8 June 2017)

<https://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=610169>.

Authority261 have stated that consent cannot be a legal basis under the GDPR in an employment context due to the nature of the employment relationship.

Again, this right arguably already exists for employees in New Zealand, as it seems unlikely that a significant decision made solely by algorithm would be “fair and reasonable” under the ERA.262 However, for those not subject to the protections of employment law, such as job candidates or gig-workers, this provision could help protect against some of the risks associated with automated decision-making under algorithmic management. Despite this, problems may still arise in determining which effects are “significant” and which decisions are based “solely” on automated decision-making. For instance, if a hiring algorithm automatically profiled a group of candidates and suggested that the employer hire one of them, would the decision not to hire the other candidates be sufficiently “significant” to activate the right under Article 22? Likewise, would that decision be based “solely” on automated decision-making despite the employer ultimately having the final say? Even where an algorithm merely provides advice to an employer, meaning the decision is not “solely” automated, the employer may blindly rely on that advice due to “automation bias”.263 The “solely” requirement therefore creates a risk that Article 22 will not apply to many of the algorithmic situations to which job candidates and gig-workers are subject. Despite these challenges, it could still be beneficial for individuals in New Zealand to have a similar right against automated decision-making or profiling, albeit with less restrictive wording than Article 22.

Both of the GDPR provisions considered in this chapter have their respective flaws, and policymakers in New Zealand should be hesitant to replicate them directly. However, their focus on automated decision-making and profiling shows that privacy law can be tailored to directly address algorithmic harms.264 Policymakers in New Zealand should closely examine the GDPR and consider implementing similarly targeted legislation if those provisions prove effective in preventing the harms of algorithmic management.

261 Holly Cudbill “€150,000 GDPR fine for wrongly using “consent” as a basis for processing personal data of staff” (9 August 2019) Lexology <https://www.lexology.com/library/detail.aspx?g=0043039d-2cf0-4647-ba26-7a78e53b67bd>.

262 ERA, s 103A.

263 See: Linda Skitka, Kathleen Mosier and Mark Burdick “Accountability and Automation Bias” (2000) 52 Int J Human-Computer Studies 701 as cited in Edwards and Veale, above n 22, at 45.

264 GDPR also requires “data protection impact assessments” for automated processing or profiling, which could address algorithmic harms. See: GDPR, above n 10, Article 35.

Conclusion

Throughout this paper, we have considered three distinct examples of algorithmic management, each of which poses a number of challenges for the current legal framework. If these legal challenges are not sufficiently addressed, algorithmic management threatens to expose employees, non-standard workers and even jobseekers to heightened structural disadvantage and unfair working conditions.

The first example of algorithmic management this paper examined was the use of algorithms on gig-working platforms and the challenge this creates when classifying working relationships. Gig-working platforms are currently using novel forms of algorithmic control that expose workers to strict working conditions while allowing those workers to be unfairly classified as contractors rather than employees. The traditional binary approach to employment classification in New Zealand leads either to unfairness for workers or to economic problems for platforms, and is thus currently unfit to deal with the novel working arrangements facilitated by algorithmic management.

Another challenge arising from algorithmic management is the discrimination that can occur when hiring/recruitment algorithms are used to assess job candidates. The low transparency of machine learning algorithms causes difficulties when applying the current anti-discrimination law to this novel form of bias. We should therefore consider fundamentally rethinking the legal approach to discrimination in New Zealand, or implementing some form of regulatory solution, in order to protect job seekers from unfair discrimination.

Employees are also being affected by algorithmic management, as performance/productivity management algorithms are increasingly used to assist in workplace decision-making. Fortunately, these employees are protected by the personal grievance and justifiability provisions of the ERA, which make it very difficult for an employer to rely upon algorithmic advice when making a decision that could negatively impact an employee. This may prevent some of the benefits of algorithmic management from being secured in New Zealand; however, it remains the preferred approach until the potential harms of algorithmic software for employees are fully understood.

All three examples of algorithmic management considered throughout this paper share similar overarching concerns regarding data collection and surveillance. The historically wide approach taken by the Privacy Commissioner in relation to workplace and pre-employment data collection suggests that the Privacy Act provides insufficient protection for all three groups of individuals identified throughout this paper. In order to quell some of these privacy concerns, policymakers in New Zealand should closely examine the GDPR and consider implementing new legislation that is similarly targeted towards algorithmic harms.

The four distinct challenges explored throughout this paper, while only a small sample of the potential legal issues arising from algorithmic management, reveal wider points about the nature of both modern working arrangements and the law itself. We are witnessing a transformation, facilitated by algorithmic software, of the way in which management is approached in modern society. If the law is to retain its relevance to the modern worker, it must adapt to this evolution and provide equally innovative solutions to the challenges posed by algorithmic management.

Bibliography

A Cases

  1. New Zealand

ANZ National Bank Ltd v Doidge [2005] NZEmpC 77; [2005] ERNZ 518.

Arachchige v Rasier New Zealand Ltd [2020] NZEmpC 35.

Bryson v Three-Foot-Six Ltd [2005] NZSC 34.

Case Note 289943 [2018] NZPrivCmr 5.

Case Note 229558 [2012] NZPrivCmr 1.

Case Note 2418 [1999] NZPrivCmr 6.

Challenge Realty Ltd v Commissioner of Inland Revenue [1990] 3 NZLR 42 (CA).

Chow v TDA Immigration and Student Services Ltd [2012] NZERA Auckland 177.

Clark v Northland Hunt Inc [2006] NZEmpC 119; (2006) 4 NZELR 23 (EmpC).

Hammond v Credit Union Baywide [2015] NZHRRT 6.

Lehmann v Canwest Radiowords Limited [2006] NZHRRT 35.

Leota v Parcel Express Ltd [2020] NZEmpC 61.

Mana Coach Services Ltd v Huxford EmpC Wellington WC16/99.

McClelland v Schindler Lifts NZ Ltd [2015] NZHRRT 45.

New Zealand (with exceptions) Shipwrights etc Union v GN Hale & Son Ltd [1991] NZEmpC 102; [1991] 3 ERNZ 931 (EmpC).

Northern Regional Health Authority v Human Rights Commission (1997) 4 HRNZ 37.

Opai v Commissioner of Police [2020] NZERA 147.

Peng v Drapac Ltd ERA Auckland AA525/10.

Proceedings Commissioner v Air New Zealand Ltd [1987] NZEOT 1; (1988) 7 NZAR 462.

Sievwrights v Apostolakis HC Wellington CIV-2005-485-527.

Southern Taxis Ltd v Labour Inspector [2020] NZEmpC.

Tapiki and Eru v New Zealand Parole Board [2019] NZHRRT 5.

Tranz Rail Ltd v Rail & Maritime Transport Union (Inc) [1999] NZCA 63; [1999] 1 ERNZ 460 (CA).

  2. United Kingdom

Uber BV and others v Aslam and others [2018] EWCA Civ 2748, [2019] 3 All ER (CA).

  3. United States of America

Berwick v Uber Technologies Inc, California UGC-15-546378 (Cal Super Ct 21 September 2015).

O’Connor v Uber Technologies Inc 82 F Supp 3d 1133, 80 Cal (ND Cal 11 March 2015).

Razak v Uber Techs Inc 951 F.3d 137 (3d Cir Pa 3 March 2020).

State v Loomis 371 Wis 2d 235 (Wis 13 July 2016).

B Legislation

  1. New Zealand

Employment Relations Act 2000.

Human Rights Act 1993.

Privacy Act 2020.

Privacy Act 1993.

  2. European Union

Regulation (EU) 2016/679 General Data Protection Regulation [2016] OJ L119.

  3. United Kingdom

Employment Rights Act 1996 (UK).

C Books and Chapters in Books

Cathy O’Neil Weapons of Math Destruction (1st ed, eBook ed, Crown Publishers, New York, 2016).

D Journal Articles

Jeremias Adams-Prassl “What if Your Boss Was an Algorithm? The Rise of Artificial Intelligence at Work” (2019) Vol 41 Comparative Labor Law & Policy Journal 123.

Ifeoma Ajunwa “Algorithms at Work: Productivity Monitoring Applications and Wearable Technology as the New Data-Centric Research Agenda for Employment and Labor Law” (September 2018) 63 St. Louis U LJ 21.

Ifeoma Ajunwa, Kate Crawford and Jason Schultz “Limitless Worker Surveillance” (2017) 735 Cal L Rev 105.

Ifeoma Ajunwa “The Paradox of Automation as Anti-Bias Intervention” (2020 Forthcoming) 41 Cardozo L Rev <www.ssrn.com>.

Charlotte S Alexander and Elizabeth Tippett “The Hacking of Employment Law” (2017) Vol 82 No 4 Missouri Law Review 974.

Antonio Aloisi and Elena Gramano “Artificial Intelligence Is Watching You at Work. Digital Surveillance, Employee Monitoring and Regulatory Issues in the EU Context” (2019) Vol 41 Comparative Labor Law & Policy Journal 95.

Solon Barocas and Andrew Selbst “Big Data’s Disparate Impact” (2016) 104 California Law Review 671.

Janine Berg “Protecting Workers in the Digital Age: Technology, Outsourcing and the Growing Precariousness of Work” (2019) Vol 41 Comparative Labor Law & Policy Journal 69.

Matthew Bodie, Miriam Cherry, Marcia McCormick and Jintong Tang “The Law and Policy of People Analytics” (2017) 88 U Colo L Rev 961.

Stephanie Bornstein “Antidiscriminatory Algorithms” (2018) Vol 70 No 2 Alabama Law Review 519.

Emanuele Dagnio and Ilaria Armaroli “A Seat at the Table: Negotiating Data Processing in the Workplace. A National Case Study and Comparative Insights” (2019) Vol 41 Comparative Labor Law & Policy Journal 173.

James Duggan, Ultan Sherman, Ronan Carbery and Anthony McDonnell “Algorithmic management and app-work in the gig economy: A research agenda for employment relations and HRM” (2020) 30 Hum Resour Manag J 114.

Lillian Edwards and Michael Veale “Slave to the Algorithm? Why a ‘Right to an Explanation’ is Probably Not the Remedy You are Looking For” (2017) Vol 16 Duke Law and Technology Review 18.

Frank Hendrickx “Privacy 4.0 at Work: Regulating Employment, Technology and Automation” (2019) Vol 41 Comparative Labor Law & Policy Journal 147.

Pauline Kim “Data-Driven Discrimination at Work” (2017) Vol 48 William & Mary Law Review 857.

Ulrich Leicht-Deobald, Thorsten Busch, Christoph Schank, Antoinette Weibel, Simon Schafheitle, Isabelle Wildhaber and Gabriel Kasper “The Challenges of Algorithm-Based HR Decision-Making for Personal Integrity” (2019) 160 J Bus Ethics 377.

Lawrence Lessig “Law Regulating Code Regulating Law” (2003) 35 Loy U Chi L J 1.

Karen Levy and Solon Barocas “Refractive Surveillance: Monitoring Customers to Manage Workers” (2018) 12 International Journal of Communication 1166.

Phoebe V Moore “The Mirror for (Artificial) Intelligence: In Whose Reflection?” (2019) Vol 41 Comparative Labor Law & Policy Journal 47.

Alex Rosenblat and Luke Stark “Algorithmic Labor and Information Asymmetries: A Case Study of Uber’s Drivers” (2016) 10 International Journal of Communication 3758.

Paul Roth “Privacy Law Reform in New Zealand: Will it Touch the Workplace?” (2016) Vol 41 No 2 New Zealand Journal of Employment Relations 36.

Linda Skitka, Kathleen Mosier and Mark Burdick “Accountability and Automation Bias” (2000) 52 Int J Human-Computer Studies 701.

Andrew Tutt “An FDA for Algorithms” (2017) 69 Admin L Rev 83.

E Parliamentary and Government Materials

1 European Union

Article 29 Data Protection Working Party “Opinion 2/2017 on data processing at work” (8 June 2017) <https://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=610169>.

F Papers and Reports

Valerio De Stefano Negotiating the algorithm: Automation, artificial intelligence and labour protection (International Labour Office, Employment Working Paper No 246, 2018).

Valerio De Stefano The Rise of the ‘Just-in-Time Workforce’: On-Demand Work, Crowd Work and Labour Protection in the ‘Gig-Economy’ (International Labour Office Conditions of Work and Employment Series No 71, 2016).

Colin Gavaghan, Alistair Knott, James Maclaurin, John Zerilli and Joy Liddicoat Government Use of Artificial Intelligence in New Zealand (New Zealand Law Foundation, 2019).

New Zealand Human Rights Commission Privacy, Data and Technology: Human Rights Challenges in the Digital Age (May 2018).

New Zealand Productivity Commission Technological change and the future of work: Final report (2020).

World Economic Forum The Future of Jobs Report (2018).

G Internet Resources

Lana Andelane “Uber’s new Auckland pricing trial criticised for ‘ripping off’ drivers” (25 July 2019) Newshub <https://www.newshub.co.nz/home/new-zealand/2019/07/exclusive-uber-s-new-auckland-pricing-trial-criticised-for-ripping-off-drivers.html>.

Michael Andrew “Are NZ Uber drivers employees? The court is about to decide once and for all” (17 July 2020) The Spinoff <https://thespinoff.co.nz/business/17-07-2020/are-nz-uber-drivers-employees-the-court-is-about-to-decide-once-and-for-all/>.

Holly Cudbill “€150,000 GDPR fine for wrongly using “consent” as a basis for processing personal data of staff” (9 August 2019) Lexology <https://www.lexology.com/library/detail.aspx?g=0043039d-2cf0-4647-ba26-7a78e53b67bd>.

Feilidh Dwyer “New Privacy Act to commence on 1 December” (18 March 2020) Office of the Privacy Commissioner <https://privacy.org.nz/blog/new-privacy-act-to-commence-on-1-november/>.

Employment New Zealand “Contractor versus Employee” (2020) <https://www.employment.govt.nz/starting-employment/who-is-an-employee/difference-between-a-self-employed-contractor-and-an-employee/>.

Enaible “Home Page” <www.enaible.io>.

Entelo “Entelo Diversity” <https://www.entelo.com/products/platform/diversity/>.

Fair Work Ombudsman “Uber Australia investigation finalized” (7 June 2019) <https://www.fairwork.gov.au/about-us/news-and-media-releases/2019-media-releases/june-2019/20190607-uber-media-release>.

Dave Heatley “Biased Algorithms – a good or bad thing?” (October 2019) New Zealand Productivity Commission <https://www.productivity.govt.nz/futureworknzblog>.

HireVue “Pre-Employment Assessments” <www.hirevue.com/products/assessments>.

HR.com “Entelo Platform” <www.hr.com/buyersguide/product/view/entelo_entelo_platform>.

Humanyze “Privacy by Design” <www.humanyze.com/data-privacy>.

Infor “Infor Talent Science” <www.infor.com/products/talent-science>.

Job Adder “Recruitment Analytics” <www.jobadder.com/recruitment-analytics>.

Alexandra Mateescu and Aiha Nguyen “Explainer: Algorithmic Management in the Workplace” (February 2019) Data & Society Research Institute <https://datasociety.net/library/explainer-algorithmic-management-in-the-workplace/>.

Office of the Privacy Commissioner “What is an agency?” (2013) <https://privacy.org.nz/further-resources/knowledge-base/view/512?t=224753_309547>.

Percolata (2020) <www.percolata.com>.

QJumpers Recruitment Software “Why QJumpers” <www.qjumpers.co.nz/why-qjumpers>.

The Supreme Court “Uber BV and others (appellants) v Aslam and others (Respondents)” (2020) <https://www.supremecourt.uk/cases/uksc-2019-0029.html>.

Uber “Uber B.V Terms and Conditions – New Zealand” (10 June 2020) <https://www.uber.com/legal/en/document/?name=general-terms-of-use&country=new-zealand&lang=en>.

UK Government “Employment Status” <https://www.gov.uk/employment-status/worker#:~:text=A%20person%20is%20generally%20classed,a%20contract%20or%20future%20work>.

H Newspaper and Magazine Articles

Jeffrey Dastin “Amazon scraps secret AI recruiting tool that showed bias against women” Reuters Technology News (online ed, San Francisco, 10 October 2018).

Will Douglas Heaven “This startup is using AI to give workers a productivity score” MIT Technology Review (online ed, 4 June 2020).

Esther Kaplan “The Spy Who Fired Me: The human costs of workplace monitoring” Harper’s Magazine (online ed, March 2015).

Alex Miller “Want Less Biased Decisions? Use Algorithms.” Harvard Business Review (online ed, 26 July 2018).

Sarah O’Connor “When your boss is an algorithm” Financial Times (online ed, 8 September 2016).

Don Peck “They’re Watching You at Work” The Atlantic (online ed, December 2013).

Olivia Solon “Big Brother isn’t just watching: workplace surveillance can track your every move” The Guardian (online ed, San Francisco, 6 November 2017).

I Seminars

Min Kyung Lee, Daniel Kusbit, Evan Metsky and Laura Dabbish “Working with Machines: The Impact of Algorithmic and Data-Driven Management on Human Workers” (paper presented to the Annual ACM Conference on Human Factors in Computing Systems, 2015).

Mareike Möhlmann and Lior Zalmanson “Hands on the Wheel: Navigating Algorithmic Management and Uber Drivers’ Autonomy” (paper presented to the International Conference on Information Systems, December 2017).

J Submissions

Uber Technological change and the future of work – Submission on the Productivity Commission’s Issues Paper (June 2019)

<https://www.productivity.govt.nz/assets/Submission-Documents/bc03eb38e0/Sub-027- Uber.pdf>.

