University of Otago Law Theses and Dissertations

Pearson, Wade --- "Clarity in the Court of Appeal: Measuring the readability of judgments" [2013] UOtaLawTD 22

Clarity in the Court of Appeal: Measuring the readability of judgments [2013] UOtaLawTD 22 (1 October 2013)

Clarity in the Court of Appeal: Measuring the readability of judgments

Wade Campbell Pearson

A dissertation submitted in partial fulfilment of the degree of Bachelor of Laws (with Honours) at the University of Otago.

October 2013

ACKNOWLEDGEMENTS

I would like to thank...
for...
My supervisor Mark Henaghan
Being an immense source of expertise and inspiration.
My second marker Geoff Hall
Providing invaluable advice and giving me a different view on this topic.
The incomparable Jessica Palmer
Being helpful and supportive, and always being ready to lend an ear.
The staff at the ID Card Office
Being very supportive throughout the year
– and suffering through endless questions.
All of my wonderful friends
Listening, proofreading and giving some very valuable feedback.
Mum and Dad
Everything – proofreading; brainstorming; writing; technical help and everything else.
My dearest Radhika
Making this entire year possible. I could not have done it without you. You listened, you read, you gave advice and supported me. But most of all you put up with me for a year – so thank you.

Any errors, of course, remain my own.

Summary

The aim of this study was to measure how easy New Zealand judgments are to read. I wanted to see whether judgment writing has improved in the last 22 years. I took a random sample of 45 Court of Appeal judgments from three years: 1990, 2000 and 2012.

I read these 45 judgments using a “Clarity Test”. The Clarity Test structures a reader’s analysis of a judgment, focussing on certain elements of readability: for example, headings, parties’ names, sentence length, and writing style (among others). I divided all these elements into three categories:

Start – how well the judge introduces the case
Structure – how well the judgment is organised
Style – how clear and readable the writing style is

Within each category, I looked for certain elements, which I describe in detail later on. The more of these elements a judgment had, the more likely it was to be readable. Each element corresponded to a question in the Clarity Test. These 19 questions could give a maximum score of 20 for each judgment. The results showed that judgment clarity greatly increased in the last two decades.

The average judgment score for each year increased from 3.2 in 1990 to 7.1 in 2000. In 2012, this further increased to 11.7 – almost four times the original 1990 score. But this average is still just over half of the maximum possible (20). This shows that there are still improvements to be made. The structured analysis of the Clarity Test allows me to give detailed results for each category.

The ‘Start’ improved hugely over the years; recent judgments got to the point a lot quicker. The improvements in ‘Structure’ were also apparent; it is clear that 2012 judgments are organised in a more readable manner. There were improvements in ‘Style’, but not as many as in the other two categories (at least according to my study).

Chapter 1 gives context, and explains why I chose this study. Chapter 2 contains the method used in this study, and describes the Clarity Test. It explains how I used it, and includes the overall results.

Chapters 3, 4 and 5 are a little bit different. Each Chapter deals with a different category: Start, Structure or Style. In each of these Chapters, I briefly summarise the theory of how to write a clear judgment, and then detail the relevant section of the Test. Finally I discuss the results for that category in detail.

Chapter 6 then contains some modest recommendations for judges, and for future research. Measuring the clarity of judgments provides lessons for us all.

TABLE OF CONTENTS

Chapter 1: Background

Judgments need to be clear. Judgments have power; they can take away a person’s freedom, children, money, reputation or business. So when a judge makes a decision, and gives written reasons for that decision, then those reasons must be clear. Otherwise, the law acts on people who cannot understand what is happening or why it is happening. In this paper, I will not justify the importance of clarity in any detail. I simply assume that judgments should be written as clearly as possible; so that as wide an audience as possible can understand them.

a) Calls for change

In recent years, there has been an increasing amount of research done on judgments.1 This is part of the wider ‘Plain language’ movement that started in earnest in the late 20th century. This movement originally focussed on contracts and laws, but has recently started to widen its scope.2 Clarity is a journal dedicated to improving legal language, with members worldwide. It has recently called for more research in the field of Plain language.3

1 Judgments are called ‘judicial opinions’ in some countries, notably the United States. I will detail this research on the next page.

2 Mark Duckworth "Clarity and the Rule of Law: The Role of Plain Judicial Language" in Ruth Sheard (ed) A Matter of Judgment: Judicial Decision-Making and Judgment Writing (Judicial Commission of New South Wales, New South Wales, 2004)

3 Karen Schriver and Frances Gordon "Grounding plain language in research" (2010) 64 Clarity 33

Judges themselves also support greater clarity in judgments.4 The Hon. Michael Kirby (former Justice of the High Court of Australia) has long been a supporter of plain language, and is a patron of Clarity.5 The need for readable judgments was also highlighted in a recent speech by Lord Neuberger, President of the Supreme Court of the United Kingdom.6 He stresses that judgments should “speak as clearly as possible to the public”. He sees two advantages: fairer outcomes, and increased public confidence in the Courts.

The research on judgments that has been done has mostly been directed at proving the need for clarity. The best example of empirical testing of judgments was done by Professor Joseph Kimble.7 He sent two versions of a judgment to 700 lawyers. One version was rewritten in plain language; the other was the original. Of the 251 lawyers who responded, 61% preferred the plain language version.8

Christopher Trudeau studied legal communication, using a wide range of participants. He found that people with more education actually had less tolerance for poor sentence structure and legalese.9 This would apply to judgments too; even a legally trained reader would surely prefer a clear, structured text to an unclear, disjointed mess. Multiple

4 See James Allsop "Appellate Judgments - The Need for Clarity" (36th Australian Legal Convention, Perth, 19 September 2009); Lord Neuberger "Open justice unbound?" (Judicial Studies Board Annual Lecture 2011, London, 16 March 2011)

5 See further Michael Kirby "Plain concord: Clarity's ten commandments" (2009) 62 Clarity 58

6 Lord Neuberger "No judgment - No justice" (First Annual BAILII Lecture, London, 20 November 2012)

7 Joseph Kimble "The straight skinny on better judicial opinions" (2006) 85(3) Michigan Bar Journal 42

8 At p43

9 Christopher R. Trudeau "The Public Speaks: An Empirical Study of Legal Communication" (2001) 1 The Scribes Journal of Legal Writing 121

studies have shown exactly that: lawyers and judges prefer judgments to be clear and readable.10

There has also been an improvement in judicial education. Most countries around the world offer courses on judgment writing, and New Zealand is no exception. The Institute of Judicial Studies was established in 1998, and does an excellent job of improving the quality of judging (in all respects; not just writing).

So the legal community is realising the value of clear, readable judgments. But are judges actually writing clearer judgments?

b) Aim of this study

This is the aim of my study – to accurately measure whether judgment writing has actually improved in the last two decades. I also want to look at judgments in detail. I would like to see exactly what judges are doing well, and how clarity and readability could be further improved.

At this point, I will briefly explain what I mean by ‘clarity’ and ‘readability’. These terms can be defined in a number of ways.11 I use the terms interchangeably in this paper. The basic idea of both terms is that documents should be written with the reader in mind. Writing for the reader leads to clear and effective communication – this is what I am testing for in this study.

To test for ‘clear and effective communication’, I have collected different elements of clear and effective writing. I have drawn these elements from many sources, including

10 See: Kimble, above n7; Robert Benson "The End of Legalese: The Game is Over" (1985) 13(3) New York University Review of Law and Social Change 519; and Sean Flammer "Persuading Judges: An Empirical Analysis of Writing Style, Persuasion and the Use of Plain English" (2010) 16 The Journal of the Legal Writing Institute 183

11 A recent focus of the Clarity journal has been setting plain language standards. See further Annetta Cheek "Defining plain language" (2010) 64 Clarity 5

lawyers,12 academics13 and judges.14 Most of these sources say very similar things, but I have discussed the differences and controversies where they exist. Many of the elements can be found in the large body of work by Professor James C. Raymond. Raymond is an expert on writing clear judgments and he teaches many judges from around the world, including New Zealand.15

I grouped the elements of clear writing into three categories: Start, Structure and Style.

These categories are my attempt at a cohesive framework for the range of elements. ‘Start’ is about making the introduction clear and effective, while ‘Structure’ covers overall layout. ‘Style’ includes elements of clear writing, both in general and specifically for judgments. Conceptually there is some overlap between the categories, but they work well enough for dividing the test. I will explain these categories in more detail, including

12 Joseph Kimble "The Lost Art of Summarizing" (2001) 38(2) Court Review 3; Bryan A. Garner "The Deep Issue: A New Approach to Framing Legal Questions" (1994) 5 The Scribes Journal of Legal Writing 1; Michele M Asprey Plain Language for Lawyers (4th ed, The Federation Press, Sydney, 2010); Mark Adler Clarity for Lawyers (2nd ed, The Law Society, London, 2007)

13James C. Raymond Writing for the Court (Thomson Reuters, Canada, 2010); Edward Berry Writing Reasons: a handbook for judges (3rd ed, E-M Press, Ontario, 2007); Duckworth, above n2

14 Michael Kirby "On the Writing of Judgments" (1990) 64 ALJ 829; Mark Painter "Legal Writing 201" (March 2002) Plain Language Network <www.plainlanguagenetwork.org/Legal/legalwriting.pdf>; Joyce J. George Judicial Opinion Writing Handbook (4th ed, Williams S. Hein & Co., Inc., New York, 2000); J.E. Côté The Appellate Craft (Canadian Judicial Council, Ottawa, 2009)

15 For a very readable article on judgment writing courses in Canada, see Tracey Tyler "Clarity in the courts: Justices go to writing school" Toronto Star (Toronto, 2011) . See also James Raymond’s website at <www.jcraymond.com>

examples of each element (in Chapters 3, 4 and 5). Before this, however, I will briefly describe different methods of measuring judgments.

c) Measuring clarity

I could have used this entire dissertation to describe methods of measuring clarity. What follows is necessarily a very brief and somewhat simplistic overview only.

There are currently three main options for measuring readability or clarity:16 readability formulas; cloze/comprehension testing; and subjective analysis.

However, all of these methods have problems that make them unsuited to measuring judgments for this study.

Problems with readability formulas

Readability formulas are computer programs that measure certain features of a text to produce a score (although the formulas can also be applied manually). Two common formulas are the Flesch Reading Ease and the Flesch-Kincaid Grade Level formulas – both are available through Microsoft Word.17

These formulas measure two things: the number of syllables per word; and the number of words per sentence. This then gives a score which theoretically can be used to compare

16 Dr Annetta Cheek, of the Center for Plain Language, classifies these options as, respectively; ‘numerical’, ‘outcomes-focused’ and ‘elements-focused’. My approach is elements-focused. See Cheek, above n11

17 The scores will be shown in a box after performing a ‘Spelling & Grammar’ check.

readability.18 Texts that score well will have shorter sentences and shorter words. There are other formulas in use, but I will not describe them in detail, as all formulas suffer similar problems.19
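The two Flesch formulas are published and can be computed directly. The sketch below is a minimal illustration of how such scores are produced, assuming a naive vowel-group syllable counter (real tools, including Word’s, use better heuristics):

```python
import re

def count_syllables(word):
    # Naive heuristic: count runs of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_scores(text):
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # words per sentence
    spw = syllables / len(words)        # syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return reading_ease, grade_level

ease, grade = flesch_scores("The appeal is dismissed. Costs follow the event.")
```

Higher Reading Ease means easier text; higher Grade Level means harder text. Note how the formulas reward nothing but short words and short sentences.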

As we will see, judgment writing can be divided into three categories: Start, Structure and Style. Within the Style category, there are at least seven different elements. But of these seven elements, formulas usually only look at two: word length and sentence length. So of all three categories, formulas only examine one. And even within that one category, they only look at two discrete elements. This means that formulas focus on a very small sub-set of all the elements of clarity.

Even within this sub-set, formulas are not very reliable, as the result of a formula may vary greatly between different versions of a program.20 Also, the formatting of most texts will need to be ‘cleaned up’ before use. One study similar to mine has encountered this problem. John Kleefeld is examining all the judgments from the highest courts of the USA, UK, Australia and Canada over the last 50 years (a monumental task). His research is ongoing, but he had a problem with case titles written as ‘Brown v. White’. The ‘v.’ is short for ‘versus’, but formulas recognise the ‘v.’ as a full stop – and therefore a whole new sentence. This gives an inaccurate score – so Kleefeld had to alter the software to account for this.21
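Kleefeld’s actual fix is not described, but the kind of preprocessing involved can be sketched. Assuming a naive sentence splitter that breaks on full stops, the citation abbreviation can be neutralised before counting (the function names here are hypothetical):

```python
import re

def clean_citations(text):
    # Replace the abbreviated 'v.' in case names with 'v' so that
    # a naive sentence splitter does not treat it as a sentence end.
    return re.sub(r"\bv\.\s", "v ", text)

def naive_sentence_count(text):
    # Count sentence-ending punctuation runs, ignoring empty trailing splits.
    return len([s for s in re.split(r"[.!?]+", text) if s.strip()])
```

Without the clean-up, a one-sentence line containing a case name is counted as two sentences, which inflates the apparent readability of the text.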

18 Mark Hochhauser "What readability expert witnesses should know" (2005) 54 Clarity 38

19 For a full discussion of the flaws of readability formulas, see Louis J. Sirico "Readability Studies: How Technocentrism Can Compromise Research and Legal Determinations" (2007) 26(147) Quinnipiac Law Review 101; and see Martin Cutts "Writing by numbers: are readability formulas to clarity what karaoke is to song?" (2008) 59 Clarity 28

20 Mark Hochhauser "Some pros and cons of readability formulas" (1999) 44 Clarity 22

21 Interview with Professor John Kleefeld, Assistant Professor at the University of Saskatchewan (The author, 10 May 2013). It will be interesting to see what conclusions can be drawn from his study.

Various authors have attempted to measure the ‘readability’ of judgments using formulas. They have then tried to link this ‘readability’ with persuasiveness,22 citing history,23 and popularity of subject matter.24 These attempts have largely failed. Readability formulas are simply too unsophisticated to accurately measure clarity.

One promising sign lies in a study conducted by Ryan Owens and Justin Wedeking.25 They empirically examined the clarity of judgments of the Supreme Court of the United States, using a computer program called “Linguistic Inquiry and Word Count” (LIWC). LIWC estimates “cognitive complexity” by looking for certain words; the more cognitively complex a judgment is, the less clear it is likely to be. Their study purported to show that certain Justices wrote more clearly than others. However, a different study concluded the opposite, and doubted Owens’ and Wedeking’s conclusions.26

The LIWC method shows promise, but still does not take into account the organisation or introduction of a judgment. Formulas are simply unsuitable for accurate measurement of clarity.

22 Lance N. Long and William F. Christensen "Does the Readability of Your Brief Affect Your Chance of Winning an Appeal? - An Analysis of Readability in Appellate Briefs and Its Correlation with Success on Appeal" (2011) 12 J. App. Prac. & Proc. 14; Brady Coleman and Quy Phung "The Language of U.S. Supreme Court Briefs: A Large-Scale Quantitative Investigation" (2010) 11 J. App. Prac. & Proc. 75

23 Kevin L. Brady "Are readable judicial opinions cited more often?" (4 July 2012) SSRN

<www.ssrn.com/abstract=2100618>

24 Michael Nelson "Elections and Explanations: Judicial Elections and the Readability of Judicial Opinions" (Working paper, Washington University in St. Louis, 2013)

25 Ryan Owens and Justin Wedeking "Justices and Legal Clarity: Analyzing the Complexity of U.S. Supreme Court Opinions" (2011) 45 Law and Society Review 1027

26 Lance N. Long and William F. Christensen "When Justices (Subconsciously) Attack: The Theory of Argumentative Threat and the Supreme Court" (2013) 91 Or L Rev 933

Problems with cloze/comprehension testing

Cloze testing and comprehension testing give more accurate results than formulas, but both have flaws.

Cloze testing involves taking a text and removing every 5th word. A group of readers is then asked to fill in the blank spaces; the more words they restore correctly, the more readable the text is. Comprehension testing is when a researcher asks readers specific questions about the judgment. These are content-based and practical, similar to the questions that lecturers ask students.27 These methods are generally accurate, and also allow the researcher to select readers from a specific audience (e.g. non-lawyers, tax lawyers, law students).
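The cloze procedure is mechanical enough to sketch in code. This is an illustrative outline of the standard form described above (blank every 5th word, score exact matches); real studies vary both choices:

```python
def make_cloze(text, n=5):
    """Blank out every nth word; return the gapped text and the answers."""
    words = text.split()
    answers = []
    for i in range(n - 1, len(words), n):
        answers.append(words[i])
        words[i] = "_____"
    return " ".join(words), answers

def cloze_score(responses, answers):
    """Proportion of blanks filled with the exact original word."""
    correct = sum(r == a for r, a in zip(responses, answers))
    return correct / len(answers)
```

A reader’s score is then the fraction of blanks they restore correctly, averaged across the reader group to give a readability estimate for the text.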

However, assembling a focus group of readers is usually expensive and time-consuming. This also makes the method unsuitable for comparing a large number of judgments – the expense involved would be astronomical. Testing may provide accurate data on readability or comprehension, but it cannot easily explain what makes a particular judgment more readable than another.

I could not find any studies that used cloze/comprehension testing to measure judgment clarity over time. One study did use a group of readers, but their method is better described as subjective analysis.

Problems with subjective analysis

Subjective analysis has some advantages over the previous two methods. In this method, a skilled writer closely reads the various judgments and gives feedback on each. The tester can take all clarity factors into account, and can analyse differences between judgments.28

27 For example: “What did the judge decide about X?” or “Why did the judge make this decision?”.

28 It is often used during judgment writing courses for judges and is an immensely useful tool. See: Patrick Keane "Decisions that convince" (2004) 52 Clarity 26, at p26; and John Laskin "Teaching judgment writing in Canada" (2011) 66 Clarity 17

Analysis still has its problems, however. First, it can take a long time to closely read a 10, 20 or 30 page judgment; doing so for 45 judgments would take quite a long time indeed. Secondly, it can be too subjective: the results depend largely on the individual doing the testing, so studies are not very repeatable. An English professor doing the testing would likely get wildly different results from a law student. Even with similar results, different testers would likely present the results in different ways.29

Another form of subjective analysis is called content analysis. I will not describe it in detail, but generally it attempts to bring some order to the subjectivity of normal analysis.30 Charles Johnson studied clarity in judgments using this method.31 His method was quite detailed, but basically involved asking readers whether they thought specific parts of the judgment were clear or not. This was more successful than the studies that used readability formulas, but still seems too subjective.

The (possible) solution

The above methods prove inadequate for measuring judgments. So the challenge is to create a method that takes all of the elements of clarity into account; does not depend on the identity of the tester; can be applied quickly and cheaply; and produces repeatable, comparable results.

29 For a fuller discussion of the advantages and disadvantages of all three options, see Cheek, above n11

30 For an excellent explanation and discussion of content analysis, see Mark A. Hall and Ronald F. Wright "Systematic Content Analysis of Judicial Opinions" (2008) 96 CLR 63

31 Charles A. Johnson "Content-Analytic Techniques and Judicial Research" (1987) 15(1) American Politics Quarterly 169

The best way to do this is to use a written framework to guide the tester’s analysis of judgments. This still requires the tester to read each judgment, but it structures their analysis. For this reason, I have created a rough tool, which (for lack of a better term) I have named the ‘Clarity Test’.

Chapter 2: The ‘Clarity Test’

The Clarity Test is a set of questions that help the tester look for certain elements of readability. The answers to these questions then give a score, which can be compared across years, countries, judges or courts. It is a quick but effective way to measure all the different elements that make a judgment easy to read. I have included the full version of the Clarity Test that I used in Appendix A.

a) The ‘Clarity Test’

How it works

The Clarity Test structures a tester’s analysis of a judgment. The tester skims the judgment, looking for certain elements/features, before reading the entire judgment through only once. The test contains 19 questions. These questions do not rely on the tester’s subjective understanding of the judgment, nor do they require in-depth knowledge of the law. They simply help the tester check for evidence of clear judgment writing, giving a maximum score of 20.32

The evidence of clear judgment writing is grouped into four sections (A-D). Each section corresponds to a category: ‘Start’, ‘Structure’ and ‘Style’ (although ‘Style’ has two sections). The test works on the premise that the number of elements present in a judgment will determine how likely it is to be readable. So a judgment with almost all of the elements will likely be easy to read, while a judgment lacking those elements will not.33
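One way to model this kind of checklist scoring in code is sketched below. It is purely illustrative: the real 19 questions appear in Appendix A, and the question names and point values here are invented placeholders (negative weights stand in for the penalty questions that make the theoretical minimum -6).

```python
def score_judgment(answers, weights):
    """Sum a judgment's clarity score from yes/no answers.

    answers: dict mapping question id -> bool (did the judgment do this?)
    weights: dict mapping question id -> points awarded if the answer is yes
             (negative weights model penalty questions)
    """
    return sum(w for q, w in weights.items() if answers.get(q, False))

# Placeholder questions and weights - NOT the real Appendix A test.
weights = {"case_specific_headings": 1, "summary_up_front": 2, "heavy_legalese": -1}
answers = {"case_specific_headings": True, "summary_up_front": True, "heavy_legalese": False}
score = score_judgment(answers, weights)  # headings + summary, no penalty
```

Because each question is a simple yes/no check against the text, the same answer sheet should produce the same score regardless of who fills it in.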

32 The minimum score is theoretically -6, but judgments are unlikely to receive such a low score.

33 In theory, a judgment that does not have these features could nevertheless be readable; as it might do something differently, but better. This may be a valid comment in theory, but there was no evidence of it in

Advantages

There are five main advantages to using the Clarity Test. First, it takes into account all of the clarity factors, and allows us to compare the theory of judgment writing to the reality.

Secondly, it is largely irrelevant who does the testing. As long as the tester is familiar with the elements, the results should not vary greatly between testers of different ability (you can even try it at home if you want to).

The third advantage is that it can be applied quickly and easily. It requires only one full read-through, and some targeted skimming. Each judgment took me between 10 and 30 minutes to complete (depending on length), which makes larger studies more feasible. It also means that the tester does not need any expertise in clear writing, just a basic understanding of the principles involved.

Fourthly, it gives the judges a little benefit of the doubt. It is not hard to pedantically criticise another’s work, but neither is it particularly productive. My aim with this study was to focus only on the most blatant and damaging aspects of inadequate communication – not every little detail.

Finally, the test produces repeatable, presentable results. For example: one question asks whether the judgment has case-specific headings. A judgment either has these or it does not; the identity of the tester is irrelevant. This means the results are also presentable as a score. Using headings will get a judgment one point, making comparing judgments much easier. The score allows the tester to compare both whole judgments and specific sections/elements.

The test is not perfect and it has its flaws, which I discuss more in Chapter 6. But it is conceptually sound; it works in the same way as a teacher’s marking scheme. The test may need tweaking in the future, but I believe it works well enough for this study.

my study. From 45 judgments, there were no moments where I felt a judgment’s score was highly inconsistent with its overall readability.

The best way to show that the Clarity Test works is to show it in action. I will now describe the specific method of this study.

b) The test in action

Sample

I tested civil cases from the New Zealand Court of Appeal for the years 1990, 2000 and 2012. I tested 15 cases from each year, for a total of 45 cases. Each year had around 150 cases, so I chose every 10th case. I ignored any cases under 10 pages in length. My sample included both reported and unreported judgments, but I only read the unreported versions.34
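The sampling rule can be sketched as follows. One plausible reading of the rule is assumed (take every 10th case, excluding those under 10 pages), and the case list is a hypothetical stand-in for the Briefcase search results:

```python
def sample_cases(cases, step=10, min_pages=10):
    """Take every `step`-th case from a chronological list of
    (name, page_count) tuples, excluding those under `min_pages` pages."""
    sample = []
    for i in range(step - 1, len(cases), step):
        name, pages = cases[i]
        if pages >= min_pages:
            sample.append(name)
    return sample
```

Systematic sampling like this (every nth item from an ordered list) avoids cherry-picking while remaining easy to reproduce from the same search results.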

By ‘civil’ cases, what I really mean is ‘non-criminal’ cases. Criminal cases in the Court of Appeal are mostly sentencing appeals. These cases have a distinct structure to them, which cannot easily be compared to non-criminal cases. My test cases still had a wide variety to them: corporate, family, public, tax, intellectual property, trust, competition and more.35

I read all the cases on a computer screen in ‘.pdf’ format. I tested them only once, except when I re-read sections to quote in this paper. For the record, the tester for this study was a 22 year-old law student, in his fifth and final year of study at the University of Otago

34 In reported format, there was less white space between the lines of text, and the text was smaller. One judgment was only available in a reported form - Commissioner of Inland Revenue v Brierly [1990] NZCA 393; [1990] 3 NZLR 303. I ignored the headnote for this judgment.

35 To find only ‘non-criminal’ cases, I searched the Brookers ‘Briefcase’ database for NZCA judgments from each year. I used the search string of case title/party name (not “r v” and not “v r” and not “police” and not “queen”). This would exclude all cases with the “R v” in their title. “R v Brown” is short for “Regina versus Brown”. This means that the Queen (the head of state of New Zealand) is a party to the case; i.e. it is a criminal matter. Although this method was not perfect, it excluded almost all criminal cases.

(i.e. me). I had previously read three of the judgments from the sample (but had since forgotten the detail).36

Court of Appeal

Until the Supreme Court was created in 2004, the Court of Appeal was New Zealand’s highest domestic court (although some cases could be appealed to the Privy Council in the United Kingdom). As an appellate court, the Court of Appeal has two functions: error correction, and law-making.37 The Court corrects errors from the lower courts (e.g. District Court, High Court). But it also changes and clarifies the law. It acts as an overseer, ensuring that the law of New Zealand is developing in a fair and consistent manner. In this function, its judgments are an important source of the common law of New Zealand.

Court of Appeal judgments are generally also less fact-intensive than High or District Court judgments. This makes them sometimes easier to read, and often shorter. They are also available online to the public (although the 1990 judgments are not available on NZLII).38

1990, 2000 and 2012

For this study, I wanted approximately a 20 year time span. 2012 was the most recent full year at the time of writing. It is also the only year from my sample to feature citations in

36 Nesbit v Porter CA165/99, 20 April 2000; Vickery v McLean CA125/00, 20 November 2000;

Commissioner of Inland Revenue v Brierly, above n34

37 Allsop, above n4, at [4]

38 The New Zealand Legal Information Institute (NZLII) is part of a global network dedicated to making law accessible and available to everyone, free of charge. They do a very good job, and have most recent judgments. You should be able to find all the 2012 judgments that I examined at: [www.nzlii.org]

footnotes. I then chose the year 2000 as it was the first year that numbered paragraphs were introduced in the Court of Appeal. Also, by that year, two years had elapsed since the first judgment-writing courses were given in 1998. The final year for my sample is 1990 – this was well before any organised attempt at teaching judges how to write. The 1990 judgments had a different font as well (Courier instead of Times New Roman). The exact years chosen are not that important; but the general trends and changes shown over this period are.

Trial

To show that the identity of the tester is irrelevant, I persuaded some fellow students to trial the test. They used an earlier version of the test, but their trial helped me improve the test. I had six responses, including one PhD student. I asked them to test two judgments; one from 2012 and one from 1990.39

Including my own score, the 2012 judgment received between 13 and 16 points. Four students gave the judgment 15 points. So the test was relatively repeatable for a clear, readable judgment.

The 1990 judgment, which was relatively unclear, did not fare so well; responses ranged from -3 to 6. On reflection, the judgment I picked was too short (7 pages), which meant the number of issues dealt with was not comparable to the 2012 judgment. But the results also alerted me to the subjectivity of some of the questions. Consequently, I reworded some questions and made others simpler. I believe that these changes have counteracted the variation in the trial results.

If the latest version of the test does not overcome these difficulties, the trial results are still relevant. Although there was much variation in the 1990 scores, they were nowhere

39 The clear one was: Hutt City Council v The Lower Hutt District Court [2013] NZHC 706; the unclear one was: Varney v Anderson [1988] NZCA 11; [1988] 1 NZLR 478.

near the 2012 scores, clearly showing that the 2012 judgment had more readability features (and was therefore likely to be more readable).

c) The results

Overall scores

Over the last 22 years, the quality of judgment writing has clearly improved. The average score for each year almost quadrupled – from 3.2 to 11.7 (out of 20). This is a huge step, but there is still room for improvement. The maximum possible score on the test is 20, and the minimum is theoretically -6. The mean score40 for 1990 judgments was 3.2. In 2000, the average increased to 7.1; and then to 11.7 in 2012:41

Average Score (Mean)

Year    Mean score (out of 20)
1990    3.2
2000    7.1
2012    11.7

40 Calculated by adding all the scores for a year, and dividing by 15 (the number of cases).

41 All figures rounded to one decimal point.
Range of Scores

Year    Range of scores
1990    0-8
2000    4-11
2012    7-17

The median and mode scores were very similar, showing that the mean was not skewed by outlying judgments. This is further borne out by the range of variation within each year:

For 1990 judgments, the range of scores was between 0 and 8; 8 points of variation. In 2000, this range was from 4 to 11; only 7 points. And in 2012, the scores varied from 7 to 17; a range of 10 points. These ranges show that both the worst and best judgments of each year improved.
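The summary statistics used in this section (mean, median, mode and range) are straightforward to compute. The sketch below uses Python’s statistics module on a score list constructed to match the reported 1990 mean (3.2) and range (0-8); it is illustrative only, not the actual Appendix B data:

```python
import statistics

def summarise(scores):
    """Summary statistics for one year's Clarity Test scores."""
    return {
        "mean":   round(statistics.mean(scores), 1),
        "median": statistics.median(scores),
        "mode":   statistics.mode(scores),
        "range":  (min(scores), max(scores)),
    }

# Illustrative scores only - not the real Appendix B data.
example_1990 = [0, 1, 1, 2, 2, 3, 3, 3, 3, 4, 4, 4, 5, 5, 8]
stats_1990 = summarise(example_1990)
```

Comparing mean, median and mode like this is a quick check that one or two unusually high or low judgments are not distorting a year’s average.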

Interestingly, the clearest judgment in 1990 scored higher (8) than the least clear judgment from 2012 (7).42

I have included full tables of the results for each year in Appendix B. The analysis of results for each category is set out in the next three Chapters. Before this, I will briefly summarise what the test did not measure.

42 These two judgments were, respectively: Balfour v Attorney-General CA170/89, 12 October 1990; and General Marine Services Ltd v The Ship "Luana" [2012] NZCA 374.

What was not measured

I did not measure the substantive law; this was beyond my ability and beyond the scope of this study. So a high score does not indicate a strong argument. It merely indicates that the judgment is likely to be written clearly (which in theory makes it easier to see the actual strength of the argument).

I also did not measure formatting and design. These can greatly increase the readability of a document, but are usually out of a judge’s control – they are often decided by administration staff.43 In particular, I ignored font, line spacing, and numbered paragraphs.44 However I will briefly comment on a few elements that were measured (but which were not worth any points).

Length

The average judgment length for each year was (in chronological order): 21.2 pages; 15.3 pages; and 25.3 pages.45 The longest judgment was 62 pages,46 and the shortest were 10 pages. I set this minimum of 10 pages to exclude short judgments, which have a different structure.47 This study is aimed at substantive or ‘reserved’ judgments – ones that take days or weeks to produce.

43 See Richard Castle "What makes a document readable?" (2007) 58 Clarity 12.
44 Numbered paragraphs were introduced to the Court of Appeal in the year 2000.
45 This includes the cover page, and any part pages.

46 This was Commerce Commission v Visy Board Pty Ltd [2012] NZCA 383

47 Short judgments do not usually change the law. They are much less likely to be read by anyone other than the parties or their lawyers. This means they are subject to quite different requirements than substantive judgments. See generally: Ruth Sheard (ed) A Matter of Judgment: Judicial Decision Making and Judgment Writing (Judicial Commission of New South Wales, New South Wales, 2003); Michael Kirby "‘Ex tempore Reasons’" (1992) 9 Australian Bar Review 93; Christopher Enright "Writing an Ex Tempore Judgment" (6 June 2012) Maitland Press <www.legalskills.com.au>.

Table of contents

A table of contents can be very useful to a reader. There were no tables of contents at all in 1990, and only two in 2000. In 2012, however, 12 judgments had a table of contents. This was a huge improvement, and was very helpful to me when reading the judgments (but was not worth any actual points).

Diagrams, tables and graphs (visual aids)

Visual aids can increase readability.48 Some commercial relationships can be very complex, especially in the corporate world. These relationships are often better expressed in diagram form.49 Likewise, some judgments contain large amounts of numerical information; a compact table makes this easier to read.50 Pictures of trademarks or inventions can be indispensable in intellectual property cases.

There were no visual aids used in 1990, and only two used in 2000. This increased to four in 2012. Since 1990, there has been an increasing acceptance of the use of visual aids in judgments where necessary (although most judgments still have no need for them).

In the next chapter, we will see what makes an introduction (or ‘Start’) clear and effective. We will then see how I tested for this and what the results were.

48 John Strylowski "Using Tables to Present Complex Ideas" (2013) 92 Michigan Bar Journal 44

49 See Mallowdale Enterprises Ltd v Commissioner of Inland Revenue [2011] NZHC 4; (2011) 25 NZTC 20-024 at [14]

50 See Ngai Tai Ti Kamaki Tribal Trust v Karaka [2012] NZCA 268 at [23]

Chapter 3: Start

a) Theory

A clear, well-written start to a judgment is a great help to the reader. Without a clear introduction, a reader has to work a lot harder to understand the rest of the judgment.

One much-repeated piece of advice to lawyers is to ‘state your case in under 75 words’.51 This allows a reader to quickly grasp the basic point of the judgment. Garner is a passionate advocate of this – what he calls the ‘deep issue’ technique. He makes a compelling case for every judgment to start with a brief summary of fewer than 75 words.52 However, it appears that the deep issue technique is not being used in New Zealand. Garner has strict rules for a deep issue statement,53 and none of the judgments in my sample complied with them. Instead, different criteria are needed. I have split the elements of ‘Start’ into five propositions:

Tell the basic story

A judgment should immediately set out the basic story of the case – the outline of the factual situation. What the reader needs in an introduction is “a generic description of who did what to whom – just enough detail to provide a context in which the issues will

51 Painter, above n14; Garner, above n12

52 Garner, above n12

53 At p1

make sense”.54 In this paper, I refer to this idea as the ‘basic story’. If the basic story of a judgment is clear, the reader will have the necessary context and focus to understand the rest of the judgment.

The introduction to Mallowdale Enterprises v CIR55 is a good example of setting out the basic story. It is 152 words long (twice Garner’s recommended length) – but it nevertheless tells the basic story concisely:

This example tells the basic story but also states the basic issue (whether the investment was ‘capital’ in nature). This leads on to the next point.

State the basic issue

The start of a judgment should also clearly state the basic question or issue for the judge to decide. In some cases, both the story and issue can be set out in only one sentence:56

54 Raymond, above n13, at p55

55 Mallowdale Enterprises Ltd v Commissioner of Inland Revenue, above n49

Can the Hutt City Council (the Council) lawfully suspend a sewage pipe above land owned by Mr and Mrs Cassells?

Even someone who has no knowledge of the law can understand that this case is about a sewage pipe. The Council wants to build one above the Cassells’ land, and the judge has to decide whether it can or not. When a judgment starts this way, it immediately invites the reader to read on. It creates a tension, a problem that needs to be solved.

In contrast, that judgment could have started like this:

This is an application for judicial review of a District Court decision delivered by Doogue J on 31 November 2011 holding that the respondent’s proposed action pursuant to s181 of the Local Government Act 2002 was ultra vires, and granting the appellant declaratory relief.

But this tells us almost nothing about the case. Who is the respondent? Who is the appellant? What does ‘ultra vires’ mean? What was the proposed action? What is ‘declaratory relief’? Thankfully, the judge did not use the above rewritten version. Instead he got straight to the point – making the story and issue clear.

In reality, there will be significant overlap between these first two elements.57 Some judgments may start with facts, while others start with the issues. What is important is that both story and issue are clear within the opening page (or two). In the results, I analysed these two elements together.

In the example above, you may have noticed another common flaw: unnecessary detail.

Avoid unnecessary details

The start of a judgment should contain only the basic story and the basic issue – and nothing else. Any extra detail (such as exact dates) can be included later, once the reader has the context necessary to understand its relevance.

If the procedural history (how the case came to court) is crucial to the result, it can be summarised later on. The start of a judgment should be ruthlessly cut down to the vital, relevant parts; i.e. only the story and issue.

Write the facts clearly

The facts should always be set out as clearly as possible. Without a firm grasp of the facts, the legal reasoning will be much harder to understand. Moreover, only the crucial and relevant facts should be included. This can often be hard to do, but being able to clearly understand the facts helps the reader greatly.58

Give the result early

The result is very important, so it should usually come right at the beginning. Judgments are not thriller novels where the reader must survive to the end to find the surprise twist. As Raymond points out, most readers will skip to the end anyway, to see if they can quickly find the result.59 It simply makes more sense to put the result near the beginning too.

Court of Appeal and Supreme Court judgments from 2004 onwards divide the document into two parts: ‘Judgment’ and ‘Reasons’. The ‘Judgment’ section contains two or three separated sentences explaining the actual result, which usually look like this:

A The application for leave to appeal is dismissed.

B The applicant must pay the respondent costs of $2500.

58 See further James C. Raymond "Writing to be Read or Why Can’t Lawyers Write like Katherine Mansfield?" (1997) 3 The Judicial Review: Journal of the Judicial Commission of New South Wales 153

The ‘Reasons’ section then contains the substantive opinion – the judge’s reasoning process.60 This is a great way to give the result early (and clearly). Judgments can have a huge effect on parties’ lives – so the orders made need to be extremely clear.61

b) Test

Section A of the test deals with the ‘Start’, and is split into two parts. The first part requires the tester to read only to the end of the first full page of text; i.e. no more than two pages. The second part requires a reading of the whole introduction; that is, until the judge starts analysing the issues in depth.

First part

The tester reads to the end of the first full page, and assesses the quality of three areas: whether the basic story is clear (A1); whether the basic issue is clear (A2); and whether any unnecessary detail is included (A3).

The judgment will receive twice as many points if the story and issue are clear to a layperson, rather than only to a lawyer. This requires a judgement call by the tester.

Second part

The second part measures two factors over the whole of the introduction: whether the facts are written clearly (A4), and whether the result is given early (A5).

60 In this paper, I have avoided the term ‘Reasons’. Instead I use either ‘judgment’ or ‘opinion’.

61 Some recent judgments actually use the parties’ names in the ‘Judgment’ section, like this: Ms Smith must pay Mr Johnson costs of $2500. This makes the order even clearer.

There are 8 points available for the entire ‘Start’ section.

c) Results

Story and issue (A1-A2)

[Bar chart: Story and Issue Total Score (A1-A2). 1990: 23; 2000: 30; 2012: 40]

There was a clear improvement in how quickly and clearly judgments set out the story and issue. There were four points available for these two questions (A1-A2), so each year could receive a maximum of 60 points (15 judgments, with 4 points each). On this scale, 1990 scored 23; 2000 scored 30; and 2012 scored 40.

This shows an improvement from just over one third of the maximum in 1990 to two thirds in 2012. Clearly the story and issue are being set out better in recent judgments. This is also shown by individual examples from each year.

1990

There were five judgments in 1990 that completely failed either of these questions, although only two failed both. For example, the opening paragraph of Ryan v Hallam62 is a single 144-word sentence that must be read multiple times to make any sense at all (it is reproduced in the footnote).63 Unsurprisingly, this introduction scored 0 out of 5, and the judgment itself scored 0 overall. Of the judgments that did explain the basic story and issue, only one did so clearly enough for a layperson to understand (and then only the story, not the issue).64

2000

In 2000, the results improved somewhat. There were only four judgments that failed either of questions A1-A2; none failed both. Even better, one judgment explained the story very clearly65; one explained the issues very clearly66; and one judgment did both.67 This last judgment, AG v Hull, is a good example of an introduction that performs well, without quite using the ‘deep issue’ technique:

[1] The law has long empowered the State to acquire land for public works, at current market value, by purchase or taking. When in 1981 Parliament last revised and consolidated the principal statutory provisions in the Public Works Act it required the State to offer the land back to the person from whom it had been taken if

62 Ryan v Hallam CA295/89, 30 August 1990

63 “The pleadings in this case occupy 50 pages of volume 1 of the case book, both volumes run into 732 pages, but the statement of claim dated 21 March 1988 is a simple four page document in which the respondent as plaintiff in the High Court alleged that he and the defendant, the appellant in this Court, had on or about 6 October 1987 entered into an agreement conferring call and put options under which the respondent granted to the appellant an option to purchase 3,551,000 fully paid ordinary shares of 50 cents each in the capital of Carborundum Abrasives Limited, a duly incorporated public listed company on terms set out in the agreement (the call option) and the appellant granted to the respondent an option to require the appellant to purchase the shares on terms set out in the agreement (the put option).”

64 Balfour v Attorney-General, above n42

65 Sea-Tow Ltd v Grey District Council CA146/99, 15 June 2000

66 Attorney-General v Rodney District Council CA274/99, 18 September 2000

67 Attorney-General v Hull CA41/99, 29 June 2000

the land was no long [sic] required for public works. The previous owner is to have the opportunity to buy the land at current market value.

Keith J then explains that the Government acquired Mr Hull’s land for public works, but eventually decided it was not needed. It offered the land back, but Mr Hull says it should have done so earlier, when the ‘current market value’ was lower. At the end of the first page, the judge sets out the end result and his reasons:

[4] ... Because we conclude that the land continued to be held for “State housing purposes” after 1982 and 1983, we disagree with Randerson J’s first conclusion and allow the appeal.

So within one page (four paragraphs), the judge clearly sets out: the story; issue; result; and reasons. This is a huge help to the reader. But compare this to another judgment from 2000:68

[1] On 20 December 1999 Para Franchising Ltd ("Para") obtained an injunction in the High Court at Wellington against Walop No. 3 Ltd ("Walop"), Martin Dominic Laverty ("Mr Laverty") and Rhonda Jean Laverty ("Mrs Laverty") restraining them "from either directly or indirectly carrying on any business involving the promotion, distribution, marketing or supply of any products or services substitutable for or otherwise competitive with any products or services sold by any store operating under [Para's] franchising system". Walop and Mr Laverty appeal from that decision. There are no appeals against other injunctions granted at the same time in favour of Para against Walop and Mr and Mrs Laverty.

This case received only one point out of five – for making the issue clear to a lawyer. I suspect even that was generous to the judgment.69 This introduction uses 108 words where about 20 would do, and it still does not state the story or issue well. It also had far too much unnecessary detail.70

68 Walop No 3 Ltd v Para Franchising Ltd CA5/00, 22 March 2000

69 I avoided reconsidering the scores I gave to judgments (unless I had made a clear mistake or typo).

70 It also exhibits the problem of ‘over-particularising’. Notice all the ‘nicknames’ added in brackets and quotation marks (“Walop”). Who else would the reader think the judge is referring to? Providing a shortened name can be useful but here the overuse simply confuses the reader.

2012

In 2012, every single judgment set out both story and issue. Of these, two made both story and issue clear to a layperson. A further six judgments made either the story or issue clear (but not both). This was a clear improvement from the other two years. Furthermore, a lawyer would not have struggled to understand the story and issue in any 2012 judgment (as none scored a ‘0’ for either A1 or A2).

The quality of the 2012 judgments is illustrated by the Roby case.71 This judgment (by Miller J) scored full points for its introduction, and scored 17 points overall – the highest score. This judgment was also notable for including relevant and necessary pictures.

To illustrate the clarity of Miller J’s judgment, I will first set out another judge’s attempt at introducing the case. This is from the High Court judgment (which was later appealed to the Court of Appeal):72

[1] Mars New Zealand Limited (Mars) appeals from a decision of the Assistant Commissioner of Trade Marks declining to uphold its opposition to Roby Trustees Limited’s (Roby’s) trade mark application no. 809554 to register its Optimize Pro Lead The Pack mark

[image: the proposed trade mark]

(the proposed mark) proposed to be used in respect of dog rolls in class 31.

Is that perfectly clear? Did it make sense on the first reading? Or could the judge have made that a bit simpler? Well, let us see how Miller J did in fact introduce the judgment:73

71 Roby Trustees Ltd v Mars New Zealand Limited [2012] NZCA 450

72 Mars New Zealand Ltd v Roby Trustees Ltd HC Auckland, CIV-2011-404-4613, 7 December 2011

73 Roby Trustees Ltd v Mars New Zealand Limited, above n71

[image: Miller J’s two opening paragraphs]
As you can see, the second version is much easier to read. The two paragraphs by Miller J succinctly explain the basic story and the basic issue in the case (and in only 80 words total). The High Court judgment, on the other hand, takes 55 words to say not much of anything. It is only about 200 words further into that judgment that we learn about the real issue: the possibility of customers being confused or deceived.74 In contrast, the Court of Appeal judgment gets straight to the point, while explaining everything in clear, simple language.

This is just one example of the high standard of introductions in 2012 (although not all were so polished).

Unnecessary detail (A3)

Question A3 asked whether there was any unnecessary detail included within the first full page. All 15 cases from 1990 had some unnecessary detail in the introduction but, in 2000, only 11 cases did. In 2012, this decreased further to 10 cases. So while most cases still included unnecessary detail, the more recent judgments fared slightly better than the 1990 ones. This is related to the focus on making the story and issue clear. If a judge tries to make these clear, it is easier to exclude irrelevant detail. Likewise, if a judge is not too

74 Mars New Zealand Ltd v Roby Trustees Ltd, above n72, at [4]
[Bar chart: Judgments with Unnecessary Detail (A3). 1990: 15; 2000: 11; 2012: 10]

concerned about writing clearly, then he or she will probably not focus very hard on removing unnecessary words.

Facts and result (A4-A5)

Facts

There was a slight increase in the number of judgments that wrote the facts to a very high, clear standard (A4). There were no such judgments in 1990 and only one in 2000.75 However in 2012 there were four judgments that made the facts very clear. This was one of the more subjective questions, and may not be extremely significant. But as I read the judgments, it was obvious that some judges (mostly from 2012) were better at writing clearly.76 They used shorter sentences with better sentence structure – they wrote as if they wanted to be read.

75 Vickery v McLean, above n36

76 From my sample of cases: notably Miller J, Ellen France J, and the late Chambers J.

The four judgments from 2012 with clear facts scored 14, 16, 16, and 17.77 These four judgments also received the bonus two points for having a great writing style overall (D5).

Result

There was no major change in the treatment of the result (A5). Across all the years, only one judgment failed to get any points on this question. That judgment, GN Hale & Sons, gave only the answers to certain questions:78

The Court being unanimous as to the answers to be given in the case stated and finding it necessary to answer only questions 1 and 2, they are answered accordingly and the whole matter is remitted to the Labour Court for reconsideration.

On top of the less-than-clear wording, the questions referred to were not defined until page six. This made it difficult to skim-read the judgment and find the result.

Four judgments made the result clear in the introduction (two from 2012, and one from each of the other years).79 In all judgments, the result was located somewhere near the end of the text, and was relatively easy to find. Technically, all the 2012 judgments had the result up front (due to the new ‘Judgment’ section), but I excluded this from my results. If it had been included, all judgments from 2012 would have received the maximum two points.

As the results stand, there was no clear difference between the years. In a way, this was to be expected. It would be problematic if there were many judgments with an unclear result. The one point available for making the result clear at the end of the judgment was

77 Respectively: Scandle v Far North District Council [2012] NZCA 52; Fava v Aral Property Holdings Ltd [2012] NZCA 585; Schenker AG and Schenker (NZ) Ltd v Commerce Commission [2012] NZCA 245; and Roby Trustees Ltd v Mars New Zealand Limited, above n71.

78 GN Hale & Sons Ltd v Wellington Caretakers IUOW CA158/90, 11 September 1990; at p13 (judgment of Cooke P)

79 From 1990: Balfour v Attorney-General, above n42; from 2000: Attorney-General v Hull, above n67; and from 2012: Perpetual Trust Ltd v Financial Markets Authority [2012] NZCA 308, Osborne v Auckland City Council [2012] NZCA 199

a ‘free’ point in that regard. This does not undermine the test; it simply allows room for subtracting points without ending in negative scores.80 Overall it seems that Court of Appeal judges do not yet regularly state the result near the start of the text. This is somewhat excusable due to the Judgment/Reasons feature.81

The start of judgments generally improved over the years. There was a similar level of improvement in the ‘Structure’ section also.

80 It was also intended to catch judgments that failed miserably and did not make the result clear. For example: Logan v Auckland City Council CA243/99, 9 March 2000. It was not part of my sample, but its result heading was located halfway through the judgment (headed: “Mr Logan’s position and the disposal of this appeal”). There was no table of contents, and the last paragraphs dealt with a miscellaneous point (“Bund”). This made it difficult to quickly find the result.

81 See above at “Give the result early”, p30.

Chapter 4: Structure

a) Theory

When we think about ‘writing clearly’, it is easy to overlook the importance of organisation. It is, however, vital to communication. A high-quality judgment needs a clear, logical structure, which follows the five points below.

Set out the issues clearly (and early)

A well-written judgment will usually have a section that clearly states what the issues in the case will be. Every judgment has issues – otherwise it would not make it to Court in the first place. The reader therefore needs to be able to easily see how the judge structured the issues. This is a great example, from the Cassells’ case:82

In undertaking my task I shall analyse the following aspects of s 181(2) of the Act:

(1) Its text;

(2) Its purpose;

(3) Its context; and

(4) The policy values which are reflected in the subsection.

Here the judge clearly sets out the four areas that will influence his final decision. This allows a busy reader to quickly decide if the judgment is useful or relevant. If the reader is searching for cases that discuss the purpose of s181(2), then it is immediately clear that this will be discussed. Without an issues section, the reader has to skim through the entire

82 Hutt City Council v The Lower Hutt District Court, above n39 at [11]

judgment, hoping to be able to pick up what the judge is talking about.83 Berry calls the issues section the ‘roadmap’ of a judgment.84

This ‘issues’ section also needs to come as near to the beginning as possible. If it comes too late, its impact is lost.

Use case-specific headings

In the past, many judgments were simply large blocks of text, with not a heading in sight. Now almost all judgments will have headings. A good judgment goes beyond this, by having case-specific headings.

Case-specific headings are better than generic headings such as ‘Introduction’, ‘Facts’ and ‘Law’. They can be questions, or statements – what matters is that they are written specifically for the judgment in question. Here are some of the headings of a 2012 case:85

Defective construction of a holiday home in Kerikeri
Issues on the appeal

...

Was the Council negligent in not following through on its notice to rectify?
Was the Council negligent in not checking Nationwide’s reports?

Result

Notice how the first heading is not simply ‘Introduction’, but actually describes the basic dispute in the case.86 Without knowing much about the case, you can see here an outline of the judge’s reasoning. The headings set out the issues and show the logical progression from one issue to the next.

83 On this, and on the importance of sub-headings, see Simon Adamyk "Plain legal language in the English courts" (2006) 56 Clarity 11

84 Berry, above n13 at p20

85 I also discuss this judgment later: Scandle v Far North District Council, above n77

86 Kerikeri is a town in the far north of New Zealand.

Within the realm of case-specific headings, some are better than others. Raymond argues that the judge should write headings “in language that would be intelligible to an educated non-lawyer”.87 In this study, I chose not to differentiate between case-specific headings for two reasons. First, it was quicker to simply check whether the headings (if any) are not generic. Secondly, if too many points were given for headings, then this would have been unfair for the 1990 judgments (which had no headings at all).

Order the issues clearly

Using case-specific headings also helps to order the issues clearly. A good judgment will flow from one point to the next in a clear, logical manner. There is no point in dealing with substantive arguments in depth, only for the judge to find a lack of jurisdiction in the final paragraph. If the issues are clearly defined in a ‘roadmap’ near the beginning, then the issues should be dealt with in this order, and under matching headings.

Make the judgment ‘raidable’

The intended purpose of a judgment is very important. Raymond makes a distinction between ‘readability’ and ‘raidability’.88 Readability is like reading a novel – you read the whole document, from start to finish. But in a text that is raidable, you can ‘raid’ the text for the information you need - dictionaries and legislation are prime examples.

Judgments are unique in that they must be both readable and raidable. The parties (or their lawyers) will read the whole judgment from start to finish. Other lawyers looking at the judgment later may be only interested in one specific legal issue. Therefore it needs to be easy to ‘raid’ different parts of a judgment.

87 Raymond, above n13, at p28

88 At p19

Summarise conclusions (where appropriate)

When a judgment deals with multiple, complex issues, it can sometimes be useful to summarise the conclusions. This can be done at either the start or the end. Raymond explains that “[t]he opening pages ... and the last pages, are the only two sections that will almost certainly be read.”89

He urges judges to restate the conclusions in either/both of these sections. Not all cases will need such a summary, but it can be very useful if the reader wants to quickly find the result on each particular issue (without frantically searching pages and pages of a long judgment).

For example, in the Cassells’ case about the sewage pipe, Collins J briefly restates his conclusion on each of the issues.90 When he does so, he follows the same order as in his ‘roadmap’ at the beginning. The actual conclusions to each issue should, however, be first stated in the discussion of each particular issue.

b) Test

The Structure section (B) has five questions. Question B1 asks whether the judgment sets out the issues clearly and early, for a maximum of two points. Questions B2-B4 are simple yes/no questions. They test for: case-specific headings, order of issues, and ‘raidability’. These questions can usually be answered by a quick scan of the judgment.

Question B5 asks whether there was a summary of conclusions somewhere in the judgment, for one point. This question was originally included in the ‘Start’ section, but it fits better here. In discussing the results, however, it was distinct from the other questions (B1-B4), so I consider it separately.

89 At p33

90 Hutt City Council v The Lower Hutt District Court, above n39, at [47]-[48]

c) Results

[Bar chart: Structure Total Score (B1-B4). 1990: 10; 2000: 37; 2012: 65]

Questions B1-B4 formed the bulk of the ‘Structure’ section, and showed a clear increase over the last 22 years. There were five points on offer, so each year could receive a maximum of 75 points for all of its judgments.

This graph shows very clearly the large and steady improvement in structure. 2012 fell only 10 points short of a perfect score; 1990, on the other hand, scored only 10 points in total. As the middle year (2000) received almost exactly a middle score, this highlights the recent improvement in structure. These startling results were also reflected in the individual scores.

Issues (B1)

Recent judgments were much better at clearly setting out the issues near the start (B1). Only three judgments did this in 1990, but this improved to six judgments in 2000. In 2012 on the other hand, 14 out of 15 judgments set out the issues – a clear improvement.

To get full points for this question, however, a judgment had to set out the issues both clearly and early. In 1990, only one judgment reached this standard. No judgments from 2000 did. However, in 2012, 10 out of 15 judgments set out the issues both clearly and early. This was also a very large improvement. Between 2000 and 2012, there was a huge focus on clearly stating the issues near the start of the judgment.

The 2012 judgments quite often had a paragraph similar to this:91

[17] The issues on appeal are:

(a) Should the judgment remain confidential?

(b) Should the judgment be released publicly?

(c) Should Perpetual’s name be published?

...

Setting out the issues like this really helps show the layout of the judge’s reasoning. It is very likely that clearly stating the issues helped the judge to structure the judgment overall.

Headings, order, ‘raidability’ (B2-B4)

Questions B2-B4 (headings, order, raidability) were usually linked, so they are examined together.

In 1990, only three judgments ordered the issues well; only two judgments were raidable; and only one single judgment had case-specific headings. In other words, almost all judgments from 1990 scored zero points on questions B2-B4. There was a huge improvement in 2000 however. There were 11 ‘raidable’ judgments, 11 judgments with case-specific headings and 9 judgments with clearly ordered issues.

The clearest improvement in structure was in 2012. In 2012, two judgments lacked case-specific headings; one of these also lacked order and the other was not raidable. But all of the other judgments (13 in total) received a perfect score on questions B2-B4. Although this

91 Perpetual Trust Ltd v Financial Markets Authority, above n79, at [17]

does not indicate that each judgment had perfect structure in reality, it is nevertheless a huge improvement. I will explain these results using examples, going from best to worst.

Excellent structure

An example of excellent structure is Fava v Aral, from 2012.92 In the third paragraph, Ellen France J sets out the issues:93

We first set out the background, before discussing the effect of any non-compliance with r 13.5.3 on the decision to order that security be given. We then consider whether the conduct of the lawyers is in issue in terms of r 13.5.3 at this stage. [Emphasis added]

The headings in the judgment then mirrored the issues identified in bold:

The background

...

Impact of any non-compliance with r 13.5.3

Is the lawyers’ conduct in issue at this stage?

Even to a reader unfamiliar with the legal issues, the logical order is clear. If any “non-compliance with r 13.5.3” has no impact, then the lawyers’ conduct would be irrelevant. The judgment’s headings match the issues, which helps internal consistency. We will see shortly what happens when the headings and issues do not match.94

The division of issues also aids ‘raidability’. If a lawyer wants to know what will happen if he or she does not comply with that particular rule (r 13.5.3), he or she can skip straight to that part of the judgment. In fact, Fava scored perfectly on questions B1-B4. This planned, comprehensive structure only appeared in the 2012 judgments.

92 Fava v Aral Property Holdings Ltd, above n77

93 At [3]

94 Below at ‘Inadequate structure’, at p47

Inadequate structure

In the other two years, no judgments scored perfectly on questions B1-B4; i.e. they lacked the clear, comprehensive structure that some 2012 cases had. Mostly this was due to not setting out the issues (9 cases). But there were also cases that started off well before failing B2, B3 or B4. The 2000 case Nesbit v Porter95 was the first time the Consumer Guarantees Act 1993 had been interpreted in the Court of Appeal. Mr and Mrs Nesbit had bought a four-wheel-drive vehicle, which later broke down, so they sued the seller. At paragraph [24], Blanchard J set out the issues:96

...

[Emphasis added]

The headings did not quite match the bolded issues, however:

Meaning of “ordinarily”

Loss of right of rejection

...

The second heading matches the issue statement at [b]; but the first heading does not match issue [a]: “purchased as a consumer”. This mismatch can throw the reader off, and also shows poor structure. If the issue was ‘the meaning of “ordinarily”’, then this should have been the issue stated in [a]. Or if the issue was in fact whether the Nesbits purchased as a consumer, then the heading should have been ‘Purchased as a consumer?’.

95 Nesbit v Porter, above n36

96 At [24]. I have added the bold simply to make the issue easier to see quickly.

Lack of structure

The 1990 judgments scored incredibly low on the ‘Structure’ section. This is likely due, in part at least, to the near total lack of headings: only one judgment had proper case-specific headings.97 This lack of headings meant it was much harder to see clear evidence of a good structure. This may be a bias in the test, but I think the structure of 1990 judgments really was inadequate. Even the one judgment with headings still only received two of the available five points for questions B1-B4. If anything, the headings in that case made it easier to see the lack of order in the issues.

The highest-scoring 1990 judgment, Balfour v Attorney-General,98 had three headings, which stated the three separate areas of law: causation; breach of statutory duty; and negligence. While this was not enough to secure a point for having case-specific headings, it did receive points for both order of issues and ‘raidability’. Most of the other 1990 cases had no headings at all and received no points in Section B. The lack of headings made it much harder to see how raidable or well ordered the judgment was.

Summary of conclusions (B5)

There was a slight difference in the summary of conclusions on each issue (B5), but nowhere near the difference in the main Structure questions. None of the 1990 cases had a summary (at any point in the judgment). But there was a summary in two judgments from 2000, and four from 2012. Although not all cases need a summary, it is encouraging to see an increasing acceptance of their use. Summaries were useful when reading these judgments – and would be very useful when re-reading judgments for later study (as lawyers, judges and law students often do).

97 Commissioner of Inland Revenue v Brierly, above n34

98 Balfour v Attorney-General, above n42

For example, in Law v Tan, the judge included a summary in the second-to-last paragraph:99

[85] In summary we find:

(a) Body Corporate r 2.1(e) is ultra vires.

(b) In consequence, default rule 1(e) in the Second Schedule of the Unit Titles Act 1972 applies.

(c) Body Corporate r 2.2(g) is ultra vires.

...

Coming after the preceding 84 paragraphs, the summary reminds the reader of the important conclusions on each issue (although out of context they seem unintelligible). As there were five separate issues in total, this summary was a great reminder for the reader.

Overall there was a huge improvement in structure, but the improvements in style were not so clear cut.

99 Law v Tan Corporate Trustee Ltd [2012] NZCA 620

Chapter 5: Style

a) Theory

Writing style is often the most obvious and most examined feature of clear writing. For this study, I divided ‘Style’ into two areas, as each has a different focus and a different method of measurement: first, specific style requirements for judgment writing; and second, writing style generally.

Judgment style

Use the parties’ names

Judgments should use the parties’ names, and not simply ‘appellant’ or ‘respondent’. Doing this makes the facts much easier to understand,100 and is also more respectful to the parties. This is now standard writing advice to judges.101

Avoid string cites

This is not a huge bar to readability, but it is important nonetheless. A string citation cites multiple authorities one after another:102 e.g. Ben Nevis v CIR [2008] NZSC 15; Penny and Hooper v CIR [2011] NZSC 95; Alesco NZ Ltd v CIR [2013] NZCA 40.

Your eyes probably skipped over those citations. Yet some judges and lawyers will cite a long string of cases mid-sentence – making it very difficult to read the sentence in one go.

100 In very rare instances a generic label is acceptable, but only if there is no chance of confusion for any reader.

101 George, above n14, at p41; Raymond, above n13, at p47

102 See generally Mark Cooney "Stringing Readers Along" (2006) 85(12) Michigan Bar Journal 44

String cites are really only useful as a reference; they should not interrupt the flow of the text.

Avoid or summarise block quotations

Long, block quotations have been a mainstay of legal writers for many years. Readers, on the other hand, tend to skip over any block quote of more than a few lines. Judgments frequently contain very long quotes, thrown into the text with little or no explanation. This leaves even a trained legal reader wondering what relevance the quote has to the judge’s reasoning.

Raymond advocates better treatment of long, block quotations. He recommends that, instead of inserting the block quote, writers should simply paraphrase the main idea. If this is not possible, the writer should provide either a summary of the quote, or of the inference the reader should draw.103 For this study, I call this idea ‘integrating’. Berry agrees, saying: “Quotations are like paintings: they need to be framed”.104 Côté goes further, writing that quotes should not be included unless “brief, and wonderful or amazing”.105

If included, the ‘framing’ or ‘integrating’ of a quote will depend on the quote itself, and how it fits into the body of the judgment. Clear summaries of facts from the lower court can be quoted to save time. But almost every other type of quote should be integrated well. The judge may have struggled to understand a badly written contract or statute, but that struggle should not be inflicted on the reader as well.

103 Raymond, above n13, at p53

104 Berry, above n13

105 Côté, above n14, at p80

Some might argue that legal tests must be set out in their entirety. This is fine – if they are then integrated and applied. Of course, some ‘tests’ would benefit greatly from paraphrasing, such as this one:106

The word 'income' is not a term of art, and what forms of receipts are comprehended within it, and what principles are to be applied to ascertain how much of those receipts ought to be treated as income, must be determined in accordance with the ordinary concepts and usages of mankind, except in so far as the Statute states or indicates an intention that receipts which are not income in ordinary parlance are to be treated as income, or that special rules are to be applied for arriving at the taxable amount of such receipts.

This quote basically says: ‘income means what it means in ordinary conversation, unless overridden by a specific statute’. The quote was originally from a 1935 case, but was quoted 50 years later in Reid v Commissioner of Inland Revenue without any real integration. So anyone reading Reid has to struggle to decipher a 95-word sentence just to be able to follow the judge’s reasoning.

A readable judgment will paraphrase block quotes where possible. Any quotes that are included will be integrated well.

Writing style

Use common words

Judgments should be written for the parties and the public, not just for lawyers. Legal writers are often guilty of needlessly using complex words like ‘parlance’, ‘inimical’ and ‘otiose’. 107 These words make most readers reach for a dictionary – when simpler words would do.

106 Reid v Commissioner of Inland Revenue [1986] 1 NZLR 129; citing Scott v Commissioner of Taxation (1935) 35 SR (NSW)

107 Respectively, these mean (roughly): ‘Talk’, ‘hostile’ and ‘unnecessary’.

Judges and lawyers also tend to use Latin without good reason. A common phrase is ‘inter alia’ – instead of ‘among other things’. This stops many readers from understanding the sentence, and can make them feel unintelligent or uneducated. Some foreign words are ‘terms of art’ and are acceptable; for example: estoppel, detinue, and actus reus. But most foreign words are not terms of art, and can be easily replaced by an English word or phrase.

In this paper, I use the term ‘jargon’ to refer to both foreign and complex words that could easily be replaced. I did not test for words or language like ‘prior to’ instead of ‘before’; or ‘pursuant to’ instead of ‘under’. It is better to avoid language like this, but this type of legalese is not fatal to understanding, so I have ignored it.

Use shorter/simpler sentences

One of the most commonly identified problems with legal writing in general is the tendency to write long or complex sentences. Generally a sentence with over 20 or 25 words is likely to be harder to read.108 ‘Likely’ is the key word in that last sentence. As Raymond points out, judges are perfectly capable of writing readable sentences of 40 words or more. Unfortunately, these eloquent long sentences are the exception, not the norm.

Raymond’s general rule is “if you can’t write a good long sentence, write short ones”.109 In my study, I looked for long, clumsy sentences that require re-reading (or are otherwise confusing). I did not count long sentences that were well constructed and easy to understand.110

108 Bryan A. Garner Legal Writing in Plain English (University of Chicago Press, Chicago, 2001)

109 Raymond, above n13 at p56

110 Raymond identifies these good sentences as being constructed ‘cumulatively’. That is, they are added together, but could be split into shorter sentences with few problems.

Use lists

Older judgments are often guilty of attempting to set out a list in one long, clumsy sentence. When a list of conditions or findings is necessary, it is best to separate them visually in some way, although overuse should be avoided.

Use the active voice

The active voice is direct – ‘the dog chases the ball’. The passive voice is longer and more confusing – ‘the ball is chased by the dog’. There are valid uses for the passive voice, but legal writers are often guilty of abusing it. Judges should avoid the passive unless necessary.

b) Test

The test also divides Style into two sections: Judgment style (C) and Writing style (D).

Judgment style

Section C covers the judgment-specific style choices discussed above: using the parties’ names (C1); avoiding string cites (C2); and the treatment of block quotes (C3 and C4).

These four elements (C1-C4) are yes/no questions, making it easy to scan the judgment for them. C3 asks whether there were too many block quotes; and C4 asks whether most block quotes were integrated well. This may not have been the best choice of phrasing – we will see this in the results.

Writing Style

Section D is primarily a negative section – most judgments will only lose points here. In questions D1-D4, a judgment will lose one point for having at least one of each of the following categories of writing mistake: unnecessary jargon or legalese (D1);111 overlong sentences (D2); badly-worded lists (D3); and confusing use of the passive (D4).

At worst a judgment can lose four points, even if it has many instances of the same mistake. For example, a judgment that has one jargon word and five long sentences will only lose two points, as it only has the two types of mistake. This method of scoring may seem a little haphazard, but it allows the tester to stop searching for each type of mistake after finding one instance. This is more convenient than combing the text for each instance of bad writing. It also mostly avoids the effects of the judgment’s length.112
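The capped scoring rule described above can be sketched in a few lines of code. This is only an illustrative sketch – the function and category names are invented here, not part of the Clarity Test itself:

```python
# Illustrative sketch only: the function and category names are
# invented; they are not part of the Clarity Test itself.

def section_d_points_lost(mistakes):
    """mistakes maps a category of writing mistake (D1-D4) to the
    number of instances found in the judgment."""
    categories = ["jargon", "long_sentences", "lists", "passive"]
    # One point is lost per *category* present, however many instances.
    return sum(1 for c in categories if mistakes.get(c, 0) > 0)

# One jargon word and five long sentences: still only two points lost.
print(section_d_points_lost({"jargon": 1, "long_sentences": 5}))  # 2
```

Because the score depends only on whether a category is present, the tester can stop searching a judgment for a given mistake as soon as one instance is found.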

Question D5 is an overall judgment of writing style, with three possible responses: ‘exceptionally clear’, ‘average’, or ‘terrible’.

Most judgments will be ‘average’ – as in fact most of the judgments in my study were. Only the best and worst cases will fall into either of the other categories.

In theory, questions D1-D4 will make a broad assessment of the writing style. Anything too far beyond that will be caught by D5. The test is not very sophisticated in this regard,

111 The tester should also note down any words classified as jargon (question F3).

112 A 60-page judgment would likely have more mistakes than a 10-page one; so a ratio in some form could be useful. This idea could be explored in future studies, if greater accuracy is needed.

but I think it is adequate to show the general changes. In fact, even with this limitation, the improvement in writing style was still apparent.

c) Results

There was evidence of slight improvements in style over the years, but overall there was less change than in the ‘Start’ and ‘Structure’ sections.

Names and string cites (C1-C2)

There was an improvement in using parties’ names (C1). The worst year was 1990, with only 9 judgments using the parties’ names. This did increase, however, to 11 in 2000 and to 14 in 2012. Unsurprisingly, the only 2012 judgment that did not use the parties’ names scored only 8 points overall.113 The overall change, from 9 to 14, was a clear improvement, and the 2012 judgments no doubt benefitted from using the parties’ names. But it seems that this was not a major issue even for 1990 judges; they used names in over half of their judgments.

The use of string cites (C2) probably showed the least difference between the years. 1990 had four judgments that interrupted text with string cites, whereas no judgments from 2000 or 2012 did. String cites do not seem to be a big problem in New Zealand judgments; especially now that citations often appear in footnotes (rather than in the actual text). This is a positive sign, and hopefully holds true for other courts in New Zealand.

Block quotes (C3-C4)

Judges were generally restrained in the number of block quotes used (C3). This was one of the more subjective questions in the test, and the results were not particularly

113 Ngai Tai Ki Tamaki Tribal Trust v Karaka, above n50

informative. From my analysis, there were three judgments in 1990 that had an overwhelming excess of block quotes. In 2000, this decreased to one judgment; and in 2012 there were none. In hindsight, it would have been more useful to collect accurate data on the length and number of block quotes in each judgment. Overall, I would suggest that my results do not show any obvious change.

There was little change in the integration of block quotes either (C4). Roughly half of all judgments tested integrated their block quotes well. Six cases in 1990 integrated well; this increased to seven in 2000 and eight in 2012. This could suggest that there has been a slight improvement in the use of block quotes over the last 22 years. However, I think the results instead reflect a flaw in my method. Most other questions were yes/no, but C3 and C4, although yes/no in appearance, were actually quite subjective. A different tester may well have scored the judgments differently. Overall, however, the use of block quotes did not affect my results too much. There were only two points to be lost or gained, so questions C3 and C4 did not skew the overall results.114

Writing style - mistakes (D1-D4)

There was little change in the use of the passive, but the other three categories show a steady, if not drastic, improvement. A judgment could lose up to three points for questions D1-D3 (jargon; long sentences; lists). With 15 judgments, each year could lose up to 45 points. There was a slight decrease in the number of mistakes:

114 Judgments that did lose points on questions C3-C4 were usually in the lower range of scores anyway.
[Figure: ‘Writing Mistakes (D1-D4)’ – bar chart of total mistakes per year: 1990: 35; 2000: 29; 2012: 19]

This shows the decrease in the number of mistakes, from 35 to 19 over the 22 years. Judges seem to be writing more clearly, but not by a wide margin. These figures also do not take into account how many of each type of mistake were present. The individual results follow this slight trend.

Jargon/legalese

There was a decrease in the number of cases using jargon (D1) in 2012. In both 1990 and 2000, only three cases were free of jargon. But in 2012, seven judgments had no unnecessary jargon. This suggests that current judges have become better at avoiding foreign words. Interestingly, there seemed to be little change between 1990 and 2000.

Of the jargon judgments, some had only one jargon word; others had up to six different words. There was a slight shift away from non-English words such as uno flatu, ex facie, a fortiori and sui generis.115 I have included tables of all the jargon words I found in Appendix C.116

115 The number of foreign words for each year was, in order; 6; 6; and 4.

The most common jargon word over all the years was inter alia, appearing in six judgments. Other words with two or three appearances were ‘ameliorate’, ‘apposite’, ‘inimical’, and ‘sui generis’. There were some quite unusual words that found their way into various judgments, for example: ‘contumacious’, ‘ex hypothesi’, ‘thitherto’, and ‘simpliciter’. However, there were also some unusual words that I did not classify as jargon, as I could not easily replace them with more common words: ‘caveat’, ‘quantum’ and ‘Admiralty action in rem’ (among others).

The 1990 judgment AMP v Groves had the most jargon words: seven in total.117 These were: ‘casuistry’, ‘parlance’, ‘corpulent’, ‘espouse’, ‘antithesis’, ‘indubitably’, and ‘sensitisation’. It also had two jargon words that were included but explained well: “novus actus interveniens”118 and “Serbonian Bog”.119 Under the test, this judgment lost only one point for jargon – the same as any judgment with just one instance of jargon. So it could be said that the test is not sophisticated enough in this regard. However, despite the many jargon words, it had a decent introduction and structure. Overall, AMP v Groves scored relatively well for a 1990 judgment (5 points). This shows that a failure in one element does not completely ruin a judgment’s score.

Sentence length

The clearest improvement in writing style was in the use of long sentences (D2). All of the 1990 judgments had at least one long, confusing sentence. Only 12 of the 2000

116 The total number of jargon words for each year was, in order; 24; 25 and 13.

117 AMP Fire and General Insurance Co (NZ) Ltd v Groves CA8/90, 1 March 1990

118 At pp 5-6: “if the surgery is not related to the accident, it becomes a new proximate cause, a novus actus interveniens”

119 This phrase was referred to in a block quote. Hardie Boys J then immediately explained, at p 12: “(We are indebted to Mr Shelton-Agar for informing us that "Serbonian bog" is Milton's name (Paradise Lost II

592) for Lake Serbonis in Lower Egypt, a marshy tract now dry, covered with shifting sand: see the Oxford English Dictionary 2nd ed).”
judgments (80%) lost a point on this question. In 2012, this decreased to only 10 judgments (67%).

[Figure: ‘Overlong sentences (D2)’ – judgments with at least one overlong sentence: 1990: 15; 2000: 12; 2012: 10]

While most of the 1990 judgments had more than one long sentence, the 2012 judgments often only had one ‘borderline’ sentence. My method does not reflect this, but it was my overall feeling when testing the judgments.

I found that sometimes a judgment would be written relatively well, but only until the analysis. Even in older cases, the facts and the lower court’s decision might be set out in relatively simple language. But when the judge had to wrestle with complex concepts of law, the writing style suffered – particularly the sentence length. For example, in a 1990 case, Sadowski v Oakleigh Breeding (No 2) Partnership,120 Casey J starts out with relatively short sentences. In explaining his conclusion, however, he resorts to a 110-word sentence, which must be read about three times to understand.121

120 Sadowski v Oakleigh Breeding (No 2) Partnership CA216/90, 11 December 1990

121 At p10: “Against this background and having regard to Mr McKenzie's professional qualifications and to his dealings with the applicant, the only plausible view is that he was responsible for the introduction of the

In hindsight, it may have been more useful to count and pinpoint the number of overlong sentences. This would have made it much easier to show examples of what the tester considered overlong. The count would also have shown whether there was any major difference between the years. One advantage of not doing this was the speed at which I could analyse each judgment.
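A count like this would be straightforward to automate. The following sketch is invented for illustration only: it flags sentences over the 25-word guideline discussed earlier, and its naive splitting on full stops would need refinement for legal text, where abbreviations such as ‘Mr’ or ‘r 13.5.3’ also contain full stops.

```python
import re

# Illustrative sketch only. The naive splitting on full stops would
# misfire on legal abbreviations ("Mr", "r 13.5.3"), so a real study
# would need a proper sentence tokeniser.

def overlong_sentences(text, limit=25):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [s for s in sentences if len(s.split()) > limit]

sample = ("We first set out the background. "
          "Against this background, and having regard to the evidence "
          "before us and to the dealings between the parties, the only "
          "plausible view is that the appellant was responsible for the "
          "introduction of the parties and the preparation of the contract, "
          "either alone or in conjunction with his business partner.")
print(len(overlong_sentences(sample)))  # 1
```

A tool like this would give the raw count per judgment; the tester would still need to judge which flagged sentences were genuinely confusing rather than well-constructed ‘cumulative’ sentences.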

Although the use of overlong sentences seemed to decrease, there were still many instances across all years. This was how one of the issues was stated in the ‘roadmap’ of the 2012 case, Ngai Tai Ki Tamaki Tribal Trust v Karaka:122

To the extent the jurisdiction arose other than under s 72 did the Court err in the exercise of its discretion to determine what was a just and reasonable remuneration in all the circumstances, including by failing to give due weight to Mr Stevens’ shortcomings or by giving undue weight to his meritorious conduct?

An overlong sentence in a different area may have made little difference to the overall readability. But the above sentence seriously impedes the reader’s understanding of one of the main issues.123

Lists

The treatment of lists (D3) seems to have improved over the years. There were 7 badly-worded lists in 1990. This decreased to 4 in 2000 and to just one in 2012. This may be due, in part, to improvements in writing technology over this period, making it easier to

parties and the preparation of the contract either alone or in conjunction with Mr Holloway; that he was not a paid adviser or agent of the respondent, but in respect of this contract was employed by it under a management agreement of the kind described in the prospectus; and further, that far from being a paid agent or adviser, Mr Holloway was a partner himself in the respondent and closely associated with Mr McKenzie in its management structure.”

122 Ngai Tai Ki Tamaki Tribal Trust v Karaka, above n50

123 Despite the unclear wording, the judgment did still receive two points for setting the issues out clearly and early (B1) – the other issues were defined well.

separate lists visually. This better visual separation made the more recent judgments easier to understand, as their lists usually included numbers or bullet points.

The 1990 judgments on the other hand, often had sentences like this:124

However, because the payments in question do not on the evidence before us take priority under ss 101 and 308 of the Companies Act, and the compliance order against Mr Cormack was made on the basis that they do; and because in the circumstances it is clearly inappropriate to order compliance by Mr Cormack personally on any other grounds, we conclude that the order cannot stand.

This long sentence is not the clearest list of reasons (although it can be understood after re-reading). If the judge had separated the reasons into digestible portions, it would have been much easier to read:

We conclude that the order cannot stand because:

  1. the payments in question do not on the evidence before us take priority under ss 101 and 308 of the Companies Act;
  2. the compliance order against Mr Cormack was made on the basis that they do; and
  3. in the circumstances it is clearly inappropriate to order compliance by Mr Cormack personally on any other grounds.
This rewording also puts the important information at the front (that the order cannot stand).

Passive

I only found two judgments that had confusing passive sentences (D4). This does not necessarily reflect the writing style of the judges; it is likely due to a flaw in my method. In the read-through required by questions D1-D4, I found it much easier to spot long sentences, lists and jargon words. However, it was difficult to quickly pick up use of the passive; it simply was not very obvious in a quick read-through.

124 Quik Bake Products Ltd (in rec) v New Zealand Baking Trade Employees IUOW CA46/90, 16 August 1990, at p14

For future studies, I would recommend either removing or altering the question. For example, it may be easier to only check the ‘arguments’ section; where the judge sets out each side’s submissions. Judges will often write “It was submitted by the appellant that...”. Focussing the search efforts on this smaller area (the arguments for each side) could be more instructive. Another possibility would be using a computer program to highlight instances of the passive. The tester could then decide whether each instance is problematic or not. As the Clarity Test stands, it was not particularly useful in examining abuse of the passive.
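As a sketch of the computer-assisted approach suggested above (again invented purely for illustration, and deliberately crude): a simple pattern can flag a form of ‘to be’ followed by a likely past participle, leaving the tester to judge each hit.

```python
import re

# Illustrative sketch only. This crude pattern flags "to be" plus a
# likely past participle; it matches regular "-ed" forms and a few
# common irregulars, and it cannot tell true passives from adjectives,
# so every hit would still need the tester's judgement.

PASSIVE = re.compile(
    r"\b(am|is|are|was|were|be|been|being)\s+"
    r"(\w+ed|made|given|held|found|brought|written|taken)\b",
    re.IGNORECASE,
)

def flag_passives(text):
    return [m.group(0) for m in PASSIVE.finditer(text)]

print(flag_passives("It was submitted by the appellant that the order "
                    "cannot stand. The dog chases the ball."))
# ['was submitted']
```

Run over the ‘arguments’ section of a judgment, a highlighter like this would at least concentrate the tester’s attention on the candidate sentences.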

Writing style - overall (D5)

There were few judgments that stood out in terms of writing style: I classified almost all of the judgments as having an ‘average’ writing style. Only one judgment, a 1990 case, had what I felt was a ‘terrible’ style.125 There were no exceptionally clear judgments in 1990 or 2000. In 2012, however, there were four judgments with exceptionally clear writing styles.126 Two of these were by the same judge: Justice Miller.127 The four exceptionally clear judgments also received a point for describing the facts very clearly (question A4, above). But even these well-written judgments still had at least one or two mistakes.

There was only one judgment that had no mistakes at all in questions D1-D4: Scandle v Far North District Council.128 Ironically, it was let down by its start and structure; otherwise it could have received an almost perfect score. In Scandle, the late Justice

125 Which interestingly only had two types of mistake: jargon and long sentences: Appliance Industries Ltd v Neeco Ltd CA37/90, 28 September 1990. For the record, I still have basically no idea what that judgment was about.

126 Fava v Aral Property Holdings Ltd, above n77; Roby Trustees Ltd v Mars New Zealand Limited, above n71; Schenker AG and Schenker (NZ) Ltd v Commerce Commission, above n77; and Scandle v Far North District Council, above n77

127 Roby and Schenker

128 Scandle v Far North District Council, above n77

Chambers displayed his distinctively clear writing style.129 This was the judgment that started with the heading: “Defective construction of a holiday home in Kerikeri”.

Notice the short, clear sentences. Each one contains a single idea. Of course, he is not afraid to mix up sentence length, adding longer ones occasionally to improve the flow of the judgment. But overall he expresses himself in simple, everyday language. The house in question was later bought by Mr Scandle, the appellant. He sued a number of parties and was awarded damages of over $450,000. However, as Chambers J puts it: “he has not seen a cent of it. ... The only player with deep pockets was the Council. But had it done anything wrong? Mr Scandle said it had”.130

When you read writing like this, it is hard to imagine it being written any more clearly. If Scandle had been written in 1990, however, the judgment would likely have described how the appellant alleged breach of statutory duty and negligence by the respondent relating to breaches of Council inspection procedures. In contrast to this dry, dull writing, Chambers J consistently wrote simply and clearly.

Scandle also highlights the importance of not simply looking at writing style to measure readability. Another test, such as a readability formula, may have given it a very high score. But, despite the short words and sentences, Scandle had its flaws. It took until paragraph [7] for the reader to find out that “Mr Mullane built a house that did not comply with the amended design or building code”. And it is not until paragraphs [10]

129 Sadly, Justice Robert Chambers passed away this year aged 59. He had recently been appointed to the Supreme Court of New Zealand. His contribution to the law of New Zealand will be sorely missed.

130 Scandle v Far North District Council, above n77, at [10]

and [11] that the basis for Mr Scandle’s case is made clear (negligence and breach of statutory duty).

The structure was also not perfect. At paragraph [12], under the heading “Issues on the appeal”, Chambers J explains that it was hard to precisely determine the issues. But he does not then explain what the issues were in the end. He has divided the case into issues, but he does not make this explicitly clear. A reader might commiserate with the judges, who clearly faced a formidable challenge in identifying the relevant issues. But a reader in a rush would wonder why the judge did not simply begin paragraph [12] by saying: “The issues are X and Y. It was hard to exactly isolate these issues, because...”

The Scandle judgment demonstrates the importance of not measuring readability by looking at just one or two elements. The whole judgment needs to be looked at in a comprehensive way. I believe the Clarity Test does this.

Chapter 6: Recommendations

a) Lessons for judges

The real value of the Clarity Test lies in the lessons we can learn from analysing judgments. Although the overall results were positive, there are still areas that judges can work on.

Better introductions

Despite a big improvement, introductions could still be better. Every judgment from 2012 made the story and issue clear to a lawyer by the end of the first full page. But judges have little control over who reads their judgments – it is not just lawyers. Nonlawyers may need to read and understand a particular judgment: their business, family, savings or even freedom could be on the line. There needs to be a greater focus on making both the story and issue clear to a nonlawyer, as only two recent judgments did this successfully.131

This leads on to the next issue: judges need to get to the point more quickly. Sometimes, focussing on the story can backfire, as in Scandle, where the judge took too long to state the issue. Readers need context, but not too much – it is a fine line. Two-thirds of the most recent judgments still had unnecessary detail in the opening pages, so this could be improved.132

131 Roby Trustees Ltd v Mars New Zealand Limited, above n71; and Schenker AG and Schenker (NZ) Ltd v Commerce Commission, above n77

132 Of course, it can be hard to decide what is and is not ‘necessary’. But if a judgment begins immediately with the story and issue, this sets the reader up very well for the rest of the judgment.

Clarify issues

There was a huge improvement in the structure of judgments. Some judgments could still benefit, however, from setting out the issues more clearly and earlier. There were still five judgments (a third) that did not do this as well as they could have.

Avoid style mistakes

The results of the Style section paint a mixed picture. On one hand, some current judges can write exceptionally clearly – at times clear enough for a nonlawyer to understand.133 On the other hand, judges still make the same mistakes. The two main mistakes are overlong sentences and unnecessary legalese. Mistakes like these are not always a complete bar to understanding – but they make it harder for less capable readers. No matter who we are, we have all been less capable readers at some point in our lives. Completely removing mistakes is difficult, but it is possible.134

Integrate block quotes

The biggest remaining flaw in judgments lies in the integration of block quotes. Just under half of the most recent judgments did not integrate well. They copied and pasted. They repeated other judges’ words without summarising or paraphrasing them. They simply did not write with the reader in mind. This is an area which could definitely be improved – although it may be the most difficult to improve in practice.

Main opinion first

When multiple judges write separate opinions, the main opinion should come first. I had originally intended to examine this in my study, but it seems that separate opinions have become far less common in the last 22 years. Only 6 judgments from 45 in my sample

133 Notably Miller J. See Roby Trustees Ltd v Mars New Zealand Limited, above n71; and Schenker AG and Schenker (NZ) Ltd v Commerce Commission, above n77.

134 Although it is arguable that such an excellent style can come at the cost of Start and Structure. See Scandle v Far North District Council, above n77.

had separate opinions – all from 1990. This made comparing the years impossible, in this study at least.135

I consequently removed this section from the main body of my study. I have included some extra information in Appendix D (including further questions for the Clarity Test).

As I could not get an accurate picture of the current treatment of separate opinions, I would simply recommend that the main opinion should always come first in a judgment. The introduction and the facts must be in the first paragraphs, not buried somewhere in the middle. It seems that in the Supreme Court, however, there is a convention that the Chief Justice’s judgment goes first. Unfortunately this causes problems when the Chief Justice dissents.136 A reader must either skip over her judgment (and risk not returning), or read her judgment without knowing the facts and background. Either way, an otherwise clear opinion can be muddled by something as simple as its order.137

The treatment of separate opinion judgments could, and should, be explored in future studies.

b) Future research

I believe that there is scope for more research and testing of judgments – both in New Zealand and worldwide. Academics need a reliable, simple way of measuring and

135 When analysing the six separate opinion judgments, I took the lowest score if there was conflict. If the separate judgments were short, I only counted Style mistakes (and ignored them for Start and Structure). These six judgments were: Hawkins v Davison CA215/90, 21 December 1990; Ryan v Hallam, above n62; GN Hale & Sons Ltd v Wellington Caretakers IUOW, above n78; Chatfield v Jones CA231/89, 2 May 1990; AMP Fire and General Insurance Co (NZ) Ltd v Groves, above n117; Re Ham CA144/89, 29 March 1990.

136 I make no comment on the frequency with which the current Chief Justice dissents. Indeed, her dissents are often clear and readable – but they lose impact by coming first.

137 For example, see Ben Nevis Forestry v Commissioner of Inland Revenue [2008] NZSC 15.

comparing the clarity of judgments. My hope is that the Clarity Test goes some way to fulfilling this goal. However, certain elements of the test did not perform as well as intended. If you are reading this, trying to find inspiration for a research topic, then consider using the Clarity Test. If you do, here are a few suggestions gleaned from my experience with the test.138

Introductions

There may be room for more scoring levels for unnecessary detail, to avoid penalising introductions that contain only one small piece of it. Perhaps the question’s focus could be shifted: e.g. “Were any precise details given before the context necessary to understand them?”

Alternatively, the unnecessary detail question could be split into two, to cover both structure and style. Excessive stylistic detail would include precise dates and facts which have little or no relevance. Excessive structural detail would focus on whether any paragraphs or topics could be shifted to later in the text.

Headings

If all judgments in a sample set have headings, then there could be a greater focus on the headings themselves. For example, the test could ask whether a nonlawyer would understand the headings. A heading would fail this question if it uses a lot of legal terms or otherwise requires detailed legal knowledge to be understood.

138 Also keep in mind that the test should be tailored to the intended sample of judgments. The questions and possible responses will need to be adapted to fit judgments from other countries or even other courts in New Zealand.

Block quotes

The analysis of block quotes did not work as intended. Questions C3 and C4 should be altered to more accurately measure the treatment of block quotes over the years. I see two possible solutions, depending largely on the sample of judgments to be studied.

First, if a sample includes some judgments with no block quotes at all, then this should be rewarded. This could be done by simply asking whether there were any non-integrated block quotes at all. This would be easy to measure, and would account for judges who successfully avoid block quotes altogether.

Secondly, if all judgments in a sample have some block quotes, then a ratio could be used. For example, judgments that have one block quote per ten pages will be easier to read than those that have five quotes per ten pages. A ratio like this would need to be adapted to the sample of judgments selected. This would not be as strict as the first option, but may be more useful for measuring how judges actually write.

Writing mistakes

A few judgments had just one mistake, while others had five or more. If judgments of a similar size were being compared, then this section could be changed to account for the number of mistakes in each. Otherwise a ratio may be needed for judgments of different lengths. Lists, long sentences and jargon were easy to find. Passive sentences were not. The passive question (D4) should be either removed, or answered on a separate read-through.139

The tester could also examine not just jargon (inter alia, contumacious) but also other, milder instances of legalese. For example, using ‘prior to’ instead of ‘before’, or ‘pursuant to’ instead of ‘under’. As mentioned above, words like these detract from

139 Perhaps the tester should only consider the ‘submission’ section. This is often where the judge is most likely to write in the passive: “It was submitted by the respondent that...”

readability, without totally preventing understanding. They were not measured in my study, but could be considered in future studies (possibly by searching each judgment for a list of the most troublesome/common words).

I hope that these suggestions aid further research. The Clarity Test is not perfect, but it is a step in the right direction. If we are serious about improving judgments, then we need to also be serious about measuring their clarity. Only when we have clear, accurate data can we fully understand how judges have written in the past, and how they write now.

Conclusion

The first and the last pages of a text are usually the only ones that get read. So, if you skipped here from the first page, let me summarise for you.

Recent judgments are clearer and more readable:

But:

The Clarity Test worked well as a tool for measuring the clarity of judgments. It may need refinement and improvement, but the basic concept is sound. I hope that it can be used to further our understanding of judgment writing. The more we understand judgment writing, the more we can do to improve it. Better, clearer judgments benefit everyone, and everyone needs to be able to understand the law.

Judgments are the law – so if we can understand judgments, we can understand the law.

Bibliography

  1. Cases

a) General Cases

Hutt City Council v The Lower Hutt District Court [2013] NZHC 706.

Ben Nevis Forestry v Commissioner of Inland Revenue [2008] NZSC 15.

Logan v Auckland City Council CA243/99, 9 March 2000.

Mallowdale Enterprises Ltd v Commissioner of Inland Revenue [2011] NZHC 4; (2011) 25 NZTC 20-024.

Mars New Zealand Ltd v Roby Trustees Ltd HC Auckland, CIV-2011-404-4613, 7 December 2011.

Reid v Commissioner of Inland Revenue [1986] 1 NZLR 129.

Varney v Anderson [1988] NZCA 11; [1988] 1 NZLR 478.

b) 1990 Sample Cases

AMP Fire and General Insurance Co (NZ) Ltd v Groves CA8/90, 1 March 1990.

Appliance Industries Ltd v Neeco Ltd CA37/90, 28 September 1990.

Balfour v Attorney-General CA170/89, 12 October 1990.

Challenge Realty Ltd v Commissioner of Inland Revenue CA1/90, 19 July 1990.

Chatfield v Jones CA231/89, 2 May 1990.

Commissioner of Inland Revenue v Brierly [1990] NZCA 393; [1990] 3 NZLR 303.

GN Hale & Sons Ltd v Wellington Caretakers IUOW CA158/90, 11 September 1990.

Hawkins v Davison CA215/90, 21 December 1990.

Quik Bake Products Ltd (in rec) v New Zealand Baking Trade Employees IUOW CA46/90, 16 August 1990.

Rainbow Corporation Ltd v Ryde Holdings Limited CA168/90, 13 November 1990.

Re Ham CA144/89, 29 March 1990.

Ryan v Hallam CA295/89, 30 August 1990.

Sadowski v Oakleigh Breeding (No 2) Partnership CA216/90, 11 December 1990.

Timbercraft Industries Limited v Otago and Southland Federated Furniture and Related Trades IUOW CA173/89, 1 August 1990.

Westpac v Merlo CA73/90, 28 November 1990.

c) 2000 Sample Cases

Attorney-General v Hull CA41/99, 29 June 2000.

Attorney-General v McLennan CA41/00, 7 December 2000.

Attorney-General v Rodney District Council CA274/99, 18 September 2000.

Belman Holdings Ltd v Edgewater Motel Ltd CA143/00, 25 October 2000.

Farrelly v Gruar CA104/00, 20 December 2000.

Harvey v Hurley CA189/99, 2 March 2000.

Kirk v Vallant Hooker & Partners CA18/99, 29 February 2000.

Motor Vehicle Dealers Institute Inc v Auckland Motor Vehicle Disputes Tribunal CA67/00, 31 July 2000.

Nesbit v Porter CA165/99, 20 April 2000.

Residential Care (NZ) Inc v Health Funding Authority CA170/99, 17 July 2000.

Robert Bryce & Co Ltd v Stowehill Investments Ltd CA08/00, 24 August 2000.

Sea-Tow Ltd v Grey District Council CA146/99, 15 June 2000.

Vickery v McLean CA125/00, 20 November 2000.

Walop No 3 Ltd v Para Franchising Ltd CA5/00, 22 March 2000.

Wood v Christchurch Golf Club Inc CA254/99, 23 May 2000.

d) 2012 Sample Cases

Commerce Commission v Visy Board Pty Ltd [2012] NZCA 383.

Commissioner of Inland Revenue v Stiassny [2012] NZCA 93.

Fava v Aral Property Holdings Ltd [2012] NZCA 585.

General Marine Services Ltd v The Ship "Luana" [2012] NZCA 374.

Johnston v Schurr [2012] NZCA 363.

Law v Tan Corporate Trustee Ltd [2012] NZCA 620.

Ngai Tai Ki Tamaki Tribal Trust v Karaka [2012] NZCA 268.

Osborne v Auckland City Council [2012] NZCA 199.

P v Bridgecorp Ltd (in rec & in liq) [2012] NZCA 530.

Perpetual Trust Ltd v Financial Markets Authority [2012] NZCA 308.

Roby Trustees Ltd v Mars New Zealand Limited [2012] NZCA 450.

Russell v Commissioner of Inland Revenue [2012] NZCA 128.

Scandle v Far North District Council [2012] NZCA 52.

Schenker AG and Schenker (NZ) Ltd v Commerce Commission [2012] NZCA 245.

W v S [2012] NZCA 166.

2. Books and Chapters in Books

Mark Adler Clarity for Lawyers (2nd ed, The Law Society, London, 2007).

Michele M Asprey Plain Language for Lawyers (4th ed, The Federation Press, Sydney, 2010).

Edward Berry Writing Reasons: a handbook for judges (3rd ed, E-M Press, Ontario, 2007).

J.E. Côté The Appellate Craft (Canadian Judicial Council, Ottawa, 2009).

Mark Duckworth "Clarity and the Rule of Law: The Role of Plain Judicial Language" in Ruth Sheard (ed) A Matter of Judgment: Judicial Decision-Making and Judgment Writing, (Judicial Commission of New South Wales, New South Wales, 2004).

Bryan A. Garner Legal Writing in Plain English (University of Chicago Press, Chicago, 2001).

Joyce J. George Judicial Opinion Writing Handbook (4th ed, Williams S. Hein & Co., Inc., New York, 2000).

James C. Raymond Writing for the Court (Thomson Reuters, Canada, 2010).

Ruth Sheard (ed) A Matter of Judgment: Judicial Decision Making and Judgment Writing (Judicial Commission of New South Wales, New South Wales, 2003).

4. Journal Articles

Simon Adamyk "Plain legal language in the English courts" (2006) 56 Clarity 11.

Robert Benson "The End of Legalese: The Game is Over" (1985) 13(3) New York University Review of Law and Social Change 519.

Richard Castle "What makes a document readable?" (2007) 58 Clarity 12.

Annetta Cheek "Defining plain language" (2010) 64 Clarity 5.

Brady Coleman and Quy Phung "The Language of U.S. Supreme Court Briefs: A Large-Scale Quantitative Investigation" (2010) 11 J. App. Prac. & Proc. 75.

Mark Cooney "Stringing Readers Along" (2006) 85(12) Michigan Bar Journal 44.

Martin Cutts "Writing by numbers: are readability formulas to clarity what karaoke is to song?" (2008) 59 Clarity 28.

Sean Flammer "Persuading Judges: An Empirical Analysis of Writing Style, Persuasion and the Use of Plain English" (2010) 16 The Journal of the Legal Writing Institute 183.

Bryan A. Garner "The Deep Issue: A New Approach to Framing Legal Questions" (1994) 5 The Scribes Journal of Legal Writing 1.

Mark A. Hall and Ronald F. Wright "Systematic Content Analysis of Judicial Opinions" (2008) 96 CLR 63.

Mark Hochhauser "Some pros and cons of readability formulas" (1999) 44 Clarity 22.

Mark Hochhauser "What readability expert witnesses should know" (2005) 54 Clarity 38.

Charles A. Johnson "Content-Analytic Techniques and Judicial Research" (1987) 15(1) American Politics Quarterly 169.

Patrick Keane "Decisions that convince" (2004) 52 Clarity 26.

Joseph Kimble "The Lost Art of Summarizing" (2001) 38(2) Court Review 30.

Joseph Kimble "The straight skinny on better judicial opinions" (2006) 85(3) Michigan Bar Journal 42.

Michael Kirby "On the Writing of Judgments" (1990) 64 ALJ 829.

Michael Kirby "‘Ex tempore Reasons’" (1992) 9 Australian Bar Review 93.

Michael Kirby "Plain concord: Clarity's ten commandments" (2009) 62 Clarity 58.

John Laskin "Teaching judgment writing in Canada" (2011) 66 Clarity 17.

Lance N. Long and William F. Christensen "Does the Readability of Your Brief Affect Your Chance of Winning an Appeal? - An Analysis of Readability in Appellate Briefs and Its Correlation with Success on Appeal" (2011) 12 J. App. Prac. & Proc. 145.

Lance N. Long and William F. Christensen "When Justices (Subconsciously) Attack: The Theory of Argumentative Threat and the Supreme Court" (2013) 91 Or L Rev 933.

Ryan Owens and Justin Wedeking "Justices and Legal Clarity: Analyzing the Complexity of U.S. Supreme Court Opinions" (2011) 45 Law and Society Review 1027.

James C. Raymond "Writing to be Read or Why Can’t Lawyers Write like Katherine Mansfield?" (1997) 3 The Judicial Review: Journal of the Judicial Commission of New South Wales 153.

Karen Schriver and Frances Gordon "Grounding plain language in research" (2010) 64 Clarity 33.

Louis J. Siroco "Readability Studies: How Technocentrism Can Compromise Research and Legal Determinations" (2007) 26(147) Quinnipiac Law Review 101.

John Strylowski "Using Tables to Present Complex Ideas" (2013) 92 Michigan Bar Journal 44.

Christopher R. Trudeau "The Public Speaks: An Empirical Study of Legal Communication" (2001) 1 The Scribes Journal of Legal Writing 121.

5. Lectures/Speeches

James Allsop "Appellate Judgments - The Need for Clarity" (36th Australian Legal Convention, Perth, 19 September 2009).

Lord Neuberger "Open justice unbound?" (Judicial Studies Board Annual Lecture 2011, London, 16 March 2011).

Lord Neuberger "No judgment - No justice" (First Annual BAILII Lecture, London, 20 November 2012).

6. Internet Materials

Kevin L. Brady "Are readable judicial opinions cited more often?" (4 July 2012) SSRN <www.ssrn.com/abstract=2100618>.

Christopher Enright "Writing an Ex Tempore Judgment" (6 June 2012) Maitland Press <www.legalskills.com.au>.

Mark Painter "Legal Writing 201" (March 2002) Plain Language Network <www.plainlanguagenetwork.org/Legal/legalwriting.pdf>.

Michael Nelson "Elections and Explanations: Judicial Elections and the Readability of Judicial Opinions" (Working paper, Washington University in St. Louis, 2013).

7. Newspaper Articles

Tracey Tyler "Clarity in the courts: Justices go to writing school" Toronto Star (Toronto, 2011).

Appendix A: The full Clarity Test

INSTRUCTIONS

  1. Choose a civil case that you have not read. It must be at least 10 full pages long (ignore strike-out or leave applications).
  2. The test has 5 sections (A-E).
  3. When answering questions, simply circle the answer (ignore the points for the moment).
  4. If there are multiple opinions, apply sections A-C to the main opinion. If there is no clear ‘main opinion’, apply to all and take the lowest scores. Count the mistakes of all opinions (section D).
  5. If anything is unclear, choose an interpretation and note it down.
  6. Now answer the questions in sections A-D – good luck.
  7. Once you’ve completed the questions, add up the points.
  8. Answer the remaining questions in section E.
  9. You’re finished – relax and have a cup of tea.
A. READ THE START

(Points for each answer are shown in brackets.)

A1. Read no further than one full page. Is the basic factual context clear? (i.e. who did what to whom)
  • YES – clear to a layperson (2)
  • YES – clear to a lawyer (1)
  • NO – unclear, or I needed to re-read/read further (0)

A2. Is the basic question that the judge must decide clear?
  • YES – clear to a layperson (2)
  • YES – clear to a lawyer (1)
  • NO – vague/unstated (0)

A3. Was any unnecessary detail included?
  • YES – included (0)
  • NO – left out (1)

A4. Now read until the issues are analysed in depth. Were the facts expressed in an exceptionally clear and succinct way? (most judgments will not be)
  • YES – clear to a layperson (1)
  • NO – clear to a lawyer, or vague (0)

A5. Is the end result of the case clear? Are the key findings or orders clear? (If NOT clear, you can skip to the last page)
  • YES – clear at start (2)
  • YES – clear, but I had to skip to the end (1)
  • NO – unclear (0)

B. CHECK THE STRUCTURE

B1. Does the judge set out the issues clearly and early? (ideally in the first few paragraphs or in a separate section with a heading)
  • YES – clearly and early (2)
  • NO – clearly but late (1)
  • NO – vague or not at all (0)

B2. Are there mostly case-specific headings? (‘Facts’, ‘Law’, ‘Discussion’ are not case-specific.)
  • YES – case-specific (1)
  • NO – generic or none (0)

B3. Are the issues ordered in a clear, logical manner? (If no headings, read the entire judgment)
  • YES – somewhat (1)
  • NO – not at all (0)

B4. Can you skip to one discrete issue? (if you wanted to understand the law on that particular issue)
  • YES – clear (1)
  • NO – locating discrete issues is difficult (0)

B5. Does the judge provide a clear summary of the reasons for the result? (in either the opening or closing paragraphs)
  • YES – clear, concise summary (1)
  • NO – vague or not at all (0)
C. LOOK FOR STYLE CHOICES

C1. Are the parties usually called by their names/unique labels? (instead of a generic ‘plaintiff/appellant’)
  • YES – (Mrs X, X Ltd) (1)
  • NO – (Respondent etc.) (0)

C2. Are there any string citations that really interrupt your reading? Choose ‘NO’ if any string citations are in footnotes or are otherwise unobtrusive.
  • YES – distracting (0)
  • NO – string cites are unobtrusive (1)

C3. Are there too many long, block quotations? (that really interrupt your reading)
  • YES – too many (0)
  • NO – few/none (1)

C4. Are almost all long, block quotations accompanied by either a summary, or a statement of the inference the reader is expected to draw? (i.e. are they integrated into the text?)
  • YES – no block quotes, or almost all integrated (1)
  • NO – quotes are not integrated well (0)

D. EXAMINE THE WRITING STYLE

Now read the entire judgment. Are there any instances of:

D1. Unnecessary legalese or jargon? (that could be replaced by more common words)
  • YES (-1) (write these words down in section E)

D2. Long or grammatically complex sentences? (that are confusing, or must be re-read)
  • YES (-1)

D3. Lists of items that are contained in one sentence? (instead of bullet points)
  • YES (-1)

D4. Sentences in the passive voice? (that should be written in the active voice)
  • YES (-1)

D5. Once you finish reading, think about the writing style generally (ignoring structure and content). Especially consider how many instances of each bad writing style you found. Was the writing style generally:
  • Great! – could use as an example (2)
  • Average – somewhere in the middle (0)
  • Terrible – dense, wordy, confusing (-2)

Most of the judgments should (hopefully) be average. If you find one that is particularly well-written, choose ‘Great’. If you find one that annoys and frustrates you, choose ‘Terrible’.

E. FINAL SCORE

E1. Table of contents: YES / NO
E2. Diagrams, tables or graphs? YES / NO
E3. Legalese/jargon:
E4. TOTAL SCORE: /20

Appendix B: Result tables

To fit all results into tables, I have given each case a number from 1 to 45: the 1990 judgments are 1-15, the 2000 judgments 16-30, and the 2012 judgments 31-45. The values for question E3 are given in Appendix C (Jargon words). Full citations for each case are in the Bibliography.

1990 Judgments

1. Hawkins; 2. Sadowski; 3. Westpac; 4. Rainbow; 5. Balfour;
6. Appliance; 7. Ryan; 8. GN Hale; 9. Quik Bake; 10. Timbercraft;
11. Challenge; 12. Brierly; 13. Chatfield; 14. Re Ham; 15. AMP.

Case:          1   2   3   4   5   6   7   8   9  10  11  12  13  14  15
Length (pg):  30  11  12  10  26  14  21  27  14  14  35  20  34  24  26
A1 Story       1   1   1   1   2   1   0   1   0   1   1   1   0   0   1
A2 Issue       0   1   1   0   1   1   0   1   1   1   1   1   0   1   1
A3 Detail      0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
A4 Facts       0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
A5 Result      1   1   1   1   2   1   1   0   1   1   1   1   1   1   1
B1 Issues      0   0   0   0   0   0   0   1   0   1   0   0   0   0   2
B2 Heading     0   0   0   0   0   0   0   0   0   0   0   1   0   0   0
B3 Order       0   0   0   0   1   0   0   1   0   0   0   0   0   0   1
B4 Raid        0   0   0   0   1   0   0   0   0   0   0   1   0   0   0
B5 Concl.      0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
C1 Names       1   0   0   1   1   0   0   1   1   1   0   1   0   1   1
C2 String      0   1   1   1   1   1   1   1   1   1   1   0   0   1   0
C3 Number      1   1   1   1   1   0   1   1   1   1   0   0   1   1   0
C4 Block       0   1   1   0   1   0   0   0   0   1   0   0   1   1   0
D1 Jargon      .  -1  -1  -1  -1  -1  -1   .  -1  -1  -1  -1  -1   .  -1
D2 Long       -1  -1  -1  -1  -1  -1  -1  -1  -1  -1  -1  -1  -1  -1  -1
D3 Lists      -1  -1   .   .  -1   .   .  -1  -1   .   .  -1  -1   .   .
D4 Passive     .   .   .   .   .   .   .   .   .   .   .  -1   .   .   .
D5 Style       0   0   0   0   0  -2   0   0   0   0   0   0   0   0   0
E1 TOC         N   N   N   N   N   N   N   N   N   N   N   N   N   N   N
E2 Visual      N   N   N   N   N   N   N   N   N   N   N   N   N   N   N
TOTAL:         2   3   4   3   8   0   1   5   2   6   2   2   0   5   5

(A “.” marks an empty cell – no deduction.)
2000 Judgments

16. Farrelly; 17. AG v McLennan; 18. Vickery; 19. Belman; 20. AG v Rodney;
21. Robert Bryce; 22. Motor Vehicle; 23. Residential Care; 24. AG v Hull; 25. Sea-Tow;
26. Wood; 27. Nesbit; 28. Harvey; 29. Kirk; 30. Walop.

Case:         16  17  18  19  20  21  22  23  24  25  26  27  28  29  30
Length (pg):  13  19  10  10  23  14  11  23  22  12  11  20  12  19  11
A1 Story       1   1   1   1   1   1   0   1   2   2   1   0   1   1   0
A2 Issue       1   1   1   1   2   1   1   0   2   1   1   1   1   1   1
A3 Detail      0   0   1   0   1   0   0   0   1   1   0   0   0   0   0
A4 Facts       0   0   1   0   0   0   0   0   0   0   0   0   0   0   0
A5 Result      1   1   1   1   1   1   1   1   2   1   1   1   1   1   1
B1 Issues      0   1   1   1   1   0   0   0   0   1   0   1   0   0   0
B2 Heading     1   1   0   0   1   0   0   1   1   1   1   1   1   1   1
B3 Order       1   1   0   0   1   0   0   0   1   1   1   1   1   0   1
B4 Raid        1   1   0   0   1   0   1   1   1   1   1   0   1   1   1
B5 Concl.      1   0   0   0   1   0   0   0   0   0   0   0   0   0   0
C1 Names       0   1   1   0   0   1   1   1   1   1   1   1   0   0   1
C2 String      1   1   1   1   1   1   1   1   1   1   1   1   1   1   1
C3 Number      1   1   1   1   1   1   1   1   0   1   1   1   1   1   1
C4 Block       1   0   1   0   0   0   0   1   0   1   1   0   0   1   1
D1 Jargon      .  -1  -1  -1   .  -1  -1  -1  -1  -1   .  -1  -1  -1  -1
D2 Long       -1   .   .  -1  -1  -1  -1   .  -1  -1  -1  -1  -1  -1  -1
D3 Lists      -1  -1   .   .   .   .   .   .  -1   .   .   .   .   .   .
D4 Passive     .   .   .   .   .   .   .   .   .   .  -1   .   .   .   .
D5 Style       0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
E1 TOC         N   N   N   N   Y   N   N   N   Y   N   N   N   N   N   N
E2 Visual      N   N   N   N   N   N   N   N   N   N   N   N   Y   Y   N
TOTAL:         8   8   9   4  10   4   4   7   9  11   8   6   6   6   7

(A “.” marks an empty cell – no deduction.)

2012 Judgments

31. Law; 32. Fava; 33. P v Bridgecorp; 34. Roby; 35. CC v Visy;
36. Johnston; 37. General Marine; 38. Perpetual; 39. Ngai Tai Ki Tamaki; 40. Schenker;
41. Osborne; 42. W v S; 43. Russell; 44. CIR v Stiassny; 45. Scandle.

Case:         31  32  33  34  35  36  37  38  39  40  41  42  43  44  45
Length (pg):  31  19  22  19  62  35  13  11  31  14  21  14  36  35  16
A1 Story       2   1   1   2   1   2   1   1   1   2   1   2   1   1   2
A2 Issue       1   2   1   2   1   1   1   1   1   2   1   1   2   1   1
A3 Detail      0   1   1   1   0   1   0   0   0   1   0   0   0   0   0
A4 Facts       0   1   0   1   0   0   0   0   0   1   0   0   0   0   1
A5 Result      1   1   1   1   1   1   1   2   1   1   2   1   1   1   1
B1 Issues      2   2   2   1   2   0   1   2   2   2   1   2   2   2   1
B2 Heading     1   1   1   1   1   1   0   1   1   1   0   1   1   1   1
B3 Order       1   1   1   1   1   1   0   1   1   1   1   1   1   1   1
B4 Raid        1   1   1   1   1   1   1   1   1   1   0   1   1   1   1
B5 Concl.      1   0   0   1   0   0   0   0   0   0   1   0   0   1   0
C1 Names       1   1   1   1   1   1   1   1   0   1   1   1   1   1   1
C2 String      1   1   1   1   1   1   1   1   1   1   1   1   1   1   1
C3 Number      1   1   1   1   1   1   1   1   1   1   1   1   1   1   1
C4 Block       1   1   1   1   0   1   0   1   0   1   0   1   0   0   0
D1 Jargon      .  -1  -1   .  -1   .   .   .  -1  -1  -1  -1  -1   .   .
D2 Long       -1   .   .  -1  -1   .  -1  -1  -1  -1  -1  -1   .  -1   .
D3 Lists       .   .   .   .   .  -1   .   .   .   .   .   .   .   .   .
D4 Passive     .   .   .   .   .   .   .   .   .   .   .   .   .   .   .
D5 Style       0   2   0   2   0   0   0   0   0   2   0   0   0   0   2
E1 TOC         Y   Y   Y   Y   Y   Y   N   Y   Y   N   Y   N   Y   Y   Y
E2 Visual      N   Y   N   Y   N   Y   N   N   Y   N   N   N   N   N   N
TOTAL:        13  16  12  17   9  11   7  12   8  16   8  11  11  11  14

(A “.” marks an empty cell – no deduction.)

Appendix C: List of jargon words in judgments

1990: ameliorating; amelioration; antithesis; apposite; casuistry; circumscribed; contumacious; corpulent; espouse; evinced; fait accompli; indubitably; inter alia (x4); parlance (x2); procured; sensitisation; stultifying; thitherto; transposition; uno flatu.

2000: a fortiori; abrogated; appurtenant; eminent; epithet; ex facie; ex hypothesi; expedience; germane; immutable; incongruity; inimical (x2); inviolate; otiose; peruse; precipitating; preponderance; rectification; remiss; simpliciter; sui generis; tacking*.

2012: ameliorate; apposite; concomitant; derogate; effluxion; expeditious; impecunious; inter alia (x2); intituled; posits; sui generis; vis-à-vis.

*Specialised term not explained in the judgment.

Appendix D: Multiple opinions

The Clarity Test originally included questions for multiple opinions. These were removed afterwards, but I include them here:

E. ARE THERE MULTIPLE OPINIONS?

E1. Did the majority opinion (and the facts) come before the minority opinion?
  • YES – majority first (0)
  • NO – minority first (-1)

E2. Was a summary of the facts repeated in different judgments? (without a very good reason)
  • YES – needless repetition (-1)
  • NO – not repeated (0)

E3. Can you clearly identify the issues on which the judgments differed?
  • YES (0)
  • NO (-1)

E4. Is the end result still clear and obvious?
  • YES (0)
  • NO (-1)

In my study, the six judgments lost only one or two points in this section. No judgment failed all questions, but none passed all questions either.

