Generative Artificial Intelligence (Gen AI) is rapidly emerging as a tool that opens exciting new opportunities for the provision of legal services. Lawyers in New Zealand and overseas are using and investing in this technology to enhance their service offering to clients.
Although Gen AI has significant potential for the legal profession, there are also risks and ethical issues that need to be carefully managed.
The New Zealand Law Society Te Kāhui Ture o Aotearoa provides information in this guidance about what Gen AI is and what lawyers need to consider in order to manage risks associated with its use.
The purpose of this guidance is to assist lawyers, but it is not a substitute for legal advice or technical expert input. The Law Society wishes to acknowledge the Law Society of England and Wales for sharing their guidance Generative AI – the essentials 1 and allowing the Law Society to draw on and adapt it for the New Zealand context.
What is the regulatory framework for AI in New Zealand?
What are the opportunities for lawyers?
What are the risks for lawyers?
Quality assurance and competence
Privacy, data protection, cyber security
Professional and ethical obligations
Generative AI is a subcategory of AI that uses algorithms to create new “outputs” or content (including text, images, video and audio). It uses “prompts” (questions or instructions given by the user) to create content that closely resembles human-made content, drawing on large quantities of existing data or data that has been fed in. Gen AI tools are described as being “trained” with inputted information. They will also access sources across the internet for information.
In simple terms, traditional AI ‘recognises’, while Gen AI ‘creates’ new content based on its training data.
There is a wide range of Gen AI tools available, including paid, free, open-source and privately owned options. For example, some law firms have created their own internal Gen AI tools, while other legal service providers offer AI tools to lawyers commercially. Lawyers may be familiar with some of the better-known Gen AI tools such as ChatGPT, Bing Chat, Google Bard, GitHub Copilot, and DALL-E.
As with all emerging technologies, there are no universally agreed definitions of artificial intelligence and associated concepts. This glossary, however, is intended to provide some commonly understood terms associated with AI.2
Chatbot: Digital tool or software application designed to simulate conversation with users (primarily via text or synthesised speech). Some operate on predefined responses but advanced versions integrating Gen AI provide more dynamic and responsive interactions with users.
Lawtech: Technologies that aim to support, enhance, or replace traditional methods for delivering legal services or the operation of the justice system. The use of AI is an established feature of Lawtech.
Machine Learning: Subset of AI that trains machines to learn from existing data and improve on that data to make predictions or decisions.
Deep Learning: A more specialised machine learning technique in which more complex layers of data and neural networks are used to process data and make decisions.
Large Language Model (LLM): An AI algorithm which, through sophisticated pattern recognition and calculations, learns to predict the next best word or part of a word in a sentence. Gen AI chatbots generally use LLMs to generate responses to “prompts”.
Currently, there is no overarching regulation of the use of AI in New Zealand. However, there is significant work happening in this area, examples of which include:
The Office of the Privacy Commissioner also sets out clear expectations for the use of Gen AI by agencies.3
These pieces of guidance are a useful resource for anyone wanting to learn more about the risks of using Gen AI and “guard-rails” that can be put in place to minimise the risk of harm.
Lawyers can also keep up to date with developments in this area in New Zealand through the government website Digital.NZ and the OECD’s webpage on AI policies in New Zealand.4
As with any evolving technology, opportunities for users are limited only by the imagination. General benefits identified with the use of Gen AI include increased efficiency, increased productivity and innovation (including through data insights and new ways of delivering services).
Lawyers can harness the benefits of Gen AI in legal work in more specific ways such as:
The use of Gen AI to undertake these tasks, however, carries specific risks. For example, Gen AI may be unsuitable for legal analysis of source material, given AI’s inability to distinguish bias and opinion in the way a human can. Further, common Gen AI tools currently have limited access to New Zealand legal “training” data and content.5 Lawyers need to be aware of these risks and take steps to mitigate them, as appropriate.
As Gen AI is rapidly developing, there may be other ways to utilise it in legal practice that have yet to be discovered.
This section outlines some of the specific areas of risk for lawyers using Gen AI and the associated regulatory and professional obligations. At a minimum, privacy and fair trading requirements will apply in addition to obligations under the Lawyers and Conveyancers Act (Lawyers: Conduct and Client Care) Rules 2008 (RCCC).
All lawyers are ultimately responsible for the legal services they provide. Services must be provided by lawyers in a way that meets legal, professional, and ethical obligations and a failure to meet those requirements can be a complaints and disciplinary matter.
A lawyer is not absolved from responsibility for legal advice or defects in an end-product (such as a contract) because it is derived from Gen AI.
Gen AI cannot understand its output, nor can it validate its accuracy, in the way a human author can. Gen AI can therefore create inaccurate or false outputs, including seemingly persuasive but nonsensical or false content – known as a “hallucination”. Tools are being developed to counter this (such as citation checkers). The fact remains that the human lawyer will be responsible for the accuracy and validity of the AI-created content, and there are risks associated with this.
Careful human oversight of the use of Gen AI is vital to ensure that it is used ethically and responsibly. At a minimum, for lawyers, this should include:
The use of Gen AI involves inputting data into the tool. Be aware that the AI provider may well be able to see your input data and the outputs. This can create a privacy risk (in addition to concerns about confidentiality and privilege referred to below). The data inputted may also be transferred out of New Zealand to AI companies located overseas. This has implications under the Privacy Act and lawyers should have regard to Information Privacy Principle 12 (“Disclosure outside New Zealand”).7
The public service guidance recommends against inputting personal and client information into external AI tools. The Courts’ guidance similarly highlights the real risks of inputting confidential or suppressed information into Gen AI tools.
Cyber security risks can also be associated with the use of Gen AI. For example, malicious actors can exploit vulnerabilities in the tool to corrupt data or undertake more sophisticated phishing or cyber attacks on users. CERT NZ provides tailored guidance and regularly updated warnings for businesses in relation to cyber security issues, which lawyers should be familiar with.8
Using Gen AI can give rise to questions about who owns the input and output data. Users also face risks related to inadvertently infringing intellectual property rights.
For example, some AI tools will engage in “data scraping” – that is, taking data from a range of external sources. This can create risks of copyright infringement or disputes over intellectual property in relation to content used to create an output.
In addition, some Terms of Service will allow the Gen AI provider to reuse input data and retain ownership of output data. It is vital that care is taken to ensure contractual provisions do not place a lawyer in breach of professional and legal obligations relating to legally privileged, confidential, or personal information.
There are also ethical and professional risks relevant to Gen AI. This note refers above to the ultimate responsibility that all lawyers have for the quality and competence of the work they produce. In addition, given that AI can create false or nonsensical outputs, there is a risk of relying on AI output in a way that could be misleading to the court, opposing counsel or clients.
Improper, negligent or incompetent use of Gen AI could lead to a serious breach of the RCCC including r 3 (competence), 10.9 (misleading and deceptive conduct) and 13.1 (duty of fidelity to the Court). There are examples of lawyers overseas relying on Gen AI and inadvertently providing false authorities to the Court, with serious disciplinary consequences.9
A lawyer practising on their own account who allows the use of Gen AI in a way that is not adequately monitored or checked, or who allows a situation to arise where staff are using Gen AI in an unauthorised manner, also risks breaching r 11 and 11.1 (Proper professional practice – administering, supervising and managing a legal practice).
Inputting client details and legally privileged material into a publicly accessible/external Gen AI tool may also give rise to a breach of privilege and confidentiality obligations (see: Chapter 8 of the Rules). At a minimum, lawyers need to consider whether client consent should be sought for use of their data. Further, personal information or client related information should not be used for testing AI systems, generating templates or similar – fictional data should be used for this.
Related to this is the issue of disclosure and transparency to clients about the use of Gen AI, given the fiduciary nature of the relationship and certain professional obligations. For example, a lawyer must provide client care and service information, including about who will undertake the work and the way the services will be provided (see: r 3.5 and the Preface to the Rules). A lawyer must also take steps to ensure that a client understands the nature of the retainer and must consult the client about the steps taken to implement the client’s instructions (r 7.1).
The use of Gen AI, or reliance on defective or misleading outputs created by it, may therefore become a complaints and disciplinary matter in a number of ways.
As the use of the technology develops, lawyers may also need to review their billing practices and the information that is provided to clients at the start of a retainer (see: Chapters 3 and 9). If the model used is primarily a time and attendance model, lawyers may need to consider what is appropriate if Gen AI is undertaking tasks previously undertaken by human actors. For example, it may be appropriate to charge in a similar way to when a lawyer is using a research tool. However, the application of a time and attendance charge to drafting a contract or completing document review on discovery undertaken by a non-human may need careful consideration.
In summary, the use of Gen AI has the potential to enhance the way lawyers deliver legal services. Lawyers can devolve to Gen AI certain tasks that have traditionally eaten into time available for other priorities. The benefits can include significant time and cost savings, to the satisfaction of both lawyers and clients.
However, with reward comes risk and this needs to be carefully managed. Key to managing potential risks is:
This checklist is adapted from the Law Society of England and Wales guidance. It includes factors that lawyers should consider across initial exploration, procurement, use and review.
Be very careful about any data used to “train” AI tools. Consider the use of anonymised data and be aware of confidentiality and privacy risks before beginning use of an AI tool. Do not use personal information or client information for testing or creating templates.
Privacy, data protection, confidentiality and privilege compliance: