New Zealand Law Society - Generative AI guidance for lawyers

Generative Artificial Intelligence (Gen AI) is rapidly emerging as a tool that opens exciting new opportunities for the provision of legal services. Lawyers in New Zealand and overseas are using and investing in this technology to enhance their service offering to clients.

Although Gen AI has significant potential for the legal profession, there are also risks and ethical issues that need to be carefully managed.

The New Zealand Law Society Te Kāhui Ture o Aotearoa provides information in this guidance about what Gen AI is and what lawyers need to consider in order to manage risks associated with its use.

The purpose of this guidance is to assist lawyers, but it is not a substitute for legal advice or technical expert input. The Law Society wishes to acknowledge the Law Society of England and Wales for sharing their guidance Generative AI – the essentials 1 and allowing the Law Society to draw on and adapt it for the New Zealand context.

What is Gen AI?

Generative AI is a subcategory of AI that uses algorithms to create new “outputs” or content (including text, images, video and audio). It uses “prompts” (questions or instructions given by the user) to create content that closely resembles human-made content, drawing on large quantities of existing data or data that has been fed in. Gen AI tools are described as being “trained” with inputted information. Some tools can also access sources across the internet for information.

In simple terms, traditional AI ‘recognises’, while Gen AI ‘creates’ new content based on its training data.

There is a wide range of available Gen AI tools, including paid, free, open-source and proprietary options. For example, some law firms have created their own internal Gen AI tools, while other legal service providers offer AI tools to lawyers commercially. Lawyers may be familiar with some better-known Gen AI tools such as ChatGPT, Bing Chat, Google Bard, GitHub Copilot, and DALL-E.

Glossary of terms

As with all emerging technologies, there are no universally agreed definitions for artificial intelligence and associated concepts. This glossary, however, is intended to provide some commonly understood terms associated with AI.2

Chatbot: Digital tool or software application designed to simulate conversation with users (primarily via text or synthesised speech). Some operate on predefined responses but advanced versions integrating Gen AI provide more dynamic and responsive interactions with users.

Lawtech: Technologies that aim to support, enhance, or replace traditional methods for delivering legal services or the operation of the justice system. The use of AI is an established feature of Lawtech.

Machine Learning: A subset of AI that trains machines to learn from existing data and improve on that data to make predictions or decisions.

Deep Learning: A more specialised machine learning technique in which more complex layers of neural networks are used to process data and make decisions.

Large Language Model (LLM): An AI algorithm which, through sophisticated pattern recognition and calculations, learns to predict the next best word or part of a word in a sentence. Gen AI chatbots generally use LLMs to generate responses to “prompts”.
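The next-word prediction at the heart of an LLM can be illustrated, in a highly simplified and non-neural form, with a short sketch that simply counts which word most often follows another in a sample of training text. (Real LLMs use neural networks with billions of parameters; this toy example only demonstrates the underlying idea of predicting the next word from patterns in training data.)

```python
from collections import Counter, defaultdict

# A tiny sample of "training" text (illustrative only).
training_text = (
    "the court held that the contract was void "
    "the court held that the claim failed"
)

# Count which word follows each word in the training text.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("court"))  # -> held ("held" always follows "court" here)
print(predict_next("the"))    # -> court ("court" most often follows "the")
```

An LLM does something conceptually similar at vastly greater scale and sophistication: it learns statistical patterns from its training data and uses them to generate the most plausible continuation, which is why its output can be fluent yet factually wrong.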

What is the regulatory framework for AI in New Zealand?

Currently, there is no over-arching regulation for the use of AI in New Zealand. However, there is significant work happening in this area. For example, the Office of the Privacy Commissioner sets out clear expectations for the use of Gen AI by agencies.3

Guidance of this kind is a useful resource for anyone wanting to learn more about the risks of using Gen AI and the “guard-rails” that can be put in place to minimise the risk of harm.

Lawyers can also keep up to date with developments in this area in New Zealand through the government website Digital.NZ and the OECD’s webpage on AI policies in New Zealand.4

What are the opportunities for lawyers?

As with any evolving technology, opportunities for users are only limited by the imagination. General benefits identified with the use of Gen AI include increased efficiency, increased productivity and innovation (including through data insights and new ways of delivering services).

Lawyers can harness the benefits of Gen AI in legal work in more specific ways such as:

  • Undertaking e-discovery;
  • Analysing contracts;
  • Generating templates and drafting documentation;
  • Conducting legal research and summarising large quantities of information;
  • Engaging with potential clients via chatbots;
  • Predicting case outcomes and analysing risks.

The use of Gen AI to undertake these tasks, however, carries specific risks. For example, Gen AI may be unsuitable for legal analysis of source material, given AI’s inability to distinguish bias and opinion in the way a human can. Further, currently, there is limited New Zealand legal “training” and content available to common Gen AI tools.5 Lawyers need to be aware of these risks and take steps to mitigate them, as appropriate.

As Gen AI is rapidly developing, there may be other ways to utilise it in legal practice that have yet to be discovered.

What are the risks for lawyers?

This section outlines some of the specific areas of risk for lawyers using Gen AI and the associated regulatory and professional obligations. At a minimum, privacy and fair trading requirements will apply in addition to obligations under the Lawyers and Conveyancers Act (Conduct and Client Care) Rules 2008 (RCCC).

Quality assurance and competence

All lawyers are ultimately responsible for the legal services they provide. Services must be provided by lawyers in a way that meets legal, professional, and ethical obligations and a failure to meet those requirements can be a complaints and disciplinary matter.

A lawyer is not absolved from responsibility for legal advice or defects in an end-product (such as a contract) because it is derived from Gen AI.

Gen AI cannot understand its output, nor can it validate its accuracy, in the way a human author can. It can therefore create inaccurate or false outputs, including seemingly persuasive but nonsensical or false content – known as a “hallucination”. Tools are being developed to counter this (such as citation checkers), but the fact remains that the human lawyer will be responsible for the accuracy and validity of the AI-created content, and there are risks associated with this.

Careful human oversight of the use of Gen AI is vital to ensure that it is used ethically and responsibly. At a minimum, for lawyers, this should include:

  • Fact-checking outputs and the accuracy and relevance of case citations/source references (reflecting the risks of “hallucinations”, Judicial Officers and support staff are now urged to check the accuracy of information contained in submissions that show signs they were created by Gen AI6);
  • Ensuring that supervised staff do not access Gen AI to assist with work tasks unless authorised to do so and that use of AI is disclosed to their supervisor.

Privacy, data protection, cyber security

The use of Gen AI involves inputting data into the tool. Be aware that the AI provider may well be able to see your input data and the outputs. This can create a privacy risk (in addition to concerns about confidentiality and privilege referred to below). The data inputted may also be transferred out of New Zealand to AI companies located overseas. This has implications under the Privacy Act and lawyers should have regard to Information Privacy Principle 12 (“Disclosure outside New Zealand”).7

The public service guidance recommends against inputting personal and client information into external AI tools. The Courts guidance similarly highlights the real risks of inputting confidential or suppressed information into Gen AI tools.

Cybersecurity risks can also be associated with the use of Gen AI. For example, malicious actors can exploit vulnerabilities in the tool to corrupt data or undertake more sophisticated phishing or cyber-attacks on users. CERT NZ provides tailored guidance and regularly updated warnings for businesses in relation to cyber security issues which lawyers should be familiar with.8

Intellectual Property

Using Gen AI can give rise to questions about who owns the input and output data. Users also face risks related to inadvertently infringing intellectual property rights.

For example, some AI tools will engage in “data scraping” – which is taking data from a range of external sources. This can create risks related to copyright infringement or disputes over intellectual property in relation to content used to create an output.

In addition, some Terms of Service will allow the Gen AI provider to reuse input data and retain ownership of output data. It is vital that care is taken to ensure contractual provisions do not place a lawyer in breach of professional and legal obligations relating to legally privileged, confidential, or personal information.

Professional and ethical obligations

There are also ethical and professional risks relevant to Gen AI. This note refers above to the ultimate responsibility that all lawyers have for the quality and competence of the work they produce. In addition, given that AI can create false or nonsensical outputs, there is a risk of relying on AI output in a way that could be misleading to the court, opposing counsel or clients.

Improper, negligent or incompetent use of Gen AI could lead to a serious breach of the RCCC including r 3 (competence), 10.9 (misleading and deceptive conduct) and 13.1 (duty of fidelity to the Court). There are examples of lawyers overseas relying on Gen AI and inadvertently providing false authorities to the Court, with serious disciplinary consequences.9

A lawyer practising on own account who allows the use of Gen AI in a way that is not adequately monitored or checked or who allows a situation to arise where staff are using Gen AI in an unauthorised manner also risks breaching r 11 and 11.1 (Proper professional practice – administering, supervising and managing a legal practice).

Inputting client details and legally privileged material into a publicly accessible/external Gen AI tool may also give rise to a breach of privilege and confidentiality obligations (see: Chapter 8 of the Rules). At a minimum, lawyers need to consider whether client consent should be sought for use of their data. Further, personal information or client related information should not be used for testing AI systems, generating templates or similar – fictional data should be used for this.

Related to this is the issue of disclosure and transparency to clients about the use of Gen AI, given the fiduciary nature of the relationship and certain professional obligations. For example, a lawyer must provide client care and service information, including about who will undertake work and the way the services will be provided (see: r 3.5 and the Preface to the Rules). A lawyer must also take steps to ensure that a client understands the nature of the retainer and consults the client about steps taken to implement the client’s instructions (r 7.1).

The use of Gen AI or reliance on defective or misleading outputs created by it therefore may become a complaints and disciplinary matter in a number of ways.

As the use of the technology develops, lawyers may need to also review their billing practices and the information that is provided to clients at the start of a retainer (see: Chapters 3 and 9). If the model used is primarily a time and attendance model, lawyers may need to consider what is appropriate if Gen AI is undertaking tasks previously undertaken by human actors. For example, it may be appropriate to charge in a similar way to when a lawyer is using a research tool. However, the application of a time and attendance charge to drafting a contract, or completing document review on discovery, undertaken by a non-human may need careful consideration.

Embracing the opportunity and managing the risks

In summary, the use of Gen AI has the potential to enhance the way lawyers deliver legal services. Lawyers can delegate to Gen AI certain tasks that have traditionally eaten into time available for other priorities. The benefits can include significant time and cost savings, to the satisfaction of both lawyers and clients.

However, with reward comes risk, and this needs to be carefully managed. The keys to managing potential risks are:

  • Understanding how Gen AI works and what its limitations are;
  • Being clear about when it will be used and why (and when not to use it);
  • Being clear about the legal, regulatory, and professional obligations that apply when using Gen AI;
  • Researching the options available – including a risk analysis of specific tools;
  • Ensuring clear processes and procedures are in place to cover the use of Gen AI, including carefully managing confidentiality and privacy.

Checklist: use of AI - the essentials

This checklist is adapted from the Law Society of England and Wales guidance. It includes factors that lawyers should consider from initial exploration through procurement, use and review.

What do I need to consider if I’m thinking about using Gen AI?

  • Purpose identification: Determine the primary need or goal for the use of the AI tool in the practice (what is the business case for using Gen AI?).
  • Due diligence on vendors and tools: Research reputable AI tool providers, speak to others in the industry and consider whether the vendor can meet your requirements. Evaluate claims made by vendors. At a minimum, ask what data the tool accesses and how it is trained, how will your input data be used and who owns input and output data.
  • Stakeholder engagement: Involve your IT staff and/or providers, firm management and the lawyers who will be the end-users to ensure the use of AI is in line with the identified purpose and the firm’s policies.
  • Billing and client information: Consider whether changes are required to the firm’s fee billing approach and information to clients, if Gen AI is to be used for some tasks on a matter. Is a ‘time and attendance’ basis appropriate, with reference to the reasonable fee factors in the Rules, if AI is used on a matter?

Privacy, Confidentiality, Privilege - Data and training - what do I need to do?

Be very careful about any data used to “train” AI tools. Consider the use of anonymised data and be aware of confidentiality and privacy risks before beginning use of an AI tool. Do not use personal information or client information for testing or creating templates.

Privacy, data protection, confidentiality, privilege compliance:

  • Confirm that use and storage of data complies with privacy and confidentiality compliance obligations.
  • Ensure that a system and plan is in place to protect client confidentiality and privilege.
  • Take a “privacy by design” approach – at a minimum undertake a Privacy Impact Assessment.10

Procurement

  • Trials and demos: Request demos or trial versions to evaluate the AI’s effectiveness and whether it will meet your needs. Trials and demos should be isolated from other technical systems for safety and should be used to evaluate vendor claims.
  • Contractual terms: Review the terms of service, especially around data protection, intellectual property and data rights, the geographic location of data storage and any liability/disclaimer clauses.
  • Cost analysis: What is the total cost of the tool, including ongoing support and computational charges? Agreeing a fixed cost can avoid unexpected charges.
  • Long term viability and planning: What support arrangements are in place with the vendor? Is there a regular review process, and is there an exit mechanism if required?

Implementation and usage:

  • Policy: Have a clear policy for all staff about how the firm uses AI. This should include topics such as protection of confidentiality and privilege, monitoring and unauthorised use, and quality assurance.
  • Training: Ensure there is a training plan in place and regular sessions for staff. Training should cover technical education but also ethical and professional obligations, privacy, and cybersecurity.
  • Data input management: Clearly define what data can be fed into the tool considering both legal and ethical restraints.
  • Feedback and review: Have a process for users to provide feedback to assist with reviewing ongoing use of the AI tool.

Risk Management

  • Legal and regulatory compliance: Ensure that use of the AI tool complies with legal and ethical requirements. The legal landscape in this area is evolving so make sure that the firm keeps abreast of all legal developments.
  • Cybersecurity measures: Ensure that robust security measures are in place to protect from data breaches – this includes being satisfied about what security measures the vendor has in place.
  • Liability and insurance: Assess liability and insurance cover related to use of the tool. Speak to your insurer to determine whether use of the tool is covered under your existing policy.
  • Business continuity: Is there a plan in place in case the AI system fails?
  • Ethical and professional considerations: Consider the potential ethical implications and biases of the AI’s output. Ensure that users are aware of this and there is a review process in place to address this risk.

Review and evolution

  • Regular assessment: Periodically review whether the AI tool continues to meet the firm’s needs and that no legal or ethical/professional issues have arisen that need addressing.
  • Exit strategy: Consider how the firm can transition away from the tool, if needed. Can data, source code or any existing training on the tool be transferred, if required?

Communication

  • Client communication: Clearly communicate to clients when and how AI tools are used in their matters, where appropriate. Consider whether consent is required to use their information.
  • Internal awareness: Keep the firm’s staff informed about the tool’s capabilities, benefits, and limitations, as well as their professional responsibilities.