What is Generative AI? The term “Generative AI” refers to a program that allows users to create unique “output” from text prompts, including written output (e.g., letters, emails, essays, briefs, analyses, poems), images such as photographs or drawings, PowerPoint slides, and videos. The AI tool uses its assigned “dataset,” such as information available through Google, to (theoretically) generate a distinctive result for each user. For example, asking the AI to “write an essay about Pompeii” should produce unique language, sentence structure, and content. Many of these AIs are “large language models” (“LLMs”), in which text is used for both the prompt and the generated result. These AIs use neural networks to identify patterns in language, learning its usage, relationships, and parameters in order to generate answers. Some AIs have access to the internet, while others use a downloaded and stored dataset.
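For the technically curious, interacting with an LLM is typically just text in, text out over an API. The following is a minimal sketch in Python using OpenAI’s client library; the model name and prompt are illustrative assumptions, not recommendations:

    # Minimal sketch: prompting an LLM through OpenAI's Python client.
    # Assumes the "openai" package is installed and the OPENAI_API_KEY
    # environment variable is set.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; any available chat model works
        messages=[{"role": "user", "content": "Write an essay about Pompeii"}],
    )

    # Re-running the same prompt generally yields different wording,
    # which is the "generative" in Generative AI.
    print(response.choices[0].message.content)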
What Generative AIs are Available? Generative AI is not new and has been used in some manner by many well-known companies. Many lawyers may already have encountered Generative AI through legal research search engines, contract management software, and e-discovery classification. ChatGPT, one of many generative AIs, was first released in November 2022, trained on a closed dataset extending through mid-2021. OpenAI, the company behind ChatGPT, has since released newer versions such as GPT-4, in both free and paid tiers. Google has two AIs: Bard (similar to ChatGPT, drawing on Google data) and Search Labs (for better search results). Microsoft, a major investor in OpenAI, created Bing AI.
Other AIs are also being rolled out regularly, including SlidesGPT (to create PowerPoint presentations), Dall-E (to create images from text), Lensa (to create digital portraits based on photos), Lumen5 (to create videos), AlphaCode (to generate code in various programming languages), GitHub Copilot (to complete code for programmers), Cohere Generate (to draft emails, product descriptions, and other marketing and sales materials), Synthesia (to create videos), Bardeen (to automate workflows and assist with various tasks), Copy.ai (to write marketing materials, slogans, headlines, and other content, and to assist with language interpretation and transcription), and many more.
Issue 1: Ownership of Content and Intellectual Property Created by AI. Many creators of AI software include disclaimers that ownership of the generated content remains with the AI provider and does not transfer to the user. Moreover, the content generated may incorporate unquoted original material created by others, including copyrighted material. For example, when asked who owns the content it generates, ChatGPT states that no one owns its output; OpenAI’s Terms of Use, however, state that OpenAI assigns all its right, title, and interest in and to the Output to the user. Given these inconsistencies, lawyers should not consider AI-created content their own unless they have made changes sufficient to make the content uniquely their creation. Several lawsuits were filed in 2023 over ChatGPT’s and other AIs’ use of personal data and copyrighted material, and the U.S. Copyright Office has reportedly refused to register a copyright in AI-created content that was not authored by a human. Until litigation settles the law on this rapidly changing technology, lawyers should be wary of all content produced through AI.
Issue 2: Attorney-Client Privilege and Privacy. Because AI-created content may not be owned by the user, attorneys should be aware that confidential information entered into a publicly available AI may not remain private and may lead to an unintentional waiver of privilege. Many AI providers state in their terms of service that information entered into the AI will be added to the AI’s training data, which could then surface for other users. Complete confidentiality cannot be assured. Lawyers should therefore avoid using client names and details that could jeopardize the attorney-client privilege. Additionally, lawyers can begin seeking discovery into an opposing party’s use of Generative AI to determine whether any privilege or confidentiality claims have arguably been waived.
Issue 3: Input and Bias Affect Output. In this rapidly changing AI environment, it is important to remember that AI is only as good as its input. For example, if ChatGPT’s dataset ends in mid-2021, it would not be aware of events or information from more recent months. Moreover, given the many AIs available today, it is difficult to know exactly where an AI draws its material from to support the content it creates. The information may be incorrect, incomplete, or biased. The dataset on which the AI’s answers are based may be influenced by malicious prompt-writers. Use of AI may perpetuate discriminatory practices and patterns built into the AI’s dataset, including the under-representation of historically marginalized groups. AI may provide information about a different state without flagging jurisdictional nuances. The AIs are always learning from the data input and can be influenced by users. In short, AI relies on the imperfect information and patterns in its dataset.
Issue 4: Hallucinations and Deep Fakes. A Generative AI will always give an answer to a prompt, even if no responsive information exists in its dataset. This can result in “hallucinations,” in which the AI creates a convincing but fictitious answer that it presents as accurate. If the user confronts the AI about an incorrect hallucination, the AI may double down and try to convince the user that the fabricated answer is true. The result is a mixture of fact and fiction that is difficult to verify. OpenAI’s Terms of Use admit that use of ChatGPT “may in some situations result in incorrect Output that does not accurately reflect real people, places, or facts. You should evaluate the accuracy of any Output as appropriate for your use case, including by using human review of the Output.”
In the legal world, hallucinations may arise in legal research. The AI might invent legal precedent or conjure cases out of thin air in a manner that sounds convincing. In May 2023, two New York attorneys used ChatGPT to generate a legal motion; after neither the judge nor opposing counsel could locate the cited cases, the judge demanded the attorneys produce the full text of the decisions, which the judge believed were hallucinated by ChatGPT. When challenged, the attorneys doubled down on their use of ChatGPT, using it to generate the false decisions. The attorneys may be facing judicial sanctions, including disbarment, for their actions. The early lesson is that attorneys should not rely on Generative AI for legal research without verifying the information generated, including case citations.
Illinois Ethics Rules Impacted by Generative AI:
- Rule 1.1 “Competence” requires a lawyer to provide “competent representation” to a client, which includes keeping abreast of changes in the law and its practice, including relevant technology (see Rule 1.1, Comment 8).
- Rule 1.6 “Confidentiality of Information” requires a lawyer to avoid revealing information about the client and the representation unless the client gives informed consent or the disclosure is permitted. Under Rule 1.6, a lawyer “shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client,” which includes competency in safeguarding information (see Rule 1.6, Comments 18-19).
- Rule 3.1 “Meritorious Claims and Contentions” requires a lawyer to bring only claims and defenses for which “there is a basis in law and fact for doing so that is not frivolous, which includes a good-faith argument for an extension, modification or reversal of existing law.” A lawyer may violate Rule 3.1 by relying solely on information and arguments generated through AI that have no basis in law, such as cases hallucinated by the AI.
- Rule 3.3 “Candor Toward the Tribunal” requires a lawyer to “not knowingly (1) make a false statement of fact or law to a tribunal or fail to correct a false statement of material fact or law previously made to the tribunal by the lawyer.” A lawyer may violate Rule 3.3 by submitting inaccurate law or information to a Court that was hallucinated by AI, because the lawyer arguably should have known to verify the information before submitting it.
- Rule 5.1 “Responsibilities of Partners, Managers, and Supervisory Lawyers” requires lawyers to supervise those under their authority to ensure that the other lawyers and the firm conform to the Rules of Professional Conduct. Lawyers should instruct the attorneys and staff they supervise not to rely on AI and to verify any information generated by AI.
- Rule 7.1 “Communications Concerning a Lawyer’s Services” bars an attorney from making a “false or misleading communication about the lawyer or the lawyer’s services,” which includes a communication that “contains a material misrepresentation of fact or law, or omits a fact necessary to make the statement considered as a whole not materially misleading.” An attorney could arguably violate this rule if a client, in hiring the attorney, unknowingly relies on AI-generated content that does not reflect the attorney’s actual training and ability.
Regulation of Generative AI. While AI is a rapidly evolving field, governments and organizations have begun attempting to regulate its use. States with privacy laws may enforce citizens’ privacy rights through litigation. Several foreign countries have restricted Generative AI; Italy, for example, temporarily banned ChatGPT. The federal government may apply laws relating to “disinformation” or some form of fraud or misrepresentation to try to rein in over-reliance on AI technologies. Technology leaders such as Elon Musk have urged AI developers to slow development given the potential risks AI poses to society. Think tanks are assembling proposals to regulate the use of AI and to provide transparency and oversight. The American Bar Association unanimously adopted Resolution 604, setting out standards for the development and use of AI products, services, and systems and encouraging transparency and accountability.
How can Lawyers use Generative AI? Generative AI is a good starting place for attorneys to brainstorm ideas and to produce a “first draft” for recurring tasks. AI can assist with writing in many general capacities and with creating content for attorney websites and blogs; the content created should always be carefully reviewed to verify its accuracy and completeness. AI should not be used as a legal research tool without verifying the information generated. AI can also help lawyers sharpen their writing or act as a virtual legal assistant. So long as no confidential client information is included, a lawyer can paste draft correspondence or analysis into the AI system to review, edit, or rework the language, for example to make the tone more positive (see the sketch below). A lawyer could also use AI-created content to find keywords for a writing, or to generate potential headlines or taglines for blog posts and other marketing materials. AI can even be asked to draft discovery requests for a specific type of case (again, without any confidential or client details).
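As an illustration of such a workflow, here is a minimal sketch, again in Python with OpenAI’s client library, of an automated “editing pass” over a draft; the prompt wording and model name are illustrative assumptions:

    # Sketch: asking an LLM to review and polish a draft.
    # CAUTION: strip client names and confidential details from the draft
    # before sending it to any third-party AI service (see Issue 2).
    from openai import OpenAI

    client = OpenAI()

    def suggest_edits(draft: str) -> str:
        # The instruction below is illustrative; tailor it to the task
        # (tone, length, audience) as needed.
        prompt = (
            "Review the following draft for clarity and suggest an edited "
            "version with a more positive, professional tone:\n\n" + draft
        )
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    print(suggest_edits("Per my last email, the delay is unacceptable..."))

As with any AI output, the suggested edits are only a starting point; the lawyer remains responsible for verifying and finalizing the language.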
Is Generative AI a Threat to Lawyers? Generative AI cannot replace the critical thinking, legal expertise, judgment, or empathy that a human attorney brings to the legal field and to client interactions. AI can enhance an attorney’s practice and help make legal services more efficient and affordable for some clients, but it ultimately has many limitations and challenges that can only be overcome by an attorney’s skill, training, reasoning, and judgment. With this new technology, attorneys can better assist clients by streamlining processes, jump-starting projects, and improving the writing quality of client correspondence and Court filings. Attorneys should treat AI as one of the legal field’s many tools for better representing clients and improving the integrity of the judicial system.
Sandra Mertens
[email protected]