A new legal ethics opinion on the use of generative AI in legal practice makes one point very clear: lawyers must maintain competence in all technological tools relevant to their practice, and that includes generative AI.
The opinion was issued jointly by the Pennsylvania Bar Association and the Philadelphia Bar Association to educate lawyers about the benefits and pitfalls of generative AI and to provide ethical guidelines for its use.
While the opinion focuses on AI, it reiterates that lawyers’ ethical obligations surrounding this emerging technology are no different from those that apply to any other technology.
“Lawyers must be as proficient in using technological tools as they are in employing more traditional methods,” the opinion states. “Whether it's knowing how to navigate legal research databases, use electronic discovery software, use a smartphone, use email, or protect client information in digital form, lawyers are expected to maintain proficiency across all technological avenues relevant to their work.”
That said, the opinion acknowledges that generative AI poses unique problems not previously seen in legal technology, most notably its ability to generate text and to hallucinate while doing so. The opinion states that the technology’s text-generating capabilities “break new ground for ethical guidelines.”
“Rather than focusing on whether a lawyer's choice of a particular legal argument is meritorious, some lawyers are using generative AI platforms without checking the citations or legal arguments,” the opinion explains. “In essence, the AI tool gives the lawyer exactly what he or she asked for, and the lawyer does not perform due diligence on the results after getting a positive result.”
The opinion also touches on the possibility that AI may be biased, pointing out that “AI is not a blank slate free of prejudice or preconceived ideas.”
“Such bias can lead to discrimination and favor certain groups or perspectives over others, and can manifest in areas such as facial recognition and hiring decisions,” the opinion said.
Taking these issues into account, the opinion states that lawyers have a duty to communicate with their clients about the use of AI technology in their work. The opinion advises that in some cases, lawyers should obtain client consent before using certain AI tools.
12 Responsibilities
The 16-page opinion is a concise primer on the use of generative AI in legal practice, and also includes a brief background on the technology and an overview of ethics opinions in other states.
But most importantly, it concludes with 12 responsibilities relevant to lawyers who use generative AI.
1. Be honest and accurate: Lawyers must ensure that AI-generated content, such as legal documents and advice, is honest, accurate, and based on sound legal reasoning, and must adhere to principles of honesty and integrity in their professional conduct.
2. Verify the accuracy of all citations and cited material: Lawyers should ensure that the citations they use in legal documents and arguments are accurate and relevant, including confirming that citations accurately reflect the content they reference.
3. Ensure competence: Lawyers must be skilled in the use of AI technology.
4. Maintain confidentiality: Lawyers should protect information related to their client representation and ensure that AI systems handling sensitive data adhere to strict confidentiality measures and do not share that data with anyone not protected by attorney-client privilege.
5. Identify conflicts of interest: Lawyers must be vigilant in identifying and addressing potential conflicts of interest that arise from the use of AI systems.
6. Communicate with clients: Lawyers should communicate with clients about the use of AI in their work, providing clear and transparent explanations of how such tools are used and their potential impact on litigation outcomes. Where necessary, lawyers should obtain client consent before using certain AI tools.
7. Ensure information is unbiased and accurate: Lawyers should ensure that data used to train AI models is accurate, unbiased, and ethically sourced to prevent bias and inaccuracies in AI-generated content.
8. Ensure AI is used appropriately: Lawyers should be wary of misuse of AI-generated content and ensure it is not used to deceive or manipulate legal processes, evidence, or outcomes.
9. Adhere to ethical standards: Lawyers should stay informed of relevant regulations and guidelines governing the use of AI in legal work to ensure compliance with legal and ethical standards.
10. Exercise professional judgment: Lawyers should exercise professional judgment in relation to AI-generated content and recognize that AI is a tool to aid, not replace, legal expertise and analysis.
11. Use appropriate billing methods: AI has enormous time-saving capabilities. Lawyers should therefore ensure that AI-related expenses are reasonable and properly disclosed to clients.
12. Be transparent: Lawyers should be transparent with clients, colleagues, and courts about their use of AI tools in legal practice, including disclosing any limitations or uncertainties associated with AI-generated content.
My advice: Don't be stupid.
In my years of writing about legal technology and legal ethics, I've developed my own shortcut rule for staying out of trouble: Don't do anything stupid.
For example, it would be foolish to ask ChatGPT to find cases to support your argument and then file them with the court without even reading or Shepardizing them.
Likewise, it would be foolish to ask a generative AI tool to draft a court filing or an email to a client and then send it off without editing it.
In their joint opinion, the Pennsylvania and Philadelphia ethics committees put the “don't do anything stupid” guideline in more polite terms, warning that lawyers who use generative AI tools must understand their risks and benefits.
“These tools should be used with caution and should prompt attorneys to carefully review the 'work product' produced by these tools. These tools are not a substitute for personal review of cases, statutes, and other legal materials.”
The full opinion can be found here: Joint Formal Opinion 2024-200.