AI and the Modern Lawyer: Ethics, Oversight, and Safer Practice

Written by

Awais Haq

(Updated December 21, 2025)
5 min read

AI is transforming law, but ethical risks abound. Learn how to navigate ABA Rules 1.6, 3.3, and 5.3 to prevent hallucinations, protect client privacy, and use AI tools responsibly in your practice.

Quick Answer

AI offers legal efficiencies in research and drafting, but strict ethical oversight is non-negotiable. Lawyers must treat AI as "non-lawyer assistance" (ABA Rule 5.3), verify all citations to prevent hallucinations (ABA Rule 3.3), and ensure client data is never stored by public models to avoid privacy breaches (ABA Rule 1.6).

There is no shortage of stories about irresponsible AI use in law.

Very recently, on November 18 in fact, a Connecticut attorney was fined for hallucinated citations in a draft he had prepared with Descrybe.AI. The opposition brief was written on behalf of a pro se litigant, that is, a client who would otherwise be representing himself.

The attorney, upon realizing the errors, apologized "sincerely and unreservedly" and committed to no longer using AI tools. Judge Hall advised caution but stopped short of endorsing an outright ban on AI.

These technologies aren't going anywhere, and their use in every field is only growing. Widely used legal platforms like LexisNexis and Westlaw now feature AI-powered tools too, after all.

Smart and Responsible AI Use in Law

So how can one use AI smartly and responsibly in the field of law?

Research

Lawyers need to be well versed in a large volume of case law and precedent, and finding the right authority can take hours of manual research. AI tools like Casetext, ROSS Intelligence, and Lexis+ scan huge databases of cases, statutes, and regulations, shortening this process.

Responsible use: Once AI has pointed to a case or statute, always verify it against the original source before citing it.

Document Review

You may need to scan lengthy documents on a daily basis, searching for missing clauses or inconsistencies. AI cannot be relied on for this wholesale, but it can accurately flag many passages for review by a legal professional.

Responsible use: Ensure the document is reviewed only by a secure, encrypted AI that does not train on it. Use it to flag passages for review, not to amend them with generated content.

Predictive Analysis

AI is increasingly being used to analyze trends and predict the likely outcome of a case. Because it can identify patterns in previous cases, past judgments, judge tendencies, opposing counsel's history, and so on, it can produce credible predictions that, with lawyer oversight, benefit a case greatly.

Responsible use: Treat predictions as advisory data points, not definitive forecasts. Your own professional judgment, which accounts for demeanor and nuance, is more valuable than AI insights.

Compliance Monitoring

Law firms can use AI to monitor internal processes as well as external changes in laws and industry standards. Done consistently, this helps lawyers ensure their work is always up to standard and meets all applicable compliance requirements.

Responsible use: Regularly audit internal and external data feeds for completeness and bias.

Intake and Calls

Chatbots and automated forms on attorney websites can send intake information directly to Clio or databases on other platforms. They can also interpret scattered data, compile it in one place, and organize legal workflows and deadlines.

Responsible use: Ensure that intake-automation AI tools do not train on or store client details once they have been submitted.
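
As an illustration, here is a minimal sketch in Python (using Flask and requests) of an intake webhook that forwards only the expected fields to a practice-management endpoint and never logs or stores the raw submission. The URL, token, and field names are hypothetical placeholders, not Clio's actual API.

import os
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
# Hypothetical integration endpoint and token, supplied via environment variables.
PRACTICE_MGMT_URL = os.environ["PRACTICE_MGMT_URL"]
API_TOKEN = os.environ["PRACTICE_MGMT_TOKEN"]
REQUIRED_FIELDS = {"name", "email", "matter_type", "description"}

@app.post("/intake")
def intake():
    data = request.get_json(silent=True) or {}
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        return jsonify({"error": f"missing fields: {sorted(missing)}"}), 400
    # Forward only the expected fields; drop anything extra the chatbot or form added.
    payload = {k: data[k] for k in REQUIRED_FIELDS}
    resp = requests.post(
        PRACTICE_MGMT_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    # Nothing is logged or stored locally; only a status goes back to the caller.
    return jsonify({"status": "received"}), 202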

Overall Principle of Responsible Use

AI use is responsible wherever the technology works hand in hand with real lawyer oversight and verification, and only where client privacy and confidentiality can be guaranteed. It will not do, for example, to discuss private client details with ChatGPT, a consumer platform that many have rated poorly on security.

The Perils of Unethical AI Use in Law

Unethical use of AI comes down to a blatant lack of oversight and disregard for client privacy. These are some of the ways you should avoid using AI:

Unauthorized Practice

This refers to using AI tools to perform tasks that require the expertise of a licensed attorney, such as preparing legal documents or giving legal advice. AI tools are not bound by any accountability mechanism; it is attorneys who are held liable. AI cannot replace a lawyer.

  • ABA Rule 5.3 requires you to treat these tools like nonlawyer assistance and supervise them.
  • ABA Rule 1.1 expects you to understand the tool and verify its output.

Hidden Bias

Neither attorneys nor laypersons know what data AI models are trained on. That data may carry bias around social identities and phenomena, and the answers produced may feature and perpetuate this bias, which can affect a case.

Manipulating Outcomes

Because AI tends to hallucinate and to give users the answer they are looking for ("glazing" them), its output can be skewed: drafts, documents, and facts may be fabricated to support a desired outcome. Submitting hallucinated material to a court breaches ABA Rule 3.3 (candor toward the tribunal).

Fair Billing

ABA Rule 1.5 requires a large degree of transparency and reasonableness in billing. Charging clients for hours you did not actually work because AI sped up the task violates Rule 1.5.

Breach of Privacy

It is easy to underestimate this factor, but AI tools carry a high risk of privacy breaches, and ABA Rule 1.6 (confidentiality of information) applies directly. If your AI tool stores, trains on, or shares identifiable client information, you are violating that rule.

Avoiding Confidentiality Breaches with AI

Many consumer AI tools store user input, and some train on it, so using them without violating a rule can seem impossible. There are, however, ways to avoid a confidentiality breach.

Remove all identifiers before AI sees anything

This includes dates, locations, names, unique facts, business secrets, medical information, and so on. Anything that could be used to narrow down and identify a client must be removed.
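
As a rough illustration, the Python sketch below strips a few common identifiers before any text is sent to an AI tool. The patterns and placeholder labels are examples only and will not catch everything; a human should still review the redacted text before submitting it.

import re

# Illustrative patterns only; expand for your own jurisdiction and practice area.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
]

def redact(text: str, client_names: list[str]) -> str:
    """Replace known client names and common identifier patterns with placeholders."""
    for name in client_names:
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

# redact("Call Jane Doe at 203-555-0147 before 4/12/2024.", ["Jane Doe"])
# -> "Call [CLIENT] at [PHONE] before [DATE]."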

Use contractual AI systems

AI platforms do exist that offer encrypted processing without retention, but free consumer platforms are not among them. Proton Lumo, the OpenAI API, and other enterprise or custom-built systems are your best bet.

If your query concerns a sensitive matter and you're using a third-party tool, obtaining the client's informed consent is vital.
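
For example, here is a minimal Python sketch of querying a model through a provider API (the OpenAI Python SDK here) rather than a consumer chat interface. Retention and training terms depend on your agreement with the provider, so confirm them before sending anything sensitive, and still redact identifiers first; the model name and prompt are illustrative.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Already redacted and fictionalized; no real client details appear in the prompt.
redacted_prompt = (
    "An employee signed a non-compete with some company in a state where such "
    "clauses are narrowly enforced. What arguments might each side raise?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; use whatever your agreement covers
    messages=[{"role": "user", "content": redacted_prompt}],
)
print(response.choices[0].message.content)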

Use hypotheticals wherever possible

Change identifiable details into a fictionalized version and use vague terms ("someone", "they/them", "some company"). This preserves the tool's analytical value without jeopardizing privacy. Use AI to brainstorm and analyze the ways a case could play out.
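
One lightweight way to do this, sketched below in Python, is a simple two-way mapping: swap real details for fictional stand-ins before prompting, then translate the AI's answer back for internal use. The mapping stays on your machine and is never sent to the tool; the names here are invented.

# Hypothetical mapping of real details to fictional stand-ins.
PSEUDONYMS = {
    "Jane Doe": "the employee",
    "Acme Holdings LLC": "some company",
    "New Haven, CT": "a mid-sized city",
}

def fictionalize(text: str) -> str:
    # Applied before the text is sent to any AI tool.
    for real, fake in PSEUDONYMS.items():
        text = text.replace(real, fake)
    return text

def restore(text: str) -> str:
    # Applied to the AI's answer, locally, for internal use only.
    for real, fake in PSEUDONYMS.items():
        text = text.replace(fake, real)
    return text

prompt = fictionalize("Jane Doe wants to leave Acme Holdings LLC but stay in New Haven, CT.")
# -> "the employee wants to leave some company but stay in a mid-sized city."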

Conclusion: Oversight is Key

AI has entered every profession and every field and is changing the way work is done. It comes with both efficiencies and caveats, but that should not deter you from using it. Responsible use centers on care and oversight.

When using AI tools, lawyers need to be mindful of their state's confidentiality and professional-conduct rules, and they need to treat these tools' output with the necessary oversight and skepticism.

Navigate Legal AI Responsibly

Interested in custom-built, secure AI solutions for your legal practice that ensure compliance and confidentiality?

Contact Our Consulting Team

About Awais Haq

From civil engineering to revolutionizing legal tech, I’m a problem-solver driven by impact. Disillusioned by industry malpractice, I pivoted to build tech solutions that matter - first scaling an online tutoring marketplace to $800K ARR, then founding Time Technologies LLC in Nov 2024. With 19+ projects across edtech, government security, and AI, I now focus on empowering small to mid-sized law firms by slashing admin burdens.

Connect on LinkedIn
