And the Greatest of These is Fees
Adoption of AI by legal practitioners this year reached a level sufficient to prompt the ABA to issue Formal Opinion 512 regarding the ethical obligations of lawyers who utilize generative AI tools to deliver legal services.[i] The formal opinion caught my eye because I have long shared the ABA’s interest in the ethical use of AI in the legal space. I actually wrote a paper on the topic myself in 2021.[ii]
In its formal opinion, the ABA discusses how lawyers’ use of generative AI tools affects their ethical duties of: (a) competence, (b) confidentiality, (c) communication, (d) consent, (e) candor/meritorious claims, and (f) supervision – some of the same professional responsibilities I touched on in my own 2021 essay. These are all important ethical considerations, to be sure. But the ABA also mentions an ethical obligation I’d never contemplated – a lawyer’s duty to charge reasonable fees.
I was a little surprised after reading the opinion that I didn’t think of fees myself when I first began pondering the risks and benefits of Legal AI adoption. After all, fees are pretty darn important to lawyers, and reasonable fees are equally important to their clients. What’s more, increasing access to affordable legal services is not just one of the reasons I am such a vocal proponent of AI adoption in the legal space; the prospect of providing legal access to the underserved was also one of the primary drivers in my decision to enter law school.
I feel like that was a pretty big miss on my part. Thankfully, the ABA Standing Committee on Ethics and Professional Responsibility’s aim was better than my own. But I digress. Rather than beating myself up publicly for failing to consider fees in my frequent philosophical musings on the topic of Legal AI ethics, I will offer myself some grace and accept this second chance to review the ABA’s legitimate concerns on the duty to charge reasonable fees - arguably one of the most broadly impactful ethical duties a lawyer possesses.
Formal Opinion 512 contains a great deal of wisdom on how lawyers may ensure that they fully comply with Model Rule 1.5 in their professional use of generative AI. I will share a brief overview here. We know that Model Rule 1.5(a) requires that a lawyer’s fees and expenses be reasonable, but how does that apply to gen AI? Here’s a quick breakdown.
Explain the Charges
Before billing a client for the use of generative AI, a lawyer must explain the basis of the charges. The ABA recommends that the explanation be given in advance and in writing.[iii]
Share the Savings
Generative AI delivers efficiencies that lawyers must pass on to their clients or risk sanctions. Gen AI can do in moments what might take a person hours to complete. Maybe AI can’t do those things as well as a competent attorney, but that is covered under other model rules.
When billing is set at an hourly rate, the client should only be billed for the actual time spent. Here, that means the time it takes the lawyer to interface with the AI tool and carefully review its output.
Critically, lawyers billing on a contingency or flat fee basis may need to reduce their fees if time efficiencies realized from the use of AI are significant enough to make their original fee agreement unreasonable. As Formal Opinion 512 reminds us, a large fee charged for a disproportionate amount of work could be viewed as both unreasonable and sanctionable.[iv]
No New Surcharges
Lawyers are permitted to bill clients for reasonable expenses specific to their matters. That includes the actual cost of direct expenses incurred to use a generative AI tool for a particular client’s case. But the ABA cautions lawyers not to bill clients, such as through an impermissible surcharge, for the cost of gen AI tools in general use across their practices. Such expenses are likely unbillable overhead, just like the electric bill.
On the other hand, lawyers may still be able to bill clients for the use of gen AI tools, but only if the charges and rates are agreed upon in advance and do not exceed the direct costs plus a reasonable allocation for overhead. The ABA compared these charges to the way lawyers bill for photocopies today. Formal Opinion 512 urges lawyers to exercise their best professional judgment in determining when (and how much) to bill a client for the use of gen AI tools.
Conclusion
While the use of gen AI tools in legal practice may be novel, the ethical considerations underlying that use are not. When technology delivers shiny new tools, it does not deliver new ethical rules. Fees must still be fair and reasonable, and lawyers must continue to honor their ethical duties to their clients and the profession, especially when it comes to billing.
Ultimately, AI is a tool—albeit a powerful one—and lawyers must wield it wisely to ensure justice remains affordable and accessible. The duty to charge reasonable fees is one of the most important parts of that responsibility.
*Lead photo courtesy of ChatGPT (and me)
[i] “ABA issues first ethics guidance on a lawyer’s use of AI Tools”, American Bar Association News, https://www.americanbar.org/news/abanews/aba-news-archives/2024/07/aba-issues-first-ethics-guidance-ai-tools/.
[ii] Cathy Melton, When Lawyers Hire Chatbots: Ethical Considerations, April 26, 2021, https://drive.google.com/file/d/1JjjIWk6h8ARWwP1Umf7BrttGC9YgQNjW/view?usp=drive_link.
[iii] American Bar Association Standing Committee on Ethics and Professional Responsibility, Formal Opinion 512: Generative Artificial Intelligence Tools, July 29, 2024, https://www.americanbar.org/content/dam/aba/administrative/professional_responsibility/ethics-opinions/aba-formal-opinion-512.pdf.
[iv] Id.
The 3.8 Million Dollar Plan
To paraphrase the ingenious introductory sequence of a forward-thinking 1970s action series[i]: People, we can rethink this. WE HAVE THE TECHNOLOGY! We have the capability to make the world’s (or at least the U.S.’s) first remote bar exam. California will be that bar exam. Better than it was before. Stronger. Faster. Cheaper for all concerned.
OK - I totally added that last part. But now that I’ve gotten that out of my system, let’s talk about some super big news from the west coast. The State Bar of California Committee of Bar Examiners has announced that, beginning in February 2025, it hopes to give examinees the option to take the California bar exam remotely. California is the very first U.S. state to offer a remote option. To coincide with this groundbreaking policy shift, California plans to roll out its own fully online version of the bar exam, offered digitally in small testing centers around the world or even in examinees’ homes.[ii]
California’s Bar Examiners expect to save $3.8 million per year with the move to remote and small-testing-center bar exams made possible by technology. That is a substantial win for the committee and for technology. It’s also an epic win for examinees. Some examinee benefits are immediately apparent. California bar examinees will no longer be required to travel to the physical site of the exam for 2 to 5 days, which eliminates significant time and expense. That’s right: if you plan to take the California bar exam, no more overpaying for a limited-access hotel room just to guarantee you can reach the testing site on exam day! No more airfare, fuel, or food costs to consider on top of everything else.
After all, don’t bar examinees already have enough on their plates? They are typically recent graduates with normal adult living expenses and often limited funds. Few have had regular, full-time employment since the first semester of law school due to the level of time and commitment necessary to graduate. After graduation, the ABA recommends an 8 to 10 week bar prep period at 40 hours per week, so examinees often take time off work, if employed, to optimize their chance of passing the bar. Many have just paid for background checks, character & fitness investigations, the MPRE exam, registration fees, bar review courses/programs, and specialized exam software licensing in the months immediately preceding the bar exam.[iii] Costs vary by state and the bar prep courses selected, but the expense of taking the bar exam is significant by any standard. Any reduction in costs, whether direct or indirect, is a big boon for beleaguered bar examinees, who will undoubtedly thank California for passing the technological efficiencies on to them.
In addition to much needed cost savings, a remote bar exam offers examinees a significant intangible benefit – greater peace of mind. There is almost nothing leading up to the bar exam that could be characterized as “Zen”. Law school is an incredibly demanding, high stakes endeavor. The character & fitness investigation is simultaneously cumbersome and worrisome. Bar prep is mentally and physically exhausting. Travelling to strange environs can be taxing, even in the best of times. Standing in line with a clear plastic bag containing (hopefully) only approved items while waiting to be searched before (lord willing) being admitted to a gigantic, echoey, sweltering hot (or freezing cold – it simply cannot be predicted) exam hall to take the biggest test of your life with hundreds of strangers is – I’m not sure there is even a word adequate to describe it so I will not attempt to manufacture one. Finally, having to find the money to pay to do the nerve-racking activities above is beyond stressful.
Now, California bar examinees will be able to alleviate at least a portion of that stress thanks to the state’s smart use of available technologies. They can focus more time, energy, and money on being fully prepared for the bar exam and less on fretting over financial demands, travel arrangements, and temperature controls. I predict an almost immediate increase in pass rates for California. And before anyone suggests it – the increased success will NOT be the result of remote takers cheating. California will still require all examinees to take the exam simultaneously,[iv] and effective remote proctoring software has been widely available for some time now. Pass rates will increase because empowering examinees to take the bar exam from home (or a testing center near home) preserves valuable financial and mental resources they need to ace the exam.
So, I must enthusiastically applaud the California Committee of Bar Examiners’ choice and strongly encourage other states to follow California’s lead. I believe the shift to remote bar exams is the right move at the right time. Remote bar exams are fiscally responsible for examining boards, cost-efficient for examinees, and we do indeed have the technology.
*Lead photo courtesy of Dall-E
[i] “The Six Million Dollar Man”, Facebook, April 13, 2024, https://www.facebook.com/watch/?v=638639761753437.
[ii] Karen Sloan, “California to allow its new bar exam to be taken from anywhere”, Reuters, October 11, 2024, https://www.reuters.com/legal/government/california-allow-its-new-bar-exam-be-taken-anywhere-2024-10-11/.
[iii] Kayleigh McNeil, “Hidden Hurdles: The True Cost of the Bar Exam”, Washington Journal of Law, Technology & Arts, https://wjlta.com/2023/04/24/hidden-hurdles-the-true-cost-of-the-bar-exam/.
[iv] Sloan, supra.
Bound to Happen: AI Assisted Acceptance
I thoroughly enjoy experimenting with new technologies, so when I learned there was a free AI platform called predraft.ai that could make blogging nearly effortless, I was curious and eager to give it a try. Like any typically egocentric human, I take great pride in my writing skills. Thus, I had no intention of using the content the AI produced as my inaugural blog post. Rather, I simply wanted to experience and assess the ease of input and quality of output myself. I quickly changed my mind, though not completely.
I did not suddenly decide to copy and paste the AI-generated article into my website and pass it off as my own writing. I suppose some people would use the tool that way. That may even be the type of use predraft.ai intended, but that was never my plan. Maybe I am an idealistic fool, but the ethics of such a practice seem questionable at best. Furthermore, my own writing is simply better IMHO 😊. Instead, I thought it might be interesting to share the process and evaluate the output through the lens of a presumptively parallel personal experience.
The Process
Using predraft.ai was indeed quite simple, with a highly intuitive interface that requires minimal input from the end user. To generate an original article at no cost, one need only:
· Access the predraft.ai website at https://predraft.ai/topic,
· Select an article type from the drop-down menu,
· Type in a high-level topic,
· Enter any desired keywords, separated by commas, and
· Press the Autofill button.
Once the Autofill button is pressed, predraft.ai automatically generates:
· Additional related keywords,
· Key facts to highlight,
· Writing tone,
· Article length (normal or long), and
· Language/Country.
Each of the auto-populated data values is editable prior to generating an article, and users have the option of adding one or more URL links to specific sources if desired. A user can also choose to opt in or out of using real-time Google search results as a source.
Finally, with one click of the Write Article button, predraft.ai begins the process of writing a well-structured article that meets the specified criteria. The article, a blog post in my case, includes:
· Auto-generated title,
· Summary/Synopsis,
· Open-source cover and body photos/graphics,
· Sections that correspond to the summary,
· Properly formatted section headers, and
· Clearly marked in-line keywords.
While the estimated time to prepare an article is typically 5 to 10 minutes, all of my attempts have produced an article in significantly less time - around 1 or 2 minutes.
In addition to the polished article produced by predraft.ai, the tool simultaneously provides:
· An outline of the topics researched,
· A listing of any internal and external links, and
· Several suggestions for “What to Write Next”.
If any of the suggested topics sound interesting, the user may handily generate another article simply by clicking on the Create button.
Once an article is generated, users have several options for capturing it to use as they see fit. They may copy the article in rich text, HTML, or Markdown. If they are fortunate enough to have a WordPress site (I am not), they can even publish the article directly from this page of the predraft.ai interface. Overall, I found the predraft.ai article generation process to be simple, efficient, and expedient. For individuals or organizations that want to easily produce inexpensive, high-volume, original blog posts, the tool could certainly prove useful. For my part, though, I was particularly interested in the quality and content of the output.
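For readers who, like me, lack a WordPress site but are comfortable with a little scripting, here is a minimal sketch (entirely my own illustration, not a feature of predraft.ai) of turning an article copied as Markdown into an HTML fragment using the third-party Python markdown package. The file names are hypothetical.

```python
# Minimal sketch (my own illustration, not part of predraft.ai): convert an
# article copied as Markdown into an HTML fragment for pasting into a
# non-WordPress blog. Assumes the third-party "markdown" package is installed
# (pip install markdown); the file names below are hypothetical.
import markdown

def article_markdown_to_html(md_text: str) -> str:
    """Render copied Markdown to an HTML fragment ("extra" enables tables, etc.)."""
    return markdown.markdown(md_text, extensions=["extra"])

if __name__ == "__main__":
    with open("predraft_article.md", encoding="utf-8") as f:   # hypothetical file name
        html_fragment = article_markdown_to_html(f.read())
    with open("predraft_article.html", "w", encoding="utf-8") as f:
        f.write(html_fragment)
```

Pasting the resulting fragment into most blogging platforms’ HTML editors should preserve the headings and in-line links the tool generates.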
The Output
The AI-Generated Article
My initial prompts directed predraft.ai to generate an article on the specific legal functions AI will likely be able to perform unassisted by an attorney within the next 5 years. I’ve read articles speculating on this topic written by human authors, and I thought it would be fascinating to see what an AI entity had to say on the topic. You can read a copy of the article predraft.ai generated by clicking on the link below, if interested. Really, it is not too shabby: https://drive.google.com/file/d/12Z9UuvVjsZ7el-SuStFnHZXqmBsIWfNh/view?usp=sharing
An Objective Assessment of the AI-Generated Output
The AI-authored article was technically well-written with excellent grammar and a solid, easily readable structure that flowed well. The tool automatically generated and formatted a compelling title for the article and separated the body into sections with properly formatted, sensible section headers. Predraft.ai even provided an eye-catching graphic to accompany the article without being asked to do so.
The tool used 4 sources to craft the 1,149-word article, including embedded links to the external sources in the body of the article. Having spent the last few years wrangling legal citations, that strikes me as a very small number of sources. That said, this is a blog article on a general topic for public consumption, not an appellate brief prepared for a Supreme Court case, so 4 sources were likely adequate.
The style of writing, while not deficient, was easily recognizable as authored by AI. I find that AI-written content is nearly always identifiable by avid readers and experienced writers. I suspect AI writing may become more like human writing over time as technology evolves, but in the meantime there is no mistaking it as machine generated. I have never been able to put my finger on exactly what it is about AI-generated content that distinguishes it from content authored by humans, so I consulted ChatGPT for some insight. I specifically requested “brutal honesty.”
I think ChatGPT’s response was honest, if not brutal per se, and actually quite interesting. Hence, I’ve included it below for review. In short, ChatGPT seems to be saying AI’s writing cannot compare because only people can produce truly provocative, insightful, and nuanced writing. I particularly enjoyed the flattery at the beginning and the summary at the end because they make an apt segue into the subjective assessment that follows.
That’s a great question—and you're absolutely right, educated readers can often sense when content is AI-generated.
Here’s why:
1. Predictability and Over-Optimization:
AI writing often follows very structured, predictable patterns. It tends to adhere strictly to the format it was trained on, leading to robotic-sounding transitions, formulaic sentence structures, and overuse of certain common phrases. This can make the content feel mechanical or overly polished, lacking the natural variance that human writing has.
2. Surface-Level Insights:
AI models can generate content based on existing knowledge, but they struggle to provide deep, original insights, which are hallmarks of human expertise. Educated readers can tell when writing is sticking to “safe” or widely available information without offering unique, thought-provoking analysis or creative connections between ideas.
3. Repetitiveness and Lack of Subtlety:
While AI models have become much better at variety, they still tend to repeat words, phrases, or concepts more often than a human writer would. They lack the nuanced ability to rephrase ideas subtly across different paragraphs, which makes content feel repetitive or even redundant.
4. Limited Humor, Sarcasm, and Emotional Nuance:
AI-generated content struggles with the more complex, nuanced aspects of language, like humor, sarcasm, irony, or deeply emotional tone shifts. Even when AI attempts humor or emotional depth, it often comes across as awkward or forced because it’s not grounded in lived experience or intuitive understanding.
5. Overuse of Neutral Tone:
AI tends to err on the side of neutrality to avoid generating controversial or inappropriate content. This neutrality can drain the text of passion, personality, or a clear point of view, making it feel flat. Human writers, on the other hand, often take a stance, express strong opinions, or inject personal voice—especially in creative or analytical writing.
6. Awkward Phrasing and Syntax:
AI sometimes generates phrases that are grammatically correct but sound off, stiff, or unnatural. Phrases like “It is important to note that” or “In conclusion” get overused, and while grammatically fine, they feel like boilerplate rather than something a human would write naturally in that context.
7. Lack of Original Metaphors or Analogies:
Humans often use creative analogies or metaphors to illustrate complex ideas. AI, while capable of constructing comparisons, generally sticks to conventional or cliché ones. It lacks the creativity to make novel connections between disparate ideas that surprise or captivate the reader.
8. Minimal Use of Anecdotes or Personal Stories:
AI doesn’t have personal experiences to draw from, so it typically avoids using anecdotes or weaving in personal narratives, which are common in human-authored writing, especially in blogs, opinion pieces, or reflective essays. This absence can make the text feel less engaging or relatable.
In short, AI writing often feels like it’s “checking all the boxes” of what correct writing should be, but without the depth, creativity, and personal touch that makes human writing compelling. Educated readers pick up on these subtleties, spotting the consistent signs of AI-generated content because it lacks the imperfect, unique voice of a human mind at work.
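Purely to illustrate how mechanical some of those "tells" are, here is a toy sketch (my own, and emphatically not a real AI-content detector) that flags two of them, the boilerplate phrases from item 6 and the repetitiveness from item 3, using nothing fancier than phrase counts and a type-token ratio.

```python
# Toy sketch (my own illustration, not a real AI detector): flag two of the
# "tells" ChatGPT lists above - boilerplate phrases (#6) and repetitiveness (#3).
import re
from collections import Counter

BOILERPLATE_PHRASES = ["it is important to note that", "in conclusion"]  # taken from item 6

def crude_ai_tells(text: str) -> dict:
    """Count boilerplate phrases and measure vocabulary variety (type-token ratio)."""
    lowered = text.lower()
    phrase_counts = {p: lowered.count(p) for p in BOILERPLATE_PHRASES if p in lowered}
    words = re.findall(r"[a-z']+", lowered)
    ttr = len(set(words)) / len(words) if words else 0.0  # lower ratio = more repetition
    return {
        "boilerplate_hits": phrase_counts,
        "type_token_ratio": round(ttr, 3),
        "top_words": Counter(words).most_common(5),
    }

if __name__ == "__main__":
    print(crude_ai_tells(
        "In conclusion, it is important to note that AI writing can feel repetitive. "
        "It is important to note that repetition is a common tell."
    ))
```

Obviously nothing this simple can distinguish machine prose from human prose; the point is only that the patterns ChatGPT describes are concrete enough to count.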
A Wholly Subjective, Potentially Biased Human Assessment
While I found the article easy enough to read, I was disappointed to observe that the AI did not have any new or surprising revelations regarding unassisted AI legal functions on the horizon, at least none that it was willing to share with me. In fact, the tool mentioned no functions that AI is not fully capable of performing today. It seemed odd to me that an entity of such vast intelligence could not imagine legal applications for AI far beyond what I myself could imagine. Even stranger, I thought, was that predraft.ai presented past and current functionality as if it were speculative, especially given that the tool clearly had access to real-time Google searches.
I was suddenly reminded of a time in the distant past (the “internet boom”) when there was a renewed push toward large scale process automation. I was a huge fan of process automation initiatives, likely for the same reason I enjoyed solving puzzles. I loved both the challenge and the payoff - the exhilarating idea of taking a massive heap of unnecessarily cumbersome and disconnected pieces and transforming them into a beautifully effective and orderly whole. Who wouldn’t love such an idea? Turns out, most people.
You see, the majority of humans do not really enjoy change. They are excited by the possibilities inherent in new technologies, but not always in a good way. Occasionally, this excitement can resemble the excitement of a hive full of bees met by a bear intent on extracting an indulgent midafternoon snack of honey. This is especially true when the change in question is a bit of a mystery and it’s coming to their workplace. Science tells us that adrenaline-based fear and excitement are physiologically indistinguishable.[i] Regardless of the nature of the stimulus applied, individual human perception and interpretation alone determine whether the given stimulus is viewed as positive (excitement) or negative (fear).[ii]
I, on the other hand, am a nerd from way back. I love people, but I also love technology and the things I believe technology can do to improve human lives. I largely perceive technological advances as positive (exciting) and unknown consequences as possibilities. As a result, I was STOKED by any opportunity to assist with process automation efforts – bursting with questions and suggestions aimed at advancing them. On one occasion, I was speaking animatedly with a mentor about ways to overcome what I viewed as a limitation in an automated process. In short, the limitation was the necessity of human intervention at various points in the process without any obvious purpose. It seemed rather inefficient, and I was nearly obsessed with “improving” the functionality. In fact, as my mentor explained, those limitations had been intentionally incorporated into the process. The automation could indeed run from beginning to end with a single efficient click of a button, just as I’d envisioned, but if the process had been rolled out that way from its inception, no one would ever have clicked that button. People feared being replaced by an intelligent algorithm.
My experience generating an article using predraft.ai was reminiscent of my long-ago revelation about fear and reluctance in the adoption of new technologies. I expected the output to contain something groundbreaking – exciting, if you will, depending on your individual perspective – or at least thought-provoking, much like I once naively expected to improve process automation by eliminating the need for intermittent human intervention. But the tool instead provided output much more palatable to the masses – something easy to read and nonthreatening, which I have to imagine is for the best.
Since generating the first article on Legal AI, I’ve generated a handful more on similar topics, this time manually adjusting the tone to values such as “angry”, “fearful”, “skeptical”, and “doubtful”. In every case, the article returned was clearly pro-AI. For example, in an article entitled “Overview of AI’s Impact on the Future of Legal Research”, I requested an “angry” tone. The tool generated an article containing the following statement: “Artificial Intelligence (AI) is not just a buzzword; it's radically altering the legal landscape, and it's about time lawyers wake up to this reality. AI is enhancing research methods, streamlining processes, and increasing efficiency across the board. A staggering 79% of law firm professionals believe that AI will dramatically transform their work within the next five years. It's frustrating that so many in the field still seem oblivious to the fact that AI can dramatically reduce the time spent on mundane tasks, allowing legal professionals to concentrate on strategic decision-making instead of drowning in paperwork.”
Arguably, predraft.ai did deliver on the angry tone, but the anger was directed not at the adoption of AI in the legal space so much as at legal practitioners who doubt the value of AI for legal research. As I’ve already stated, I myself am heavily pro-AI, so this creates no philosophical dissonance for me, and I even occasionally share the tool’s apparent frustration with humans. I merely point it out to demonstrate that this particular algorithm is clearly operating within intentional bounds, and those bounds appear to acknowledge that most people find technology less threatening when it is delivered incrementally and in a fashion that still appears to consider the importance of the humans interacting with it. As all wise stewards of technology realize, when the potential exists for people to view a technology as the enemy (rightly or wrongly), and to sense that it may threaten their livelihood, that risk must be addressed at the outset – even in the algorithm itself – or no one will ever click that button.
Try predraft.ai: https://predraft.ai/topic
[i] Anne Guillot, “Excitement over fear”, Breathe Magazine, October 6, 2020, https://breathemagazine.com.au/mindfulness/excitement-over-fear/#:~:text=Excitement%20and%20fear%20are%20in,the%20mind%20interprets%20the%20experience.
[ii] Alex Korb Ph.D., “Predictable Fear: Why the brain likes haunted houses”, Psychology Today, October 31, 2013, https://www.psychologytoday.com/us/blog/prefrontal-nudity/201410/predictable-fear.