Guest post by Maya Bensalem
A recent petition before a U.S. District Court has ignited a debate over the undisclosed use of AI to draft an arbitral award. In La Paglia v. Valve Corp., the claimant, John La Paglia, petitioned the U.S. District Court for the Southern District of California to vacate an arbitral award rendered under the auspices of the American Arbitration Association (AAA). The challenge relies, inter alia, on the arbitrator’s undisclosed and allegedly inappropriate use of artificial intelligence.
While there was no clear and convincing evidence definitively proving that AI was misused in drafting the award, the claimant argued that certain elements raised concern. Chiefly, the petition noted that the award itself appeared to bear hallmarks commonly associated with AI-generated text, including unusual phrasing and structural patterns. Most notably, the facts section contained assertions that were false or never introduced during the arbitration proceedings, suggesting the inclusion of information not drawn from the official record.
According to the claimant, further suspicion arose from the arbitrator’s own admission that he had used ChatGPT to write unrelated articles, along with his stated desire to finalize the award quickly before an upcoming trip, which could have motivated an overreliance on AI tools to accelerate the drafting process. La Paglia contends that AI-generated content may have improperly influenced the outcome of the dispute.
The legal foundation for the challenge rests on Section 10(a)(4) of the Federal Arbitration Act (FAA), which permits a court to vacate an arbitral award if the arbitrator is found to have “exceeded their powers.” This provision applies when an arbitrator steps outside the boundaries of authority granted by the arbitration agreement.
The claimant’s key argument is that the arbitrator’s use of AI constitutes an unauthorized delegation of their adjudicative powers: “An arbitrator’s reliance on generative AI to replace their own role, and the parties’ submissions, in the litigation process betrays the parties’ expectations of a well-reasoned decision rendered by a human arbitrator.” To support this contention, the claimant drew analogies from other cases involving improper delegation of adjudicative powers, citing, inter alia, Bassett’s Adm’r v. Cunningham’s Adm’r, an 1853 decision in which a Virginia court held that arbitrators had unlawfully delegated their authority by relying on third parties to examine financial documents and adopting their conclusions without independently reviewing the evidence themselves.
Assistance in drafting an arbitral award is not per se objectionable, so long as the tribunal retains full control over the reasoning and content of the final award. It has even become standard practice in arbitration for adjudicators to seek the assistance of a third party. Tribunal secretaries, for instance, play a supportive role, assisting arbitrators with administrative and procedural tasks such as organizing case materials, managing communications, and drafting non-substantive sections of awards. Leading institutions, including the ICC and LCIA, have issued guidance emphasizing that tribunal secretaries may provide support but must not usurp the arbitrators’ decision-making authority.
The role of tribunal secretaries was further clarified in the Yukos proceedings, where the Hague Court of Appeal ruled that although the tribunal had failed to disclose the extent of its assistant’s involvement in drafting the award, this did not invalidate the proceedings. The court emphasized that so long as the arbitrators ultimately assumed responsibility for the decision, delegating drafting support did not warrant the setting aside of the award.
The controversy at the heart of La Paglia v. Valve Corp. highlights the urgent need for clear boundaries governing the use of artificial intelligence in arbitral decision-making. In this regard, the principles established for the use of arbitral secretaries offer a valuable analogue. Just as secretaries may assist with administrative and procedural tasks—but not replace the arbitrator’s core adjudicative function—AI tools should also be confined to a strictly supportive role.
This approach has recently been affirmed by key institutions in the arbitration field, including the Silicon Valley Arbitration and Mediation Center (SVAMC) and the Chartered Institute of Arbitrators (Ciarb). Both institutions have issued guidelines emphasizing that arbitrators must not delegate any part of their personal mandate to AI systems, particularly when it comes to deciding legal or factual matters.
While AI may be used to assist in organizing evidence, managing documents, or even drafting language, the ultimate responsibility for interpreting the law, assessing the evidence, and reasoning the award must remain with the arbitrator. This safeguard preserves the integrity of arbitration by ensuring that the decision reflects the arbitrator’s independent judgment, not the opaque logic of an algorithm.
The principle that an award must be the product of a transparent, accountable process is central to the legitimacy of arbitration. A fundamental concern with AI-generated content is the “black box” problem: even when the output seems coherent, the reasoning behind it is often untraceable and unverifiable. If AI-generated content replaces or distorts the arbitrator’s evaluation of the parties’ arguments and the record, that reliance could constitute an unauthorized delegation of the arbitrator’s core duty, thereby breaching the parties’ agreement and justifying vacatur.
This case brings to the forefront critical concerns surrounding due process, transparency, and the impermissible delegation of core adjudicative functions to AI systems. La Paglia’s challenge implicates not only principles of U.S. domestic arbitration law but also has broader implications for the international enforcement of arbitral awards. While the award itself is domestic, the underlying issues resonate globally. As such, this case may serve as an early signal of the legal and ethical complexities international arbitration will face in the age of AI.
The court has yet to issue its ruling on the matter.
Disclaimer: The above is intended for information purposes only and does not constitute legal advice. Please refer to the terms and conditions page for more information.
