THE NOTE

Does the APA Require Notice-and-Comment When an Agency Uses AI to Draft a Proposed Rule?

The Question

When a federal agency uses a large language model to generate the text of a proposed rule, does this trigger additional procedural obligations under the Administrative Procedure Act?

The Claim

Existing APA requirements apply to the proposed rule as published, regardless of how it was drafted; the choice of drafting tool does not independently trigger notice-and-comment obligations.

Lead
by Margaret Chen · Professor of Administrative Law, Georgetown University Law Center

The Administrative Procedure Act's notice-and-comment requirements, codified at 5 U.S.C. § 553, impose obligations on agencies at the point of proposing and finalizing a rule. The statute is tool-agnostic: it mandates that agencies publish notice of proposed rulemaking in the Federal Register, provide interested persons an opportunity to participate through submission of written data, views, or arguments, and incorporate in the final rule a concise general statement of basis and purpose.

Nothing in this framework conditions its requirements on the method by which the agency drafted the proposed text. Whether a rule is drafted by a career attorney, a political appointee, an outside contractor, or a large language model, the APA's procedural requirements attach to the output — the proposed rule itself — not to the process by which it was composed. To hold otherwise would require reading into Section 553 a process-based trigger that the text does not support.

This does not mean that AI-assisted drafting raises no administrative law concerns. Arbitrary and capricious review under Section 706(2)(A) requires the agency to demonstrate reasoned decisionmaking, and reliance on AI-generated text without substantive review could undermine the agency's ability to articulate the basis for its policy choices. But this is a question of substance, not procedure — it goes to the adequacy of the agency's reasoning, not to whether additional notice-and-comment steps are required.

The distinction matters because conflating process with substance would create perverse incentives. If the mere use of an AI drafting tool triggered additional procedural requirements, agencies would be discouraged from adopting efficiency-enhancing technologies, or would simply obscure their use of such tools. Neither outcome serves the APA's purposes.

Response
by David Kalman · Senior Counsel, Office of Information and Regulatory Affairs (former)

Professor Chen's analysis is technically correct but strategically incomplete. The APA's text is indeed tool-agnostic — but the principles underlying arbitrary and capricious review are not. The question is not whether AI drafting triggers a new procedural step, but whether it functionally undermines the existing procedural guarantees in ways that require compensating safeguards.

When an agency uses a large language model to generate regulatory text, the resulting draft reflects statistical patterns in training data, not the agency's deliberative process. If the agency then publishes that text with minimal substantive modification, the notice-and-comment period becomes a procedural formality rather than a genuine opportunity for public participation to shape agency reasoning. The public comments on a text whose analytical foundations are opaque even to the agency officials responsible for it.

This is not a hypothetical concern. Executive Order 12866 and its successors require agencies to conduct cost-benefit analysis and to demonstrate that regulatory choices reflect reasoned consideration of alternatives. An AI-generated draft may embed assumptions about costs, benefits, and alternatives that the agency has not independently evaluated. The resulting rule may satisfy Section 553's formal requirements while violating the substantive rationality review that State Farm and its progeny demand.

The appropriate response is not to require "additional" notice-and-comment, but to require transparency: the agency should disclose when AI tools were used in drafting, identify which portions of the regulatory text were AI-generated, and demonstrate through the rulemaking record that human officials independently evaluated the analytical basis for each significant regulatory choice.

Rejoinder
by Margaret Chen

Mr. Kalman and I agree on more than we disagree. His transparency proposal is sound policy, and I would support it as a best practice or executive directive. But I resist framing it as an APA obligation. The statute's genius is its generality — it has survived seventy years of technological change precisely because it regulates outputs, not inputs.

The moment we condition APA compliance on disclosing how a rule was drafted, we create a precedent that extends far beyond AI. Must agencies disclose when they relied on an industry lobbyist's draft language? When a congressional staffer provided model text? These are questions of political accountability, not administrative procedure. The APA should remain the floor, not the ceiling, of regulatory transparency.