Dean Ball, board member of The Alexander Hamilton Institute for the Study of Western Civilization (AHI), recently offered his “early thoughts” on the First Amendment status of artificial intelligence in light of recent Supreme Court decisions. A research fellow at George Mason University’s Mercatus Center, Mr. Ball studies AI and writes on it frequently.

The question wasn’t directly at issue in the cases that prompted these remarks; the cases were simply returned to the lower courts because the litigants had not, in legal terms, framed their arguments appropriately. But the justices nonetheless “seemed eager to offer broader thoughts.”

Nearly all the Supreme Court Justices, Mr. Ball writes in “Is AI in Trouble at the Supreme Court?” at his influential online newsletter Hyperdimensional, expressed “skepticism about whether the use of AI tools can be considered Constitutionally protected … at least in this context.”

Moody v. NetChoice and NetChoice v. Paxton involved Florida and Texas laws that prohibit large websites from exercising viewpoint discrimination when moderating user-generated content. The industry group asserted that all conceivable applications of these laws were unconstitutional. The Court declined to rule on that claim, unanimously agreeing that such a ruling would require proof of unconstitutionality for all such applications, or at least for many more than NetChoice had presented arguments about.

In her majority opinion, however, Justice Elena Kagan suggested the laws were clearly unconstitutional in one respect: they require companies to disseminate some speech against their will. But there is more to the issue.

Social media platforms employ not just content rules or guidelines but also AI algorithms, automated formulas that are used, among other things, for “curating” (customizing) individual users’ feeds. The vast majority of major social media platforms’ decisions about the placement of content, Mr. Ball writes, are curation decisions rather than moderation decisions (enforcement of a platform’s rules or standards for content). The majority opinion covers only “content moderation decisions based on community rules,” not “the content curation decisions that define most social media feeds.”

“What if an algorithm does both? … what if it primarily curates based on user preferences, but also screens content for violations of community guidelines? Are all actions taken by the algorithm [constitutionally protected] as long as it does a little bit of moderation? Or are only the moderation decisions safe from regulation?” The justices’ view on this seems “unclear.”

Thus, this major caveat in Kagan’s opinion: “We … do not deal here with feeds whose algorithms respond solely to how users act online—giving them the content they appear to want, without any regard to independent content standards.”

While Justices Roberts, Kavanaugh, and Sotomayor, along with Kagan, excluded such situations from their analysis, Justices Alito, Gorsuch, Thomas, and Barrett voiced “doubts about whether any use of AI … should be protected.” Thus, “eight of nine … have expressed some … skepticism about whether AI content curation and/or moderation decisions should be considered protected speech under the First Amendment.”

In Part 2 of his follow-up piece, Mr. Ball gives the justices’ reasons for doubting that a platform has corporate free-speech rights when content decisions have been delegated to algorithms. He cites these questions from Justice Barrett:

“What if a platform’s owners hand the reins to an AI tool and ask it simply to remove ‘hateful’ content? If the AI relies on large language models to determine what is ‘hateful’ and should be removed, has a human being with First Amendment rights made an inherently expressive ‘choice . . . not to propound a particular point of view’?”

He also quotes Alito, Thomas, and Gorsuch, who asked: “when AI algorithms make a decision, ‘even the researchers and programmers creating them don’t really understand why the models they have built make the decisions they make.’ Are such decisions equally expressive as the decisions made by humans?”

“The Justices,” Mr. Ball writes, “have homed in on an issue that vexes many in the AI community: when an AI acts on a person’s behalf, is that AI operating as an extension of the user’s will? … on behalf of the company that made it? … its own behalf? … nobody’s behalf?”

The responses to these and related questions “will determine much about how AI is treated in the legal system,” both in terms of social media regulation and more generally.