Biden Administration Outlines Government ‘Guardrails’ for A.I. Tools

President Biden on Thursday signed the first national security memorandum detailing how the Pentagon, the intelligence agencies and other national security institutions should use and protect artificial intelligence technology, putting “guardrails” on how such tools are employed in decisions ranging from the use of nuclear weapons to the granting of asylum.

The new document is the latest in a series Mr. Biden has issued grappling with the challenges of using A.I. tools to speed up government operations — whether detecting cyberattacks or predicting extreme weather — while limiting the most dystopian possibilities, including the development of autonomous weapons.

But most of the deadlines the order sets for agencies to study how to apply or regulate the tools will fall after Mr. Biden leaves office, leaving open the question of whether the next administration will abide by them. While most national security memorandums are adopted, or amended only on the margins, by successive presidents, it is far from clear how former President Donald J. Trump would approach the issue if he is elected next month.

The new directive was announced on Thursday at the National War College in Washington by Jake Sullivan, the national security adviser, who prompted many of the efforts to examine the uses and threats of the new tools. He acknowledged that one challenge is that the U.S. government funds or owns very few of the key A.I. technologies — and that they evolve so fast that they often defy regulation.

“Our government took an early and critical role in shaping developments — from nuclear physics and space exploration to personal computing and the internet,” Mr. Sullivan said. “That’s not been the case with most of the A.I. revolution. While the Department of Defense and other agencies funded a large share of A.I. work in the 20th century, the private sector has propelled much of the last decade of progress.”

Mr. Biden’s aides have said, however, that the absence of guidelines about how A.I. can be used by the Pentagon, the C.I.A. or even the Justice Department has impeded development, because companies were uncertain which applications would be legal.

“A.I., if used appropriately and for its intended purposes, can offer great benefits,” the new memorandum concluded. “If misused, A.I. could threaten United States national security, bolster authoritarianism worldwide, undermine democratic institutions and processes, facilitate human rights abuses.”

Such warnings have become commonplace. But they are a reminder of how much more difficult it will be to set rules of the road for artificial intelligence than it was to create, say, arms control agreements in the nuclear age. Like cyberweapons, A.I. tools cannot be counted or inventoried, and everyday uses can, as the memorandum makes clear, go awry “even without malicious intent.”

That was the theme that Vice President Kamala Harris laid out when she spoke for the United States last year at international conferences aimed at assembling some consensus about rules on how the technology would be employed. But while Ms. Harris, now the Democratic presidential nominee, was designated by Mr. Biden to lead the effort, it was notable that she was not publicly involved in the announcement on Thursday.

The new memorandum runs about 38 pages in its unclassified version, with a classified appendix. Some of its conclusions are obvious: It rules out, for example, ever letting A.I. systems decide when to launch nuclear weapons; that decision is left to the president as commander in chief.

While it seems clear that no one would want the fate of millions to hang on an algorithm’s pick, the explicit statement is part of an effort to lure China into deeper talks about limits on high-risk applications of artificial intelligence. An initial conversation with China on the topic, conducted in Europe this past spring, made no real progress.

“This focuses attention on the issue of how these tools affect the most critical decisions governments make,” said Herb Lin, a Stanford University scholar who has spent years examining the intersection of artificial intelligence and nuclear decision-making.

“Obviously, no one is going to give the nuclear codes to ChatGPT,” Dr. Lin said. “But there is a remaining question about how much of the information the president is getting is processed and filtered through A.I. systems — and whether that is a bad thing.”

The memorandum requires an annual report to the president, assembled by the Energy Department, about the “radiological and nuclear risk” of “frontier” A.I. models that may make it easier to assemble or test nuclear weapons. There are similar deadlines for regular classified evaluations of how A.I. models could make it possible “to generate or exacerbate deliberate chemical and biological threats.”

It is the latter two threats that most worry arms experts, who note that getting the materials for chemical and biological weapons on the open market is far easier than obtaining bomb-grade uranium or plutonium, needed for nuclear weapons.

But the rules for nonnuclear weapons are murkier. The memorandum draws from previous government mandates intended to keep human decision makers “in the loop” of targeting decisions, or overseeing A.I. tools that may be used to pick targets. But such mandates often slow response times. That is especially difficult if Russia and China begin to make greater use of fully autonomous weapons that operate at blazing speeds because humans are removed from battlefield decisions.

The new guardrails would also prohibit letting artificial intelligence tools make a decision on granting asylum. And they would forbid tracking someone based on ethnicity or religion, or classifying someone as a “known terrorist” without a human weighing in.

Perhaps the most intriguing part of the order is that it treats private-sector advances in artificial intelligence as national assets that need to be protected from spying or theft by foreign adversaries, much as early nuclear weapons were. The order calls for intelligence agencies to begin protecting work on large language models or the chips used to power their development as national treasures, and to provide private-sector developers with up-to-the-minute intelligence to safeguard their inventions.

It empowers a new and still-obscure organization, the A.I. Safety Institute, housed within the National Institute of Standards and Technology, to help inspect A.I. tools before they are released to ensure they could not aid a terrorist group in building biological weapons or help a hostile nation like North Korea improve the accuracy of its missiles.

And it describes at length efforts to bring the best A.I. specialists from around the world to the United States, much as the country sought to attract nuclear and military scientists after World War II, rather than risk them working for a rival like Russia.
