(Scypre.com) – A group of three Democrats and one Republican introduced a bill in the House to prevent artificial intelligence systems from being able to launch a nuclear attack.
The bipartisan measure would block any future Defense Department policy that could allow an artificial intelligence system to fire nuclear weapons on its own. The sponsors argue that while U.S. military use of artificial intelligence can be appropriate for enhancing national security, it should not extend to deploying nuclear weapons without a human chain of command and control.
“Our job as Members of Congress is to have responsible foresight when it comes to protecting future generations from potentially devastating consequences,” Rep. Ken Buck said this week. Rep. Ted Lieu has likewise been vocal about the dangers of allowing the rapid development of artificial intelligence.
Existing Pentagon policy requires a human to be in the loop for any decisions regarding the use of nuclear weapons.
Lawmakers and officials in Washington are starting to reckon with both the positives and the dangers of artificial intelligence.
In an interview on America’s Newsroom on Friday morning, Buck said Congress had not looked at the issue for a long time. “We want to make sure that there’s a human in this process of launching a nuclear weapon, if at any point in time we need to launch a nuclear weapon,” he said.
Senate Majority Leader Chuck Schumer released a broad framework earlier this month calling for companies to allow outside experts to review their technology before it is publicly available to use.
The bipartisan legislation, announced on Wednesday, seeks to prevent an artificial intelligence system from making nuclear launch decisions. The Block Nuclear Launch by Artificial Intelligence Act would prohibit the use of federal funds to launch any nuclear weapon by an automated system. “As we live in an increasingly digital age, we need to ensure that humans hold the power alone to command, control, and launch nuclear weapons,” the bill’s sponsors said.
Humans, they argue, must remain part of any decision to use deadly force, especially with the nation’s most dangerous weapons.
According to the new bill, the United States will maintain a human “in the loop” to inform and execute decisions by the President to initiate and terminate nuclear weapon employment.
The National Security Commission on Artificial Intelligence had called for the US to affirm its policy that only human beings can authorize the employment of nuclear weapons; the new bill would codify that Defense Department principle into law.
In March, a group of researchers called for a pause in the development of systems “more powerful” than GPT-4, as anxiety grows over the future potential of rapidly advancing (and sometimes poorly understood and overhyped) generative artificial intelligence technology.
No one fears that GPT-4 will launch a nuclear strike, but some of the researchers who evaluate the capabilities of today’s most popular large language models for OpenAI worry that more advanced future artificial intelligence systems could pose a threat to human civilization. Some of that fear has spread to the broader public, even though such claims remain controversial within the machine learning community.
The new bill is part of a larger effort to avoid nuclear escalation. Two of its sponsors also reintroduced a bill that would prohibit the president from launching a nuclear strike without the approval of Congress. According to the lawmakers, the goal is to reduce the risk of nuclear Armageddon.
There are two cosponsors of the Block Nuclear Launch by Artificial Intelligence Act in the Senate.