At a time when election-year politics are quashing legislative initiatives, AI safety is the rare issue gaining momentum in Congress. And it couldn't come at a better time, as the American public becomes increasingly aware of the politics embedded in AI models.
Maxim Lott developed a tool that administers a political compass quiz to the major AI models. Every one of them leans economically left and socially libertarian.
Over the last several days, users have discovered that Google’s model, Gemini, won’t assist you if you are pro-fossil fuel, won’t say whether Stalin is worse than libertarians, and thinks repealing net neutrality might be as bad as Hitler.
AI is going to change society; the least it can do is confidently tell us Hitler was bad.
We know the labs can't be trusted to handle these risks themselves. The Gemini debacle makes this crystal clear.
Ideologically driven software developers are pouring their attention into rewriting history to suit their politics. We need congressional action to rein them in and to tackle the real challenges.
The stage for new legislation was set in October when President Biden signed Executive Order 14110 on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” In Congress, the most thorough legislative frameworks have been sponsored by unusual bipartisan coalitions.
One is the Bipartisan Framework for U.S. AI Act, developed by Sens. Richard Blumenthal, D-Conn., and Josh Hawley, R-Mo., which would impose liability on AI companies for harms caused by their models. Another bicameral bill to create a national commission on AI was introduced by Freedom Caucus member Rep. Ken Buck, R-Colo., along with Reps. Ted Lieu, D-Calif., and Anna Eshoo, D-Calif., and Sen. Brian Schatz, D-Hawaii.
The latest polling shows why AI safety is generating this level of bipartisan interest in Congress. Close to 80% of respondents in both parties maintain that “preventing dangerous and catastrophic outcomes from AI” should be the top concern relative to other AI policy priorities. AI doesn’t need to become Skynet to still be scary.
American voters are particularly keen to address threats posed by biological agents. More than 80% believe that AI could inadvertently cause a catastrophic event, and they favor both oversight protocols for research experiments on dangerous viruses and liability for the AI-enabled creation of biological weapons.
Equally important, voters in both parties agree on which AI issues policymakers should not prioritize.
Social justice concerns around AI register the lowest support: just 24% of respondents say that "reducing racial and gender bias in AI" should be at the top of the agenda, whereas 77% are most concerned about preventing dangerous and catastrophic risks. Public opinion is in line with the estimates of risk analysts, including AI executives, who consider uncontrolled AI one of the greatest existential threats facing humanity.
The core threat is the “alignment problem” — the prospect that AI of growing sophistication could turn against the interests of its human creators.
The Gemini mishaps show how dangerous this can be. Imagine a vastly more powerful AI that can't determine whether Hitler or repealing net neutrality was worse.
The challenge in passing AI safety legislation, however, is that powerful constituencies in the Democratic Party consider the social justice implications of AI a much higher priority than the public does, and they are willing to spend political capital on the issue.
Pressure from the left informed the Biden administration’s decision to make the advancement of “equity and civil rights” one of the eight guiding principles of the administration’s AI policy. The consequence, as Manhattan Institute senior fellow Christopher Rufo cautions, is that the precepts of critical race theory could be embedded into the country’s AI strategy.
The equity pillar of the executive order not only distracts from far more serious threats; it is a political liability that weakens the political coalition needed to pass AI safety legislation.
Just 37% favor the executive order’s endorsement of “fair practices” for the use of AI in policing and sentencing criminals.
The AI safety provisions of the Biden executive order were built on two Trump-era executive orders on AI. Yet the equity and diversity aspects of the policy led the Trump-aligned America First Legal to accuse the Biden administration of using AI policy as a “blunt weapon to advance their extreme goals” ranging from “race-based government benefits” to “criminal prosecutions.”
The left’s fixation on social justice issues could inflame the culture war over “woke AI,” undermining Congress’s ability to tackle the most consequential threats from unaligned AI.
Currently, more than 60% of voters favor the passage of AI safety legislation this year that would restrict the training of dangerous AI models and their proliferation to terrorists and other bad actors.
But this support could prove tenuous if Americans come to see AI safety policy as a Trojan horse for the woke agendas favored by Washington Democrats and their allies in the tech industry.
As the window to pass legislation shrinks, the onus is on congressional Democrats to hold the line against social justice activists and work with Republicans to pass legislation focused on the most catastrophic risks from AI.
Jared Whitley is a longtime politico who has worked in the U.S. Congress, White House and defense industry. He is an award-winning writer, named best blogger in the state by the Utah Society of Professional Journalists (2018) and best columnist by Best of the West (2016). He earned his MBA from Hult International Business School in Dubai.