This week, senators and representatives have held separate hearings regarding the perils and promises of artificial intelligence (AI).
The move signals lawmakers’ growing regulatory appetite in the wake of actions on the technology from the Biden administration.
“AI is no longer a matter of science fiction nor is it a technology confined to research labs,” said Aleksander Mądry, a computing professor at the Massachusetts Institute of Technology (MIT), in written testimony for Wednesday’s House hearing, held by the House Oversight Committee’s Subcommittee on Cybersecurity, Information Technology, and Government Innovation.
“AI is a technology that is already being deployed and broadly adopted as we speak.”
Earlier that same day, the Senate Homeland Security and Governmental Affairs Committee held its own hearing.
One of the Senate’s witnesses, Brown University Professor Suresh Venkatasubramanian, contributed to the Biden administration’s “Blueprint for an AI Bill of Rights,” released to little fanfare in October 2022.
Venkatasubramanian also praised Biden’s February 2023 executive order on racial equity.
It explicitly instructs federal agencies to “[advance] equity” when using AI systems.
Before the Biden administration acted on AI, the Trump administration, in 2019, launched the American Artificial Intelligence Initiative.
Through his fiscal year 2021 budget proposal, Trump also sought to double federal research and development spending on nondefense AI.
In his testimony before the House, Eric Schmidt, the former CEO of Google, laid out three AI-related expectations of platforms that he believes everyone would find acceptable.
“First, platforms must, at minimum, be able to establish the origin of the content published on their platform,” he said in written testimony.
“Second, we need to know who specifically is on the platform representing each user or organization profile.
“Third, the site needs to publish and be held accountable to its published algorithms for promoting and choosing content.”
Rep. Nancy Mace (R-SC), who chairs the House’s cybersecurity subcommittee, illustrated the power of new AI innovations in strikingly direct fashion.
She delivered an opening statement that she revealed was written by OpenAI’s ChatGPT platform.
ChatGPT is an example of the burgeoning generative AI technologies that can convincingly mimic human writing, visual art, and other forms of expression.
“We need to establish guidelines for AI development and use,” said Mace-as-ChatGPT.
“We need to establish a clear legal framework to hold companies accountable for the consequences of their AI systems.”
Her AI-written statement also warned that AI could “be used to automate jobs, invade privacy, and perpetuate inequality.”
The subcommittee’s ranking member, Rep. Gerry Connolly (D-VA), noted that the federal government laid much of the groundwork for the Information Age half a century ago, suggesting there may be a precedent for more intensive federal involvement today.
The predecessor to the Internet, the U.S. Advanced Research Projects Agency Network (ARPANET), was the work of the U.S. Department of Defense, thanks in large part to pioneering computer scientist J.C.R. Licklider.
Speaking before the Senate, Jason Matheny of the RAND Corporation described the key national security challenges presented by AI.
Those include “the potential applications of AI to design pathogens that are much more destructive than those found in nature,” according to his written testimony.
At the state level, AI-related legislation has emerged across the country over the past half-decade.
In 2019, Illinois broke new ground with the Artificial Intelligence Video Interview Act.
The law requires employers who use AI to analyze video interviews of job applicants to disclose that fact before the interview.
A 2022 amendment requires employers to gather data on the race and ethnicity of such interviewees so as to identify any racial bias in subsequent hiring.
Similar concerns were voiced by the Democrats’ witness at the House cybersecurity hearing, the University of Michigan intermittent lecturer and AI ethicist Merve Hickok.
Hickok’s prescriptions? Among other things, additional hearings and a possible “Algorithmic Safety Bureau.”
“You need to hear from those who are falsely identified by facial recognition [and those] wrongly denied credit and jobs because of bias built in algorithmic systems,” she said in written testimony.
Meanwhile, others worry about the leftward skew of ChatGPT.
Venture capitalist Marc Andreessen has warned about the ideological dimension of current debates over AI and its hazards.
“It’s not an accident that the standard prescriptions for putative AI risk are ‘draconian repression of human freedom’ and ‘free money for everyone,’” Andreessen wrote on Twitter on March 5, 2023.
“The outcome of the AI safety argument has to be global authoritarian crackdown on a level that would make Stalin blush. It’s the only way to be sure,” he added in a separate tweet.