There’s some good news/bad news about AI regulation. The good news is that this past weekend California Governor Gavin Newsome vetoed the controversial S.B. 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. The bad news is that he vetoed S.B. 1047.
Or maybe
it’s the other way around.
Regulating AI is tricky. Credit: NCSL
Honestly,
I’m not sure how I should feel about the veto. Smarter, more knowledgeable people
than me had lined up on both sides. No legislation is ever perfect, of course, and
it’s never possible to fully anticipate the consequence of most new laws, but a
variety of polls indicate that most Americans support some regulation of AI.
“American
voters are saying loud and clear that they don’t want to see AI fall into the
wrong hands and expect tech companies to be responsible for what their products
create,” said Daniel
Colson, Executive Director of the Artificial Intelligence Policy Institute.
“Voters are concerned about AI advancement—but not about the U.S falling behind
China; they are concerned about how powerful it can become, how quickly it can
do so and how many people have access to it.”
Credit: AIPI |
S.B. 1047
would have, among other things, required safety testing of large AI models
before their public release, given the state the right to sue AI companies for
damages caused by their AI, and mandated a “kill switch” in case of catastrophic
outcomes. Critics claimed it was too vague, only applied to large models, and,
of course, would stifle innovation.
In his statement
explaining his veto, Governor Newsome pointed out the unequal treatment of the
largest models and “smaller, specialized” models, while stressing that action
is needed and that California should lead the way. He pointed out that
California has already taken some action on AI, such as for deepfakes, and punted
the issue back to the legislature, while promising to work with AI experts on
improved legislation/regulation.
The bill’s
author, Senator Scott Wiener, expressed
his disappointment: “This veto is a setback for everyone who believes in
oversight of massive corporations that are making critical decisions that
affect the safety and welfare of the public and the future of the planet.” Moreover,
he added: “This veto leaves us with the troubling reality that companies aiming
to create an extremely powerful technology face no binding restrictions from
U.S. policymakers, particularly given Congress’s continuing paralysis around
regulating the tech industry in any meaningful way.”
Indeed, as
on most tech issues, Congress has been largely missing in action. “States and
local governments are trying to step in and address the obvious harms of A.I.
technology, and it’s sad the federal government is stumped in regulating it,” Patrick
Hall, an assistant professor of information systems at Georgetown University, told
The New York Times. “The American public has become a giant
experimental population for the largest and richest companies in world.”
I don’t
know why we’d expect any more from Congress; it’s never gotten its hands around
the harms caused by Facebook, Twitter, or Instagram, and the only reason it
took any action against TikTok was because of its Chinese parent company. It may
take Chinese AI threatening American for Congress to act.
As was
true with privacy, the European Union was quicker to take action, agreeing
on regulation – the A.I.
Act – last year, after debating it some three years. That being said, the
Act won’t be in effect until August 2025, and the details are still
being drafted. Meanwhile, big tech companies – mostly American – are
working to weaken it.
So it
goes.
Summary of EU AI Act Credit: Analytics Creator |
First,
agencies must begin to understand the landscape of AI risks and harm in their
regulatory jurisdictions. Collecting data on AI incidents — where AI has
unintentionally or maliciously harmed individuals, property, critical
infrastructure, or other entities — would be a good starting point.
Second,
agencies must prepare their workforces to capitalize on AI and recognize its
strengths and weaknesses. Developing AI literacy among senior leaders and staff
can help improve understanding and more measured assessments of where AI can
appropriately serve as a useful tool.
Third
and finally, agencies must develop smart, agile approaches to public-private
cooperation. Private companies are valuable sources of knowledge and expertise
in AI, and can help agencies understand the latest, cutting-edge advancements.
Corporate expertise may help regulators overcome knowledge deficiencies in the
short term and develop regulations that allow the private sector to innovate
quickly within safe bounds.
Similarly,
Matt Keating and Malcolm Harkins, writing
in CyberScoop, warn: “Most existing tech stacks are not equipped for
AI security, nor do current compliance programs sufficiently address AI models
or procurement processes. In short, traditional cybersecurity practices will
need to be revisited and refreshed.” They urge that AI developers build with
security best practices in mind, and that organizations using AI “should adopt
and utilize a collection of controls, ranging from AI risk and vulnerability
assessments to red-teaming AI models, to help identify, characterize, and
measure risk.”
In the absence
of state or federal legislation, we can’t just throw our hands up and do
nothing. AI is evolving much too fast.
-----------
There are
some things that I’d hope we can agree on. For example, our images, voices, and
other personal characteristics shouldn’t be allowed to be used/altered by AI. We
should know what information is original and what is AI-generated/altered. AI
shouldn’t be used to ferret out even more of our personal information. We
should be careful about to whom we sell/license it to, and we should be
hardening all of our technology against the AI-driven cyberattacks that will, inevitably,
come. We need to determine who is responsible, how, for which harms.
And we
need to have a serious discussion about who benefits from AI. If AI is used to
make a handful of rich people even richer, while costing millions of people
jobs, that is a societal problem that we cannot just ignore – and must not
allow.
Regulating
a new technology, especially a world-changing one like AI, is tricky. Do it too
soon/too harsh, and it can deter innovation, especially while other
jurisdictions don’t impose them. Do it too late/too lightly, and, well, you get
social media.
There’s
something important we all can do. When voting this fall, and in every other
election, we should be asking ourselves: is this candidate someone who understands
the potentials and perils of AI and is prepared to get us ready, or is it
someone who will just try to ignore them?
No comments:
Post a Comment