My heart says I should write about Uvalde, but my head says, not yet; there are others more able to do that. I’ll reserve my sorrow, my outrage, and any hopes I still have for the next election cycle.
Instead, I’m turning to a topic that has long
fascinated me: when and how are we going to recognize when artificial
intelligence (AI) becomes, if not human, then a “person”? Maybe even a doctor.
What prompted me to revisit
this question was an article in Nature by Alexandra George and Toby Walsh: Artificial
intelligence is breaking patent law.
Their main
point is that patent law requires the inventor to be “human,” and that concept
is quickly becoming outdated.
It turns out that there is a test case about this issue
which has been winding its way through the patent and judicial systems around
the world. In 2018, Stephen Thaler, PhD,
CEO of Imagination Engines, started
trying to patent some inventions “invented” by an AI system called DABUS (Device
for the Autonomous Bootstrapping of Unified Sentience). His legal team submitted patent applications
in multiple countries.
It has not gone well.
The article notes: “Patent registration offices have so far rejected the
applications in the United Kingdom, United States, Europe (in both the European
Patent Office and Germany), South Korea, Taiwan, New Zealand and Australia…But
at this point, the tide of judicial opinion is running almost entirely against
recognizing AI systems as inventors for patent purposes.”
The only “victories” have been limited. Germany offered to issue a patent if Dr.
Thaler was listed as the inventor of DABUS.
An appeals court in Australia agreed AI could be an inventor, but that
decision was subsequently
overturned. That court felt that the
intent of Australia’s Patent Act was to reward human ingenuity.
The problem, of course, is that AI is only going to get more intelligent, and will increasingly “invent” more things. Laws written to protect inventors like Eli Whitney or Thomas Edison are not going to work well in the 21st century. The authors argue:
In the absence of clear laws setting out how to assess AI-generated inventions, patent registries and judges currently have to interpret and apply existing law as best they can. This is far from ideal. It would be better for governments to create legislation explicitly tailored to AI inventiveness.
Those aren’t the only issues that need to be reconsidered. Professor George notes:
Even if we do accept that an AI system is the true inventor, the first big problem is ownership. How do you work out who the owner is? An owner needs to be a legal person, and an AI is not recognized as a legal person.
Another problem with ownership when it comes to AI-conceived inventions is that, even if you could transfer ownership from the AI inventor to a person, whom would it go to? Is it the original software writer of the AI? Is it a person who has bought the AI and trained it for their own purposes? Or is it the people whose copyrighted material has been fed into the AI to give it all that information?
Yet another issue is
that patent law typically requires that patents be “non-obvious” to a “person
skilled in the art.” The authors point out: “But
if AIs become more knowledgeable and skilled than all people in a field, it is
unclear how a human patent examiner could assess whether an AI’s invention was
obvious.”
--------------
I think of this issue particularly because of a new study, in which MIT and Harvard researchers developed an AI that could
recognize patients’ race by looking only at imaging. Those researchers noted: “This finding is striking as this task is
generally not understood to be possible for human experts.” One of the co-authors told
The Boston Globe: “When my graduate students showed me some of the results that were
in this paper, I actually thought it must be a mistake. I honestly
thought my students were crazy when they told me.”
Explaining what an AI did, or how it did it, may simply be, or may become, beyond our ability to understand. This is the infamous “black box” issue, which
has implications not only for patents but also liability,
not to mention teaching or reproducibility.
We could choose to only use the results we understand, but that seems
pretty unlikely.
Professors George and Walsh propose three steps for
the patent problem:
- Listen and Learn:
Governments and applicable agencies must undertake systematic investigations of
the issues, which “must go back to basics and assess whether protecting
AI-generated inventions as IP incentivizes the production of useful inventions
for society, as it does for other patentable goods.”
- AI-IP Law: Tinkering
with existing laws won’t suffice; we need “to design a bespoke form of IP known
as a sui generis law.”
- International Treaty:
“We think that an international treaty is essential for AI-generated
inventions, too. It would set out uniform principles to protect AI-generated
inventions in multiple jurisdictions.”
The authors conclude: “Creating bespoke law and an
international treaty will not be easy, but not creating them will be worse. AI
is changing the way that science is done and inventions are made. We need
fit-for-purpose IP law to ensure it serves the public good.”
It is worth noting that China, which aspires to become
the world leader in AI, is
moving fast on recognizing AI-related inventions.
------------
Some experts posit that AI is, and always will be, simply a tool; we’re still in control, and we can choose when and how to use it. It’s clear that AI can indeed be a powerful tool, with applications in almost every field, but maintaining that it will only ever be a tool seems like wishful thinking. We may still be at the stage when we’re
supplying the datasets and the initial algorithms, and even usually
understanding the results, but that stage is transitory.
AIs are inventors, just as AIs are now artists, and soon will be doctors, lawyers, and engineers, among other professionals. We don’t have the right patent law for them to be inventors, nor do we have the right licensing or liability frameworks for them to practice in professions like medicine or law. Do we think a medical AI is really going to
go to medical school or be licensed/overseen by a state medical board? How very 1910 of us!
Just because AI aren’t going to be human doesn’t mean
they aren’t going to be doing things only humans once did, nor that we shouldn’t
be figuring out how to treat them as persons.