I must admit, for several years now I’ve worried that my beloved Wikipedia could not survive in an AI era. Honestly, it’s hard to imagine that in a few years we all won’t just ask our AI assistant/overlord when we want information, although AI’s tendency to hallucinate and spit out a totally plausible mix of truth and make-believe is a bar that will have to be cleared first.
Credit: Wikipedia
The good news is that AI doesn’t seem to be winning this particular battle yet. The bad news is that the war on “woke” may be a bigger threat, at least in the short term.
A new paper from researchers at King’s College London looked at the impact of ChatGPT on Wikipedia engagement, and offered cautious hope: “We find no evidence of an overall decline in Wikipedia engagement across the four metrics studied. Instead, page views and visitor numbers increased in the period following ChatGPT’s launch.” They did, however, find slower growth in areas where ChatGPT was available.
The authors cite several studies that indicate that, to date, ChatGPT responses are not viewed as favorably as Wikipedia’s, and point out that Wikipedia readers are not keen to see generative AI summaries of its articles. They note that ChatGPT’s capabilities are evolving, and cite other research that found that up to 5% of newly created articles in the English Wikipedia were written using generative AI tools. AI may be coming, but it’s not quite ready for Wikipedia-level prime time yet.
The authors see other risks to Wikipedia from AI. Elena Simperl, Professor of Computer Science at King’s and Co-Director of the King’s Institute for Artificial Intelligence, said:
Our work did not confirm the most alarmist scenario, but we’re not out of the woods yet. AI developers are letting their scrapers loose on Wikipedia to train them on high quality data, pushing up traffic to levels where Wikipedia’s servers are struggling to keep up. Generative AI summaries are also using Wikipedia’s data in web searches but not crediting sources, siphoning web traffic away while borrowing the platform’s work.
For free services like this, no-one stops to ask how it’s being paid for – and now Wikipedia is having to make the tough decision of where to allocate their limited resources to deal with this. It’s vital as a community we take steps to protect this important platform, and we hope to turn our work into a monitoring tool where the community can track how AI is impacting Wikipedia.
Postdoc and first author of the study Neal Reeves suggests there are steps available to protect Wikipedia. “Ultimately, we need a new social contract between AI companies and providers of high-quality data like Wikipedia where they retain more power over their material, while still allowing for their data to be used for training purposes. Collaboration, like that seen in programmes like MLCommons, is needed to reach across the aisle and ensure that the next generation of AI models are trained well, but in a way that doesn’t destroy one of the free internet’s greatest resources.”
Speaking of destroying great resources, Elon Musk has decided that, as he found with Twitter, Wikipedia is too woke, and has announced his AI rival to it: Grokipedia. The vision: “Grokipedia is going to be the world’s biggest, most accurate knowledge source, for humans and AI with no limits on use.”
The announcement rambles on:
With Grok, Grokipedia aims for maximum truth through first principles and physics. It replaces partially masked evidences of how legacy media operates, rewriting with complete accurate context that cuts through the BS. this will combat the evil organizations and the evil minds that operating under the hood and who’ve poisoned minds for decades with endless fake news and distorted narratives through legacy media and Wikipedia, causing immense harm to young minds and manipulated the world long enough.
I.e., if you liked how Elon “fixed” Twitter, you may like Grokipedia. Of course, if you are not a fan of X, or have found Grok to be underwhelming and, at times, scary, Grokipedia may not be for you.
Credit: DogeDesigner
Meanwhile, Wikipedia co-founder Larry Sanger has been pushing his own list of reforms:
- End decision-making by “consensus.”
- Enable competing articles.
- Abolish source blacklists.
- Revive the original neutrality policy.
- Repeal “Ignore all rules.”
- Reveal who Wikipedia’s leaders are.
- Let the public rate articles.
- End indefinite blocking.
- Adopt a legislative process.
Mr. Sanger believes Wikipedia doesn’t allow certain right-wing websites (think Fox News) as sources, and says:
“What I can tell you is that over the years, conservatives, libertarians, were just pushed out. There is a whole…army of administrators, hundreds of them, who are constantly blocking people…that they have ideological disagreements with.”
Tucker Carlson had Mr. Sanger on his podcast last week, and pronounced: “Wikipedia shapes America. And because of its importance, it’s an emergency, in my opinion, that Wikipedia is completely dishonest and completely controlled on questions that matter.”
Tucker Carlson and Larry Sanger. Credit: Tucker Carlson Podcast
Adding to the pile-on, White House AI and crypto czar David Sacks has suggested Wikipedia is “hopelessly biased,” alleging that an “army of left-wing activists maintain the bios and fight reasonable corrections.” Politico reported that Ed Martin, then D.C.’s interim U.S. attorney appointed by Trump, and the House Oversight Committee have also gone after Wikipedia for similar reasons.
If you are someone who thought that the Biden Administration, the mainstream media, and social media platforms were right to try to get credible information out during the height of the COVID pandemic, this will all sound frighteningly familiar to you. If you are someone who thinks every point of view should be heard equally and that RFK Jr. makes a lot of sense, then you have your new target.
Honestly, if Wikipedia is going to go down, I’d rather it be at the hands of a top-notch AI than for it to be smeared as “Wokepedia” and drowned in disinformation. But I’m not sure I’ll have that choice.