Monday, June 16, 2025

How Novel: Novelty Indicators

Humans crave novelty. Our visual cortex is stimulated by changes in our visual field. Even infants show more interest in new sights and sounds. Curiosity doesn’t entirely distinguish Homo sapiens from other species, but we would certainly win the prize for maximizing its value. Science, art, and music wouldn’t exist without our drive for novelty. Science in particular thrives on some scientist thinking, “hmm, isn’t that interesting?” and then turning it into something that helps us better understand the universe we live in, often with practical applications.

We don't always recognize important novelty when we see it

Depending on which source one believes, somewhere between 3 million and 7 million scientific papers are published each year; whatever the number, it is growing rapidly, and impossible for a scientist in any particular field to keep up with, much less to mine for insights from other fields. Most of these papers, of course, are incremental work, building on previous research and probably not reflecting true breakthroughs. How, then, do researchers, and the people who fund them, spot the truly novel papers that may spark breakthroughs?

Enter the idea of a “novelty indicator” – and welcome to the Metascience Novelty Indicator Challenge.

The need for something to help identify scientific novelty has been recognized for some time. A 2016 paper by Wang, et al. warned that there was, in fact, a bias against such novelty. “Research which explores unchartered waters has a high potential for major impact but also carries a higher uncertainty of having impact,” the authors warn. “These findings suggest that science policy, in particular funding decisions which rely on traditional bibliometric indicators based on short-term direct citation counts and Journal Impact Factors, may be biased against ‘high risk/high gain’ novel research.”
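That line of work scores novelty from a paper’s reference list: roughly, a paper is novel when it cites combinations of journals that have rarely or never been co-cited before. Here is a toy sketch of that family of indicators, not the authors’ actual method; the journal names and the prior-pairs corpus are hypothetical, and real implementations weight pairs by how rare the combination is rather than treating it as a simple yes/no.

```python
from itertools import combinations

def novelty_score(ref_journals, prior_pairs):
    """Toy novelty score: the fraction of journal pairs in a paper's
    reference list that have never been co-cited in prior papers."""
    journals = sorted(set(ref_journals))          # canonical order for pairs
    pairs = list(combinations(journals, 2))
    if not pairs:
        return 0.0
    new = sum(1 for p in pairs if p not in prior_pairs)
    return new / len(pairs)

# Hypothetical corpus: journal pairs co-cited in earlier literature.
prior_pairs = {("Cell", "Nature"), ("Nature", "Science")}

# A paper citing a familiar mix of journals scores low...
print(novelty_score(["Nature", "Science"], prior_pairs))           # 0.0
# ...while one bridging fields never cited together scores high.
print(novelty_score(["Nature", "Phys. Rev. Lett."], prior_pairs))  # 1.0
```

The appeal of indicators like this is that they are cheap to compute at scale and available at publication time, long before citations accumulate.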

Similarly, a 2019 paper by Veugelers and Wang emphasizes:

We find that the small proportion of scientific publications which score on novelty, particularly the 1% highly novel scientific publications in their field, are significantly and sizably more likely to have direct technological impact than comparable non-novel publications. In addition to this superior likelihood of direct impact, novel science also has a higher probability for indirect technological impact, being more likely to be cited by other scientific publications which have technological impact.

The question is: how best to compute such a score?

In Nature, Dr. Benjamin Steyn, co-head of the UK Metascience Unit, laments that he has “been stumped by the fact that there are no good ways to measure novelty,” and so: “Without good indicators, researchers can’t assess the prevalence of original papers or their value in scientific progress.”

He mentions novelty scores developed by DeSci Labs and others, “none of which are foolproof.” There has to be a better way:

That’s why the UK Metascience Unit has partnered with the non-profit organization RAND Europe; the Sussex Science Policy Research Unit; and the publisher Elsevier, to launch MetaNIC (see go.nature.com/3hhsdp3) — a competition to produce and validate indicators for scientific novelty in academic papers. Running until November, MetaNIC is open to researchers all around the world.

Participants will test their algorithms against 50,000 scientific papers that will have been ranked by 10,000 researchers on their novelty. The team whose indicator best matches the humans’ assessments will win £300,000 (US$407,000).
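The announcement doesn’t specify exactly how “best matches” will be scored, but a natural way to compare an algorithmic indicator against human novelty rankings is a rank correlation such as Spearman’s. A minimal sketch, with hypothetical scores (in practice one would just call `scipy.stats.spearmanr`):

```python
def rank(values):
    """1-based average ranks, ties sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean of tied positions
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

human = [4.5, 2.0, 3.5, 1.0]   # hypothetical reviewer novelty ratings
algo  = [0.9, 0.3, 0.7, 0.1]   # hypothetical indicator scores
print(spearman(human, algo))   # 1.0 — the orderings agree perfectly
```

A score of 1.0 means the indicator orders the papers exactly as the human panel did; 0 means no relationship, and −1 a reversed ordering. Scaled to 50,000 papers, an entry that maximizes this kind of agreement is presumably what the competition is after.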

The Challenge website explains the importance:

If the global science system had responsibly used better and more timely indicators of research excellence, this could have a profound impact on the incentives of researchers, our understanding of the factors which make excellence more likely, and in turn, the pace of research progress. Better metascience indicators can help funders, governments, academic institutions, and individuals get more high-quality research out of limited resources.

As Dr. Steyn says: “Better indicators could improve our understanding of the factors that make research excellence more likely, the incentives of scientists and so the pace of scientific progress. That is worth exploring.”

It’s worth pointing out exactly what the Metascience Unit, of which he is co-head, is for. It is a branch of the UK government, and its website explains: “Metascience typically examines the institutional structures, practices and incentives explaining how researchers spend their time and the speed, direction, nature and impact of their outputs.”

To put that into practice:

All our work starts from a simple idea: that the scientific method, so powerful in so many areas of life, should be systematically and routinely applied to how we practice, fund and support science itself.
Investing in research, development and innovation is vital to UK and international economic growth and prosperity. However, it is not just the quantity of that investment that matters, but also the quality. How research is funded and practiced is critical to accelerating scientific breakthroughs and innovations, nurturing talent, and shaping research culture.

It makes me wish the U.S. had a Metascience Unit. Instead, we have DOGE, which is slashing federal scientific funding in the name of curbing “waste, fraud, and abuse,” crushing anything that can even remotely be considered “DEI,” and, while they are at it, punishing universities that President Trump is mad at. That’s no way to invest in science, to discover innovation, or to prepare for the future. If anything, it scorns novelty.

In JAMA Network, David Cutler and Edward Glaeser warn that the proposed NIH cuts are “the $8 trillion health care catastrophe.” In Forbes, John Drake, a professor at the University of Georgia, points out: “New macro-empirical research finds that every dollar invested in non-defense public R&D yields $1.40–$2.10 in economic output, and since World War II, government funding has driven roughly 20% of U.S. productivity.”  A paper from American University researchers concludes: “…budget cuts to public R&D would significantly hurt the economy in the long run, with large negative effects on GDP, investment, and government revenue. A 25 percent cut to public R&D spending would reduce GDP by an amount comparable to the decline in GDP during the Great Recession.”

So, yeah, how one cuts research, and which research gets cut, makes a difference, not just to researchers and their institutions but to all of us and the future of our country. 

I understand that federal funding isn’t unlimited and perhaps could be spent more judiciously, but arbitrary cuts are perhaps the worst way to do it. It sure seems like focusing on novelty, with its bigger potential for large impacts, could be a much better way to direct funding.

Maybe you, or a researcher you know, should sign up for the Metascience Novelty Indicator Challenge!
