The financing of companies and investments has a central role in modern society. An entire academic discipline – finance – is devoted to the topic, and it accounts for a significant share of national economies.
Despite its enormous long-run impact on our technological capabilities, however, the way we allocate funding in science is comparatively unexamined. We know our current system isn’t perfect: one can draw up a long list of breakthrough scientific ideas whose arrival was delayed by years due to challenges obtaining funding (and one cannot list the ideas that never arrived, because they never secured funding!).
And there is good reason to think our current imperfect system isn’t simply the best that’s feasible either. Part of the reason is that science is – rightfully – not a market: it’s not easy to get rich by funding science better than your peers. But if it were, we might see a lot more competition and attempts at innovation in this space.
Another challenge is that evaluating the quality of science often takes time and expertise. That means individuals and organisations don’t get great feedback on the quality of their decision-making, which makes it very hard to improve, even if they want to. With weak incentives to improve, and weak information to direct us towards better ways, it’s unlikely we lucked into the best possible arrangement.
As with finance, at the heart of the scientific funding enterprise are individuals who make decisions about what to fund (often in conjunction with peer review, but enjoying some discretion too). I’ll call them grant makers here. If we can find ways to make these individuals more effective at their jobs, the rewards to science could be enormous. We need a new metascience research agenda focused on the best ways to select, train, and incentivise our grant makers. With information in hand, we can then use the levers of democratic civil society to reform our public and private grant makers.
Let’s start with how we select grant makers. What kinds of traits are associated with great grant-making? Generalist or specialist knowledge? Do early career or late career scientists make better, or even different, kinds of grants? What about people from different backgrounds: socioeconomic, but also work experience? Or is it all about innate ‘taste’? If the latter, are there ways to detect it in how we screen prospective grant makers? Could we ask applicants to make predictions about the outcomes of different funded grant proposals, or about which projects will successfully replicate?
Or perhaps we should encourage more scientists to rotate through temporary stints as grant makers, while the outcomes of their projects are tracked. Years later, perhaps we can identify people with a talent for spotting overlooked scientific opportunities, and recruit them.
Next, we can turn to how we enable science funders to get better at their job. To start, are there training programmes that work? Mentoring programmes? Beyond that, there is plenty of scope to improve the quality of feedback grant makers receive. Naturally, one can track the ultimate outcomes of grants. The trouble is, this can take many years to play out, too slow to help a grant maker improve. But there are other options for generating more rapid feedback on a grant maker’s judgement.
Prediction markets, whether internal or external, have been used in some settings to help generate information on events that are distant in time or uncertain, and there have been some efforts to use them in science as well. My own employer, Open Philanthropy, has adopted a simpler strategy for getting its grant makers feedback, by asking them to make a series of probabilistic forecasts associated with every grant. Over many grants, a grant maker can begin to see if they consistently misjudge elements of a grant and adapt accordingly.
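To illustrate how a track record of probabilistic forecasts can become quantitative feedback, here is a minimal sketch (my own, not a description of Open Philanthropy's actual scoring) using the Brier score, a standard accuracy measure for probability forecasts; the grant names and numbers are hypothetical:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.
    Lower is better; always guessing 50% scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical grant maker: forecast probability that each grant
# hits its stated milestone, paired with what actually happened.
forecasts = [0.9, 0.7, 0.8, 0.6, 0.9]
outcomes  = [1,   0,   1,   1,   0]

print(round(brier_score(forecasts, outcomes), 3))  # prints 0.302
```

Over enough grants, a persistently high score on one category of forecast (say, hiring milestones) is exactly the kind of signal a grant maker could use to recalibrate.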
Lastly, the way we incentivise grant makers matters just as much. In finance, if you make a contrarian bet and win, you get rich. In science, the upside is small, but the professional downsides may be real. Among career grant makers, is a larger promotion feasible for making a contrarian bet that pays off? Better information on how grants perform, discussed above, would also make stronger incentives possible.
At one extreme, some portion of compensation for grant makers could be tied to the eventual performance of funded science, as judged by later scientists. On the other extreme, we could merely try to provide some reputational incentive to make bold calls, for example by annually recognising the best out-of-consensus grant maker of the last 20 years. Would these kinds of incentives matter in science? Finally, it’s also possible to provide stronger incentives to peer reviewers: perhaps, years later, the reviewers whose scores are most highly correlated with the performance of grants could be given additional research grants of their own, as a prize for exceptional peer reviewing.
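Identifying the "reviewers whose scores are most highly correlated with the performance of grants" could be as simple as computing a rank correlation per reviewer. A minimal sketch, with invented reviewer names, scores, and outcome figures purely for illustration:

```python
def rank(values):
    """1-based average ranks, with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical: two reviewers scored the same five grants; years later
# we have an outcome measure (e.g. citations) for each grant.
reviewer_scores = {"reviewer_a": [4, 2, 5, 1, 3],
                   "reviewer_b": [3, 5, 1, 4, 2]}
outcome = [8, 3, 9, 1, 5]
for name, scores in reviewer_scores.items():
    print(name, round(spearman(scores, outcome), 2))
```

Here reviewer_a's scores rank the grants exactly as the outcomes did (correlation 1.0), while reviewer_b's were close to inverted (-0.8) – the sort of gap that, across many grants, might justify a peer-review prize.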
We don’t really know which of these reforms would improve the quality of our grant-making, which would degrade it, and which don’t matter at all. And that’s the point: it’s past time we found out.
This article originally appeared in Operation Innovation, an essay collection from The Entrepreneurs Network about how to build a more innovative economy.