We’re not using AI to its fullest human potential

We should be living in a golden age of science.

For centuries, the scientific method was defined by two pillars: theory and experiment. Now we live in the age of artificial intelligence, which adds advanced computation as a vital third pillar. According to leading scientific bodies, without it, discoveries of the past decade “would have been impossible”: the detection of the Higgs boson, the observation of gravitational waves, and the discovery of new drugs like halicin, which can kill strains of bacteria resistant to all known antibiotics.

But despite these advances, scientific innovation today too often means finding new use cases for existing technologies or refining previous advances, rather than creating entirely new fields of discovery.

In daily life, artificial intelligence is ubiquitous in our homes, from Alexa buying our groceries with a simple command, to Netflix anticipating what will entertain us through algorithmic ingenuity. But we need a lot more of it in our laboratories—moving science forward for public benefit, and helping us to solve the hardest problems of our time, from climate change and poverty to healthcare and sustainable energy.

This can only happen by accelerating the next global scientific revolution: supporting the broad and deep incorporation of AI techniques into scientific and engineering research. Because while AI innovation has been substantial, its adoption in scientific and engineering research has been neither ubiquitous, fast nor interdisciplinary.

Why is it that, despite remarkable advances in AI, it is not yet helping us consistently make the kind of breakthroughs that will expand the frontiers of our knowledge, and accelerate the process of scientific discovery?

There are two main reasons. First, while plenty of money is already pouring into AI projects at universities, these funds tend to be allocated to work within particular disciplines, such as AI research in computer science, rather than to work that builds bridges between the natural sciences, computer science and engineering.

At this moment, the use of AI tools in the scientific and engineering research ecosystem is still in the early-adopter stage, rather than being a default part of researchers’ toolkits. We can’t expect scientists to embrace the capacities of AI without appropriate training. A researcher hoping to use AI will need to acquire not only a deep understanding of a particular problem—such as antibiotic resistance—but also the knowledge of which data, and what representation of that data, will be useful for training an AI model to solve it.

Second, the incentives for young scientists to attempt truly bold research simply don’t exist. Much postdoctoral funding is tied to specific research grants and expected results within disciplinary boundaries, so postdoctoral fellows rarely have the freedom to take risks on new techniques.

So what can be done to change the status quo? We believe any meaningful response must rest on three principles: training for AI in science, equitable access to AI tools, and the responsible, ethical application of those tools.

First, we need rigorous and interdisciplinary training for young scientists using AI. AI’s failures can largely be attributed to unrealistic expectations about AI tools, errors in their use and the poor quality of data used in their development. Scientists across disciplines, from all backgrounds, will need AI fluency to prevent such missteps.

Postdoctoral research is a particularly opportune moment in a scientist’s career to receive this training. This may sound counterintuitive, as conventional academic pressures dictate the swift publishing of papers after a Ph.D. degree is earned, before moving on to the next job. But this is actually the perfect time to broaden research horizons instead of falling into the orthodoxy of hyperspecialization. Instead of being rushed to prove themselves quickly, postdocs should be given the time and the support to try something new.

Second, we have to ensure equitable access to AI tools. According to a recent National Artificial Intelligence Research Resource report, equitable participation in cutting-edge AI research is limited by gaps in access to the necessary data and computational power. Leaving out scientists from historically underrepresented and underserved backgrounds “limits the breadth of ideas incorporated into AI innovations and contributes to biases and other systemic inequalities.”