A giant, superfast AI chip is being used to find better cancer drugs – MIT Technology Review

Published on November 21st, 2019


At Argonne National Laboratory, roughly 30 miles from downtown Chicago, scientists try to understand the origin and evolution of the universe, create longer-lasting batteries, and develop precision cancer drugs.

All these different problems have one thing in common: they are tough because of their sheer scale. In drug discovery, it's estimated that there could be more potential drug-like molecules than there are atoms in the solar system. Searching such a vast space of possibilities within human time scales requires powerful and fast computation. Until recently, that was unavailable, making the task pretty much unfathomable.


But in the last few years, AI has changed the game. Deep-learning algorithms excel at quickly finding patterns in reams of data, which has sped up key processes in scientific discovery. Now, along with these software improvements, a hardware revolution is also on the horizon.

Yesterday, Argonne announced that it has begun to test a new computer from the startup Cerebras that promises to accelerate the training of deep-learning algorithms by orders of magnitude. The computer, which houses the world's largest chip, is part of a new generation of specialized AI hardware that is only now being put to use.

"We're interested in accelerating the AI applications that we have for scientific problems," says Rick Stevens, Argonne's associate lab director for computing, environment, and life sciences. "We have huge amounts of data and big models, and we're interested in pushing their performance."


Currently, the most common chips used in deep learning are known as graphics processing units, or GPUs. GPUs are great parallel processors. Before their adoption by the AI world, they were widely used for games and graphics production. By coincidence, the same characteristics that allow them to quickly render pixels are also the ones that make them the preferred choice for deep learning.
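Why do pixel-pushing chips suit neural networks so well? A minimal sketch, assuming PyTorch (the article names no framework): the heart of deep learning is batched matrix multiplication, arithmetic a GPU can fan out across thousands of cores at once while the math itself stays unchanged.

```python
# Illustrative only: the same batched matrix multiplication that underlies a
# neural-network layer, run on whatever hardware is available.
import torch

batch, features, hidden = 512, 1024, 4096
x = torch.randn(batch, features)      # a batch of input vectors
w = torch.randn(features, hidden)     # the weights of one layer

y_cpu = x @ w                         # executes on a handful of CPU cores
if torch.cuda.is_available():
    y_gpu = x.cuda() @ w.cuda()       # the same multiply-adds, spread across a GPU
```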

But fundamentally, GPUs are general purpose; while they have successfully powered this decade's AI revolution, their designs are not optimized for the task. These inefficiencies cap the speed at which the chips can run deep-learning algorithms and cause them to soak up huge amounts of energy in the process.

In response, companies have raced to design new chip architectures that are specially suited for AI. Done well, such chips have the potential to train deep-learning models up to 1,000 times faster than GPUs, with far less energy. Cerebras is among a long list of companies that have since jumped to capitalize on the opportunity. Others include startups like Graphcore, SambaNova, and Groq, and incumbents like Intel and Nvidia.


A successful new AI chip will have to meet several criteria, says Stevens. At a minimum, it has to be 10 or 100 times faster than general-purpose processors when working with the lab's AI models. Many of the specialized chips are optimized for commercial deep-learning applications, like computer vision and language, but may not perform as well when handling the kinds of data common in scientific research. "We have a lot of higher-dimensional data sets," Stevens says: sets that weave together massive, disparate data sources and are far more complex to process than a two-dimensional photo.
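To make that contrast concrete, here is a hypothetical illustration; the shapes and field names below are invented, not drawn from Argonne's data. A photo is a single low-dimensional array, while one scientific record may bundle several very different sources.

```python
# Invented example of "higher-dimensional" scientific data versus a 2-D photo.
import numpy as np

photo = np.zeros((224, 224, 3))                  # height x width x RGB channels

scientific_record = {                            # one record weaving together disparate sources
    "gene_expression": np.zeros(20_000),         # one value per gene
    "imaging_volume":  np.zeros((64, 64, 64)),   # a 3-D scan
    "time_series":     np.zeros((1_000, 16)),    # sensor readings over time
    "clinical_notes":  "unstructured free text", # text that must also be processed
}
```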

The chip must also be reliable and easy to use. "We've got thousands of people doing deep learning at the lab, and not everybody's a ninja programmer," says Stevens. "Can people use the chip without having to spend time learning something new on the coding side?"

Thus far, Cerebras's computer has checked all the boxes. Thanks to the size of its chip, which is larger than an iPad and has 1.2 trillion transistors for making calculations, it isn't necessary to hook multiple smaller processors together, which can slow down model training. In testing, it has already shrunk the training time of models from weeks to hours. "We want to be able to train these models fast enough so the scientist that's doing the training still remembers what the question was when they started," says Stevens.


Initially, Argonne has been testing the computer on its cancer drug research. The goal is to develop a deep-learning model that can predict how a tumor might respond to a drug or combination of drugs. The model can then be used in one of two ways: to develop new drug candidates that could have desired effects on a specific tumor, or to predict the effects of a single drug candidate on many different types of tumors.
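The article doesn't spell out the model's architecture, so the sketch below is only a toy version of the general idea, with made-up layer sizes and feature dimensions: one branch encodes a tumor's molecular profile, another encodes a drug, and a small head combines them into a predicted response score.

```python
# Toy sketch of a tumor-drug response predictor; all dimensions are hypothetical.
import torch
import torch.nn as nn

class DrugResponseModel(nn.Module):
    def __init__(self, tumor_dim=2000, drug_dim=512):
        super().__init__()
        self.tumor_encoder = nn.Sequential(nn.Linear(tumor_dim, 256), nn.ReLU())
        self.drug_encoder = nn.Sequential(nn.Linear(drug_dim, 256), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, tumor_features, drug_features):
        # Combine the two embeddings and predict a single response score.
        z = torch.cat([self.tumor_encoder(tumor_features),
                       self.drug_encoder(drug_features)], dim=-1)
        return self.head(z)

model = DrugResponseModel()
scores = model(torch.randn(8, 2000), torch.randn(8, 512))  # a batch of 8 tumor-drug pairs
```

The same network can be read in either direction the article describes: fix the tumor and score many candidate drugs, or fix the drug and score many tumor types.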

Stevens expects Cerebras's system to dramatically speed up both development and deployment of the cancer drug model, which could involve training the model hundreds of thousands of times and then running it billions more times to make predictions on every drug candidate. He also hopes it will boost the lab's research in other areas, such as battery materials and traumatic brain injury. The former would involve developing an AI model for predicting the properties of millions of molecular combinations to find alternatives to lithium-ion chemistry. The latter would involve developing a model to predict the best treatment options. It's a surprisingly hard task because it requires processing many types of data very quickly: brain images, biomarkers, and text.
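The deployment half of that workload might look roughly like the batched screening loop sketched below; the stand-in scorer, feature sizes, and library size are placeholders, not details Argonne has published.

```python
# Hypothetical screening loop: score a library of candidate drugs against one tumor.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(2000 + 512, 1))   # stand-in for the trained response model
model.eval()

candidate_library = torch.randn(100_000, 512)     # placeholder encoded drug candidates
tumor_profile = torch.randn(1, 2000)              # the tumor of interest

scores = []
with torch.no_grad():                             # inference only, no gradients needed
    for drug_batch in candidate_library.split(4096):
        tumor_batch = tumor_profile.expand(drug_batch.size(0), -1)  # pair the tumor with each drug
        scores.append(model(torch.cat([tumor_batch, drug_batch], dim=-1)).squeeze(-1))
scores = torch.cat(scores)
shortlist = scores.topk(100).indices              # keep the 100 most promising candidates
```

Batching the candidates keeps memory bounded while still keeping the accelerator busy, which is where faster hardware pays off at this scale.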

Ultimately, Stevens is excited by the potential that the combination of AI software and hardware advancements will bring to scientific exploration. "It's going to change dramatically how scientific simulation happens," he says.

