
July 05, 2017

AI is Changing the Face of Medical Treatment


When you hear the term “artificial intelligence,” you might picture the evil AI overlords of The Matrix that harvest humans for energy, or the replicants in Blade Runner relegated to dangerous off-world labor. These portrayals of AI are certainly entertaining, but Hollywood sci-fi has obscured the practical applications of AI in the real world.

Outside the movies, artificial intelligence serves a far more practical purpose. Advanced AI technologies not only solve complex problems and drive efficiencies but also improve our lives.

One very practical application of AI is its integration into healthcare to help diagnose and treat patients more accurately and easily.

Here are several ways AI is finding its way into the field of healthcare:

1. AI generates personalized care for cancer patients

Treating cancer involves a complex battery of tests and expert analysis to determine the best course of action. Historically, these treatment plans have been developed with a “one-size-fits-all” approach, without much consideration given to the genomic drivers of the individual’s particular type of cancer.

With new advancements in personalized medicine and cognitive computing, however, treatment therapies can now be tailored to a patient based on their individual genomic profile.

Two companies improving efficiency and decision-making in this area are Avera Health and Viviphi Ltd.

In collaboration with Avera Health, Viviphi Ltd. has launched an AI platform called PrecisionPlan. This cognitive computing platform gathers data from electronic medical records and labs, along with patient-specific genomic information. It then analyzes the data and generates a comprehensive, actionable treatment plan that helps the healthcare practitioner make faster decisions.
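Viviphi has not published PrecisionPlan’s internals, but a data-aggregation pipeline of this general shape can be sketched in a few lines of Python. Everything below is hypothetical: the record fields, the knowledge-base format, and the build_treatment_plan function are illustrative assumptions, not PrecisionPlan’s actual design.

# Hypothetical sketch of a genomics-aware treatment pipeline.
# Field names and scoring rules are invented; they are not PrecisionPlan's.

def build_treatment_plan(ehr_record, lab_results, genomic_variants, knowledge_base):
    """Aggregate patient data, then rank therapies against a knowledge base."""
    profile = {
        "diagnosis": ehr_record["diagnosis"],
        "biomarkers": {test["name"]: test["value"] for test in lab_results},
        "variants": set(genomic_variants),  # e.g. {"EGFR L858R", "TP53 R175H"}
    }
    # Score each candidate therapy by how many of its genomic targets
    # appear in this patient's variant profile.
    ranked = sorted(
        knowledge_base,
        key=lambda therapy: len(profile["variants"] & set(therapy["targets"])),
        reverse=True,
    )
    return {"profile": profile, "recommended": ranked[:3]}

In a real system, the simple ranking step would be replaced by trained models and curated clinical evidence; the point of the sketch is only that the plan is driven by the patient’s own genomic profile rather than a one-size-fits-all protocol.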

2. AI is making medical imaging available in developing countries

Bay Labs, based in San Francisco, CA, is dedicated to making ultrasound imaging accessible across the world. The company has developed software to automatically read ultrasounds and detect conditions such as rheumatic heart disease, which affects many children in developing countries.

This AI technology removes much of the expense formerly required to diagnose with ultrasound. Developing countries that have access to imaging machines often lack qualified technicians to interpret the results, so software that can interpret them automatically and accurately fills a critical gap.
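Bay Labs has not published its model, but automated interpretation of this kind is typically framed as a convolutional neural network that classifies each ultrasound frame. The following Keras sketch shows the general technique; the architecture, input size, and class labels are assumptions for illustration, not Bay Labs’ software.

# Illustrative CNN for labeling echocardiogram frames.
# Layers and classes are assumptions, not Bay Labs' actual model.
from tensorflow.keras import layers, models

def build_classifier(num_classes=2):  # e.g. normal vs. rheumatic heart disease
    return models.Sequential([
        layers.Input(shape=(224, 224, 1)),   # one grayscale ultrasound frame
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_classifier()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(frames, labels, ...) would then train on expert-labeled scans.

Once trained, a model like this can label new scans in seconds on inexpensive hardware, which is what makes interpretation feasible without an on-site specialist.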

Is automated interpretation as good as human interpretation? Research says it is – in fact, sometimes it’s better.

Artificial intelligence has caught up to the human brain

There has been considerable debate over the years about whether computer intelligence can ever rival human intelligence. Until recently, computers still lagged behind.

The software developed by Bay Labs uses a deep learning process of the kind that has caught up with primates in image recognition experiments.

For over a decade, MIT postdoctoral researcher Charles Cadieu has been running image recognition tests that pit computers against primates. The tests measure how quickly and accurately the primates can process images shown to them on a screen, and their results are compared with the computer’s.

Since he began these tests, the primates had always outperformed the computers by at least a factor of ten. In recent tests, however, Cadieu says the computers have caught up and are nearly equivalent to the primate brains.

In 2013, the artificial networks matched, and in some cases outperformed, the primate neurons, suggesting that machines can now do what we once thought only biological brains could do.

The deep learning process involved is no different from the way computers learn to recognize speech.

3. AI can identify diabetic retinopathy

Google has also been working with doctors to develop AI that can identify diabetic retinopathy and prevent diabetics from going blind. By analyzing retinal photos, the software can identify retinopathy about as well as a human ophthalmologist.

Google’s AI system for identifying diabetic retinopathy has performed quite well. The National Institutes of Health recommends an accuracy rate of 80 percent, and Google’s AI system achieved an accuracy rate above 90 percent.
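To make those figures concrete, here is a small Python sketch of how an accuracy rate is computed and checked against the 80 percent benchmark. The labels and predictions are invented for illustration; they are not data from Google’s study.

# Toy accuracy check against the cited 80 percent benchmark.
def accuracy(predictions, ground_truth):
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    return correct / len(ground_truth)

# Invented example data: 1 = retinopathy present, 0 = absent.
truth = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
preds = [1, 0, 1, 1, 0, 1, 1, 0, 1, 0]

rate = accuracy(preds, truth)
print(f"Accuracy: {rate:.0%}")          # prints "Accuracy: 90%"
print("Meets benchmark:", rate >= 0.80)

In clinical evaluations, plain accuracy would be supplemented by sensitivity and specificity, since missing a case of retinopathy is costlier than a false alarm.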

This AI technology is similar to the face detection Google performs when images are uploaded to its Vision API.

Just as AI can interpret ultrasounds for people in developing countries, the purpose of using AI to automatically identify retinopathy is to bring screening services to places where they are not readily available.

Why AI interpretation is the next step in healthcare

AI brings great benefits to the interpretation of results, not only by removing bias but also by adding clinical intelligence that helps aggregate data to improve decision-making. Humans naturally carry bias when interpreting, for example, facial expressions (often rooted in culture and past experience), and that bias creates the potential for error. AI systems can be trained to recognize images accurately, without bias or prior experience clouding the interpretation of the data. In the case of cancer treatment, meanwhile, the data analysis required to develop a meaningful individualized treatment plan can only be achieved with technology as a support system. The result? Better patient care, more accurate treatment, and improved outcomes.

Edited by Alicia Young