So, how do I see the future of healthcare using AI? Well, let's just face it: AI is set to transform medicine and, in some areas, even replace human medical workers. Every year brings new and more advanced solutions. This offers a whole slew of advantages, one of the most important being a shorter time to diagnosis, which lets medical workers better prioritize patient cases.
There is already a good number of successful developments. AI and deep learning can analyze far more factors and cases than human health workers can. To be more precise, we can already use AI for genome research, drug development, and medical imaging. AI-based systems can learn, analyze large amounts of information, and make decisions much faster than humans.
Wow, isn't it just amazing? But what does it mean, and is it really so? I suggest reviewing some of the recent advancements and startups to figure it out!
In medical practice, there is an important thing called medical intuition. When a patient visits a doctor, the doctor takes one look, asks a couple of questions, and already knows what is going on and what disease the patient may have.
Let's look at this from a data analysis perspective. So, what is medical intuition? It is the doctor using a built-in neural network. Having analyzed a sufficient number of cases, a doctor is able, consciously or not, to pick up additional factors that narrow down the solution space. From there, the doctor arrives at hypotheses, which become candidate diagnoses.
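To make that concrete, here is a toy sketch in Julia of intuition-as-scoring. Every symptom weight below is invented and real diagnostic models are vastly more complex, but it shows how observed findings can shrink a hypothesis space:

```julia
# Hypothetical symptom weights per candidate diagnosis (all values made up).
const FEATURES = Dict(
    "flu"         => Dict("fever" => 0.9, "cough" => 0.7, "rash" => 0.05),
    "measles"     => Dict("fever" => 0.8, "cough" => 0.3, "rash" => 0.95),
    "common cold" => Dict("fever" => 0.3, "cough" => 0.8, "rash" => 0.02),
)

# Score each diagnosis by how well it explains the observed findings,
# then keep only hypotheses close enough to the best-scoring one.
function hypotheses(observed::Vector{String}; keep = 0.5)
    scores = Dict(d => prod(get(f, s, 0.01) for s in observed)
                  for (d, f) in FEATURES)
    best = maximum(values(scores))
    sort([d => s for (d, s) in scores if s >= keep * best]; by = last, rev = true)
end

hypotheses(["fever", "rash"])   # => only "measles" => 0.76 survives
```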
Is AI better than medical intuition?
Whether we like it or not, medical intuition is a pattern-matching process that a machine can perform at least as well. What is more, some diagnoses can be formulated better by machines. Let's take a look at the most prominent cases.
Ada. This service communicates directly with the patient and gives personal recommendations: the app asks about symptoms and complaints and, in response, suggests which doctor to visit or offers a remote consultation with a specialist.
Lunit. About 20% of lung and breast cancers go unnoticed during diagnosis. Lunit is developing deep learning and 3D visualization software to help detect hard-to-diagnose diseases such as lung cancer. Its use raises the probability of detecting cancer to 83–86%, and over time the technology will improve and accuracy will climb even higher.
Sense.ly. This program monitors the condition of people who have recently completed long-term treatment or who suffer from chronic diseases. It was developed by a San Francisco startup that initially raised $8 million in venture capital. The application structures data about a person's condition, sends it to a specialist, and gives recommendations. The system can also remind patients when to take their medications and when it is time to visit a doctor.
Insilico Medicine. Creating new drugs is one of the most important tasks of modern medicine. Developing a single medicine in a research laboratory costs about $2.6 billion, and in total roughly $150 billion is spent on the process; as a result, only 46 new drugs appeared in 2014. Insilico Medicine builds artificial intelligence to help develop drugs and biomarkers and to study the mechanisms of aging. The company wants to improve the quality of life of every person.
PathAI (Boston) was founded by Andy Beck, a graduate of Harvard Medical School. Beck spent more than five years at Harvard and decided to help other doctors with the timely diagnosis of complex diseases, such as cancer, by using machine learning to analyze cell images quickly and accurately.
The startup's capabilities are used not so much by doctors as by researchers at pharmaceutical companies. Among its clients are such giants of the pharmaceutical industry as Novartis, Gilead Sciences, and Bristol-Myers Squibb.
Aira (San Diego) helps blind and visually impaired people "see" the world by combining human capabilities with artificial intelligence in an app or smart glasses. The company says its AI assistant, Chloe, is still at an early stage: it can perform simple tasks like reading the instructions on a medicine package, but the developers have big ambitions for its further development.
One of the company's goals is to make navigation possible with computer vision. The product can be used free of charge for sessions shorter than five minutes, and for all sessions in the more than 25 thousand places where Aira has partnership agreements, such as JFK Airport or the Wegmans chain of stores.
Programming languages such as Python and R are popular for machine learning applications, but in recent years Julia has been steadily carving out its place in ML. Julia offers solid support for modern ML frameworks such as TensorFlow and MXNet through wrapper packages (TensorFlow.jl, MXNet.jl), making it easy to fit into existing workflows.
Here are some of the powerful projects built on this language.
IBM and Julia Computing for Diagnosing Diabetic Retinopathy
Diabetic retinopathy is a complication of diabetes in which high blood sugar levels damage the back of the eye (the retina). Timely screening and diagnosis can help prevent vision loss for millions of diabetics worldwide, yet many of them lack access to health care.
As of 2010, more than 126.2 million people in the world had diabetic retinopathy, and this number is expected to rise to over 191.2 million by 2030. According to the WHO, in 2006 the disease accounted for about 5% of world blindness.
Deep Learning in Julia comes to the rescue and helps to significantly reduce this number. By combining Julia’s superior speed and performance with IBM’s Power8 server and NVIDIA Tesla K80 GPU accelerators, researchers increased image processing speed 57x — a dramatic improvement.
They used MXNet.jl, a Julia package for deep learning. As a first step, they loaded a pre-trained ImageNet model called Inception, with weights from its 39th training epoch. On top of it, they specified a simple classifier.
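A minimal sketch of that setup might look like this. The checkpoint prefix, the GPU context, and the exact head layout are assumptions for illustration; the calls follow MXNet.jl's FeedForward API, whose details may vary by version:

```julia
using MXNet

# Load the pre-trained Inception model at epoch 39 (the "Inception-BN" file
# prefix is an assumption; load_checkpoint returns the symbolic architecture
# plus the trained weights).
arch, arg_params, aux_params = mx.load_checkpoint("Inception-BN", 39)

# A simple classifier head on top of the extracted Inception features:
# five output classes, one per diabetic retinopathy grade (0-4).
clf = @mx.chain mx.Variable(:data)                     =>
      mx.FullyConnected(name = :fc1, num_hidden = 128) =>
      mx.Activation(name = :relu1, act_type = :relu)   =>
      mx.FullyConnected(name = :fc2, num_hidden = 5)   =>
      mx.SoftmaxOutput(name = :softmax)

model = mx.FeedForward(clf, context = mx.gpu())  # run on a GPU accelerator
```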
contextflow and Julia Help Radiologists Diagnose Difficult Cases Faster
The first and foremost problem in radiologists' current workflow is increasing workload and lack of time. To diagnose and treat disease, radiologists now spend a growing share of their time searching for information and a shrinking share actually interpreting examinations.
contextflow's Julia-based image search engine sets out to resolve this dilemma. contextflow is a 3D image-based search engine that uses deep learning to put the knowledge encoded in millions of medical images and reports at radiologists' fingertips: simply mark a region of interest in an image, and contextflow returns reference cases and the associated knowledge.
While radiologists can spend up to 20 minutes searching for information, contextflow, using Julia, promises to cut search time to roughly 2 seconds. Given the global shortage of radiologists, this could be a real solution.
To do so, contextflow uses Knet.jl, a low-level Julia deep learning package, to build the models that identify reference cases for physicians.
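contextflow's actual model is not public, but a minimal Julia sketch of the underlying idea (embed a marked patch with a small Knet.jl network, then look up the most similar precomputed case embeddings) might look like this:

```julia
using Knet, LinearAlgebra

# A tiny stand-in embedding network: one conv layer plus pooling, then a
# linear projection to a 64-dimensional patch descriptor.
struct Embedder; wc; wp; end
Embedder() = Embedder(0.1f0 .* randn(Float32, 3, 3, 1, 16),  # 3x3 conv, 16 filters
                      0.1f0 .* randn(Float32, 64, 3600))     # 15*15*16 = 3600 features

function embed(m::Embedder, x)          # x: 32x32x1xN image patches
    h = pool(relu.(conv4(m.wc, x)))     # conv -> ReLU -> 2x2 pooling
    m.wp * mat(h)                       # 64xN patch descriptors
end

# Nearest-neighbor search over a database of precomputed case embeddings:
# cosine similarity, returning the indices of the k most similar cases.
function nearest_cases(query::Vector{Float32}, db::Matrix{Float32}, k::Int)
    sims = vec(query' * db) ./ (norm(query) .* norm.(eachcol(db)))
    partialsortperm(sims, 1:k; rev = true)
end
```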
Google DeepMind Predicts an Exacerbation of Kidney Disease Two Days Before It Occurs
This deep learning method can continuously predict the risk of future patient deterioration, drawing on recent work that models adverse events from electronic medical records and using acute kidney injury (AKI), a common and potentially life-threatening condition, as an example.
The model was developed on a large data set of electronic medical records covering a variety of clinical environments: 703,782 adult patients across 172 inpatient and 1,062 outpatient sites. It predicts 55.8% of all inpatient episodes of AKI and 90.2% of all AKI that required subsequent dialysis, with a lead time of up to 48 hours and a ratio of two false alerts for each true alert (i.e., roughly one alert in three is correct). In addition to predicting future AKI, the model provides confidence estimates and a list of the clinical features that contribute most to each prediction, as well as predicted future trajectories for clinically relevant blood tests.
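As a rough illustration of continuous risk prediction (a toy, not DeepMind's architecture), a recurrent model can update a patient state from each incoming EHR event and emit a risk score at every step:

```julia
σ(x) = 1 / (1 + exp(-x))    # logistic function for the risk head

# Hand-rolled recurrent cell: all weights and dimensions are illustrative.
struct RiskRNN
    Wh::Matrix{Float64}   # state -> state
    Wx::Matrix{Float64}   # event features -> state
    w::Vector{Float64}    # state -> risk logit
end

function risk_trajectory(m::RiskRNN, events::Vector{Vector{Float64}})
    h = zeros(size(m.Wh, 1))
    risks = Float64[]
    for x in events                      # one step per lab result / vital sign
        h = tanh.(m.Wh * h .+ m.Wx * x)  # update the patient state
        push!(risks, σ(m.w' * h))        # e.g. risk of AKI within the next 48 h
    end
    risks
end

# Usage with random weights and three synthetic events of 4 features each:
m = RiskRNN(0.1 .* randn(8, 8), 0.1 .* randn(8, 4), randn(8))
risk_trajectory(m, [randn(4) for _ in 1:3])
```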
Although recognizing and quickly treating acute kidney injury is known to be difficult, this approach may offer a way to identify at-risk patients within a time window that allows early treatment.
Augmented Reality Microscope with Real-Time Integration of Artificial Intelligence for Cancer Diagnosis
Microscopic evaluation of tissue samples plays an important role in diagnosing and characterizing cancer, and therefore directs therapy. However, these assessments show significant variability, and many regions of the world lack access to trained pathologists. Although artificial intelligence promises to improve access to care and its quality, the cost of digitizing pathology images and the difficulty of deploying AI solutions remain barriers to real-world use.
The Augmented Reality Microscope (ARM) superimposes AI-generated information on the current view of the sample in real time, integrating AI seamlessly into routine workflows. Researchers have demonstrated the usefulness of ARM for detecting metastatic breast cancer and identifying prostate cancer, with latency compatible with real-time use. ARM is expected to remove the barriers to adopting AI aimed at improving the accuracy and efficiency of cancer diagnosis.
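Schematically, such a pipeline is a latency-budgeted loop: grab a frame, run the model, project the result into the eyepiece. A hypothetical Julia sketch, in which every function is a stand-in:

```julia
# Stand-ins for the real components: camera capture, a detection model,
# and the optical overlay that projects the result back into the eyepiece.
capture_frame()       = rand(Float32, 512, 512)  # stand-in for the camera feed
run_model(frame)      = frame .> 0.99f0          # stand-in for tumor segmentation
project!(frame, mask) = nothing                  # stand-in for the AR projection

function arm_loop(; n_frames = 100, budget_s = 0.1)  # ~10 fps latency budget
    for _ in 1:n_frames
        t0 = time()
        frame = capture_frame()
        mask  = run_model(frame)
        project!(frame, mask)
        sleep(max(0.0, budget_s - (time() - t0)))  # hold the frame rate
    end
end
```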
Neural Network to Instantly Predict the Tertiary Structure of a Protein
Another powerful AI breakthrough in healthcare: the ProteinNet neural network can predict the structure of a protein in milliseconds. Predicting a protein's structure from its sequence is a central problem in biochemistry, and convolutional neural networks are showing promising results.
Advances in deep learning that replace complex, hand-engineered pipelines with differentiable models optimized end to end suggest potential benefits. In their study, the scientists presented an end-to-end differentiable model for learning protein structure.
The model links the protein's local and global structure through geometric units that optimize global geometry without breaking local covalent chemistry. It was tested on two hard tasks: predicting new folds without co-evolutionary data and predicting known folds without structural templates.
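To make "end-to-end differentiable" concrete, here is a toy Julia sketch using Zygote. The geometry is a 2-D simplification (real models place backbone atoms in 3-D), but it shows the key property: a loss on coordinates backpropagates all the way to the predicted angles:

```julia
using Zygote  # reverse-mode automatic differentiation

# Toy "geometric unit": torsion angles -> chain coordinates.
function fold(θ::Vector{Float64})
    φ = cumsum(θ)                              # cumulative chain direction
    hcat(cumsum(cos.(φ)), cumsum(sin.(φ)))     # N×2 residue positions
end

# Coordinate-space loss against a known target structure. Because fold()
# is built from differentiable operations, gradients flow from the final
# structure back to the angle predictions: end-to-end differentiability.
loss(θ, target) = sum(abs2, fold(θ) .- target)

θ      = 0.1 .* randn(6)                       # predicted torsion angles (toy)
target = fold(0.1 .* randn(6))                 # a synthetic "native" structure
∇θ     = gradient(t -> loss(t, target), θ)[1]  # gradient w.r.t. the angles
```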
On the first task the model achieves state-of-the-art accuracy, and on the second it comes within 1–2 Å. Competing methods that use co-evolution data and experimental templates have been refined over many years, so the differentiable approach likely has significant room for further improvement, with applications ranging from drug discovery to protein design.
Certainly, there are lots of other AI applications in medical care. To me, this means one thing: we have great potential for further development.
To me, everything about AI plus medicine looks fine except for one thing. While cutting-edge technologies save time and money and serve patients more efficiently, most people still wonder: is it safe enough? What if the AI software gives a wrong diagnosis?
Who will be responsible for false results? It's hard to answer: from a legal point of view, it is not clear who the responsibility shifts to. What's the solution? I think medical workers should supervise the work of AI.
In my opinion, it is highly important for medicine to change not towards the complete replacement of doctors, but towards symbiosis and collaboration with AI. All existing AI-powered systems should act as support systems for medical decision-making.
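As a tiny, purely hypothetical illustration of that symbiosis: a decision-support system can act on high-confidence predictions only as suggestions, and route everything uncertain to a clinician for full review:

```julia
# Human-in-the-loop triage sketch (all names and thresholds hypothetical):
# the model never acts alone; low confidence escalates to a human.
struct Prediction
    diagnosis::String
    confidence::Float64
end

function route(p::Prediction; threshold = 0.95)
    p.confidence >= threshold ?
        ("suggest to clinician", p.diagnosis) :
        ("escalate for full human review", p.diagnosis)
end

route(Prediction("diabetic retinopathy, grade 2", 0.97))
```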
Thanks for reading!