
Hybrid AI for Personal Medicine


Georgii Dernovyi (@george33)

Programmer, researcher and inventor in IT

Neural networks gave us a powerful and cheap-to-use tool for solving problems of forecasting, computer vision, and text analysis. At the same time, however, they brought the problem of inaccuracy, which is treated as the "norm," and the "black box" nature of deep networks, whose outputs are difficult to understand and improve.

It is unreasonable to put a neural network in control of complex, expensive objects where an error is costly, or can cost lives or cause irreparable damage. This area includes medicine, where neural networks have recently been used to analyze images and make diagnoses. The first use is well justified: we currently have no other technology comparable to CNNs (with or without attention). Diagnosis by a network alone, however, is a suboptimal and highly limited solution that can lead us astray.

This is due to the limited representation of the meanings of words and concepts in a neural network. The concepts we use in our heads are dynamic in nature and look more like rivulets connected with other streams than the static objects with fixed connections found in modern neural networks such as GPT-2, GPT-3, BERT, etc.

They reflect meaning no better than the shadow of an object reflects its actual geometry and color. However, if we take the strong side of neural networks, the rapid production of a preliminary result, and use it as input for classical AI, which performs precise operations on semantic data and can thereby compensate for the network's errors, then we can probably achieve a result inaccessible to either networks or classical AI on their own.

Classical AI (hereinafter cAI) is a rule-based inference system capable of providing any accuracy and depth of analysis on the data provided. The rules can be arbitrarily complex, even beyond human comprehension, and still yield an accurate and explainable calculation, provided that they are correct.
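To make the idea concrete, here is a minimal sketch of the core of such a rule-based system: a forward-chaining inference loop. The facts, rule names, and diagnoses below are invented for illustration and are not taken from any real knowledge base.

```python
# Minimal forward-chaining inference sketch (illustrative, not a real cAI system).
# Facts observed about the patient; all names are hypothetical.
facts = {"itching", "red_patches"}

# Each rule: (set of required premises, fact to conclude).
rules = [
    ({"itching", "red_patches"}, "suspect_eczema"),
    ({"suspect_eczema", "symmetric_lesions"}, "likely_atopic_dermatitis"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# The first rule fires; the second does not, because
# "symmetric_lesions" was never observed.
conclusions = forward_chain(facts, rules)
```

Because every derived fact is traceable to the rules that produced it, the chain of reasoning stays fully explainable, which is exactly the property the article contrasts with the black-box network.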

The downside of cAI is the complexity of developing rules and maintaining the system. Every known cAI system of any practical use was created from scratch, with its own representation of rules and inference, which prevented the experience of its creation and training from being effectively reused later.

The overwhelming majority of such projects reached no result at all, due to badly misjudged timelines, the lack of leading experts able to state their knowledge clearly, and the absence or weak qualifications of knowledge engineers, a rare caste of programmers who in theory were supposed to know fuzzy logic, linguistics, semiotics, and psychology.

They acted as mediators between experts and computers, programming the knowledge gleaned from the experts. All these difficulties led to the oblivion of cAI systems and the turn to neural networks.

However, let's imagine that we have a cAI system that already holds basic knowledge of the subject area, along with written logical rules describing exactly how reasoning works in that area. Then, to expand its knowledge, we would not have to program anything (no knowledge engineers needed); we would only need to add an expert's specific knowledge to the existing base.

Suppose also that we have found a way to represent knowledge so that it becomes understandable to any expert in the subject area. Then we can give the system to many specialists and subsequently combine their knowledge into super-knowledge, which is expected to be richer and more accurate than its parts. The system will be able to compare parts of the super-knowledge, report where and how they contradict each other, and resolve contradictions automatically or in expert mode.
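One simple reading of "comparing parts of super-knowledge" is merging two experts' rule sets and flagging cases where identical premises lead to different conclusions. The sketch below uses that simplified notion of contradiction; the rules and disease names are invented for illustration.

```python
# Sketch: merging two experts' rule sets and flagging contradictions.
# A "contradiction" here is the same premises mapping to different
# conclusions — a deliberate simplification of real knowledge merging.

expert_a = {frozenset({"scaly", "silvery"}): "psoriasis"}
expert_b = {
    frozenset({"scaly", "silvery"}): "seborrheic_dermatitis",
    frozenset({"ring_shape"}): "tinea",
}

def merge(a, b):
    """Combine rule sets; collect (premises, conclusion_a, conclusion_b)
    triples wherever the two experts disagree."""
    merged, conflicts = dict(a), []
    for premises, conclusion in b.items():
        if premises in merged and merged[premises] != conclusion:
            conflicts.append((premises, merged[premises], conclusion))
        else:
            merged[premises] = conclusion
    return merged, conflicts

combined, disagreements = merge(expert_a, expert_b)
# One disagreement is detected: the same premises lead expert A to
# "psoriasis" and expert B to "seborrheic_dermatitis".
```

In a full system, each flagged disagreement would be shown to the experts (or resolved by a policy) rather than silently dropped, which is the "automatic or expert mode" resolution the text describes.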

This reasoning led to the start of the AI dermatologist project, which required both AI types within the same system. A session with a dermatologist begins with an examination of the problem skin; accordingly, a CNN gives us an analogous starting point, a list of possible diagnoses from a photo.
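The network's output at this step can be thought of as a ranked candidate list. Below is a hedged sketch of that hand-off, assuming the CNN produces one raw logit per diagnosis; the diagnosis names and logit values are purely illustrative.

```python
import math

# Hypothetical CNN output: one raw logit per candidate diagnosis.
# These names and numbers are invented for illustration.
logits = {"eczema": 2.1, "psoriasis": 1.7, "acne": -0.3, "melanoma": 0.2}

def top_candidates(logits, k=3):
    """Softmax over the logits, then return the k most probable diagnoses
    as (name, probability) pairs, best first."""
    z = max(logits.values())                       # stabilize the exponent
    exps = {d: math.exp(v - z) for d, v in logits.items()}
    total = sum(exps.values())
    probs = {d: e / total for d, e in exps.items()}
    return sorted(probs.items(), key=lambda kv: -kv[1])[:k]

candidates = top_candidates(logits, k=3)
# "eczema" (highest logit) comes out first; the cAI stage then takes
# this list as its set of competing hypotheses.
```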

Next, using the classical deductive method, our cAI emulates the brain of a dermatologist, assuming he has brilliant analytical skills. It finds the shortest path, measured by the number of questions asked, using data from the network to reach the most reliable result. The answer to each question changes the probabilities of the possible diagnoses, and the best diagnosis is reached when the competing diagnoses become much less likely.
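One plausible way to realize "the shortest path by number of questions" is to pick, at each step, the question whose answer is expected to reduce the uncertainty over diagnoses the most (expected information gain), updating probabilities by Bayes' rule. This is a sketch under that assumption; the diagnoses, priors, and symptom likelihoods are invented for illustration, not taken from the DExpert knowledge base.

```python
import math

# Hypothetical priors from the CNN stage and per-diagnosis symptom
# likelihoods P(symptom present | diagnosis); all values are invented.
priors = {"eczema": 0.5, "psoriasis": 0.3, "tinea": 0.2}
likelihood = {
    "itching": {"eczema": 0.9, "psoriasis": 0.6, "tinea": 0.7},
    "scaling": {"eczema": 0.3, "psoriasis": 0.9, "tinea": 0.6},
}

def entropy(p):
    """Shannon entropy (bits) of a probability distribution."""
    return -sum(v * math.log2(v) for v in p.values() if v > 0)

def posterior(priors, symptom, answer):
    """Bayes update of the diagnosis distribution after a yes/no answer."""
    post = {}
    for d, p in priors.items():
        l = likelihood[symptom][d]
        post[d] = p * (l if answer else 1 - l)
    total = sum(post.values())
    return {d: v / total for d, v in post.items()}

def best_question(priors):
    """Pick the symptom whose answer minimizes expected remaining entropy."""
    def expected_entropy(symptom):
        p_yes = sum(priors[d] * likelihood[symptom][d] for d in priors)
        h = p_yes * entropy(posterior(priors, symptom, True))
        h += (1 - p_yes) * entropy(posterior(priors, symptom, False))
        return h
    return min(likelihood, key=expected_entropy)

# "scaling" discriminates the candidates far better than "itching"
# (its likelihoods differ more across diagnoses), so it is asked first.
```

Each answered question replaces `priors` with the posterior and the loop repeats until one diagnosis dominates, which matches the stopping criterion described in the text.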

The system automatically generates questions from the knowledge base and conducts a conversation with the user in completely normal English.

A short description of DExpert, a personal AI dermatologist:

The user takes a picture or uploads a photo of the problem skin into the program, after which the system determines a list of the most likely candidates, calculates the minimum number of questions needed to clarify the diagnosis, and starts a dialogue.

The user can receive a recommendation from the system by clicking the button below, or study possible candidate diseases by requesting a photo and description from the system.

After receiving the answers, the system calculates a diagnostic table.


A screenshot of the administration system (the far-right panel shows knowledge views).

The described system is suitable for building AI doctors in both related specialties (cosmetology) and distant ones (radiology, tomography).

