A quick, non-academic introduction to AI for beginners in medicine (or caregiving).
This post is based on a lecture I’ve held a few times, so it was time to put it in print.
Artificial intelligence is, as the word suggests, “non-organic” intelligence.
As a neuroscientist, I think of lab-grown brain cells that produce neural activity, form neural networks, and can perform cognitive tasks. Those don’t exist yet.
A computer scientist thinks of a computing machine that can accurately and autonomously simulate the thought and decision processes of the human intellect.
When it comes to AI, there are as many discussions to be had about what it is and isn’t, as there are applications of it.
Just look at pop culture. It’s everything from an adorable robot to an apocalyptic mind control device.
If we see intelligence as a tool for our survival, and artificial intelligence as a tool for our intelligence, it starts to make sense.
And in health care, we can put AI in the context of digitalisation. In both a figurative and a literal sense, AI sits in the middle of digit(A)l(I)sation.
We have a digital spectrum.
At one end is digitization (i.e. paper to Excel): same work process, different data storage. It’s much easier to store data in an Excel file than on hundreds of post-its. At the other end of the spectrum are advanced transformative systems, which in the future may relieve humans of jobs that a digital system does more efficiently and more accurately.
Some digital systems support what we already do: apps that make analysis and data gathering easier. Other digital systems aim to transform how we work, e.g. self-service at an airport: check-ins, check-outs. These systems don’t replace humans; they restructure a customer process and shorten waiting times, freeing the humans to tend to customer service.
Isn’t it all just a bunch of statistics?
Right off the bat, let’s make this clear. Yes AND no.
Not helpful, I know.
It’s math. Math can be used for statistics. Math can be used for machine learning/deep learning. Math can be used as art.
This model by Alexander Scarlat at HisTalk is a perfect example of the difference. Both machine learning (which sits on the AI spectrum, though to what extent it counts as AI is debated) and statistics have an input and an output. So what is the difference?
I could write an essay, but the title of this says “quick” so here it is.
Statistics starts with: input + rule → model → output.
Machine learning starts (most often) with: input + output + training → the model learns a rule; then: rule + new input → the model predicts an output.
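A toy sketch of that difference, using an invented fever example (the threshold, labels, and function names are all illustrative, not from any real system):

```python
# Statistics: input + rule -> output. The rule is known up front.
def apply_rule(temperature_c):
    # Hand-written rule: fever if temperature is above 38 degrees C
    return "fever" if temperature_c > 38.0 else "no fever"

# Machine learning: input + output -> rule. The rule is learned from examples.
def learn_threshold(inputs, outputs):
    # "Training": find the boundary between the two labelled groups
    fever_temps = [t for t, label in zip(inputs, outputs) if label == "fever"]
    normal_temps = [t for t, label in zip(inputs, outputs) if label == "no fever"]
    return (min(fever_temps) + max(normal_temps)) / 2

temps = [36.5, 37.0, 38.5, 39.2]
labels = ["no fever", "no fever", "fever", "fever"]
threshold = learn_threshold(temps, labels)          # learned rule: 37.75
print(apply_rule(39.0))                             # rule given -> "fever"
print("fever" if 39.0 > threshold else "no fever")  # rule learned -> "fever"
```

Same input, same output; the difference is whether the rule is handed to the model or learned by it.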
But what is training? And what is a model?
Let’s say a machine has to decide if it sees an X or an O. It bases its premise on pixels (or voxels, if it’s 3D data). Sliders source: Venelin Valkov (has a GREAT pedagogic channel, follow him): https://medium.com/@curiousily
But hey! All X’s and all O’s don’t look similar.
So what is the common denominator?
Take that and turn it into math.
And you have yourself a probability of an O or an X.
This is how a computer sees an image. And it’s not smart. It operates according to set values and math.
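A minimal sketch of that “set values and math” view, on a made-up 3×3 pixel grid (the reference patterns and the crude matching score are invented for illustration; a real system would learn weights rather than use hand-set templates):

```python
# 3x3 grids flattened to 9 pixels, 1 = dark pixel
X_PATTERN = [1, 0, 1, 0, 1, 0, 1, 0, 1]   # corners + centre lit -> "X"
O_PATTERN = [1, 1, 1, 1, 0, 1, 1, 1, 1]   # ring lit, centre dark -> "O"

def score(pixels, pattern):
    # How many of the 9 pixels agree with the reference pattern
    return sum(1 for p, q in zip(pixels, pattern) if p == q)

def classify(pixels):
    x_score = score(pixels, X_PATTERN)
    o_score = score(pixels, O_PATTERN)
    prob_x = x_score / (x_score + o_score)  # crude "probability" of X
    return ("X" if prob_x > 0.5 else "O"), prob_x

# A sloppy X with one corner pixel missing still scores as an X
sloppy_x = [1, 0, 1, 0, 1, 0, 1, 0, 0]
print(classify(sloppy_x))
```

No understanding anywhere, just pixel counts turned into a probability, which is the whole point.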
And sometimes it gets it wrong, because it detects things we didn’t predict it would.
Ever seen this headline?
Well. As you’ve probably understood by now, it’s not the computer or algorithm that was awesome on its own. The dermatologists are AWESOME as well for setting a clear definition and math for what is cancer and what is just a plain ol’ mole.
So if the computer can’t think, how is it intelligence?
Well. To put it roughly, we don’t always think either.
You can train a person to do a simple surgery, by letting them do a standard procedure over and over again, without them ever knowing the mechanisms or the reasons behind what they do. A person could technically perform any task that has clear guidelines, without any background knowledge.
Sometimes we go on autopilot.
That’s why there is a great model from Russell and Norvig (1995) that divides the AI field into four systems.
Systems that think humanly, well almost.
There, we have diagnosis. Anything that can be classified, that can help us make a decision, or make a decision on its own. We DON’T have machines that think like a human, but we do have machines that simulate human thinking.
Say we need to decide whether a patient should be given treatment A, B, C or D. Using combined data, with DL, we can build a pretty good AI support system that suggests an optimal course of treatment.
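A hypothetical sketch of what such a SUPPORT system might look like. Every feature name, weight, and threshold here is invented; the point is only that the system ranks options and the clinician keeps the final call:

```python
# Hypothetical decision-support sketch: rank treatments A-D for a patient.
# All scoring rules are made up for illustration.
def rank_treatments(patient):
    scores = {
        "A": 0.9 if patient["crp"] > 50 else 0.2,       # e.g. strong inflammation
        "B": 0.7 if patient["age"] > 65 else 0.4,       # e.g. age-adjusted option
        "C": 0.5,                                       # baseline option
        "D": 0.8 if patient["allergic_to_a"] else 0.1,  # fallback if A is ruled out
    }
    # Suggest, never prescribe: return options best-first for a human to review
    return sorted(scores, key=scores.get, reverse=True)

patient = {"crp": 70, "age": 34, "allergic_to_a": False}
print(rank_treatments(patient))  # ['A', 'C', 'B', 'D']
```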
But, then again. Humans are illogical. And humans are unpredictable. So the way we think can’t really be modeled this way. Thus the emphasis on SUPPORT system.
I wash my hands before I go into the shower. It makes little sense to do so. No computer would do this type of illogical thinking unless a human specifically asked it to. And that brings us to the next box.
Systems that think rationally, most of the time
In this case you don’t need the human aspect of it.
Two self-driving wheelchairs are driving down a long hallway; they meet in a narrow passage and only one can get by.
One of the patients is more ill, and neither patient can communicate. If two human assistants were pushing the wheelchairs, they would start to discuss and bargain about who has to move out of the way.
In the case of the self-driving wheelchairs’ computer systems, we can set the priority before the patients are sent off to their destinations. If both wheelchairs have the same priority level, we establish a rule: all chairs coming from the right have priority, etc.
We translate rules into systems that think according to them.
We let a system make rule-based, non-emotional decisions.
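The wheelchair rule above can be sketched in a few lines. The field names and the convention that a higher priority number means a more urgent patient are assumptions for illustration:

```python
# Rule-based, non-emotional decision: which wheelchair pulls aside?
def who_yields(chair_a, chair_b):
    # The more urgent chair (higher priority value) passes first;
    # the less urgent one yields
    if chair_a["priority"] != chair_b["priority"]:
        return chair_a if chair_a["priority"] < chair_b["priority"] else chair_b
    # Tie-breaker from the text: the chair coming from the right has right of way
    return chair_b if chair_a["from_right"] else chair_a

a = {"id": "WC-1", "priority": 2, "from_right": False}
b = {"id": "WC-2", "priority": 2, "from_right": True}
print(who_yields(a, b)["id"])  # WC-1 yields: WC-2 came from the right
```

No bargaining, no discussion: the decision is fixed before the chairs ever meet.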
Systems that act humanly
This is where chat-bots, social robots and human-mimicking systems come in.
In health care, heaps and heaps of companies are trying to create conversational UI.
But if you’ve ever spoken to Siri or Alexa, you know you wouldn’t want one anywhere near a patient. A LOT of misunderstanding can and will happen. The above-mentioned chat-bots that mimic psychologists have a very limited scope of conversation in their design, plus a lot of multiple-choice boxes that help the conversation along.
Like in the example of my chat-bot Apophis. It was made to be sarcastic, which does not work when the script (which is a bunch of IF statements) meets a real human problem. Apophis was completely text-based, which increased the chances of misunderstanding.
The most common way to use chatbots and social robots is to offload something repetitive.
Another way to act humanly is to replicate human physiology and let the script manifest in a physical body. Such as a social robot.
A robot that can answer simple questions and/or perform tasks that entail heavy lifting.
Systems that act rationally
This one is quite difficult. But imagine a system that isn’t dependent on our opinions. A system that knows everything, and can use the systems we’ve already mentioned to give the user an optimal path to a successful outcome. Whatever that might be.
I almost died, would an AI have caught it?
I almost died a few months ago from strep throat. I had never felt worse in my life; I could clearly feel life slipping away, losing consciousness, difficulty breathing. I was hallucinating and showed signs of going into sepsis.
In my chart it said: responsive, has the flu.
That’s the information that the AI would have had available.
Knowing now what is needed for a system to do a somewhat human job, we need to understand that any AI system, low level or high level, needs better data.
The AI would have had my CRP, which was over 70. It would probably have evaluated my situation differently than the humans who were examining me.
Now, in my case, it ended well. I was eventually given antibiotics, and it took 50 days and four different types of antibiotics before I became well, but I did.
Could a rational system have shortened my time in treatment? Found the strep earlier? Suggested the right medication right away?
We don’t know.
The future will tell!