Artificial intelligence may be a boost to medicine


It won’t be long before computers put us all out of jobs. I can’t wait. In the medical field, it looks like some of us will be out of a job before others.

An exciting study from the Netherlands utilized computerized analysis of pathologic slides to look for metastatic breast cancer in lymph nodes in specimens from women with known breast cancer.

Finding metastatic cancer in a lymph node is an important indicator for recurrence of disease if untreated, a prognostic sign, and a reason to undergo more extensive treatment. Obviously, doing whatever we can to accurately and thoroughly assess such tissue is extremely important. The good news is, pathologists do a great job of reading slides. The bad news is, they are human.

An alternative to humans reading slides is to digitize the slides and have computers read them. The analysis of medical images is a logical starting point for learning computers to augment or take over human responsibilities. First, the image is flat, so the computer doesn’t need stereoscopic vision or the ability to move in three dimensions. Second, analyzing images is a game of “Where’s Waldo.” Lastly, analyzing an image for a particular element has a binary answer: either a fracture is seen or it is not.

Image analysis has been used to examine pictures of retinas for evidence of damage to the eye from diabetes. It has also been used to analyze images of skin lesions. The computer does a good job of “seeing” cancerous lesions; in fact, it can make the call from a digital image with the same accuracy as a dermatologist looking at a patient’s skin, even when the dermatologist uses a scope to magnify the lesion. As soon as computers have little arms to pop pimples, dermatologists will be out of work.

There is more to it than computers just seeing images. This study, like others, used a neural net for computer learning. The programmers didn’t program rules for what a cancer is or isn’t. They showed the computer images labeled “cancer” or “not cancer,” and the computer was left on its own to teach itself the characteristics of each. This kind of learning is a narrow form of artificial intelligence. The computer hasn’t passed the Turing test. It can’t fool us into thinking it is human in a conversation, but it can fool us into thinking it is a pathologist (they aren’t stereotypically known for conversation anyway).
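To make the idea of learning from labeled examples concrete, here is a deliberately tiny sketch. The study used deep neural networks on enormous slide images; this toy nearest-centroid classifier is not that, but it illustrates the same principle: the code below is never told what “cancer” looks like, it only averages labeled examples and compares new ones against those averages. The two-number feature vectors are made up for illustration.

```python
# Toy illustration of learning from labeled examples (NOT the study's
# actual method, which used deep convolutional neural networks).

def train(examples):
    """examples: list of (feature_vector, label). Returns one centroid per label."""
    sums, counts = {}, {}
    for features, label in examples:
        c = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            c[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in c] for label, c in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

# Made-up "labeled slides": feature vectors with known answers
training = [
    ([0.9, 0.8], "cancer"), ([0.8, 0.9], "cancer"),
    ([0.1, 0.2], "not cancer"), ([0.2, 0.1], "not cancer"),
]
model = train(training)
print(predict(model, [0.85, 0.75]))  # classify a new, unseen "slide"
```

The programmers of such a system write only the learning procedure; the boundary between the two labels is discovered from the data itself.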


The researchers made an open invitation to programmers around the world to devise teaching algorithms. They supplied the participating teams with 270 images (110 with nodal metastases) for their computers to learn from. Then the computers were tested with 129 different images (49 with cancer).

At the same time, the researchers had pathologists read the 129 test slides. One pathologist read the slides without any time constraint, which is unrealistic because pathologists usually read about one slide per minute. In a second test, 11 pathologists read all the slides at their usual pace. It turns out the best computer algorithm did as well as the pathologist who worked without a time constraint. That pathologist spent 30 hours reviewing the slides, which would be totally unrealistic in clinical practice. The computer algorithm was better at finding metastases than the pathologists who worked at their normal pace.

But before you go and fire your pathologist, there is more to the story. In real life, if a pathologist thinks a case is negative for cancer, those slides are submitted for additional staining. Some of the additional stains are time consuming and costly, as they use antibodies that bind to tumor cells. The antibodies carry a colored marker so the tumor cells really stand out under the microscope. This process is critical because some metastases are tiny: just a minuscule clump of abnormal cells amongst the normal milieu of cells.

The researchers did not include this additional processing step in their study. So it would be unfair to say the computer found things that would never have been found, but it is fair to say the computer sometimes found things that would have cost more time and money for human eyes to find.

The researchers did not speculate on the future applications of this algorithm, but I’m willing to do so. The robotic takeover won’t be an event, it will be an evolution. In this case, I foresee the learning computer scanning slides and picking out the key positive images from the series and a pathologist confirming the diagnosis, thereby saving a lot of time. For the negative slides, the additional staining will be done, and a new computer algorithm will investigate them, picking out the positives. And, finally, the slides that are negative by both methods will be examined by the pathologist to confirm the result. Such a system would save a large chunk of time as individual cases can have dozens of slides each.
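That speculative workflow can be sketched as a simple routing procedure. Everything here is hypothetical: `model_flags_positive` and `restain` are stand-in placeholders for an image-analysis model and a lab staining step, not real systems. The sketch only shows the ordering the column imagines: model-flagged positives go to the pathologist first, model-negatives get restained and rechecked, and double-negatives are confirmed last.

```python
# Purely speculative triage pipeline based on the column's proposed workflow.
# model_flags_positive and restain are hypothetical stand-ins, not real APIs.

def triage(slides, model_flags_positive, restain):
    """Split slides into (read first, confirm last) piles for the pathologist."""
    confirm_first, confirm_last = [], []
    for slide in slides:
        if model_flags_positive(slide):
            confirm_first.append(slide)        # likely metastasis: read now
        else:
            stained = restain(slide)           # costly antibody staining step
            if model_flags_positive(stained):
                confirm_first.append(stained)  # caught on the second pass
            else:
                confirm_last.append(stained)   # confirm as negative at the end
    return confirm_first, confirm_last

# Demo with dummy stand-ins: slide names containing "pos" are "flagged."
first, last = triage(["pos1", "neg1"],
                     lambda s: "pos" in s,
                     lambda s: s + "_stained")
print(first, last)  # ['pos1'] ['neg1_stained']
```

The time savings come from ordering, not omission: every slide is still confirmed by a human, but the pathologist sees the likely positives first.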

This slow, augmentative robotic process is exciting, but also means one thing — don’t encourage your 5-year-old to be a pathologist when he or she grows up.

Dr. Salvatore Iaquinta is a head and neck surgeon at Kaiser Permanente San Rafael and the author of “The Year They Tried To Kill Me.” He takes you on the Highway to Health every fourth Monday.

