Authors

  1. Sledge, George W. Jr., MD

Article Content

Recently Elon Musk, serial entrepreneur and visionary industrialist, sounded the alarm over the perils of artificial intelligence (AI), calling it our "biggest existential threat." Musk is no technophobe, no misguided Luddite, but rather a perceptive observer and a bold visionary, so many think he is worth taking seriously. His argument, and that of those who agree with him, goes something like this: the ultimate result of modern information technology will be to create an autonomous artificial intelligence, a creature of servers and the web, whose intelligence equals or outpaces our own. That creature, capable of transmitting its base code to any other server, soon has in its grasp all the resources of the internet.

  
superintelligence; oncology

George W. Sledge, Jr., MD, is Professor of Medicine and Chief of the Division of Oncology at Stanford University.

At that point, the autonomous intelligence essentially controls the world: bank accounts, Snapchat, Facebook, nuclear missiles. Borg-like, resistance becomes futile: object and your bank account is drained, weird pictures of that party where you drank too much suddenly show up in your boss's account, fake news suggesting you are under the control of aliens is transmitted to everyone you have ever friended. And if you persist in rejecting the AI's new world order, North Korea launches a tactical nuke at you while you are vacationing in Guam. You might be forgiven for being unable to distinguish between the AI apocalypse and 2017.

 

I've made fun of what is actually a serious intellectual argument. Anyone interested in that argument should read Nick Bostrom's Superintelligence, an entire tome devoted to the problem. Superintelligence has led to intellectual fistfights in the AI community, with responses ranging from "idiotic" to "frighteningly possible." Regardless, it is a good (though not light) read for those wanting to get some understanding of the issues involved.

 

One of the issues, if I read this literature correctly, is deciding what constitutes autonomous, higher-level, scary, Terminator-Skynet-echelon artificial intelligence, or even defining lower-level AI. AI proponents note that whenever some AI project works and is incorporated into standard processes, we cease to think of it as AI. It's just code that is a little bit smarter than last year's code and a little quicker than this year's humans. And individual toolkits, like the one Watson used to win at Jeopardy, will never be mistaken for higher-level intelligence.

 

We don't even really understand our own intelligence all that well. If I, Google Translate-like, but using my baccalaureate French ("perfect Midwestern French," per my college professor; not a compliment), translate "peau d'orange" as "skin edema," am I just using an algorithm programmed into my neocortex, a bundle of interconnected neurons firing off a pre-programmed answer? And is all that constitutes my intelligence nothing but a collection of similarly algorithmic toolkits, residing in synaptic connections rather than semiconductors, just programmed wetware?

 

And if so, how many toolkits, how many mental hacks, are required for a computer to equal or beat human intelligence? And if you combine enough of them, would the result be distinguishable from human intelligence?

 

A single human brain is something quite wonderful. The most recent analysis I have seen, from a group working at the Salk Institute, suggests that a single human brain has a memory capacity of a petabyte, roughly equal to that of the current World Wide Web. This both reassures and concerns me. On the one hand, it will be a while before the collective intelligence of some AI creature is greater than that of several billion humans, though even a somewhat stupider AI could still create a lot of mayhem, like some clever 12-year-old. On the other hand, I have been using that old "limited storage capacity" excuse for my memory lapses (missed anniversaries, appointments, book chapters, etc.) for some time now, and, unfortunately, it no longer holds up.

 

Part of why we might want to take the idea of superintelligence seriously is the rapid recent progress of AI. A good example involves Google Translate, a classic Big Data project. For a very long time, Google's ability to translate from one language to another, or anyone else's for that matter, was severely limited. The results mirrored that old Monty Python sketch in which a Hungarian/English phrasebook offers hilarious mistranslations that endanger the speaker (YouTube). But Google's translation abilities have improved tremendously, the result of a massive brute-force approach that now allows relatively good translation between almost any languages with a written form. Yet impressive as this feat is, it is not higher-level AI: Google Translate still doesn't write my blogs, in English or Hungarian, though it could well translate between the two.

 

AI remains a minor force, verging on nonexistence, in the medical world. The largest effort, IBM's Watson, has been an expensive dud. Its major effect so far has been to assist in the resignation of the president of MD Anderson Cancer Center. The Anderson Watson project, designed to offer clinical decision support for oncologists, was plagued by mission creep, financial bloat, missed deadlines, physician loathing, and ultimate technical failure. An initial $2.9 million outlay turned into a $65 million boondoggle, amid claims that multiple University of Texas system rules had been broken. Early failures, of course, do not preclude future successes, but as President Trump has noted, no one (at IBM or MD Anderson) apparently realized how complicated health care actually was.

 

I can't say I'm surprised. Much of what makes health care complicated is, at least currently, not soluble with higher-order computational strategies. In an average clinic, moving from one room to the next, I will speak to patients who want too much of modern medicine (diagnostically and therapeutically), followed by patients who reject what little I have to offer, both for reasons that seem unreasonable to me and, I suspect, would seem equally unreasonable to a computer-generated algorithmic intelligence. I know what the right thing to do is, if by the right thing one means improving disease-free survival or response rate or overall survival. It is the unquantifiable and the personal that torpedo my best efforts, sinking what should be optimal outcomes in a sea of doubt and confusion. An irritating nanny program, manifesting itself in annoying pop-up messages while I am already wasting my time navigating EPIC, is unlikely to improve my mood or my patient's lot.

 

Other, more modest, efforts continue apace. The IT start-up Flatiron and ASCO's CancerLinQ both plan to liberate the EHR via AI measures. By way of a conflict of interest statement: I was ASCO's President the year the ASCO Board committed to the creation of CancerLinQ, and I served as Co-Chair of its initial oversight committee. As such, I feel a residual interest in its ultimate success. Whether ASCO or some private, market-based approach prevails is not something I will bet on, either way. But I do foresee something approaching AI-based clinical decision support in our future. Still, this is not higher-level AI as I understand it.

 

I don't know that what I want, or what most doctors want, is some omniscient superintelligence telling me how to practice medicine. My interests are much more modest: could the AI listen in on my conversation with the patient and create a decent clinic note in EPIC? Could it put in orders for the PET/CT I discussed with my nurse? Could it print out a copy of that JCO or NEJM article I was misremembering, highlighting the crucial piece of data I tried conjuring from the dusty cupboards of my memory? Could it automatically tell me what clinical trials my patient is eligible for without me having to go through the protocol? Could it go through my EHR and certify me for QOPI or recertify me for the American Board of Internal Medicine or give me 2-minute snippets of education between patients that would result in CME credit? Could it automatically pre-cert patients for insurance authorizations? Before a superintelligent AI becomes Skynet-scary and ends human civilization, let's hope it at least passes through a brief useful phase in the clinic.

 

These tasks all seem reasonable projects for companies capable of translating Shakespeare into Swahili. Some of them represent intellectual aides, making me clinically smarter, but many are just time-savers. And time, of course, is every doctor's most important commodity in the clinic.

 

Even limited AI could have fairly profound effects on the way medicine is practiced. There are already machine learning programs that outperform radiologists at reading mammograms, equal dermatologists (which is to say, they are superior to me) at recognizing melanomas, and are gaining rapidly on a pathologist's ability to read an H&E slide. Algorithms are great at pattern recognition. Note to radiologists, dermatologists, and pathologists: beware. You are ultimately as disposable as workers in any of the other industries digitalization has transformed.
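As a toy illustration of the kind of pattern recognition these programs perform (not the actual mammography or dermatology systems mentioned above, which run deep neural networks over images), here is a minimal sketch that trains an off-the-shelf classifier on the public Wisconsin breast cancer dataset bundled with scikit-learn. The dataset, the choice of a random forest, and the train/test split are illustrative assumptions on my part, not anything described in this column.

```python
# A minimal sketch of algorithmic pattern recognition in an oncology setting.
# Assumptions: scikit-learn is installed; the bundled Wisconsin breast cancer
# dataset (30 numeric features per tumor, labeled benign/malignant) stands in
# for the far richer imaging data real clinical systems use.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load labeled tumor measurements.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.25, random_state=42
)

# Fit a random forest: an ensemble of decision trees that learns which
# combinations of features separate benign from malignant cases.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Evaluate on held-out cases the model never saw during training.
predictions = model.predict(X_test)
print(f"Held-out accuracy: {accuracy_score(y_test, predictions):.3f}")
```

A few dozen lines of code, no medical knowledge anywhere in them, and the pattern still gets found; that is the point, and also the warning.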

 

But back to superintelligence. Should we panic over the ultimate prospect of our servant becoming our master? I view this as an extension of a very old conflict: the conflict over who controls our lives. Throughout most of recorded human history, humans have been under the control of some tribal chieftain, feudal lord, king, dictator, or petty bureaucrat. It is only in the past 2 centuries or so that a significant portion of the human race has had anything approaching personal or political freedom.

 

I worry that the period that began with the signing of the Declaration of Independence, a period of immense human creativity, is coming to an end. Human freedom is under assault around the globe, and so far the assaults have been launched not by supercomputers but by the rich and powerful, by ex-Soviet oligarchs, by religious fanatics, and by impartial market forces that care nothing for human liberty. That these forces use computer algorithms to keep the weak and powerless weak and powerless is unsurprising.

 

If, say, a political party uses computers to design gerrymandered maps that limit the voting strength of those it views as supporting its opponents, it would be surprising if an AI creature failed to learn that lesson in its inexorable march toward dominance. If an insurance company uses Big Data/AI approaches to maximize its profit by limiting its liability to the poor and sick, why be surprised if some future AI master has no more care for human suffering than we do? If a foreign power uses computer technology to undermine democracy by flooding the internet with disinformation, why expect tomorrow's superintelligence to do anything other than mimic us? Maybe we need to earn our future freedom by fighting for it in the present, before it is lost to forces that are all too human.

 

At some level, I view a superintelligent AI as something almost...human. By this I mean a creature with a degree of insight that goes beyond mere pattern recognition. Something capable of strategizing, or scheming, perhaps even capable of hating, if silicon-based life forms can generate emotions similar to carbon-based ones.

 

There is a large science fiction literature devoted to AI, long since turned into Hollywood movies. One thinks of HAL in 2001: A Space Odyssey, or of Arnold Schwarzenegger's Terminator. These AIs are either maleficent, like Schwarzenegger's cyborg in Terminator, or controlling, like the machines of the Matrix series, or (rarely, because it is so unfrightening) benign and transcendent, as in Iain M. Banks' wonderful Culture novels. Ants don't understand human intelligence, and we might be like ants to an AI superintelligence. The common thread in all of these scenarios is that the inflection point could, in theory, occur in a picosecond.

 

Those who worry about such things (Musk, Bostrom, and others) say that the time to start working on the problem is now, because 10 or 20 years from now may be too late. As I type these words, pictures of post-hurricane south Texas, Florida, and Puerto Rico crowd the airwaves: cities without fresh water that are underwater. We are not good at preparing for natural disasters; I doubt we would be any better at prepping for an AI apocalypse. A kinder, gentler alternative suggests that we might proactively program AIs for beneficence, as with Isaac Asimov's three laws of robotics. Though if AIs are allowed to watch certain networks, they may decide humanity is unworthy of saving.