Artificial Intelligence: A Primer for the Laboratory Leader

Are robots coming for us, or are they coming to help us?

Gaurav Sharma, MD, and Tarush Kothari, MD, MPH
Published: Nov 18, 2019

“Alexa, what is artificial intelligence?” 

The moment you say these words, the circle on top of your digital voice-controlled assistant lights up, and the machine replies, “Artificial intelligence is the theory and development of computer systems able to perform tasks that normally require human intelligence and decision making.” 

In the past five years, popular media has been flooded with stories featuring “smart” machines and artificial intelligence (AI) slated to replace everything from truck drivers to doctors. Much of the discussion comes off as hypothetical musings about a dark future with rampant job loss and dramatic proclamations about the end of the world as we know it.

Closer to home, AI has become a hot topic in the clinical pathology peer-reviewed literature, at meetings and training courses, and among bloggers.

The jargon around AI can be intimidating and confusing for laboratory managers and leaders. Perhaps we could all help ourselves by becoming familiar with certain basic concepts and definitions. For the purpose of this article, we have framed these basics into 11 questions, with the aim of demystifying AI. 

1. What is artificial intelligence?

As the term suggests, artificial intelligence is intelligence that is artificial (machine-based or nonhuman), programmed by humans to perform humanlike activities. By contrast, human intelligence is premised on learning by trial and error and from past experience (either one’s own or that of others).

Any AI system has three basic properties: 

  1. Intentionality: Humans design AI systems with the intention of making decisions from historical or real-time data.
  2. Adaptability: AI systems learn and adapt as they compile information to make decisions and continuously refine this process.
  3. Intelligence: This intelligence is nonhuman, though it is generally the best approximation of how a human would make decisions (hence, depending on how an AI system is designed, it is prone to bias).

2. How old is artificial intelligence?

The term artificial intelligence was coined in 1956 at a computer science conference at Dartmouth College in New Hampshire. AI entered popular folklore in 1997, when IBM’s Deep Blue defeated reigning world chess champion Garry Kasparov in a six-game match. In 2011, IBM’s Watson defeated Ken Jennings and Brad Rutter, the two most successful human Jeopardy! players of all time, in front of millions of television viewers. Within the realm of the clinical laboratory, the first mentions of AI appeared in the 1990s.



3. What is the difference between a general AI system and a narrow AI system?

A general AI system can solve complex, interconnected problems across many domains. In the real world, general AI systems remain rare and are only just emerging.

A narrow AI system, on the other hand, is programmed to perform a specific, highly repetitive, data-intensive task efficiently, at scale, and with very few errors (e.g., taking inventory of store shelves).

4. Is artificial intelligence applicable to lab-generated data?

Laboratory-generated data offers an ideal environment for AI applications because it meets all the requirements of big data—a term used to describe large data sets characterized by the properties of volume, velocity, veracity, and variety. Big-data sets may have limited visible value, but when mined with the right questions in mind and using the right exploratory methods, they can reveal insights and information that were previously hidden.

5. What is an algorithm?

An algorithm is a set of instructions that a machine can act upon. These instructions help the machine decide and execute the next step based on current information. Simple algorithms can be built from basic logical conditions (e.g., “and,” “or,” and “not”), and multiple simple algorithms can be combined into complex ones. The start-up sequence of a clinical laboratory analyzer is an example of an algorithm: it defines the exact sequence of steps to be completed before the machine is ready for operation.
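To make this concrete, here is a minimal sketch in Python of such a rule-based start-up algorithm. Every step name and threshold is illustrative rather than drawn from any real analyzer.

```python
# A minimal sketch of a rule-based start-up algorithm for a hypothetical
# analyzer. Every step name and threshold here is illustrative.

def startup_sequence(temperature_c: float, reagent_level: float) -> bool:
    """Run simple checks in a fixed order; stop at the first failure."""
    steps = [
        ("temperature in range", 20.0 <= temperature_c <= 25.0),
        ("reagent available", reagent_level > 0.10),  # at least 10% full
    ]
    for name, passed in steps:
        print(f"{name}: {'OK' if passed else 'FAIL'}")
        if not passed:
            return False  # abort: machine is not ready
    return True  # all checks passed; ready for operation

print("Ready:", startup_sequence(temperature_c=22.5, reagent_level=0.8))
```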

6. What is machine learning?

Machine learning (ML) includes any application or domain of AI that relies on sophisticated algorithms to spot patterns or associations within a data set, make predictions, and suggest appropriate actions to a human based on those predictions.

There are three broad types of ML: 

Supervised: The goal of supervised ML is to find the mapping function (f) given the input variable (x) and the output variable (y).

y = f(x)

Typically, a training data set is used first to identify the mapping function (f). The function (f) can then be used to predict outputs for a new data set with input variables similar to those of the training data. Common supervised ML algorithms include linear regression, logistic regression, decision trees, k-nearest neighbors, naive Bayes, and support vector machines.
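As a minimal illustration, the sketch below fits a supervised model using the scikit-learn library (assuming it is installed) on a tiny synthetic data set; all the numbers are invented for demonstration.

```python
# A minimal supervised-learning sketch with scikit-learn on synthetic data.
# x is the input variable; y is the known output; fitting finds f in y = f(x).
import numpy as np
from sklearn.linear_model import LogisticRegression

x_train = np.array([[1.0], [2.0], [3.0], [8.0], [9.0], [10.0]])  # inputs
y_train = np.array([0, 0, 0, 1, 1, 1])                           # known outputs

model = LogisticRegression()
model.fit(x_train, y_train)           # learn the mapping function f

# Predict outputs for new inputs similar to the training data.
print(model.predict([[2.5], [9.5]]))  # expected: [0 1]
```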

Unsupervised: Unlike supervised ML, unsupervised ML has no predefined outputs. The goal is for the algorithm to identify the patterns that best describe the input data. There are two main types of unsupervised tasks: clustering and association. Clustering can discover groupings inside your input data (e.g., that certain specialists, such as obstetricians and geneticists, overwhelmingly order prenatal genetic tests). Association can discover rules hidden inside your input data (e.g., that only certain hematologists prescribe an expensive medication such as argatroban). Neural networks (discussed below) can be applied to unsupervised tasks as well as supervised ones.
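A minimal clustering sketch, again using scikit-learn on invented ordering counts, shows how groupings can emerge without any predefined labels.

```python
# A minimal clustering sketch with scikit-learn. Each row is a hypothetical
# provider profile: [prenatal genetic tests/month, routine panels/month].
import numpy as np
from sklearn.cluster import KMeans

orders = np.array([
    [40, 5], [35, 8], [42, 6],   # heavy prenatal orderers (e.g., OB, genetics)
    [2, 90], [1, 85], [3, 95],   # providers ordering mostly routine panels
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(orders)
print(kmeans.labels_)  # two groupings found without labels, e.g., [0 0 0 1 1 1]
```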

Reinforcement learning: Unlike supervised or unsupervised ML, reinforcement learning aims to maximize reward in a given situation. The algorithm seeks the path with the maximum total reward, usually the sum of the positive and negative rewards collected along a sequential path.
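The toy sketch below illustrates the idea with Q-learning, one common reinforcement-learning algorithm, on a five-cell corridor; the environment, actions, and reward values are invented for demonstration.

```python
# A toy reinforcement-learning sketch: Q-learning on a five-cell corridor.
# The agent starts in cell 0; reaching cell 4 earns +1, and every other
# move costs -0.01. All states, actions, and rewards are illustrative.
import random

n_states, actions = 5, [-1, +1]        # actions: move left or move right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for _ in range(500):                   # training episodes
    s = 0
    while s != n_states - 1:
        # Explore occasionally; otherwise take the best-known action.
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)      # next state, kept in bounds
        r = 1.0 if s2 == n_states - 1 else -0.01   # reward for this move
        best_next = max(q[(s2, b)] for b in actions)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# The learned policy should prefer moving right (+1) in every non-terminal cell.
print([max(actions, key=lambda act: q[(s, act)]) for s in range(n_states - 1)])
```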

7. What is deep learning?

Deep learning can be thought of as a subset, or next generation, of machine learning that can learn useful patterns from raw data with far less human guidance. The most common deep learning methods (i.e., those used in image analysis software, voice and speech recognition, and natural language processing) use neural networks, which are inspired by the neuronal circuitry of the brain and how it is believed to process information. Deep learning is powerful in that it can take on both supervised and unsupervised ML tasks.
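As a minimal illustration, the sketch below trains a small neural network with scikit-learn’s MLPClassifier on the classic XOR problem, a task a model without an internal (hidden) layer cannot solve; real deep learning systems are far larger and typically use dedicated frameworks.

```python
# A minimal neural-network sketch: scikit-learn's MLPClassifier learning the
# XOR function, which cannot be solved without an internal (hidden) layer.
# The data are synthetic; real deep learning uses far larger networks.
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]] * 50)  # repeated training inputs
y = np.array([0, 1, 1, 0] * 50)                      # XOR outputs

net = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                    max_iter=2000, random_state=0)
net.fit(X, y)
print(net.predict([[0, 1], [1, 1]]))  # expected: [1 0]
```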

8. What is natural language processing?

Human language is an incredibly complex system for communicating ideas and concepts across groups, time, and locations. Our words and languages are fraught with opportunities for misinterpretation and misrepresentation. A single word may represent more than one concept (e.g., “star” can refer to a film actor as well as a celestial body), or several words may represent a single concept (e.g., the element sodium can be written as Na, sod., or sodium). For literal-minded machines, understanding language is an error-prone exercise. For example, the phrase “call me a cab” can be validly interpreted by a machine as a call for a taxi (what you wanted) or a request to be addressed as “cab” (a hilarious but logical misunderstanding).

Natural language processing (NLP) is a discipline that helps computers capture human language, identify the concepts, derive meaning from what is being conveyed, and communicate back in words (either oral or written) that are easily understood by humans. NLP is divided into many subdisciplines, including but not limited to optical character recognition, speech recognition, and voice-driven internet search. So, any time you see a product featuring NLP, it is designed to understand and interpret oral or written words.
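One small piece of NLP, concept normalization, can be sketched in a few lines, using the sodium example from above; the synonym table here is illustrative, not a real terminology standard.

```python
# A minimal sketch of one small NLP step: normalizing different surface
# forms of the same concept (the sodium example from the text). The
# mapping table is illustrative, not a real terminology standard.
CONCEPT_MAP = {"na": "sodium", "sod.": "sodium", "sodium": "sodium"}

def normalize(text: str) -> list[str]:
    """Lowercase, split into tokens, and map known synonyms to one concept."""
    return [CONCEPT_MAP.get(tok, tok) for tok in text.lower().split()]

print(normalize("Order Na and glucose"))  # ['order', 'sodium', 'and', 'glucose']
```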

9. What is an artificial neural network?

Artificial neural networks (ANNs) are computing systems (implemented in hardware or software) modeled on biological brains. Like the neuronal networks in our brains, ANNs can comprise hundreds to billions of individual units that are interconnected and can be switched on or off at any given time.

ANNs are typically arranged in layers. Data are presented to an ANN through its input layer, processed by a network of internal layers, and retrieved from the output layer. This level of interconnection contrasts with traditional computing, in which processing units are arranged in series and information is transferred sequentially between silos.

Traditional computing architecture is very efficient at solving a limited set of problems with well-defined inputs and outputs (e.g., the digital calculator). Most ANNs are distinguished by a specific learning rule that determines how the processing units interact with each other. When faced with a new problem, an ANN must first be trained; its performance improves only as it learns from examples.
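The sketch below traces data through a tiny ANN: an input layer, one internal layer, and an output layer, with random (untrained) connection weights chosen purely for illustration.

```python
# A minimal sketch of data flowing through an ANN's layers: input layer ->
# one internal (hidden) layer -> output layer. Weights here are random;
# a real network would learn them from data.
import numpy as np

rng = np.random.default_rng(0)
x = np.array([0.2, 0.9, 0.5])    # input layer: three input units

W1 = rng.normal(size=(3, 4))     # connections: input -> hidden (4 units)
W2 = rng.normal(size=(4, 1))     # connections: hidden -> output (1 unit)

hidden = np.tanh(x @ W1)         # each hidden unit switches on/off smoothly
output = np.tanh(hidden @ W2)    # result retrieved from the output layer
print(output)
```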

10. Will artificial intelligence take over our jobs?

Yes and no. 

In the 19th and early 20th centuries, mechanization of agriculture and manufacturing reduced the need for jobs that require physical labor. The advent of computing in the late 20th century eliminated jobs requiring consistent but low-skill mental work, such as simple arithmetic done by accountants. In the 21st century, the internet economy has eliminated jobs that require mid-skilled work, including tasks that require coordination, collation, minimal computation, and analysis. Think of hotels.com and uber.com—web-based apps that upended travel agencies and taxi companies, respectively.

Like the internet economy, AI will disrupt underlying business models that offer decent-paying jobs in areas such as research, development, marketing, and analysis of all kinds. Any job that has a predictable workflow and deals with a limited number of tasks and variables is at risk of losing its ground to the advent of AI. Think of inventory management, supply chains, risk management, finance, law, and even medicine.

In health care, AI may result in the loss of some jobs, but it also presents opportunities for new ones. Creative destruction is a concept proposed in the 1940s by the Austrian economist Joseph Schumpeter. He believed that in a capitalist system, new markets and methods develop from existing ones, incessantly destroying the old system and creating a new one in its place. We posit that AI will be a creative destroyer of several roles and jobs in health care: it will make several low-complexity jobs redundant but will reward specialization and the adoption of new business models. For example, there will be a need for analytics translators, a role that does not exist yet but will be critical in bridging the technical expertise of data engineers and data scientists with the operational expertise of individual departments. Analytics translators may not be proficient in AI themselves but will be well versed in the expectations of the customer, the business model of the enterprise, and the nature of the problems the enterprise faces.

11. What will be the manager’s role in the post-AI laboratory?

In the post-AI laboratory, the role of the manager may differ from that of today. The biggest difference will be that more information and insights will be easily and quickly available to the manager. Access to high-quality, high-speed, and low-cost insights will allow managers to focus on high-yield and people-centric issues rather than on low-yield and task-centric issues. Laboratory managers will work in six main roles in the post-AI laboratory:

Roles of Laboratory Managers in the Post-AI Laboratory

Role | Primary Competencies | Outcome
Project identifier | Process improvement | Identify projects or opportunities that eliminate waste or improve efficiency
Data investigator | Data retrieval; data cleanup | Work with data scientists to identify sources of data within and outside the lab
AI feeder | Analytics | Work with data scientists to build a pipeline that takes existing data sources and directs them to the appropriate AI methods
AI validator | Analytics; business/finance | Work with non-lab specialists (e.g., finance) to validate the end results of AI analysis
AI communicator | Analytics; communication | Translate the AI insights and their operational meaning for employees and leadership
Project manager | Communication; project management | Once an insight has been validated and a suitable project has been finalized, relay it to the teams and manage it as a project

As the Greek philosopher Heraclitus said, “Change is the only constant in life.” He lived in a world in which farmers worked with bare hands or rudimentary tools, were subject to the whims of nature, and sought the mercy of the Greek gods. Modern farmers work in a completely different world; they have mechanized tools, assured irrigation, current information on soil conditions, and detailed information on weather patterns and market demand, allowing them to grow more abundant crops and sell them at a higher price. Yet if ancient farmers were around today, they might believe modern agriculture had taken away their jobs. It is useful to remember that new technology eliminates old jobs, but it also creates new jobs. Instead of thinking that robots are coming for us, we may instead think about how robots are coming to help us. Let us not fear change; let us take inspiration from Heraclitus and look forward to a better and brighter future. 


Gaurav Sharma, MD

Dr. Sharma is a pathologist practicing in Southeastern Michigan and is board-certified in anatomical/clinical pathology, molecular-genetic pathology, and clinical informatics.


Tarush Kothari, MD, MPH

Dr. Kothari is a pathologist practicing in New York. He is board-certified in anatomical/clinical pathology and clinical informatics and holds a master’s degree in public health from Columbia University.


Tags: Informatics, Artificial Intelligence, Machine Learning, Management, Computer Models