In 2018, the AI Index Research Consortium observed a fourfold increase in the number of times CEOs mentioned A.I. in their earnings calls. This suggests we are on the eve of a wave of investment. A.I., and in particular (deep) machine learning, has matured to the point that it is shifting from academic research to widespread commercial deployment. A.I. has immense value potential. And it has risks. This means the topic is no longer just for machine learning engineers and data scientists – it needs to be on the CEO agenda.
Leaders need to understand A.I. and actively lead its development. In this article – the first of three – we highlight the value potential and the risks, and argue that senior leaders need to get involved. In the second article, we will provide a practical perspective on how to build momentum in our organizations. In the third article, we will dive deeper into how leaders must ensure that we act responsibly.
The power of A.I.
As a rule of thumb, A.I. systems can do cognitive tasks that an (expert) human could do in a few seconds, and in some cases they outperform humans. This allows them to:
Recognize: A.I. systems can “see” images and videos, “read” text, “hear” sounds and “understand” speech – at a scale and accuracy that humans cannot match. This means they can find unhappy customers among thousands of call center calls and detect the beginnings of prostate cancer in a CT scan better than a radiologist.
Predict: A.I. can model complex phenomena and make predictions – whether it is what time our pizza will arrive, whether a tsunami is coming, or whether we will develop Alzheimer’s disease.
Interact: speech and language abilities are improving fast, allowing us to chat and talk with computers in every major language. And we can play complex games such as Go or StarCraft against them – and lose.
A.I.'s value potential
The McKinsey Global Institute estimates the value potential of A.I. at US$6 trillion, across all major industries and sectors of government. Organizations typically create this value in four ways:
Drive efficiency. KLM’s BlueBot handles the simple customer interactions, leaving only the complicated issues to human agents.
Increase effectiveness. Netflix’s algorithms are unmatched in suggesting what we should watch next, which allows the company to outperform competitors in knowing which content to purchase and promote.
New business models. Affirm, a consumer credit start-up, provides credit per individual transaction rather than by monthly subscription. Its algorithm assesses every purchase upfront, so the same person may be approved for a piece of furniture but not for a restaurant visit. This allows Affirm to reach a segment of people who would not be eligible for a credit card – a significant part of its user base.
New services. Google Duplex can now call your hairdresser on your behalf. Personal assistants may significantly shift power between consumers and service providers. And in December 2018, the world’s first self-driving car service, from Waymo, went live in Phoenix.
Governments also see the potential. We are seeing applications of A.I. in health care, education, and safety, as well as in areas such as climate change and the energy transition. A.I. will be like electricity or computers: it will emerge in every aspect of our lives.
Despite the technology’s tremendous potential, we need to be cautious in applying it. Here are a few of A.I.’s limitations and risks:
A.I. systems are dumb: They have no real understanding and apply no judgment. They compute and optimize whatever we ask them to, without asking questions. The British government found out that its ads were being streamed in extremist YouTube videos; Google’s AdWords algorithm, optimizing for clicks, did not see a problem with that.
A.I. systems are prejudiced: The quality of an A.I. is determined by its training data. By definition, these data are records of the past and therefore reflect social and other biases. Remember Amazon’s recruiting assistant, which concluded that men are better engineers than women.
A.I. systems are a black box: This makes them difficult to trust. If an A.I. declines a credit request, it cannot explain why – and the reason may be an unethical one.
A.I. systems are everywhere: This means that every aspect of our lives can be monitored and influenced. Our behaviors, privacy norms, and legal frameworks are not prepared for this.
A.I. is a CEO agenda topic
A.I.’s potential and risks mean that it needs to be on the CEO agenda. This requires action:
Get practical. A.I. is different, and we need to recalibrate our intuition about what computers can do. Take a teenager to a developer day and try to train an A.I. to recognize your pet – a sketch of how little code that can take follows this list.
Get strategic. Where is the value potential in your situation? Would you seek disruptive innovation through new services and business models? Or would you focus on the efficiency and effectiveness of your enterprise?
Be responsible. We must assume that our A.I. systems may have major unintended consequences. Leaders need to keep asking the hard questions, so that we do not sacrifice people’s privacy, safety or agency in our push for profitability.
Last but not least: shift gears. The technology is moving fast. We need to put the pedal to the metal, yet with our eyes wide open and with two hands on the steering wheel.
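To make the “get practical” suggestion concrete, here is a minimal sketch in Python of the pet-recognition exercise. It assumes you have collected a folder of photos, data/, with two subfolders (my_pet and not_my_pet), and it reuses a model pretrained on ImageNet (transfer learning), so a laptop and a few dozen photos are enough. The folder names, image size and number of epochs are illustrative choices, not a prescribed recipe.

    # A minimal sketch, assuming a "data/" folder with two subfolders of photos:
    # "my_pet/" and "not_my_pet/".
    import tensorflow as tf

    # Load the labelled photos; the two folder names become the two classes.
    train = tf.keras.utils.image_dataset_from_directory(
        "data", label_mode="binary", image_size=(224, 224), batch_size=8)

    # Reuse MobileNetV2 pretrained on ImageNet and freeze its weights
    # (transfer learning), so only a small head is trained on our own photos.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False

    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects inputs in [-1, 1]
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # probability that the photo shows your pet
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(train, epochs=5)

Roughly twenty lines of code, most of it reusing work someone else has already done – which is exactly the intuition about modern A.I. that this exercise is meant to build.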
A Dutch version of this article originally appeared in NRC Live.