A.I.: Use responsibly

July 22nd, 2019
Article by: Mark Vernooij, Patrick Leenheers

In 2018 there was a fourfold increase in the number of times that CEOs mentioned A.I. in their earnings calls, the AI Index Research Consortium reported in December 2018. This indicates that we stand on the eve of a wave of investment.


Leaders need to understand A.I. and actively lead its development. In this article – the final in a series of three – we will dive deeper into how leaders are ensuring that their organizations act responsibly. In the first article, we discussed the value potential and risks of the technology. In the second piece, we observed what leaders are doing to grow the momentum in their organizations.

The risks are real

In recent months we have seen what can go wrong: scandals involving social manipulation (Cambridge Analytica), security breaches (Facebook), algorithmic discrimination (Amazon’s Rekognition), incorrect cancer treatment recommendations (IBM Watson), the use of A.I. in weapons (Google), and fatal injuries (Tesla and Uber). We will need to find ways to mitigate the downsides around bias, explainability, privacy, security, and safety.

We do not have one ethical code, but we expect A.I. to have one

The historian Herodotus (5th century B.C.) was reportedly the first to challenge the notion of a universal morality, observing that ‘Custom is the Lord of All.’ MIT’s Moral Machine uses the well-known “Trolley Problem” to crowd-source rules for self-driving algorithms, and it shows how ethical standards differ across cultures. In the decades since the Trolley Problem was first posed, we have not been able to find a solution that works for all. In today’s multi-conceptual world, with co-existing paradigms of ‘good’ and ‘bad,’ many of us actually consider it a virtue to be open to other people’s points of view. This lack of one common ethical code already poses challenges in our real-world interactions; these challenges are compounded in a digital world. It is the first time in human history that we let technology make decisions about our lives at such a large scale. Algorithms grant or deny us opportunities, and even determine what to do in life-and-death situations. And we expect A.I. to do so in a consistent, rational, and fair way, across situations, regions, and (cultural) contexts, even if we ourselves don’t.


Leaders, not algorithms, are responsible

We believe that the ethical debate is important, but it will not give us answers anytime soon. And yet the potential of A.I. is so large that we are rapidly moving ahead with its development and use. Instead of waiting for the philosophers to agree, we suggest that leaders of organizations take responsibility. They need to lead a process in which we collectively make clear what we consider acceptable in specific domains, and build organizations that are ethical and transparent about their intents and actions with A.I. Doing so will help them protect the reputation of their organizations, provide them with long-term success, and create societies worth living in.

In recent months we have seen multiple high-profile examples of how leaders are taking steps towards responsible use.

  • Collaborate to create regulation. Microsoft, Apple, and Google have indicated that they encourage governments to provide guardrails around how data and A.I. may be used. In Europe, GDPR was a step forward in protecting the rights of the individual. Two NY Times correspondents, one based in New York and one in London, found in an experiment that it makes a marked difference. The German Ethik-Kommission für Automatisiertes Fahren set an example of how government and industry leaders, collaborating with policymakers, advocacy groups, lawyers, journalists, ethicists, and scientists, defined an ethics code for autonomous vehicles.
  • Build a company culture of ethical A.I. Google published an A.I. code of ethics and installed a “responsible innovation team,” in response to last year’s worker movement opposing collaboration with America’s Homeland Security and Defense Departments. It is an essential first step, and probably more is needed. Leaders will need to explain why ethics are critical, and their decisions need to be visibly consistent with the code. People’s targets and incentives need to be aligned. A.I. project teams need to be diverse and inclusive, and people (technical and non-technical) need to be trained in how to make ethical tradeoffs together.
  • Interrogate your algorithms. Leaders need to get practical with the technology. They should understand how biases were removed from the dataset that was used to train the A.I., how the impact of the algorithms was tested and monitored, and how safeguards have been built into the processes to avoid people being harmed.
  • Be transparent. The French administration announced that all government algorithms would be made public so that society at large can verify their correct application. The Dutch government suggests a more nuanced approach, given that general access has practical limitations. Yet being clear to users about what an A.I. is doing should be a basic right. And targeted “Algorithmic Impact Assessments” by neutral third parties are an essential part of governance, with leaders increasingly asking auditors to ascertain how their algorithms work.
  • Own the (un)intended consequences. If face recognition discriminates against people of color, the technology is not mature enough to be used by governments for surveillance and law enforcement purposes, and we should not use it. Amazon’s shareholders demanded that its leadership stop selling “Rekognition” to the U.S. government, to no avail so far. Yet Amazon did shut down its hiring assistant after a range of failed attempts to remove its gender bias.

Each plays their part

It is clear that government, shareholders, activists, media, and even employees need to play their part to ensure that we develop A.I. in a way that serves humanity. And it is encouraging to see that leaders are increasingly taking up their roles to ensure we build solutions for a society that we all want to live in.


A Dutch version of this article originally appeared in NRC Live.

If you are a leader looking to develop A.I. responsibly, join the THNK Executive Leadership Program. Download the brochure or find out if you qualify.