Mar 28, 2022

Read time: 3 mins

What Ethical AI Practices Mean for Business

Traditionally, ethics applied only to human beings. Now, however, there are new actors in the world: intelligent machines. With the rapid advance of artificial intelligence (AI), businesses, and society as a whole, must consider the ethics of these machines.

Watch Iyad Rahwan, Guest Expert in the Artificial Intelligence: Implications for Business Strategy online short course from MIT, explore ethics in AI.

Transcript

Ethics in AI: How businesses should take care

Ethics are the moral principles that govern a person’s behavior or the conduct of an activity. These principles have traditionally applied to human beings, but now we have a new kind of actor in the world: intelligent machines.

So why should business care about AI ethics? The big picture is that AI will bring many benefits to society, consumers, and business alike: better-quality recommendations, safer cars, better medical diagnoses, and a whole host of others. But AI technology also raises risks that people are concerned about: algorithms that filter news and put us in filter bubbles, fake news, and unfair matching of jobs or romantic partners, for instance.

Introduce a human element to balance the risks of AI

How do we balance the risks and the benefits? The way people have been thinking about these problems so far is that if you are worried the machine might misbehave, you put a human in the loop. The human-in-the-loop is a person who operates as part of the system, monitors it in case it misbehaves, and is able to intervene. The machine performs a certain function, for example an autopilot flying a plane, and a human performs oversight, for example the pilot in the cockpit. The stakeholders involved all have the same goal: everyone agrees we want to get from A to B as quickly and safely as possible.
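To make the pattern concrete, here is a minimal sketch of a human-in-the-loop arrangement. The names (Autopilot, HumanPilot, fly_leg) and the decision logic are hypothetical, chosen only to mirror the autopilot example above; they are not from any real avionics or AI library.

```python
class Autopilot:
    """The automated system performing the primary function."""

    def propose_action(self, state: dict) -> str:
        # Hypothetical logic: climb if below the target altitude, else cruise.
        return "climb" if state["altitude_ft"] < state["target_ft"] else "cruise"


class HumanPilot:
    """The human in the loop: monitors the system and may intervene."""

    def review(self, state: dict, proposed: str) -> str:
        # The human overrides only when the proposal looks unsafe;
        # otherwise the machine's action stands.
        if proposed == "climb" and state["altitude_ft"] > 40_000:
            return "level_off"   # intervention
        return proposed          # approval


def fly_leg(state: dict) -> str:
    autopilot, pilot = Autopilot(), HumanPilot()
    proposed = autopilot.propose_action(state)
    return pilot.review(state, proposed)  # the human has the final say


print(fly_leg({"altitude_ft": 41_000, "target_ft": 43_000}))  # -> "level_off"
```

The key design point is that the machine only proposes; the human disposes. This works as long as everyone agrees on what counts as misbehavior.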

I think what’s happening more recently is that we’re going to have situations where different stakeholders have different preferences about the kinds of goals the system should achieve, or the kinds of constraints that should be imposed on its behavior. Somebody might care more about fairness, another person might care about safety, and a third might care about efficiency.

How do we, as a society, make up our minds and agree on what the machines should do, so that human oversight can then enforce it? What happens in this case is that we move from the idea of human-in-the-loop to something I call ‘society-in-the-loop’. You can think of society-in-the-loop as human-in-the-loop oversight of an intelligent system, combined with a social contract that determines what society would like the system to fulfill.
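One way to picture society-in-the-loop is a social contract encoded as agreed-upon criteria, with the human overseer still holding a veto. The sketch below assumes the contract can be reduced to simple weights over fairness, safety, and efficiency; the weights, option data, and scoring function are illustrative assumptions, not a method from the course.

```python
# Aggregated stakeholder preferences, standing in for the social contract.
SOCIAL_CONTRACT = {
    "fairness": 0.4,
    "safety": 0.4,
    "efficiency": 0.2,
}


def score(option: dict) -> float:
    """Rank a candidate system behavior against the social contract."""
    return sum(weight * option[criterion]
               for criterion, weight in SOCIAL_CONTRACT.items())


def society_in_the_loop(options: list, human_veto) -> dict:
    # The contract ranks the options; the human in the loop retains a veto.
    ranked = sorted(options, key=score, reverse=True)
    for option in ranked:
        if human_veto(option):  # human oversight on top of the contract
            return option
    raise RuntimeError("no option acceptable to the human overseer")


options = [
    {"name": "A", "fairness": 0.9, "safety": 0.8, "efficiency": 0.3},
    {"name": "B", "fairness": 0.4, "safety": 0.9, "efficiency": 0.9},
]
best = society_in_the_loop(options, human_veto=lambda o: o["safety"] >= 0.6)
print(best["name"])  # -> "A": highest contract score that the human accepts
```

The sketch separates the two ingredients named above: the weights stand in for what society has agreed the system should fulfill, while the veto preserves human-in-the-loop oversight on top of that agreement.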