Ethical AI

From the Microsoft Cloud and AI Team

Impacts of Artificial Intelligence

Every new technology brings with it new advantages and new impacts, some of them unforeseen.

With the introduction of the automobile, people gained the ability to move goods across the country and to live in new places. Cars brought new wealth, industries, and opportunities, and reshaped societies in ways that had never been seen. But with those advantages also came pollution, a loss of jobs in certain industries, a decline in public transport systems, and increased congestion.

Artificial Intelligence is broadly defined as computer programs that approximate human senses and intelligence. It has the potential to enhance human life in ways we are only now beginning to explore. But along with its advantages come new challenges for fairness, accountability, transparency, and ethical system behavior.

As technical professionals, it is our responsibility to mitigate the risks associated with AI solutions. Designing solutions with the following principles in mind can help us build an ethical foundation.

AI Principle:

Fairness

AI must maximize efficiencies without destroying dignity, and guard against bias

Fairness

  • AI systems should treat everyone in a fair and balanced manner and not affect similarly situated groups of people in different ways.
  • Model training data should sufficiently represent the world in which we live, or at least the part of the world where the AI system will operate.
  • Ensure that users understand the limitations of the system, especially since they may assume that technical systems are more accurate and precise than people, and therefore more authoritative.
  • Ensure that the people designing AI systems reflect the diversity of the world in which we live.
  • People with relevant subject matter expertise should be included in the design process and in deployment decisions.
  • Develop analytical techniques for and within the system to detect and address potential unfairness.
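
The last bullet can be made concrete with a small check. The following is a minimal sketch in Python, assuming binary predictions and a recorded sensitive attribute are available as arrays; the demographic-parity metric and the 0.1 alert threshold are illustrative choices, not requirements of these principles.

    # Minimal fairness check: demographic parity difference across groups.
    # Assumes binary predictions and a sensitive attribute stored as NumPy arrays;
    # the 0.1 alert threshold is an illustrative assumption.
    import numpy as np

    def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
        """Largest gap in positive-prediction rates between any two groups."""
        rates = [y_pred[group == g].mean() for g in np.unique(group)]
        return float(max(rates) - min(rates))

    # Example: flag the model for review when the selection-rate gap is large.
    y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
    group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
    gap = demographic_parity_difference(y_pred, group)
    if gap > 0.1:
        print(f"Potential unfairness detected: selection-rate gap of {gap:.2f}")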

AI Principle:

Accountability

AI must have algorithmic accountability

Accountability

  • If the recommendations or predictions of AI systems are used to help inform consequential decisions about people, it is critical that humans are primarily accountable for these decisions.
  • Clearly demonstrate that the solution is designed to operate within a well-defined set of parameters under expected performance conditions, and that there is a way to verify that it is behaving as intended under actual operating conditions.
  • Create a robust feedback mechanism so that users can easily report performance issues they encounter.
  • Implement an internal review board to oversee all AI solutions created by your organization.
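
As a minimal sketch of the feedback mechanism described above (the record fields and the submit_feedback function are hypothetical, not part of any Microsoft API), user reports might be captured as structured records that the internal review board can inspect:

    # Minimal sketch of a user feedback mechanism. Assumes each prediction is
    # logged with an identifier so reports can be traced back to model behavior.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    import json

    @dataclass
    class FeedbackReport:
        user_id: str
        model_version: str
        prediction_id: str     # ties the report to a logged prediction
        description: str       # the issue in the user's own words
        created_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    def submit_feedback(report: FeedbackReport) -> None:
        """Persist the report where the review process can see it."""
        # A real system would write to a ticketing or monitoring pipeline;
        # printing JSON stands in for that here.
        print(json.dumps(report.__dict__, indent=2))

    submit_feedback(FeedbackReport(
        user_id="u-123",
        model_version="assistant-1.4.2",
        prediction_id="pred-9f2c",
        description="The recommendation seems inconsistent with the data I provided.",
    ))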

AI Principle:

Transparency

AI must be transparent

Transparency

  • The system must identify itself as AI, or as augmented with AI, so as not to deceive users.
  • AI solutions must be clear, understandable, and well described.
  • Perform systematic evaluations of the quality and suitability of the data and models used to train and operate AI-based products and services, and systematically share information about potential inadequacies in the training data.
  • When AI systems are used to make consequential decisions about people, provide adequate explanations of overall system operation, including information about the training data and algorithms, training failures that have occurred, and the inferences and significant predictions they generate.
  • AI systems must comply with privacy laws that require transparency about the collection, use and storage of data, and mandate that consumers have appropriate controls so that they can choose how their data is used.
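
As one minimal sketch of these practices (the disclosure text and documentation fields are illustrative assumptions, in the spirit of a model card rather than a required schema), a system might publish its identity, training-data notes, and known limitations alongside its output:

    # Minimal sketch of transparency metadata: the system identifies itself as
    # AI and shares a short description of its training data and limitations.
    AI_DISCLOSURE = (
        "You are interacting with an AI-assisted system. "
        "A human reviewer is available on request."
    )

    MODEL_DOCUMENTATION = {
        "name": "loan-review-assistant",            # hypothetical system
        "version": "2.1.0",
        "intended_use": "Assists loan officers; not a sole decision-maker.",
        "training_data": "Historical applications, 2019-2023 (illustrative).",
        "known_limitations": [
            "Lower accuracy for self-employed applicants.",
            "Not evaluated for markets outside the original deployment.",
        ],
    }

    def describe_system() -> str:
        """Plain-language explanation shown alongside consequential decisions."""
        limits = "; ".join(MODEL_DOCUMENTATION["known_limitations"])
        return f"{AI_DISCLOSURE} Known limitations: {limits}"

    print(describe_system())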

AI Principle:

Ethical

AI must assist humanity and be designed for intelligent privacy

Ethical

  • Develop a shared understanding of the ethical and societal implications of the AI technologies your application implements.
  • Design and testing should anticipate and protect against unintended system interactions and against attempts by bad actors to influence operations, such as through cyberattacks or misleading communications.
  • Humans should play a critical role in making decisions about how and when an AI system is deployed, and whether it’s appropriate to continue to use it over time.
  • Describe when and how an AI system should seek human input during critical situations, and how a system controlled by AI should transfer control to a human in a manner that is meaningful and intelligible.
  • When AI systems are used to help make decisions that impact people’s lives, it is particularly important that people understand how those decisions were made.
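
The human-oversight points above can be sketched as a simple hand-off policy. The confidence threshold, helper names, and review queue below are illustrative assumptions; the point is only that low-confidence or high-impact cases transfer to a person with enough context to decide meaningfully:

    # Minimal human-in-the-loop sketch: act automatically only when confidence
    # is high and the decision is low impact; otherwise escalate to a person.
    from typing import Tuple

    CONFIDENCE_THRESHOLD = 0.8  # illustrative cut-off, tuned per application

    def escalate_to_human(prediction: str, confidence: float) -> Tuple[str, str]:
        # Stand-in for a real review queue; a reviewer would see the model's
        # suggestion, its confidence, and the underlying inputs.
        print(f"Escalating: model suggests '{prediction}' at {confidence:.0%} confidence.")
        return "pending-human-review", "human-reviewer"

    def decide(prediction: str, confidence: float, high_impact: bool) -> Tuple[str, str]:
        """Return (decision, decided_by) under a human-in-the-loop policy."""
        if high_impact or confidence < CONFIDENCE_THRESHOLD:
            return escalate_to_human(prediction, confidence)
        return prediction, "ai-system"

    print(decide("approve", 0.62, high_impact=False))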

Putting the Principles into Action