
Ethics And AI: Are We Ready For The Rise Of Artificial Intelligence?

Steven Mintz

No job in the United States has seen more hiring growth in the last five years than artificial-intelligence specialist, a position dedicated to building AI systems and figuring out where to implement them.

But is that career growth happening at a faster rate than our ability to address the ethical issues involved when machines make decisions that impact our lives and possibly invade our privacy?

Maybe so, says Dr. Steven Mintz (www.stevenmintzethics.com), author of Beyond Happiness and Meaning: Transforming Your Life Through Ethical Behavior.

“Rules of the road are needed to ensure that artificial intelligence systems are designed in an ethical way and operate based on ethical principles,” he says. “There are plenty of questions that need to be addressed. What are the right ways to use AI? How can AI be used to foster fairness, justice and transparency? What are the implications of using AI for productivity and performance evaluation?”

Those who take jobs in this growing field will need to play a pivotal role in working out those ethical issues, he says, and a rough global consensus is already emerging about which ethical principles should govern AI.

Those principles include:

  • Transparency. People affected by the decisions a machine makes should be allowed to know what goes into that decision-making process.
  • Non-maleficence. AI should never be used to cause foreseeable or unintentional harm, including discrimination, violation of privacy, or bodily harm.
  • Justice. Monitor AI to prevent or reduce bias. How could a machine be biased? A recent National Law Review article gave this hypothetical example: A financially focused AI might decide that people whose names end in vowels are a high credit risk. That could negatively affect people of certain ethnicities, such as people of Italian or Japanese descent; a minimal illustrative check for this kind of bias follows this list.
  • Responsibility. Those involved in developing AI systems should be held accountable for their work.
  • Privacy. An ethical AI system promotes privacy both as a value to uphold and a right to be protected.
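
How might that kind of bias be caught in practice? The sketch below is purely illustrative and is not drawn from Mintz or the National Law Review; the sample data, the surname-based grouping, and the 80 percent threshold (the so-called four-fifths rule) are assumptions chosen to show what a basic disparate-impact audit of a credit model's decisions might look like.

```python
# Hypothetical sketch: auditing a credit model's decisions for the kind of
# spurious name-based bias described above. The grouping rule, sample data,
# and four-fifths threshold are illustrative assumptions, not a real audit.

VOWELS = set("aeiou")

def ends_in_vowel(surname: str) -> bool:
    s = surname.strip().lower()
    return bool(s) and s[-1] in VOWELS

def approval_rate(decisions) -> float:
    return sum(decisions) / len(decisions) if decisions else 0.0

def disparate_impact_check(records, threshold: float = 0.8):
    """records: list of (surname, approved) pairs pulled from the model's logs."""
    group_a = [ok for name, ok in records if ends_in_vowel(name)]
    group_b = [ok for name, ok in records if not ends_in_vowel(name)]
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    high = max(rate_a, rate_b)
    ratio = min(rate_a, rate_b) / high if high else 1.0
    return ratio >= threshold, rate_a, rate_b, ratio

# Illustrative decisions a reviewer might sample (1 = approved, 0 = denied).
sample = [("Rossi", 0), ("Tanaka", 0), ("Moretti", 1), ("Smith", 1),
          ("Johnson", 1), ("Brown", 1), ("Sato", 0), ("Miller", 1)]

passed, rate_a, rate_b, ratio = disparate_impact_check(sample)
print(f"vowel-ending approval: {rate_a:.0%}, other: {rate_b:.0%}, "
      f"ratio: {ratio:.2f}, {'passes' if passed else 'fails'} the four-fifths rule")
```

In this toy data the vowel-ending group is approved far less often, so the audit flags the model for review; a real audit would of course use protected attributes and much larger samples.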

Mintz points to one recent workplace survey that examined the views of employers and employees in a number of countries with respect to AI ethics policies, potential misuse, liability, and regulation.

“More than half of the employers questioned said their companies do not currently have a written policy on the ethical use of AI or bots,” Mintz says. “Another 21 percent expressed a concern that companies could use AI in an unethical manner.”

Progress is being made on some fronts, though.

In Australia, five major companies are involved in a trial run of eight principles developed as part of the government’s AI Ethics Framework. The idea behind the principles is to ensure that AI systems benefit individuals, society and the environment; respect human rights; don’t discriminate; and uphold privacy rights and data protection.

Mintz says the next step in the U.S. should be for the business community to work with government agencies in the same way to identify ethical AI principles.

“Unfortunately,” he says, “it seems the process is moving slowly and needs a nudge from technology companies, most of which are directly affected by the ethical use of AI.”

Dr. Steven Mintz (www.stevenmintzethics.com), author of Beyond Happiness and Meaning: Transforming Your Life Through Ethical Behavior, comments frequently on ethical issues in society and in business.
