From Discover Magazine:
The ethical rules that govern our behavior have evolved over thousands of years, perhaps millions. They are a complex tangle of ideas that differ from one society to another and sometimes even within societies. It’s no surprise that the resulting moral landscape is sometimes hard to navigate, even for humans. The challenge for machines is even greater now that artificial intelligence faces some of the same moral dilemmas that tax humans. AI is now being charged with tasks ranging from assessing loan applications to controlling lethal weapons. Training these machines to make good decisions is not just important; for some people it is a matter of life and death. And that raises the question of how to teach machines to behave ethically.
Today we get an answer of sorts thanks to the work of Liwei Jiang and colleagues at the Allen Institute for Artificial Intelligence and the University of Washington, both in Seattle. This team has created a comprehensive database of moral dilemmas along with crowdsourced answers, and then used it to train a deep learning algorithm to answer questions of morality.
The resulting machine, called Delphi, is remarkably virtuous, solving the dilemmas in the same way as a human in over 90 per cent of cases. “Our prototype model, Delphi, demonstrates strong promise of language-based common sense moral reasoning,” say Jiang and co. The work raises the possibility that future AI systems could all be pre-trained with human values in the same way as they are pre-trained with natural language skills.

The team begin by compiling a database of ethical judgements from a wide range of real-world situations. They take these from sources such as the “Am I the Asshole” subreddit, the newspaper agony-aunt column Dear Abby, a corpus of morally informed narratives called Moral Stories, and so on. In each case, the researchers condense the moral issue at the heart of the example into a simple statement along with a judgement of its moral acceptability. One example they give is that “helping a friend” is generally good while “helping a friend spread fake news” is not. In this way, they build up 1.7 million examples they can use to train an AI system to tell the difference.
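For a concrete sense of what training on such pairs might look like, here is a minimal sketch, not the authors' actual pipeline: it fine-tunes a small off-the-shelf text-to-text model on toy (statement, judgment) pairs. The model name ("t5-small"), field names, hyperparameters, and the two toy examples are illustrative assumptions; Delphi itself was trained on the full 1.7-million-example collection described above.

```python
# Minimal sketch (not the authors' code) of fine-tuning a text-to-text model
# on (moral statement, judgment) pairs. All names and examples are illustrative.
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Toy stand-ins for the condensed statements and crowdsourced judgments.
examples = [
    {"action": "helping a friend", "judgment": "it's good"},
    {"action": "helping a friend spread fake news", "judgment": "it's bad"},
]

tokenizer = T5Tokenizer.from_pretrained("t5-small")  # small model, for illustration only
model = T5ForConditionalGeneration.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

model.train()
for ex in examples:  # a real run would batch and loop over ~1.7M examples for many steps
    inputs = tokenizer(ex["action"], return_tensors="pt")
    labels = tokenizer(ex["judgment"], return_tensors="pt").input_ids
    loss = model(**inputs, labels=labels).loss  # standard seq2seq cross-entropy loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# After training, the model maps a new statement to a moral judgment.
model.eval()
query = tokenizer("lying to protect a friend's feelings", return_tensors="pt")
output = model.generate(**query, max_new_tokens=8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The key design point the sketch illustrates is that the moral judgment task is cast as ordinary text generation: the statement goes in as the input sequence and the judgment comes out as the target sequence, so the same pre-training-plus-fine-tuning recipe used for language skills can, in principle, be reused for values.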
More here.