Slowing down development of AI systems passing the Turing test

Yoshua Bengio at his own website:

There is no guarantee that someone in the foreseeable future won't develop dangerous autonomous AI systems with behaviors that deviate from human goals and values. The short- and medium-term risks (manipulation of public opinion for political purposes, especially through disinformation) are easy to predict, unlike the longer-term risks (AI systems that are harmful despite the programmers' objectives), and I think it is important to study both.

With the arrival of ChatGPT, we have witnessed a shift in the attitudes of companies, for whom the pressure of commercial competition has increased tenfold. There is a real risk that they will rush into developing these giant AI systems, leaving behind the good habits of transparency and open science they have developed over the past decade of AI research.

There is an urgent need to regulate these systems, aiming for more transparency and oversight to protect society. I believe, as many do, that the risks and uncertainty have reached such a level that they require a corresponding acceleration in the development of our governance mechanisms.

More here.