New systems like ChatGPT are enormously entertaining, even mind-boggling, but also unreliable and potentially dangerous

Gary Marcus in his Substack newsletter:

[Image: Avatar of S. Abbas Raza, created by Lensa AI.]

The core of that threat, the cheap mass production of plausible misinformation, comes from the combination of three facts:

• these systems are inherently unreliable, frequently making errors of both reasoning and fact, and prone to hallucination; ask them to explain why crushed porcelain is good in breast milk, and they may tell you that “porcelain can help to balance the nutritional content of the milk, providing the infant with the nutrients they need to help grow and develop”. (Because the systems are random, highly sensitive to context, and periodically updated, any given experiment may yield different results on different occasions.)

• they can easily be automated to generate misinformation at unprecedented scale.

• they cost almost nothing to operate, and so they are on a path to reducing the cost of generating disinformation to zero. Russian troll farms spent more than a million dollars a month during the 2016 election; nowadays you can get your own custom-trained large language model, for keeps, for less than $500,000. Soon the price will drop further.

More here.