For the Love of God, AI Chatbots Can’t ‘Decide’ to Do Anything

Janus Rose in Vice:

The incessant hype over AI tools like ChatGPT is inspiring lots of bad opinions from people who have no idea what they’re talking about. From a New York Times columnist describing a chatbot as having “feelings” to right-wing grifters claiming ChatGPT is “woke” because it won’t say the N-word, the hype train seems to chug along faster with every passing week, leaving a trail of misinformation and magical thinking about the technology’s capabilities and limitations.

The latest is from a group of people that knows so little about technology that last week it considered banning TikTok because it uses Wi-Fi to access the internet: politicians. On Monday, Connecticut Senator Chris Murphy tweeted an alarming missive claiming that “ChatGPT taught itself to do advanced chemistry.” “It decided to teach itself, then made its knowledge available to anyone who asked,” the senator wrote ominously. “Something is coming. We aren’t ready.”

As many AI experts pointed out in the replies, virtually every word of these statements is wrong. “ChatGPT is a system of averages. It is a language model and only understand[s] how to generate text,” reads a Twitter community note that was later appended to Murphy’s tweet. “It can ‘appear’ to understand text in the same way that AI can ‘appear’ to create images. It is not actual learning.” While it’s true that large language models like ChatGPT can perform tasks they weren’t specifically trained for, it’s not because these AI tools “decided” to brush up on their chemical equations.

More here.