Randy Sparkman at Literary Hub:
By now, we’ve seen the ChatGPT parlor tricks. We’re past the novelty of a cake recipe in the style of Walt Whitman or a weather report by painter Bob Ross. For the one-hundredth time, we understand that the current incarnation of large language models makes mistakes. We’ve done our best to strike a studied balance between doomers and evangelists. And we’ve become less skeptical of “emergent” flashes of insight from the aptly named foundational models. At the same time, Google, Meta, and a list of hopeful giant swatters have released credible competitors to ChatGPT.
For all those reasons, global use of ChatGPT recently declined for the first time since its November 2022 release. Perhaps now we’re ready to get to more elemental questions about what generative language artificial intelligence can or cannot do for us in the everyday.
I come to this discussion from a long career managing IT systems in large enterprises, where, as MIT’s Nicholas Negroponte predicted in 1995, everything that could be digitized was digitized. I’m not a cognitive scientist, but I understand enough about how large language models work, and about how humans separate digital wheat from chaff, to begin to think about what people might do with software that has an opinion of its own.
More here.