Matthew Sparkes in New Scientist:
There is a global rush for GPU chips, the graphics processors that were originally designed to run video games but are now also used to train and run AI models, with demand outstripping supply. Studies have also shown that the energy use of AI is growing rapidly, rising 100-fold from 2012 to 2021, with most of that energy derived from fossil fuels. These issues have led to suggestions that the ever-increasing scale of AI models will soon reach an impasse.
Another problem with current AI hardware is that data must be shuttled back and forth between memory and processors, an operation that creates significant bottlenecks. One proposed solution is the analogue compute-in-memory (CiM) chip, which performs calculations directly within its own memory and which IBM has now demonstrated at scale.
IBM’s device contains 35 million so-called phase-change memory cells – a form of CiM – each of which can be set to one of two states, like a transistor in a conventional chip, but also to varying degrees between them.
More here.
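The article doesn't go into the device physics, but a rough way to picture what an analogue CiM array computes is a matrix-vector product done in place: weights are stored as cell conductances, inputs arrive as voltages on the rows, and currents summing down each column (Ohm's law plus Kirchhoff's current law) yield the multiply-accumulate result without the weights ever leaving memory. The sketch below is a minimal numerical illustration of that idea, not IBM's design; the number of conductance levels and the programming noise are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def program_conductances(weights, levels=16, noise_sigma=0.02):
    """Encode a weight matrix as analogue conductances.

    Each cell is set not just to 'on' or 'off' but to one of `levels`
    intermediate states, and programming is imperfect, so Gaussian noise
    is added. Both parameters are assumptions for illustration, not
    IBM's published specifications.
    """
    w_max = np.max(np.abs(weights))
    # Quantise each weight to the nearest of `levels` evenly spaced states.
    quantised = np.round(weights / w_max * (levels - 1)) / (levels - 1) * w_max
    # Analogue programming error: each cell lands near, not on, its target.
    return quantised + rng.normal(0.0, noise_sigma * w_max, weights.shape)

def cim_matvec(conductances, inputs):
    """One in-memory multiply-accumulate.

    Inputs act as row voltages; each cell passes a current proportional
    to voltage times conductance, and the currents sum down each column.
    The matrix-vector product appears as column currents, with no data
    shuttled between memory and a separate processor.
    """
    return inputs @ conductances

# Compare the noisy analogue result against an exact digital product.
W = rng.normal(size=(8, 4))   # a small weight matrix
x = rng.normal(size=8)        # an input vector
G = program_conductances(W)

print("digital :", np.round(x @ W, 3))
print("analogue:", np.round(cim_matvec(G, x), 3))
```

The comparison shows the trade-off: the analogue result only approximately tracks the digital one, and the quantisation and noise terms stand in for the device-level imperfections that make demonstrating this at the scale of 35 million cells notable.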