Emerging Nonvolatile Memories for Machine Learning
Digital computers have become exponentially faster for decades, but that progress now faces serious obstacles. Transistor scaling, described by Moore's law, has slowed markedly in recent years, ending the era of predictable performance improvements. At the same time, the data-centric computing demands fuelled by machine learning applications are growing rapidly, and current computing systems -- even at the historical rate of improvement driven by Moore's law -- cannot keep up with these enormous computational demands. One promising alternative is analogue in-memory computing, in which specialised systems operating on physical principles accelerate specific tasks. We explore how emerging nonvolatile memories can be used to implement such systems tailored for machine learning. In particular, we discuss how memristive crossbar arrays can accelerate key linear algebra operations used in neural networks, what technological challenges remain, and how they can be overcome.
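The core idea behind crossbar acceleration is that a memristive array computes a matrix-vector product in a single step: each device contributes an Ohm's-law current proportional to its conductance times the applied voltage, and Kirchhoff's current law sums those contributions along every column. The sketch below is a minimal numerical illustration of this mapping, assuming a common differential-pair scheme for signed weights; the matrix, voltages, and conductance scale are hypothetical values chosen for clarity, not from the text.

```python
import numpy as np

# Hypothetical weight matrix of a small neural-network layer (illustrative).
W = np.array([[ 0.5, -0.2,  0.1],
              [-0.3,  0.8, -0.6]])

# Physical conductances are non-negative, so signed weights are often
# mapped onto a pair of arrays holding positive and negative parts.
g_scale = 1e-4                       # siemens per unit weight (assumed)
G_pos = np.maximum(W, 0.0) * g_scale
G_neg = np.maximum(-W, 0.0) * g_scale

# Input activations encoded as row voltages (illustrative values).
v = np.array([0.2, 1.0, -0.4])

# Each column current is the Kirchhoff sum of per-device Ohm's-law
# currents: I_i = sum_j G_ij * v_j, i.e. a matrix-vector product
# performed by the physics of the array in one step.
I = (G_pos - G_neg) @ v

# Undo the conductance scaling to recover the digital result.
y = I / g_scale
print(y)        # equals W @ v in this idealised, noise-free model
```

In a real device the conductances are programmed by the memory cells themselves and the multiply-accumulate happens in the analogue domain, so the operation's cost no longer grows with the number of weights read out; the technological challenges mentioned above (device variability, noise, limited precision) are what separate this idealised model from hardware.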