
Wednesday, October 12, 2022

New Computing Architecture

At a school I attended, worth a look. Technical

Researchers at the University of Pennsylvania Propose a New Computing Architecture Ideal for Artificial Intelligence (AI)

By Khushboo Gupta, October 5, 2022

Conventional computing architectures severely constrain artificial intelligence's ability to improve technology. In traditional models, memory storage and computing occur in separate areas of the machine, so data must be transported from where it is stored to a CPU or GPU for processing. The most significant disadvantage of this design is that the movement takes time, which limits even the most powerful processing units available: whenever compute throughput exceeds the rate at which memory can deliver data, the processor stalls. These delays become a severe issue when dealing with the massive amounts of data required for machine learning and AI applications.
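
To make the bottleneck concrete, here is a minimal back-of-the-envelope sketch (mine, not from the article) that estimates whether a matrix-vector multiply is limited by memory bandwidth or by compute. The hardware numbers are illustrative assumptions, not measurements of any real chip.

# Roofline-style estimate: is a matrix-vector multiply memory-bound?
# All hardware numbers below are illustrative assumptions.

PEAK_FLOPS = 10e12   # assumed peak compute: 10 TFLOP/s
MEM_BW     = 500e9   # assumed memory bandwidth: 500 GB/s

def matvec_bound(n, dtype_bytes=4):
    """Estimate times for an n x n matrix-vector multiply (float32)."""
    flops = 2 * n * n                             # one multiply + one add per weight
    bytes_moved = (n * n + 2 * n) * dtype_bytes   # matrix + input + output
    t_compute = flops / PEAK_FLOPS
    t_memory = bytes_moved / MEM_BW
    bound = "memory-bound" if t_memory > t_compute else "compute-bound"
    return t_compute, t_memory, bound

for n in (1024, 8192):
    tc, tm, bound = matvec_bound(n)
    print(f"n={n}: compute {tc*1e6:.2f} us, memory {tm*1e6:.2f} us -> {bound}")

With these assumed numbers, moving the data costs well over an order of magnitude more time than computing on it; that gap is exactly what compute-in-memory designs target.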

Researchers have focused on hardware innovation to achieve the necessary gains in speed, agility, and energy efficiency as AI software grows in sophistication and the sensor-heavy Internet of Things produces ever-larger datasets. A team from the University of Pennsylvania's School of Engineering and Applied Science, in collaboration with researchers from Sandia National Laboratories and Brookhaven National Laboratory, has created a new computing architecture based on compute-in-memory (CIM), which is ideal for AI. In CIM systems, processing and storage take place in the same location, which reduces energy consumption and eliminates transfer time. What makes the team's CIM design stand out is that it contains no transistors, an approach tailored to the way Big Data applications have reshaped modern computing.
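
As a rough illustration of the compute-in-memory idea (a generic sketch, not the Penn team's actual circuit): in a resistive crossbar, weights are stored as cell conductances, inputs are applied as voltages, and Kirchhoff's current law sums the products along each column, so the memory array itself performs a matrix-vector multiply in place.

import numpy as np

# Toy model of an analog crossbar doing compute-in-memory.
# Weights live in the array as conductances; no data is shipped to a CPU/GPU.
# Generic CIM illustration, not the team's AlScN design.

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # cell conductances (stored weights)
V = rng.uniform(0.0, 0.5, size=4)        # input voltages (activations)

# Ohm's law per cell (I = G * V) plus Kirchhoff summation down each column:
I = V @ G                                 # column currents = weighted sums

print("column currents (matrix-vector product):", I)
print("matches digital computation:", np.allclose(I, G.T @ V))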

Transistors limit the speed at which data can be accessed, even in a compute-in-memory architecture: they add considerable wiring to a chip's overall circuitry and therefore cost more time, area, and energy than is ideal for AI applications. The team's transistor-free design is distinctive because it is straightforward, fast, and energy-efficient. The researchers emphasize that the advance is not limited to circuit-level design. Their earlier materials science research on a semiconductor known as scandium-alloyed aluminum nitride (AlScN) was the foundation for the new computing architecture. AlScN is capable of ferroelectric switching, which makes it faster and more energy-efficient than other nonvolatile memory components. Another crucial feature is that the material can be deposited at temperatures low enough to be compatible with silicon foundries, which keeps the architecture space-efficient, a key requirement for compact chip designs. .... (much more, Computing Technical)
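
To see why ferroelectric switching gives nonvolatile storage, here is a toy physics sketch (a generic Landau-style model, not AlScN-specific; the coefficients are arbitrary): the free energy F(P) = -a*P^2 + b*P^4 - E*P has two stable polarization minima at zero applied field, so a cell holds a "0" or "1" without power, and an applied field E tilts one well to flip the bit.

import numpy as np

# Toy Landau double-well model of a ferroelectric memory bit.
# Coefficients a, b are arbitrary illustrative values, not AlScN parameters.
a, b = 1.0, 1.0

def free_energy(P, E=0.0):
    return -a * P**2 + b * P**4 - E * P

P = np.linspace(-1.5, 1.5, 3001)

# With no applied field there are two degenerate minima: the stored bit
# persists with zero power, i.e. the memory is nonvolatile.
F0 = free_energy(P, E=0.0)
minima = P[(F0 < np.roll(F0, 1)) & (F0 < np.roll(F0, -1))]
print("stable polarization states at E=0:", np.round(minima, 3))

# A sufficiently large write field removes one well, switching the polarization.
F1 = free_energy(P, E=1.0)
print("preferred state under write field:", round(P[np.argmin(F1)], 3))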

