
McKelvey School of Engineering work could help usher in a new era of computing


The future of computing could be analog.

The digital design of our everyday computers is good for reading email and playing games, but today’s problem-solving tasks require working with enormous amounts of data. The need to both store and process all of that information can create performance bottlenecks because of the way computers are built.

The next computing revolution could come from a new type of hardware called processing-in-memory (PIM), an emerging computing paradigm that merges the memory and the processing unit and does its calculations using the physical properties of the machine itself – no 1s or 0s needed to do the processing digitally.

At Washington University in St. Louis, researchers in the lab of Xuan “Silvia” Zhang, associate professor in the Preston M. Green Department of Electrical and Systems Engineering at the McKelvey School of Engineering, have designed a new PIM circuit that brings the flexibility of neural networks to bear on PIM computing. The circuit has the potential to increase the performance of PIM computing by orders of magnitude beyond its current theoretical capabilities.

Their research was published online Oct. 27 in the journal IEEE Transactions on Computers. The work was a collaboration with Li Jiang at Shanghai Jiao Tong University in China.

Traditionally designed computers are built using a von Neumann architecture. Part of this design separates the memory – where data is stored – from the processor – where the actual computing is performed.

“Today’s computing challenges are data-intensive,” Zhang said. “We have to process tons of data, which creates a performance bottleneck at the CPU and memory interface.”

PIM computers aim to circumvent this problem by merging memory and processing into a single unit.

Computation, especially the computation behind today’s machine learning algorithms, is essentially a complex – extremely complex – series of additions and multiplications. In a traditional digital central processing unit (CPU), these are carried out using transistors, which are essentially voltage-controlled gates that either allow current to flow or block it. These two states represent 1 and 0, respectively. Using this digital code – binary code – a processor can perform any of the arithmetic needed to run a computer.
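For a rough sense of what that workload looks like, here is a minimal Python sketch of the multiply-accumulate operations at the heart of a neural-network layer; the array sizes are illustrative only and not drawn from the paper.

```python
import numpy as np

# A single neural-network layer boils down to many multiply-accumulate steps:
# every output is a sum of (input value x weight) products.
inputs = np.random.rand(128)          # illustrative input vector
weights = np.random.rand(64, 128)     # illustrative weight matrix

outputs = np.zeros(64)
for i in range(64):
    for j in range(128):
        outputs[i] += weights[i, j] * inputs[j]   # one multiply, one add

# The same result in a single call, which is how CPUs and GPUs batch the work.
assert np.allclose(outputs, weights @ inputs)
```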

The type of PIM that Zhang’s lab is working on is called resistive random-access memory PIM, or RRAM-PIM. Whereas conventional memory stores bits as charge on a capacitor in each memory cell, RRAM-PIM relies on resistors, hence the name. These resistors serve as both the memory and the processor.

The payoff? “In resistive memory, you don’t have to translate to digital, or binary. You can stay in the analog domain,” Zhang said. This is the key to making RRAM-PIM computers so much more efficient.

“If you need to add, you connect two currents,” Zhang said. “If you need to multiply, you can change the resistor value.”
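As an illustration of that idea (a toy model, not the authors’ circuit), the sketch below leans on two familiar facts: by Ohm’s law, the current through a memory cell is its conductance times the applied voltage, so programming a conductance performs a multiplication; and by Kirchhoff’s current law, currents meeting on a shared wire simply add.

```python
# Toy model of analog multiply-and-add in resistive memory cells
# (illustrative only; real RRAM devices are far more complicated).

def cell_current(voltage, conductance):
    # Ohm's law: I = G * V -> the multiplication happens in the physics itself
    return conductance * voltage

# Kirchhoff's current law: currents flowing into the same wire add up.
voltages = [0.3, 0.7, 0.1]            # inputs encoded as voltages
conductances = [0.5, 0.2, 0.9]        # weights encoded as programmed conductances

column_current = sum(cell_current(v, g) for v, g in zip(voltages, conductances))
print(column_current)   # the analog "answer": a weighted sum, no digital steps
```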

But at some point, the information has to be translated into a digital format to interface with the technologies we are familiar with. This is where RRAM-PIM hits its bottleneck – converting the analog information into a digital format. That is where Zhang and Weidong Cao, a postdoctoral research associate in Zhang’s lab, came in, introducing neural approximators.

“A neural approximator is built on a neural network that can approximate arbitrary functions,” Zhang said. Given any function, the neural approximator can perform the same function, but more efficiently.

In this case, the team designed neural approximator circuits that could help eliminate the bottleneck.

In the RRAM-PIM architecture, once the resistors in a crossbar array have done their calculations, the answers are translated into a digital format. What that means in practice is adding up the results from each column of resistors on a circuit. Each column produces a partial result.

Each of these partial results, in turn, must then be converted into digital information in what is called an analog-to-digital conversion, or ADC. The conversion is energy intensive.
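Here is a hedged sketch of where that cost shows up, under the simplifying assumption that every column’s analog partial sum needs its own analog-to-digital conversion; the energy figure is a made-up placeholder, not a measured value.

```python
import numpy as np

# Conventional readout of an RRAM crossbar (illustrative model):
# each column's analog partial sum gets its own analog-to-digital conversion.
rng = np.random.default_rng(0)
num_columns = 64
partial_sums = rng.random(num_columns)        # stand-ins for analog column outputs

ENERGY_PER_ADC = 1.0                          # placeholder unit of energy

def adc(value, bits=8):
    # Quantize an analog value to a digital code (simplified ADC model).
    levels = 2 ** bits - 1
    return round(value * levels) / levels

digital_results = [adc(s) for s in partial_sums]
total_energy = ENERGY_PER_ADC * num_columns   # one conversion per column
print(f"{num_columns} conversions, energy = {total_energy}")
```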

Neural approximation makes the process more efficient.

Instead of adding up each column one at a time, the neural approximator circuit can perform multiple calculations – down columns, across columns, or in whichever way is most efficient. This leads to fewer ADCs and increased computing efficiency.

The most important part of this work, Cao said, was determining how far they could reduce the number of digital conversions occurring along the outer edge of the circuit. They found that the neural approximator circuits pushed efficiency as far as it can go.

“No matter how many analog partial sums the columns of the RRAM crossbar array generate – 18, 64, or 128 – we only need one analog-to-digital conversion,” Cao said. “We used a hardware implementation to achieve the theoretical lower bound.”
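To make the contrast concrete, the sketch below compares the two readout strategies in a much-simplified way: digitizing every column versus combining the columns’ analog partial sums with a small network before a single digitization. The tiny network, its random weights, and the column count are invented for illustration and are not the authors’ design.

```python
import numpy as np

rng = np.random.default_rng(1)
partial_sums = rng.random(64)                 # analog partial sums from 64 columns

def adc(value, bits=8):
    levels = 2 ** bits - 1
    return round(value * levels) / levels

# Baseline: digitize every column, then combine digitally -> 64 conversions.
baseline = sum(adc(s) for s in partial_sums)

# Neural-approximator idea (simplified): a small network combines the partial
# sums in the analog domain, so only the final value is digitized -> 1 conversion.
w1 = rng.random((8, 64)) * 0.1                # illustrative weights, not trained
w2 = rng.random(8) * 0.1
hidden = np.tanh(w1 @ partial_sums)           # analog-domain combination
approx = adc(float(w2 @ hidden))              # the single analog-to-digital step

print("conversions: per-column readout = 64, approximator readout = 1")
```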

Engineers are already working on large-scale prototypes of PIM computers, but they have been facing several challenges, Zhang said. Using Zhang and Cao’s neural approximators could eliminate one of those challenges – the bottleneck – proving that this new computing paradigm has the potential to be much more powerful than the current framework suggests. Not just one or two times more powerful, but 10 or 100 times more.

“Our technology allows us to get closer to this type of computer,” Zhang said.