19/02/2021

Samsung announces high-bandwidth memory with processing-in-memory architecture

Computer engineers have long worked to remove the bottlenecks that arise from shuttling data back and forth between a computer's CPU and its memory chips. Most efforts to do so have involved adding small amounts of fast cache memory to the CPU; unfortunately, doing so increases energy consumption, which in turn increases heat production.

In this new effort, the team at Samsung has taken the opposite approach, giving memory chips the ability to take on some of the processing. With the new HBM-PIM, Samsung has placed what it describes as "a DRAM optimized AI engine inside of a memory bank." This reduces the load on the CPU by offloading some of its work to the memory banks themselves. Not only is the CPU's workload reduced, but processing speed increases because less data has to move across the memory bus. Specs for the HBM-PIM include a programmable computing unit (PCU) running at 300 MHz, controlled by the host CPU using conventional memory commands. With this approach, the PCU can be instructed to carry out FP16 calculations directly inside the DRAM unit. Notably, the HBM-PIM can operate as ordinary RAM when a system is running applications that have not been written for it.
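The division of labor described above can be sketched conceptually. The class and method names below are hypothetical illustrations, not Samsung's actual interface: the point is that in PIM mode an FP16 operation runs where the data lives and only a small result crosses the bus, while plain-RAM mode still lets the host read raw data as from ordinary DRAM.

```python
import numpy as np

# Conceptual sketch of processing-in-memory (hypothetical names; this is
# NOT Samsung's API, only an illustration of the division of labor).

class PIMBank:
    """A memory bank with a small compute unit attached (the 'PCU')."""

    def __init__(self, data):
        # Data lives in the bank as FP16, the format the PCU operates on.
        self.data = np.asarray(data, dtype=np.float16)

    def read(self):
        # Plain-RAM mode: the host reads raw data, as with ordinary DRAM.
        return self.data

    def pim_multiply_add(self, weights, bias):
        # PIM mode: the FP16 multiply-accumulate happens "inside" the bank,
        # so only one small scalar result crosses the memory bus.
        w = np.asarray(weights, dtype=np.float16)
        return np.float16(np.dot(self.data, w) + np.float16(bias))


bank = PIMBank([1.0, 2.0, 3.0, 4.0])

# Host-side path: move all the data to the CPU, then compute there.
host_result = float(np.dot(bank.read(), np.float16([0.5, 0.5, 0.5, 0.5])))

# PIM path: issue one command to the bank, receive one scalar back.
pim_result = float(bank.pim_multiply_add([0.5, 0.5, 0.5, 0.5], 0.0))

assert host_result == pim_result == 5.0
```

Both paths give the same answer; the difference is traffic. The host-side path moves the whole vector across the bus, while the PIM path moves only the result, which is where the reported speed and energy gains come from.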

Samsung notes that when it tested the new technology with its existing HBM2 Aquabolt systems, system performance doubled and energy consumption fell by 70%. It also noted that installing HBM-PIM in existing systems would not require any other changes to in-place hardware or software. The HBM-PIM technology is currently being tested, with results expected in the first half of this year. Company representatives will present a paper describing the new technology at this year's International Solid-State Circuits Conference (ISSCC), which is being held virtually.

https://techxplore.com/news/2021-02-samsung-high-bandwidth-memory-processing-in-memory.html
