
NEO Semiconductor Develops 3D X-AI Chip Technology, to Replace DRAM Chips Inside HBM


NEO Semiconductor, a developer of 3D NAND flash memory and 3D DRAM, has completed the development of its 3D X-AI chip technology. The technology is intended to replace the DRAM chips currently used inside high bandwidth memory (HBM), addressing data bus bottlenecks by performing AI processing directly in 3D DRAM.

The new 3D X-AI reduces the large volume of data transferred between HBM and GPUs during AI workloads. This improves the performance, power consumption, and cost of AI chips for applications such as generative AI.

These chips deliver a 100-fold performance increase by integrating 8,000 neuron circuits for AI processing within the 3D memory. They also achieve a 99% reduction in power consumption by minimizing the data that must be transferred to the GPU for calculation, which in turn cuts the heat generated by the data bus. In addition, they increase memory density eightfold, with 300 memory layers allowing HBM to store larger AI models.

“Current AI Chips waste significant amounts of performance and power due to architectural and technological inefficiencies,” said Andy Hsu, Founder & CEO of NEO Semiconductor. “The current AI Chip architecture stores data in HBM and relies on a GPU to perform all calculations. This separated data storage and data processing architecture makes the data bus an unavoidable performance bottleneck. Transferring huge amounts of data through the data bus causes limited performance and very high power consumption. 3D X-AI can perform AI processing in each HBM chip. This can drastically reduce the data transferred between HBM and GPU to improve performance and reduce power consumption dramatically.”


A single 3D X-AI die contains 300 layers of 3D DRAM cells with 128 GB of capacity, plus one neural circuit layer with 8,000 neurons. NEO estimates it can support up to 10 TB/s of AI processing throughput per die. Stacking twelve 3D X-AI dies in an HBM package yields 120 TB/s of processing throughput, which NEO says results in a 100X performance increase.
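For readers who want to sanity-check the arithmetic behind those figures, here is a minimal back-of-the-envelope sketch in Python using the per-die numbers above. The 1.2 TB/s reference bandwidth used for the 100X comparison is an assumption for illustration only; the announcement does not state its baseline.

```python
# Back-of-the-envelope check of the figures quoted above.
# Per-die constants come from NEO's announcement; the baseline
# bandwidth for the 100X comparison is an assumption.

LAYERS_PER_DIE = 300          # 3D DRAM cell layers per 3D X-AI die
CAPACITY_PER_DIE_GB = 128     # GB of 3D DRAM per die
NEURONS_PER_DIE = 8_000       # neuron circuits in the neural circuit layer
THROUGHPUT_PER_DIE_TBPS = 10  # estimated AI processing throughput per die (TB/s)
DIES_PER_STACK = 12           # dies stacked in one HBM-style package

stack_throughput_tbps = DIES_PER_STACK * THROUGHPUT_PER_DIE_TBPS  # 120 TB/s
stack_capacity_gb = DIES_PER_STACK * CAPACITY_PER_DIE_GB          # derived: 1,536 GB

# Assumed reference bandwidth for the "100X" claim (not stated in the release).
BASELINE_TBPS = 1.2
speedup = stack_throughput_tbps / BASELINE_TBPS

print(f"Stack throughput: {stack_throughput_tbps} TB/s")
print(f"Stack capacity:   {stack_capacity_gb} GB")
print(f"Speedup vs. {BASELINE_TBPS} TB/s baseline: {speedup:.0f}X")
```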


“The application of 3D X-AI technology can accelerate the development of emerging AI use cases and promote the creation of new ones,” said Jay Kramer, President of Network Storage Advisors. “Harnessing 3D X-AI technology to create the next generation of optimized AI Chips will spark a new era of innovation for AI Apps.”
