Seoul-Based Semiconductor Manufacturer FuriosaAI Introduces AI Inference Chip, RNGD

AI semiconductor manufacturer FuriosaAI has unveiled RNGD (pronounced "Renegade"), an AI accelerator, at Hot Chips 2024. RNGD is a data center accelerator for high-performance large language model (LLM) and multimodal model inference, entering an AI hardware landscape long defined by legacy chipmakers and high-profile startups. Founded in 2017 by three engineers with backgrounds at AMD, Qualcomm, and Samsung, the company has pursued a strategy of rapid innovation and product delivery, which enabled the fast development and unveiling of RNGD.

 

Furiosa completed the full bring-up of RNGD after receiving the first silicon samples from its manufacturing partner, TSMC. This achievement reinforces the company's track record of fast, seamless technology development. With its first-generation chip, introduced in 2021, Furiosa submitted its first MLPerf benchmark results within three weeks of receiving silicon and achieved a 113% performance increase in the next submission through compiler enhancements alone.

 

Early testing of RNGD has shown promising results with large language models such as GPT-J and Llama 3.1. A single RNGD PCIe card delivers a throughput of 2,000 to 3,000 tokens per second (depending on context length) for models with around 10 billion parameters.
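As a rough sense of scale, the quoted throughput range can be turned into an average per-token latency. The 2,000-3,000 tokens-per-second figures come from the announcement above; the helper function and everything else in this sketch are illustrative, not part of any FuriosaAI tooling.

```python
# Back-of-the-envelope conversion of the announced throughput figures.
# Only the 2,000-3,000 tokens/s range is from the source; the code is
# a hypothetical illustration.

def per_token_latency_ms(tokens_per_second: float) -> float:
    """Average time to produce one token, in milliseconds."""
    return 1000.0 / tokens_per_second

low, high = 2000.0, 3000.0
print(f"~{per_token_latency_ms(high):.2f} to "
      f"{per_token_latency_ms(low):.2f} ms per token")
```

At the claimed rates, a single card would average roughly a third to half a millisecond per generated token for a ~10B-parameter model.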

 

“The launch of RNGD is the result of years of innovation, leading to a one-shot silicon success and exceptionally rapid bring-up process. RNGD is a sustainable and accessible AI computing solution that meets the industry’s real-world needs for inference,” said June Paik, Co-Founder and CEO of FuriosaAI. “With our hardware now starting to run LLMs at high performance, we’re entering an exciting phase of continuous advancement. I am incredibly proud and grateful to the team for their hard work and continuous dedication.”

 

RNGD is built on a Tensor Contraction Processor (TCP) architecture, which balances efficiency, programmability, and performance. The TCP is paired with a compiler optimized to treat an entire model as a single operation. With a low 150W TDP and 48GB of HBM3 memory, RNGD efficiently runs models such as Llama 3.1 8B on a single card, outperforming leading GPUs.
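To make the "tensor contraction" idea concrete, here is a minimal NumPy sketch. A tensor contraction generalizes matrix multiplication by summing over shared axes of multi-dimensional arrays; the premise of a TCP-style design is that most LLM compute (projections, attention, MLPs) can be expressed as such contractions. The shapes, names, and einsum expression below are invented for illustration and say nothing about RNGD's actual internals.

```python
import numpy as np

# Illustrative tensor contraction: project activations into multiple
# attention heads in one einsum. All dimensions here are made up.
batch, seq, d_model, heads, d_head = 2, 4, 8, 4, 2

x = np.random.rand(batch, seq, d_model)      # activations
w = np.random.rand(d_model, heads, d_head)   # per-head projection weights

# Contract over the shared d_model axis ("d"), producing all heads at once.
q = np.einsum("bsd,dhk->bshk", x, w)

assert q.shape == (batch, seq, heads, d_head)
```

Expressing the whole layer as one contraction, rather than a chain of reshapes and matmuls, is the kind of formulation a contraction-centric compiler can schedule as a single fused operation.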

 

“The Furiosa RNGD AI Inference solution drives the adoption of green computing with Supermicro. By integrating Furiosa’s technology, Supermicro systems can reduce power consumption per card while still delivering exceptional inference performance,” said Vik Malyala, SVP, Technology and AI; President and Managing Director, EMEA of Supermicro.

 

“The collaboration between GUC and FuriosaAI to deliver RNGD with exceptional performance and power efficiency hinges on meticulous planning and execution. Achieving this requires a deep understanding of modern AI software and hardware. FuriosaAI has consistently demonstrated excellence from design to delivery, creating the most efficient AI inference chips in the industry,” said Aditya Raina, CMO of GUC.

 

The chip is currently sampling to early access customers, with broader availability expected in early 2025.
