
SiFive Brings Intelligence XM Series to Accelerate AI Workloads

SiFive, Inc., the gold standard for RISC-V computing, has introduced the SiFive Intelligence XM Series to accelerate high-performance AI workloads. The Intelligence XM Series is the first SiFive IP to include a highly scalable AI matrix engine, designed to shorten time to market for semiconductor companies building system-on-chip solutions for edge IoT, consumer devices, next-generation electric and autonomous vehicles, data centers, and more.


To support its customers and the broader RISC-V ecosystem, the company also announced its intention to open source a reference implementation of its SiFive Kernel Library (SKL).


SiFive’s new XM Series offers an extremely scalable and efficient AI compute engine. By integrating scalar, vector, and matrix engines, the XM Series lets customers make highly efficient use of memory bandwidth, and it continues SiFive’s legacy of delivering very high performance per watt for compute-intensive applications.


“Many companies are seeing the benefits of an open processor standard while they race to keep up with the rapid pace of change with AI. AI plays to SiFive’s strengths with performance per watt and our unique ability to help customers customize their solutions,” said Patrick Little, CEO of SiFive. “We’re already supplying our RISC-V solutions to five of the ‘Magnificent 7’ companies, and as companies pivot to a ‘software first’ design strategy we are working on new AI solutions with a wide variety of companies from automotive to data center and the intelligent edge and IoT.”


“RISC-V was originally developed to efficiently support specialized computing engines including mixed-precision operations,” said Krste Asanovic, SiFive Founder and Chief Architect. “This, coupled with the inclusion of efficient vector instructions and support for specialized AI extensions, is why many of the largest data center companies have already adopted RISC-V AI accelerators.”


Asanovic also detailed the new XM Series. Each XM Series cluster features four X-cores and delivers 16 TOPS (INT8) or 8 TFLOPS (BF16) per GHz, along with 1 TB/s of sustained memory bandwidth. Clusters can access memory through a high-bandwidth port or through a CHI port for coherent memory access. SiFive envisions systems built with no host CPU at all, or with hosts based on RISC-V, x86, or Arm.
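
To put those figures in context, here is a minimal back-of-the-envelope sketch. It uses only the per-GHz throughput and per-cluster bandwidth numbers quoted above; the clock frequency and cluster count in the example are purely illustrative assumptions, since SiFive has not disclosed target configurations.

```python
# Back-of-the-envelope peak-throughput estimate for an XM Series design.
# The 16 TOPS (INT8) and 8 TFLOPS (BF16) per GHz per cluster and the
# 1 TB/s sustained bandwidth per cluster come from SiFive's announcement;
# the clock frequency and cluster count below are illustrative assumptions.

INT8_TOPS_PER_GHZ_PER_CLUSTER = 16    # stated by SiFive
BF16_TFLOPS_PER_GHZ_PER_CLUSTER = 8   # stated by SiFive
BANDWIDTH_TBPS_PER_CLUSTER = 1.0      # stated sustained memory bandwidth

def peak_throughput(clock_ghz: float, clusters: int) -> dict:
    """Scale the per-GHz, per-cluster figures to a full configuration."""
    return {
        "int8_tops": INT8_TOPS_PER_GHZ_PER_CLUSTER * clock_ghz * clusters,
        "bf16_tflops": BF16_TFLOPS_PER_GHZ_PER_CLUSTER * clock_ghz * clusters,
        "bandwidth_tbps": BANDWIDTH_TBPS_PER_CLUSTER * clusters,
    }

if __name__ == "__main__":
    # Hypothetical example: four clusters running at 1.5 GHz.
    est = peak_throughput(clock_ghz=1.5, clusters=4)
    print(f"Peak INT8 throughput:  {est['int8_tops']:.0f} TOPS")
    print(f"Peak BF16 throughput:  {est['bf16_tflops']:.0f} TFLOPS")
    print(f"Sustained bandwidth:   {est['bandwidth_tbps']:.1f} TB/s")

    # Arithmetic intensity (INT8 ops per byte moved) needed to stay
    # compute-bound: both quantities carry a factor of 1e12, so the
    # ratio is simply TOPS / (TB/s).
    ai = est["int8_tops"] / est["bandwidth_tbps"]
    print(f"INT8 ops per byte to saturate compute: {ai:.0f}")
```

With those assumed numbers, a four-cluster design at 1.5 GHz would land at 96 TOPS (INT8), 48 TFLOPS (BF16), and 4 TB/s of sustained bandwidth; actual products will depend on the configuration each licensee builds.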

