VeriSilicon Hits 100M Mark: NPU in Global AI Chips

VeriSilicon (688521.SH) has announced that its Neural Network Processor (NPU) IP has now been integrated into more than 100 million AI-enabled chips across ten major application sectors worldwide: Internet of Things (IoT), wearables, smart TVs, smart home, security monitoring, servers, automotive electronics, smartphones, tablets, and smart healthcare.

Over the past seven years, VeriSilicon’s NPU IP has been adopted by 72 licensees and integrated into 128 AI SoCs, establishing the company as a global leader in embedded AI/NPU technology. The NPU IP is built on a low-power, programmable, and scalable architecture, making it a cost-effective neural network acceleration engine.

VeriSilicon’s latest VIP9000 series NPU IP delivers scalable, high-performance processing for both Transformer and Convolutional Neural Network (CNN) models, and supports all major frameworks, including PyTorch, ONNX, and TensorFlow. It also features advanced technologies such as 4-bit quantization and compression, which ease bandwidth constraints and enable AI-generated content (AIGC) and Large Language Model (LLM) algorithms to be deployed on embedded devices.
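
The bandwidth argument behind 4-bit quantization is straightforward: weights stored as 4-bit integers occupy a quarter of the memory, and generate a quarter of the memory traffic, of their float16 equivalents. The sketch below is a generic, illustrative symmetric 4-bit quantizer in NumPy; it is not VeriSilicon’s implementation, and the function names and per-tensor scaling scheme are assumptions made for demonstration only.

```python
import numpy as np

def quantize_int4(weights: np.ndarray):
    """Symmetric per-tensor 4-bit quantization: map float weights to integers in [-8, 7]."""
    scale = np.max(np.abs(weights)) / 7.0          # single scale for the whole tensor (assumption)
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    # Pack two 4-bit values per byte to realize the 4x storage saving versus float16.
    q_shifted = (q + 8).astype(np.uint8)           # shift to [0, 15] so each value fits in a nibble
    packed = ((q_shifted[0::2] << 4) | q_shifted[1::2]).astype(np.uint8)
    return packed, scale

def dequantize_int4(packed: np.ndarray, scale: float) -> np.ndarray:
    """Unpack the nibbles and map back to floating point for computation."""
    hi = (packed >> 4).astype(np.int16) - 8
    lo = (packed & 0x0F).astype(np.int16) - 8
    q = np.empty(hi.size + lo.size, dtype=np.int16)
    q[0::2], q[1::2] = hi, lo
    return q.astype(np.float32) * scale

# Toy weight tensor standing in for one layer of a large model.
w = np.random.randn(4096 * 4096).astype(np.float32)
packed, scale = quantize_int4(w)
w_hat = dequantize_int4(packed, scale)

print(f"float16 footprint: {w.size * 2 / 2**20:.1f} MiB")   # 32.0 MiB
print(f"int4 footprint:    {packed.nbytes / 2**20:.1f} MiB") # 8.0 MiB, i.e. 4x less traffic
print(f"max abs error:     {np.max(np.abs(w - w_hat)):.4f}")
```

Scaled across the billions of weights in an LLM, that 4x reduction is what makes it plausible to keep such models within the memory and bandwidth budgets of embedded devices.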

VeriSilicon’s FLEXA technology enhances the VIP9000 series by enabling seamless integration with Image Signal Processors (ISPs) and video encoders. This integration forms low-latency AI-ISP and AI-Video subsystems that require no DDR memory, offering customization options for the power- and space-constrained environments of deeply embedded applications.
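
Why skipping the DDR round trip lowers latency is a general dataflow point: if the ISP streams image tiles to the NPU through a small on-chip buffer, latency is bounded by the tile size rather than a full frame, and no frame-sized traffic ever touches external memory. The toy pipeline below illustrates that idea only; it is not FLEXA’s API, and all names (`isp_tiles`, `npu_process_tile`, `TILE_ROWS`) are hypothetical.

```python
import numpy as np

FRAME_H, FRAME_W = 1080, 1920   # one full-HD frame
TILE_ROWS = 8                   # the ISP hands off a few rows at a time (hypothetical size)

def isp_tiles(raw_frame: np.ndarray):
    """Stand-in ISP: emit processed tiles a few rows at a time instead of a whole frame."""
    for top in range(0, raw_frame.shape[0], TILE_ROWS):
        tile = raw_frame[top:top + TILE_ROWS].astype(np.float32) / 255.0  # toy 'ISP' step
        yield top, tile

def npu_process_tile(tile: np.ndarray) -> np.ndarray:
    """Stand-in NPU kernel: any per-tile operator (here a simple per-row statistic)."""
    return tile.mean(axis=1)

raw = np.random.randint(0, 256, (FRAME_H, FRAME_W), dtype=np.uint8)

# Streaming pipeline: only TILE_ROWS x FRAME_W floats are 'in flight' at any moment,
# so nothing frame-sized needs to be parked in external DDR between the two stages.
results = [npu_process_tile(tile) for _, tile in isp_tiles(raw)]

on_chip_bytes = TILE_ROWS * FRAME_W * 4
frame_bytes = FRAME_H * FRAME_W * 4
print(f"buffer held between stages: {on_chip_bytes / 1024:.0f} KiB "
      f"vs. {frame_bytes / 2**20:.1f} MiB for a full-frame round trip")
```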

Wei-Jin Dai, Executive VP and GM of the IP Division at VeriSilicon, commented: “AI capability is now an essential part of every smart device across different applications. VeriSilicon is leveraging our highly efficient AI computing capabilities and vast experience in deploying over 100 million AI-enabled chips to bring server-level AIGC capabilities to embedded devices.”
