Tachyum has taken a significant step in expanding its Prodigy software ecosystem by introducing LLVM support for AI and adding Rust support to Linux. These additions bolster an already extensive suite of applications, system software, frameworks, and libraries that undergo rigorous testing through QEMU and FPGA emulation prior to Prodigy chip production.
The LLVM Project is renowned for its collection of modular and reusable compiler and toolchain technologies. Rust, for its part, is a versatile, general-purpose programming language that prioritizes performance, type safety, and concurrency; notably, it is the second language officially accepted for Linux kernel development, after C.
LLVM plays a pivotal role in artificial intelligence (AI), serving as the backbone for major AI frameworks such as PyTorch and TensorFlow, whose compilers rely on LLVM for native code generation. Standalone AI compilers such as Apache TVM are likewise built on the LLVM compiler infrastructure. For its part, Tachyum has developed an LLVM backend for the Tachyum ISA, with particular support for vector and tensor instructions in both full and low precision. This integration lets Prodigy exploit the hardware features designed specifically to accelerate AI workloads, and the AI compiler backend consequently promises superior performance for inference applications.
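Because Rust itself compiles through LLVM, a short Rust loop gives a feel for the kind of code such a backend targets. The sketch below is illustrative only; the function and data are invented for the example and assume nothing about Tachyum's toolchain beyond standard LLVM auto-vectorization, which a Prodigy backend could use to lower the multiply-accumulate loop into the ISA's vector instructions.

```rust
// Illustrative only: a vectorizable dot product. Nothing here is part
// of any Tachyum API.
fn dot(a: &[f32], b: &[f32]) -> f32 {
    // A simple multiply-accumulate loop; LLVM's auto-vectorizer can map
    // this pattern onto a target's vector instructions when the backend
    // (such as one for the Prodigy ISA) provides them.
    a.iter().zip(b.iter()).map(|(x, y)| x * y).sum()
}

fn main() {
    let a = vec![1.0_f32; 1024];
    let b = vec![0.5_f32; 1024];
    println!("dot = {}", dot(&a, &b)); // prints "dot = 512"
}
```

In principle, retargeting such code for Prodigy means pointing the LLVM-based toolchain at the new backend rather than changing the source.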
The incorporation of Rust into Prodigy’s Linux kernel is poised to give Linux developers greater efficiency and confidence when creating new functionality. Rust distinguishes itself through its speed and memory efficiency, underpinned by a rich type system and an ownership model that guarantee memory and thread safety. GCC Rust support is also on the horizon and will be added once it becomes readily available.
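As a minimal userspace illustration of that ownership model (not actual kernel code, which goes through the kernel's own Rust bindings), the snippet below shows how a use-after-move is rejected at compile time instead of surfacing as a runtime memory-safety bug:

```rust
fn consume(data: Vec<u8>) -> u32 {
    // Takes ownership of `data`; the buffer is freed when this function returns.
    data.iter().map(|&b| b as u32).sum()
}

fn main() {
    let buffer = vec![10u8, 20, 30];

    // Ownership of `buffer` moves into `consume` here.
    let total = consume(buffer);
    println!("sum = {total}");

    // Uncommenting the next line produces a compile-time error
    // ("borrow of moved value: `buffer`") rather than a use-after-free.
    // println!("{:?}", buffer);
}
```

The same guarantees extend to concurrency: in safe Rust, data races are compile-time errors rather than intermittent runtime failures.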
Dr. Radoslav Danilak, founder and CEO of Tachyum, highlighted the significance of the software component, stating, “As impressive as the hardware architecture of the Tachyum Prodigy Universal Processor is, it is the software that we support that will allow customers and partners to unlock the full potential of the chip.” He further revealed that initial LLVM availability is scheduled for the first quarter of 2024, with the software engineering team working to ensure an early release. Tachyum’s team is also building Tachyum software distribution images, and customers can expect a beta version for testing one quarter after the alpha release.
Tachyum’s Prodigy Universal Processor, designed to handle all workloads, enables data center servers to switch seamlessly between computational domains, including AI/ML, high-performance computing (HPC), and cloud, all within a single architecture. By eliminating the need for costly dedicated AI hardware and significantly increasing server utilization, Prodigy delivers substantial reductions in capital expenditures (CAPEX) and operating expenses (OPEX) while providing unprecedented data center performance, power efficiency, and cost-effectiveness. Prodigy features 192 high-performance custom-designed 64-bit compute cores, offering up to 4.5 times the performance of the most powerful x86 processors for cloud workloads, up to 3 times the performance of the highest-performing GPUs for HPC, and a 6-fold improvement for AI applications.