IBM (NYSE: IBM) unveiled details of its upcoming IBM Telum Processor, designed to bring deep learning inference to enterprise workloads and help address fraud in real time. The tech giant announced the chip at the annual Hot Chips conference. Telum, which was three years in development, is IBM’s first processor to contain on-chip acceleration for AI inferencing while a transaction is taking place.
This new on-chip hardware acceleration is designed to help customers achieve business insights at scale across banking, finance, trading, and insurance applications, as well as customer interactions. A Telum-based system is planned for the first half of 2022.
IBM’s recent Morning Consult research revealed that 90% of respondents said that being able to build and run AI projects wherever their data resides is important. Telum is aimed at overcoming the limits of traditional enterprise AI approaches, which tend to require significant memory and data movement capabilities to handle inferencing.
With Telum, placing the accelerator in close proximity to mission-critical data and applications enables enterprises to conduct high-volume inferencing for real-time, sensitive transactions without invoking off-platform AI solutions, which may impact performance. Users can also build and train AI models off-platform, then deploy and infer on a Telum-enabled IBM system for analysis.
The new chip also enables clients to leverage the full power of the AI processor for AI-specific workloads, making it ideal for financial services workloads like fraud detection, loan processing, clearing and settlement of trades, anti-money laundering, and risk analysis. With these new innovations, clients will be positioned to enhance existing rules-based fraud detection or use machine learning, accelerate credit approval processes, improve customer service and profitability, identify which trades or transactions may fail, and propose solutions to create a more efficient settlement process.
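The article does not detail Telum’s programming interfaces, but the general pattern it describes, training a fraud model off-platform and then scoring each transaction inline before it completes, can be sketched generically. The snippet below is a minimal, purely illustrative Python sketch using scikit-learn on synthetic data; the features, labels, model, and threshold are hypothetical stand-ins and do not represent IBM’s actual Telum deployment path.

```python
# Illustrative sketch only: a generic train-off-platform / score-in-transaction-path
# pattern. The model, features and threshold are hypothetical; the article does not
# describe IBM's actual Telum APIs or deployment tooling.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Off-platform step: train a fraud-scoring model on (here, synthetic) historical data.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 4))  # stand-ins for amount, velocity, geo-distance, hour
y_train = (X_train[:, 0] + X_train[:, 1] > 1.0).astype(int)  # stand-in fraud labels
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# In-transaction step (conceptually, the latency-critical part Telum accelerates on-chip):
# score each transaction inline, before it completes, and flag high-risk ones.
def score_transaction(features, threshold=0.8):
    fraud_prob = model.predict_proba(np.asarray(features).reshape(1, -1))[0, 1]
    return "flag" if fraud_prob >= threshold else "approve"

print(score_transaction([3.1, 2.5, -0.2, 1.0]))
```

In a Telum deployment, the inline scoring step that `score_transaction` stands in for is the work the article says would run on the on-chip accelerator, in the transaction path rather than on an off-platform AI service.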
IBM’s latest addition, Telum, follows the company’s long heritage of innovative design and engineering, including hardware and software co-creation and integration that spans silicon, systems, firmware, operating systems, and leading software frameworks.
The chip contains 8 processor cores with a deep superscalar, out-of-order instruction pipeline, running at a clock frequency of more than 5GHz and optimized for the demands of heterogeneous enterprise-class workloads. The completely redesigned cache and chip-interconnection infrastructure provides 32MB of cache per core and can scale to 32 Telum chips. The dual-chip module design contains 22 billion transistors and 19 miles of wire across 17 metal layers.
Leadership in Semiconductors
Telum is the first IBM chip built with technology created by the IBM Research AI Hardware Center. Samsung is IBM’s technology development partner for the Telum processor, which is developed in the 7nm EUV technology node.
Telum is another example of IBM’s leadership in hardware technology. One of the world’s largest industrial research organizations, IBM Research recently announced scaling to the 2nm node, the latest benchmark in IBM’s legacy of contributions to silicon and semiconductor innovation. In Albany, NY, home to the IBM AI Hardware Center and the Albany Nanotech Complex, IBM Research has created a leading collaborative ecosystem with public and private industry players to fuel advances in semiconductor research, helping to address global manufacturing demands and accelerate the growth of the chip industry.