Alibaba Cloud, the digital technology and intelligence backbone of Alibaba Group, has taken the wraps off a new processor design for use in its data centers. Called the Yitian 710, the home-grown server chip was custom-built by Alibaba Group's chip development business, T-Head.
The chip forms the brain of Alibaba Cloud's new proprietary server system, called Panjiu, the latest step in what the company describes as an in-house effort to build its own server systems around its own homegrown chips.
“Customizing our own server chips is consistent with our ongoing efforts toward boosting our computing capabilities with better performance and improved energy efficiency,” said Jeff Zhang, President of Alibaba Cloud Intelligence and Head of Alibaba DAMO Academy, during the Apsara Conference, Alibaba's annual flagship technology event.
“We plan to use the chips to support current and future businesses across the Alibaba Group ecosystem. We will also offer our clients next-generation computing services powered by the new chip-powered servers in the near future. Together with our global partners including Intel, Nvidia, AMD and Arm, we will continue to innovate our compute infrastructure and offer diverse computing services for our global customers,” Zhang added.
Built on an advanced 5nm process, the Yitian 710 incorporates 128 Arm cores with a top clock speed of 3.2GHz, aiming to deliver both high performance and strong energy efficiency.
Each chip integrates 60 billion transistors. The Yitian 710 is the first server processor compatible with the latest Armv9 architecture, and it includes 8 DDR5 memory channels and 96 lanes of PCIe 5.0, providing high memory and I/O bandwidth. The chip achieved a score of 440 in SPECint2017 (a standard benchmark for measuring CPU integer processing power), surpassing the current state-of-the-art Arm server processor by 20% in performance and 50% in energy efficiency, according to the company.
The Panjiu server system was developed for the next generation of cloud-native infrastructure.
By separating computing from storage, the servers are optimized for both general-purpose and specialized AI computing, as well as high-performance storage. With a modular design suited to large-scale data center deployment, these servers are expected to deliver strong economic value for a wide variety of cloud-native workloads, such as containerized applications and compute-optimized workloads.