AMD Radeon Instinct with 7nm Vega for datacenter ships before year-end

Posted on Tuesday, November 06 2018 @ 23:02 CET by Thomas De Maesschalck
Over at its Next Horizon event, AMD introduced the Radeon Instinct MI60 and MI50, two new accelerators for the datacenter market. What makes these cards special is that they're based on the 7nm Vega 20 GPU. The MI60 features 64 CUs, 4096 stream processors, a peak engine clock of 1.8GHz, and 32GB of HBM2 memory. The card packs 13.23 billion transistors into a 331mm² die and is the first PCI Express 4.0-capable GPU.

Next up is the Radeon Instinct MI50; that model has 60 CUs, 3840 stream processors, a peak engine clock of 1746MHz, and 16GB of HBM2 memory. The MI60 delivers 14.7 teraflops of peak FP32 performance, while the MI50 tops out at 13.4 teraflops. Both cards have a 300W TDP. Shipments of the MI60 start before year-end; the MI50 will follow in March 2019.
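
Those teraflops figures follow directly from the shader counts and clocks: each stream processor can retire one fused multiply-add, i.e. two floating-point operations, per cycle. A quick back-of-the-envelope check in Python:

```python
# Peak throughput = stream processors x 2 FLOPs per clock (one FMA) x clock speed
def peak_tflops(stream_processors, clock_ghz):
    return stream_processors * 2 * clock_ghz / 1000.0

print(peak_tflops(4096, 1.800))   # MI60: ~14.7 TFLOPS FP32
print(peak_tflops(3840, 1.746))   # MI50: ~13.4 TFLOPS FP32
# Vega 20 runs FP64 at half the FP32 rate, which is where the 7.4 and 6.7
# TFLOPS double-precision figures quoted in the press release below come from.
```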

Compared with the original 14nm Vega, AMD claims its 7nm GPUs offer over 25 percent higher performance at the same power consumption, or 50 percent lower power consumption at the same frequency.

The full press release from AMD follows below:
AMD (NASDAQ: AMD) today announced the AMD Radeon Instinct™ MI60 and MI50 accelerators, the world’s first 7nm datacenter GPUs, designed to deliver the compute performance required for next-generation deep learning, HPC, cloud computing and rendering applications. Researchers, scientists and developers will use AMD Radeon Instinct™ accelerators to solve tough and interesting challenges, including large-scale simulations, climate change, computational biology, disease prevention and more.

“Legacy GPU architectures limit IT managers from effectively addressing the constantly evolving demands of processing and analyzing huge datasets for modern cloud datacenter workloads,” said David Wang, senior vice president of engineering, Radeon Technologies Group at AMD. “Combining world-class performance and a flexible architecture with a robust software platform and the industry’s leading-edge ROCm open software ecosystem, the new AMD Radeon Instinct™ accelerators provide the critical components needed to solve the most difficult cloud computing challenges today and into the future.”

The AMD Radeon Instinct™ MI60 and MI50 accelerators feature flexible mixed-precision capabilities, powered by high-performance compute units that expand the types of workloads these accelerators can address, including a range of HPC and deep learning applications. The new AMD Radeon Instinct™ MI60 and MI50 accelerators were designed to efficiently process workloads such as rapidly training complex neural networks, delivering higher levels of floating-point performance, greater efficiencies and new features for datacenter and departmental deployments.
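
Concretely, "mixed precision" here means doing the bulk of the math in FP16 while keeping loss scaling and accumulation in FP32. As a rough framework-level illustration (a minimal sketch, not an AMD example, and using a newer PyTorch automatic mixed precision API than the framework builds named later in this release), it might look like this; the model and tensors are placeholders, and ROCm builds of PyTorch expose the GPU through the usual torch.cuda interface:

```python
import torch
import torch.nn as nn

device = torch.device("cuda")             # ROCm builds of PyTorch expose HIP devices as "cuda"
model = nn.Linear(1024, 1024).to(device)  # placeholder model, not tied to any real workload
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()      # keeps gradients representable when computed in FP16

data = torch.randn(64, 1024, device=device)
target = torch.randn(64, 1024, device=device)

for _ in range(10):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():        # matmuls run in FP16, sensitive ops stay in FP32
        loss = nn.functional.mse_loss(model(data), target)
    scaler.scale(loss).backward()          # scale the loss to avoid FP16 gradient underflow
    scaler.step(optimizer)                 # unscale gradients and apply the optimizer step
    scaler.update()
```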

The AMD Radeon Instinct™ MI60 and MI50 accelerators provide ultra-fast floating-point performance and hyper-fast HBM2 (second-generation High-Bandwidth Memory) with up to 1 TB/s memory bandwidth. They are also the first GPUs capable of supporting the next-generation PCIe® 4.0 interconnect, which is up to 2X faster than other x86 CPU-to-GPU interconnect technologies, and feature AMD Infinity Fabric™ Link GPU interconnect technology that enables GPU-to-GPU communications up to 6X faster than PCIe® Gen 3 interconnect speeds.
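
For context, those bandwidth claims line up with simple arithmetic. The link widths (x16) and Vega 20's memory interface figures (a 4096-bit HBM2 bus at 2.0 Gbps per pin) are not stated in this article and are taken here as assumptions from the public PCIe and Vega 20 specifications:

```python
# Rough bandwidth arithmetic behind the claims above (approximate figures,
# assuming x16 links and counting both directions of traffic).
pcie3_x16 = 2 * 15.75               # ~31.5 GB/s bidirectional for PCIe 3.0 x16
pcie4_x16 = 2 * 31.5                # ~63 GB/s bidirectional for PCIe 4.0 x16
infinity_fabric = 200               # GB/s peer-to-peer over two Infinity Fabric Links (per AMD)

print(pcie4_x16 / pcie3_x16)        # ~2x, matching the PCIe 4.0 claim
print(infinity_fabric / pcie3_x16)  # ~6.3x, matching the "up to 6X" claim

# The "up to 1 TB/s" HBM2 figure likewise follows from Vega 20's published
# memory interface: a 4096-bit bus at 2.0 Gbps per pin.
print(4096 * 2.0 / 8)               # 1024 GB/s, i.e. roughly 1 TB/s
```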

AMD also announced a new version of the ROCm open software platform for accelerated computing that supports the architectural features of the new accelerators, including optimized deep learning operations (DLOPS) and the AMD Infinity Fabric™ Link GPU interconnect technology. Designed for scale, ROCm allows customers to deploy high-performance, energy-efficient heterogeneous computing systems in an open environment.

“Google believes that open source is good for everyone,” said Rajat Monga, engineering director, TensorFlow, Google. “We've seen how helpful it can be to open source machine learning technology, and we’re glad to see AMD embracing it. With the ROCm open software platform, TensorFlow users will benefit from GPU acceleration and a more robust open source machine learning ecosystem.”

Key features of the AMD Radeon Instinct™ MI60 and MI50 accelerators include:

  • Optimized Deep Learning Operations: Provides flexible mixed-precision FP16, FP32 and INT4/INT8 capabilities to meet growing demand for dynamic and ever-changing workloads, from training complex neural networks to running inference against those trained networks.
  • World’s Fastest Double Precision PCIe® Accelerator: The AMD Radeon Instinct™ MI60 is the world’s fastest double precision PCIe 4.0-capable accelerator, delivering up to 7.4 TFLOPS peak FP64 performance, allowing scientists and researchers to more efficiently process HPC applications across a range of industries including life sciences, energy, finance, automotive, aerospace, academics, government, defense and more. The AMD Radeon Instinct™ MI50 delivers up to 6.7 TFLOPS peak FP64 performance, while providing an efficient, cost-effective solution for a variety of deep learning workloads, as well as enabling high reuse in Virtual Desktop Infrastructure (VDI), Desktop-as-a-Service (DaaS) and cloud environments.
  • Up to 6X Faster Data Transfer: Two Infinity Fabric™ Links per GPU deliver up to 200 GB/s of peer-to-peer bandwidth – up to 6X faster than PCIe 3.0 alone – and enable the connection of up to 4 GPUs in a hive ring configuration (2 hives in 8 GPU servers).
  • Ultra-Fast HBM2 Memory: The AMD Radeon Instinct™ MI60 provides 32GB of HBM2 error-correcting code (ECC) memory, and the Radeon Instinct™ MI50 provides 16GB of HBM2 ECC memory. Both GPUs provide full-chip ECC and Reliability, Availability and Serviceability (RAS) technologies, which are critical to deliver more accurate compute results for large-scale HPC deployments.
  • Secure Virtualized Workload Support: AMD MxGPU Technology, the industry’s only hardware-based GPU virtualization solution, which is based on the industry-standard SR-IOV (Single Root I/O Virtualization) technology, makes it difficult for hackers to attack at the hardware level, helping provide security for virtualized cloud deployments.

Updated ROCm Open Software Platform
AMD today also announced a new version of its ROCm open software platform designed to speed development of high-performance, energy-efficient heterogeneous computing systems. In addition to support for the new Radeon Instinct™ accelerators, ROCm software version 2.0 provides updated math libraries for the new DLOPS; support for 64-bit Linux operating systems including CentOS, RHEL and Ubuntu; optimizations of existing components; and support for the latest versions of the most popular deep learning frameworks, including TensorFlow 1.11, PyTorch (Caffe2) and others.
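
Because the ROCm builds of these frameworks reuse the stock device APIs, a quick sanity check that TensorFlow actually sees the Instinct card is ordinary device enumeration; the calls below are standard TensorFlow 1.x and nothing here is specific to ROCm 2.0:

```python
# Minimal check that a ROCm build of TensorFlow 1.x sees the accelerator as a GPU device.
import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.test.is_gpu_available())   # True if the ROCm runtime exposed a device to TensorFlow
for dev in device_lib.list_local_devices():
    if dev.device_type == "GPU":
        print(dev.name, dev.physical_device_desc)
```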

Availability
The AMD Radeon Instinct™ MI60 accelerator is expected to ship to datacenter customers by the end of 2018. The AMD Radeon Instinct™ MI50 accelerator is expected to begin shipping to datacenter customers by the end of Q1 2019. The ROCm 2.0 open software platform is expected to be available by the end of 2018.

