ASUS Pioneers AI Supercomputing With Generative-AI POD Solutions at ISC 2024


TAIPEI, Taiwan, May 3, 2024 — ASUS, together with its subsidiary Taiwan Web Service Corporation (TWSC), today announced its holistic GenAI POD Solution to address the surging demand for AI supercomputing at ISC 2024. At the forefront of the AI revolution, ASUS will showcase its NVIDIA® MGX-powered AI servers, including the ESC NM1-E1 and ESR1-511N-M1, as well as the ESC N8A-E12 and RS720QN-E11-RS24U HGX GPU servers. These solutions are bolstered by TWSC's exclusive resource-management platform and software stacks, enabling them to tackle diverse generative AI and large language model (LLM) training workloads with ease. The integrated solutions feature innovative thermal designs and can be tailored to enterprise needs, providing comprehensive data-center solutions with robust software platforms so clients can succeed in their AI endeavors.

ASUS NVIDIA MGX servers: Tailored AI solution to meet specific needs

The NVIDIA MGX-based ASUS ESC NM1-E1 is powered by an NVIDIA GH200 Grace Hopper™ Superchip. This potent combination of 72 Arm® Neoverse V2 CPU cores and NVIDIA NVLink-C2C technology delivers exceptional performance and efficiency, making it an ideal choice for AI-driven data centers, high-performance computing (HPC), data analytics and NVIDIA Omniverse™ applications, with transformative improvements in performance and memory capability.
Another highlight of the ASUS showcase — the ASUS ESR1-511N-M1 server — also harnesses the power of the NVIDIA GH200 Grace Hopper Superchip and is designed for large-scale AI and HPC applications, facilitating deep-learning (DL) training and inference, data analytics and high-performance computing. ESR1-511N-M1 boasts an enhanced thermal solution for optimal performance and lower power usage effectiveness (PUE), in line with ESG trends. Its flexible configuration, including a 1U design with the highest compute density and support for up to four E1.S local drives via NVIDIA BlueField®-3, coupled with three PCI Express® (PCIe®) 5.0 x16 slots, facilitates seamless and rapid data transfers.

ASUS NVIDIA HGX servers: Elevating AI with end-to-end H100 eight-GPU power

Designed for generative AI with optimized servers, data-center infrastructure and AI software-development capabilities, ASUS ESC N8A-E12 is a robust 7U dual-socket server that harnesses the power of dual AMD EPYC™ 9004 processors and eight NVIDIA H100 Tensor Core GPUs. It has an enhanced thermal solution to ensure optimal performance and lower PUE. Engineered for AI and data-science advancements, this powerful HGX server offers a unique one-GPU-to-one-NIC configuration for maximal throughput in compute-heavy tasks.
ASUS RS720QN-E11-RS24U is a high-density server designed for high-performance, compute-intensive workloads, featuring the NVIDIA Grace CPU Superchip with NVIDIA NVLink-C2C technology. Its compact design accommodates four nodes within a 2U4N chassis, delivering PCIe 5.0 compatibility and exceptional dual-socket CPU performance, making RS720QN-E11-RS24U ideal for data centers, web servers, virtualization clouds and hyperscale environments.

ASUS D2C cooling solution

Direct-to-chip (D2C) cooling offers a straightforward solution that leverages existing infrastructure, enabling rapid deployment with reduced PUE. ASUS RS720QN-E11-RS24U supports manifolds and cold plates, enabling diverse cooling solutions. Additionally, ASUS servers accommodate a rear-door heat exchanger compliant with standard rack-server designs, eliminating the need to replace entire racks: only the rear door needs to be swapped to enable liquid cooling in the rack. By collaborating closely with industry-leading cooling-solution providers, ASUS delivers comprehensive, enterprise-grade cooling solutions and is committed to minimizing data-center PUE, carbon emissions and energy consumption, helping to design and build greener data centers.
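For readers unfamiliar with the metric the cooling discussion centers on: PUE (power usage effectiveness) is the ratio of a facility's total power draw to the power consumed by IT equipment alone, so values closer to 1.0 indicate less overhead spent on cooling and power delivery. A minimal sketch of the calculation, using illustrative numbers (not ASUS-published figures):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.

    1.0 would mean every watt reaches the compute hardware; anything
    above that is cooling, power-conversion and other overhead.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Same hypothetical 1,000 kW IT load, different cooling overhead:
air_cooled = pue(total_facility_kw=1550.0, it_equipment_kw=1000.0)
liquid_cooled = pue(total_facility_kw=1120.0, it_equipment_kw=1000.0)
print(f"air-cooled PUE:    {air_cooled:.2f}")
print(f"liquid-cooled PUE: {liquid_cooled:.2f}")
```

The overhead figures above are hypothetical, chosen only to show why D2C and rear-door liquid cooling push PUE downward: removing heat at the chip or rack door takes far less energy than conditioning an entire room of air.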

Generative AI POD solutions

TWSC boasts extensive experience deploying and co-maintaining large-scale AI HPC infrastructure as an NVIDIA Partner Network cloud partner (NCP), including the National Center for High-performance Computing (NCHC)'s TAIWANIA-2 (#10 on the Green500, November 2018) and FORERUNNER 1 (#92 on the Green500, November 2023) supercomputers. Additionally, TWSC's AI Foundry Service enables rapid deployment of AI supercomputing and flexible model optimization for AI 2.0 applications, letting users tailor AI capacity to their specific needs.
TWSC's generative AI POD solutions offer enterprise-grade AI infrastructure with swift rollouts and comprehensive end-to-end services, ensuring high availability and strong cybersecurity standards. These solutions have powered success stories across academic, research and medical institutions. Comprehensive cost-management capabilities optimize power consumption and streamline OPEX, making TWSC technologies a compelling choice for organizations seeking a reliable, sustainable generative AI platform.