ASUS Unveils ESC N8-E11, an HGX H100 Eight-GPU Server

2023/06/12

Built for maximum data-center performance, delivering a next-generation leap for AI infrastructure

  • Premium AI power: Dual-socket Intel® Xeon®-based NVIDIA® HGX H100 eight-GPU server for large-scale AI model training and HPC
  • One-of-a-kind HPC solution provider: ASUS, Taiwan Web Service (TWS) and ASUS Cloud deliver server design, infrastructure and AI software capabilities
  • Dedicated one-GPU-to-one-NIC topology: Supports up to eight NICs and eight GPUs, delivering the highest throughput for compute-intensive workloads


SYDNEY, Australia, June 12, 2023 — ASUS today announced ESC N8-E11, its most advanced NVIDIA® HGX H100 eight-GPU AI server, along with a comprehensive PCI Express® (PCIe®) GPU server portfolio — the ESC8000 and ESC4000 series powered by Intel® and AMD® platforms — supporting higher CPU and GPU TDPs to accelerate the development of AI and data science.

ASUS is one of the few HPC solution providers with comprehensive in-house resources: the ASUS server business unit, Taiwan Web Service (TWS) and ASUS Cloud, all part of the ASUS group. This uniquely positions ASUS to deliver in-house AI server design, data-center infrastructure and AI software-development capabilities, plus a diverse ecosystem of industrial hardware and software partners.

Advanced, powerful AI server reduces data-center PUE

The high-end ASUS ESC N8-E11 is an NVIDIA® HGX H100 AI server incorporating eight NVIDIA H100 Tensor Core GPUs, engineered to reduce training time for large-scale AI models and HPC workloads. This 7U dual-socket server, powered by 4th Gen Intel Xeon® Scalable processors, features a dedicated one-GPU-to-one-NIC topology that supports up to eight NICs, delivering the highest throughput for compute-intensive workloads. The modular design greatly reduces cabling, shortening system-assembly time, simplifying cable routing and lowering the risk of airflow obstruction to ensure thermal optimization.

The ESC N8-E11 incorporates fourth-generation NVLink and NVSwitch technology, along with NVIDIA ConnectX-7 SmartNICs that enable GPUDirect® RDMA and GPUDirect Storage with NVIDIA Magnum IO and NVIDIA AI Enterprise, the software layer of the NVIDIA AI platform, to accelerate the development of AI and data science. It is designed with a two-level GPU and CPU sled for thermal efficiency, scalability and unprecedented performance, and is ready for direct-to-chip (D2C) liquid cooling, significantly reducing a data center’s overall power-usage effectiveness (PUE).

PCIe GPU servers for everything from small enterprise to massive, unified AI training clusters

The ASUS NVIDIA-certified GPU server lineup includes configurations ranging from four to eight GPUs based on Intel and AMD platforms. Selected servers are optimized for NVIDIA OVX, meaning they excel in rendering and digital-twin applications.

The ASUS ESC8000-E11 is an eight-slot NVIDIA H100 PCIe GPU server built for the demands of enterprise AI infrastructure, delivering unprecedented performance with industry-leading GPUs, faster GPU interconnects and higher-bandwidth fabric. It supports up to eight GPUs with NVIDIA NVLink Bridge, enabling performance scalability and increased bandwidth to match growing AI and HPC workloads.