Taipei, Taiwan, May 5, 2022 — ASUS, the leading IT company in server systems, server motherboards and workstations, today released its first results since joining the MLCommons Association last December – instantly setting new performance records in dozens of benchmarked tasks.
Specifically, in the latest round of MLPerf Inference 2.0, ASUS servers set 26 records in the data center Closed division across six AI-benchmark tasks, outperforming all other servers with the same GPU configurations. These consist of 12 records set with an ASUS ESC8000A-E11 server configured with eight 80GB NVIDIA® A100 Tensor Core GPUs, and 14 records with an ASUS ESC4000A-E11 server with four 24GB NVIDIA A30 Tensor Core GPUs.
These breakthrough results clearly demonstrate the performance leadership of ASUS servers in the AI arena – bringing significant value to organizations seeking to deploy AI and ensuring optimal performance in data centers.
The MLPerf Inference 2.0 benchmark covers six common AI-inferencing workloads: image classification (ResNet50), object detection (SSD-ResNet34), medical image segmentation (3D-UNet), speech recognition (RNN-T), natural language processing (BERT) and recommendation (DLRM).
The ESC8000A-E11 achieved multiple performance-leading results, including:
- Classified 298,105 images per second in ResNet50
- Detected objects in 7,462.06 images per second in SSD-ResNet34
- Segmented 24.3 medical images per second in 3D-UNet
- Answered 26,005.7 questions per second in BERT
- Completed 2,363,760 click predictions per second in DLRM
ESC8000A-E11 results table
The ESC4000A-E11 achieved multiple performance-leading results, including:
- Classified 73,814.5 images per second in ResNet50
- Detected objects in 1,957.18 images per second in SSD-ResNet34
- Segmented 6.83 medical images per second in 3D-UNet
- Completed 27,299.2 speech-recognition conversions per second in RNN-T
- Answered 6,896.01 questions per second in BERT
- Completed 574,371 click predictions per second in DLRM
ESC4000A-E11 results table
The 12 MLPerf Inference 2.0 records set by the NVIDIA-certified, 4U ESC8000A-E11 – configured with eight 80GB NVIDIA A100 PCIe Tensor Core GPUs and two AMD EPYC 7763 CPUs – demonstrate its supreme scalability for AI and machine learning. Its streamlined thermal design, with independent CPU and GPU airflow tunnels, brings a high-efficiency cooling solution to air-cooled data centers.
The NVIDIA-certified ESC4000A-E11, housed in the most compact 2U footprint on the market – and configured with four 24GB NVIDIA A30 PCIe Tensor Core GPUs and two AMD EPYC 7763 CPUs – set a total of 14 MLPerf Inference 2.0 records. It offers a wide array of graphics accelerators, plus support for the NVIDIA NVLink high-speed GPU interconnect, to unleash maximum AI performance.