AI Inference Server
8VLink Tiangong 300 Series

The Tiangong 300 Series AI inference server supports 8 or 10 PCIe GPU cards, delivering industry-leading compute for powerful real-time inference. It is widely used in AI training and inference scenarios at intelligent computing centers, universities, and research institutions. Equipped with Intel® Xeon® processors and high-speed memory, and offering flexible configuration options, it provides outstanding computing performance, strong environmental adaptability, easy deployment and maintenance, and cloud-edge collaboration, fully meeting the computing demands of artificial intelligence, smart manufacturing, intelligent edge, and other application scenarios.

Product Features
  • Powerful Computing Performance
  • Flexible Deployment
  • Flexible Configuration
Technical Specifications
Model: Tiangong 301
Form Factor: 4U rack-mount GPU server
Processor: Supports mainstream Intel and AMD processors
Memory: Up to 32 DDR DIMM slots of high-speed memory
Storage: Multiple hot-swappable drive configurations; flexible mix of 2.5-inch SATA drives and 2.5-inch NVMe SSDs
GPU: Supports 8 or 10 PCIe GPU cards
Network: Multiple network expansion options
Power Supply: N+M redundant power supply modules
Management: One dedicated GE management port and one VGA port, providing comprehensive features including fault diagnosis, automated O&M, and hardware security hardening
Fans: 8 (N+1) hot-swappable redundant fans
Dimensions (H×W×D): 175 mm × 447 mm × 828 mm