SPECIFICATIONS

                                A100 40GB PCIe | A100 80GB PCIe | A100 40GB SXM | A100 80GB SXM

FP64:                           9.7 TFLOPS (all models)
FP64 Tensor Core:               19.5 TFLOPS (all models)
FP32:                           19.5 TFLOPS (all models)
Tensor Float 32 (TF32):         156 TFLOPS | 312 TFLOPS* (all models)
BFLOAT16 Tensor Core:           312 TFLOPS | 624 TFLOPS* (all models)
FP16 Tensor Core:               312 TFLOPS | 624 TFLOPS* (all models)
INT8 Tensor Core:               624 TOPS | 1,248 TOPS* (all models)
GPU Memory:                     40GB HBM2 | 80GB HBM2e | 40GB HBM2 | 80GB HBM2e
GPU Memory Bandwidth:           1,555GB/s | 1,935GB/s | 1,555GB/s | 2,039GB/s
Max Thermal Design Power (TDP): 250W | 300W | 400W | 400W
Multi-Instance GPU:             Up to 7 MIGs @ 5GB | Up to 7 MIGs @ 10GB | Up to 7 MIGs @ 5GB | Up to 7 MIGs @ 10GB
Form Factor:                    PCIe (40GB/80GB PCIe) | SXM (40GB/80GB SXM)
Interconnect:
  PCIe models: NVIDIA® NVLink® Bridge for 2 GPUs: 600GB/s**; PCIe Gen4: 64GB/s
  SXM models:  NVLink: 600GB/s; PCIe Gen4: 64GB/s
Server Options:
  PCIe models: Partner and NVIDIA-Certified Systems with 1-8 GPUs
  SXM models:  NVIDIA HGX A100-Partner and NVIDIA-Certified Systems with 4, 8, or 16 GPUs; NVIDIA DGX A100 with 8 GPUs

* With sparsity
** Via an NVLink Bridge connecting up to two GPUs
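As a sanity check, the headline throughput figures in the table can be reproduced from the A100's published hardware parameters. The sketch below assumes the commonly cited values of 6,912 FP32 CUDA cores, 3,456 FP64 units, and a 1,410 MHz boost clock; the 8x TF32 Tensor Core multiplier and the 2x sparsity doubling follow the table's dense | sparse* convention.

```python
# Back-of-the-envelope derivation of the A100 throughput figures above.
# Assumed hardware parameters (not stated in the table itself):
BOOST_CLOCK_HZ = 1.41e9   # 1410 MHz boost clock
FP32_CORES = 6912         # FP32 CUDA cores
FP64_UNITS = 3456         # FP64 execution units
FLOPS_PER_FMA = 2         # one fused multiply-add = 2 floating-point ops

fp32_tflops = FP32_CORES * FLOPS_PER_FMA * BOOST_CLOCK_HZ / 1e12
fp64_tflops = FP64_UNITS * FLOPS_PER_FMA * BOOST_CLOCK_HZ / 1e12
tf32_dense_tflops = fp32_tflops * 8        # Tensor Cores: 8x FP32 rate for TF32
tf32_sparse_tflops = tf32_dense_tflops * 2  # structured sparsity doubles it

print(f"FP32: {fp32_tflops:.1f} TFLOPS")   # matches the 19.5 TFLOPS row
print(f"FP64: {fp64_tflops:.1f} TFLOPS")   # matches the 9.7 TFLOPS row
print(f"TF32 Tensor: {tf32_dense_tflops:.0f} | {tf32_sparse_tflops:.0f}* TFLOPS")
```

The same doubling pattern (dense vs. sparse, and halving precision doubling throughput) explains the BFLOAT16/FP16 (312 | 624) and INT8 (624 | 1,248) rows.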



Note: We are not responsible for customs fees.


