
NVIDIA A800 Graphics Card - 40 GB HBM2

SKU: 900-51001-2200-000-01
Availability: 3
MSRP: $15,187.28

Need Pricing Options?

Here at Bluum we have a full staff of dedicated Account Executives who will find the best available pricing. Simply add your products to a quote, submit, and we'll do all the rest!

The Supercomputing Platform for Workstations

The NVIDIA® A800 40GB Active GPU, powered by the NVIDIA Ampere architecture, is the ultimate workstation development platform with NVIDIA AI Enterprise software included, delivering powerful performance to accelerate next-generation data science, AI, HPC, and engineering simulation/CAE workloads.

Highlights | Industry-Leading Performance

  • Double-Precision (FP64) Performance: 9.7 TFLOPS
  • Tensor Performance: 623.8 TFLOPS
  • Memory Bandwidth: 1.5 TB/s

Features Powered by the NVIDIA Ampere Architecture | Third-Generation Tensor Cores

Third-generation Tensor Cores bring performance and versatility to a wide range of AI and HPC applications. Support for double-precision (FP64) and Tensor Float 32 (TF32) precision provides up to 2X the performance and efficiency of the previous generation, and hardware support for structural sparsity doubles the throughput for inferencing.
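As a rough illustration of how TF32 acceleration is used in practice, here is a minimal sketch assuming a PyTorch environment on an Ampere-class GPU such as the A800 (PyTorch is not part of this listing; names and sizes are illustrative):

    import torch

    # TF32 is the Ampere Tensor Core mode for FP32 matrix math; PyTorch exposes
    # explicit switches for it (it may already be on by default in some releases).
    torch.backends.cuda.matmul.allow_tf32 = True   # TF32 for matrix multiplies
    torch.backends.cudnn.allow_tf32 = True         # TF32 inside cuDNN convolutions

    a = torch.randn(4096, 4096, device="cuda")
    b = torch.randn(4096, 4096, device="cuda")

    c = a @ b                      # FP32 tensors, executed on Tensor Cores in TF32
    c64 = a.double() @ b.double()  # full FP64 path for double-precision workloads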

Multi-Instance GPU

Fully isolated and secure multi-tenancy at the hardware level with dedicated high-bandwidth memory, cache, and compute cores. Multi-Instance GPU (MIG) maximizes the utilization of GPU-accelerated infrastructure, allowing an A800 40GB Active GPU to be partitioned into as many as seven independent instances, giving multiple users access to GPU acceleration.
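For illustration only, the sketch below uses the pynvml Python bindings to check whether MIG is enabled on device 0 and to list its MIG instances; it assumes an administrator has already enabled MIG mode and created the partitions:

    from pynvml import (
        nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
        nvmlDeviceGetMigMode, nvmlDeviceGetMaxMigDeviceCount,
        nvmlDeviceGetMigDeviceHandleByIndex, nvmlDeviceGetUUID,
    )

    nvmlInit()
    gpu = nvmlDeviceGetHandleByIndex(0)

    current, pending = nvmlDeviceGetMigMode(gpu)
    print("MIG enabled:", bool(current))

    # Walk the MIG instances carved out of this GPU (up to seven on the A800).
    for i in range(nvmlDeviceGetMaxMigDeviceCount(gpu)):
        try:
            mig = nvmlDeviceGetMigDeviceHandleByIndex(gpu, i)
        except Exception:
            break  # no MIG device at this index
        print(f"MIG instance {i}: {nvmlDeviceGetUUID(mig)}")

    nvmlShutdown()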

Third-Generation NVIDIA NVLink

Increased GPU-to-GPU interconnect bandwidth provides a single scalable memory to accelerate compute workloads and tackle larger datasets. Connect a pair of NVIDIA A800 40GB Active GPUs with NVIDIA NVLink® to increase the effective memory footprint to 80GB and scale application performance by enabling GPU-to-GPU data transfers at rates up to 400 GB/s (bidirectional).
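As an illustrative sketch, assuming two A800 GPUs exposed to PyTorch as cuda:0 and cuda:1, a peer-to-peer tensor copy exercises the GPU-to-GPU path that the NVLink bridge accelerates:

    import torch

    # Check that device 0 can address device 1's memory directly (peer access);
    # with an NVLink bridge in place, the copy below moves over that link.
    if torch.cuda.device_count() >= 2 and torch.cuda.can_device_access_peer(0, 1):
        x = torch.randn(1024, 1024, device="cuda:0")
        y = x.to("cuda:1")   # direct GPU-to-GPU transfer
        print("peer access available; tensor now on", y.device)
    else:
        print("peer-to-peer access not available between devices 0 and 1")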

Ultra-Fast HBM2 Memory

Deliver massive computational throughput with 40GB of high-speed HBM2 memory and a class-leading 1.5 TB/s of memory bandwidth, an increase of over 70% compared to the previous generation. Significantly more on-chip memory, including a 40MB level 2 cache, accelerates the most computationally intense AI and HPC workloads.
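A rough way to see what this bandwidth figure means in practice is to time a large device-to-device copy. The sketch below (PyTorch, with arbitrarily chosen sizes) is a back-of-the-envelope estimate, not a substitute for a proper benchmark such as NVIDIA's bandwidthTest sample:

    import time
    import torch

    n = 256 * 1024 * 1024                       # 256M float32 elements ≈ 1 GiB
    src = torch.empty(n, dtype=torch.float32, device="cuda")
    dst = torch.empty_like(src)

    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(10):
        dst.copy_(src)                          # device-to-device copy in HBM2
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

    bytes_moved = 2 * src.element_size() * n * 10   # each copy reads and writes
    print(f"~{bytes_moved / elapsed / 1e12:.2f} TB/s effective bandwidth")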

Workloads Supercharge AI and HPC Workflows Across Industries | Generative AI

Using neural networks to identify patterns and structures within existing data, generative AI applications enable users to generate new and original content from a wide variety of inputs and outputs, including images, sounds, animation, and 3D models. Leverage NVIDIA's generative AI solution, the NeMo™ Framework (included in NVIDIA AI Enterprise), along with the A800 40GB Active GPU for easy, fast, and customizable generative AI model development.

Engineering Simulation/CAE

The A800 40GB Active GPU delivers remarkable performance for GPU-accelerated computer-aided engineering (CAE) applications. Engineering Analysts and CAE Specialists can run large-scale simulations and engineering analysis codes in full FP64 precision with incredible speed, shortening development timelines and accelerating time to value.

With the addition of RTX-accelerated GPUs, providing display capabilities for pre- and post-processing, designers and engineers can visualize large-scale simulations and models in full-design fidelity.
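To give a concrete sense of the full FP64 precision mentioned above, here is a minimal, hypothetical PyTorch example of a dense double-precision linear solve; an actual CAE workload would come from the simulation or analysis application itself:

    import torch

    # Hypothetical system sizes; real CAE matrices come from the solver's mesh.
    A = torch.randn(8192, 8192, dtype=torch.float64, device="cuda")
    b = torch.randn(8192, 1, dtype=torch.float64, device="cuda")

    x = torch.linalg.solve(A, b)                 # dense FP64 solve on the GPU
    residual = torch.linalg.norm(A @ x - b)      # check the answer in FP64
    print(f"FP64 solve residual: {residual.item():.3e}")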

Data Science and Data Analytics


Accelerate end-to-end data science and analytics workflows with powerful performance to extract meaningful insights from large-scale datasets quickly. By combining the high-performance computing capabilities of the A800 40GB Active GPU with NVIDIA AI Enterprise, data practitioners can leverage a large collection of libraries, tools, and technologies to accelerate data science workflows, from data prep and analysis to modeling.
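As an illustrative sketch, a typical GPU-accelerated data prep step might use RAPIDS cuDF, one of the libraries available through NVIDIA AI Enterprise; the file and column names below are hypothetical placeholders:

    import cudf

    df = cudf.read_csv("transactions.csv")        # load the data directly on the GPU
    summary = (
        df.groupby("customer_id")["amount"]
          .agg(["count", "sum", "mean"])          # GPU-accelerated aggregation
    )
    print(summary.head())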


  • NVIDIA Ampere Architecture Third-Generation Tensor Cores:
      • Powerful double-precision (FP64) capabilities
      • Accelerated training and inference performance
  • Third-Generation NVIDIA® NVLink™:
      • Connect two A800 GPUs to scale up to 80 gigabytes (GB) of memory
      • 400 gigabytes per second (GB/s) of bidirectional bandwidth
  • Ultra-Fast HBM2 Memory:
      • 40GB of high-speed HBM2 memory
      • 1.5 TB/s of memory bandwidth
  • Multi-Instance GPU (MIG):
      • Fully isolated and secure multitenancy
      • Partition up to seven instances

Main Specifications