NVIDIA’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI, the next era of computing, with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, NVIDIA is increasingly known as “the AI computing company.”

Secur is an NVIDIA partner based in South Africa, offering NVIDIA products, implementation, integration, and support services. These services are also available in Botswana, Lesotho, Namibia, Kenya, and Nigeria.

Enhancing HPC and AI Supercomputers and Applications

Today’s high-performance computing (HPC), AI, and hyperscale infrastructures require faster interconnects and more intelligent networks to analyze data and run complex simulations with greater speed and efficiency. NVIDIA Quantum-2 enhances and extends NVIDIA In-Network Computing with preconfigured and programmable compute engines, such as the third generation of the NVIDIA Scalable Hierarchical Aggregation and Reduction Protocol (SHARPv3)™, Message Passing Interface (MPI) Tag Matching, and MPI All-to-All, delivering the best cost per node and ROI.
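In MPI terms, the aggregation that SHARP offloads into the switch fabric is an allreduce: partial results are combined level by level up a tree, and the total is returned to every node. The sketch below illustrates that communication pattern in plain Python; the function name and fan-out parameter are illustrative, not an NVIDIA or MPI API.

```python
def hierarchical_allreduce(node_values, fanout=2):
    """Sketch of tree-style aggregation: values are summed in groups of
    `fanout` at each level (as a SHARP-capable switch tree would), and
    the final total is broadcast back to every node."""
    level = list(node_values)
    while len(level) > 1:
        # Combine each group of `fanout` children into one parent value.
        level = [sum(level[i:i + fanout]) for i in range(0, len(level), fanout)]
    total = level[0]
    return [total] * len(node_values)  # every rank receives the reduced value

print(hierarchical_allreduce([1, 2, 3, 4]))  # [10, 10, 10, 10]
```

The point of performing this in the network rather than on the hosts is that data crosses each link once per tree level instead of converging on a single root node.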

Performance Isolation

The NVIDIA Quantum-2 InfiniBand platform provides innovative proactive monitoring and congestion management to deliver traffic isolation, nearly eliminating performance jitter and ensuring predictable performance, as if the application were running on a dedicated system.

Cloud-Native Supercomputing

The NVIDIA Cloud-Native Supercomputing platform leverages the NVIDIA® BlueField® data processing unit (DPU) architecture with high-speed, low-latency NVIDIA Quantum-2 InfiniBand networking. The solution delivers bare-metal performance, user management and isolation, data protection, on-demand high-performance computing (HPC), and AI services—simply and securely.


The most powerful end-to-end AI supercomputing platform.

Purpose-Built for the Convergence of Simulation, Data Analytics, and AI

Massive datasets, exploding model sizes, and complex simulations require multiple GPUs with extremely fast interconnections and a fully accelerated software stack. The NVIDIA HGX AI supercomputing platform brings together the full power of NVIDIA GPUs, NVIDIA® NVLink®, NVIDIA InfiniBand networking, and a fully optimized NVIDIA AI and HPC software stack from the NVIDIA NGC catalog to provide the highest application performance. With its end-to-end performance and flexibility, NVIDIA HGX enables researchers and scientists to combine simulation, data analytics, and AI to drive scientific progress.


NVIDIA HGX combines NVIDIA A100 Tensor Core GPUs with high-speed interconnects to form the world’s most powerful servers. With 16 A100 GPUs, HGX has up to 1.3 terabytes (TB) of GPU memory and over 2 terabytes per second (TB/s) of memory bandwidth for unprecedented acceleration.
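The headline memory figure follows directly from the per-GPU capacity of the A100 80GB; a quick arithmetic check:

```python
gpus = 16
mem_per_gpu_gb = 80  # A100 80GB variant

total_gb = gpus * mem_per_gpu_gb
print(total_gb, "GB =", total_gb / 1000, "TB")  # 1280 GB = 1.28 TB, i.e. "up to 1.3 TB"
```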

Compared to previous generations, HGX provides up to a 20X AI speedup out of the box with Tensor Float 32 (TF32) and a 2.5X HPC speedup with FP64. NVIDIA HGX delivers a staggering 10 petaFLOPS, forming the world’s most powerful accelerated scale-up server platform for AI and HPC.
HGX Stack


Up to 3X Higher AI Training on Largest Models

DLRM Training
Deep learning models are exploding in size and complexity, requiring a system with large amounts of memory, massive computing power, and fast interconnects for scalability. With NVIDIA NVSwitch providing high-speed, all-to-all GPU communications, HGX can handle the most advanced AI models. With A100 80GB GPUs, GPU memory is doubled, delivering up to 1.3TB of memory in a single HGX. Emerging workloads on the very largest models like deep learning recommendation models (DLRM), which have massive data tables, are accelerated up to 3X over HGX powered by A100 40GB GPUs.
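The all-to-all exchange that NVSwitch accelerates (used, for example, to redistribute embedding-table lookups across GPUs in DLRM training) has a simple structure: each rank prepares one message per destination, and every rank receives one message from every source. A plain-Python sketch of the pattern, not NVSwitch or NCCL code:

```python
def all_to_all(send_buffers):
    """Each rank i holds a list of messages, one per destination rank.
    All-to-all delivers message dst of rank src to rank dst, so rank j
    ends up with [msg from rank 0, msg from rank 1, ...]."""
    n = len(send_buffers)
    return [[send_buffers[src][dst] for src in range(n)] for dst in range(n)]

# Two ranks, each sending one message to each rank (including itself).
print(all_to_all([["a0", "a1"], ["b0", "b1"]]))  # [['a0', 'b0'], ['a1', 'b1']]
```

Because every GPU pair exchanges data simultaneously, this pattern is bandwidth-bound on the weakest link, which is why an all-to-all fabric like NVSwitch matters for these models.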


2X Faster than A100 40GB on Big Data Analytics Benchmark
Big data analytics benchmark: 30 analytical retail queries, ETL, ML, NLP on a 10TB dataset | V100 32GB, RAPIDS/Dask | A100 40GB and A100 80GB, RAPIDS/Dask/BlazingSQL

Machine learning models require loading, transforming, and processing extremely large datasets to glean critical insights. With up to 1.3TB of unified memory and all-to-all GPU communications with NVSwitch, HGX powered by A100 80GB GPUs has the capability to load and perform calculations on enormous datasets to derive actionable insights quickly.

On a big data analytics benchmark, A100 80GB delivered insights with 2X higher throughput over A100 40GB, making it ideally suited for emerging workloads with exploding dataset sizes.
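The benchmark’s retail queries are built from load/transform/aggregate steps like the group-by below. This toy pure-Python version (the data and column names are invented for illustration) shows the shape of the work; RAPIDS runs the same kind of operation on GPU memory at multi-terabyte scale:

```python
from collections import defaultdict

# Toy "retail query": total revenue per product category, the kind of
# ETL + aggregation step such analytics benchmarks run on 10TB of data.
sales = [
    {"category": "grocery", "amount": 12.5},
    {"category": "apparel", "amount": 40.0},
    {"category": "grocery", "amount": 7.5},
]

revenue = defaultdict(float)
for row in sales:
    revenue[row["category"]] += row["amount"]

print(dict(revenue))  # {'grocery': 20.0, 'apparel': 40.0}
```

The fit with large GPU memory is direct: when the whole working set stays resident in the 1.3TB of unified GPU memory, these scans and aggregations avoid repeated host-to-device transfers.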


HPC applications need to perform an enormous number of calculations per second. Increasing the compute density of each server node dramatically reduces the number of servers required, resulting in huge savings in cost, power, and space consumed in the data center. For simulations, high-dimension matrix multiplication requires a processor to fetch data from many neighbors for computation, making GPUs connected by NVIDIA NVLink ideal. HPC applications can also leverage TF32 in A100 to achieve up to 11X higher throughput for single-precision, dense matrix-multiply operations than was possible four years earlier.
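The data-locality point above can be seen in the multiply itself: every output element reads an entire row of one operand and an entire column of the other, so when the operands are sharded across GPUs, each GPU must fetch data held by its neighbors. A naive reference implementation in plain Python (illustrative only; real HPC codes use tuned BLAS/Tensor Core kernels):

```python
def matmul(A, B):
    """Naive dense matrix multiply: C[i][j] = sum_p A[i][p] * B[p][j].
    Each output element touches a full row of A and a full column of B,
    which is why fast GPU-to-GPU links (e.g., NVLink) matter when the
    matrices are distributed across GPUs."""
    n, k, m = len(A), len(B), len(B[0])
    return [[sum(A[i][p] * B[p][j] for p in range(k)) for j in range(m)]
            for i in range(n)]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```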

An HGX powered by A100 80GB GPUs delivers a 2X throughput increase over A100 40GB GPUs on Quantum Espresso, a materials simulation application, shortening time to insight.
Top HPC Apps: 11X More HPC Performance in Four Years

Quantum Espresso: Up to 1.8X Higher Performance for HPC Applications

Get in Touch

+27 (0) 11-881-5943
Click to email
Request a Quote