MeluXina Technology

- 18 PFlops of compute performance
- 21 PB of data storage
- InfiniBand HDR 200G interconnect

Compute modules: Cluster – CPU, Accelerator – GPU, Accelerator – FPGA, Large Memory

Data tiers: Scratch, Project, Backup, Tape

Workloads: traditional HPC (computational), AI and Big Data / HPDA

Cloud access: for complex use cases and persistent services

Cluster – CPU

573 nodes, each equipped with 2 AMD Rome CPUs (2x 64 cores @ 2.6 GHz) and 512 GB of RAM, for a total of 73,344 cores and 293 TB of RAM. This module is designed to offer outstanding performance for most workloads and compatibility with the broadest range of applications.
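As a quick sanity check, the stated aggregates follow directly from the per-node figures (a minimal sketch; all inputs are the numbers given above, and decimal TB is assumed):

```python
# Cluster - CPU module figures as stated in the text.
nodes = 573
cores_per_node = 2 * 64        # 2 AMD Rome CPUs, 64 cores each
ram_per_node_gb = 512

total_cores = nodes * cores_per_node
total_ram_tb = nodes * ram_per_node_gb / 1000  # decimal TB assumed

print(total_cores)             # 73344 cores, matching the stated total
print(round(total_ram_tb))     # 293 TB of RAM, matching the stated total
```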

Accelerator – GPU

200 GPU-accelerated nodes, each with 2 AMD Rome CPUs (2x 32 cores @ 2.35 GHz) and 4 NVIDIA A100 GPUs with 40 GB HBM. Each node also includes 512 GB of RAM, a local SSD of 1.92 TB and 2 links to the InfiniBand HDR fabric. These nodes provide a significant improvement in computing speed for workloads exhibiting strong data parallelism and are particularly suited for artificial intelligence, machine/deep learning and data analytics workloads.
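For a sense of the module's overall scale, the per-node figures above imply the following aggregates (a minimal sketch; the inputs are the text's own numbers, and the totals are simple derived arithmetic, not stated in the source):

```python
# Accelerator - GPU module figures as stated in the text.
nodes = 200
gpus_per_node = 4
hbm_per_gpu_gb = 40            # NVIDIA A100 with 40 GB HBM

total_gpus = nodes * gpus_per_node
total_hbm_tb = total_gpus * hbm_per_gpu_gb / 1000  # decimal TB assumed

print(total_gpus)              # 800 A100 GPUs across the module
print(total_hbm_tb)            # 32.0 TB of aggregate GPU memory
```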

Accelerator – FPGA

20 FPGA nodes, each composed of 2 AMD Rome CPUs (2x 32 cores @ 2.35 GHz) and 2 Intel Stratix 10MX 16 GB HBM FPGA cards. Each node has 512 GB of RAM, a local SSD of 1.92 TB and dual links to the InfiniBand fabric. FPGA – Field Programmable Gate Array – cards are (re)configurable processors, particularly well suited for real-time applications and streaming data analytics/decision making.

Large Memory

Large memory nodes are similar to MeluXina’s Cluster-CPU nodes but offer a much larger RAM capacity for particularly demanding workloads. Each large memory node is composed of 2 AMD Rome CPUs (2x 64 cores @ 2.6 GHz), has 4 TB of memory (4096 GB) and 1.92 TB of local storage, and is dual-linked into the InfiniBand fabric. In-memory computing accelerates applications that work on large datasets which, on a lesser system, would require costly disk accesses.


Scratch

MeluXina’s scratch data tier is a 0.5 PB, all-flash, Lustre-based HPC filesystem delivering a blazing 400 GB/s, meant for very intensive I/O operations.


Project

The project data tier is MeluXina’s primary capacity tier, providing 12.5 PB at 190 GB/s throughput. User home directories and shared project spaces fit comfortably here. For an extra need for speed, see Scratch.


Backup

MeluXina hosts a 7 PB data tier meant for backups – those project results for which you want a safety copy (just in case).


Tape

Long-term archiving of project results can be done on MeluXina’s 5 PB tape library.