HPC Resources/Avitohol

From VI-SEEM Wiki

The supercomputer Avitohol at IICT-BAS consists of 150 HP Cluster Platform SL250s Gen8 servers, each equipped with two Intel Xeon E5-2650 v2 8-core 2.6 GHz CPUs and two Intel Xeon Phi 7120P coprocessors. Six management nodes control the cluster, four of which are dedicated to providing access to the storage system through Fibre Channel. The storage system is an HP MSA 2040 SAN with a total of 96 TB of raw disk capacity. All servers are interconnected with fully non-blocking FDR InfiniBand in a fat-tree topology. HP CMU is used for cluster management, together with the Torque/Moab combination for local resource management. Most of the computing capacity of the system comes from the Intel Xeon Phi 7120P coprocessors, which are based on Intel's Many Integrated Core (MIC) architecture. For optimal use of these resources, the Intel compilers and the Intel MKL are deployed. Since the supercomputer is relatively new, software and libraries are still being deployed and the computational environment is being streamlined.
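To illustrate how jobs are typically submitted through the Torque/Moab stack described above, here is a minimal batch-script sketch. The queue name, module names, and resource limits are hypothetical placeholders and must be checked against the site's actual configuration.

```shell
#!/bin/bash
# Minimal Torque job script sketch -- queue and module names are hypothetical.
#PBS -N avitohol-test          # job name
#PBS -q batch                  # queue name (site-specific)
#PBS -l nodes=2:ppn=16         # 2 servers x 16 cores, matching the node layout
#PBS -l walltime=01:00:00      # one-hour limit

cd "$PBS_O_WORKDIR"

# Load the Intel toolchain; actual module names vary between sites.
module load intel-compilers intel-mkl openmpi

# Launch one MPI rank per core; the binary may additionally offload work
# to the Xeon Phi coprocessors via the Intel compiler's offload support.
mpirun -np 32 ./my_app
```

Submission would then be `qsub job.sh`, with `qstat` used to monitor the job's state in the queue.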


Technical Specification

Administrative Data
Name: Avitohol
Short description: Bulgarian multifunctional high performance computing cluster
Owner: IICT-BAS
Country: Bulgaria
Computational Power
Number of servers: 150
Server specification: HP ProLiant SL250s Gen8
CPUs per server: 2
CPU type: Intel Xeon E5-2650 v2 8C 2.6 GHz
RAM per server: 64 GB
Total number of CPU cores: 2,400
Max number of parallel processes: 4,800
Interconnect type: FDR InfiniBand
Interconnect latency: 1.1 μs
Interconnect bandwidth: 56 Gbps
Local filesystem type: Lustre
Total storage (TB): 96
Accelerator type: Intel Xeon Phi 7120P
Cores per accelerator: 61
Accelerators per server: 2
Servers equipped with accelerators: 150
Peak performance, CPUs (Tflops): 50
Peak performance, accelerators (Tflops): 362
Peak performance, total (Tflops): 412
Real performance (Tflops): 264
Operating system: Red Hat Enterprise Linux for HPC Compute Node
Version: 6.7 (Santiago)
Batch system/scheduler: Torque/Moab
Development tools: Intel Compilers (C/C++, Fortran), GNU Compilers, OpenMPI, CUDA, TotalView, Scalasca, TAU, gprof, gdb, pgdbg, Program Database Toolkit
Libraries: Intel MKL, HDF5, FFTW, NetCDF, GSL, LAPACK, Boost, BLAS
Applications: GROMACS, NAMD, Desmond, VMD
Dedication to VI-SEEM
CPU (percent): 10%
Storage (percent): 5%
Accelerators (percent): 10%
CPU (core-hours per year): 2,102,400
Storage (TB): 5
Accelerators (hours per year): 16,030,800
Integration
System operational since: Jun 2015
Available to the project from: PM04
Expected date system to be phased out: N/A
Interfaces: SSH, gridFTP
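The SSH and gridFTP interfaces listed above would typically be used along the following lines; the hostname shown is a placeholder (use the address provided with your account), and the gridFTP transfer assumes the Globus command-line tools and a valid grid proxy.

```shell
# Interactive access over SSH (hostname is a placeholder).
ssh username@avitohol.example.bas.bg

# Bulk data transfer over gridFTP with the Globus tools, assuming a grid
# proxy has already been initialised (e.g. with grid-proxy-init).
globus-url-copy file:///local/data.tar \
    gsiftp://avitohol.example.bas.bg/home/username/data.tar
```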