HPC Resources/ARIS

From VI-SEEM Wiki

ARIS (Advanced Research Information System) is an HPC cluster based on IBM's NeXtScale platform, incorporating Intel Xeon E5 v2 (Ivy Bridge) processors. It has a theoretical peak performance (Rpeak) of 190.85 TFlops and a sustained performance (Rmax) of 179.73 TFlops on the Linpack benchmark. Its 426 compute nodes each contain two 10-core CPUs (Intel Xeon E5-2680v2, 2.8 GHz) and 64 GB of RAM, offering more than 8,500 CPU cores in total, interconnected through an FDR InfiniBand network, a technology offering very low latency and high bandwidth. In addition, the system offers about 1 petabyte of storage based on the IBM General Parallel File System (GPFS). The system software supports developing and running scientific applications and provides several pre-installed compilers, scientific libraries, and popular scientific application suites.
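The quoted core count follows directly from the node configuration described above; a quick sketch of the arithmetic:

```shell
# 426 compute nodes, each with 2 sockets of 10 cores (Intel Xeon E5-2680v2)
nodes=426
sockets_per_node=2
cores_per_socket=10
total_cores=$((nodes * sockets_per_node * cores_per_socket))
echo "$total_cores"   # 8520 CPU cores in total
```

This matches the "more than 8,500 processor cores" figure and the maximum parallel-process count listed in the specification below.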

Hpc-aris.jpg


Technical Specification

Administrative Data
Name: ARIS
Short description: Greek Tier-1 HPC system
Owner: GRNET S.A.
Country: Greece

Computational Power
Number of servers: 426
Server specification: IBM NeXtScale nx360 M4
CPUs per server: 2
RAM per server: 64 GB
Total number of CPU cores: 8,520
Max number of parallel processes: 8,520
Interconnect type: FDR-14 InfiniBand
Interconnect latency: 2.5 μs
Interconnect bandwidth: 40 Gbps
Local filesystem type: IBM GPFS
Total storage (TB): ~1,000 (about 1 PB)
Accelerator type: -
Number of accelerator cores: -
Accelerators per server: -
Servers equipped with accelerators: -
Peak performance, CPU (TFlops): 190.85
Peak performance, accelerators (TFlops): -
Peak performance, total (TFlops): 190.85
Sustained performance, Linpack (TFlops): 179.73
Operating system: Red Hat Enterprise Linux
Version: 6.6
Batch system/scheduler: SLURM
Development tools: Intel, PGI, and GNU compilers; Intel MPI, OpenMPI; gdb, gdb-ia, pgdbg, ddd; VTune, Scalasca, mpiP, gprof, pgprof
Libraries: ACML, ATLAS, BOOST, ElmerFEM, ELPA, FFTW, GSL, libFLAME, Libint, Libxc, METIS, MKL, OpenBLAS, ParMETIS, ScaLAPACK, Voro++, GLPK, JasPer, NetCDF, HDF5, UDUNITS, MED
Applications: ABinit, AByS, BigDFT, CP2K, Desmond, GAMESS US, GROMACS, LAMMPS, MDynaMix, MPQC, NAMD, NWChem, Octopus, OpenMD, PLUMED, Quantum ESPRESSO, WRF, WRF-CHEM, Code Saturne, OpenFOAM

Dedication to VI-SEEM
CPU (percent): 5%
Storage (percent): 5%
Accelerators (percent): -
CPU (core-hours per year): 3,000,000
Storage (TB): 50
Accelerators (hours per year): -

Integration
System operational since: July 2015
Available to the project from: PM01
Expected date system to be phased out: N/A
Interfaces: SSH
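Access is via SSH, and jobs are managed by the SLURM scheduler listed above. A minimal submission-script sketch follows; the account, partition defaults, and module names are illustrative assumptions, not the actual ARIS configuration, so check the site documentation and `module avail` on the login node before use:

```shell
#!/bin/bash
#SBATCH --job-name=demo          # job name shown in the queue
#SBATCH --nodes=2                # two of the 426 compute nodes
#SBATCH --ntasks-per-node=20     # one MPI rank per core (2 x 10-core CPUs)
#SBATCH --time=01:00:00          # wall-clock limit
#SBATCH --account=myproject      # hypothetical project account name

# Module names are illustrative; the system provides Intel, PGI, and GNU
# toolchains with Intel MPI and OpenMPI.
module load intel intelmpi

# Launch 40 MPI ranks across the two allocated nodes.
srun ./my_mpi_app
```

The script would be submitted with `sbatch job.sh`, and `squeue -u $USER` shows its status in the queue.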