Free Compute for Research in Egypt


The Bibliotheca Alexandrina (BA), the new Library of Alexandria in Egypt, has always been keen to provide researchers with the resources they need - be it books, periodicals, online databases, or conferences and events related to the various domains of scientific research. Today, much of the scientific research being conducted requires advanced computational resources for running simulations and analyzing data. For that reason, and in keeping with its commitment to supporting scientific research in Egypt, the Bibliotheca Alexandrina has operated its own supercomputing facility within the BA's International School of Information Science since 2009.
The BA Supercomputing Facility offers researchers in Egypt merit-based access to a High-Performance Computing (HPC) cluster - the BA-HPC - where they may conduct simulated experiments and process data.


Meet the Machine


The current BA High-Performance Computing (HPC) machine is BA-HPC C2, where "C2" is short for "Compute Cluster, Version 2." The C2 parallel compute cluster is capable of a theoretical peak performance that exceeds 100 TFLOPS. The cluster includes scratch storage provided via the Lustre parallel file system, InfiniBand networking for high-speed interconnect, as well as General-Purpose Graphics Processing Units (GPGPUs) that serve as accelerators.
The new BA-HPC C2 cluster, installed at the BA facility in August 2016, continues the mission of the BA Supercomputing Facility to serve as a High-Performance Computing platform for research projects at Egypt's universities and research institutes. The C2 cluster also provides allocations for researchers within the EC-funded VI-SEEM project, to which the BA contributes both HPC and storage resources.
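The "exceeds 100 TFLOPS" figure quoted above can be checked against the specifications listed below. Treating the 88.14 TFLOPS entry as the CPU partition's theoretical peak (our reading of the specification, since the accelerator peak is listed separately) and adding the accelerator peak gives:

```latex
R_{\text{peak}} \approx R_{\text{CPU}} + R_{\text{GPU}}
              = 88.14 + 29.92
              = 118.06\ \text{TFLOPS} > 100\ \text{TFLOPS}
```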


BA-HPC C2

Number of servers: 98
Server specification: Huawei FusionServer X6800 Module
CPUs per server: 2
RAM per server: 128 GB
RAM per GPU node: 64 GB
Total number of CPU cores: 1,968
Max number of parallel processes: 1,968

Interconnect type: QDR InfiniBand
Interconnect latency: TBA
Interconnect bandwidth: 40 Gbps
Local filesystem type: Lustre
Total storage: 288 TB
Accelerator type: NVIDIA Tesla K80
Accelerators per server: 2
Servers equipped with accelerators: 16

Peak performance (TFLOPS): 88.14
Peak performance, accelerators (TFLOPS): 29.92
Real performance (TFLOPS): 79.32
Operating system: CentOS 6.8
Batch system/scheduler: Open Grid Scheduler
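Jobs on the cluster are managed by the Open Grid Scheduler listed above. As a rough illustration, a minimal Grid Engine submission script might look like the following; note that the parallel-environment name "mpi", the resource limits, and the executable name "my_app" are placeholders - the actual values are site-specific and should be taken from the facility's own documentation:

```shell
#!/bin/bash
#$ -N my_simulation        # job name (placeholder)
#$ -cwd                    # run the job from the submission directory
#$ -pe mpi 64              # request 64 slots; the PE name "mpi" is an assumption
#$ -l h_rt=02:00:00        # hard wall-clock limit of 2 hours
#$ -j y                    # merge stdout and stderr into one output file

# Launch the MPI application across the allocated slots.
# $NSLOTS is set by Grid Engine to the number of granted slots.
mpirun -np $NSLOTS ./my_app
```

The script would then be submitted with `qsub job.sh` and monitored with `qstat`.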

History


The first High-Performance Computing (HPC) cluster - BA-HPC C1 - was installed in 2009 as a joint project with Egypt's Ministry of Communications and Information Technology. C1 continued to serve researchers in Egypt for years onward, well into 2016. C1 also hosted numerous projects allocated through the LinkSCEEM-2 European project, in which the BA was a partner. After LinkSCEEM-2 closed in 2014, C1 went on to host projects allocated through the Cy-Tera and Eastern Mediterranean HPC Production Access Call. Resources from C1 were also used for large-scale image processing in the book digitization workflow at the BA. As of August 2016, C1 had hosted 46 projects and logged a total of 8,526,115 core-hours of parallel processing time.

BA-HPC C1

Number of servers: 130
Server specification: Sun Blade X6250 Server Module
CPUs per server: 2
RAM per server: 8 GB
Total number of CPU cores: 1,040
Max number of parallel processes: 1,040

Interconnect type: DDR InfiniBand
Interconnect latency: 3.3 μs
Interconnect bandwidth: 10 Gbps
Local filesystem type: Lustre
Total storage: 36 TB
Accelerator type: none

Peak performance, CPU (TFLOPS): 11.8
Real performance (TFLOPS): 9.1
Operating system: CentOS 6.7
Batch system/scheduler: Open Grid Scheduler

Meet the Users


The BA-HPC has had the pleasure of hosting diverse research projects affiliated with renowned academic and research institutions in Egypt, as well as research projects allocated through regional partnerships.
Our users include: