Technical specification and requirements Clause Samples
Technical specification and requirements. Both HemoCell and Palabos are written in C++ and use an on-demand compilation process: the code must be compiled for each specific end-user application, and the produced executables can then be re-used in the future, until a new compilation is required by a modification of the code or of the compilation options. Detailed instructions on how to compile HemoCell are available at [30]. The following software and libraries are needed to compile and run HemoCell:
• OpenMPI (≥ 1.10.2) or Intel MPI (≥ 17.0.5)
• GCC (≥ 5.2.0) or Intel C++ compiler (≥ 17.0.5)
• CMake ≥ 3.7.2
• HDF5 ≥ 1.8.16
• GNU “patch” utility ≥ 2.7.5
• h5py ≥ 2.6.0-1
• ParMETIS 4.0.3 (optional)
• Palabos 2.0
• Python
HemoCell uses Palabos to manage the initial domain decomposition and to voxelize geometries. The outputs, after post-processing with in-house developed tools, are then visualised through data visualisation software such as ParaView or VisIt.
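Before attempting a build, it can be useful to verify that the installed toolchain meets the minimum versions listed above. The following is a minimal sketch, not part of HemoCell itself; the function names and the simple dotted-integer version parsing are assumptions (real tool versions may carry suffixes that would need extra handling).

```python
def parse_version(s):
    """Split a dotted version string into a tuple of integers, e.g. "3.7.2" -> (3, 7, 2)."""
    return tuple(int(p) for p in s.split("."))

def meets_minimum(found, required):
    """True if the found version is at least the required minimum."""
    return parse_version(found) >= parse_version(required)

# Minimum versions taken from the HemoCell requirements list above
# (the "-1" packaging suffix of h5py is omitted for simplicity).
MINIMUMS = {
    "gcc": "5.2.0",
    "cmake": "3.7.2",
    "hdf5": "1.8.16",
    "patch": "2.7.5",
}

def check_toolchain(found_versions):
    """Return the names of tools that do not meet their minimum version."""
    return [tool for tool, minimum in MINIMUMS.items()
            if not meets_minimum(found_versions.get(tool, "0"), minimum)]
```

For example, `check_toolchain({"gcc": "4.9.0", "cmake": "3.16.0", "hdf5": "1.10.4", "patch": "2.7.6"})` would flag only `gcc` as too old.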
Technical specification and requirements. BAC is a collection of scripts, mainly written in Python, that wrap around common molecular dynamics (MD) codes to facilitate free energy calculations. Apart from a recent version of the Python interpreter, there are no external requirements to install and run BAC; however, the workflow execution strongly depends on the MD software package employed and on its performance on the selected hardware. The main MD codes used within BAC are:
• NAMD (> 2.9) [35] is a parallel MD code designed for high-performance simulations of large biomolecular systems. The code is written in C and is based on Charm++ parallel objects. NAMD uses the Charm++ native communications layer and the program “charmrun” to launch the executable processes. NAMD scales to hundreds of cores for typical simulations and beyond 500,000 cores for the largest ones. The code is distributed as a binary; however, compilation from source is necessary to tune performance and to use MPI (recommended on HPC systems). NAMD requirements:
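In the spirit of BAC's Python wrapping of MD codes, a launch of NAMD through charmrun can be sketched as a small helper that assembles the command line. The `+pN` flag setting the number of worker processes is standard charmrun usage; the function itself is a hypothetical illustration, not part of BAC, and binary names may differ between installations.

```python
def namd_launch_command(input_file, processes,
                        namd_binary="namd2", charmrun_binary="charmrun"):
    """Build the argument list for launching NAMD through charmrun.

    charmrun's "+pN" flag sets the number of worker processes; the
    default binary names match the standard NAMD distribution but are
    configurable, since HPC systems often install renamed builds.
    """
    if processes < 1:
        raise ValueError("need at least one process")
    return [charmrun_binary, "+p%d" % processes, namd_binary, input_file]
```

The resulting list can be handed directly to `subprocess.run`, e.g. `subprocess.run(namd_launch_command("equilibration.conf", 8))`.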
Technical specification and requirements. The code is written in modern Fortran and C, and its parallelisation is based on mesh partitioning, with MPI as the message-passing library for inter-node and task-level parallelism. To improve efficiency on multi-core shared-memory systems, some heavyweight loops are parallelised using OpenMP [10]. The MPI and OpenMP layers can be used at the same time in a hybrid scheme. GPU acceleration is available through OpenACC [11] directives or the CUDA API [12], offloading specific parts of the code, such as some matrix assembly loops or some types of solvers. Alya is also capable of actively using multi-code coupling on top of the MPI/OpenMP parallelisation. To build Alya from source, the following software is needed:
• GNU make [13]
• Fortran and C compilers
• MPI
• METIS [14] (optional)
• HDF5 [15] (optional)
• VTK [16] (optional)
• CUDA (optional)
The code is highly portable and compatible with most available compilers, including the GCC, Intel, Cray and IBM XL compilers. There are no specific requirements on the type and version of the compiler used, but OpenMP support and vectorisation capabilities can drastically improve the performance of the code. Alya is compatible with a variety of MPI implementations, including Intel MPI and OpenMPI, as well as bespoke libraries for specific hardware such as the Cray MPI library. The external library METIS is mainly used for domain decomposition at the MPI level; a compatible version of the library is shipped with the source to reduce dependencies and to simplify the installation procedure while avoiding version conflicts. Input data generally consists of text files with a set format; input files can be converted into binary inputs within Alya. Output files are post-processed using tools within Alya to generate standard-format files, such as the widely used EnSight format, which can be visualised with visualization software such as the open-source viewer ParaView [17].
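The domain decomposition underlying Alya's MPI parallelism can be illustrated in miniature. Real runs use METIS graph partitioning, which also minimises the communication surface between subdomains; the sketch below (a hypothetical helper, written in Python purely for illustration) only balances the element count by assigning contiguous ranges of mesh elements to ranks.

```python
def partition_elements(n_elements, n_ranks):
    """Assign contiguous element ranges to MPI ranks as evenly as possible.

    Returns a list of (start, end) half-open ranges, one per rank.
    When n_elements is not divisible by n_ranks, the first ranks
    receive one extra element each.
    """
    base, extra = divmod(n_elements, n_ranks)
    ranges, start = [], 0
    for rank in range(n_ranks):
        size = base + (1 if rank < extra else 0)
        ranges.append((start, start + size))
        start += size
    return ranges
```

For instance, 10 elements over 3 ranks yields `[(0, 4), (4, 7), (7, 10)]`: every element is owned by exactly one rank, with loads differing by at most one.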
