The High Performance Computing Center (HPCC) was created to build the educational and scientific information infrastructure, implementing the Decree of the President of Ukraine of October 20, 2005, № 1497.2005 “About priorities for the introduction of advanced information technologies”, the tasks of the national program “Information and Communication Technologies in Education and Science, 2006–2010”, and the corresponding Order of the Cabinet of Ministers of Ukraine № 301-r of May 31, 2006.

The technical site of the HPCC also hosts the equipment of the World Data Center (WDC) for Geoinformatics and Sustainable Development. The WDC has a separate external Internet channel. Its functions are provided by service servers merged into a single subnet and by storage servers based on IpStor software (6.5 TB in a fault-tolerant configuration).

The site is equipped with a climate-control system based on APC InRow water cooling and an APC Symmetra power-protection system, which guards against voltage drops and reduces the power consumed by the cluster by shutting compute nodes down cleanly during a temporary outage or a complete loss of power.


Over the years, the NTUU “KPI” cluster supercomputer has been upgraded twice, each time increasing its computing power: starting from 44 nodes with a peak performance of about 2 TFLOPS, the main cluster currently has 112 nodes with 624 processors, providing a peak performance of 7 TFLOPS (Linpack: 5.7 TFLOPS).

The current configuration of the NTUU “KPI” cluster consists of 2 systems.

The first provides the bulk of the computing power and runs under the Linux OS.

The second hosts a cluster operated under MS Windows HPC Edition, as well as an academic cluster used for training courses and laboratory work.

The internal interconnect in both systems is implemented with separate InfiniBand switches. The service network, which connects the cluster's service servers and the service laboratory, is built on Gigabit Ethernet technology.

The external communication channel is provided over KPI-Telecom fiber, through which the cluster has access to the network resources of URAN.

The specifications of the cluster nodes are presented in the table below.

System 1

  Processors                                RAM    Storage Space
  2 x 4-core Intel Xeon E5440 @ 2.83 GHz    8 GB
  2 x 2-core Intel Xeon 5160 @ 3.00 GHz     4 GB

System 2

  Processors                                RAM    Storage Space
  2 x 4-core Intel Xeon E5345 @ 2.33 GHz    8 GB   500 GB
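As a rough cross-check of the quoted performance figures, the per-node theoretical peak can be derived from the table. The sketch below assumes 4 double-precision FLOPs per core per cycle (typical for these Core-microarchitecture Xeons with SSE); that factor is an assumption, not a figure from the original text:

```python
# Rough per-node theoretical peak for the node types in the table above.
# The flops_per_cycle=4 default is an assumption (SSE double precision),
# not a figure stated in the original document.

def node_peak_gflops(sockets, cores_per_socket, ghz, flops_per_cycle=4):
    """Theoretical peak performance of one node, in GFLOPS."""
    return sockets * cores_per_socket * ghz * flops_per_cycle

# System 1, Xeon E5440 nodes: 2 sockets x 4 cores @ 2.83 GHz
print(node_peak_gflops(2, 4, 2.83))   # ~90.6 GFLOPS per node
# System 1, Xeon 5160 nodes: 2 sockets x 2 cores @ 3.00 GHz
print(node_peak_gflops(2, 2, 3.00))   # 48.0 GFLOPS per node
# System 2, Xeon E5345 nodes: 2 sockets x 4 cores @ 2.33 GHz
print(node_peak_gflops(2, 4, 2.33))   # ~74.6 GFLOPS per node
```

Multiplying these per-node figures by the counts of each node type (the exact mix is not given in the text) would yield the cluster total; the document quotes 7 TFLOPS peak for the full 112-node system.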

The storage system is based on the Lustre network file system and has a capacity of 6 TB. An additional 20 TB for long-term storage is provided by an Overland tape storage device.

The cluster's resources are provided for computing within the ALICE experiment at the Large Hadron Collider at CERN. The grid interface to the cluster software is provided by NorduGrid ARC and gLite. In addition to grid access, local access is available via the SSH and RDP protocols to the Linux and Windows parts, respectively.
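Job submission through NorduGrid ARC is typically driven by an xRSL job description. The fragment below is an illustrative sketch only, not taken from the Center's documentation; the executable and file names are placeholders:

```
&(executable="run.sh")
 (jobName="demo-job")
 (stdout="out.txt")
 (stderr="err.txt")
 (cpuTime="60")
```

Such a description would be handed to the ARC client for submission, after which the job is scheduled onto the cluster's batch system.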

Among the problems being solved on the cluster are:

  • modern quantum-chemical calculations (ab initio or DFT);
  • classical and ab initio molecular dynamics;
  • astronomical simulations, and others.

The staff ensures the operation of the cluster, performs administrative tasks, helps users solve problems they encounter in their work, and installs or assists in configuring new software at their request. It consists mainly of graduate students with specialized skills. The research advisor of the Center since its founding is Professor A.I. Petrenko; the head of the Center until 2008 was S. Velichkevych, and since 2008 the Center has been led by S.G. Stirenko.

With the beginning of the second phase of HPCC development, the emphasis has shifted from increasing computing power to improving the use of its resources in education and scientific research at the University within its Unified Information Environment, and to creating the conditions for cooperation with scientific and educational organizations, institutions, and enterprises in the shared use of computational resources and information technologies.

To achieve this, the HPCC was incorporated into the Design Office of Information Systems (DOIS), managed by Professor A. Y. Savitsky, where the university's other computing resources have been consolidated.

Along with the main direction of cluster usage, the implementation of international and joint (with the NAS of Ukraine) grid and HPC projects under the scientific management of Professor A. I. Petrenko, two more directions were introduced:

  • implementation and deployment of the Center's application software for solving complex scientific and engineering problems, led by Professor A. M. Novikov;
  • system software and the technical and information support of the Center, led by Professor G.M. Lutsk.