
Fesenkov Astrophysical Institute (FAI) operates a computer cluster that consists of several high-performance computing nodes equipped with high-end multi-core CPUs and GPU cards. The theoretical single-precision performance of the cluster currently peaks at 75 TFLOPS for CPU operations (820 cores / 1 640 threads) and 1235 TFLOPS for GPU (304 640 CUDA cores). In addition, it has redundant data storage of 230 terabytes. The cluster has a 10-gigabit interlink and operates under a Linux-based OS. Task scheduling is organized using the SLURM workload manager. For external users, computation and data storage operations on the FAI cluster can be arranged via online task submission, which, once approved, results in an SSH account on the master node with agreed permissions.

The FAI cluster is constantly growing: the number of computing servers and GPU cards, the storage capacity, and the interlink speed are increased whenever possible.

Computational Nodes

The computer cluster is equipped with three types of computing servers: custom ones based on consumer-grade computer components, professional ones based on Intel Xeon processors, and professional servers based on AMD Epyc processors.

Custom Server
Intel Xeon
AMD Epyc

Custom servers are designed to provide maximum per-core performance, which is not available in professional solutions. To achieve this, they use overclocked high-end processors such as the i7-5960X, i9-9900K, and i9-10900K. The maximum number of cores in such systems is 10, and their simultaneous all-core operating frequency reaches 5.5 GHz. The amount of RAM varies from 16 to 64 gigabytes.

Professional servers based on Intel Xeon processors are dual-processor workstations from SuperMicro, in particular GPU SuperWorkstation 7049GP-TRT and GPU SuperServer SYS-740GP-TNRT. These systems are equipped with Xeon 6126, Xeon 6226, Xeon 6326 and Xeon 8362 processors. The amount of RAM varies from 80 to 256 gigabytes. The number of cores per processor/server is 12/24, 16/32 and 32/64. It is possible to install up to 4 full-size dual-slot or 2 triple-slot GPU cards in each server.

Professional servers based on AMD Epyc processors are implemented on the dual-processor SuperMicro platforms GPU A+ Server 4124GS-TNR and GPU A+ Server AS-4125GS-TNRT2. These systems have the processors with the highest core counts in the cluster, namely the Epyc 7763 and Epyc 9654, with 64 and 96 cores, respectively. The amount of RAM in these servers reaches 512 and 768 gigabytes. Each can accommodate up to 8 or even 10 full-size dual-slot GPU cards.

The total CPU performance of all servers is 75.2 teraflops for single-precision calculations using vector instructions (AVX2/AVX-512).
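Such peak numbers follow from cores × clock × FLOPs-per-cycle. A minimal sanity-check sketch for a single Xeon 8362 socket, assuming an operating clock of 2.8 GHz and two AVX-512 FMA units (illustrative values, not measurements; real sustained AVX-512 clocks differ):

```shell
# Back-of-envelope FP32 peak for one Xeon 8362 socket (32 cores).
# The 2.8 GHz clock and 64 FLOPs/cycle (2 FMA units x 16 FP32 lanes
# x 2 ops per FMA) are illustrative assumptions.
awk 'BEGIN {
    cores = 32; ghz = 2.8
    flops_per_cycle = 2 * 16 * 2
    printf "%.1f TFLOPS\n", cores * ghz * flops_per_cycle / 1000
}'
```

Summing such per-socket estimates over all servers gives the cluster-wide total.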

GPU Cards
Several models of high-end graphics cards are actively used in the cluster, such as the Nvidia GTX 1080, GTX 1080 Ti and RTX 2080 Ti, as well as the newer RTX 3090 24 GB and RTX 4090 24 GB. While these models belong to the consumer and gaming segment, they contain a large number of universal computing cores (CUDA cores) at an affordable price, allowing massive, cost-effective general-purpose calculations.

PALIT RTX 3090 GamingPro OC

INNO3D GeForce RTX 4090 X3 OC

The theoretical peak performance of all graphics cards is 1296 teraflops (for single precision calculations).
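The per-card contribution follows the same cores × clock × 2 rule, since each CUDA core can retire one fused multiply-add (two floating-point operations) per cycle. A sketch for a single RTX 4090, using the publicly listed 16 384 CUDA cores and ~2.52 GHz boost clock as assumed inputs:

```shell
# Theoretical FP32 peak of one RTX 4090:
# CUDA cores x boost clock x 2 ops per FMA (assumed public specs).
awk 'BEGIN {
    cuda_cores = 16384; boost_ghz = 2.52
    printf "%.1f TFLOPS\n", cuda_cores * boost_ghz * 2 / 1000
}'
```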

Data Storage

For data storage, a cluster of two identical Synology RackStation RS4021xs+ NAS servers is used. Each server is equipped with 16 Seagate Exos X18 hard drives of 18 terabytes each, organized in a RAID6 array. The servers are connected by a dedicated 10-gigabit channel for data synchronization. Each server can survive the simultaneous loss of two disks, and the cluster as a whole can survive the complete failure of one of the servers, which ensures high reliability of data storage. The current capacity with 16 disks is 230 terabytes; with the RX1217/RX1217RP expansion modules, each server can be expanded to up to 40 disks.
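The quoted 230-terabyte figure is consistent with RAID6 arithmetic: two disks' worth of space out of the 16 holds parity, and the NAS reports the remaining vendor (decimal) terabytes in binary units. A quick sketch:

```shell
# RAID6 usable space: (disks - 2 parity) x 18 TB per disk,
# converted from decimal terabytes to the tebibytes the NAS reports.
awk 'BEGIN {
    disks = 16; parity = 2; tb = 18
    raw_tb = (disks - parity) * tb            # 252 decimal TB
    printf "%.0f TiB usable\n", raw_tb * 1e12 / 2^40
}'
```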

Synology RackStation RS4021xs+ High Availability Cluster

All space on the NAS cluster is organized as a single Btrfs volume and is accessible from every computing server in the cluster, as well as from every workstation in the institute. This system is essential for storing observational and computational data, as well as user working data.

Service Servers

To operate the cluster, a whole set of services is needed: a firewall, DHCP, DNS, LDAP, Grafana, Slurm, and many others. For these purposes, the FAI cluster has two dedicated single-unit SuperMicro SuperServer SYS-510P-WTR servers with identical characteristics (Intel Xeon 6314U 32C/64T, 64 GB RAM). Both servers are organized into a Proxmox cluster, where all the necessary services run in a virtualized environment. The high performance of the servers makes it possible to enclose many services in individual containers and virtual machines while ensuring high speed and low latency in their operation. Thanks to the two-server architecture, if one of the servers fails, all containers located on it migrate to the surviving server within a minute, ensuring near-uninterrupted operation of all services.

SuperMicro SuperServer SYS-510P-WTR cluster

These servers also provide hosting for Internet services developed at the institute.

Interlink
All inter-server communications in the cluster are carried out over an Ethernet network using two 1-gigabit switches (TP-Link SG1016D and TP-Link SG1024D) and two 10-gigabit switches (NETGEAR XS708T and NETGEAR XS728T). Custom servers are linked together via a 1-gigabit switch using category 5 UTP cables, while almost all professional servers (except the 4124GS-TNR), as well as the network and service servers, are equipped with two 10-gigabit Ethernet interfaces and interconnected via the 10-gigabit switches using category 6a cables.

Internet
Internet access for the FAI cluster is provided by two independent connections: a main one and a backup one. The main connection is a fiber-optic line with 100 Mbps bandwidth; the backup is a 25 Mbps radio-bridge line. Both channels operate in load-balancing mode, providing fault tolerance and increasing the effective bandwidth up to 125 Mbps. If one of the channels fails, all connections passing through it are automatically re-established through the second channel, ensuring redundancy of the communication line with the cluster.

Power
The cluster is powered through 6 Eaton 9SX 6000iR uninterruptible power supplies (UPS), 6000 VA / 5400 W each, with a total power of 6 × 5.4 kW = 32.4 kW. The peak power consumption of the cluster is 12 kW. This excess capacity is intended to provide fault tolerance, as well as the ability to maintain and replace UPSs on the fly, without shutting down servers. For this reason, all professional servers are connected simultaneously to several independent UPSs, which is possible due to the presence of several (from 2 to 4) power supplies in the servers and of a sufficient number (8) of C14 connectors on each Eaton 9SX 6000iR unit. As a result, disconnection or failure of one UPS or power supply does not result in a power interruption for the server. The double-conversion design ensures strong protection and stable power characteristics, which minimizes the use of the batteries and extends their life. If mains power is lost, the entire cluster is supplied with backup power for 5 minutes; after that, power is retained only for the critical load (service servers, NAS servers, switches, Internet modems and antennas) until the batteries are fully drained. This allows the critical load to remain online for approximately 3.5 hours after a power loss.

Cooling
To cool the cluster, a precision air-conditioning split system, Mitsubishi Electric SPLIT EVO INV IN/OUT 0071 LT, with a cooling capacity of 17.3 kW is used. This air conditioner is equipped with a free-cooling function, which allows the cluster to be cooled with outside air, without running the compressor, for most of the year, significantly saving energy.

Tasks
The main purpose of the cluster is computer modeling and data storage and processing: in particular, simulations of the dynamics of star clusters, galactic centers, galaxies and galactic systems, as well as storage and processing of observational data coming from the FAI observatories.

Access
Access to the cluster is organized in two steps via the SSH protocol. First, a user logs in to the gateway server and from there proceeds to the internal servers, usually the master server, from which computational tasks are submitted using the SLURM task scheduler. Access to the cluster can be requested via an application form on the Kazakhstan Virtual Observatory web-portal.
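In practice the two hops can be combined into one command, `ssh -J user@gateway user@master` (the actual host names and credentials are assigned when access is granted), and tasks are then queued on the master node with `sbatch`. A minimal job-script fragment might look like this; the job name, resource counts, and executable are placeholders, not the cluster's actual configuration:

```shell
#!/bin/bash
# Minimal SLURM batch script (all values are illustrative placeholders).
#SBATCH --job-name=nbody-run      # name shown in the queue
#SBATCH --ntasks=1                # number of tasks to launch
#SBATCH --cpus-per-task=8         # CPU cores per task
#SBATCH --gres=gpu:1              # request one GPU card, if needed
#SBATCH --time=01:00:00           # wall-time limit

srun ./simulate                   # placeholder executable
```

Queued with `sbatch job.sh`, the task is then dispatched by SLURM to a computing node with the requested resources.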