  • Shukirgaliyev B., Parmentier G., Just A., et al. The star cluster survivability after gas expulsion is independent of the impact of the Galactic tidal field. 2019, MNRAS, Vol. 486, is. 1, p. 1045-1052. 2019MNRAS.486.1045S
  • Kochergin A., Husárik M., Svoreň J., et al. Rapid variations of dust colour in comet 41P/Tuttle-Giacobini-Kresák. 2019, MNRAS, Vol. 485, is. 3, p. 4013-4023. 2019MNRAS.485.4013L
  • Khoperskov S., Di Matteo P., Gerhard O., et al. The echo of the bar buckling: Phase-space spirals in Gaia Data Release 2. 2019, A&A, Vol. 622, p. L6 (7 pp.). 2019A&A...622L...6K
  • Zinchenko I.A., Just A., Lara-Lopez M.A., et al. Characterizing the radial oxygen abundance distribution in disk galaxies. 2019, A&A, Vol. 623, p. A7 (10 pp.). 2019A&A...623A...7Z
  • Wang S., Jones M., Zakhozhay O.V., et al. HD 202772A b: A Transiting Hot Jupiter around a Bright, Mildly Evolved Star in a Visual Binary Discovered by TESS. 2019, AJ, Vol. 157, is. 2, p. 51 (11 pp.). 2019AJ....157...51W
  • González-Morales P.A., Cally P.S., Khomenko E.V. Fast-to-Alfvén Mode Conversion Mediated by Hall Current. II. Application to the Solar Atmosphere. 2019, ApJ, Vol. 870, is. 2, p. 94 (12 pp.). 2019ApJ...870...94G







Astronomic&Space Information and Computing Center


The Computing Laboratory was founded in 1965 as part of the Fundamental Astrometry department and was headed by Dr.Sci. Duma D.P. Later the laboratory also took charge of measuring devices and was renamed the Computing and Measuring Center. Its head, Dr.Sci. Taradiy V.K., established the structure of the Center, creating positions for qualified young specialists in programming and electronics engineering. The Astro/Space Information and Computing Center was founded in 1994 on the basis of the Computing and Measuring Center; Dr.Sci. Berczik P.P. became the head of the ASICC.

The basic tasks of the Center are: information support of the scientific research of MAO; provision of all kinds of information services; support of information exchange between MAO and the worldwide community; maintenance of MAO computing equipment and facilities; GRID cluster services.

The ASICC maintains the Observatory's local network of about 200 personal computers.


The Department Staff.

Veles Oleksandr, acting Head of the ASICC, senior researcher, PhD, veles(at)mao.kiev.ua, room 117, tel. 7-00
Pakuliak Ludmila, senior researcher, PhD, pakuliak(at)mao.kiev.ua, room 220, tel. 3-46
Bulba Tamara, leading engineer, tamara(at)mao.kiev.ua, room 231, tel. 3-05
Lobortas Valentin, leading engineer, lobortas(at)mao.kiev.ua, room 231, tel. 3-05
Vedenicheva Irina, leading engineer, iv(at)mao.kiev.ua, room 231, tel. 3-05
Zolotukhina Anastasia, junior researcher, nastya(at)mao.kiev.ua, room 218, tel. 3-32
Parusimov Grigoriy, engineer I class, parus(at)mao.kiev.ua, room 231, tel. 3-05
Sobolenko Margarita, engineer I class, sobolenko(at)mao.kiev.ua, room 116, tel. 3-47
Ivanov Daniel, engineer I class, ivanovdd(at)mao.kiev.ua, room 116, tel. 3-47

The tasks of the scientific and technical group are: programming and technical support of MAO scientific research; support of information exchange between MAO and the worldwide community; maintenance of the Observatory's computing equipment.

Local network of MAO NANU. The local network of MAO NANU provides data transfer at speeds of up to 1 Gbps; access to the cluster facilities also uses a 1 Gbps channel. Internet access for MAO NANU is currently provided by the UARNET company; the Internet channel capacity was increased to 10 Gbps in 2012.


MAOLING. CPU 3.4 GHz, 3 GB RAM, 750 GB.

The main MAO server hosts the MAO website and FTP server and acts as the gateway to the MAO local network. It provides access to users' local data from any computer, both inside the local network and outside it. LDAP and DNS servers run on MAOLING as well.

OBERON. 2×2 GHz, 16 GB RAM, 1.5 TB.

Thanks to its performance, OBERON is used for various computational tasks. It also hosts a mirror of the NASA Astrophysics Data System.

JANUS / MAIL. 3 GHz, 3 GB RAM, 320 GB. Mail server based on the ZIMBRA software; provides access to personal MAO mailboxes through POP, IMAP and web interfaces.


The server hosts the UAA and Ukrainian Virtual Observatory websites and the JDA UkrVO control and search module.




The high-performance computing GRAPE/GRID cluster was put into operation at the Main Astronomical Observatory in 2007 with the financial support of the National Academy of Sciences of Ukraine. The basic computational element of the cluster consisted of 9 Grape6-BLX64 cards and provided about 1 Tflops of floating-point performance for parallel tasks such as dynamical simulations of the evolution of galaxies, galactic nuclei and star clusters.

In 2011 the cluster was upgraded: the Grape6-BLX64 cards were replaced with GeForce 8800 GTS 512 graphics accelerators, improving the performance on astrophysical modelling tasks to approximately 4 Tflops. The upgrade of the cluster's computing capabilities became possible thanks to the assistance of the Astronomisches Rechen-Institut (ARI) am Zentrum für Astronomie der Universität Heidelberg (Germany), and personally of Dr. Rainer Spurzem.

In 2013 the cluster was equipped with 16 GeForce GTX 660 GPUs, which provide three times higher performance on astrophysical tasks than the GeForce 8800 GTS 512. The cluster is currently heterogeneous and consists of 8+3 computing nodes and one manager node; a Gigabit Ethernet network is used for communication. The total number of computing cores is 88. The storage system includes a 5 TB disk array, which holds the users' home directories, and a 7.1 TB RaidZ array.

Each of the first 3 computing nodes is built on HP ML350 G5 servers with Intel Xeon 5410 processors: each node has two quad-core Intel Xeon 5410 processors with a clock frequency of 2.33 GHz and 16 GB RAM. The other 8 nodes are based on Intel Xeon E5420 processors: each node has two quad-core Intel Xeon E5420 processors with a clock frequency of 2.50 GHz, 8 GB RAM and two GeForce GTX 660 GPUs.


The cluster runs the Debian GNU/Linux 7.0 distribution, a free and open-source Linux-based operating system.

GNU and Intel compilers for C/C++ and Fortran are available to users. The OpenMPI 1.4.3 package is used to run parallel applications. The task queue is managed by the Torque 2.4.12 and Maui 3.3.1 packages. The task queues available to users are:

Queue   | Maximum execution time | GPU dev | Queue description                         | Available resources
cpu_8x6 | 120 h                  | −       | queue for large CPU tasks, 6 CPU per node | 48 CPU cores Intel Xeon E5420
grid    | 120 h                  | −       | queue for GRID tasks, 8 CPU per node      | 24 CPU cores Intel Xeon 5410
gpu_2   | 336 h                  | 0, 1    | queue for GPU tasks, 2 GPU per node       | 16 GPU GeForce GTX 660

The cpu_8x6 and gpu_2 queues are available to cluster users. The grid queue is used for computations through the GRID environment; submitting tasks directly to this queue is prohibited.

The software package CUDA SDK (version 5.5) is installed on the cluster to perform computations using GPU. It allows one to run software that uses CUDA technology and OpenCL.


The cluster of the MAO of NASU is a part of the Ukrainian Academic GRID segment. The GRID middleware NorduGrid ARC 5.0.4 is installed on the cluster.

Members of the following virtual organizations have the opportunity to carry out their tasks on the cluster:

  • multiscale;
  • sysbio;
  • moldyngrid;
  • virgo_ua.

To ensure efficient operation of the cluster in the GRID, runtime environments supporting the moldyngrid and virgo_ua virtual organizations are installed:

  • GROMACS-4.5.5;
  • PHI-GPU;

Use of the computing cluster is free for the staff of the MAO and other research institutions under the following conditions: the user submits an application and accepts the user rules.


Monitoring of the cluster:


Monitoring of the Ukrainian Academic GRID segment resources:



Any questions about the cluster should be sent to: golowood-admin(at)mao.kiev.ua.


In 2011 the computing resources of the MAO were increased by two test nodes based on Intel Core i5-2500K processors (3.30 GHz) with 16 GB RAM. This upgrade became possible thanks to financial support from the National Academy of Sciences of Ukraine. At the beginning of 2017 one of the new nodes was equipped with an Nvidia GeForce GTX Titan graphics accelerator, and the other with an Nvidia GeForce GTX 1080.

Nvidia GeForce GTX Titan (node Core 1, ATI HD 6970)
CUDA cores: 2688
Total amount of global memory: 6 GB
GPU clock speed: 837-993 MHz
Theoretical shader processing rate (SP): 4.5 Tflops


GeForce GTX 1080 (node Core 2, GF 570 GTX x 2)
CUDA cores: 2560
Total amount of global memory: 8 GB
GPU clock speed: 1700 MHz
Theoretical shader processing rate (SP): 8.7 Tflops

Instructions for running tasks on MAO cluster

To submit a task for execution, the user should prepare an executable script or run the task interactively. The task is first compiled, then put into a queue using the qsub programme (man qsub).


Example of how to run a serial task:

a) from the command line:


~$ echo "sleep 1m" | qsub

b) using a script:
the user creates a script run.sh to run the task:

~$ cat run.sh
#PBS -k oe
#PBS -m abe
#PBS -N run
sleep 1m
exit 0

Script starting:

~$ qsub -V run.sh

(be sure to specify the option -V).
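If preferred, the same run.sh script can also be generated from the command line with a heredoc (a plain shell sketch; the script body matches the example above):

```shell
# Write the run.sh script from the example above using a quoted heredoc,
# so the #PBS lines are stored literally without shell expansion.
cat > run.sh <<'EOF'
#PBS -k oe
#PBS -m abe
#PBS -N run
sleep 1m
exit 0
EOF
chmod +x run.sh
# On the cluster the script would then be submitted with: qsub -V run.sh
```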

To compile and run parallel tasks on the cluster, the user can use the GNU or Intel compilers. To choose a compiler, use the commands mpi-selector and mpi-selector-menu.

To see a list of available MPI implementations use the command:

~$ mpi-selector --list

At present there are two OpenMPI implementations: openmpi-1.4.3 (GNU) and openmpi-1.4.3-intel (Intel).

To find the currently selected MPI implementation, the user should type:

~$ mpi-selector --query

To change the MPI implementation, the user should use the command:

~$ mpi-selector-menu
Current system default: openmpi-1.4.3
Current user default: openmpi-1.4.3

"u" and "s" modifiers can be added to numeric and "U"
commands to specify "user" or "system-wide".

1. openmpi-1.4.3
2. openmpi-1.4.3-intel
U. Unset default
Q. Quit

Selection (1-2[us], U[us], Q): 1u
Defaults already exist; overwrite them? (Y/N) y

Current system default: openmpi-1.4.3
Current user default: openmpi-1.4.3

"u" and "s" modifiers can be added to numeric and "U"
commands to specify "user" or "system-wide".

1. openmpi-1.4.3
2. openmpi-1.4.3-intel
U. Unset default
Q. Quit

Selection (1-2[us], U[us], Q): Q

mpi-selector-menu changes the PATH and MANPATH system variables only the next time the user logs into the shell. Therefore, after running mpi-selector-menu the user should run bash -l or open a new shell.


Example of how to run parallel programmes:

The user compiles a programme written using the MPI library:

for a C programme:

~$ mpicc -o your_program.exe your_program.c -lm

for a Fortran programme:

~$ mpif77 -o your_program.exe your_program.f -lm

A startup script run-mpi.sh has been prepared for running the task:

~$ cat run-mpi.sh
#PBS -N run-mpi
#PBS -k oe
#PBS -m abe
#PBS -l nodes=8
mpiexec -n 8 ./your_program.exe
exit 0

To specify a queue for task execution, the user should add this line to the script:

#PBS -q cpu_8x6

The cpu_8x6 queue is designed for CPU tasks; the gpu_1 and gpu_2 queues are for GPU tasks.
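For example, a minimal job script for the gpu_2 queue could look as follows. This is a sketch: your_gpu_program.exe is a placeholder for the user's own CUDA executable, and the resource line assumes a single node is sufficient.

```shell
#PBS -N run-gpu
#PBS -k oe
#PBS -q gpu_2
#PBS -l nodes=1
# Placeholder: replace with the actual GPU (CUDA/OpenCL) executable.
./your_gpu_program.exe
exit 0
```

As with the other examples, the script is submitted with qsub -V.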

To run the script, the user should type:

~$ qsub -V run-mpi.sh

(be sure to specify the option -V).

In this case the task runs on 8 CPUs, as indicated by the -n 8 parameter of the mpiexec programme and the #PBS -l nodes=8 parameter.

More information about the described options and programmes can be found in man qsub and man mpiexec.

Options in the PBS script:

-k: keep the standard output (stdout) and standard error output (stderr).

-m: send e-mail message when the task starts and ends or if the task is removed by batch-system.

-N: the name of the task in the batch system.

The user can check the task status in the queue with the command:

~$ qstat

(for the details − man qstat).
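The job state appears in the "S" column of the qstat output; a job can be removed from the queue with qdel (man qdel). A short summary of the common Torque state codes, with a placeholder job ID:

```shell
# Common job states in the qstat "S" column (Torque):
#   Q - queued, waiting for resources
#   R - running
#   E - exiting after having run
#   H - held
# List all jobs in the queue:
qstat
# Remove a job by its ID as reported by qsub (<job_id> is a placeholder):
qdel <job_id>
```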