Astronomic & Space Information and Computing Center
The computing laboratory was founded in 1965 as part of the Fundamental Astrometry department; its head was Dr. Sci. D.P. Duma. Later the laboratory also began to deal with measuring devices and was renamed the Computing and Measuring Center. Its head, Dr. Sci. V.K. Taradiy, established the structure of the Center with positions for qualified young specialists in programming and electronics engineering. The Astro/Space Information and Computing Center (ASICC) was founded in 1994 on the basis of the Computing and Measuring Center, with Dr. Sci. P.P. Berczik as its head.
The basic tasks of the Center are: information support of the scientific research of MAO; provision of all kinds of information services; support of information exchange between MAO and the worldwide community; maintenance of MAO computing equipment and facilities; GRID cluster services.
The ASICC maintains the Observatory's local network of about 200 personal computers.
The Department Staff:
Name | Position | Degree | E-mail | Room, phone
Veles Oleksandr | acting Head of ASICC, senior researcher | PhD | veles(at)mao.kiev.ua | room 117, tel. 7-00
Pakuliak Ludmila | senior researcher | PhD | pakuliak(at)mao.kiev.ua | room 220, tel. 3-46
Bulba Tamara | leading engineer | | tamara(at)mao.kiev.ua | room 231, tel. 3-05
Lobortas Valentin | leading engineer | | lobortas(at)mao.kiev.ua | room 231, tel. 3-05
Vedenicheva Irina | leading engineer | | iv(at)mao.kiev.ua | room 231, tel. 3-05
Zolotukhina Anastasia | junior researcher | | nastya(at)mao.kiev.ua | room 218, tel. 3-32
Parusimov Grigoriy | engineer I class | | parus(at)mao.kiev.ua | room 231, tel. 3-05
Sobolenko Margarita | engineer I class | | sobolenko(at)mao.kiev.ua | room 116, tel. 3-47
Ivanov Daniel | engineer I class | | ivanovdd(at)mao.kiev.ua | room 116, tel. 3-47
The tasks of the scientific and technical group are: programming and technical support of MAO scientific research, provision of information exchange between MAO and the worldwide community, and maintenance of the Observatory's computing equipment.
Local network of MAO NASU. The local network of MAO NASU provides data transfer at speeds of up to 1 Gbps; access to the cluster facilities also uses a 1 Gbps channel. Internet access for MAO NASU is currently provided by the UARNET company; since 2012 the capacity of the Internet channel has been 10 Gbps.
SERVERS
MAOLING. CPU 3.4 GHz, 3 GB RAM, 750 GB.
The main MAO server hosts the MAO website and ftp server and serves as the gateway to the MAO local network. It provides access to users' local data from any computer, both inside the local network and from outside. LDAP and DNS servers run on MAOLING as well.
OBERON. 2×2 GHz, 16 GB RAM, 1.5 TB.
OBERON is used for various computational tasks owing to its higher performance. A mirror of the NASA Astrophysics Data System is also hosted on this server.
JANUS / MAIL. 3 GHz, 3 GB RAM, 320 GB. Mail server based on ZIMBRA software; it provides access to personal MAO mailboxes through POP, IMAP and WEB interfaces.
VIRGO4.
The server hosts the UAA and Ukrainian Virtual Observatory websites and the control and search module of the UkrVO JDA.
CLUSTER OF THE MAIN ASTRONOMICAL OBSERVATORY OF NASU
The high-performance computing GRAPE/GRID cluster was put into operation at the Main Astronomical Observatory in 2007 with financial support from the National Academy of Sciences of Ukraine. The basic computational element of the cluster consisted of 9 Grape6-BLX64 cards and provided about 1 Tflops of floating-point performance for parallel tasks such as dynamical simulations of the evolution of galaxies, galactic nuclei and star clusters.
In 2011 the cluster was upgraded. The Grape6-BLX64 cards were replaced by GeForce 8800 GTS 512 graphics accelerators, raising the performance on astrophysical modeling tasks to approximately 4 Tflops. The upgrade of the cluster's computing capabilities became possible thanks to the assistance of the Astronomisches Rechen-Institut (ARI) am Zentrum für Astronomie der Universität Heidelberg (Germany) and personally Dr. Rainer Spurzem.
In 2013 the cluster was equipped with 16 GeForce GTX 660 GPUs, which provide about three times higher performance on astrophysical tasks than the GeForce 8800 GTS 512. Currently the cluster is heterogeneous and consists of 8+3 computing nodes and one manager node; a Gigabit Ethernet network is used for communication. The total number of computing cores is 88. The storage system includes a 5 TB disk array, where the users' home directories are located, and a 7.1 TB RAIDZ array.
Each of the first 3 computing nodes is built on HP ML350 G5 servers with Intel Xeon 5410 processors: each node has two dual-core Intel Xeon 5410 processors with a clock frequency of 2.33 GHz and 16 GB RAM. The other 8 nodes are based on Intel Xeon E5420 processors: each node has two quad-core Intel Xeon E5420 processors with a clock frequency of 2.50 GHz, 8 GB RAM and two GeForce GTX 660 GPUs.
SOFTWARE
The cluster runs the Debian GNU/Linux 7.0 distribution, one of the free and open source Linux-based operating systems.
GNU and Intel compilers for C/C++ and Fortran are available to users. The OpenMPI 1.4.3 package is used to run parallel applications. The task queue is managed by the Torque 2.4.12 and Maui 3.3.1 packages. The task queues available to users are:
Queue | Maximum execution time | GPU devices | Queue description | Available resources
cpu_8x6 | 120 h | − | queue for large CPU tasks, 6 CPU per node | 48 CPU cores Intel Xeon E5420
grid | 120 h | − | queue for GRID tasks, 8 CPU per node | 24 CPU cores Intel Xeon 5410
gpu_2 | 336 h | 0, 1 | queue for GPU tasks, 2 GPU per node | 16 GPU GeForce GTX 660
Cluster users can submit tasks directly to the cpu_8x6 and gpu_2 queues. The grid queue is used for computations submitted through the GRID environment; direct task submission to this queue is prohibited.
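The configured queues and their limits can also be inspected directly on the cluster with the standard Torque client tools (output omitted here):
~$ qstat -q                # list all queues with their limits and current state
~$ qstat -Q -f cpu_8x6     # show the full configuration of a particular queue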
The CUDA SDK software package (version 5.5) is installed on the cluster to perform computations on GPUs. It allows users to run software that uses CUDA technology and OpenCL.
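For example, a simple CUDA source file can be compiled with the nvcc compiler from the CUDA SDK and submitted to the gpu_2 queue in the same way as any other task; the file name your_program.cu is only a placeholder:
~$ nvcc -o your_program.exe your_program.cu
~$ echo 'cd $PBS_O_WORKDIR; ./your_program.exe' | qsub -V -q gpu_2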
GRID
The cluster of the MAO of NASU is part of the Ukrainian Academic GRID segment. The GRID middleware Nordugrid ARC 5.0.4 is installed on the cluster.
Members of the following virtual organizations can carry out their tasks on the cluster:
- multiscale;
- sysbio;
- moldyngrid;
- virgo_ua.
To ensure efficient operation of the cluster in the GRID, runtime environments supporting the moldyngrid and virgo_ua virtual organizations are installed (a minimal submission sketch is given after the list below):
- GROMACS-4.5.5;
- PHI-GPU;
- VIRGO/COSMOMC;
- VIRGO/GADGET;
- VIRGO/FERMI;
- VIRGO/XMM.
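As an illustration only, a member of one of these virtual organizations might submit a job through the GRID with the standard Nordugrid ARC client commands roughly as follows; the job description file, the runtime environment name and the target host are assumptions made for the sketch and should be adapted to the user's actual task:
~$ arcproxy                                   # create a proxy certificate from the user's grid credentials
~$ cat job.xrsl                               # hypothetical xRSL job description
&(executable="run.sh")
 (jobName="test-run")
 (cpuTime="60 minutes")
 (runTimeEnvironment="VIRGO/GADGET")
 (stdout="job.out")(stderr="job.err")
~$ arcsub -c golowood.mao.kiev.ua job.xrsl    # submit the job (the host name is an assumption based on the monitoring address below)
~$ arcstat -a                                 # check the status of all submitted jobs
~$ arcget <job-id>                            # retrieve the results after the job has finished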
Use of the computing cluster is free for the staff of MAO and other research institutions under the following conditions: the user submits an application and accepts the user rules.
Monitoring of the cluster:
http://golowood.mao.kiev.ua/ganglia/
Monitoring of the Ukrainian Academic GRID segment resources:
http://gridmon.bitp.kiev.ua/
http://www.nordugrid.org/monitor/loadmon.php
Any questions about the cluster should be sent to: golowood-admin(at)mao.kiev.ua.
TEST NODES
In 2011 the computing resources of the MAO were increased by two test nodes based on Intel Core i5-2500K processors (3.30 GHz) with 16 GB RAM. This upgrade became possible due to financial support from the National Academy of Sciences of Ukraine. At the beginning of 2017 one of the new nodes was equipped with an Nvidia GeForce GTX Titan graphics accelerator, and the other with an Nvidia GeForce GTX 1080.
GPU | Cores | Total global memory | GPU clock speed | Theoretical shader processing rate (SP)
Nvidia GeForce GTX Titan | 2688 | 6 GB | 837-993 MHz | 4.5 Tflops
Nvidia GeForce GTX 1080 | 2560 | 8 GB | 1700 MHz | 8.7 Tflops
Instructions for running tasks on the MAO cluster
To submit a task for execution, the user should prepare an executable script or run the task interactively. The task is first compiled and then placed in a queue using the qsub program (man qsub).
Example of how to run a serial task:
a)
~$ echo "sleep 1m" | qsub
b) using a script:
the user creates a script run.sh to run the task:
~$ cat run.sh
#!/bin/sh
#PBS -k oe
#PBS -m abe
#PBS -N run
# PBS_O_WORKDIR points to the directory from which the task was submitted
cd $PBS_O_WORKDIR
export PATH=$PATH:$PBS_O_WORKDIR
sleep 1m
exit 0
~$
Starting the script:
~$ qsub -V run.sh
(be sure to specify the option -V).
To compile and run parallel tasks on the cluster, the user can use the GNU or Intel compilers. To choose a compiler, use the commands mpi-selector and mpi-selector-menu.
To see a list of available MPI implementations use the command:
~$ mpi-selector --list
openmpi-1.4.3
openmpi-1.4.3-intel
~$
At present there are two OpenMPI implementations: openmpi-1.4.3 (GNU) and openmpi-1.4.3-intel (Intel).
To see which MPI implementation is currently selected, the user should type:
~$ mpi-selector --query
default:openmpi-1.4.3
level:user
~$
To change the MPI implementation, the user should use the command:
~$ mpi-selector-menu
Current system default: openmpi-1.4.3
Current user default: openmpi-1.4.3
"u" and "s" modifiers can be added to numeric and "U"
commands to specify "user" or "system-wide".
1. openmpi-1.4.3
2. openmpi-1.4.3-intel
U. Unset default
Q. Quit
Selection (1-2[us], U[us], Q): 1u
Defaults already exist; overwrite them? (Y/N) y
Current system default: openmpi-1.4.3
Current user default: openmpi-1.4.3
"u" and "s" modifiers can be added to numeric and "U"
commands to specify "user" or "system-wide".
1. openmpi-1.4.3
2. openmpi-1.4.3-intel
U. Unset default
Q. Quit
Selection (1-2[us], U[us], Q): Q
~$
mpi-selector-menu changes the PATH and MANPATH environment variables only at the next shell login. Therefore, after running mpi-selector-menu the user should run bash -l or open a new shell.
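A quick way to confirm that the new selection is active after re-login, using only the commands described above:
~$ bash -l                    # start a new login shell so that the updated PATH takes effect
~$ mpi-selector --query       # should now report the implementation chosen in the menu
~$ which mpicc                # the reported path should belong to the selected implementation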
Example of how to run parallel programs:
The user compiles a program written using the MPI library:
for a C program:
~$ mpicc -o your_program.exe your_program.c -lm
for a Fortran program:
~$ mpif77 -o your_program.exe your_program.f -lm
A startup script run-mpi.sh for running the task has been prepared:
~$ cat run-mpi.sh
#!/bin/sh
#PBS -N run-mpi
#PBS -k oe
#PBS -m abe
#PBS -l nodes=8
# run from the directory the task was submitted from
cd $PBS_O_WORKDIR
export PATH=$PATH:$PBS_O_WORKDIR
# start 8 MPI processes, matching the nodes=8 request above
mpiexec -n 8 ./your_program.exe
exit 0
~$
To define a queue for task execution, the user should add the following line to the script:
#PBS -q cpu_8x6
The cpu_8x6 queue is designed for CPU tasks, and the gpu_2 queue for GPU tasks (see the sketch below).
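For GPU tasks a job script for the gpu_2 queue can be built in the same way. The sketch below is only an assumption based on the usual Torque resource syntax for GPUs (nodes=1:gpus=2) and on the placeholder name your_gpu_program.exe; the exact resource string should be checked with the cluster administrators:
~$ cat run-gpu.sh
#!/bin/sh
#PBS -N run-gpu
#PBS -k oe
#PBS -m abe
#PBS -q gpu_2
#PBS -l nodes=1:gpus=2
cd $PBS_O_WORKDIR
# your_gpu_program.exe stands for a CUDA or OpenCL executable
./your_gpu_program.exe
exit 0
~$ qsub -V run-gpu.sh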
To run the script, the user should type:
~$ qsub -V run-mpi.sh
(be sure to specify the option -V).
In this case the task runs on 8 CPUs, as indicated by the -n 8 parameter of the mpiexec program and the #PBS -l nodes=8 directive.
More information about the described options and programs: man qsub, man mpiexec.
Options in the PBS script:
-k: keep the standard output (stdout) and standard error (stderr) files.
-m: send an e-mail message when the task starts and ends, or if the task is removed by the batch system.
-N: the name of the task in the batch system.
The user can check the status of a task in the queue with the command:
~$ qstat
(for the details − man qstat).
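A task can be removed from the queue (for example, one submitted by mistake) with the standard Torque command qdel, using the task identifier reported by qsub and shown by qstat:
~$ qdel <task_id>
(for the details − man qdel).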
The Laboratory was created in 1983. The head of the laboratory is DSc B.E. Zhilyaev.
Laboratory staff:
- Researcher O.A. Svyatogorov
- Researcher I.A. Verlyuk
- Leading engineer V.N. Perukhov
- Senior researcher V.M. Reshetnyk
- Leading engineer S.M. Pokhvala
The main areas of research:
- High-speed spectrophotometry of variable stars with the Synchronous Network of Telescopes.
- The study of the high-frequency optical variability of flare stars, chromospherically active stars, cataclysmic variable stars, cosmic gamma-ray bursts and galactic nuclei.
SCIENTIFIC ACHIEVEMENTS
- The Synchronous Network of remote Telescopes (SNT) has been established; this is an innovative approach in astrophysics and has no analogues in the world. The SNT combines four telescopes at observatories in Ukraine, Russia, Bulgaria and Greece, equipped with GPS receivers that synchronize the local time systems of the photometers with UTC to within one microsecond. The SNT uses innovative observation techniques and innovative software, which provides information of unprecedented quality for the analysis of the small-scale variability of stars (Fig. 1).
- High-frequency brightness oscillations have been identified during flares of UV Cet type stars. Using coronal seismology methods, the basic parameters of the flare loops, the parameters of the flare plasma and the magnetic field energy have been estimated for a number of active flaring red dwarf stars, EV Lac and YZ CMi (Fig. 2).
- For the first time, on the basis of the theory of photon statistics, variability of the chromospherically active giant V390 Aur has been detected in the range 0.1 - 10 Hz. The observed pattern of brightness variations is proposed to be explained by an ensemble of microflares. Under the proposed model, a typical microflare has a maximum amplitude of 0.005 magnitude, a frequency of occurrence ν0 = 0.15 s⁻¹ and a duration of about 4 seconds. The output power of the microvariability is estimated as E = 8·10⁻⁴ of the stellar luminosity. The flow of energy heating the corona is expected to reach (1 - 2)% of the total power of the microflares (Fig. 3).
- For the first time, high-speed photometry of galaxies in the UBVRI system was performed simultaneously with several remote telescopes working synchronously to within 1 millisecond. The nuclei of two galaxies were observed with a sampling time of 0.01 seconds. To search for flares, a method of integral transformation of the light curves based on the cumulative Poisson distribution was used. The observations showed coincident events with a duration of a few hundredths of a second and an amplitude of 0.4 magnitude in the B filter in the nucleus of the galaxy NGC 7331. Application of the coincidence technique to the Seyfert galaxy NGC 1068 allowed us to detect a short burst consisting of a fast pulse with a rise time of ~0.1 s and a damping time of about 1 s. Observations of short-time flashes in the nuclei of galaxies confirm the hypothesis of the existence of intermediate-mass black holes in the centers of galaxies and dense globular clusters (Fig. 4).
- For the first time, high-frequency fluctuations in short gamma-ray bursts from the BATSE 3B catalog have been revealed. High-frequency oscillations with periods in the millisecond range and amplitudes of several tens of percent of the flare luminosity can be related to the accretion of matter formed after the tidal disruption of a neutron star in a binary system with a black hole. A possible scenario for this phenomenon is the merging of black holes and neutron stars of solar mass (Fig. 5).
- Observations of the asteroid 15 Eunomia at the Andrushivka observatory with the Zeiss-600 telescope equipped with a low-resolution grism spectrograph (R ~ 200, Figs 6, 7) allowed the calculation of the spectral reflectance in the range 3700-10000 Å (Fig. 8). Spectral monitoring lasting about four hours exhibits rapid variations, on a time scale of a few minutes, in the spectral bands of the olivine-group minerals (Fig. 9), which include tephroite (Mn2SiO4) and monticellite (CaMgSiO4). The variations in the intensity of the bands range from a few to about 25%. These variations are caused by “spots” of minerals on the asteroid surface crossing the terminator during rotation. Variations in a number of small features at 375 nm, 410 nm, 950 nm and others indicate an uneven distribution of these agents on the surface of the small planet.
Fig. 1. Geographical location of the Synchronous Network of Telescopes: 1 - Terskol, 2 m and 60 cm; 2 - CrAO, 1.25 m and 50″; 3 - Belogradchik, 60 cm; 4 - Rozhen, 2 m; 5 - Stefanion, 30″; 6 - MAO of NAS of Ukraine, Coordinating Center.
Fig. 2. Fragment of the track of a flare of EV Lac on 15 October 1996 near the point of maximum. For approximately 1 minute the track runs along the blackbody radiation line, with the temperature decreasing from 20,000 to 12,000 K.
Fig. 3. The relative power spectra of V390 Aur (circles) and the reference star (squares) with a ±1σ error corridor. September 25/26, 2009, the U filter.
Fig. 4. A flare in the nucleus of the Seyfert galaxy NGC 1068, 22 September 2004, 00:30:00.19 UT. Simultaneous observations with the 2-meter telescope at peak Terskol (top panel) and the 50-inch telescope of the Crimean Astrophysical Observatory (bottom panel) in the B filter. The light curves with a resolution of 10 milliseconds are rebinned to a resolution of 0.5 s. The joint confidence probability of the flare event is 99.999880 percent.
Fig. 5. The light curve of BATSE trigger 432 in the 50-100 keV energy channel (TTE data binned to 100 μs) (top) and its wavelet power spectrum (bottom). The contours correspond to confidence levels of 90 and 95% according to the χ² distribution.
Fig. 6. The series of 109 spectra of the main belt asteroid Eunomia with a time resolution of 2 min. (Andrushivka, Zeiss-600).
Fig.7. The white light curve of Eunomia.
Fig. 8. The albedo spectrum of Eunomia (upper) and relative variations in it (bottom).
Fig. 9. Relative variations in the albedo spectrum and the absorbance of the olivine-group minerals at 375 nm, 410 nm and 950 nm, which include tephroite (Mn2SiO4) and monticellite (CaMgSiO4), are shown [1]. The variations are caused by “spots” of minerals on the asteroid surface crossing the terminator during rotation.
[1] URL minerals.gps.caltech.edu