Resources & Support Service
The services provided by the School can be summarized as follows:
- Computational Genomics
- Routine sequence-analysis support software, provided to the user community by the scientific staff
- National Facility for Molecular Modelling (Northern Region), used for training during workshops and for research by groups on campus
- Mirrored biological data for use on BioGRID
- High-performance computational facilities, alongside the university facility
SC&IS : High Performance Computing Facility (HPCF)
The University High Performance Computing Facility, located in Room No. 31 of the new SC&IS building, was funded under the DST Purse and UGC UPE-II programmes. It was envisaged as an important tool for raising the level of our research and remaining academically competitive, especially for research problems involving large data sets and numerical calculations. The facility has recently been upgraded with a cluster of 220 compute cores, 128 SMP cores and 2,880 GPU cores (1 x Tesla K40). Apart from this cluster, we also have a cluster with 160 compute cores, 48 SMP cores and approx. 5,000 GPU cores (2 x Tesla K20).
The HPCF Centre is conceived of as a functionally distributed supercomputing environment, housing leading-edge computing systems, with sophisticated software packages, and connected by a powerful high-speed fibre-optic network. The computing facilities are connected to the campus LAN, WLAN and also to the Internet.
The cluster is built with xCAT, and PBS is used as the scheduler. PBS manages user applications and enforces the scheduling policies. To keep latency low, separate switches are used for MPI traffic (InfiniBand), storage (10 GbE) and IPMI management (1 GbE). The cluster is attached to a storage system with 127 TB capacity.
Brief architectural information: New Cluster (2016)
- Processor : Intel Xeon E5-2630
- No. of Master Nodes : 1
- No. of Computing Nodes : 11
- No. of SMP Nodes : 2
- No. of Hybrid (CPU-GPU) Nodes : 1
- Cluster Software : xCAT
- Server Model : SUPERMICRO / TYRONE
- NAS Appliance Model : TYRONE
- Total Peak Performance : 7.74 TF
Brief architectural information: Boston Cluster (2013)
- Processor : AMD Opteron 6300 series
- No. of Master Nodes : 1
- No. of Computing Nodes : 3
- No. of SMP Nodes : 1
- No. of Hybrid (CPU-GPU) Nodes : 1
- Cluster Software : ROCKS 6.x
- Server Model : BOSTON
- NAS Appliance Model : BOSTON Super Server
- Total Peak Performance : 1.3 TF
Calculation procedure for peak performance (New Cluster, 2016):
- No. of Nodes : 11
- Memory (RAM) : 128 GB
- Hard Disk Capacity per node : 1 TB
- Storage Capacity : 127 TB
- No. of Processors and Cores : 2 x 10 = 20 (dual socket, 10 cores per socket)
- CPU Speed : 2.2 GHz
- No. of Floating-Point Operations per Cycle per Core (Intel processor) : 16
- Total Peak Performance : No. of nodes x cores per node x CPU speed x FLOPs per cycle per core = 11 x 20 x 2.2 GHz x 16 = 7,744 GFLOPS = 7.74 TF
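As an illustration, the same arithmetic can be checked with a short C program. This is a minimal sketch: the figures are those listed above, and the 16 FLOPs per cycle per core is the double-precision figure assumed in the calculation (AVX2 with FMA).

    #include <stdio.h>

    int main(void) {
        /* Figures from the list above for the New Cluster (2016). */
        double nodes           = 11.0;  /* compute nodes                        */
        double cores_per_node  = 20.0;  /* 2 sockets x 10 cores                 */
        double clock_ghz       = 2.2;   /* CPU speed in GHz                     */
        double flops_per_cycle = 16.0;  /* assumed AVX2 + FMA, double precision */

        double peak_gflops = nodes * cores_per_node * clock_ghz * flops_per_cycle;
        printf("Theoretical peak: %.0f GFLOPS (%.2f TF)\n",
               peak_gflops, peak_gflops / 1000.0);
        return 0;
    }

Running this prints a theoretical peak of 7,744 GFLOPS, i.e. about 7.74 TF, matching the figure quoted above.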
Calculation procedure for peak performance (Boston Cluster, 2013):
- No. of Nodes : 3
- Memory (RAM) : 48 GB
- Hard Disk Capacity per node : 500 GB
- Storage Capacity : 50 TB (formatted)
- No. of Processors and Cores : 2 x 16 = 32 (dual socket, 16 cores per socket)
- CPU Speed : 2.3 GHz
Software used in the UPOE cluster:
- Ganglia : monitoring tool
- MPI : parallel processing
- HPL : High-Performance LINPACK (performance-testing tool)
- Software used in the HPC cluster : R, Amber + Q.C. tools, GRID, GOLPE, MATLAB, GNUPLOT, OPENEYE, ADF, AUTODOCK, GROMACS, etc.
Scheduler used:
PBS : job-scheduler software. A fair-share policy is implemented so that all users get equal priority, and both batch and parallel jobs can be submitted through it, as in the sketch below.
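A minimal sketch of a PBS batch script for submitting a parallel job on such a cluster. The job name, queue name, Torque-style resource syntax and the executable are illustrative assumptions, not the facility's actual configuration; a matching MPI program appears under the compilers section below.

    #!/bin/bash
    #PBS -N hello_mpi            # job name (illustrative)
    #PBS -l nodes=2:ppn=10       # request 2 nodes x 10 cores each (assumed resource syntax)
    #PBS -l walltime=01:00:00    # wall-clock limit of one hour
    #PBS -q batch                # queue name is an assumption

    cd $PBS_O_WORKDIR            # run from the directory the job was submitted from
    mpirun -np 20 ./hello_mpi    # launch the MPI program on the allocated cores

Such a script would typically be submitted with qsub and monitored with qstat.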
Application software and compilers:
- Open MPI
- C, C++, FORTRAN compilers
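As a usage illustration of the Open MPI stack and the C compiler listed above, a minimal MPI program is sketched here; the file and executable names are arbitrary examples chosen to match the PBS sketch earlier.

    /* hello_mpi.c - minimal Open MPI example (illustrative)
     * Compile: mpicc -O2 hello_mpi.c -o hello_mpi
     * Run:     mpirun -np 20 ./hello_mpi
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int rank, size;
        MPI_Init(&argc, &argv);                 /* start the MPI runtime      */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank        */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of MPI ranks  */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();                         /* shut down the MPI runtime  */
        return 0;
    }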