This page describes the High Performance Computing facilities at Shiv Nadar University.

[[File:snulogo.png|none|link=|239x62px]]
  
==High Performance Computing Cluster: Magus==

{| style="height: 364px; border-color: #3d2525; background-color: #fafafa;" border="1" width="1042"
|-
| colspan="3"|

===Introduction:===

'''''Magus''''' is a 60-node, 992-core IBM HPC cluster delivering a theoretical peak performance of ~30 TFLOPS.
  
* '''New Intel Haswell architecture processors on 30 nodes'''
* '''Total of ~6 TB RAM'''
* '''50 TB of shared IBM GPFS parallel file system'''
* '''8 high-CPU, high-memory nodes'''
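The ~30 TFLOPS figure can be sanity-checked with a back-of-envelope calculation. The per-core throughput and clock speed below are assumptions, not figures from this page (Haswell cores with AVX2 and two FMA units can retire 16 double-precision FLOPs per cycle; the 30 older, non-Haswell nodes likely differ):

```python
# Back-of-envelope check of the quoted ~30 TFLOPS peak for Magus.
# ASSUMPTIONS (not stated on this page): every core sustains
# 16 DP FLOPs/cycle (AVX2, 2 FMA units) at an average ~1.9 GHz.
cores = 992                  # total cores (from this page)
flops_per_cycle = 16         # assumed Haswell AVX2 FMA throughput
clock_hz = 1.9e9             # assumed average clock

peak_tflops = cores * flops_per_cycle * clock_hz / 1e12
print(f"Theoretical peak: ~{peak_tflops:.1f} TFLOPS")
# → Theoretical peak: ~30.2 TFLOPS
```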
To request a new account on Magus, please use the link below:

* [http://hpc.snu.edu.in/hpcAccount/ http://hpc.snu.edu.in/hpcAccount/]

Download the slides for the presentation held on 06-08-2015:

* [[Media:HPC_Magus_Introduction_06_08_2015.pdf|Download File]]

Download the slides for the presentation held on 24-10-2019:

* [http://wiki.snu.edu.in/index.php?action=ajax&title=-&rs=SecureFileStore::getFile&f=/c/c8/HPC_Magus_Introduction_24-10-2019.pdf Download File]
===Linux Command Reference:===

* [[Media:Linux_Command_Reference.pdf|Download File]]

|-
| colspan="3"|

===LSF Command Reference:===

* [[Media:LSF_Commands_Reference.pdf|Download File]]

|-
| colspan="3"|

===[[Magus Queues|Queues on Magus]]===

|-
||
===Software===
||
||
|-
||
* [[HPC VASP|VASP]]
||
* [[HPC Quantum Espresso|Quantum Espresso]]
||
* [[Gaussian]]
|}
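Since Magus uses LSF for scheduling, a typical workflow is to write a batch script and submit it with <code>bsub</code>. The sketch below is a hypothetical job script: the queue name, resource counts, and executable name are placeholders, not values from this page — check <code>bqueues</code> and the [[Magus Queues|Queues on Magus]] page for the real queue names.

```shell
#!/bin/bash
# Hypothetical LSF batch script for Magus (a sketch, not official docs).
# Queue name, core count, and executable are placeholders.
#BSUB -J test_job          # job name
#BSUB -n 16                # number of cores requested
#BSUB -q normal            # queue name (placeholder)
#BSUB -o output.%J.log     # stdout file (%J expands to the job ID)
#BSUB -e error.%J.log      # stderr file

# Run an MPI application (placeholder executable name)
mpirun -np 16 ./my_app
```

Submit the script with <code>bsub &lt; job.sh</code>, monitor it with <code>bjobs</code>, and cancel it with <code>bkill &lt;jobid&gt;</code>; see the LSF Command Reference above for the full command list.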

Latest revision as of 15:51, 24 October 2019
