
Researchers sometimes need text about Research Computing's resources and/or support to include in a grant application. If the text below is not sufficient, please contact Research Computing.

The Research Computing Group supports a large High Performance Compute cluster and the use of a broad range of software applications across the life sciences and biomedical research domains. It also provides a wide variety of training, as well as informatics and data analysis consulting.


O2 is a shared, heterogeneous High Performance Compute facility that includes 350+ compute nodes, 11,000+ compute cores, assorted GPU cards, and more than 50 TB of total memory. The vast majority of the compute nodes are built on Intel architecture. Most compute nodes have between 224 GB and 256 GB of memory; a few have up to 1 TB. O2 is also connected to several enterprise storage systems (optimized for scratch or more permanent storage) with over 20 petabytes of network and local data storage capacity. O2 is located in a state-of-the-art, off-campus data center, with multiple critical systems replicated at a secondary location for disaster recovery.


O2 also has hundreds of applications for computational analysis available on the cluster, including major computational tools (MATLAB, Mathematica), programming languages (R, Python, Perl), and modern software for many life science research disciplines. Jobs are managed by the SLURM scheduler, and the nodes run CentOS 7.2.15 Linux. Hundreds of HMS-affiliated researchers use O2 for projects large and small; in 2018 alone, more than 5 million jobs were submitted to the cluster, so it is well equipped to support your project.
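
For readers unfamiliar with SLURM, the sketch below shows how a typical batch job is submitted on a SLURM-managed cluster such as O2. It is a minimal illustration only: the partition name, module version, resource values, and script names are assumptions chosen for the example, not O2-specific guidance.

    #!/bin/bash
    # Minimal SLURM batch script (illustrative; the partition, time, and
    # memory values below are assumptions, not site recommendations).
    #SBATCH -p short                # partition (queue) to submit to
    #SBATCH -t 0-01:00              # runtime limit: 1 hour (D-HH:MM)
    #SBATCH -c 1                    # number of CPU cores
    #SBATCH --mem=4G                # memory requested
    #SBATCH -o slurm_%j.out        # standard output file (%j = job ID)
    #SBATCH -e slurm_%j.err        # standard error file

    # Load software through environment modules (version assumed).
    module load python/3.7.4

    python my_analysis.py           # hypothetical analysis script

Such a script would be submitted with "sbatch myjob.sh" and monitored with "squeue"; the scheduler then places the job on a node that satisfies the requested resources.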
