O2 is the new platform for Linux-based high-performance computing at Harvard Medical School. The name marks it as the next generation of the HMS "Orchestra" cluster, hence "O2".
- O2 is managed by the Research Computing Group, part of HMS IT.
O2 is an HPC cluster built on Linux and the open-source Slurm job scheduler.
- Please submit a request for O2 help or feedback.
- Follow us on Twitter for updates and alerts about service outages.
Getting Started with O2 and Slurm
- Basic guide to using the Slurm job scheduler ← New users, start here!
- Switching workflows from Orchestra to O2 ← Orchestra users, start here!
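As a minimal sketch of the Slurm workflow the guides above cover, a batch script might look like the following. The partition name, time limit, and resource values here are illustrative assumptions only; consult the partition guide for O2's actual settings.

```shell
#!/bin/bash
#SBATCH --partition=short      # example partition name; see the partition guide
#SBATCH --time=0-01:00         # wall-time limit in D-HH:MM (here, 1 hour)
#SBATCH --mem=1G               # memory request
#SBATCH -c 1                   # number of cores
#SBATCH -o myjob_%j.out        # stdout file (%j expands to the job ID)
#SBATCH -e myjob_%j.err        # stderr file

hostname                       # replace with your actual command
```

Submit with `sbatch myscript.sh`, check job status with `squeue -u $USER`, and cancel a job with `scancel <jobid>`.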
Additional Information about O2 and Slurm
- How to choose a partition (the Slurm equivalent of LSF job queues)
- Troubleshooting your O2 jobs
- Using research applications in O2
- Using MATLAB on O2
- Parallel Jobs in O2
- Personal Python packages
- Personal R packages
- Personal Perl packages
- Installing custom software
- Copying files to and from O2
- O2 HPC Cluster and Computing Nodes Hardware Information
- Job Priority Calculation
- Examples of O2 commands
- Using O2 GPU resources
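As a sketch of requesting a GPU interactively, the commands below follow standard Slurm conventions (`--gres` syntax and a partition named `gpu` are assumptions here; check the GPU resources page above for O2's current partition names and limits).

```shell
# Example interactive GPU request (all values are illustrative)
srun --partition=gpu --gres=gpu:1 --mem=8G -t 0-01:00 --pty bash

# Inside the interactive session, verify the GPU is visible:
nvidia-smi
```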
General UNIX Information
- There are a number of storage options available for research data.
- If you have an Orchestra account, you will have access to the same home directory and shared network storage on O2.
- March 14: /n/scratch2 maintenance is complete and the filesystem is operating normally.
- April 3, 2018: O2 will be taken offline for part of the day to migrate /home to a new fileserver. The maintenance schedule is to be announced.
- Classes are offered each semester to the HMS community to help you ramp up your research skills!
- Please check the User Training page for courses, dates, and registration!
- RC's Office Hours are held every Wednesday, 1-3 PM, in Gordon Hall, Suite 500.
- Please contact us first with a support request before dropping in at office hours so we can better help you!