How much space do I have?

It varies by filesystem and directory, so please reference the Filesystem Quotas page for a comprehensive overview of per-user and per-group quotas.

Where do I put my files? It Depends

For most users' purposes, a filesystem is just a directory, like /home. However, filesystems can differ with respect to:

  • speeds of reading/writing data
  • backup policies
  • user/group access permissions
  • how much can be stored on them. The filesystem quotas page describes how to find out how much space you are using.

Where to put your files depends on:

  • size of the data
  • who needs to see them
  • whether they are temporary data or require backups
  • how you will be accessing them

It is almost never a good idea to have more than, say, 10,000 files in a directory. Your work and others' will be faster if you split that huge directory into a bunch of smaller sub-directories.
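
For instance, a flat directory can be split into sub-directories keyed on the first two characters of each file name. A minimal sketch, using hypothetical file names in a local stand-in directory:

```shell
# Create a small stand-in directory with a few files (a real case
# would have thousands)
mkdir -p bigdir
touch bigdir/sample_001.txt bigdir/sample_002.txt bigdir/other_001.txt

# Move each file into a sub-directory named after its first two characters
for f in bigdir/*; do
    [ -f "$f" ] || continue                     # skip anything that isn't a file
    name=$(basename "$f")
    prefix=$(printf '%s' "$name" | cut -c1-2)   # e.g. "sa" or "ot"
    mkdir -p "bigdir/$prefix"
    mv "$f" "bigdir/$prefix/"
done
```

Any grouping key works (date, sample ID, first letter); the point is to keep each directory to a manageable number of entries.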

IMPORTANT NOTE: None of the standard filesystems is automatically encrypted, so they cannot be used for HIPAA-protected or other secure data (above Harvard data security level 3) unless those data have been de-identified.

Home directory (/home/ab123)

Every user gets a home directory where they land when logging into O2. For eCommons ID ab123, this would be /home/ab123. This is a good place to put small data sets, lab notes, scripts, and important analysis results. Your home directory is of limited size, so if it fills up you'll need to use other filesystems. Home directories are backed up nightly.

For a small data analysis not requiring large data sets or huge output, a standard workflow would be:

  • (Optionally) Copy data from a desktop or other location to the home directory
  • Run analysis, writing output to the home directory
  • (Optionally) Copy data back to a desktop

Group directories

  • /n/groups/mygroup/
  • /n/data1/institution/department/lab/
  • /n/data2/institution/department/lab/

A group directory is used by a lab (or a set of researchers sharing data). These directories can be read by any member of the lab, which is quite useful when multiple researchers need to see the same data. Unlike home directories, the entire lab directory has a quota, and lab members work together to keep the space from filling up. These directories are used for large data sets, reference data, or scripts used by a whole lab. Eligible labs can use the Storage Request Form to request a group directory or to increase its quota. Group directories are backed up nightly. PIs are not currently charged for storage, but may be charged for usage beyond a certain base level in the future. 

You might run an analysis on data in your home directory using reference data from your lab directory. You might then put results into the lab directory for other lab members to use.

Scratch directory (/n/scratch3/users/a/ab123)

On June 15, 2020: the OLD /n/scratch2 filesystem was made READ-ONLY, in preparation for retirement.

On June 26, 2020: the OLD /n/scratch2 filesystem was taken OFFLINE and retired permanently.

Any data left on scratch2 is not retrievable as the system hardware is being removed from our data center and recycled.

Each user is entitled to 10 TB of space under the /n/scratch3 filesystem. You can create a scratch3 directory for storing temporary data.

** These files are not backed up and will be deleted if they are not accessed for 30 days. **

Note: It is against HMS IT policy to artificially refresh last access time of any file located under /n/scratch3.

In most cases, it is not recommended to use "striping" (reading/writing a single file through multiple "pipes" to the filesystem) when writing to /n/scratch3. Contact rchelp@hms.harvard.edu if you have any questions.

For workflows that allow full control of temp/intermediate files, you can leave your input data under your home or group (if available) directory, make the first step in the workflow read from the original directory, do all of the temp/intermediate writes to /n/scratch3, and perform the final write back to the home or group location. For example, in a 5-step pipeline, step 1 reads from /n/groups or /home, steps 2-4 write intermediate files to /n/scratch3, and step 5 reads from /n/scratch3 and writes the final output back to the /n/groups or /home directory. Here is a suggested workflow:

  • Create a directory under /n/scratch3 if needed
  • Set up your workflow so that the input is read from /n/groups or /home, but temporary/intermediate files are written to your scratch3 directory.
  • Write any needed results back to /n/groups or /home
  • Delete temporary data, or let it be auto-deleted
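
The steps above can be sketched as follows. The paths here are local stand-ins created so the example is self-contained; on O2 you would use your real /home (or /n/groups) and /n/scratch3 directories:

```shell
# Local stand-ins for /home/ab123 and /n/scratch3/users/a/ab123
HOME_DIR=demo_home
SCRATCH=demo_scratch
mkdir -p "$HOME_DIR" "$SCRATCH"
printf 'hello world\n' > "$HOME_DIR/input.txt"

# Step 1: read input from the home location, write the intermediate
# file to scratch
tr 'a-z' 'A-Z' < "$HOME_DIR/input.txt" > "$SCRATCH/step1.txt"

# Steps 2-4 would keep reading and writing under "$SCRATCH" ...

# Final step: write the result back to the home location
cp "$SCRATCH/step1.txt" "$HOME_DIR/result.txt"

# Delete the temporary data (on /n/scratch3, unaccessed files are
# auto-deleted after 30 days anyway)
rm -rf "$SCRATCH"
```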

For workflows that write temp/intermediate files to the current directory, you can create a directory under /n/scratch3 and cd to it. Run the workflow from your scratch3 directory, specifying full paths to input files in /n/groups or /home and full final output paths to /n/groups or /home. Here is a suggested workflow using example ID "ab123":

  • Create a directory under /n/scratch3 if needed.
  • Set up your workflow so that full paths are used to refer to input files in /n/groups or /home.
  • Change directories (cd) to your /n/scratch3 directory, and run the analysis from there:
    • cd /n/scratch3/users/a/ab123
  • Write or copy any needed results back to /n/groups, /home, or your desktop, with copies submitted as an sbatch job or from an interactive session:
    • srun --pty -p interactive -t 0-12:00 /bin/bash
  • Delete temporary data, or let it be auto-deleted
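
A sketch of this run-from-scratch pattern, again using local stand-in directories so the example is self-contained (on O2, the scratch directory would be /n/scratch3/users/a/ab123 and the input would live under /home or /n/groups):

```shell
# Local stand-ins for the home and scratch3 directories
mkdir -p demo_home2 demo_scratch2
printf '3\n1\n2\n' > demo_home2/input.txt
INPUT=$PWD/demo_home2/input.txt       # full path to the input file
OUTPUT=$PWD/demo_home2/sorted.txt     # full path for the final output

# Run from the scratch directory so any temp files the tool writes to
# the current directory land on scratch, not on /home or /n/groups
cd demo_scratch2
sort "$INPUT" > "$OUTPUT"
cd ..

# Delete the temporary space, or (on O2) let it be auto-deleted
rm -rf demo_scratch2
```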

For workflows that allow little flexibility in the location of temporary/intermediate files, data can be copied over to /n/scratch3, computed against there, and copied back to /n/groups or /home. This creates a redundant copy of the input, takes up storage space, and requires time to transfer the data to and from /n/scratch3. Here is a suggested workflow:

  • Create a directory under /n/scratch3 if needed.
  • Copy data from /n/groups, /home, or your desktop to your scratch3 directory. We recommend submitting this as an sbatch job, or copying from an interactive session (e.g. srun --pty -p interactive -t 0-12:00 /bin/bash)
  • Run the analysis in your scratch3 directory, writing all temporary/intermediate files to this space
  • Copy any needed results back to your home or group directory on O2 via a cluster job or from an interactive session, or download to your desktop via the O2 file transfer servers (transfer.rc.hms.harvard.edu)
  • Delete temporary data, or let it be auto-deleted
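
The copy step above can be wrapped in a small sbatch script, sketched below. The partition name, time limit, and paths are hypothetical; substitute your own. Using rsync -r rather than -a gives the copies fresh timestamps on /n/scratch3 (see the note below about the deletion policy):

```shell
# Sketch of a job script for the copy step; partition, time limit,
# and paths are examples only
cat > copy_to_scratch.sh <<'EOF'
#!/bin/bash
#SBATCH -p short
#SBATCH -t 0-02:00
# rsync -r (not -a) so copies get fresh timestamps on /n/scratch3
rsync -r /home/ab123/project/data/ /n/scratch3/users/a/ab123/data/
EOF
# Submit it with:  sbatch copy_to_scratch.sh
```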

IMPORTANT NOTE: If you transfer files to /n/scratch3 using a tool and flag that preserve timestamps (e.g. rsync -a or -t), those files are subject to the deletion policy based on the original timestamp. If the preserved timestamp on a file is more than 30 days old, the file will be deleted the next day, even if it was just moved. The same can happen if you install software on /n/scratch3 for personal use: if a step inside the installation process simply copies files and preserves timestamps, your software may appear to stop functioning at random as those files are purged prematurely. This is dangerous because the user rarely has insight into when it occurs. Please be very judicious when moving files to, or generating them on, /n/scratch3; as mentioned above, files affected by this behavior are unrecoverable.
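
To see why this matters: a plain cp (without -p), or rsync without -t/-a, gives the copy a fresh timestamp, so the 30-day clock starts from the copy rather than from the original file. A local demonstration:

```shell
# Backdate a file to simulate an old timestamp
touch old_file.txt
touch -t 202001010000 old_file.txt   # set mtime to Jan 1, 2020

# cp without -p gives the copy the current time, so the 30-day clock
# restarts from the moment of the copy...
cp old_file.txt fresh_copy.txt

# ...while cp -p (like rsync -a or -t) preserves the old mtime, which
# would make the copy eligible for deletion immediately on /n/scratch3
cp -p old_file.txt stale_copy.txt
```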

Accessing folders on "research.files.med.harvard.edu" from O2

The file server "research.files.med.harvard.edu" is mostly used by desktop systems, but can be accessed from a few select O2 systems.

  • O2's dedicated transfer servers mount "research.files.med.harvard.edu" at the path /n/files. This provides an easy place to copy files between /n/files and other O2 directories without having to submit jobs. See File Transfer for more information.
  • O2 login nodes and most compute nodes do not mount /n/files
  • Users who must access /n/files during batch jobs can request access to the transfer job partition, which has a few low-power compute nodes that mount /n/files for this purpose. See File Transfer for more information.

Restoring Accidentally Deleted Files

Most shared filesystems retain snapshots for up to 60 days; temporary filesystems are the exception. If snapshots are available for a directory, they are located in a hidden directory called .snapshot. (This directory will not be visible via ls or even ls -a.) To retrieve a backup:

  • From a command prompt on O2, type cd .snapshot and then ls to see available backups of that directory.
  • Inside the .snapshot directory, there will be directories with date/times in their names, containing a copy of all files at that date/time. Each sub-directory will also have its own .snapshot directory.
  • There are two types of directories within .snapshot: daily snapshots (retained for 14 days) and weekly snapshots (retained for 60 days), distinguished by "daily" or "weekly" in the directory name. Choose the directory to restore from based on when the file or directory you want was created and when it was accidentally deleted.
  • You can't write files to these directories, but you can copy files from here back to the original directories with the cp command.

Here is an example of how to restore a file from the .snapshot directory:

# change into .snapshot directory
mfk8@login01:~ $ cd .snapshot


# See contents of .snapshot directory. 
# The available snapshot directories are named with a timestamp of when these backups were taken. 
# In this example, the directory names contain the prefix "O2_home_" because we are in a user's home directory.
mfk8@login01:.snapshot $ ls
FSAnalyze-Snapshot-Current-1533427124  O2_home_daily_2018-10-28_02-00  O2_home_daily_2018-11-04_02-00   O2_home_weekly_2018-10-07_16-00
home.daily                             O2_home_daily_2018-10-29_02-00  O2_home_daily_2018-11-05_02-00   O2_home_weekly_2018-10-14_16-00
home.weekly                            O2_home_daily_2018-10-30_02-00  O2_home_daily_2018-11-06_02-00   O2_home_weekly_2018-10-21_16-00
O2_home_daily_2018-10-24_02-00         O2_home_daily_2018-10-31_02-00  O2_home_weekly_2018-09-09_16-00  O2_home_weekly_2018-10-28_16-00
O2_home_daily_2018-10-25_02-00         O2_home_daily_2018-11-01_02-00  O2_home_weekly_2018-09-16_16-00  O2_home_weekly_2018-11-04_16-00
O2_home_daily_2018-10-26_02-00         O2_home_daily_2018-11-02_02-00  O2_home_weekly_2018-09-23_16-00  SIQ-41aaccf519955ee9fff3befe969e62d7-latest
O2_home_daily_2018-10-27_02-00         O2_home_daily_2018-11-03_02-00  O2_home_weekly_2018-09-30_16-00


# To restore a file we accidentally deleted on November 6, we can change to the previous day's backup directory:
mfk8@login01:.snapshot $ cd O2_home_daily_2018-11-05_02-00


# Then we can list the contents of the directory:
mfk8@login01:O2_home_daily_2018-11-05_02-00 $ ls
file1.txt  file2.txt  file3.txt


# Once the file to restore has been identified, we can copy it back to the home directory:
mfk8@login01:O2_home_daily_2018-11-05_02-00 $ cp file1.txt ../../


# If you instead need to restore a directory, use 'cp -r' instead of plain 'cp'

Copying data to O2 and between filesystems

See File Transfer for information on moving data to/from desktops, or between filesystems.

Shared Filesystems

These filesystems are housed on a central file server and are available from any system within O2.

  • /n/groups: shared group data storage (contact Research Computing if you need a group space)
  • /n/data1: shared group data storage
  • /n/data2: shared group data storage
  • /home: individual account data storage
  • /n/app: add-on software packages

Note: The /n/files filesystem, which allowed shared group data storage (access to eCommons collaborations), is not accessible from O2 compute or login nodes, only from the transfer partition. This partition has restricted access, so you will need to request access to run jobs there. See File Transfer for more details.

Temporary Filesystems

These filesystems tend to allow fast reads and writes, but are not backed up. If you are doing significant I/O, it is often better to copy files from a networked filesystem (like /n/groups or /home) to a temporary filesystem, process them there, and copy the output back, than to operate directly on files in your home or group directory.

/tmp is the standard UNIX temporary directory, and it lives on a different hard drive on each machine. A file you place in /tmp on a login node is not available in /tmp on a compute node, or even on a different login node. If a job writes to /tmp, it writes to /tmp on the node the job is running on. Your job should copy any needed output back to a shared filesystem like /home, because files in /tmp may be deleted from the compute node.
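
A sketch of this pattern inside a job; the paths are illustrative, and mktemp keeps the node-local workspace unique to this job:

```shell
# Make a unique per-job directory under /tmp (node-local storage)
WORKDIR=$(mktemp -d /tmp/myjob.XXXXXX)

# ... the analysis would write its intermediates under "$WORKDIR" ...
printf 'result\n' > "$WORKDIR/output.txt"

# Copy needed output back to a shared filesystem before the job ends;
# "demo_home3" stands in for /home/ab123 here
mkdir -p demo_home3
cp "$WORKDIR/output.txt" demo_home3/

# Clean up the node-local space
rm -rf "$WORKDIR"
```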

Temporary filesystems are never backed up and are periodically automatically purged of unused data. The contents of these filesystems may also be deleted in the event of a system being rebooted or reinstalled.

[ Information below here is not important for most users ]

Synchronized Filesystems

These filesystems are housed on local disks on individual machines. We keep these filesystems synchronized using our deployment management infrastructure.

  • /: top of the UNIX filesystem
  • /usr: most installed software
  • /var: variable data such as logs and databases

Synchronized O2 filesystems are never backed up. The source system images from which compute nodes and application servers are built are backed up daily, and these can be used to reinstall a system.
