Home directories
  • Your home directory on Discovery resides physically on either the ZFS storage system or the Isilon parallel filesystem, and it is available from all of the compute nodes.
    • Users associated with members that have their own head node may have their home directories physically located on that system.
  • The disk quota for your home directory is 20 GB.
  • To view your home directory disk usage, use the quota command (see the example after this list).
  • Members may lease additional disk space at $50 per 50 GB for 4 years of use (1 TB maximum).
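  • Example: checking your home directory usage from a login node. This is a minimal sketch; quota and du are standard Linux commands, but the exact output format on Discovery may differ.
      # Show your quota and current usage in human-readable units
      quota -s
      # Alternatively, total up everything under your home directory
      du -sh ~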
Snapshots
  • Snapshots of your home directory are available if your home is in /home, /ihome, or /cgl/home.
  • They provide an easy way to recover deleted or overwritten files (see the example at the end of this section).
  • For /home and /cgl/home, cd to ~/.zfs/snapshot.
    • In that directory you will find six daily snapshot directories, named Monday through Saturday, and weekly snapshots named Week##, which are taken on Sundays.
    • The dailies are refreshed every week and the weeklies are refreshed every year.
  • For /ihome, cd to ~/.snapshot (this is a hidden directory that does not appear in listings, but it is there).
    • There will be three types of snapshots in this directory.
      • Daily – taken every day and kept for one week.
        • The format of this snapshot is daily-<username>_YYYY-MM-DD
        • There will be a link to the last daily snapshot named daily-<username>
      • Weekly – taken on Sundays and kept for one month.
        • The format of this snapshot is weekly-<username>_YYYY-MM-DD
        • There will be a link to the last weekly snapshot named weekly-<username>
      • Monthly – taken on the 1st of each month and kept for one year.
        • The format of this snapshot is monthly-<username>_YYYY-MM-DD
        • There will be a link to the last monthly snapshot named monthly-<username>
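  • Example: recovering a file from a ZFS snapshot of a /home or /cgl/home directory. This is a minimal sketch; the snapshot name, file path, and destination are illustrative.
      # List the available snapshot directories
      ls ~/.zfs/snapshot
      # Copy a file back from, e.g., Monday's snapshot into your working copy
      cp ~/.zfs/snapshot/Monday/project/results.txt ~/project/results.txt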
AFS Home
  • If you have an AFS account, you may have a link to your AFS home directory in your Discovery home directory.
  • You will need to run the klog command before you can access your files in AFS (see the example below).
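  • Example: authenticating to AFS before using the link. A minimal sketch assuming the OpenAFS client tools are installed; the link name afs_home is illustrative.
      # Obtain an AFS token (prompts for your AFS password)
      klog
      # Verify that a token was issued
      tokens
      # Access your AFS files through the link in your Discovery home directory
      ls ~/afs_home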
Scratch Space
  • Depending on the node, there is anywhere from 175 GB to 707 GB in the /scratch partition for local storage.
    • The following table shows the size per cell:
      cell    size      cell    size      cell    size
      a       134 GB    b       135 GB    c       820 GB
      d       820 GB    e       820 GB    f       849 GB
  • If your program reads or writes large amounts of data, it will run more efficiently if your data is on local scratch space (/scratch); see the sketch at the end of this section.
  • There is central scratch space available if your job needs to read/write common data across all the nodes.
    • This is available in /global/scratch.
    • There is approximately 4 TB of space available.
  • Files in these directories are routinely removed by the system if they have not been accessed for 10 days or more.
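  • Example: staging data through local scratch space inside a job. A minimal sketch; the directory layout, input file, and program name are illustrative, and you may prefer your scheduler's job ID instead of the shell PID ($$) for the directory name.
      # Create a per-job working directory on the node's local scratch disk
      mkdir -p /scratch/$USER/$$
      cd /scratch/$USER/$$
      # Stage the input data from your home directory
      cp ~/project/input.dat .
      # Run the program against the local copy
      ~/project/my_program input.dat > output.dat
      # Copy the results back home and clean up the scratch directory
      cp output.dat ~/project/
      cd && rm -rf /scratch/$USER/$$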