A central aspect of the HoreKa design is the enormous amount of data generated by scientific research projects. A multi-level data storage concept guarantees high-throughput processing of data using several different storage systems.
At the core of this design are two large-scale, parallel file systems based on IBM Spectrum Scale (also known as GPFS), which are used for globally visible user data. Individual home and project directories are automatically created for each user on the Spectrum Scale home file system, and the environment variables $HOME and $PROJECT point to these directories. Each user can also create so-called workspaces on the Spectrum Scale work file system. Another workspace file system named pfs6, based on Lustre, is available for special requirements.
Other storage locations include a temporary directory called $TMP that is located on the local solid state disks (SSDs) of a node, and is therefore only visible on an individual node while a job is running. In order to create a temporary directory which is visible on all nodes of a batch job, users can request a temporary BeeGFS On Demand (BeeOND) file system. Access to BeeOND file systems is only possible from the nodes of the batch job and while the job is running.
Users with access to the LSDF can also use the corresponding Spectrum Scale file systems which are mounted on all nodes.
The characteristics of the file systems are shown in the following table.
| Property | $HOME / $PROJECT | workspace | pfs6 | $TMP | BeeOND | LSDF |
|---|---|---|---|---|---|---|
| Visibility | global | global | global | local | job local | LSDF users |
| Lifetime | permanent | limited | limited | job walltime | job walltime | permanent |
| Disk space | 2.5 PB | 13.5 PB | 250 TB | 800 GB | n * 750 GB | 13 PB |
| Total read perf. | 25 GB/s | 110 GB/s | 45 GB/s | 750 MB/s | n * 700 MB/s | 37 GB/s |
| Total write perf. | 25 GB/s | 110 GB/s | 38 GB/s | 750 MB/s | n * 700 MB/s | 37 GB/s |
| Read perf./node | 10 GB/s | 10 GB/s | 5 GB/s | 750 MB/s | 10 GB/s | 10 GB/s |
| Write perf./node | 10 GB/s | 10 GB/s | 5 GB/s | 750 MB/s | 10 GB/s | 10 GB/s |
- global: all nodes see the same file system
- local: each node has its own local file system
- job local: only available within the currently running job
- permanent: data is stored permanently (across job runs and reboots)
- limited: data is stored across job runs and reboots, but will be deleted at some point
- job walltime: files are removed at the end of the batch job
Selecting the appropriate file system¶
In general, you should separate your data and store it on the appropriate file system.
Permanently required data like software or important results should be stored below $HOME or $PROJECT, but capacity limits (so-called "quotas") apply. Permanent data which is not needed for months or which exceeds the capacity limits should be moved to other large-scale storage systems (e.g. the LSDF) or archive systems (e.g. bwDataArchive) and deleted from the home file system.
Temporary data which is only needed on a single node and which does not exceed the disk space shown in the table above should be stored below $TMP. Temporary data which is only needed during job runs should be stored on BeeOND. Scratch data which can be easily recomputed or which is the result of one job and input for another job should be stored below so-called workspaces. The lifetime of data in workspaces is limited and depends on the lifetime of the workspace.
If you accidentally deleted data on $HOME/$PROJECT, you can usually copy back an older version from a so-called snapshot path. In addition, there is also the possibility to restore files from a backup. Please see the Backup and Archival section for more information.
$HOME / $PROJECT¶
If your account is a member of only one project group, the environment variables $HOME and $PROJECT point to the same directory.
If your account is a member of more than one project group, you need to select which directory the environment variable $PROJECT points to. After login, $HOME and $PROJECT point to the directory
/home/<your_oldest_project_group>/<your_account>. If you want to change to another project group, you can do so by executing the commands:
$ newgrp <another_project_group>
$ cd $PROJECT
Performing the above commands does not change the accounting, i.e. changing the project group does not change the project group which a batch job is accounted on. Hint: The accounted project can be selected with the option ''-A'' (''--account'') of the command ''sbatch''.
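For example, assuming a hypothetical job script job.sh, a batch job can be accounted on another project group like this:

```shell
# Submit job.sh (hypothetical script name) and account the job on
# another project group instead of the default one
$ sbatch -A <another_project_group> job.sh
```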
For your project group (i.e. all coworkers of your project) a fixed amount of disk space for the project directories is reserved. The disk space is controlled by so-called quotas. The default quota limit per project group is 10 TB and 20 million inodes.
Workspaces¶
Workspaces are directory trees which are available for a limited amount of time (a few months). The corresponding Spectrum Scale work file system has no backup, i.e. you should only use workspaces for data which can be recreated, e.g. by running the same batch jobs once again. This is only needed in the very unlikely case that the file system gets corrupted.
Initially, workspaces have a maximum lifetime of 60 days. You can extend the lifetime 3 times by another 60 days each, but you should do this near the end of the current lifetime, since the new lifetime starts when you execute the command which requests the extension.
If a workspace has inadvertently expired, we can restore the data for a limited time (a few weeks). In this case you should create a new workspace and report the names of the new and of the expired workspace in an email to the hotline or by opening a ticket.
For your account (user ID) there is a quota limit for all of your workspaces and for the expired workspaces (as long as they are not yet completely removed). The default quota limit per user is 250 TB and 50 million inodes.
To create a workspace you need to state the ''name'' of your workspace and its ''lifetime'' in days. Note that the maximum value for ''lifetime'' is 60. For example, execution of:

$ ws_allocate blah 30

returns:

Info: could not read email from users config ~/.ws_user.conf.
Info: reminder email will be sent to local user account
Info: creating workspace.
/hkfs/work/workspace/scratch/USERNAME-blah
remaining extensions  : 3
remaining time in days: 30
For more information read the program's man page: ''$ man ws_allocate''.
Reminder for workspace deletion¶
By default you will get an email about an expiring workspace 7 days before the workspace expires. You can adapt this time by using the option ''-r'' of ''ws_allocate''.
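For example, to be reminded 14 days before the workspace ''blah'' expires, the reminder options of ''ws_allocate'' from the hpc-workspace tools can be used (the exact option set may differ on your installation; check ''man ws_allocate''):

```shell
# Create the workspace with a reminder 14 days before expiration;
# -m sets the mail address the reminder is sent to
# (assumption: your installation supports -m, see man ws_allocate)
$ ws_allocate -r 14 -m you@example.org blah 30
```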
You can also send yourself a calendar entry which reminds you when a workspace will be automatically deleted:
$ ws_send_ical <workspace> <email>
List all your workspaces¶
To list all your workspaces, execute:

$ ws_list

which returns for each workspace:
- Workspace ID
- Workspace location
- Creation date, remaining time and expiration date
- Available extensions
Find workspace location¶
The location/path of any workspace ''ID'' can be printed using ws_find; in the case of workspace ''blah'':
$ ws_find blah
returns the one-liner:

/hkfs/work/workspace/scratch/USERNAME-blah
Extend lifetime of your workspace¶
Any workspace's lifetime can only be extended three times. There are two similar commands to extend a workspace's lifetime:
$ ws_extend blah 40

which extends workspace ID ''blah'' by ''40'' days from now, and

$ ws_allocate -x blah 40

which also extends workspace ID ''blah'' by ''40'' days from now.
Delete a workspace¶
$ ws_release blah # Manually erase your workspace blah
Workspaces on flash storage¶
Another workspace file system is available for special requirements. The file system is called pfs6 and is based on the parallel file system Lustre. Access will be granted on special request.
Advantages of this file system¶
- All storage devices are based on flash (no hard disks) with low access times. Hence performance for read and write access with small blocks and with small files is better than on the other parallel file systems, i.e. IOPS rates are improved.
- The file system is mounted on bwUniCluster 2.0 and HoreKa, i.e. it can be used to share data between these clusters.
- Only HoreKa users or KIT users of bwUniCluster 2.0 can use this file system.
- Access is granted on request and for appropriate requirements. In order to request access, please open a ticket with the subject pfs6 access required and describe the following topics:
- number of files you want to store on this file system
- needed capacity
- number of nodes used by your typical jobs
- special I/O requirements of your jobs
Using the file system¶
After access is granted, you can use the file system in the same way as a normal workspace. You just have to pass the name of the flash-based workspace file system to all workspace management commands using the option ''-F''. On HoreKa the file system is called ''ffhk'', on bwUniCluster 2.0 it is called ''ffuc''. For example, to create a workspace with the name ''myws'' and a lifetime of 60 days on HoreKa, execute:
$ ws_allocate -F ffhk myws 60
If you want to use the pfs6 file system on bwUniCluster 2.0 and HoreKa at the same time, please note that a particular workspace has to be managed on only one of the clusters, since the name of the workspace directory is different on each cluster. However, the path to each workspace is visible and can be used on both clusters.
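For example, a workspace created on HoreKa can be accessed from bwUniCluster 2.0 via its full path. This is a sketch; the actual path printed by ''ws_find'' depends on your account and workspace name:

```shell
# On HoreKa: create a pfs6 workspace and print its full path
$ ws_allocate -F ffhk shared_ws 60
$ ws_find -F ffhk shared_ws

# On bwUniCluster 2.0: access the same data via the path printed
# above (the workspace itself is managed from HoreKa only)
$ ls <path_printed_by_ws_find>
```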
Other features are similar to normal workspaces. For example, we are able to restore expired workspaces for a few weeks, and you have to open a ticket to request the restore. There are quota limits with a default of 5 TB capacity and 25 million inodes per user. You can check your current usage with:

$ lfs quota -uh $(whoami) /pfs/work8
$TMP¶
The environment variable $TMP contains the name of a directory which is local to each node. This means that different tasks of a parallel application use different directories when they do not run on the same node. This directory should be used for temporary files that are accessed from the local node during the job's runtime. The $TMP directory is located on an extremely fast 960 GB NVMe SSD, so performance on small files is much better than on the parallel file systems.
Inside batch jobs, $TMP is set to a new directory. $TMP contains the job ID and the job's starting time, so that the subdirectory name is unique for each job. At the end of the job the directory $TMP is removed.
On login nodes, $TMP also points to a fast directory on a local NVMe SSD, but this directory is not unique, so it is recommended to create your own unique subdirectory on these nodes. This directory should be used for the installation of software packages, i.e. the software package to be installed should be unpacked, compiled and linked in a subdirectory of $TMP. The actual installation of the package (e.g. make install) should be made into the $HOME or $PROJECT folder.
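A typical software build on a login node might then look like the following sketch (the package name and paths are made up for illustration):

```shell
# Create a unique build directory on the fast local $TMP
$ BUILDDIR=$(mktemp -d "$TMP/build.XXXXXX")
$ cd "$BUILDDIR"

# Unpack, configure and compile on the fast local disk ...
$ tar xf "$HOME/downloads/mytool-1.0.tar.gz"   # hypothetical package
$ cd mytool-1.0
$ ./configure --prefix="$HOME/software/mytool-1.0"
$ make -j "$(nproc)"

# ... but install the result into the permanent home file system
$ make install
```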
LSDF Online Storage¶
Users of the LSDF Online Storage can access the storage on HoreKa. For this purpose, the environment variables $LSDF, $LSDFPROJECTS and $LSDFHOME are set.
The LSDF Online Storage is available on all login and compute nodes at all times. During maintenance of the LSDF, jobs will not be started if the LSDF constraint batch job parameter is used. For details see here.
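With Slurm, the constraint can be requested in the job script, for example as in the following sketch. The partition and project names are assumptions; see the linked documentation for the authoritative syntax:

```shell
#!/bin/bash
#SBATCH --constraint=LSDF   # start the job only when the LSDF is available
#SBATCH -p cpuonly          # partition name is an assumption
#SBATCH -n 1 -t 60

# Stage input data from the LSDF Online Storage to the fast local disk
# ("myproject" is a hypothetical project name)
rsync -a "$LSDFPROJECTS/myproject/input/" "$TMP/input/"
```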
BeeOND (BeeGFS On-Demand)¶
Users of the cluster HoreKa can request a private BeeOND (BeeGFS) parallel filesystem for each job. The file system is created during job startup and purged after your job.
Important: All data on the private BeeOND file system will be deleted after your job. Make sure you have copied your data back to a global file system (e.g. $HOME, $PROJECT, any workspace or the LSDF) within your job.
BeeOND/BeeGFS can be used like any other parallel file system. Tools like cp or rsync can be used to copy data in and out.
A BeeOND file system is only created if your batch job requests this creation. For details see here.
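A job script using BeeOND could look like the following sketch. The constraint name, the mount point and the application name are assumptions for illustration; see the linked batch job documentation for the exact request syntax and mount path:

```shell
#!/bin/bash
#SBATCH --constraint=BEEOND   # request a private BeeOND file system (assumed name)
#SBATCH -N 4 -t 120

BEEOND_DIR=/mnt/odfs/$SLURM_JOB_ID   # assumed mount point, check the documentation

# Stage input data from a workspace into the job-private BeeOND file system
rsync -a "$(ws_find myws)/input/" "$BEEOND_DIR/input/"

# Run the application against the fast job-local file system
./my_application "$BEEOND_DIR/input" "$BEEOND_DIR/output"   # hypothetical program

# Copy results back before the job ends -- BeeOND is purged afterwards
rsync -a "$BEEOND_DIR/output/" "$(ws_find myws)/output/"
```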
Snapshots and backup¶
In case you inadvertently deleted some of your data, want to go back to a previous version, or want to compare your data with a previous version, you can use so-called snapshots. Snapshots are point-in-time copies of your data. For the home file system there are snapshots of the last 7 days, of the last 4 weeks and of the last 6 months. For the workspaces there are snapshots of the last 7 days. For the home file system, snapshots are located below
/home/<project_group>/.snapshots. For the work (workspace) file system snapshots will be located below
There are also regular backups of all data in the project directories; however, ACLs and extended attributes are not saved by the backup. Please contact the hotline or open a ticket if you need us to restore data from backup.
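Restoring a file from a snapshot of the home file system can be done with plain shell commands, for example (the snapshot and file names are illustrative; list the .snapshots directory to see which snapshots exist):

```shell
# List the available snapshots of your project's home file system
$ ls /home/<project_group>/.snapshots

# Copy a deleted file back from one of the snapshots
$ cp /home/<project_group>/.snapshots/<snapshot_name>/<your_account>/results.dat "$HOME/"
```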
Quotas¶
The commands to display your used quota and quota limits currently do not work on the login nodes; you have to run them inside an interactive job. Such a job can be started with the following command:
$ salloc -p dev_cpuonly -n 1 -t 20 --mem=500
$ /usr/lpp/mmfs/bin/mmlsquota -j $PROJECT_GROUP --block-size G -C hkn.scc.kit.edu hkfs-home
$ /usr/lpp/mmfs/bin/mmlsquota -u $(whoami) --block-size G -C hkn.scc.kit.edu hkfs-work
File system performance tuning¶
Hints on file system performance tuning can be found here.