Per-project scratch and storage folder
======================================

About
-----

.. warning::

   The per-project scratch and storage folder is *not* intended to be used as a storage space for keeping an archive of files. **Its purpose is to store operational data: files retained for a brief duration.** Furthermore, Discoverer Petascale Supercomputer reserves the right to remove any of the files stored therein that have not been accessed within the last 61 days.

Allocations on ``/valhalla``
----------------------------

└ Available on clusters: Discoverer CPU, Discoverer GPU
└ Total storage capacity: 5.7 PB
└ Per-project allocation: 50 TB and 50,000,000 files

Each project onboarded on Discoverer since February 10, 2025, receives a project folder on our Cray ClusterStor E1000 storage cluster (HPE part number S-9100, based on NVMe), mounted on all nodes as ``/valhalla``. The total storage capacity of that LustreFS cluster is 5.7 PB.

Depending on its requirements, each project receives on ``/valhalla`` a project folder with a storage capacity that matches the following allocation limits:

* Benchmark projects: 5 TB
* Development projects: 10 TB
* Regular projects: 50 TB

Additional space on ``/valhalla`` may be provided as an exception, only after approval on our side, based on an appropriate justification received in advance from the project's principal investigator (PI).

Project folder location and naming convention
---------------------------------------------

The project scratch folder location is:

``/valhalla/projects/project_id``

where ``project_id`` coincides with the Slurm account assigned to the project. For example, if the Slurm account ID is ``ehpc-reg-2025d0-000``, then the project folder is located at ``/valhalla/projects/ehpc-reg-2025d0-000``.

Size/Quota
----------

The amount of disk space and the number of files allocated to the project depend on the class of the project (one of benchmark, development, regular) and on the outcome of the initial negotiation between the applicant and Discoverer. Here we only show how to find the quota limits and the current utilization.

First, it is necessary to obtain the numerical identifier (ID) of the project. Here is an example of how to do that for a project whose Slurm account name is ``ehpc-reg-2025d0-000``, hosted on ``/valhalla`` (the same applies to the other storage locations):

.. code-block:: bash

   lfs project -d /valhalla/projects/ehpc-reg-2025d0-000

When that command is successfully executed, output similar to the following is displayed:

.. code-block:: bash

   901 P /valhalla/projects/ehpc-reg-2025d0-000

The number at the beginning of the output is the project ID (901 in the example above). Afterwards, that number should be passed to the ``lfs`` tool to display the quota limits and the current utilization of the project folder:

.. code-block:: bash

   lfs quota -p 901 /valhalla/projects/ehpc-reg-2025d0-000

The document :doc:`calculating_the_disk_usage_basics` explains how to interpret the numeric information displayed by the quota check.

Data retention policy
---------------------

The files stored in the project folder are retained for 45 days after the project has been deactivated. Afterwards, they are completely and irreversibly wiped out.

Help
----

If you experience issues with accessing your per-project scratch folder or the content stored there, contact the Discoverer HPC support team (see :doc:`help`).
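For convenience, the two quota-check steps shown in the *Size/Quota* section above can be combined into a single shell snippet. The following is a minimal sketch, not an official Discoverer utility; it assumes the ``lfs`` client tool is available on the node, and it uses the example project folder ``/valhalla/projects/ehpc-reg-2025d0-000``, which you should replace with your own:

.. code-block:: bash

   #!/usr/bin/env bash
   # Minimal sketch: look up the numeric project ID of a project folder
   # on /valhalla, then display its quota limits and current utilization.

   PROJECT_DIR="/valhalla/projects/ehpc-reg-2025d0-000"   # replace with your project folder

   # 'lfs project -d' prints "<ID> P <path>"; the first field is the project ID.
   PROJECT_ID="$(lfs project -d "${PROJECT_DIR}" | awk '{print $1}')"

   # Display the quota limits and current utilization for that project ID.
   lfs quota -p "${PROJECT_ID}" "${PROJECT_DIR}"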
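Similarly, if you want to check which files in your project folder have not been accessed within the last 61 days (and are therefore candidates for removal under the policy stated in the *About* section), a standard ``find`` invocation over the access time can be used. This is a minimal sketch, assuming the default ``find`` available on the nodes; note that Lustre may update access times lazily, so treat the result as indicative:

.. code-block:: bash

   # List regular files whose last access time (atime) is more than
   # 61 days ago; these fall under the scratch retention rule.
   find /valhalla/projects/ehpc-reg-2025d0-000 -type f -atime +61 -print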