Resources per team: maximum 36'000 compute node hours and up to 5 TB of storage per team
Resource Access: SSH access or interactive access through Jupyter notebooks. UI-based applications can be run via X11 forwarding.
Data cube access: Will be made available in a shared location by SDC2 organizers
Resource management: Teams must use the SLURM workload manager. All analyses must be submitted as jobs through this workload manager.
Software management: Participants should install their own software, but support can be requested from the CSCS support team (firstname.lastname@example.org).
Documentation: Resource access information can be hosted on the SDC2 webpage. Information on how to access Piz Daint is available at CSCS' user portal (user.cscs.ch).
Support: Support is available via a ticketing system (email@example.com). Ticket responses are limited to business days. Moderate knowledge of Linux and job schedulers is expected.
Resource location: Switzerland
Named after Piz Daint, a prominent peak in Grisons that overlooks the Fuorn pass, this supercomputer is a hybrid Cray XC40/XC50 system and the flagship system of the Swiss national HPC service.
Intel® Xeon® E5-2690 v3 @ 2.60GHz (12 cores, 64GB RAM) and NVIDIA® Tesla® P100 16GB - 5704 Nodes
Technical information can be found at https://www.cscs.ch/computers/piz-daint/
Per user resource
Up to 36'000 compute node hours on the GPU part of the system. If a team's software cannot use the GPU, the CPUs on the node can still be used
Up to 5 TB of storage per team
10 GB of home space per user (dedicated) and up to 8.8 PB of scratch capacity to use (shared)
Up to 2400 nodes can be requested by each job
The support team provides a list of supported applications on its portal (https://user.cscs.ch/computing/applications/).
Other software is also installed on the system.
Volume of resource
The teams can use up to 36'000 compute node hours and up to 5 TB of storage.
Specific amounts should be specified when the request is made.
GPUs if any
The teams can make use of the P100 GPUs on the system (recommended) but can still use just the CPUs on the nodes.
Each group must submit a formal Small Development Project proposal in order to get access to the available resources: https://www.cscs.ch/user-lab/allocation-schemes/development-projects/
To start the process, applicants must first send an email to firstname.lastname@example.org requesting that their accounts be opened so that they can apply for a development project.
Approval is given at CSCS' discretion after a technical review, which can take around one month.
Users should be aware that the service is shared with other users, and each team's usage patterns may impact others. When writing the proposal, groups should pay special attention to avoiding the following:
Many small files in the $SCRATCH file system (Lustre filesystem)
Submitting thousands of short-lived jobs that each use very few nodes (in this case, the GREASY scheduler should be used - https://user.cscs.ch/tools/high_throughput/)
Querying the queue status too frequently (e.g. watch squeue). The SLURM scheduler has a 5-minute scheduling cycle, so probing it every 2 seconds makes no difference.
Running applications on the login nodes of the cluster. Piz Daint has dedicated pre- and post-processing partitions for this purpose.
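For the high-throughput case above, GREASY bundles many short tasks into a single SLURM job. A minimal sketch follows; the task file format (one command per line) is taken from the CSCS high-throughput page linked above, while the commands in the task file are placeholders:

```shell
# Create a GREASY task file: one independent task per line.
# The ./process_source command and its arguments are placeholders.
cat > tasks.txt <<'EOF'
./process_source --id 001
./process_source --id 002
./process_source --id 003
EOF

# Inside a single SLURM job allocation, GREASY then dispatches the tasks:
#   module load GREASY
#   greasy tasks.txt
```

This replaces thousands of queue submissions with one job, keeping the scheduler load low.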
Access to CSCS Piz Daint is described at CSCS' user portal (https://user.cscs.ch/access/running/piz_daint/)
How to run a workflow
How to run jobs on Piz Daint is described at CSCS' user portal (https://user.cscs.ch/access/running/piz_daint/).
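As a sketch, a minimal batch script for the GPU nodes might look like the following; the job name, time limit, executable, and data path are placeholders, while --constraint=gpu selects the hybrid (P100) nodes:

```shell
# Write a minimal SLURM batch script for Piz Daint's GPU nodes.
# Executable and input path are placeholders.
cat > sdc2_job.sh <<'EOF'
#!/bin/bash -l
#SBATCH --job-name=sdc2-analysis
#SBATCH --time=01:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=12
#SBATCH --constraint=gpu    # selects the hybrid XC50 (P100 GPU) nodes

# srun launches the task on the allocated compute node
srun ./my_analysis --input /scratch/path/to/datacube
EOF
```

Submit the script with sbatch sdc2_job.sh and monitor it with squeue -u $USER.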
Accessing the data cube
Will be made available in a shared location by SDC2 organizers (to be communicated at the beginning of the challenge)
Users can install/compile their own software by themselves
CSCS provided software can be accessed through environment modules.
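A typical session with environment modules might look like the following sketch; the application name is illustrative, and daint-gpu is the GPU software stack documented at the user portal:

```shell
module avail                # list software provided through modules
module load daint-gpu       # select the GPU software environment
module load <application>   # load a supported application from the list
```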
CSCS provides support for running container images on Piz Daint. More information can be found at https://user.cscs.ch/tools/containers/
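As a sketch, pulling and running a container with Sarus (the container engine covered at the link above) could look like this; the image name is illustrative and the exact module name should be checked against the documentation:

```shell
module load sarus                     # load the container engine (module name may differ)
sarus pull ubuntu:20.04               # fetch an image from Docker Hub
srun -C gpu sarus run ubuntu:20.04 cat /etc/os-release   # run a command in the image on a compute node
```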
Documentation hosted on SDC2 website and at CSCS' user portal (https://user.cscs.ch)
Support can be requested by email via the CSCS user support ticketing system (email@example.com). Ticket responses are limited to business days.
Moderate knowledge of Linux and Job Schedulers is expected.
Credits and acknowledgements
Users must quote and acknowledge the use of CSCS resources in all publications related to their production and development projects as follows: "This work was supported by a grant from the Swiss National Supercomputing Centre (CSCS) under project ID ###"