# Billing policy
Running jobs on the compute nodes and storing data consume the billing units allocated to your project:
- Compute is billed in units of CPU-core-hours for CPU nodes and GPU-hours for GPU nodes.
- Storage space is billed in units of TB-hours.
## How to check your billing units
To check how many billing units you have used, you can use the `lumi-allocations` command:
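```
lumi-allocations
```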
It reports the CPU-hours and GPU-hours allocated and consumed for all the projects you are a part of. The tool also reports the storage billing units.
A description of how the jobs are billed is provided in the next sections.
## Compute billing
Compute is billed whenever you submit a job to the Slurm job scheduler.
### CPU billing
For CPU compute, your project is allocated CPU-core-hours that are consumed when running jobs on the CPU nodes. The CPU-core-hours are billed as:
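```
CPU-core-hours-billed = CPU-cores-allocated * runtime-of-job
```

where `runtime-of-job` is measured in hours.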
For example, allocating 32 CPU cores in a job running for 2 hours consumes:
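```
32 cores * 2 hours = 64 CPU-core-hours
```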
#### CPU Slurm partition billing details
Special billing rules apply to some Slurm partitions.
##### CPU standard and bench Slurm partitions
The `standard` and `bench` Slurm partitions are operated in exclusive mode: the entire node is always allocated. Thus, 128 CPU-core-hours are billed for every allocated node, per hour, even if your job has requested fewer than 128 cores per node.
For example, allocating 16 nodes in a job running for 12 hours consumes:
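```
16 nodes * 128 cores/node * 12 hours = 24576 CPU-core-hours
```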
##### CPU small Slurm partition
When using the `small` Slurm partition, you are billed per allocated core. However, if you are above a certain threshold of memory allocated per core, i.e., if you use the high-memory nodes in LUMI-C, you are billed per slice of 2 GB of memory (still in units of CPU-core-hours).
Specifically, the formula that is used for billing is:
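```
CPU-core-hours-billed = max(CPU-cores-allocated,
                            ceil(memory-allocated / 2GB)) * runtime-of-job
```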
Thus,
- if you use 2 GB or less of memory per core, you are charged per allocated core.
- if you use more than 2 GB of memory per core, you are charged per 2 GB slice of memory.
For example, allocating 4 CPU-cores and 4GB of memory in a job running for 1 day consumes:
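```
max(4 cores, ceil(4GB / 2GB)) * 24 hours = 4 * 24 = 96 CPU-core-hours
```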
Allocating 4 CPU-cores and 32GB of memory in a job running for 1 day consumes:
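```
max(4 cores, ceil(32GB / 2GB)) * 24 hours = 16 * 24 = 384 CPU-core-hours
```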
### GPU billing
For GPU compute, your project is allocated GPU-hours that are consumed when running jobs on the GPU nodes. A GPU-hour corresponds to the allocation of a full MI250x module (2 GCDs) for one hour.
For the `standard-g` partition, where full nodes are allocated, all 4 GPU modules are billed, i.e., one node-hour corresponds to 4 GPU-hours. If you allocate 4 nodes in the `standard-g` partition and your job runs for 24 hours, you will consume:
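```
4 nodes * 4 GPU-hours/node-hour * 24 hours = 384 GPU-hours
```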
For the `small-g` and `dev-g` Slurm partitions, where allocation can be done at the level of Graphics Compute Dies (GCDs), you are billed at a 0.5 rate per GCD allocated. However, if you allocate more than 8 CPU cores or more than 64 GB of memory per GCD, you are billed per slice of 8 cores or 64 GB of memory.
The billing formula is:
```
GPU-hours-billed = max(ceil(CPU-cores-allocated / 8),
                       ceil(memory-allocated / 64GB),
                       GCDs-allocated)
                   * runtime-of-job * 0.5
```
For example, for a job allocating 2 GCDs and running for 24 hours, you will consume:
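```
2 GCDs * 24 hours * 0.5 = 24 GPU-hours
```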
If you allocate 1 GCD for 24 hours but allocate 128 GB of memory, then you will be billed for this memory:
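```
max(ceil(128GB / 64GB), 1 GCD) * 24 hours * 0.5 = 2 * 24 * 0.5 = 24 GPU-hours
```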
## Storage billing
For storage, your project is allocated TB-hours, which are consumed whenever you store data in your project folders. Storage is billed by volume used over time.
The amount of TB-hours billed depends on the type of storage you are using. See the data storage options page for an overview of the type of storage used in the different storage options.
### Main storage (LUMI-P) billing
The main storage backed by LUMI-P is billed directly as:
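```
TB-hours-billed = storage-volume-in-TB * storage-time-in-hours
```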
For example, storing 1.2 TB of data for 4 days consumes:
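```
1.2 TB * 96 hours = 115.2 TB-hours
```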
### Flash storage (LUMI-F) billing
The flash storage backed by LUMI-F is billed at a 10x rate compared to the main storage:
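```
TB-hours-billed = 10 * storage-volume-in-TB * storage-time-in-hours
```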
For example, storing 1.2 TB of data for 4 days consumes:
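```
10 * 1.2 TB * 96 hours = 1152 TB-hours
```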
### Object storage (LUMI-O) billing
The object storage backed by LUMI-O is billed at a 0.5x rate compared to the main storage:
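```
TB-hours-billed = 0.5 * storage-volume-in-TB * storage-time-in-hours
```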
For example, storing 1.2 TB of data for 4 days consumes:
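```
0.5 * 1.2 TB * 96 hours = 57.6 TB-hours
```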