Reservations

Request access to A100 GPUs or reserve dedicated nodes

Please be patient with submitted requests.

Admins review reservation requests in batches as time allows. Pinging in Matrix immediately after submitting will not get you a faster response — it only adds noise to a queue we already see. Check this page regularly for status updates. A request marked "In progress" means it has already been granted and is in effect — you can start using the resource; there is no further step.

GPU

A100 GPU Access

Request access to A100 GPUs for research workloads

We've noticed that many A100 resources are being used for jobs that don't require their capabilities. To ensure fair and efficient utilization, we enforce per-namespace limits on A100 usage. If your work requires access to A100 GPUs, submit the form below and we'll adjust the limit for your namespace. Groups that contributed A100 hardware to the cluster have unrestricted access by default.

Submit A100 Access Request
Track your request status:

View and monitor your submitted requests below. "In progress" = granted and in effect; no further action needed.

Info

H100 / H200 / GH200 — not user-requestable

Reserved for hardware contributors and LLM spare-cycle workloads

There is no access form for H100, H200, or GH200 GPUs. These resources are reserved for the groups that contributed the hardware to the cluster, and for LLM workloads consuming spare cycles between the owners' jobs. Sending an access request will not grant access to these GPU types.

If you want to use an H100, H200, or GH200 anyway, you can run your pod with priorityClassName: opportunistic (or opportunistic2). The per-namespace GPU quota does not apply to opportunistic-tier pods, so any user can consume these GPUs in spare cycles — with the understanding that your pod can be preempted at any time by an owner's workload. See the Priority Classes doc and the GPU Pods doc for the YAML.
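As a rough sketch, an opportunistic-tier pod spec might look like the following. The pod name, container image, and GPU-product label value are placeholders of our choosing, not values from this page; the Priority Classes doc and GPU Pods doc remain the authoritative source for the YAML.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: opportunistic-gpu-pod          # placeholder name
spec:
  priorityClassName: opportunistic     # or opportunistic2; quota does not apply at this tier
  containers:
    - name: gpu-job                    # placeholder container name
      image: nvcr.io/nvidia/cuda:12.4.1-runtime-ubuntu22.04   # placeholder image
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: nvidia.com/gpu.product
                operator: In
                values: ["NVIDIA-H100-80GB-HBM3"]   # placeholder; check the labels on your cluster's nodes
```

Keep in mind that a pod scheduled this way can be evicted without warning when an owner's workload needs the node, so it should checkpoint its work or be safely restartable.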

Nodes

Node Reservation

Reserve dedicated nodes for experiments and workshops

Node reservations are exclusive to groups that contribute hardware to the NRP cluster and want to reserve their own hardware for dedicated use, as well as groups with a specific use case that would benefit from whole-node reservations. We can taint a number of nodes for the specified amount of time, based on availability and the reasons provided. This ensures your jobs have dedicated resources when you need them. After your reservation is approved, add the matching toleration to your pods so they can schedule on the reserved (tainted) nodes.
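For example, a pod tolerating a reservation taint might look like the sketch below. The taint key, value, and effect here are placeholders we made up for illustration; use the exact key/value/effect the admins give you when your reservation is approved.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: reserved-node-pod              # placeholder name
spec:
  tolerations:
    - key: nautilus.io/reservation     # placeholder taint key; use the one provided by admins
      operator: Equal
      value: my-group                  # placeholder value
      effect: NoSchedule               # placeholder effect; must match the taint on the nodes
  containers:
    - name: main
      image: ubuntu:22.04              # placeholder image
      command: ["sleep", "infinity"]
```

Note that a toleration only allows the pod onto the tainted nodes; it does not force it there. If you need the pod to land specifically on your reserved nodes, also add a nodeSelector or nodeAffinity matching those nodes' labels.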

Submit Node Reservation Request
Track your reservation status:

View and monitor your submitted reservations below.