Hosted LLMs

Frontier open-weight LLMs, hosted on the NRP

Free access for researchers and educators to a rotating catalog of frontier open-weight models — through a hosted chat interface, an OpenAI-compatible API, and ready-made configs for the major coding CLIs.

Open WebUI chat running on the NRP

How to get access in three steps

Step 1: Get an NRP account

Sign in once with your institutional or research identity to register a Nautilus account.

Step 2: Request an LLM-enabled namespace

Reach out and we will enable LLM access on your namespace so its members can mint tokens.

Step 3: Generate your LLM token

Use the token page to create a personal API token, then plug it into Open WebUI, Chatbox, or any OpenAI-compatible client.
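Once minted, the token works with any OpenAI-compatible client. A minimal sketch using curl, with placeholder values throughout; the base URL and model name below are assumptions, so substitute the actual endpoint and model names shown on the token page:

```shell
# Placeholder values -- substitute the real endpoint and model name
# from the NRP token page.
export NRP_LLM_BASE_URL="https://llm.example.org/v1"
export NRP_LLM_TOKEN="sk-your-token-here"

# Any OpenAI-compatible client works; plain curl is the simplest check.
curl "$NRP_LLM_BASE_URL/chat/completions" \
  -H "Authorization: Bearer $NRP_LLM_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gemma",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```

The same base URL and token pair is what Open WebUI, Chatbox, and other clients ask for in their connection settings.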

Models we host

Frontier open-weight models with strong reasoning, coding, and multimodal capabilities. Pick the one that fits your task.

qwen3

Flagship frontier multimodal mixture-of-experts (MoE) model with Claude/Gemini-level performance. Best for general reasoning, long-context work, and multimodal tasks.

kimi

Moonshot's 1T-parameter frontier coding model with multimodal inputs. Best for agentic coding workflows.

gemma

Google's Gemma 4 — multimodal, optional reasoning, efficient frontier performance. A lighter-weight model that still handles images.

glm-4.7

Zhipu's 358B frontier coding model with official FP8 weights. Strong on code and reasoning tasks.

minimax-m2

Efficient frontier coding model — 230B in native FP8, fits comfortably on four A100s.

olmo

Allen AI's fully open-source 32B instruction model with transparent training data — pick when reproducibility matters.

gpt-oss

OpenAI's open-weights agentic model: tiny GPU footprint, strong tool use, and a candidate for long-term support (LTS).

Use the client you already love

Hosted chat interfaces, desktop apps, and coding CLIs all work with NRP-hosted LLMs through the OpenAI-compatible endpoint.
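Most of these clients need only two settings: the base URL and the token. For CLIs built on the OpenAI SDK, the standard environment variables are often enough. A sketch with placeholder values; the endpoint below is an assumption, and each CLI's own documentation is the authority on which variables it reads:

```shell
# Placeholder values; the NRP token page has the real endpoint.
# The official OpenAI SDKs (and many CLIs built on them) read these:
export OPENAI_BASE_URL="https://llm.example.org/v1"
export OPENAI_API_KEY="sk-your-nrp-token"

# Clients with their own configuration UI (Open WebUI, Chatbox,
# Cherry Studio, and similar) take the same two values in their
# connection settings instead of environment variables.
```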

Open WebUI
Chatbox
Cherry Studio
Claude Code
OpenCode
Crush
Kimi CLI
Copilot CLI

How LLMs are being used on the NRP

Researchers, educators, and NRP operators are using hosted LLMs to analyze documents, build software, diagnose workloads, and understand platform usage.

Diagnosing failed nodes

NRP infrastructure is connected to LLM-based tooling so operators can query and diagnose failing nodes using cluster context.

Diagnosing user pods

Users can diagnose problematic pods by inspecting logs, events, and related pod details through NRP-assisted workflows.

Document understanding and sentiment

Researchers can parse and summarize large document collections, then extract signals such as sentiment from sources like securities filings.

Agentic coding for classrooms and research groups

Classrooms and research groups can use coding agents with NRP-hosted models to develop, review, and iterate on software projects.

Building MCP servers for accounting insights

NRP develops MCP servers that plug into the NRP accounting system to help teams understand platform usage and surface operational insights.

Ready to get started?

Contact us and we will get you onto NRP-hosted LLMs and answer any questions about access, namespaces, and tokens.