V-Lab

Model Context Protocol · Early Access

Connect V-Lab to your assistant.

V-Lab’s research-grade risk data now responds to the AI assistant you already use. That includes volatility, SRISK, liquidity, climate benchmarks, and more. No pipelines, no scraping. Just ask your assistant in plain English.

A question asked of an AI connected to V-Lab:

“What was Citigroup’s SRISK at the end of January, and how does it rank globally?”

SRISK: $108B · Global rank: #8 · Month-over-month change: +14%

V-Lab reading · 30 January 2026

Early access · Active development

V-Lab MCP is pre-1.0. New tools and fields are added regularly. Within a minor version, existing field names and response shapes will not change; a major-version bump signals possible breaking changes. Reconnect your client occasionally to pick up the latest capabilities. Use submit_feedback to shape what ships next. Listings in Anthropic’s MCP Directory and the community registries (mcp.so, Smithery, Glama) are pending as of launch; for now, add V-Lab manually using the instructions below.

What you can ask

The measures you already cite, now callable.

Every tool corresponds to a published V-Lab measure: the same GARCH, SRISK, CRISK, and COVOL series cited across the finance literature. Each response names the underlying V-Lab analysis, so you can trace any number back to its source on vlab.stern.nyu.edu. All measures are built on peer-reviewed research from the Volatility and Risk Institute at NYU Stern.

Volatility & long-run risk

GARCH-family volatility, long-run VaR, and stress scenarios across 39,000+ firms and 90 markets. Queryable by ticker, country, or sector.

Systemic & climate risk

SRISK, CRISK, country-level rankings, and climate-benchmark correlations. Ask “Which G-SIBs moved most this week?” and get a ranked answer with citations.

Liquidity & common factors

Illiquidity composites, COVOL events, and common volatility factors across markets delivered to your assistant as structured, citable data.

How it works

How to connect.

  1. Create a V-Lab account

     Sign up at vlab.stern.nyu.edu. It’s free.

  2. Connect your assistant

     Pick your client below (Claude, ChatGPT, Cursor, VS Code, and more) and copy the config. OAuth means no API key for most clients.

  3. Ask away

     Query V-Lab in plain English. Use submit_feedback from inside your client to send us bugs and feature requests.

Try it

Three questions to start with.

Three prompts that exercise the core of the server. Paste any of them into your assistant once you have V-Lab connected.

What’s Citigroup’s SRISK as of last month, and how does it rank globally?

Single-firm systemic-risk lookup with global context.

Compare US and Japanese systemic risk over the last 12 months.

Cross-country SRISK time series — demonstrates the currency-aware response and per-country breakdown.

Show me which sectors have the most deteriorating liquidity in the US right now.

Sector-level ILLIQ change ranking — surfaces the worst-trending corners of the market in one call.

Setup

Pick your client.

Server URL: https://vlab.stern.nyu.edu/mcp
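If you want to sanity-check the endpoint outside any client, it helps to know what an MCP client actually sends over Streamable HTTP. Below is a minimal Python sketch of the opening JSON-RPC initialize request; the protocolVersion and clientInfo values are illustrative examples, and a real client negotiates its own.

```python
import json

# Sketch of the opening request an MCP client POSTs to a Streamable
# HTTP endpoint such as https://vlab.stern.nyu.edu/mcp. The
# protocolVersion and clientInfo values are illustrative examples;
# a real client negotiates its own.
headers = {
    "Content-Type": "application/json",
    # Streamable HTTP clients must accept both JSON and SSE responses:
    "Accept": "application/json, text/event-stream",
}
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1"},
    },
}
body = json.dumps(initialize)
```

Unauthenticated requests like this one should come back with a 401 that points your client at the OAuth flow, which is why every setup path below ends in a browser sign-in.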

Help wanted

These client instructions are early, and we’d rather hear from you than leave a trap for the next reader. If yours doesn’t work, or if your client supports a cleaner path than the one we describe, let us know and we’ll update the page.

AI Assistants

Claude Desktop offers two paths. The Connectors UI is the quickest and requires no file editing. The claude_desktop_config.json file only accepts stdio (local-command) entries, so users on the Free plan or older versions should use the mcp-remote stdio bridge below.

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json

  • Windows: %APPDATA%\Claude\claude_desktop_config.json

{
  "mcpServers": {
    "vlab": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://vlab.stern.nyu.edu/mcp"]
    }
  }
}
Option 1: Connectors UI (recommended; Pro/Team/Enterprise). Settings → Connectors → Add custom connector → paste https://vlab.stern.nyu.edu/mcp. Claude Desktop handles OAuth in your browser.

Option 2: mcp-remote bridge (any tier, shown above). Paste into your config file. mcp-remote proxies the remote server through stdio; OAuth still runs in your browser on first use. Restart Claude Desktop after saving.

Create or edit .mcp.json in your project directory:

{
  "mcpServers": {
    "vlab": {
      "type": "http",
      "url": "https://vlab.stern.nyu.edu/mcp"
    }
  }
}
Claude Code detects the file automatically on next launch and will prompt you to sign in with your V-Lab account on first use. No API key needed.

In ChatGPT settings, go to Connectors > Add connector > Model Context Protocol and enter the server URL:

https://vlab.stern.nyu.edu/mcp
ChatGPT uses OAuth 2.1 for MCP authentication. It will redirect you to sign in with your V-Lab account.

Edit ~/.gemini/settings.json (or project-level .gemini/settings.json). Gemini CLI doesn’t natively handle remote-OAuth MCP yet, so wrap V-Lab with the mcp-remote stdio bridge:

{
  "mcpServers": {
    "vlab": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://vlab.stern.nyu.edu/mcp"]
    }
  }
}
mcp-remote proxies V-Lab through a local stdio process and handles OAuth in your browser on first use. No API key required.

Code Editors

Create .cursor/mcp.json in your project root, or go to Settings > Tools & MCP > New MCP Server:

{
  "mcpServers": {
    "vlab": {
      "url": "https://vlab.stern.nyu.edu/mcp"
    }
  }
}
Cursor will open a browser to sign in with your V-Lab account on first use.

Create .vscode/mcp.json in your workspace:

{
  "servers": {
    "vlab": {
      "type": "http",
      "url": "https://vlab.stern.nyu.edu/mcp"
    }
  }
}
VS Code will prompt you to sign in on first use.

Edit ~/.codeium/windsurf/mcp_config.json. Windsurf doesn’t natively handle remote-OAuth MCP yet, so wrap V-Lab with the mcp-remote stdio bridge:

{
  "mcpServers": {
    "vlab": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://vlab.stern.nyu.edu/mcp"]
    }
  }
}
mcp-remote proxies V-Lab through a local stdio process and handles OAuth in your browser on first use. No API key required.

Open Zed > Settings > Open Settings. Zed expects an mcp-remote stdio bridge under the context_servers key:

{
  "context_servers": {
    "vlab": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://vlab.stern.nyu.edu/mcp"]
    }
  }
}
mcp-remote proxies V-Lab through a local stdio process and handles OAuth in your browser on first use. No API key required.

Open Source & Self-Hosted

Edit ~/.continue/config.yaml (or workspace-level .continue/config.yaml). Continue expects a stdio command, so bridge V-Lab through mcp-remote:

mcpServers:
  - name: vlab
    type: stdio
    command: npx
    args:
      - -y
      - mcp-remote
      - https://vlab.stern.nyu.edu/mcp
mcp-remote proxies V-Lab through a local stdio process and handles OAuth in your browser on first use. No API key required. Works with any model backend, including OpenAI, Anthropic, and local models via Ollama.

Open the Cline sidebar > MCP Servers > Configure > “Configure MCP Servers”:

{ "mcpServers": { "vlab": { "command": "npx", "args": [ "-y", "mcp-remote", "https://vlab.stern.nyu.edu/mcp" ] } } }
mcp-remote proxies V-Lab through a local stdio process and handles OAuth in your browser on first use. No API key required. Works with any model provider, including local models via Ollama or LM Studio.

Ollama is an LLM runtime, not an MCP client. Use it as the model backend in any of the clients above (Continue.dev, Cline, etc.). Configure the MCP server in your chosen client, and point the model configuration at your Ollama instance.
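To make the pairing concrete, here is a hedged sketch of a Continue config.yaml that points the model at a local Ollama instance alongside the V-Lab MCP entry. Field names follow recent Continue YAML configs and the model tag is only an example; check Continue’s and Ollama’s documentation for your versions.

```yaml
# Sketch only: field names follow recent Continue config.yaml schemas
# and may differ in your version. llama3.1:8b is an example model tag.
models:
  - name: Local Llama
    provider: ollama
    model: llama3.1:8b
mcpServers:
  - name: vlab
    type: stdio
    command: npx
    args:
      - -y
      - mcp-remote
      - https://vlab.stern.nyu.edu/mcp
```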

  • 401 Unauthorized: Try disconnecting and reconnecting the server to trigger a fresh OAuth sign-in.

  • OAuth sign-in loop: Ensure your client supports OAuth 2.1 with PKCE. If your client doesn’t handle remote OAuth natively, use the mcp-remote stdio bridge shown in its setup section above.

  • 429 Too Many Requests: Rate limit exceeded. Wait a moment and try again.

  • Connection refused: Ensure the URL uses https (not http).

  • Client doesn’t connect: Verify your client supports Streamable HTTP transport and you’re using the correct field name for the URL. Older clients using SSE transport may need to upgrade. V-Lab does not serve a legacy SSE endpoint; if a guide you’re reading references /mcp/sse, it’s out of date.

  • Still stuck? If your client won’t connect and the sections above haven’t helped, ask your assistant what’s wrong: paste the error, the client name, and the config you’re using. Modern LLMs know MCP well and are surprisingly good at diagnosing configuration issues. If you’re still stuck after that, drop us a note with the client, OS, and full error text.
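For the OAuth sign-in loop above, it helps to know what “OAuth 2.1 with PKCE” means mechanically: the client generates a random code verifier and sends only its hashed challenge with the authorization request (RFC 7636, method S256). A minimal Python sketch of the pair a conforming client produces; the function name is illustrative, not part of any V-Lab API.

```python
import base64
import hashlib
import secrets

# PKCE (RFC 7636): the client keeps a random code_verifier secret and
# sends code_challenge = base64url(SHA-256(verifier)), no padding,
# with challenge method "S256".
def make_pkce_pair():
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
assert 43 <= len(verifier) <= 128  # RFC 7636 length bounds
```

If a client loops at sign-in, it usually means it completed the redirect without ever sending a challenge like this, which is exactly the case the mcp-remote bridge papers over.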

Reporting security issues

Found a security vulnerability? Please report it privately to vlab@stern.nyu.edu. We investigate every report and will acknowledge yours within two business days.

Working with MCP

A few habits that pay for themselves.

Model Context Protocol is still young, and the MCP ecosystem rewards a bit of discipline. Here’s how to get the most out of V-Lab without blowing up your context window.

  1. Fewer servers, clearer context

     Every active MCP server loads its tool definitions into your context window. If you’re focused on V-Lab work, disable MCPs you aren’t using, leaving only what your assistant actually needs to reach for.

  2. Prefer project-scoped configs

     Claude Code’s .mcp.json, Cursor’s .cursor/mcp.json, and VS Code’s .vscode/mcp.json attach V-Lab to a single workspace rather than your whole machine. This keeps your global context clean and makes it obvious which projects have V-Lab available.

  3. Be specific in your prompts

     Tool calls burn tokens. “Rank these five G-SIBs by current SRISK” beats “tell me about systemic risk.” Narrower asks produce sharper answers and fewer unnecessary calls.

  4. Reconnect to pick up new tools

     V-Lab MCP ships new tools and refinements on a regular cadence. Restart or reconnect your client every week or two so your assistant can see the latest capabilities.

  5. Use submit_feedback in-flow

     If a tool returns something odd or you want a capability that doesn’t exist, ask your assistant to call submit_feedback right there. It’s faster than a web form and lands directly with the V-Lab research team.

Feedback, built in

See something wrong? Report it from inside your client.

V-Lab MCP exposes a submit_feedback tool. Use it from inside your client to report data issues, request features, or flag confusing behavior — no context switch, no web form, no support ticket. It goes straight to the V-Lab research team.

"Use submit_feedback to suggest a new tool, request a parameter, or flag anything that doesn't behave as expected."

Disclaimer & citation

V-Lab data is provided on an “as is” basis for informational and academic research purposes only. It does not constitute investment advice or professional financial consultation, and users assume all risk associated with use of V-Lab data. When using V-Lab data in published work, please cite V-Lab accordingly. See full legal provisions and citation guidance.

Your LLM speaks V-Lab.

Connect V-Lab to the AI assistant you already use.

View setup guide