MCP Server

geeViz includes a Model Context Protocol (MCP) server that lets AI coding assistants (GitHub Copilot, Cursor, Claude Code, Windsurf, etc.) and desktop AI apps (Claude Desktop, ChatGPT, Gemini CLI) interact with geeViz and Google Earth Engine directly.

Why MCP?

An AI coding agent already has powerful general-purpose tools — it can search the web, read local files, grep source code, and write scripts. So why add an MCP server on top of that?

The core problem is that Earth Engine is a live, authenticated cloud platform. General-purpose tools can read about GEE but cannot interact with it. The MCP server bridges that gap by giving the agent 30 purpose-built tools that execute against your authenticated GEE session and the actual geeViz codebase. The table below compares what each approach can do for common GEE tasks:

| Task | Vanilla coding agent (web search, grep, file read) | geeViz MCP server |
| --- | --- | --- |
| Look up a geeViz function signature | Grep source files or search the web — may find outdated docs, the wrong version, or miss internal helpers | get_api_reference runs Python inspect on the installed code — always returns the real signature and docstring |
| Find which module has a function | grep -r across all .py files, manually parse results | search_functions searches all 10 geeViz modules in one call, returns structured results |
| Check what bands a dataset has | Search the GEE data catalog website, parse HTML, hope the page is current | inspect_asset calls ee.data.getInfo() on the live asset — returns real bands, CRS, scale, date range, properties |
| Get image count and date range for a filtered collection | Write and run a script with multiple getInfo() calls | inspect_asset with optional start_date/end_date/region_var returns count, date range, and band info in one structured call |
| Search for GEE datasets by keyword | Web search, browse the GEE catalog, read blog posts | search_datasets searches 700+ official and community datasets offline (24h-cached catalog), returns ranked results with asset IDs |
| Get detailed dataset metadata (bands, classes, scale/offset) | Find and parse the STAC JSON page for the dataset | get_catalog_info fetches the full STAC record and returns structured band info, class descriptions, viz params |
| Test a code snippet | Write to a file, run it in a terminal, read stdout/stderr | run_code executes in a persistent REPL namespace (like Jupyter) with ee, Map, gv, gil pre-loaded — variables persist across calls, errors are returned inline |
| Build up an analysis incrementally | Each script run starts fresh; agent must manage state manually | run_code namespace persists — build up variables, test each step, inspect intermediate results with get_namespace |
| See what variables exist after several code steps | Re-read the script, mentally track assignments | get_namespace returns all user-defined variables with type and repr — no getInfo() calls |
| Visualize results on a map | Write code to call Map.view(), tell the user to open a browser | view_map opens the geeView map and returns the URL directly |
| Get a visual preview of an image | Write a getThumbURL script, fetch the image, save to disk | get_thumbnail returns a PNG/GIF that the LLM can see and reason about visually |
| Sample pixel values or chart zonal statistics | Write reduceRegion scripts, detect thematic vs continuous data, choose reducers, build Plotly figures manually | extract_and_chart handles point sampling, time series, bar charts, and Sankey diagrams in one call — auto-detects data type, picks the right reducer, and returns a DataFrame + Plotly chart HTML |
| Geocode a place name to a GEE geometry | Call a geocoding API, manually construct ee.Geometry | geocode returns coordinates, a bounding box, and searches GEE boundary collections (WDPA, GAUL, TIGER) for matching polygons with ready-to-use EE code |
| Export an image to an asset | Write export code, look up pyramidingPolicy options, handle overwrite | export_to_asset wraps geeViz's exporter with validation, overwrite support, and pyramiding policy |
| Export to Drive or Cloud Storage | Write export boilerplate, remember required parameters | export_to_drive / export_to_cloud_storage handle all params with sensible defaults (COG enabled, etc.) |
| Check task status or cancel tasks | Write ee.data.getTaskList() code, filter manually | track_tasks / cancel_tasks return structured task info, support name filtering |
| Manage assets (copy, move, delete, permissions) | Write 5-10 lines of ee.data.* calls per operation | copy_asset, move_asset, delete_asset, create_folder, update_acl — one call each with validation |
| Read a geeViz example script | find + cat the example file, hope you guess the filename | list_examples shows all 40+ examples with descriptions; get_example returns full source for .py or .ipynb |
| Save the session as a reusable script | Manually copy code blocks from the conversation | save_session exports the full run_code history as a .py or .ipynb file |

In short: a vanilla agent can read about GEE; the MCP server lets it use GEE. Every tool returns structured data rather than text to parse, handles authentication and error cases, and exposes domain-specific parameters (reducers, CRS, pyramiding policies, STAC metadata) that a general-purpose search would never surface reliably.

What is MCP?

MCP (Model Context Protocol) is an open standard that connects AI tools to external capabilities via tools — callable functions the AI can invoke during a conversation. The geeViz MCP server exposes 30 tools that the AI discovers automatically when it connects. No special prompting or configuration beyond the initial setup is required.
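Under the hood, each tool invocation is an ordinary JSON-RPC message defined by the MCP specification. A tools/call request for one of the geeViz tools looks roughly like this (the envelope follows the spec; the argument name shown for inspect_asset is illustrative, not taken from the server's actual schema):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "inspect_asset",
    "arguments": { "asset_id": "COPERNICUS/S2_SR_HARMONIZED" }
  }
}
```

Your editor or chat app generates these messages for you; you never write them by hand.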

Quick Start

This section walks through the complete setup. It takes about two minutes.

Step 1: Install geeViz

The mcp SDK is included as a dependency of geeViz, so a single install is all you need:

$ pip install geeViz

You can confirm the server is available:

$ python -m geeViz.mcp.server --help

Step 2: Make sure Earth Engine auth works

The MCP server initializes Earth Engine on its first tool call. If you haven’t authenticated recently, do it now so the server doesn’t hang waiting for a browser prompt:

$ python -c "import ee; ee.Authenticate(); ee.Initialize(project='your-project-id'); print(ee.Number(1).getInfo())"

If that prints 1, you’re good.

Step 3: Add the config for your editor

Pick your editor and create the config file shown below. Each one tells the editor how to start the geeViz MCP server as a subprocess.

Cursor

Create .cursor/mcp.json in your project root (or add via Cursor Settings → MCP):

{
  "mcpServers": {
    "geeviz": {
      "command": "python",
      "args": ["-m", "geeViz.mcp.server"]
    }
  }
}

VS Code / GitHub Copilot

Create .vscode/mcp.json in your project root:

{
  "servers": {
    "geeviz": {
      "command": "python",
      "args": ["-m", "geeViz.mcp.server"],
      "cwd": "${workspaceFolder}"
    }
  }
}

For best results, also create .github/copilot-instructions.md to tell Copilot how to use the tools (see Agent Instructions File below).

Claude Code

Create .claude/mcp.json in your project root:

{
  "mcpServers": {
    "geeviz": {
      "command": "python",
      "args": ["-m", "geeViz.mcp.server"]
    }
  }
}

Windsurf / Other MCP Clients

Any MCP client that supports stdio transport can connect. The server command is always:

$ python -m geeViz.mcp.server

Important

geeViz must be importable from the working directory. If you installed via pip install geeViz, any directory works. If you are using a development checkout, set the working directory to the parent of the geeViz package folder.

Step 4: Verify it works

Open your AI assistant’s chat and ask it something that requires the MCP tools:

"What bands does COPERNICUS/S2_SR_HARMONIZED have?"

If the MCP server is connected, the AI will call inspect_asset("COPERNICUS/S2_SR_HARMONIZED") and return the real band list from Earth Engine. If it just guesses from memory, the server isn’t connected — check your config file path and restart the editor.

You can also try:

"List the geeViz example scripts that involve LANDTRENDR"

The AI should call list_examples(filter="LANDTRENDR") and return actual filenames from your geeViz installation.

Agent Instructions File

MCP gives the AI tools, but it doesn’t always know when to use them. The geeViz MCP server solves this by automatically serving agent instructions to every connected client via the protocol’s instructions field. When your AI assistant connects, it receives rules, workflow patterns, and the full list of all 30 tools — no manual setup required.

The instructions are loaded from geeViz/mcp/agent-instructions.md, which also ships with the package for reference. If your editor supports additional instructions files, you can copy the contents there for extra reinforcement:

| Editor | Instructions file |
| --- | --- |
| VS Code / GitHub Copilot | .github/copilot-instructions.md |
| Cursor | .cursorrules or Cursor Settings > Rules |
| Claude Code | CLAUDE.md in the project root |
| Windsurf | .windsurfrules |

Tip

MCP tools vs instructions files — what’s the difference?

An instructions file is static text injected into the AI’s context. It tells the AI what to do, but gives it no new capabilities. The AI still cannot verify its code, check an asset’s bands, or test whether something runs.

MCP tools are callable functions the AI invokes during its response. It can stop mid-thought, call get_api_reference, read the real signature, and write correct code.

Use both. The instructions file tells the AI when to reach for the tools. The MCP server gives it the tools to reach for. Without instructions, the AI has tools but may not think to use them. Without MCP, the instructions are just more docs for the AI to hallucinate from.

HTTP Transport (Advanced)

For non-stdio clients, the server supports HTTP transport, configured via environment variables (the set syntax below is for Windows cmd; use export on macOS/Linux). See the HTTP Server subsection under “Using Without an IDE” for more details.

$ set MCP_TRANSPORT=streamable-http
$ set MCP_HOST=127.0.0.1
$ set MCP_PORT=8000
$ python -m geeViz.mcp.server

Using Without an IDE

If you can’t use the MCP server through a coding IDE, there are several other options for local use.

Desktop AI Apps

These are the lowest-friction options — install the app, add the config, and start chatting.

Claude Desktop

Add to your Claude Desktop config file (%APPDATA%\Claude\claude_desktop_config.json on Windows, ~/Library/Application Support/Claude/claude_desktop_config.json on macOS):

{
  "mcpServers": {
    "geeViz": {
      "command": "python",
      "args": ["-m", "geeViz.mcp.server"]
    }
  }
}

Restart Claude Desktop. The 30 geeViz tools will appear automatically.

ChatGPT Desktop

ChatGPT Desktop also supports MCP servers. Add the same server command (python -m geeViz.mcp.server) in ChatGPT’s MCP configuration.

Terminal

Gemini CLI

Google’s Gemini CLI supports MCP servers; add the server to your .gemini/settings.json:

{
  "mcpServers": {
    "geeViz": {
      "command": "python",
      "args": ["-m", "geeViz.mcp.server"]
    }
  }
}

Claude Code (CLI)

Claude Code is a terminal-based AI agent (not an IDE). Add the server to your project:

$ claude mcp add geeViz -- python -m geeViz.mcp.server

Or create .mcp.json in your project root:

{
  "mcpServers": {
    "geeViz": {
      "command": "python",
      "args": ["-m", "geeViz.mcp.server"]
    }
  }
}

Python Script or Jupyter Notebook

You can connect to the MCP server programmatically using the mcp Python client library and pipe tool calls through any LLM API (Gemini, Claude, OpenAI). This approach gives you full control over prompts, tool routing, and output handling — ideal for batch testing, automated workflows, or custom integrations.

The geeViz package includes two examples:

  • geeViz/mcp/test_mcp.ipynb — Jupyter notebook that tests all 30 tools via Gemini

  • geeViz/mcp/test_mcp_comparison.py — Three-way comparison: bare Gemini vs Google Search vs MCP server

Both use python-dotenv to load a GOOGLE_API_KEY from a .env file. The core pattern:

import asyncio
import subprocess

from mcp.client.session import ClientSession
from mcp.client.stdio import StdioServerParameters, stdio_client

server_params = StdioServerParameters(
    command="python",
    args=["-m", "geeViz.mcp.server"],
)

async def main():
    # errlog=subprocess.DEVNULL needed in Jupyter on Windows
    async with stdio_client(server_params, errlog=subprocess.DEVNULL) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()     # discover all 30 tools
            result = await session.call_tool(      # call any tool by name
                name="get_version_info", arguments={}
            )
            return tools, result

asyncio.run(main())  # inside Jupyter, use: await main()

This connects to the MCP server as a subprocess and exposes the same 30 tools that IDE integrations use. You can then feed tool schemas and results to any LLM via its API.
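To hand the tools to an LLM, you typically translate each MCP tool record into the generic function-declaration shape most LLM APIs accept. A minimal sketch (the MCP-side field names name, description, and inputSchema follow the MCP spec; the output shape is illustrative and may need adapting per provider):

```python
def to_function_declaration(tool: dict) -> dict:
    """Map an MCP tool record to a generic LLM function declaration."""
    return {
        "name": tool["name"],
        "description": tool.get("description", ""),
        # MCP tools carry a JSON Schema for their arguments in inputSchema
        "parameters": tool.get("inputSchema", {"type": "object"}),
    }

# Example with a hand-written tool record shaped like an MCP listing
decl = to_function_declaration({
    "name": "get_version_info",
    "description": "Return geeViz and Earth Engine version info",
    "inputSchema": {"type": "object", "properties": {}},
})
```

From there, pass the declarations to your LLM client, and route any function calls the model emits back through session.call_tool.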

HTTP Server

Run the MCP server with HTTP transport for access from any HTTP-capable MCP client (replace set with export on macOS/Linux):

$ set MCP_TRANSPORT=streamable-http
$ set MCP_PORT=8080
$ python -m geeViz.mcp.server

Any MCP client that supports streamable-http transport can connect to http://localhost:8080/mcp. This is also the path to Cloud Run deployment for remote access.

Choosing the Right Option

| Option | Setup effort | Best for | Notes |
| --- | --- | --- | --- |
| IDE (Cursor, VS Code, etc.) | Low | Daily development | Tightest integration — tools appear inline while coding |
| Claude Desktop / ChatGPT | Low | Chat-style exploration | No coding required, conversational interface |
| Gemini CLI / Claude Code | Low | Terminal users | Full agent capabilities from the command line |
| Python script / notebook | Medium | Batch testing, custom workflows | Full control over prompts and output handling |
| HTTP server | Medium | Remote/shared access, Cloud Run | Any HTTP MCP client can connect; path to hosted deployment |

Tools Reference

The server exposes 30 tools organized into categories such as code execution, map control, API introspection, asset inspection, zonal summary & charting, export, task management, and dataset discovery.

For the complete list of tools with their parameters and docstrings, see the auto-generated API reference: geeViz.mcp.server.

How It Works

Architecture

The MCP server uses lazy initialization — it does not import geeViz or initialize Earth Engine until the first tool call that needs it. This keeps startup fast and avoids authentication prompts when running --help.

A persistent namespace (a Python dict) acts as shared state across run_code calls:

run_code("x = 42")        →  _namespace["x"] = 42
run_code("print(x)")      →  prints 42 (x is still there)
run_code(..., reset=True)  →  _namespace cleared, ee/Map/gv/gil re-added

The Map object in this namespace is the same singleton (geeViz.geeView.Map) that view_map, get_map_layers, and clear_map operate on. No object passing is needed.

Script Saving

Every successful run_code call appends the code to an internal history and writes it to a .py file in geeViz/mcp/generated_scripts/. The file includes:

  • Standard geeViz imports (gv, gil, ee, Map)

  • Each code block labeled with its call number

  • Full standalone script — copy it out and run it directly

Timeouts

run_code uses a background thread with a configurable timeout (default 120 seconds). On Windows, a hung getInfo() call cannot be forcibly terminated — the thread continues in background. This is a known platform limitation.

Example Workflow

Here is what a typical AI-assisted session looks like with the MCP server. The AI calls tools behind the scenes:

User: "Do LANDTRENDR change detection near Bozeman and show me the results"

AI calls: list_examples(filter="LANDTRENDR")
AI calls: get_example("LANDTRENDRWrapper")
AI calls: get_api_reference("changeDetectionLib", "simpleLANDTRENDR")
AI calls: run_code("""
    import geeViz.changeDetectionLib as cdl
    studyArea = ee.Geometry.Point([-111.04, 45.68]).buffer(20000)
    ...
""")
AI calls: run_code("Map.centerObject(studyArea)")
AI calls: view_map()

AI responds: "Here's your LANDTRENDR analysis. The map is open at
http://localhost:1234/geeView/... and the script has been saved to
geeViz/mcp/generated_scripts/session_20260226_143022.py"

The AI looked up real examples, checked the actual function signature, executed working code, and gave the user both a live map and a saved script — all grounded in the real geeViz codebase rather than training data.