MCP Server

geeViz includes a Model Context Protocol (MCP) server that lets AI coding assistants (Cursor, Claude Code, Windsurf, etc.) interact with geeViz and Google Earth Engine directly.

Why MCP instead of docs?

geeViz already has good documentation — the geeviz.org website, inline pydoc strings in every module, and 40+ example scripts. When you use an AI assistant without MCP, it can only work from whatever it memorized during training. That leads to predictable problems:

  • Stale knowledge. AI models are trained on a snapshot of the internet. If geeViz added a parameter, renamed a function, or changed a default since that snapshot, the AI doesn’t know. It will confidently generate code using the old API.

  • Hallucinated signatures. The AI may remember that getLandsatWrapper exists but guess at parameter names or defaults rather than checking. Pydoc strings are thorough, but the AI never actually reads them — it reconstructs them from memory.

  • No way to verify. Without execution, the AI cannot tell whether its code works. It hands you a script and hopes for the best. If there is a bug, you are the one who discovers it at runtime.

  • No asset awareness. Documentation describes functions, not your data. The AI has no way to know what bands an ImageCollection has, what date range it covers, or what assets exist in your project folder.

The MCP server solves each of these by giving the AI live access to the same things a human developer would use: the actual source code, real function signatures via Python’s inspect module, the full example scripts on disk, a persistent Python REPL that can run and test code, and the Earth Engine API for querying assets and tasks. The AI is no longer guessing from memory — it is looking things up and trying them.

Concretely, with the MCP server the AI can:

  • Execute code in a persistent Python/GEE session (like Jupyter cells) and see whether it works before giving it to you

  • Inspect live Earth Engine assets — band names, CRS, scale, date ranges, collection sizes — for your specific data

  • Look up real function signatures and docstrings via inspect — always current, always complete

  • Read actual example scripts from disk — the same 40+ .py and .ipynb files that ship with geeViz

  • Search across all modules at once when it doesn’t know which module a function lives in

  • Control the interactive map — add layers, open the viewer, check what’s on the map, clear and start over

  • Save a runnable script of everything it built, so you have a standalone .py file at the end

The docs are still valuable for learning geeViz yourself. MCP is for when you want the AI to use geeViz correctly on your behalf.

What is MCP?

MCP (Model Context Protocol) is an open standard that connects AI tools to external capabilities via tools — callable functions the AI can invoke during a conversation. The geeViz MCP server exposes 33 tools that the AI discovers automatically when it connects. No special prompting or configuration beyond the initial setup is required.

Quick Start

This section walks through the complete setup. It takes about two minutes.

Step 1: Install geeViz

The mcp SDK is included as a dependency of geeViz, so a single install is all you need:

$ pip install geeViz

You can confirm the server is available:

$ python -m geeViz.mcp.server --help

Step 2: Make sure Earth Engine auth works

The MCP server initializes Earth Engine on its first tool call. If you haven’t authenticated recently, do it now so the server doesn’t hang waiting for a browser prompt:

$ python -c "import ee; ee.Authenticate(); ee.Initialize(project='your-project-id'); print(ee.Number(1).getInfo())"

If that prints 1, you’re good.

Step 3: Add the config for your editor

Pick your editor and create the config file shown below. Each one tells the editor how to start the geeViz MCP server as a subprocess.

Cursor

Create .cursor/mcp.json in your project root (or add via Cursor Settings → MCP):

{
  "mcpServers": {
    "geeviz": {
      "command": "python",
      "args": ["-m", "geeViz.mcp.server"]
    }
  }
}

VS Code / GitHub Copilot

Create .vscode/mcp.json in your project root:

{
  "servers": {
    "geeviz": {
      "command": "python",
      "args": ["-m", "geeViz.mcp.server"],
      "cwd": "${workspaceFolder}"
    }
  }
}

For best results, also create .github/copilot-instructions.md to tell Copilot how to use the tools (see Agent Instructions File below).

Claude Code

Create .claude/mcp.json in your project root:

{
  "mcpServers": {
    "geeviz": {
      "command": "python",
      "args": ["-m", "geeViz.mcp.server"]
    }
  }
}

Windsurf / Other MCP Clients

Any MCP client that supports stdio transport can connect. The server command is always:

$ python -m geeViz.mcp.server

Important

geeViz must be importable from the working directory. If you installed via pip install geeViz, any directory works. If you are using a development checkout, set the working directory to the parent of the geeViz package folder.

Step 4: Verify it works

Open your AI assistant’s chat and ask it something that requires the MCP tools:

"What bands does COPERNICUS/S2_SR_HARMONIZED have?"

If the MCP server is connected, the AI will call inspect_asset("COPERNICUS/S2_SR_HARMONIZED") and return the real band list from Earth Engine. If it just guesses from memory, the server isn’t connected — check your config file path and restart the editor.

You can also try:

"List the geeViz example scripts that involve LANDTRENDR"

The AI should call list_examples(filter="LANDTRENDR") and return actual filenames from your geeViz installation.

Agent Instructions File

MCP gives the AI tools, but it doesn’t always know when to use them. The geeViz MCP server solves this by automatically serving agent instructions to every connected client via the MCP instructions protocol field. When your AI assistant connects, it receives rules, workflow patterns, and the full list of all 33 tools — no manual setup required.

The instructions are loaded from geeViz/mcp/agent-instructions.md, which also ships with the package for reference. If your editor supports additional instructions files, you can copy the contents there for extra reinforcement:

Editor → Instructions file

  • VS Code / GitHub Copilot: .github/copilot-instructions.md

  • Cursor: .cursorrules or Cursor Settings > Rules

  • Claude Code: CLAUDE.md in the project root

  • Windsurf: .windsurfrules

Tip

MCP tools vs instructions files — what’s the difference?

An instructions file is static text injected into the AI’s context. It tells the AI what to do, but gives it no new capabilities. The AI still cannot verify its code, check an asset’s bands, or test whether something runs.

MCP tools are callable functions the AI invokes during its response. It can stop mid-thought, call get_api_reference, read the real signature, and write correct code.

Use both. The instructions file tells the AI when to reach for the tools. The MCP server gives it the tools to reach for. Without instructions, the AI has tools but may not think to use them. Without MCP, the instructions are just more docs for the AI to hallucinate from.

HTTP Transport (Advanced)

For non-stdio clients, the server supports HTTP transport, configured via environment variables (shown here with Windows `set`; on macOS/Linux use `export` instead):

$ set MCP_TRANSPORT=streamable-http
$ set MCP_HOST=127.0.0.1
$ set MCP_PORT=8000
$ python -m geeViz.mcp.server

Tools Reference

The server exposes 33 tools organized into eleven categories.

Code Execution

run_code(code, timeout=120, reset=False)

Execute Python/GEE code in a persistent REPL namespace. Variables persist across calls (like Jupyter cells). The namespace is pre-populated with ee, Map, gv (geeViz.geeView), and gil (geeViz.getImagesLib).

Every successful call is automatically saved to a script file. The response includes a script_path field pointing to the generated .py file.

Set reset=True to clear all variables and start fresh.

# First call
run_code("x = ee.Number(42).getInfo()")

# Second call -- x is still available
run_code("print(x)")  # prints 42

save_script(filename="")

Save the accumulated run_code history to a standalone .py file with all necessary imports. Useful for handing off a working script to the user.
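For example (the filename shown is illustrative; leaving it empty lets the server pick a session-based name):

save_script(filename="bozeman_landtrendr.py")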

Map Control

view_map(open_browser=True)

Open the geeView interactive map and return the URL. All layers added via run_code (using the Map object) will appear. This is the same Map singleton — no object passing needed.
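A typical sequence adds a layer through run_code and then opens the viewer (the dataset and visualization values here are illustrative):

run_code("dem = ee.Image('USGS/SRTMGL1_003'); Map.addLayer(dem, {'min': 0, 'max': 3000}, 'DEM')")
view_map()  # returns the geeView URL and opens it in a browser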

get_map_layers()

Return the current state of the map: layer names, visibility, visualization parameters, and active commands. Useful for debugging why a map looks wrong.

clear_map()

Remove all layers and commands from the map. Resets to a blank state.

API Introspection

get_api_reference(module, function_name="")

Look up the signature and full docstring of any geeViz function. Uses Python’s inspect module, so it always reflects the installed code — zero maintenance required.

Valid modules: geeView, getImagesLib, changeDetectionLib, gee2Pandas, assetManagerLib, taskManagerLib, foliumView, phEEnoViz, cloudStorageManagerLib.
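For example, to check the real signature of the Landsat wrapper mentioned earlier:

get_api_reference("getImagesLib", "getLandsatWrapper")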

list_functions(module, filter="")

List all public functions and classes in a module with one-line descriptions. The optional filter parameter does case-insensitive substring matching — important for getImagesLib which has 100+ functions.
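For example, to narrow getImagesLib down to its cloud-related helpers:

list_functions("getImagesLib", filter="cloud")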

search_functions(query)

Search across all geeViz modules at once by function name or docstring. Use this when you don’t know which module a function lives in.

search_functions("cloud mask")  # finds functions across all modules

Asset & Task Management

inspect_asset(asset_id)

Get detailed metadata for any GEE asset. Returns band names/types, CRS, scale, date range, size, column names, and properties depending on asset type (Image, ImageCollection, FeatureCollection).

list_assets(folder)

List assets in a GEE folder or collection. Returns id, type, and size for each asset (max 200).
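For example (substitute your own project path):

list_assets("projects/your-project-id/assets")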

track_tasks(name_filter="")

Get the status of recent Earth Engine tasks — description, state, type, start time, and error messages (max 50 most recent).

Example Discovery

list_examples(filter="")

List available geeViz example scripts (40+ .py and .ipynb files) with descriptions extracted from docstrings or markdown headers.

get_example(example_name)

Read the full source code of an example. Accepts names with or without extension. For notebooks, extracts code and markdown cells.

Geocoding & Environment

geocode(place_name, use_boundaries=False)

Geocode a place name to coordinates using OpenStreetMap Nominatim. Returns latitude, longitude, bounding box, and ready-to-paste ee.Geometry code snippets. When use_boundaries=True, also searches GEE boundary collections (WDPA protected areas, FAO/GAUL admin boundaries, TIGER US states/counties) for matching polygons and returns asset IDs with filter expressions.
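For example (the place names are illustrative):

geocode("Bozeman, Montana")
geocode("Yellowstone National Park", use_boundaries=True)  # also checks WDPA/GAUL/TIGER boundaries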

get_version_info()

Return version strings for geeViz, the Earth Engine Python API, Python itself, and the platform. Useful for debugging environment issues.

get_namespace()

Inspect user-defined variables in the persistent REPL namespace. Shows name, type (with Earth Engine-specific type detection), and a truncated repr for each variable. Excludes the built-in entries (ee, Map, gv, gil). No getInfo() calls are made — pure Python-side introspection.

get_project_info()

Return the current Earth Engine project ID and a sample of root assets. Useful for confirming which project the session is using.

Export

export_to_asset(image_var, asset_id, region_var="", scale=30, crs="EPSG:4326", overwrite=False, pyramiding_policy="mean")

Export an ee.Image from the REPL namespace to a GEE asset using geeViz’s exportToAssetWrapper. Supports overwrite and pyramiding policy. Use track_tasks() to monitor progress.
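For example, assuming an image and a study area already exist in the REPL namespace (the variable names and asset path are illustrative):

export_to_asset("composite", "projects/your-project-id/assets/composite_2024", region_var="studyArea", scale=30)
track_tasks()  # then monitor the export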

export_to_drive(image_var, output_name, drive_folder, region_var, scale=30, crs="EPSG:4326", output_no_data=-32768)

Export an ee.Image to Google Drive using geeViz’s exportToDriveWrapper. Region is required for Drive exports.

export_to_cloud_storage(image_var, output_name, bucket, region_var, scale=30, crs="EPSG:4326", output_no_data=-32768, file_format="GeoTIFF", overwrite=False)

Export an ee.Image to Google Cloud Storage using geeViz’s exportToCloudStorageWrapper. Defaults to Cloud Optimized GeoTIFF.

save_notebook(filename="")

Save the accumulated run_code history as a Jupyter notebook (.ipynb). Creates one code cell per run_code call, with a markdown header and import cell at the top. Complements save_script for users who prefer notebooks.

Task Management

cancel_tasks(name_filter="")

Cancel running and ready Earth Engine tasks. If name_filter is provided, cancels only tasks matching that substring. Otherwise cancels all ready/running tasks. Uses geeViz’s taskManagerLib.

Data Sampling & Time Series

sample_values(image_var, geometry_var="", lon=None, lat=None, scale=30, reducer="first")

Sample pixel values from an ee.Image at a point (lon/lat) or over a region. Supports reducers: first, mean, median, min, max, sum, stdDev, count.
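For example, sampling an image variable at a point (the variable name is illustrative; the coordinates are near Bozeman):

sample_values("composite", lon=-111.04, lat=45.68, scale=30)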

get_time_series(collection_var, geometry_var="", lon=None, lat=None, band="", start_date="", end_date="", scale=30, reducer="mean")

Extract time series of band values from an ee.ImageCollection. Returns date-value pairs. If matplotlib is available, returns a line chart PNG directly.
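For example (the collection variable and band name are illustrative):

get_time_series("s2", lon=-111.04, lat=45.68, band="NDVI", start_date="2023-01-01", end_date="2023-12-31", reducer="mean")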

Asset Management

delete_asset(asset_id)

Delete a single GEE asset. Checks existence before deleting. Single-asset only (not recursive) for safety.

copy_asset(source_id, dest_id, overwrite=False)

Copy a GEE asset to a new location. If overwrite is True and the destination exists, deletes it first.

move_asset(source_id, dest_id, overwrite=False)

Move a GEE asset (copy to destination, then delete source). Source is only deleted after a successful copy.

create_folder(folder_path, folder_type="Folder")

Create a GEE folder or ImageCollection. Creates intermediate folders recursively. folder_type can be "Folder" or "ImageCollection".

update_acl(asset_id, all_users_can_read=False, readers="", writers="")

Update permissions (ACL) on a GEE asset. readers and writers are comma-separated email addresses.

get_collection_info(collection_id, start_date="", end_date="", region_var="")

Get summary info for an ImageCollection by asset ID — image count, date range, band names/types, scale. Accepts optional date and region filters. Does not require a namespace variable.
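For example (the date filter is illustrative):

get_collection_info("COPERNICUS/S2_SR_HARMONIZED", start_date="2023-06-01", end_date="2023-09-01")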

Dataset Discovery

search_datasets(query, source="all", max_results=10)

Search the GEE dataset catalog by keyword. Searches both the official Earth Engine catalog (500+ datasets) and the community catalog (200+ datasets) maintained by samapriya. Uses word-level matching against title, tags, id, and provider fields with relevance scoring (title=3, tags=2, id=2, provider=1).

The source parameter controls which catalog to search: "official", "community", or "all" (default). Results are cached locally for 24 hours.

search_datasets("landsat surface reflectance")
search_datasets("fire", source="community")

get_dataset_info(dataset_id)

Get the full STAC JSON record for a specific GEE dataset. Fetches the record from earthengine-stac.storage.googleapis.com and returns it as-is — bands (with classes, wavelengths, scale/offset), full description, temporal/spatial extent, keywords, license, visualization parameters, provider info, links, and everything else in the STAC spec.

This is the “drill down” companion to search_datasets — use search_datasets to find datasets, then get_dataset_info for full details. Only works for official GEE datasets; for community datasets use inspect_asset instead.

get_dataset_info("LANDSAT/LC09/C02/T1_L2")

get_thumbnail(variable, viz_params="{}", dimensions=512, region_var="")

Get a PNG thumbnail of an ee.Image or animated GIF of an ee.ImageCollection and return it directly to the LLM for visual context. Use run_code first to create the variable in the REPL namespace.

For ee.Image, returns a single PNG thumbnail. For ee.ImageCollection, returns an animated GIF (up to 20 frames) via getVideoThumbURL. Always provide viz_params (bands, min, max) for useful results.

# After run_code("img = ee.Image('USGS/SRTMGL1_003')")
get_thumbnail("img", '{"min": 0, "max": 3000}')

How It Works

Architecture

The MCP server uses lazy initialization — it does not import geeViz or initialize Earth Engine until the first tool call that needs it. This keeps startup fast and avoids authentication prompts when running --help.

A persistent namespace (a Python dict) acts as shared state across run_code calls:

run_code("x = 42")        →  _namespace["x"] = 42
run_code("print(x)")      →  prints 42 (x is still there)
run_code(..., reset=True)  →  _namespace cleared, ee/Map/gv/gil re-added

The Map object in this namespace is the same singleton (geeViz.geeView.Map) that view_map, get_map_layers, and clear_map operate on. No object passing is needed.

Script Saving

Every successful run_code call appends the code to an internal history and writes it to a .py file in geeViz/mcp/generated_scripts/. The file includes:

  • Standard geeViz imports (gv, gil, ee, Map)

  • Each code block labeled with its call number

  • Full standalone script — copy it out and run it directly

Timeouts

run_code uses a background thread with a configurable timeout (default 120 seconds). On Windows, a hung getInfo() call cannot be forcibly terminated — the thread continues in background. This is a known platform limitation.
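The pattern is standard Python threading: run the code in a worker thread and join with a timeout. A simplified sketch (not the actual server code):

```python
import threading

def run_with_timeout(fn, timeout=120):
    """Run fn() in a daemon thread; raise TimeoutError if it doesn't finish."""
    result, error = {}, {}

    def target():
        try:
            result["value"] = fn()
        except Exception as exc:  # surface worker exceptions to the caller
            error["exc"] = exc

    worker = threading.Thread(target=target, daemon=True)
    worker.start()
    worker.join(timeout)
    if worker.is_alive():
        # CPython threads cannot be force-killed -- the worker keeps
        # running in the background (the platform caveat noted above).
        raise TimeoutError(f"code still running after {timeout}s")
    if "exc" in error:
        raise error["exc"]
    return result.get("value")

print(run_with_timeout(lambda: 2 + 2, timeout=5))  # 4
```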

Example Workflow

Here is what a typical AI-assisted session looks like with the MCP server. The AI calls tools behind the scenes:

User: "Do LANDTRENDR change detection near Bozeman and show me the results"

AI calls: list_examples(filter="LANDTRENDR")
AI calls: get_example("LANDTRENDRWrapper")
AI calls: get_api_reference("changeDetectionLib", "simpleLANDTRENDR")
AI calls: run_code("""
    import geeViz.changeDetectionLib as cdl
    studyArea = ee.Geometry.Point([-111.04, 45.68]).buffer(20000)
    ...
""")
AI calls: run_code("Map.centerObject(studyArea)")
AI calls: view_map()

AI responds: "Here's your LANDTRENDR analysis. The map is open at
http://localhost:1234/geeView/... and the script has been saved to
geeViz/mcp/generated_scripts/session_20260226_143022.py"

The AI looked up real examples, checked the actual function signature, executed working code, and gave the user both a live map and a saved script — all grounded in the real geeViz codebase rather than training data.