Architecture

mcnoaa-tides is a FastMCP server that wraps the NOAA CO-OPS Tides and Currents API. This page describes how the pieces fit together — the server lifecycle, caching strategy, parallel fetch patterns, and module organization.

flowchart LR
    User([User])
    LLM[LLM Client]
    MCP[MCP Transport<br/>stdio / HTTP]
    Server[mcnoaa-tides<br/>FastMCP Server]
    Cache[(Station<br/>Cache)]
    DataAPI[NOAA Data API<br/>Predictions &<br/>Observations]
    MetaAPI[NOAA Metadata API<br/>Station Catalog]

    User -->|natural language| LLM
    LLM -->|tool call| MCP
    MCP --> Server
    Server -->|cache hit| Cache
    Server -->|parallel fetch| DataAPI
    Server -->|catalog refresh| MetaAPI
    Cache -.->|24h TTL| MetaAPI

    style User fill:#0d3b3e,stroke:#1a8a8f,color:#e8f0f0
    style LLM fill:#0d3b3e,stroke:#1a8a8f,color:#e8f0f0
    style MCP fill:#0d3b3e,stroke:#1a8a8f,color:#e8f0f0
    style Server fill:#1a8a8f,stroke:#5ec4c8,color:#0a1517
    style Cache fill:#0d3b3e,stroke:#5ec4c8,color:#e8f0f0
    style DataAPI fill:#0d3b3e,stroke:#1a8a8f,color:#e8f0f0
    style MetaAPI fill:#0d3b3e,stroke:#1a8a8f,color:#e8f0f0

The server uses FastMCP’s lifespan context manager to own the NOAAClient instance. Every tool call receives the same client through the lifespan context, so there is exactly one HTTP connection pool and one station cache for the entire server process.

  1. Startup — The lifespan manager creates a NOAAClient and calls initialize(), which opens an httpx.AsyncClient (with a 30-second timeout) and pre-warms the station cache by fetching the full station catalog from the NOAA metadata API.

  2. Pre-warm fallback — If the station cache request fails during startup (network down, NOAA API outage), the server does not crash. It creates a bare HTTP client and logs a warning to stderr. The cache will populate on the first station-related request.

  3. Running — The initialized client is yielded into the FastMCP context as {"noaa_client": client}. Every tool accesses it via ctx.lifespan_context["noaa_client"].

  4. Shutdown — The finally block calls client.close(), which properly closes the httpx.AsyncClient and releases its connection pool.

# Simplified view of the lifespan flow
@asynccontextmanager
async def lifespan(server: FastMCP):
    client = NOAAClient()
    try:
        await client.initialize()  # HTTP client + station cache
    except Exception:
        client._http = httpx.AsyncClient(timeout=30)  # fallback
    try:
        yield {"noaa_client": client}
    finally:
        await client.close()

The station catalog contains roughly 300 entries. Fetching it on every request would be wasteful, so NOAAClient caches it in memory with a 24-hour TTL.

| Property | Value |
| --- | --- |
| TTL | 86,400 seconds (24 hours) |
| Storage | In-memory list[Station] on the client instance |
| Refresh trigger | Any call to get_stations() after TTL expires |
| Failure behavior | Serve stale data if cache was previously populated; raise if cache was never populated |

The refresh strategy is stale-while-revalidate: when the TTL expires, get_stations() attempts a background refresh. If the refresh fails but stale data exists, the stale list is returned and a warning is logged to stderr. This means the station list might be up to 48 hours old in the worst case (24h TTL + 24h failed refresh window), but requests never fail just because the metadata API is temporarily unreachable.
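The refresh logic can be sketched roughly as follows. This is an illustrative stand-in, not the real NOAAClient: the class name, the injectable fetch_fn and clock, and the synchronous refresh (shown inline here for brevity, where the real client refreshes during an async call) are all assumptions for the sake of the example.

```python
import time

class StationCache:
    """Sketch of the stale-while-revalidate strategy described above."""

    def __init__(self, fetch_fn, ttl=86_400, clock=time.monotonic):
        self._fetch = fetch_fn      # callable returning the station list
        self._ttl = ttl             # 24-hour default
        self._clock = clock
        self._stations = None
        self._fetched_at = None

    def get_stations(self):
        fresh = (
            self._fetched_at is not None
            and self._clock() - self._fetched_at < self._ttl
        )
        if not fresh:
            try:
                self._stations = self._fetch()
                self._fetched_at = self._clock()
            except Exception:
                if self._stations is None:
                    raise  # cache never populated: nothing stale to serve
                # refresh failed but stale data exists: fall through and
                # return the stale list (a warning would be logged here)
        return self._stations
```

The key branch is the except clause: a failed refresh only propagates when there is no stale list to fall back on, which matches the failure behavior in the table above.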

mcnoaa-tides talks to two distinct NOAA endpoints. Neither requires an API key.

Data API

Base URL: api.tidesandcurrents.noaa.gov/api/prod/datagetter

Returns observations and predictions — water levels, tide predictions, wind, air temperature, water temperature, air pressure, and more. Accepts station ID, product name, date range, datum, and units.

Metadata API

Base URL: api.tidesandcurrents.noaa.gov/mdapi/prod/webapi

Returns station information — the station catalog (stations.json), individual station details, available sensors, datums, and products. Used for station discovery and the station cache.

Station IDs are 7-digit numbers (e.g., 9447130 for Seattle, 8454000 for Providence). The client validates this format with a regex before making any request.
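The validation itself is simple. The helper name below is hypothetical; only the 7-digit format is stated by the docs.

```python
import re

# Station IDs are exactly seven digits, e.g. "9447130" (Seattle).
STATION_ID_RE = re.compile(r"^\d{7}$")

def validate_station_id(station_id: str) -> str:
    """Reject malformed station IDs before any network request is made."""
    if not STATION_ID_RE.fullmatch(station_id):
        raise ValueError(f"invalid NOAA station ID: {station_id!r}")
    return station_id
```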

Every data API request includes the query parameter application=mcnoaa-tides-mcp so NOAA can identify the source if needed.
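A Data API query string might be assembled like this. The parameter names (station, product, begin_date, datum, units, time_zone, format) follow NOAA's documented query interface, but the helper function and its defaults are illustrative, not the actual client code.

```python
from urllib.parse import urlencode

DATA_API = "https://api.tidesandcurrents.noaa.gov/api/prod/datagetter"

def build_data_request(station, product, begin_date, end_date,
                       datum="MLLW", units="english"):
    """Build a Data API URL, always tagging the request source."""
    params = {
        "station": station,
        "product": product,
        "begin_date": begin_date,
        "end_date": end_date,
        "datum": datum,
        "units": units,
        "time_zone": "lst_ldt",
        "format": "json",
        "application": "mcnoaa-tides-mcp",  # identifies this server to NOAA
    }
    return f"{DATA_API}?{urlencode(params)}"
```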

Several tools fire multiple API calls simultaneously using asyncio.gather. This is the primary performance optimization — a snapshot that needs 6 products takes about as long as a single product request rather than 6 times as long.

marine_conditions_snapshot fetches 6 products in parallel:

flowchart LR
    Tool[marine_conditions<br/>_snapshot]
    P[predictions<br/>hilo]
    WL[water_level]
    WT[water_temperature]
    AT[air_temperature]
    W[wind]
    AP[air_pressure]
    R[Combined<br/>Response]

    Tool --> P & WL & WT & AT & W & AP
    P & WL & WT & AT & W & AP --> R

    style Tool fill:#1a8a8f,stroke:#5ec4c8,color:#0a1517
    style R fill:#1a8a8f,stroke:#5ec4c8,color:#0a1517
    style P fill:#0d3b3e,stroke:#1a8a8f,color:#e8f0f0
    style WL fill:#0d3b3e,stroke:#1a8a8f,color:#e8f0f0
    style WT fill:#0d3b3e,stroke:#1a8a8f,color:#e8f0f0
    style AT fill:#0d3b3e,stroke:#1a8a8f,color:#e8f0f0
    style W fill:#0d3b3e,stroke:#1a8a8f,color:#e8f0f0
    style AP fill:#0d3b3e,stroke:#1a8a8f,color:#e8f0f0

Each product is fetched independently. If a product fails (sensor not available at the station, temporary API error), it is recorded under an "unavailable" key in the response rather than failing the entire request.

deployment_briefing fetches 4 products in parallel:

predictions (hilo), wind, water_temperature, and air_pressure

Uses a _safe_fetch wrapper that returns None on failure for the meteorological products. The hilo prediction fetch is not wrapped — if tides cannot be retrieved, the tool raises rather than returning a deployment briefing without tide data.

visualize_conditions fetches 6 products in parallel (same set as marine_conditions_snapshot), then passes the collected data to the chart renderer. Unavailable products are omitted from the dashboard panels.

The pattern across all parallel-fetch tools is the same: wrap each product request in a try/except, collect successes and failures separately, and report failures without crashing the overall request. This is important because not every station supports every product — a station might have a tide gauge but no wind sensor.
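That pattern can be sketched in a few lines. The _safe_fetch name comes from the docs above; the snapshot helper and its dict-of-awaitables signature are assumptions made for the example.

```python
import asyncio

async def _safe_fetch(name, coro):
    """Run one product fetch; failures become (name, None) instead of raising."""
    try:
        return name, await coro
    except Exception:
        return name, None

async def snapshot(fetchers):
    """Fetch every product concurrently, reporting failures without crashing.

    `fetchers` maps product name -> awaitable producing that product's data.
    """
    results = await asyncio.gather(
        *(_safe_fetch(name, coro) for name, coro in fetchers.items())
    )
    data = {name: value for name, value in results if value is not None}
    unavailable = [name for name, value in results if value is None]
    return {**data, "unavailable": unavailable}
```

Because asyncio.gather awaits all the wrapped coroutines concurrently, total latency is roughly that of the slowest product, and a station without a wind sensor simply lands under "unavailable".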

The codebase is split into focused modules, each with a register(mcp) function that attaches tools to the FastMCP server instance.

  • src/mcnoaa_tides/
    • server.py — FastMCP app, lifespan, registers all modules
    • client.py — NOAAClient (HTTP, caching, search, haversine)
    • models.py — Pydantic models (Station, TidePrediction, etc.)
    • tidal.py — Pure tidal phase classification (no I/O)
    • resources.py — MCP resources (station catalog, detail, nearby)
    • prompts.py — MCP prompt templates (fishing trip, safety check, etc.)
    • tools/
      • stations.py — search_stations, find_nearest_stations, get_station_info
      • tides.py — get_tide_predictions, get_observed_water_levels
      • meteorological.py — get_meteorological_data (8 product types)
      • conditions.py — marine_conditions_snapshot
      • smartpot.py — tidal_phase, deployment_briefing, catch_tidal_context, water_level_anomaly
      • charts.py — visualize_tides, visualize_conditions
      • diagnostics.py — test_client_capabilities
    • charts/
      • tides.py — Matplotlib/Plotly tide chart renderers
      • conditions.py — Matplotlib/Plotly conditions dashboard renderers

Each tool module has a single register() function. The server imports all seven and calls them in sequence during startup. This keeps the server module small and makes it straightforward to add new tool groups without touching existing ones.
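In outline, the convention looks like the sketch below. ToyMCP is a minimal stand-in for FastMCP (just enough to show the decorator flow), and the tool body is a placeholder; only the register(mcp) convention itself comes from the docs.

```python
class ToyMCP:
    """Stand-in for FastMCP: records tools registered via @mcp.tool()."""
    def __init__(self):
        self.tools = {}

    def tool(self, **_kwargs):
        def decorator(fn):
            self.tools[fn.__name__] = fn
            return fn
        return decorator

# Hypothetical tool module: each module exposes exactly one register(mcp).
def register(mcp):
    @mcp.tool(tags={"discovery"})
    async def search_stations(query: str):
        ...  # placeholder body

# server.py would import each tool module and call register() in sequence.
mcp = ToyMCP()
register(mcp)
```

Adding a new tool group then means adding one module with its own register() and one extra call in server.py, with no edits to existing modules.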

tidal.py contains the phase classification algorithm and prediction interpolation functions. It has no I/O, no FastMCP dependencies, and no async code — just datetime math. This makes it independently testable and reusable. The SmartPot tools import from tidal.py to classify phases and detect anomalies.
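To illustrate the pure-datetime-math flavor of that module, here is one way prediction interpolation could look. The function name is hypothetical and the cosine ("clock") curve is a common tide approximation; the actual tidal.py may use a different method.

```python
import math
from datetime import datetime

def interpolate_level(t: datetime, before: tuple, after: tuple) -> float:
    """Estimate the water level at time t between two hilo predictions.

    `before` and `after` are (datetime, height) pairs. Uses a cosine
    easing, which tracks real tide curves better than a straight line.
    """
    (t0, h0), (t1, h1) = before, after
    frac = (t - t0) / (t1 - t0)              # 0.0 at `before`, 1.0 at `after`
    eased = (1 - math.cos(math.pi * frac)) / 2
    return h0 + eased * (h1 - h0)
```

Note there is no I/O, no async, and no FastMCP import: the function can be unit-tested with plain datetimes and floats.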

Beyond tools, the server registers three resources (station catalog, station detail, nearby stations) accessible via noaa:// URIs, and four prompt templates (fishing trip planning, marine safety check, SmartPot deployment, and catch analysis) that guide LLMs through multi-step workflows.

The server supports two transport modes, selected by the MCP_TRANSPORT environment variable:

| Transport | When to use | How it runs |
| --- | --- | --- |
| stdio (default) | Local usage via uvx mcnoaa-tides or Claude Code | Reads JSON-RPC from stdin, writes to stdout |
| streamable-http | Docker deployment, remote access | Starts an HTTP server on MCP_HOST:MCP_PORT (defaults to 0.0.0.0:8000) |
# stdio (default -- just run it)
uvx mcnoaa-tides
# streamable-http (Docker or remote)
MCP_TRANSPORT=streamable-http MCP_PORT=8000 uvx mcnoaa-tides

The version banner is always printed to stderr (never stdout) to avoid corrupting the JSON-RPC stream in stdio mode.
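The environment-driven selection described above might be read like this; select_transport is an illustrative helper, not the server's actual entry point.

```python
import os

def select_transport():
    """Pick the MCP transport from the environment, with documented defaults."""
    transport = os.environ.get("MCP_TRANSPORT", "stdio")
    if transport == "streamable-http":
        host = os.environ.get("MCP_HOST", "0.0.0.0")
        port = int(os.environ.get("MCP_PORT", "8000"))
        return transport, (host, port)
    return "stdio", None  # stdio needs no host/port
```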

Beyond tools, resources, and prompts, the server uses several MCP protocol features for observability and client interop.

Every tool declares ToolAnnotations from the MCP spec. These are metadata hints that clients can use for automatic retry policies, caching, and approval workflows.

from mcp.types import ToolAnnotations

_ANNOTATIONS = ToolAnnotations(readOnlyHint=True, openWorldHint=True)

@mcp.tool(tags={"discovery"}, annotations=_ANNOTATIONS)
async def search_stations(ctx: Context, ...) -> list[dict]:
    ...

All tools set readOnlyHint=True (they never write data). The openWorldHint is True for tools that call NOAA’s API and False for test_client_capabilities, which only inspects the MCP session.

Tools send diagnostic messages to the client via ctx.info(), ctx.warning(), and ctx.debug(). These are delivered as MCP notifications/message — separate from the tool return value and intended for the client’s log stream.

The logging strategy is intentionally sparse:

  • ctx.info() before data fetches, providing station ID and product context
  • ctx.warning() for actionable signals: preliminary data quality flags, unavailable products, NO-GO/CAUTION deployment assessments, and elevated/high anomaly risk levels
  • ctx.debug() for search match counts, proximity results, and phase classification details

Three tools use ctx.sample() to request natural-language summaries from the connected client’s LLM:

| Tool | Summary content |
| --- | --- |
| marine_conditions_snapshot | 2-3 sentence marine weather briefing for a boat captain |
| deployment_briefing | Concise deployment briefing paragraph for a crab pot crew |
| water_level_anomaly | 2-sentence risk summary for a marine operator |

Sampling is purely additive — the raw structured data is always returned regardless. Each ctx.sample() call is wrapped in try/except Exception: pass because not all clients support sampling. A compact subset of the data (latest readings only, not hundreds of 6-minute records) is sent to keep the sampling prompt efficient.
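The additive-sampling pattern can be sketched as below. The helper name, prompt text, and the assumption that the sampling result exposes a text attribute are all illustrative; only the try/except-around-ctx.sample() shape comes from the docs.

```python
async def maybe_summarize(ctx, compact_data: dict):
    """Ask the client's LLM for a summary; return None if sampling fails.

    `compact_data` should be the latest readings only, keeping the
    sampling prompt small.
    """
    try:
        result = await ctx.sample(
            f"Summarize these marine conditions in 2-3 sentences: {compact_data}"
        )
        return getattr(result, "text", None)
    except Exception:
        return None  # client doesn't support sampling; raw data still returned
```

The caller attaches the summary to the response only when it is not None, so the structured payload is identical whether or not the client supports sampling.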

The test_client_capabilities diagnostic tool introspects ctx.session.client_params to report what the connected client supports. This lets users (and the LLM) discover which optional features are available before attempting them:

  • Sampling — Can the server ask the client’s LLM to generate summaries?
  • Elicitation — Can the server prompt the user for input mid-tool-call?
  • Roots — Does the client expose its project directory context?
  • Tasks — Does the client support long-running background operations?