mcnoaa-tides is a FastMCP server that wraps the NOAA CO-OPS Tides and Currents API. This page describes how the pieces fit together — the server lifecycle, caching strategy, parallel fetch patterns, and module organization.
The server uses FastMCP’s lifespan context manager to own the NOAAClient instance. Every tool call receives the same client through the lifespan context, so there is exactly one HTTP connection pool and one station cache for the entire server process.
Startup — The lifespan manager creates a NOAAClient and calls initialize(), which opens an httpx.AsyncClient (with a 30-second timeout) and pre-warms the station cache by fetching the full station catalog from the NOAA metadata API.
Pre-warm fallback — If the station cache request fails during startup (network down, NOAA API outage), the server does not crash. It creates a bare HTTP client and logs a warning to stderr. The cache will populate on the first station-related request.
Running — The initialized client is yielded into the FastMCP context as {"noaa_client": client}. Every tool accesses it via ctx.lifespan_context["noaa_client"].
Shutdown — The finally block calls client.close(), which properly closes the httpx.AsyncClient and releases its connection pool.
```python
# Simplified view of the lifespan flow
from contextlib import asynccontextmanager

@asynccontextmanager
async def lifespan(server: FastMCP):
    client = NOAAClient()
    try:
        await client.initialize()  # HTTP client + station cache
        yield {"noaa_client": client}
    finally:
        await client.close()  # release the httpx connection pool
```
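The pre-warm fallback can be sketched in isolation. This is an illustrative stand-in, not the real implementation: the hypothetical _fetch_station_catalog takes the place of the actual httpx request to the metadata API, and here it always fails so the fallback path is visible.

```python
# Sketch of the pre-warm fallback: a failed catalog fetch must not
# abort startup. _fetch_station_catalog is a hypothetical stand-in.
import asyncio
import sys


class NOAAClient:
    def __init__(self):
        self._stations = None  # station cache, populated lazily on failure

    async def _fetch_station_catalog(self):
        # Placeholder: the real client would GET stations.json via httpx.
        raise ConnectionError("NOAA metadata API unreachable")

    async def initialize(self):
        # The HTTP client is created first (omitted here); the cache
        # pre-warm is best-effort.
        try:
            self._stations = await self._fetch_station_catalog()
        except Exception as exc:
            print(f"warning: station pre-warm failed: {exc}", file=sys.stderr)
            # Leave the cache empty; the first station request retries.


client = NOAAClient()
asyncio.run(client.initialize())
print(client._stations)  # None: the server still starts without the cache
```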
The station catalog contains roughly 301 entries. Fetching it on every request would be wasteful, so NOAAClient caches it in memory with a 24-hour TTL.
| Property | Value |
| --- | --- |
| TTL | 86,400 seconds (24 hours) |
| Storage | In-memory list[Station] on the client instance |
| Refresh trigger | Any call to get_stations() after TTL expires |
| Failure behavior | Serve stale data if cache was previously populated; raise if it was never populated |
The refresh strategy is stale-while-revalidate: when the TTL expires, get_stations() attempts a background refresh. If the refresh fails but stale data exists, the stale list is returned and a warning is logged to stderr. This means the station list might be up to 48 hours old in the worst case (24h TTL + 24h failed refresh window), but requests never fail just because the metadata API is temporarily unreachable.
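The stale-serving half of that strategy can be condensed into a small synchronous sketch (the real refresh runs in the background against the metadata API; the fetch callable here is a placeholder):

```python
# Minimal stale-while-revalidate sketch: serve stale data when a
# refresh fails, raise only if the cache was never populated.
import sys
import time

TTL_SECONDS = 86_400  # 24 hours


class StationCache:
    def __init__(self, fetch):
        self._fetch = fetch          # callable returning a fresh catalog
        self._stations = None
        self._fetched_at = 0.0

    def get_stations(self):
        fresh = (time.monotonic() - self._fetched_at) < TTL_SECONDS
        if self._stations is not None and fresh:
            return self._stations
        try:
            self._stations = self._fetch()
            self._fetched_at = time.monotonic()
        except Exception as exc:
            if self._stations is None:
                raise                # never populated: nothing to serve
            print(f"warning: refresh failed, serving stale list: {exc}",
                  file=sys.stderr)
        return self._stations


cache = StationCache(lambda: ["9447130", "8454000"])
print(len(cache.get_stations()))  # 2
```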
mcnoaa-tides talks to two distinct NOAA endpoints. Neither requires an API key.
Data API
Base URL: api.tidesandcurrents.noaa.gov/api/prod/datagetter
Returns observations and predictions — water levels, tide predictions, wind, air temperature, water temperature, air pressure, and more. Accepts station ID, product name, date range, datum, and units.
Metadata API
Base URL: api.tidesandcurrents.noaa.gov/mdapi/prod/webapi
Returns station information — the station catalog (stations.json), individual station details, available sensors, datums, and products. Used for station discovery and the station cache.
Station IDs are 7-digit numbers (e.g., 9447130 for Seattle, 8454000 for Providence). The client validates this format with a regex before making any request.
Every data API request includes the query parameter application=mcnoaa-tides-mcp so NOAA can identify the source if needed.
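Putting the two together, a request-construction helper might look like this. The parameter names (station, product, application, format) are from the public CO-OPS data API; the helper itself is illustrative.

```python
# Sketch of request construction: 7-digit station validation plus the
# identifying application parameter.
import re

DATA_API = "https://api.tidesandcurrents.noaa.gov/api/prod/datagetter"
STATION_RE = re.compile(r"^\d{7}$")


def build_params(station_id: str, product: str) -> dict:
    if not STATION_RE.match(station_id):
        raise ValueError(f"invalid station ID: {station_id!r}")
    return {
        "station": station_id,
        "product": product,
        "application": "mcnoaa-tides-mcp",  # source tag for NOAA
        "format": "json",
    }


print(build_params("9447130", "water_level")["application"])
```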
Several tools fire multiple API calls simultaneously using asyncio.gather. This is the primary performance optimization — a snapshot that needs 6 products takes about as long as a single product request rather than 6 times as long.
```mermaid
flowchart LR
    Tool[marine_conditions<br/>_snapshot]
    P[predictions<br/>hilo]
    WL[water_level]
    WT[water_temperature]
    AT[air_temperature]
    W[wind]
    AP[air_pressure]
    R[Combined<br/>Response]
    Tool --> P & WL & WT & AT & W & AP
    P & WL & WT & AT & W & AP --> R
    style Tool fill:#1a8a8f,stroke:#5ec4c8,color:#0a1517
    style R fill:#1a8a8f,stroke:#5ec4c8,color:#0a1517
    style P fill:#0d3b3e,stroke:#1a8a8f,color:#e8f0f0
    style WL fill:#0d3b3e,stroke:#1a8a8f,color:#e8f0f0
    style WT fill:#0d3b3e,stroke:#1a8a8f,color:#e8f0f0
    style AT fill:#0d3b3e,stroke:#1a8a8f,color:#e8f0f0
    style W fill:#0d3b3e,stroke:#1a8a8f,color:#e8f0f0
    style AP fill:#0d3b3e,stroke:#1a8a8f,color:#e8f0f0
```
Each product is fetched independently. If a product fails (sensor not available at the station, temporary API error), it is recorded under an "unavailable" key in the response rather than failing the entire request.
The deployment_briefing tool wraps each meteorological product in a _safe_fetch helper that returns None on failure. The hilo prediction fetch is not wrapped: if tides cannot be retrieved, the tool raises rather than returning a deployment briefing without tide data.
The dashboard tool fetches six products in parallel (the same set as marine_conditions_snapshot), then passes the collected data to the chart renderer. Unavailable products are omitted from the dashboard panels.
The pattern across all parallel-fetch tools is the same: wrap each product request in a try/except, collect successes and failures separately, and report failures without crashing the overall request. This is important because not every station supports every product — a station might have a tide gauge but no wind sensor.
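A stdlib-only sketch of that pattern, with fake fetchers standing in for the NOAA requests (here the station is pretended to lack a wind sensor so the failure path is exercised):

```python
# Parallel fetch with per-product failure isolation: one failed product
# lands under "unavailable" instead of failing the whole call.
import asyncio


async def fetch_product(station_id: str, product: str) -> dict:
    if product == "wind":  # pretend this station has no wind sensor
        raise RuntimeError("no wind sensor")
    return {"product": product, "value": 1.0}


async def safe_fetch(station_id: str, product: str):
    try:
        return product, await fetch_product(station_id, product)
    except Exception as exc:
        return product, exc


async def snapshot(station_id: str) -> dict:
    products = ["water_level", "water_temperature", "wind"]
    results = await asyncio.gather(
        *(safe_fetch(station_id, p) for p in products)
    )
    out, unavailable = {}, {}
    for name, result in results:
        if isinstance(result, Exception):
            unavailable[name] = str(result)
        else:
            out[name] = result
    out["unavailable"] = unavailable
    return out


snap = asyncio.run(snapshot("9447130"))
print(sorted(snap["unavailable"]))  # ['wind']
```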
Each tool module exposes a single register() function. The server imports all seven modules and calls their register() functions in sequence during startup. This keeps the server module small and makes it straightforward to add new tool groups without touching existing ones.
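The convention reduces to this shape (module and function names here are illustrative, with a tiny fake server standing in for FastMCP):

```python
# Sketch of the register() convention: each tool module contributes
# its tools through one entry point the server calls at startup.
class FakeServer:
    def __init__(self):
        self.tools = []

    def add_tool(self, name):
        self.tools.append(name)


def register_station_tools(mcp):   # stands in for one module's register()
    mcp.add_tool("find_stations")


def register_tide_tools(mcp):      # stands in for another module's register()
    mcp.add_tool("tide_predictions")


server = FakeServer()
for register in (register_station_tools, register_tide_tools):
    register(server)

print(server.tools)
```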
tidal.py contains the phase classification algorithm and prediction interpolation functions. It has no I/O, no FastMCP dependencies, and no async code — just datetime math. This makes it independently testable and reusable. The SmartPot tools import from tidal.py to classify phases and detect anomalies.
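To illustrate the kind of pure datetime math that lives there, here is one common way to interpolate a water level between two predicted extremes with a half-cosine curve; the actual functions in tidal.py may use a different formulation.

```python
# Illustrative interpolation between a high and the following low:
# level changes slowly near the extremes, fastest mid-cycle.
import math
from datetime import datetime, timedelta


def interpolate_level(t: datetime,
                      t_high: datetime, h_high: float,
                      t_low: datetime, h_low: float) -> float:
    frac = (t - t_high) / (t_low - t_high)      # 0.0 at high, 1.0 at low
    return h_low + (h_high - h_low) * (1 + math.cos(math.pi * frac)) / 2


high = datetime(2024, 6, 1, 4, 0)
low = high + timedelta(hours=6, minutes=12)
mid = high + (low - high) / 2
print(round(interpolate_level(mid, high, 3.2, low, 0.4), 2))  # 1.8
```

Because the module has no I/O or async code, functions like this can be unit-tested with plain datetimes.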
Beyond tools, the server registers three resources (station catalog, station detail, nearby stations) accessible via noaa:// URIs, and four prompt templates (fishing trip planning, marine safety check, SmartPot deployment, and catch analysis) that guide LLMs through multi-step workflows.
Every tool declares ToolAnnotations from the MCP spec. These are metadata hints that clients can use for automatic retry policies, caching, and approval workflows.
All tools set readOnlyHint=True (they never write data). The openWorldHint is True for tools that call NOAA’s API and False for test_client_capabilities, which only inspects the MCP session.
Tools send diagnostic messages to the client via ctx.info(), ctx.warning(), and ctx.debug(). These are delivered as MCP notifications/message — separate from the tool return value and intended for the client’s log stream.
The logging strategy is intentionally sparse:
ctx.info() before data fetches, providing station ID and product context
ctx.warning() for actionable signals: preliminary data quality flags, unavailable products, NO-GO/CAUTION deployment assessments, and elevated/high anomaly risk levels
ctx.debug() for search match counts, proximity results, and phase classification details
Three tools use ctx.sample() to request natural-language summaries from the connected client’s LLM:
| Tool | Summary content |
| --- | --- |
| marine_conditions_snapshot | 2-3 sentence marine weather briefing for a boat captain |
| deployment_briefing | Concise deployment briefing paragraph for a crab pot crew |
| water_level_anomaly | 2-sentence risk summary for a marine operator |
Sampling is purely additive — the raw structured data is always returned regardless. Each ctx.sample() call is wrapped in try/except Exception: pass because not all clients support sampling. A compact subset of the data (latest readings only, not hundreds of 6-minute records) is sent to keep the sampling prompt efficient.
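The compaction step might look like this; the record shape is assumed for illustration (real 6-minute records carry more fields):

```python
# Sketch of trimming the sampling payload to the latest reading per
# product, so the prompt stays small.
def compact_for_sampling(data: dict) -> dict:
    """Keep only the most recent reading for each product."""
    compact = {}
    for product, records in data.items():
        if records:                  # records assumed sorted oldest-to-newest
            compact[product] = records[-1]
    return compact


data = {
    "water_level": [{"t": "10:00", "v": 1.1}, {"t": "10:06", "v": 1.2}],
    "wind": [],                      # unavailable product: dropped
}
print(compact_for_sampling(data))
```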
The test_client_capabilities diagnostic tool introspects ctx.session.client_params to report what the connected client supports. This lets users (and the LLM) discover which optional features are available before attempting them:
Sampling — Can the server ask the client’s LLM to generate summaries?
Elicitation — Can the server prompt the user for input mid-tool-call?
Roots — Does the client expose its project directory context?
Tasks — Does the client support long-running background operations?