
The two layers

Every request is evaluated against two independent limits on the same key.
  1. Sliding 60-second window — your tier’s requests-per-minute ceiling.
  2. 1-second burst cap — your tier’s requests-per-second ceiling.
Whichever limit you hit first triggers a 429 rate_limited response.

Per-tier ceilings

Tier    Requests / minute    Burst (req / sec)
Watch   30                   5
See     120                  20
Know    600                  60
The source of truth for exact values is Tiers and Quotas, which is derived directly from api/config.py.
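For reference, the table can be expressed as a lookup. This is an illustrative mirror only — the real source of truth remains api/config.py, as noted above:

```python
# Illustrative mirror of the per-tier ceilings; the authoritative
# values live in api/config.py (surfaced in Tiers and Quotas).
TIER_LIMITS = {
    "watch": {"rpm": 30,  "burst": 5},
    "see":   {"rpm": 120, "burst": 20},
    "know":  {"rpm": 600, "burst": 60},
}
```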

Headers on every response

Every response — success or failure — includes your current window state.
Header                   Meaning
X-RateLimit-Limit        Your tier's per-minute ceiling.
X-RateLimit-Remaining    Requests remaining in the current 60-second window.
X-RateLimit-Reset        UNIX timestamp (seconds) at which the window resets.
A 429 additionally carries:
Header         Meaning
Retry-After    Seconds to wait before retrying. Always present on 429.
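A minimal sketch of collecting this state from a response. It takes a plain status code and header mapping rather than any specific HTTP client's response object, so the shape is an assumption, not part of the API itself:

```python
def window_state(status_code: int, headers: dict) -> dict:
    """Collect the rate-limit state carried on every response.

    Accepts a plain header mapping so it works with any HTTP client.
    """
    state = {
        "limit": int(headers["X-RateLimit-Limit"]),
        "remaining": int(headers["X-RateLimit-Remaining"]),
        "reset": int(headers["X-RateLimit-Reset"]),
    }
    # Retry-After is only guaranteed to be present on 429 responses.
    if status_code == 429:
        state["retry_after"] = int(headers.get("Retry-After", "1"))
    return state
```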

The 429 response body

{
  "error": "rate_limited",
  "message": "Rate limit exceeded. Retry after 1 second.",
  "retry_after": 1,
  "limit": 30,
  "window_seconds": 60
}

Correct backoff pattern

Respect Retry-After. Do not retry faster. Do not retry synchronously in a tight loop. Use jitter.
Python
import os, time, random, httpx

def get_with_backoff(url: str, max_retries: int = 5) -> httpx.Response:
    headers = {"X-API-Key": os.environ["EXORDE_API_KEY"]}
    for attempt in range(max_retries):
        r = httpx.get(url, headers=headers, timeout=10)
        if r.status_code != 429:
            return r
        # Honour Retry-After, then add jittered exponential backoff on top.
        retry_after = int(r.headers.get("Retry-After", "1"))
        sleep_s = retry_after + random.uniform(0, 0.5 * (2 ** attempt))
        time.sleep(sleep_s)
    r.raise_for_status()
    return r
Node
async function getWithBackoff(url, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const r = await fetch(url, {
      headers: { "X-API-Key": process.env.EXORDE_API_KEY },
    });
    if (r.status !== 429) return r;
    const retryAfter = Number(r.headers.get("Retry-After") ?? 1);
    const jitter = Math.random() * 0.5 * Math.pow(2, attempt);
    await new Promise(res => setTimeout(res, (retryAfter + jitter) * 1000));
  }
  throw new Error("Rate-limit retries exhausted");
}
PowerShell
function Invoke-WithBackoff($Uri, $MaxRetries = 5) {
  for ($i = 0; $i -lt $MaxRetries; $i++) {
    try {
      return Invoke-RestMethod -Uri $Uri -Headers @{ "X-API-Key" = $env:EXORDE_API_KEY }
    } catch {
      if ($_.Exception.Response.StatusCode.value__ -ne 429) { throw }
      # Honour Retry-After; fall back to 1 second if the header is absent.
      $retry = [int]($_.Exception.Response.Headers["Retry-After"])
      if ($retry -lt 1) { $retry = 1 }
      # Sub-second jitter; an integer Get-Random -Maximum 1 would always be 0.
      Start-Sleep -Seconds ($retry + (Get-Random -Minimum 0.0 -Maximum 1.0))
    }
  }
  throw "Rate-limit retries exhausted"
}

Planning your request budget

The following heuristics work well in practice.
  • Polling dashboards: for Watch-tier (30 rpm) poll each topic no faster than every 10 seconds. For See-tier (120 rpm) every 2 seconds is fine.
  • Batch pipelines: serialise requests rather than parallelising them at Watch tier. At See and Know you can safely run two to six parallel workers.
  • Evidence drill-downs: post fetches are the slowest endpoints (500–1000 ms). Stagger them.
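The polling heuristic above follows directly from the per-minute ceiling: polling N topics every T seconds consumes 60 × N / T requests per minute. A hypothetical helper (not part of any client library) makes the arithmetic explicit:

```python
def min_poll_interval(topics: int, rpm: int, headroom: float = 1.0) -> float:
    """Smallest per-topic polling interval (seconds) that keeps the
    total request rate at or below headroom * rpm requests per minute.

    headroom < 1.0 reserves part of the budget for other traffic.
    """
    return 60.0 * topics / (rpm * headroom)
```

Five topics at Watch tier (30 rpm) give the 10-second interval quoted above; four topics at See tier (120 rpm) give 2 seconds.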
Rate limits are distinct from the quotas below; each is enforced separately.
  • Alert subscriptions — cap on the number of active webhooks per key.
  • Custom watchlists — cap on the number of custom scopes per client.
  • Evidence lookback — maximum age of posts returned by evidence endpoints.
See Tiers and Quotas for exact values per tier.