Pagination 📄

Every index endpoint in the Cardda API uses offset-based pagination controlled by query parameters.

Query parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| _start | integer | 0 | Zero-based offset of the first record to return. |
| _end | integer | _start + 25 | Exclusive upper bound. The server returns records [_start, _end). |
| _order | asc / desc | desc | Sort direction. Most endpoints expect lowercase; a handful (InternalRevenueServiceAPI and a couple of legacy BankingAPI resources) accept uppercase ASC / DESC instead. |
| _field | string | created_at | Field to sort by. Most endpoints accept created_at, updated_at, and the resource's "natural" fields (e.g. amount on transactions). |

Maximum page size. _end - _start is capped at 100 server-side. Requesting a wider window silently truncates to 100.
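Because the cap is applied silently, it can help to clamp the window client-side so the truncation is explicit in your own code. A minimal sketch (page_params is a hypothetical helper, not part of any SDK):

```python
MAX_PAGE = 100  # server-side cap on _end - _start

def page_params(start, want):
    """Build _start/_end query params, clamping the window to the server cap."""
    size = min(want, MAX_PAGE)
    return {"_start": start, "_end": start + size}

page_params(0, 250)   # -> {'_start': 0, '_end': 100}, not 250
page_params(100, 50)  # -> {'_start': 100, '_end': 150}
```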

Response headers

Each paginated response includes:

| Header | Example | Meaning |
| --- | --- | --- |
| X-Total-Count | 1234 | Total number of records that match the query (across all pages). |
| Content-Range | items 0-24/1234 | Standard Content-Range header for offset/total. |
| Access-Control-Expose-Headers | Content-Range, X-Total-Count | Lets browser clients read the two headers above. |

The body is always an array of resource objects (no data envelope on index endpoints — the envelope is only used on resources where pagination cursors live in the body, like GET /v1/companies).
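The headers above are enough to drive a progress bar or compute how many pages remain. A minimal parsing sketch, assuming the header format shown in the table (parse_content_range is an illustrative helper, not part of any SDK):

```python
def parse_content_range(value):
    """Parse a Content-Range like 'items 0-24/1234' into (start, end, total).

    The header's upper index is inclusive; the matching _end query
    parameter would be end + 1.
    """
    _unit, _, rest = value.partition(" ")
    range_part, _, total = rest.partition("/")
    start, _, end = range_part.partition("-")
    return int(start), int(end), int(total)

start, end, total = parse_content_range("items 0-24/1234")
# start == 0, end == 24 (inclusive), total == 1234
```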

Example

```shell
curl -i 'https://api.cardda.com/v1/banking/bank_transactions?_start=0&_end=25&_order=desc&_field=created_at' \
  -H 'Authorization: Bearer YOUR_API_KEY' \
  -H 'company-id: 550e8400-e29b-41d4-a716-446655440000'
```

```
HTTP/1.1 200 OK
X-Total-Count: 1234
Content-Range: items 0-24/1234

[
  { "id": "...", "amount": 12500, "status": "authorized", ... },
  { "id": "...", ... },
  ...
]
```

Iterating through every page

```js
async function* listAllBankTransactions({ apiKey, companyId, query = "" }) {
  const PAGE = 100; // server-side maximum window (_end - _start)
  let start = 0;
  while (true) {
    const url = new URL("https://api.cardda.com/v1/banking/bank_transactions");
    url.search = query; // preserve any caller-supplied filters
    url.searchParams.set("_start", start);
    url.searchParams.set("_end", start + PAGE);
    const res = await fetch(url, {
      headers: { Authorization: `Bearer ${apiKey}`, "company-id": companyId },
    });
    if (res.status === 429) {
      // Rate limited: honor Retry-After, then retry the same window.
      const wait = Number(res.headers.get("Retry-After") ?? "1");
      await new Promise(r => setTimeout(r, wait * 1000));
      continue;
    }
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    const items = await res.json();
    if (items.length === 0) return;
    yield* items;
    if (items.length < PAGE) return; // short page means last page
    start += PAGE;
  }
}
```

```js
for await (const tx of listAllBankTransactions({ apiKey, companyId })) {
  console.log(tx.id, tx.amount);
}
```
The same traversal in Python, using requests:

```python
import requests, time

def list_all_bank_transactions(api_key, company_id, query=None):
    page = 100  # server-side maximum window (_end - _start)
    start = 0
    base = "https://api.cardda.com/v1/banking/bank_transactions"
    headers = {"Authorization": f"Bearer {api_key}", "company-id": company_id}
    while True:
        params = dict(query or {})  # preserve any caller-supplied filters
        params["_start"] = start
        params["_end"] = start + page
        r = requests.get(base, headers=headers, params=params, timeout=30)
        if r.status_code == 429:
            # Rate limited: honor Retry-After, then retry the same window.
            time.sleep(int(r.headers.get("Retry-After", "1")))
            continue
        r.raise_for_status()
        items = r.json()
        if not items:
            return
        yield from items
        if len(items) < page:
            return  # short page means last page
        start += page
```

Best practices

  • Always sort explicitly. Without _order and _field the results are still deterministic, but queries can be slower. Sort by created_at desc for "latest first" feeds.
  • Filter before paginating. Combine pagination with filters — e.g. ?status[$in]=["pending","authorized"]&created_at[$gte]=2026-01-01 — to keep X-Total-Count small.
  • Use the maximum page size for backfills. Request windows of _end - _start = 100 in migration jobs to minimize round trips.
  • Use a cursor for live feeds. For continuous listening, query for created_at[$gt]=<last_seen>&_order=asc rather than offset-based pagination.
  • Respect rate limits. See Rate limits for the policy and back-off recommendations.
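The cursor-style polling suggested above reduces to building the right query for each poll. A minimal sketch, assuming the filter syntax shown in the Filters bullet (live_feed_params is a hypothetical helper, not part of any SDK):

```python
def live_feed_params(last_seen_created_at):
    """Query params for one cursor-style poll: records created strictly
    after the last one seen, oldest first, one full page at a time."""
    return {
        "created_at[$gt]": last_seen_created_at,
        "_order": "asc",
        "_field": "created_at",
        "_start": 0,
        "_end": 100,  # maximum window per the cap above
    }

# After processing a page, set last_seen_created_at to the newest
# created_at you received and poll again.
```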

Related

  • Filters — narrow the result set with MongoDB-style operators.
  • Rate limits — back off when paginating large windows.