Integration teams building against F&O often hit the same pattern: their Python or Node script pulls 5,000 records, processes them, mails a summary to finance. Runs green for weeks. One morning the summary shows data from two days ago and the team doesn't notice until a finance lead asks why.
The script "succeeded" - no error, right record count. The data was stale. OData is a standard; F&O's implementation mostly follows it, with a handful of platform-specific quirks that bite integrations quietly.
Authentication flow
F&O OData uses Azure AD OAuth2 client credentials: register an app in Azure AD, grant it the AXSHAApplication permission, configure it as a delegated or application user in F&O, then request a bearer token from https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token.
The scope for F&O is the resource URL itself: https://yourorg.operations.dynamics.com/.default.
Tokens last 60 minutes. For long-running jobs, refresh proactively at the 50-minute mark rather than waiting for a 401.
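The token request and the proactive-refresh rule can be sketched as below. The endpoint and scope are the ones from the text; the helper names (needs_refresh, token_request_body) and the 10-minute margin constant are illustrative, not an SDK API - a real script would POST the body with requests or msal.

```python
# Placeholder endpoint/scope from the article; substitute your tenant and org.
TOKEN_URL = "https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"
SCOPE = "https://yourorg.operations.dynamics.com/.default"

REFRESH_MARGIN_S = 10 * 60  # refresh at the 50-minute mark of a 60-minute token

def needs_refresh(acquired_at: float, now: float, lifetime_s: int = 3600) -> bool:
    """True once the token is within REFRESH_MARGIN_S of expiring."""
    return now - acquired_at >= lifetime_s - REFRESH_MARGIN_S

def token_request_body(client_id: str, client_secret: str) -> dict:
    """Form fields for the client-credentials POST to TOKEN_URL."""
    return {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": SCOPE,
    }
```

Checking needs_refresh before every page fetch keeps a long-running pull from dying mid-job on a 401.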
The company-context gotcha
F&O is multi-company. OData calls without an explicit company context query the default legal entity. Any integration targeting "all customers" without specifying company silently filters to the user's default LE.
Two ways to query across companies:
- Per-request company header: add Company: USMF to the HTTP request headers. The header scopes only that request, so send it on every call (or set it as a default header on your HTTP session).
- Cross-company query: add ?cross-company=true to the URL. F&O returns records from all companies the user can access, with a DataAreaId field showing which company each record comes from.
Integrations pulling master data across a multi-entity tenant without ?cross-company=true return incomplete datasets - and the incompleteness is invisible because the query still succeeds.
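A minimal URL builder makes the choice explicit at every call site. BASE is a placeholder environment URL and the entity name in the usage note is an example, not a verified schema:

```python
BASE = "https://yourorg.operations.dynamics.com/data"  # placeholder org URL

def entity_url(entity, cross_company=False, odata_filter=None):
    """Build an OData URL, opting into cross-company=true when the pull
    must span legal entities (master data, consolidated reporting)."""
    params = []
    if cross_company:
        params.append("cross-company=true")  # all LEs the caller can read
    if odata_filter:
        params.append("$filter=" + odata_filter)
    return f"{BASE}/{entity}" + ("?" + "&".join(params) if params else "")
```

With cross_company left at its default of False, the query silently runs against the caller's default LE - which is exactly the failure mode described above, so make the flag a required, reviewed decision per integration.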
$filter syntax
OData $filter uses its own expression language. F&O's implementation mostly follows the spec with a few quirks:
Standard: string and numeric comparisons follow the spec - $filter=CustomerGroup eq '10', $filter=CreditLimit gt 5000, combinable with and/or/not.
F&O-specific: enum values must be written with the fully qualified enum type, e.g. $filter=OnHoldStatus eq Microsoft.Dynamics.DataEntities.CustVendorBlocked'No' - a bare 'No' or 0 is rejected.
Getting these wrong returns a 400 with a barely-useful error message. Every integration that talks to F&O should carry a cheat-sheet of enum paths for the entities it queries.
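Small builder functions keep the quirks in one place. The Microsoft.Dynamics.DataEntities namespace is F&O's enum namespace; the field and enum names in the usage note are examples, and the helper names are hypothetical:

```python
ENUM_NS = "Microsoft.Dynamics.DataEntities"  # F&O's enum type namespace

def eq_string(field: str, value: str) -> str:
    """Standard OData string comparison; quotes escape by doubling."""
    escaped = value.replace("'", "''")
    return f"{field} eq '{escaped}'"

def eq_enum(field: str, enum_type: str, member: str) -> str:
    """F&O quirk: enum literals need the fully qualified enum type."""
    return f"{field} eq {ENUM_NS}.{enum_type}'{member}'"
```

Centralizing this means a wrong enum path fails in one reviewed function rather than in a dozen hand-built query strings.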
Paging with $skiptoken
F&O paginates at 10,000 records per page by default. Beyond that, @odata.nextLink returns a $skiptoken for the next page. Standard OData.
The trap: querying without explicit ordering can produce overlapping or missing records across pages if the query plan changes between calls.
Always $orderby an immutable field when paginating. RecId is the safest choice - monotonic, unique, never modified.
Follow @odata.nextLink verbatim for subsequent pages rather than reconstructing the URL.
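The paging loop reduces to a few lines. fetch_json here stands in for an authenticated GET that parses the JSON body (requests or httpx in a real script); the function itself is library-agnostic:

```python
def fetch_all(first_url, fetch_json):
    """Collect every page, following @odata.nextLink verbatim.
    first_url should already include $orderby=RecId for stable paging."""
    records = []
    url = first_url
    while url:
        page = fetch_json(url)                # parsed OData response dict
        records.extend(page["value"])
        url = page.get("@odata.nextLink")     # absent on the final page
    return records
```

Because the loop trusts @odata.nextLink verbatim, it keeps working even if the server changes its page size or token format.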
For very large pulls (hundreds of thousands of records), move to DMF export packages + Azure Blob rather than OData. OData is transactional-level, not bulk-level.
Change tracking for recurring integration
For integrations running daily and pulling deltas, enable Change Tracking on the entity (Enable Track Changes = Yes) and use ?$deltatoken:
First call: GET /data/Customers - returns the full record set plus an @odata.deltaLink containing a $deltatoken.
Next call: GET /data/Customers?$deltatoken={token} - returns only records changed since the last call.
Tokens are opaque strings. Store them per-integration and per-entity. Don't assume portability across environments.
The silent-failure pattern
Going back to the integration that returned stale data: the script queried customer balances without a company context (returned only US records), without $orderby=RecId (page boundaries shifted between calls), and without change tracking (pulled everything daily, sometimes with page overlap).
The fix was four lines: add Company header, add $orderby=RecId, paginate via nextLink, log record counts per page. Every integration that talks to F&O OData benefits from the same checklist.
Error handling for production
Production-grade F&O OData integrations handle:
- Retry on 429 (throttling) - F&O throttles OData traffic; limits vary by environment and request priority. Honor the Retry-After header when present, otherwise back off exponentially.
- Retry on 503/504 - transient, retry 3x with backoff.
- No retry on 400/401/403 - logic error, alert humans.
- Optimistic concurrency on updates - send the record's ETag in an If-Match header on PATCH; a 412 means the record changed since you read it. POST responses return the new record's URL in OData-EntityId.
- Correlation ID logging - send a GUID with every request (e.g. in a Request-Id header) and log it; matching the same GUID in F&O's telemetry makes cross-system debugging possible.
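The retry classification in the checklist above can be sketched as below. The status sets and attempt limit are the article's; the helper names and full-jitter backoff formula are illustrative choices, not an official SDK policy:

```python
import random

RETRYABLE = {429, 503, 504}   # throttling and transient gateway errors
FATAL = {400, 401, 403}       # logic/auth errors: alert humans, don't retry

def should_retry(status, attempt, max_attempts=3):
    """Retry only transient statuses, up to max_attempts tries."""
    return status in RETRYABLE and attempt < max_attempts

def backoff_s(attempt, base=1.0, cap=60.0):
    """Exponential backoff with full jitter; prefer Retry-After when sent."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))
```

Full jitter (uniform over the window rather than the fixed doubling) spreads retries out, which matters precisely when F&O is throttling everyone at once.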
When to move off OData
OData is fine for transactional integrations at modest volumes (tens of thousands per day). Above that:
- Bulk exports: DMF + Azure Blob + consumer-driven pull
- Real-time events: Business Events + Azure Service Bus, not polling OData
- Master data sync: dual-write to Dataverse, not OData in both directions
Volume, latency, and directionality drive the choice. OData is the default, not the universal answer.