Dataverse virtual tables on SQL: three latency patterns

Virtual tables surface external data inside Dataverse without copying it. The user experience matches any other table. The performance does not - and the gap is what decides whether virtual tables fit your use case.

Virtual tables let Dataverse surface data that lives in another system - SQL Server, Cosmos DB, a REST API - without copying it into Dataverse. The user-facing experience is the same as any table: views, forms, lookups, relationships. The underlying retrieval goes out over the wire to the external source on every read.

On paper, the appeal is obvious: no sync, no duplication, always current. In practice, the latency characteristics are what decide whether virtual tables fit the use case. Here are the three patterns we benchmark on every project that proposes virtual tables, and the rules we apply.

What virtual tables are (and are not)

A virtual table in Dataverse is defined by:

  • A virtual entity (the table schema)
  • A virtual entity provider (the bridge between Dataverse and the external source)
  • Metadata mapping (column names and types from the external source to the Dataverse view)

When a user opens a view of the virtual table, Dataverse calls the provider. The provider queries the external source. Results come back and are rendered.

Crucially:

  • No data is persisted in Dataverse. The virtual table has zero rows stored.
  • Relationships to native tables are limited. You can have a lookup from a native table to a virtual table; you cannot have rollups or complex cross-table queries involving virtual tables.
  • No audit, no field-level security, no duplicate detection. Features that rely on storing data in Dataverse do not apply.
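The retrieval flow above can be sketched in code. Real Dataverse virtual entity providers are .NET plugins that handle Retrieve and RetrieveMultiple messages; this is a language-neutral sketch, and every name in it (the class, the `orders` table, the connection methods) is illustrative, not a real API.

```python
from dataclasses import dataclass

@dataclass
class ColumnMapping:
    external_name: str   # column name in the external source
    dataverse_name: str  # logical name exposed through the Dataverse view

class VirtualTableProvider:
    """Illustrative provider: every read goes to the external source."""

    def __init__(self, connection, mappings: list[ColumnMapping]):
        self.connection = connection
        self.mappings = mappings

    def retrieve(self, primary_key):
        """Single-row fetch: backs the detail form."""
        row = self.connection.query_one("orders", primary_key)
        return self._map(row)

    def retrieve_multiple(self, filter_expr, top=50):
        """Filtered list fetch: backs the view. Nothing is persisted in Dataverse."""
        rows = self.connection.query("orders", filter_expr, top)
        return [self._map(r) for r in rows]

    def _map(self, row):
        # Metadata mapping: external column names -> Dataverse logical names
        return {m.dataverse_name: row[m.external_name] for m in self.mappings}
```

The point of the sketch is the shape of the contract: Dataverse never sees rows except through these two read paths, which is why every latency number in the patterns below is paid on every read.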

The benchmark we run

Before committing to a virtual table, we run a three-scenario benchmark:

  1. Single-row lookup: fetch one row by primary key. "Open the detail form."
  2. List view: fetch 50 rows with a filter. "Load the default view of active orders."
  3. Integration write + read-back: a native-table row gets created with a lookup to the virtual table; the form immediately displays joined data.

We measure end-to-end latency from the Power Platform host making the call to the response being renderable. Targets:

  • Single-row lookup: under 500ms
  • List view: under 1200ms
  • Write + read-back: under 2 seconds
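The harness itself is simple; a minimal sketch follows. `fetch` stands in for whatever issues the actual call against your environment (the callable is an assumption of this sketch, not a Dataverse API), and the budgets are the targets listed above.

```python
import time
import statistics

# Latency budgets from the three-scenario benchmark, in milliseconds.
BUDGETS_MS = {
    "single_row": 500,
    "list_view": 1200,
    "write_readback": 2000,
}

def benchmark(scenario, fetch, runs=20):
    """Return (median end-to-end latency in ms, whether it meets budget).

    `fetch` should return only once the response is renderable, so the
    timing covers the full host-to-render path described above.
    """
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fetch()
        samples.append((time.perf_counter() - start) * 1000)
    median = statistics.median(samples)
    return median, median <= BUDGETS_MS[scenario]
```

We use the median rather than the mean because a single cold-start or contention spike should not decide the verdict; run the benchmark at peak hours too, since Pattern 2's results degrade most under load.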

Pattern 1: SQL Server behind the virtual table, local-region

Typical benchmark results for a SQL Server database hosted in the same Azure region as the Dataverse environment, under normal load:

  • Single-row lookup: 150-300ms (good)
  • List view: 400-900ms (good)
  • Write + read-back: 800-1500ms (marginal but workable)

The dominant cost is the network round trip plus SQL query execution. Co-located SQL performs acceptably for most scenarios.

When this pattern works: order history lookups, reference data where the source of truth is a warehouse database, reporting-like views where the data is massive and duplicating it is impractical.

Pattern 2: REST API behind the virtual table, internet-routed

Typical results for an API hosted on the client's infrastructure or a third-party service, not co-located with the Dataverse environment:

  • Single-row lookup: 800-1500ms (marginal)
  • List view: 2000-5000ms (poor)
  • Write + read-back: 3000-6000ms (fails the 2-second budget)

The API call adds 300-800ms baseline latency per call. Lists of 50 rows that require multiple backend queries compound quickly.
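The compounding is easy to model. A back-of-envelope sketch, using the baseline figures above; the call counts and the per-call query time are illustrative assumptions:

```python
def view_latency_ms(base_latency_ms, backend_calls, per_call_query_ms=50):
    """Sequential backend calls: each pays the network baseline plus query time."""
    return backend_calls * (base_latency_ms + per_call_query_ms)

# One round trip at the low end of the 300-800ms baseline:
form_lookup = view_latency_ms(300, 1)   # workable for a single form lookup

# A view whose provider issues 5 sequential backend queries at the high end:
view_load = view_latency_ms(800, 5)     # far past the 1200ms list-view budget
```

With those assumptions, the single lookup lands around 350ms while the five-call view load lands around 4250ms, which is exactly the 2000-5000ms band we observe in practice.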

When this pattern fails: anything user-facing that sees more than 50 rows at a time. Form-by-form lookup works; view loading does not.

When this pattern can work: low-volume scenarios where users tolerate loading spinners, reference data that changes rarely and can be cached aggressively.

Pattern 3: virtual table with provider-side caching

The clever play: between Dataverse and the external source, insert a cache (Redis, Cosmos DB with TTL, or SQL Server acting as a cache table). The virtual entity provider reads from the cache; a separate process keeps the cache fresh from the real source.

  • Single-row lookup: 80-150ms (excellent)
  • List view: 200-500ms (excellent)
  • Write + read-back: 400-800ms (good)

Trade-off: the cache introduces staleness. A row changed in the backing source may not appear in Dataverse for seconds-to-minutes, depending on the cache refresh strategy.
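The staleness trade-off falls directly out of the read path. A minimal sketch of the provider-side cache, assuming a TTL-based invalidation strategy; a real deployment would use Redis or Cosmos DB rather than an in-process dict, and all names here are illustrative:

```python
import time

class TTLCache:
    """Toy TTL cache standing in for Redis / Cosmos DB with TTL."""

    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[1] > time.monotonic():
            return entry[0]
        return None  # missing or expired

    def put(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

def cached_retrieve(cache, source_query, key):
    """Provider read path: cache hit is the fast 80-150ms case."""
    value = cache.get(key)
    if value is None:
        value = source_query(key)  # the slow trip to the real source
        cache.put(key, value)      # fresh for up to `ttl` seconds
    return value
```

The TTL is the staleness bound: a row changed in the backing source keeps serving its old value for up to `ttl_seconds` after the last cache fill, which is where the seconds-to-minutes window comes from.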

When this pattern works: high-volume read, low-volume write, users tolerate brief staleness. Most reporting and reference data scenarios.

When this pattern complicates: the cache is now a separate system to monitor, invalidate, and age. Added operational surface.

The three questions before choosing virtual tables

1. How much data is there?

Under 100,000 rows and growing slowly: just copy it into a native Dataverse table via a dataflow. You get all the platform features and no performance concerns.

Over 10 million rows, or rapidly growing: virtual table starts to look attractive. Storing 10M rows in Dataverse costs capacity and money.

Between those: judgment call based on the other two questions.

2. How often does the data change?

Changes rarely (reference data, historical records): a native table with periodic sync via dataflow is fine. Virtual table gives you no advantage.

Changes frequently (live operational data, real-time trading, IoT telemetry): virtual table can be appropriate if the source can absorb the read load; otherwise, a sync + cache pattern is needed.

3. How do users interact with it?

Browse and drill-down (list views, detail forms): virtual table works if latency is acceptable per Pattern 1 or 3.

Bulk operations (mass edits, imports, reporting queries across millions of rows): virtual table is not suitable. The external source would be hammered; Dataverse would time out.

Lookup field targets (native tables referencing the virtual table's rows): works, but rollups and calculated columns cannot traverse the relationship.

The bake-off we did last year

A client wanted their 15-million-row order history from SQL Server accessible in the Dynamics Sales UI. Three options evaluated:

  • Copy all 15M rows into Dataverse: $$$ on Dataverse capacity, monthly sync refreshes, stale data for up to 24 hours.
  • Virtual table, direct to SQL: real-time data, no storage cost, per-request latency of 400-900ms per view.
  • Virtual table with Cosmos DB cache: near-real-time data (30-second cache TTL), fast reads (150-300ms), added operational complexity.

We benchmarked all three in a UAT environment clone. Option 1's cost was $12,000/year in Dataverse storage alone. Option 2's view load time was borderline at peak hours (SQL contention). Option 3 won on price and performance.
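For context on option 1's figure, here is one way a number of that order decomposes. The average row size and the per-GB rate are assumptions for illustration, not the client's actual contract terms:

```python
# Assumed inputs - illustrative, not the client's real numbers.
ROWS = 15_000_000
AVG_ROW_BYTES = 2 * 1024      # assumed ~2KB average row incl. index overhead
PRICE_PER_GB_MONTH = 40.0     # assumed Dataverse database capacity add-on rate, USD

gb = ROWS * AVG_ROW_BYTES / (1024 ** 3)     # roughly 29 GB
annual_cost = gb * PRICE_PER_GB_MONTH * 12  # on the order of $13-14K/year
```

Under those assumptions the copy lands in the low five figures per year, the same ballpark as the $12,000 we measured; the exact figure depends on actual row width and negotiated capacity pricing.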

The one caveat: option 3 requires operational investment to keep the cache healthy. We built a monitoring dashboard and a cache-warm routine on top of it. The client's internal team took this on after handoff; it has been stable for nine months.

When we don't recommend virtual tables

Three scenarios where we steer the client away from virtual tables entirely:

  • Heavy write scenarios. Virtual table providers have to implement Create/Update/Delete, and debugging provider-side transaction semantics is an order of magnitude harder than debugging normal CRUD.
  • Feature requirements that need Dataverse native behavior. Field-level security, audit, alerts, duplicate detection, business process flows - none of these apply to virtual tables.
  • Low-volume reference data. Under 5000 rows, just copy it. The engineering time saved is worth the capacity cost.

Virtual tables are a powerful tool for specific scale and data-freshness requirements. They are the wrong first answer for most "we have existing data in another system" cases. Copy the data unless there is a specific reason not to.
