Plugin logging in Dataverse: ILogger + Application Insights

Dataverse plugins have two logging destinations: the built-in PluginTraceLog table and Azure Application Insights via the ILogger interface. Neither on its own is enough in production. The combination, wired correctly, is what lets us debug plugin failures in minutes instead of hours.

A plugin fails in production. The user sees a generic "Something went wrong" message. Your first investigative step is to find out what the plugin was actually doing when it failed - which row, which user, what input, what exception. Good logging turns this from a multi-hour dig into a two-minute query. Bad logging turns it into "please reproduce."

Dataverse gives you two logging channels for plugins. Most projects use one poorly; the few that use both well rarely end up debugging in production. Here is the pattern we ship.

The two channels

PluginTraceLog: a Dataverse table that stores trace output from plugins. Entries are rows in the plugintracelog table. You write to it via ITracingService.Trace(...). Entries are visible in the Power Platform admin center under "Plug-in Trace Log" or via Web API queries.
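As a hedged illustration, a Web API query for the most recent entries might look like this (the org URL is a placeholder; column names are from the standard plugintracelog table):

```http
GET https://yourorg.crm.dynamics.com/api/data/v9.2/plugintracelogs?$select=typename,messageblock,exceptiondetails,createdon&$orderby=createdon desc&$top=10
```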

Good at: quick inline debugging during development, per-execution context capture.

Limited by: 24-hour default retention (configurable up to 30 days), 10MB max size per entry, difficult to query across many executions, no cross-plugin correlation.

ILogger / Application Insights: Dataverse exposes an ILogger interface to plugins (Microsoft.Xrm.Sdk.PluginTelemetry, modeled on the standard .NET logging abstraction). Logs flow to an Azure Application Insights instance you configure per environment. Entries are log records with structured properties.

Good at: long-term retention, full KQL querying, cross-service correlation, dashboards, alerts.

Limited by: setup overhead (Application Insights instance, plugin configuration), cost scales with volume, some latency between log emit and visibility in Application Insights.

Why we use both

Trace log for inline, per-execution debugging: the developer who wrote the plugin writes trace entries at key decision points. When the plugin fails, the trace log shows exactly what path it took.

Application Insights for operational observability: the ops team watches an Application Insights dashboard. Alerts fire on error rates. Retention is 90 days or more. Queries span every plugin in the tenant.

Neither covers the other. The combination covers both needs.

The pattern we use

Every plugin inherits from a base class that sets up both channels:
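A sketch of such a base class, assuming the standard Microsoft.Xrm.Sdk types; treat this as an outline of the pattern, not the exact implementation:

```csharp
using System;
using Microsoft.Xrm.Sdk;
using ILogger = Microsoft.Xrm.Sdk.PluginTelemetry.ILogger;

public abstract class LoggingPluginBase : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        // Resolve both logging channels plus the execution context.
        var tracing = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
        var logger = (ILogger)serviceProvider.GetService(typeof(ILogger));
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        var correlationId = context.CorrelationId;
        var pluginName = GetType().Name;

        // Consistent start/end/failure entries, tied together by the correlation ID.
        tracing.Trace("{0} start, correlation {1}", pluginName, correlationId);
        logger.LogInformation("Plugin {pluginName} start, correlation {correlationId}",
            pluginName, correlationId);
        try
        {
            ExecuteCore(serviceProvider, context, tracing, logger);
            tracing.Trace("{0} end", pluginName);
            logger.LogInformation("Plugin {pluginName} end, correlation {correlationId}",
                pluginName, correlationId);
        }
        catch (Exception ex)
        {
            // Failure path: full exception to both channels, then rethrow
            // so Dataverse surfaces the error and rolls back the transaction.
            tracing.Trace("{0} FAILED: {1}", pluginName, ex);
            logger.LogError(ex, "Plugin {pluginName} failed, correlation {correlationId}",
                pluginName, correlationId);
            throw;
        }
    }

    protected abstract void ExecuteCore(
        IServiceProvider serviceProvider,
        IPluginExecutionContext context,
        ITracingService tracing,
        ILogger logger);
}
```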

Each concrete plugin implements ExecuteCore and uses both tracing and logger for its own messages. The base class ensures consistent start/end/failure logging with the correlation ID that ties everything together.

What to log

Our guidance to the team:

Always log:

  • Plugin start and end with correlation ID
  • Exception details with full stack trace
  • The primary ID of the target record

Log at Info level:

  • Key decision points ("customer is premium, applying discount")
  • External API calls with request summary and response summary
  • Non-error but unusual conditions ("record has no account, skipping")

Log at Debug level (off by default, on when troubleshooting):

  • Field values being read or modified
  • Intermediate calculation results
  • Loop iteration details

Never log:

  • Passwords, API keys, or other secrets
  • Full row contents (can contain PII; use only the IDs)
  • Values larger than a few hundred characters (truncate or summarize)

The PII point is worth calling out. A plugin that logs the full Account row to the trace log exposes customer names, emails, and phone numbers in the plugintracelog table, which Dataverse administrators can read. Always log IDs plus the specific field you care about, never whole entities.
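To make the rule concrete, a hedged C# illustration (the field and variable names here are made up):

```csharp
// Bad: dumps the entire row, including any PII fields, into the trace log.
tracing.Trace("Account: {0}", string.Join(", ", account.Attributes));

// Good: the row ID plus the one field the logic actually depends on.
tracing.Trace("Account {0}: creditlimit missing={1}",
    account.Id, !account.Contains("creditlimit"));
```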

Application Insights configuration

Configure Application Insights at the environment level via Power Platform admin center: Environment → Settings → Resources → Application Insights → Configure. Point at an Application Insights resource in Azure.

Once configured, ILogger in plugins writes to that Application Insights instance automatically. No per-plugin connection string needed.

Each environment should point at its own Application Insights instance - Dev logs to Dev's AI, Prod to Prod's. This keeps the log stream clean per environment and lets you set different retention and alerting policies.

The KQL queries we run

Find all errors in the last hour:
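A KQL sketch of this query, assuming plugin ILogger output lands in the standard traces table (severityLevel 3 corresponds to Error):

```kusto
traces
| where timestamp > ago(1h)
| where severityLevel >= 3                      // Error and above
| project timestamp, message, operation_Id, customDimensions
| order by timestamp desc
```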

Trace a specific plugin execution end-to-end:
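One way to sketch this, assuming the Dataverse correlation ID flows into operation_Id (check how your integration maps it):

```kusto
union traces, exceptions, dependencies
| where operation_Id == "<paste-correlation-id>"
| order by timestamp asc
| project timestamp, itemType,
          message = coalesce(message, outerMessage, name)
```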

Error rate per plugin in the last 24 hours:
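A sketch, assuming the integration exports plugin executions to the dependencies table with the plugin class name as the dependency name (the type filter is an assumption worth verifying against your own telemetry):

```kusto
dependencies
| where timestamp > ago(24h)
| where type == "Plugin"                        // assumed type value
| summarize total = count(), failures = countif(success == false) by name
| extend errorRatePct = round(100.0 * failures / total, 2)
| order by errorRatePct desc
```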

These queries live in a saved query library in Application Insights. When an alert fires, the on-call engineer opens the relevant saved query, pastes in the correlation ID or time range, and sees the full picture.

Alerts that matter

Alert types we set up on Application Insights for plugin health:

  • Error rate > 1% on any plugin in 15 minutes → page on-call
  • Specific exception types (TimeoutException, SqlException) → notify team channel
  • Dataverse throttling exceptions → pause any flows that might be driving the load
  • First-time error in a new plugin → notify the developer who wrote it

We resist the urge to alert on everything. Alerts that fire too often get ignored; alerts that fire rarely get attention. Five well-targeted alerts per environment is our usual set.

Sampling for cost

At high plugin volumes, Application Insights ingestion cost becomes meaningful. We sample at the ILogger level:

  • 100% of Error and above: every failure is logged fully.
  • 25% of Information: representative sample for baseline metrics.
  • 5% of Debug: just enough for occasional troubleshooting.

PluginTraceLog stays at 100% for the 24-hour retention window: it serves immediate per-execution debugging, and its cost is negligible.

What we disable in Prod

PluginTraceLog verbosity is configurable per environment. We keep it at "Exception only" in Prod unless actively troubleshooting. When a specific issue is being investigated, an engineer temporarily raises it to "All" for an hour, reproduces, captures logs, drops it back.

"All" in Prod for extended periods produces many MB per minute of trace data; the table fills to its size cap and older entries get trimmed. Staying in "Exception only" as the default preserves the data that matters most.

The incident we avoided

A client's plugin started failing intermittently for about 1% of requests. With our pattern, the on-call engineer opened the saved "Error rate per plugin" query, found the specific plugin, filtered to its traces, found the exception trail, traced it to a specific Dataverse field that sometimes contained unexpected null values. The fix was a 5-line null check.

Total time from alert to fix: 22 minutes.

Without the combined tracing + Application Insights setup, the same investigation on the same issue would have taken hours - scraping PluginTraceLog entries individually, trying to correlate across executions, guessing at patterns.

Good logging is a constraint that feels like friction when writing plugins and feels like magic when debugging them. The base class above is thirty lines of code that pays for itself on the first production issue.
