
Multi-environment ALM for Dataverse: Dev to Production pipeline

Most Power Platform projects fail ALM not because pipelines are hard, but because teams pick three environments when they need four - and skip the guardrails that keep managed solutions from biting them at 11pm on a Friday. Here's the topology, pipeline and checklist we run in production.

[Image: industrial pipes as a metaphor for an ALM deployment pipeline]

Every Power Platform engagement we inherit eventually reveals the same ALM story. Three environments - Dev, Test, Prod. One pipeline (or no pipeline). A shared Dev environment where five makers are pushing changes to the same solution. Connection references that were filled in manually on the first deploy and forgotten since. A managed solution that someone couldn't import because of a layering conflict nobody can explain. Fire drill on a Friday night, every other release.

The painful part isn't any single decision. It's that three environments plus manual promotion plus loose solution hygiene compounds. Each corner you cut adds one more thing that can drift. After three months, you're spending more hours firefighting than shipping features.

This is the topology, pipeline and set of guardrails we now ship on every Dataverse engagement where more than one developer touches the solution. It is boring on purpose. Boring ALM is how you get boring deploys - which is exactly what customers pay for.

Our four-environment topology

A three-environment setup conflates two jobs - build and accept. When your Test environment also serves as the place your client signs off on a release, every QA test reset wipes the client's acceptance run. When your Dev environment is shared, every developer's half-finished experiment is one solution export away from leaking into Test.

Four environments cost more in license and setup, but each has a single, clear job.

DEV is where developers and makers work. Two valid shapes:

  • Per-developer sandbox: one DEV environment per developer. Safer, more expensive in licensing. We use this when there are more than two developers or when they work on different solutions in parallel.
  • Shared Developer environment: one DEV for the whole team. Cheaper, workable with two or fewer developers who coordinate well and work on different parts of the solution.

TEST is where the unpacked solution XML from main gets built into a managed solution and auto-deployed after every merge. No human ever clicks anything in Test - if a configuration exists there, it came from the pipeline. QA runs integration tests here. Broken deploys fail here, not in UAT.

UAT is the client's environment. It receives a managed solution build only when a release tag is cut. It stays stable for the client to run acceptance tests. This is also the environment where we grant read-only access to stakeholders who want to "see the feature."

PROD is customer-facing. A deploy to PROD requires an approved release tag, a manual approval gate in the pipeline, and a rollback plan documented in the release ticket.

We have seen teams argue for five environments (adding a "Staging" between UAT and PROD that mirrors PROD topology exactly). In our experience, the marginal value is small unless you run heavy pre-production load tests. For most line-of-business applications, the fifth environment pays for a license that never catches a real issue.

Solution strategy: managed, unmanaged, or both

The rule we run: DEV and TEST hold unmanaged solutions. UAT and PROD hold managed solutions only. Managed solutions enforce layering and give you a clean uninstall path if a release goes wrong. Unmanaged solutions give developers the flexibility to refactor without the layering getting in the way.

On publisher prefix, the one-time decision that you cannot easily undo: pick a short, client-specific prefix (three to six letters) at the start of the project, and use it on every custom table, column, choice, and relationship. The default publisher's "new_" prefix is what you get when nobody thinks about this, and you will regret it the first time a managed solution import fails because a column collides with a later unmanaged customization.

On solution segmentation, our rule is one solution per bounded context, not per table. A single "Customer Onboarding" solution containing the ten tables, eight flows, three canvas apps and six security roles that implement that capability is manageable. Ten micro-solutions - one per table - is not manageable, because half the dependencies cross solutions and you spend every release chasing dependency errors.

The one exception we make is a separate "Shared" solution for genuinely reusable artifacts - common reference data schemas, utility flows, shared plugin assemblies. That solution deploys first in every pipeline stage.

Azure DevOps pipeline structure

The source of truth is the unpacked solution XML checked into git, not the binary .zip. Unpacked XML is reviewable in a pull request - you can see that your colleague added a new column, changed a form layout, or modified a business rule. Binary solutions in git are write-only history.

We use the Microsoft Power Platform Build Tools extension for Azure DevOps, which wraps the CLI operations we need. A typical pipeline looks like this:

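A minimal sketch of such a pipeline, using the Build Tools tasks. The service connection names, solution name, ruleset ID and file paths below are placeholder assumptions, not our actual configuration:

```yaml
# Build stage: pack the unpacked XML from git into a managed solution.
steps:
  - task: PowerPlatformToolInstaller@2

  - task: PowerPlatformPackSolution@2
    inputs:
      SolutionSourceFolder: 'solutions/CustomerOnboarding'   # unpacked XML in git
      SolutionOutputFile: '$(Build.ArtifactStagingDirectory)/CustomerOnboarding.zip'
      SolutionType: 'Managed'

  - task: PowerPlatformChecker@2    # blocking quality gate, see below
    inputs:
      PowerPlatformSPN: 'sc-powerplatform-build'    # service principal connection
      FilesToAnalyze: '$(Build.ArtifactStagingDirectory)/CustomerOnboarding.zip'
      RuleSet: '0ad12346-e108-40b8-a956-9a8f95ea18c9'

  # Deploy step (repeated per environment with its own connection and settings file).
  - task: PowerPlatformImportSolution@2
    inputs:
      authenticationType: 'PowerPlatformSPN'
      PowerPlatformSPN: 'sc-powerplatform-test'
      SolutionInputFile: '$(Build.ArtifactStagingDirectory)/CustomerOnboarding.zip'
      UseDeploymentSettingsFile: true
      DeploymentSettingsFile: 'deploy/test.json'
```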
Three things worth calling out:

  1. Service principal authentication (PowerPlatformSPN) rather than a human account. Human credentials rotate, expire, and sometimes get disabled at 2am by the IT team. Service principals with carefully scoped application user rights in each environment do not.
  2. Deployment settings files (uat.json, prod.json) for environment variables and connection references. These files live in git but pull their actual secret values from pipeline variable groups, so you never commit the UAT SQL connection string or the PROD webhook URL.
  3. Approval gates on UAT and PROD. Azure DevOps environments have a built-in approver list feature. Use it. A release that needs five minutes to import a solution should not skip the ten-second pause to confirm someone wants it in PROD.
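To make point 2 concrete, a UAT settings file might look like this sketch. The schema and logical names are placeholders, and the `#{...}#` tokens assume a token-replacement step fed by the pipeline's variable group before import:

```json
{
  "EnvironmentVariables": [
    { "SchemaName": "prefix_ApiBaseUrl", "Value": "#{UatApiBaseUrl}#" }
  ],
  "ConnectionReferences": [
    { "LogicalName": "prefix_SharePointConnRef", "ConnectionId": "#{UatSharePointConnectionId}#" }
  ]
}
```

The file itself is safe to commit; only the variable group holds the real values.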

Guardrails we refuse to skip

Every ALM failure we have seen traces back to one of these being absent.

  • Solution Checker runs as a blocking gate, not a polite suggestion. Any Error-level rule violation fails the build. Warnings we document as accepted exceptions in the release notes.
  • Pull requests review the unpacked XML diff, not "trust the developer." This is the single biggest culture change we ask for. If the diff is too large to review, the PR is too large.
  • No direct changes in Test, UAT or Prod. Environment-level permissions lock customization. The service principal is the only thing that can deploy into those environments, which means the pipeline is the only thing that can.
  • Environment variables for every value that differs across environments - API base URLs, tenant IDs, feature flags. A solution that is identical byte-for-byte across UAT and PROD is a solution that deploys the same way.
  • Connection references instead of connections. A flow that hard-codes its SharePoint connection works exactly once, in one environment. A flow that references a named connection reference can point to a different SharePoint per environment without any change in the flow itself.
  • Reference data shipped via dataflows or import jobs, not manual typing. If the Onboarding solution depends on eight option-set values and three reference-data rows to function, those rows are part of the deployable artifact.

What breaks: four gotchas we fixed on real projects

1. Managed solution layering conflict after an unmanaged hotfix

A production issue gets "fixed" with a direct unmanaged change in PROD to unblock a customer. Two weeks later, the next scheduled release fails to import because the managed layer tries to overwrite a property that an unmanaged layer is now actively claiming.

The fix is always the same - either remove the unmanaged customization ("Remove active customization" per component) or merge the hotfix into DEV, promote it properly, and then remove the direct fix. The prevention is the "no direct changes in PROD" rule above.

2. Missing dependency on Friday night

Your managed solution imports fine in Test. It fails in UAT with "Dependency not found." Root cause: your solution depends on a custom choice column from a Shared solution, but the Shared solution version in UAT is older than the one in Test. Your UAT deploy forgot to include the Shared solution update.

The fix is to re-run UAT deploy after the Shared solution update. The prevention is two pipeline rules: Shared solution deploys first in every stage, and the pipeline exports the version manifest after each stage for diff checking.
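The manifest diff can be as simple as comparing two text files, one exported per stage. A sketch, assuming each stage writes one `<solution> <version>` line per solution (the file format and data here are a made-up illustration, not our pipeline's exact output):

```shell
# Manifests as each stage's export step might write them (illustrative data).
cat > test-manifest.txt <<'EOF'
SharedCore 1.4.0
CustomerOnboarding 2.1.3
EOF
cat > uat-manifest.txt <<'EOF'
SharedCore 1.2.0
CustomerOnboarding 2.1.3
EOF

# Any diff means dependency versions have drifted between stages:
# fail the deploy loudly before the solution import does it for you.
if diff test-manifest.txt uat-manifest.txt > drift.txt; then
  echo "manifests match"
else
  echo "version drift detected"
fi
```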

3. Environment variable not populating because of a typo in the schema name

You add a new environment variable in DEV with schema name prefix_ApiBaseUrl. The deployment settings file references prefix_apibaseurl. The import succeeds - nothing validates that every settings entry matched a definition - but the variable value at runtime is empty, and your integration silently fails: no error, just zero records processed.

The fix is to make the deployment settings file the authority. We now generate it from the solution manifest directly, rather than handwriting it. A ten-line PowerShell step reads the environmentvariabledefinitions.xml inside the solution and emits the expected deploymentSettings.json skeleton.
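Our actual step is PowerShell; as an illustration of the same idea, here is a Python sketch. The sample XML and the on-disk layout are simplified assumptions (unpacked solutions store one definition file per variable, and the exact shape varies by tool version), but the point survives: the schema name is copied verbatim, so its casing can never drift.

```python
import json
import xml.etree.ElementTree as ET

# Illustrative stand-in for one environment variable definition file
# as found inside an unpacked solution.
SAMPLE_DEFINITION = """\
<environmentvariabledefinition schemaname="prefix_ApiBaseUrl">
  <displayname default="API Base URL" />
</environmentvariabledefinition>"""

def build_settings_skeleton(definition_docs):
    """Emit a deploymentSettings.json skeleton from definition XML strings,
    copying each schema name verbatim so a handwritten typo is impossible."""
    variables = []
    for doc in definition_docs:
        root = ET.fromstring(doc)
        variables.append({"SchemaName": root.get("schemaname"), "Value": ""})
    return {"EnvironmentVariables": variables, "ConnectionReferences": []}

print(json.dumps(build_settings_skeleton([SAMPLE_DEFINITION]), indent=2))
```

The pipeline then fails fast if the handwritten settings file and the generated skeleton disagree on any schema name.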

4. Plugin assembly rollback blocked by managed solution

A plugin assembly update in a managed solution causes an unexpected exception under production load. You want to roll it back to the previous version. Managed solutions do not support direct rollback; you have to uninstall the solution (which deletes its components and their data) or import the previous managed solution version as a patch.

The fix in the moment is to ship a hotfix release with the previous assembly code packaged as a fresh managed update. The prevention: keep plugin assemblies on a feature-flag pattern so you can disable the new behavior without a redeploy.

Rollout checklist we run pre-Prod

Every production release, no exceptions, no "just this once":

  • Solution Checker passes with no Error-level findings
  • Unpacked XML diff reviewed in the release PR (two reviewers)
  • Release notes list every changed component and any breaking changes
  • Deployment settings file values for PROD are filled in and validated
  • Shared/dependency solutions are on the correct version in PROD
  • Integration test suite green against UAT after the previous UAT deploy
  • Reference data row counts match expectation (import job dry-run report)
  • Rollback plan documented in the release ticket (previous solution zip attached)
  • Manual approver for PROD environment notified
  • Business owner aware the release is going in and on standby for smoke-test

The list looks long. In practice the pipeline automates eight of them and a human ticks the last two. A release we have shipped twenty times takes five minutes of human time.

What this costs

The honest numbers:

  • Initial setup: two engineering weeks to stand up the environments, pipeline, service principals, solution segmentation and the first few releases through the pipe. Less if we are reusing our template repository; more if we are unwinding an existing broken setup.
  • Per-release runtime: five to ten minutes of pipeline execution, two to five minutes of human approval time.
  • Licensing: four Dataverse environments instead of three, with the UAT and TEST environments typically sized smaller than PROD. The exact cost depends on your Dataverse capacity and whether you need Dynamics 365 apps; for most Power Apps per-app plan engagements, the marginal environment cost is modest.

The comparison that matters is cost of the pipeline vs. cost of drift. A single Friday-night fire drill on a drifted environment is two to four engineering hours plus the client trust hit. After three of those per quarter - which is what we commonly see on unmanaged three-environment setups - the pipeline has paid for itself.

When you should not do this

This level of rigor is overkill for:

  • Proof-of-concept work or internal prototypes with no customer-facing footprint. A single environment and manual export is fine.
  • Single-developer, short-duration projects where the only "team" is one person who remembers the full history of every change.
  • Apps with a blast radius of one team. If the worst case of a broken deploy is one person redoing their morning, you do not need a four-stage pipeline.

The inflection point is roughly: more than one developer, more than one business-critical flow, or any customer-facing capability. Below that bar, a shared Dev and a Prod with a weekly manual promotion is fine.

If your setup already hurts

Most Power Platform codebases we are asked to "clean up" share the same three patterns: unmanaged solution in PROD, no source control, and one shared environment for everyone. The unwind path is real work, but it is finite - usually two to three weeks for a single-solution system, longer for multi-solution estates.

If this post described your Friday nights, the SapotaCorp Power Platform team has done this unwind on projects ranging from single-tenant line-of-business systems to multi-country Dataverse deployments. Tell us what you're working with and we can sketch a migration plan in a first call.
