Logistics and distribution operators running Dynamics 365 Supply Chain Management increasingly need live operational data: shipments tracked by GPS, inventory reads from RFID, vehicle status from telemetry. The question isn't whether to do it - the question is whether to build custom pipelines or use what Microsoft already ships.
Four architectures show up in discussion. Three of them fail at scale.
The patterns that don't survive contact with reality
Teams unfamiliar with the Azure telemetry stack often draft one of:
- Excel + batch imports: store sensor readings in spreadsheets, import to F&O via DMF every hour. Fails because sensor volumes (thousands of readings per second across a fleet) break Excel long before they get close to F&O's batch window.
- Third-party integration platform with daily pushes: rents the problem and pushes it outside the team's control. Real-time predictive insights become yesterday's snapshot.
- SharePoint manual logging: proposed surprisingly often in early architecture workshops. Doesn't scale past one pilot truck.
None of these deliver near-real-time data and none scale to fleet size.
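The Excel failure is back-of-the-envelope arithmetic. A sketch with illustrative numbers (the per-second rate is assumed, not measured from any real fleet):

```python
# Back-of-the-envelope: how long before fleet telemetry exceeds
# Excel's hard limit of 1,048,576 rows per worksheet.
READINGS_PER_SECOND = 1_000      # illustrative fleet-wide rate
EXCEL_ROW_LIMIT = 1_048_576      # fixed per-worksheet limit

seconds_to_fill = EXCEL_ROW_LIMIT / READINGS_PER_SECOND
print(f"Sheet full in {seconds_to_fill / 60:.1f} minutes")   # ~17.5 minutes

# An hourly DMF batch would have to move this many rows per import:
rows_per_hour = READINGS_PER_SECOND * 3600
print(f"{rows_per_hour:,} rows per hourly batch")            # 3,600,000 rows
```

At that rate a worksheet fills in under twenty minutes, and each hourly import is several times larger than the sheet that is supposed to hold it.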
The Azure-native pipeline
The pattern Microsoft's reference architectures recommend for this scenario:
- Ingestion layer: Azure IoT Hub receives telemetry from all devices (sensors, RFID readers, vehicle telemetry). Devices authenticate individually and connect over MQTT or AMQP; capacity scales with IoT Hub units.
- Stream processing: Azure Stream Analytics reads from IoT Hub, windows the data (e.g., 1-minute rolling windows for vehicle position, 5-minute for inventory reads), runs aggregations and anomaly detection inline.
- Enrichment and routing: Stream Analytics outputs to multiple sinks - Dataverse for operational data consumed by Power Apps, Azure Service Bus for workflow triggers, Azure Blob for cold-storage analytics.
- Into D365: Power Automate flows triggered by Dataverse changes (or Service Bus messages) update F&O via OData data entities. For very high throughput, bypass Power Automate and call F&O directly via a custom service.
- Analytics layer: the cold Blob store feeds Azure Synapse for predictive models; the predictions loop back via Dataverse into F&O's master planning inputs.
This stack scales to millions of events per day without custom infrastructure.
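To make the middle of the pipeline concrete, here is a minimal Python simulation of the windowing and fan-out steps. Everything in it is a stand-in: in the real pipeline the aggregation is a Stream Analytics SQL query and the sinks are Dataverse, Service Bus, and Blob Storage; the event shape, field names, and stall rule are invented for illustration.

```python
from collections import defaultdict

WINDOW_SECONDS = 60  # 1-minute tumbling window, per the pipeline above

def window_key(event):
    """Bucket an event into a (device, window-start) key."""
    return event["device_id"], event["ts"] - event["ts"] % WINDOW_SECONDS

def aggregate(events):
    """Collapse raw telemetry into one record per device per window,
    mimicking what the Stream Analytics job's GROUP BY + window does."""
    buckets = defaultdict(list)
    for ev in events:
        buckets[window_key(ev)].append(ev)
    return [
        {
            "device_id": dev,
            "window_start": start,
            "readings": len(group),
            "avg_speed": sum(e["speed"] for e in group) / len(group),
        }
        for (dev, start), group in sorted(buckets.items())
    ]

# Stand-ins for the three sinks (Dataverse hot store, Service Bus, Blob cold store).
dataverse_rows, service_bus_msgs, blob_records = [], [], []

def route(aggregates, raw_events):
    """Fan out: aggregates to the hot store, alerts to Service Bus, raw to cold storage."""
    dataverse_rows.extend(aggregates)
    service_bus_msgs.extend(a for a in aggregates if a["avg_speed"] == 0)  # stalled vehicle
    blob_records.extend(raw_events)

raw = [
    {"device_id": "truck-7", "ts": 5,  "speed": 60.0},
    {"device_id": "truck-7", "ts": 30, "speed": 62.0},
    {"device_id": "truck-7", "ts": 70, "speed": 0.0},   # next window, stalled
]
aggs = aggregate(raw)
route(aggs, raw)
print(len(dataverse_rows), len(service_bus_msgs), len(blob_records))  # 2 1 3
```

The point of the sketch is the shape, not the code: raw events in, one aggregate per device per window out, and only the aggregates and alerts travel the hot path while every raw reading lands in cold storage.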
Trade-offs inside the pattern
The architecture has its own decisions:
- Stream Analytics vs Azure Functions: Stream Analytics gives you a declarative SQL-like query language with built-in windowing; Functions give more flexibility at the cost of more code. Start with Stream Analytics and move to Functions when you hit a wall.
- Dataverse as the hot store vs F&O direct: Dataverse tolerates frequent writes better than F&O tables. Write to Dataverse, let dual-write or custom sync push to F&O on the schedule F&O can digest.
- Power Automate vs custom service: Power Automate is fine up to a few hundred triggers per minute. Above that, a custom service on the F&O side is the escape hatch.
- Data volume in F&O: don't push every sensor reading into F&O. Aggregate in Stream Analytics (e.g., "vehicle X at location Y for 15-minute window"), write the aggregates, let F&O carry only what planners actually query.
The last point catches most projects: the instinct to "have all the data in F&O" is what tanks F&O performance.
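The volume argument is easy to quantify. A sketch with assumed cadence and fleet size (swap in your own numbers):

```python
# Illustrative volume math for the "aggregate before F&O" rule.
READING_INTERVAL_S = 5          # assumed raw GPS cadence per vehicle
WINDOW_S = 15 * 60              # 15-minute aggregate window
FLEET_SIZE = 500                # hypothetical fleet

raw_per_vehicle_day = 24 * 3600 // READING_INTERVAL_S      # 17,280 readings
agg_per_vehicle_day = 24 * 3600 // WINDOW_S                # 96 records

raw_fleet_day = raw_per_vehicle_day * FLEET_SIZE           # 8,640,000
agg_fleet_day = agg_per_vehicle_day * FLEET_SIZE           # 48,000
reduction = raw_fleet_day // agg_fleet_day                 # 180x fewer rows into F&O
print(f"{raw_fleet_day:,} raw readings -> {agg_fleet_day:,} F&O rows ({reduction}x reduction)")
```

Under these assumptions the aggregation step keeps 8.6 million daily readings out of F&O and writes 48,000 rows instead - the difference between a database planners can query and one they can't.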
Security and operations
- Device identity: each device gets an IoT Hub identity with an X.509 certificate or SAS key. Revocation is handled by disabling or deleting the device in the IoT Hub identity registry.
- Network isolation: Private Link between IoT Hub, Stream Analytics, and the F&O perimeter. Public endpoints only where unavoidable.
- Monitoring: Azure Monitor dashboards on IoT Hub throughput, Stream Analytics latency, Dataverse write volumes. Alerts when any sink falls behind.
- Cost controls: IoT Hub units and Stream Analytics streaming units are the two cost drivers. Size them against measured peak throughput, not imagined volume.
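Sizing against measured throughput is a one-liner. The sketch below assumes the S1 tier's per-unit daily message quota; quotas and meter sizes change, so verify the current figures in the Azure documentation before sizing for real:

```python
import math

# Sizing sketch: IoT Hub units from measured peak throughput.
# Assumed S1-tier quota: ~400,000 messages/day per unit (verify current figure).
MESSAGES_PER_UNIT_PER_DAY = 400_000

measured_peak_msgs_per_sec = 50          # from monitoring, not imagined volume
daily_messages = measured_peak_msgs_per_sec * 24 * 3600   # worst case: peak all day

units_needed = math.ceil(daily_messages / MESSAGES_PER_UNIT_PER_DAY)
print(f"{daily_messages:,} msgs/day -> {units_needed} S1 units")
```

Treating the measured peak as the all-day rate is deliberately pessimistic; it gives a ceiling, and the real unit count sits somewhere between that and the average rate.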
What ships with the architecture
A working IoT-to-F&O integration has:
- IoT Hub with device provisioning automation
- Stream Analytics jobs for each data class, with versioned SQL queries
- Dataverse custom tables for the operational projections
- Power Automate or custom-service writers to F&O
- a runbook for adding new device types without an architecture review each time
The build cost is front-loaded. The recurring cost is much lower than a custom integration platform because the glue layers (IoT Hub, Stream Analytics, Dataverse, Power Automate) are Microsoft-managed. That's the recommendation: lean on managed services, avoid infrastructure you'd otherwise own forever.