
RSM Workday Abstraction Layer

C#, Azure, Integrations

A set of Azure resources combined to abstract interactions with Workday, for easy reuse when implementing integrations.

Visit the project ↗

Table of Contents

  1. Overview
  2. The Problem
  3. The Goal
  4. The Solution
  5. Lessons Learned

Overview

As part of the implementation, and continuing well afterward, a common set of libraries and services became necessary for interacting with Workday, abstracting away many of its nuances so that other applications could interact with it easily.

These included many ASP.NET microservices, MSSQL databases, Azure File Shares, Azure Functions, and Azure Service Bus Publishers/Subscribers, all in support of integrating Workday with the enterprise.



The Problem

With the enterprise moving from various other applications to Workday for much of the finance side of the business, there was a need for an integration layer between the SaaS application and the rest of the enterprise. This layer would enable rapid development and ongoing support of the integrations over time.


The Goal

There were a few main goals with this:

  1. Create a centralized location through which enterprise applications interact with Workday
  2. Utilize the new and improved standards being adopted enterprise-wide
  3. Take a data-first mindset when building out the application set

The Solution

Diagram

Monolithic API Architecture (Initial Implementation)

The initial abstraction layer was implemented as a monolithic ASP.NET Core Web API application, consolidating all integration endpoints, business logic, and Workday interaction patterns within a single deployable unit. This architecture served multiple enterprise applications through a unified RESTful interface, handling authentication, data transformation, and Workday API orchestration.

However, this tightly coupled architecture presented significant challenges: any modification to a single integration endpoint required full application redeployment, introducing regression risks across all integrations. The lack of deployment isolation meant that a bug fix for one integration could inadvertently impact others, creating a bottleneck for rapid iteration and increasing the blast radius of potential failures.

Microservice Architecture Migration

The architecture was refactored to adopt a domain-driven microservices pattern, with each integration deployed as an isolated, containerized ASP.NET Core service orchestrated through Azure Kubernetes Service (AKS). This decoupling enabled independent deployment pipelines, isolated failure domains, and technology stack flexibility per service.

Each microservice was containerized using Docker and deployed to AKS with Helm charts for configuration management. This architectural shift delivered immediate benefits: development teams could experiment with new patterns and libraries within a single service boundary without impacting production integrations. Deployment velocity increased significantly as changes were scoped to individual services, and horizontal scaling could be tuned per integration based on specific load characteristics rather than scaling the entire monolith.
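As a rough sketch of the per-service shape (the health-check wiring and the /healthz route below are assumptions, not details from the actual codebase), each containerized service could be little more than a minimal-API entry point exposing an endpoint for the AKS probes defined in its Helm chart:

```csharp
// Program.cs for an isolated integration service: a minimal sketch.
// The "/healthz" route and health-check setup are illustrative.
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddHealthChecks();
builder.Services.AddControllers();

var app = builder.Build();

// Endpoint targeted by the liveness/readiness probes in the Helm chart.
app.MapHealthChecks("/healthz");
app.MapControllers();

app.Run();
```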

Shared NuGet Package Library

With code distributed across multiple repositories, cross-cutting concerns and common integration patterns were abstracted into a suite of private NuGet packages hosted in Azure Artifacts. This library ecosystem standardized Workday interactions and eliminated code duplication across microservices.

Key packages included wrappers around the WorkSharp SDK with built-in retry policies and circuit breakers, along with shared components for logging, error handling, and common API interaction patterns.

This approach ensured consistent error handling, logging patterns, and API interaction standards across all microservices while enabling versioned upgrades of shared functionality.
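As a minimal sketch of what one of those wrappers might look like, assuming Polly supplies the retry and circuit-breaker policies (the WorkdayGateway name and its shape are illustrative, not the actual package API):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;
using Polly.CircuitBreaker;
using Polly.Retry;

// Illustrative shared-package wrapper: centralizes retry and
// circuit-breaker behavior so individual microservices don't
// reimplement Workday call resilience.
public class WorkdayGateway
{
    private readonly HttpClient _http;
    private readonly AsyncPolicy<HttpResponseMessage> _resilience;

    public WorkdayGateway(HttpClient http)
    {
        _http = http;

        // Retry transient failures with exponential backoff...
        AsyncRetryPolicy<HttpResponseMessage> retry = Policy
            .HandleResult<HttpResponseMessage>(r => !r.IsSuccessStatusCode)
            .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

        // ...and stop calling Workday entirely after repeated failures.
        AsyncCircuitBreakerPolicy<HttpResponseMessage> breaker = Policy
            .HandleResult<HttpResponseMessage>(r => !r.IsSuccessStatusCode)
            .CircuitBreakerAsync(5, TimeSpan.FromMinutes(1));

        _resilience = retry.WrapAsync(breaker);
    }

    public Task<HttpResponseMessage> GetAsync(string relativeUrl) =>
        _resilience.ExecuteAsync(() => _http.GetAsync(relativeUrl));
}
```

Registering a wrapper like this once per service keeps resilience behavior uniform, rather than each team tuning its own policies.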

Azure SQL Elastic Pool Optimization

Initial database architecture deployed separate Azure SQL databases for each integration to maintain regulatory compliance and data isolation boundaries. Performance monitoring revealed low average DTU utilization across databases with sporadic, non-overlapping peak loads—resulting in over-provisioned resources and unnecessary costs.

We consolidated these isolated databases into an Azure SQL Elastic Pool, maintaining logical separation through database-level security while sharing a common pool of compute and storage resources (eDTUs). This change delivered substantial cost savings (~40% reduction in database spend).

The elastic pool maintained full database isolation for compliance while enabling dynamic resource sharing—databases experiencing peak load could burst above their minimum allocation, borrowing unused capacity from idle databases.

API Management Gateway & Selective Exposure

As integration requirements evolved to support external partner systems, selective API exposure became critical. Azure API Management (APIM) was implemented as a centralized gateway layer, providing controlled internet-facing access to specific endpoints while maintaining internal service security.

The APIM layer acted as a facade: it protected internal microservices from direct internet exposure while providing enterprise-grade API governance, versioning, and transformation capabilities (e.g., header injection, payload manipulation), all without modifying service code.

File-Based Integration Pipeline

Certain integrations required bulk data transfer that exceeded practical API payload limits. A hybrid storage architecture was implemented to handle large file-based integrations between Workday and Azure infrastructure.

The pipeline combined Azure File Shares for Workday-facing file transfer, blob storage for staging, and Azure Functions with Azure Service Bus messaging for asynchronous processing.

Developer Experience Benefits: Because Azure File Shares support the SMB 3.0 protocol, shares could be mounted locally as network drives (e.g., \\storageaccount.file.core.windows.net\share mapped to Z:\), eliminating the need for FTP clients during development and testing. Developers could interact with integration files using the native file explorer, VS Code, or command-line tools, dramatically simplifying the debug and staging workflow.

Files were processed asynchronously via Azure Functions triggered by blob storage events or Service Bus messages, enabling parallel processing and retry logic for failed transformations.
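A sketch of that trigger pattern using the in-process Azure Functions programming model; the function name and the workday-staging container are illustrative:

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

// Illustrative blob-triggered processor: fires when a file lands in the
// staging container, enabling parallel processing and retries on failure.
public static class IntegrationFileProcessor
{
    [FunctionName("ProcessIntegrationFile")]
    public static void Run(
        [BlobTrigger("workday-staging/{name}")] Stream fileStream,
        string name,
        ILogger log)
    {
        log.LogInformation("Processing integration file {Name} ({Length} bytes)",
            name, fileStream.Length);
        // Transformation and downstream publishing would happen here.
    }
}
```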

Distributed Maintenance Mode Orchestration

Workday’s weekly maintenance windows render all APIs unavailable, requiring a system-wide graceful degradation strategy. A dedicated maintenance orchestration microservice was developed to coordinate state management across the distributed architecture.

The architecture centered on a dedicated maintenance controller service, a Redis cache acting as both the shared state store and the Pub/Sub broker, and subscriber hooks in each microservice and Azure Function.

State Transition Flow (steps 2 and 3 are sketched after this list):

  1. Timer-triggered function or manual API call initiates maintenance mode
  2. Maintenance controller updates Redis state (maintenance:active = true)
  3. Redis Pub/Sub message broadcast to all subscribed services/pods
  4. Azure Functions disabled via ARM API to prevent time-based executions
  5. Reverse flow when exiting maintenance window
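A minimal sketch of steps 2 and 3 using StackExchange.Redis. The maintenance:active key comes from the flow above; the channel name and class shape are assumptions for illustration:

```csharp
using System.Threading.Tasks;
using StackExchange.Redis;

// Illustrative core of the maintenance controller (steps 2 and 3).
// The "maintenance:events" channel name is an assumption.
public class MaintenanceStateManager
{
    private readonly IConnectionMultiplexer _redis;

    public MaintenanceStateManager(IConnectionMultiplexer redis) => _redis = redis;

    public async Task SetMaintenanceAsync(bool active)
    {
        // Step 2: persist the shared flag so late-joining pods see it.
        await _redis.GetDatabase().StringSetAsync(
            "maintenance:active", active ? "true" : "false");

        // Step 3: broadcast so already-running services react immediately.
        var channel = new RedisChannel("maintenance:events", RedisChannel.PatternMode.Literal);
        await _redis.GetSubscriber().PublishAsync(channel, active ? "enter" : "exit");
    }
}
```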

Service Implementation Patterns:

Long-Running Background Services (e.g., Service Bus processors):
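A minimal sketch of this pattern, assuming the processor checks the shared Redis flag between messages (a Pub/Sub subscription could flip a local flag instead). Only the maintenance:active key comes from the flow above; the rest is illustrative:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using StackExchange.Redis;

// Illustrative long-running processor: pauses message handling while
// the shared maintenance flag is set, then resumes automatically.
public class QueueProcessorService : BackgroundService
{
    private readonly IConnectionMultiplexer _redis;

    public QueueProcessorService(IConnectionMultiplexer redis) => _redis = redis;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // Check the shared flag before pulling the next message.
            var active = await _redis.GetDatabase().StringGetAsync("maintenance:active");
            if (active == "true")
            {
                await Task.Delay(TimeSpan.FromSeconds(30), stoppingToken);
                continue; // Skip processing while Workday is down.
            }

            await ProcessNextMessageAsync(stoppingToken);
        }
    }

    private Task ProcessNextMessageAsync(CancellationToken ct)
        => Task.CompletedTask; // Service Bus receive/handle logic would go here.
}
```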

Synchronous API Operations (e.g., REST endpoints):
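And a sketch of the synchronous counterpart: an ASP.NET Core middleware that short-circuits requests with a 503 while the flag is set. Again, only the flag key comes from the flow above; the Retry-After value and response body are assumptions:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using StackExchange.Redis;

// Illustrative middleware: rejects Workday-bound requests with 503
// while maintenance mode is active, instead of letting them fail
// deep in the call chain.
public class MaintenanceModeMiddleware
{
    private readonly RequestDelegate _next;
    private readonly IConnectionMultiplexer _redis;

    public MaintenanceModeMiddleware(RequestDelegate next, IConnectionMultiplexer redis)
    {
        _next = next;
        _redis = redis;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        var active = await _redis.GetDatabase().StringGetAsync("maintenance:active");
        if (active == "true")
        {
            context.Response.StatusCode = StatusCodes.Status503ServiceUnavailable;
            context.Response.Headers["Retry-After"] = "3600"; // Assumed window length.
            await context.Response.WriteAsync("Workday is in its maintenance window.");
            return;
        }

        await _next(context);
    }
}
```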

This distributed coordination ensured zero failed transactions during maintenance windows while maintaining full observability through centralized state management.


Lessons Learned

What Went Well

Microservice Architecture Migration

The transition from monolithic to microservices architecture, while challenging, delivered immediate value in deployment velocity and failure isolation. The ability to experiment with new patterns in isolated services without impacting production integrations proved invaluable for rapid iteration.

SQL Elastic Pool Cost Optimization

Consolidating separate Azure SQL databases into an elastic pool delivered substantial cost savings (~40% reduction) while maintaining regulatory compliance through database-level isolation. The dynamic resource sharing model perfectly matched our sporadic, non-overlapping workload patterns.

Shared NuGet Package Ecosystem

Creating a suite of private NuGet packages for common Workday interactions standardized our integration patterns across all microservices. The WorkSharp SDK wrappers with built-in retry policies and circuit breakers significantly improved reliability.

Redis-Based Maintenance Orchestration

The distributed maintenance mode coordination using Redis Pub/Sub effectively solved Workday’s weekly maintenance window challenges. The ability to gracefully pause/resume processing across all services prevented failed transactions during downtime.

What Could Be Improved

Earlier Investment in Observability

Implementing comprehensive observability (structured logging, correlation IDs, distributed tracing) retroactively was exponentially harder than building it in from the start. Cross-service debugging consumed significant time that could have been avoided with earlier investment in monitoring infrastructure.

NuGet Package Versioning Governance

While shared libraries eliminated code duplication, we underestimated the versioning challenges. Services running different package versions created subtle incompatibilities. We should have established semantic versioning standards and automated dependency update pipelines earlier.

Service Boundaries in Initial Design

Starting with a monolithic architecture created significant refactoring debt during the microservices migration. Even for an MVP, designing with proper separation of concerns and service boundaries would have made the evolution far less painful.

Maintenance Mode Implementation Standardization

The Redis Pub/Sub maintenance pattern worked well but required every new service to implement it correctly. Missing implementations led to production incidents. We should have abstracted this into base classes or middleware packages earlier to make correct implementation the default path.

Key Takeaways

Observability Must Be Foundational

Distributed systems require observability infrastructure from day one. Retrofitting structured logging, distributed tracing, and centralized monitoring is exponentially harder than building it in from the start. Treat observability as a first-class architectural concern, not an afterthought.

Design for Evolution, Not Just Today

Even when building MVPs under tight timelines, invest in proper service boundaries and modular design. A well-structured modular monolith can evolve into microservices with significantly less refactoring debt than a tightly coupled monolith.

Shared Code Needs Shared Standards

Proliferating shared libraries without governance creates coordination overhead. Establish semantic versioning, automated dependency updates, and deprecation policies before distributing common code across teams—version sprawl compounds quickly.

Make the Right Path the Easy Path

Infrastructure patterns like maintenance mode coordination should be abstracted into base classes or middleware packages. When every new service must reimplement critical patterns correctly, production incidents become inevitable. Build framework-level support for cross-cutting concerns.