
RSM Inbound Contracts Integration

C#, WD Reports, WD SOAP APIs, WD Studio

Architected and led development of an asynchronous integration automating project and contract creation in Workday from external CRM systems. Replaced manual CSV workflows with Azure Service Bus pub/sub architecture supporting opportunity tracking, maintenance-aware processing, and real-time synchronization.


Table of Contents

  1. Overview
  2. Role
  3. The Problem
  4. The Goal
  5. The Solution
  6. Development Management
  7. Lessons Learned

Overview

This inbound integration into Workday allows the enterprise to create and maintain projects and contracts at the start of their lifecycle. It supports not only the creation of active projects and contracts, but also the creation, maintenance, and conversion of opportunity projects and contracts sourced from a CRM system.


Role

My role on this project was to serve as the lead from our team: solutioning, designing, and managing development of the Workday portions of this integration.


The Problem

At RSM, when a new opportunity for work arises, the employees in charge of setting up contracts and projects inside of Workday need to be able to take information from our CRM and enter it into Workday. The process originally developed for this need was a non-ideal set of integrations that had users handling raw data files and interacting in multiple areas. The existing integration had users:

  1. Fill out an extensive Excel sheet for the projects / contracts / opportunities that they wanted to see in Workday
  2. Utilize an Excel macro in the sheet that generated a CSV file
  3. Upload the CSV file to Workday via a STUDIO integration, which
    1. Parsed the file
    2. Saved the data in custom Extend Business Objects
  4. Wait for the STUDIO integration to complete
  5. Enter an Extend application & view the imported data
  6. Verify data looks correct
  7. Submit the data into Workday

This would kick off an Orchestration to do all of the heavy lifting of converting Extend Business Objects into actual native Workday Business Objects.

If the items were for an opportunity and not “active”, the users would then also need to:

  1. Maintain the submitted data inside of the Extend UI until the S.O.W. was signed
  2. Once the S.O.W. was signed, they would re-submit the extend form which would:
    1. Update the “opportunity” projects to “active” projects
    2. Create contract items

The issue with this is that our firm is moving in a different direction with our CRM and our project/contract lifecycle, and we didn’t want users to jump back and forth between many different applications, much less handle raw data files like CSVs. We wanted a seamless integration between Workday and the new application.


The Goal

The hope was that a different team would build a UI inside of the application stack we are moving toward, and users would interact with only that UI. To accommodate that, there would need to be near-real-time integrations between Workday and that application to ensure that data stays synchronized between the two.

We needed to be able to support four main operations, covered in the solution below.


The Solution

Requirements

  1. Fully hands-off integration for end users
  2. Asynchronous integration between the other application and Workday
  3. Parity with the functionality of the current integrations
  4. Ability to test the Workday side separately from the other application
  5. Hard deadline of 3 months, including end-to-end and user acceptance testing

Pub-Sub approach

We knew that because of Workday’s weekly maintenance period, during which its APIs are unavailable, we needed an asynchronous integration with Workday; we are a global firm whose office hours may extend into that maintenance window. This is something we have dealt with before, and the best way to handle it is with a pub/sub approach. We already have a Maintenance Microservice that publishes statuses and events to a Redis cache when Workday enters a maintenance window. This allows us to create a Service Bus subscriber that listens to two sources: Redis for maintenance events and Azure Service Bus for new requests. We can then pause/resume our subscriber on maintenance events published by the Maintenance Microservice, disabling the processing of events from the other application until Workday is available again.

This means the other application does not need to care whether Workday is in a maintenance window: it can keep pushing messages, we process them as we are able, and we respond in turn on another pub/sub topic back to the other application.
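The pause/resume behavior can be sketched as a simple gate that message handlers wait on before touching Workday. This is a minimal illustration only; the real subscriber is built on the Azure Service Bus and Redis client libraries, and the type and method names here are hypothetical:

```csharp
using System;
using System.Threading;

// Sketch of the maintenance-aware gate. Maintenance events from Redis flip
// the gate; Service Bus message handlers block on it before calling Workday.
public sealed class MaintenanceGate
{
    // Open (signaled) = Workday is available and messages may be processed.
    private readonly ManualResetEventSlim _gate = new(initialState: true);

    // Invoked when the Maintenance Microservice publishes "maintenance started".
    public void EnterMaintenance() => _gate.Reset();

    // Invoked when a "maintenance ended" event arrives.
    public void ExitMaintenance() => _gate.Set();

    public bool IsOpen => _gate.IsSet;

    // Message handlers park here until Workday is available again.
    public void WaitUntilAvailable(CancellationToken ct = default) => _gate.Wait(ct);
}
```

In the real service, the Redis subscription callback would call `EnterMaintenance`/`ExitMaintenance`, which has the effect of pausing and resuming the Service Bus processor.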

Processing of Messages

Luckily for us, the current integration had an Orchestration in the existing Extend application that handled inserting, updating, and managing these objects, converting Extend Business Objects into native Workday business objects. The approach we ultimately landed on was to lift and shift that Orchestration into separate use-case-specific orchestrations, and trigger those from our subscriber depending on which operation was being sent.

The operation was sent as a Service Bus message property so we could determine the proper deserialization models for the message, as each operation had slightly different data contracts. Once we determined the operation, we could resolve the proper registered C# service, deserialize the message into the matching model, and invoke the service. That service would in turn launch a specific Integration System linked to a specific Orchestration. We ended up with three use-case-specific Integration Systems / Orchestrations, one per operation.
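A stripped-down sketch of that property-based dispatch is below. The operation names and data contracts are hypothetical stand-ins, not the actual message schema:

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

// Hypothetical per-operation contracts; each operation has its own shape.
public record CreateProjectRequest(string OpportunityId, string ProjectName);
public record UpdateOpportunityRequest(string OpportunityId, string StartDate, string EndDate);

public static class OperationDispatcher
{
    // The "Operation" message property selects the deserializer for the body.
    private static readonly Dictionary<string, Func<string, object>> Deserializers = new()
    {
        ["CreateProject"]     = body => JsonSerializer.Deserialize<CreateProjectRequest>(body)!,
        ["UpdateOpportunity"] = body => JsonSerializer.Deserialize<UpdateOpportunityRequest>(body)!,
    };

    public static object Deserialize(string operation, string messageBody)
    {
        if (!Deserializers.TryGetValue(operation, out var deserialize))
            throw new InvalidOperationException($"Unknown operation: {operation}");
        return deserialize(messageBody);
    }
}
```

In the actual service, the resolved model would then be handed to the DI-registered handler that launches the matching Integration System.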

The final operation, “Update Opportunity”, was stripped down in scope to only updating the start/end dates of the opportunity projects, so for the time being, and to reduce complexity, we kept all of that logic directly in the Abstraction Layer Microservice. To update those dates, we simply utilized the REST API for Projects and made some very simple API calls.
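To illustrate how small those calls are, here is a hedged sketch of a date-only update. The endpoint path and field names are assumptions for illustration, not Workday’s actual Projects REST contract:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

public static class OpportunityDateUpdater
{
    // Builds the minimal JSON body carrying only the two dates.
    public static string BuildDatePayload(DateOnly start, DateOnly end) =>
        JsonSerializer.Serialize(new
        {
            startDate = start.ToString("yyyy-MM-dd"),
            endDate   = end.ToString("yyyy-MM-dd"),
        });

    // PATCHes the single project resource; "projects/{id}" is a placeholder path.
    public static async Task UpdateDatesAsync(
        HttpClient client, string projectId, DateOnly start, DateOnly end)
    {
        var content = new StringContent(
            BuildDatePayload(start, end), Encoding.UTF8, "application/json");
        var response = await client.PatchAsync($"projects/{projectId}", content);
        response.EnsureSuccessStatusCode();
    }
}
```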

Returning a Response

As the other application needed to know whether the operation succeeded or failed, we needed a secondary pub/sub topic back to that application. The question then became: how do we know what happened in the orchestrations?

The route we settled upon was to have the Orchestration, at the very end of its processing, make an API call back to our Microservice, which would forward the message to a separate response topic. This involved managing API availability from our Azure environment, as well as ensuring that Workday would be able to connect to our OAuth-secured APIs. The solution involved creating an External CredStore, generating credentials from inside of Azure, ensuring those credentials had the proper App Roles, and hooking it all together so that our orchestrations could make those calls.
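Under the hood, the CredStore entry boils down to a standard OAuth client-credentials exchange before the callback is made. A rough sketch, with placeholder parameter names:

```csharp
using System.Collections.Generic;
using System.Net.Http;

public static class CallbackAuth
{
    // Builds the client-credentials token request the orchestration's
    // credentials are exchanged through before calling our response API.
    public static FormUrlEncodedContent BuildTokenRequest(
        string clientId, string clientSecret, string scope) =>
        new(new Dictionary<string, string>
        {
            ["grant_type"]    = "client_credentials",
            ["client_id"]     = clientId,
            ["client_secret"] = clientSecret,
            // The scope must resolve to an App Role our response API accepts.
            ["scope"]         = scope,
        });
}
```

The resulting bearer token is what lets the orchestration’s final API call pass our OAuth-secured endpoint.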

We also needed to add global error handlers in our orchestrations that call back to our APIs if an orchestration run fails.

Diagram

(Architecture diagram of the integration.)

Development Management

With other conflicting priorities for our team at the time, it was determined that I would “take point” from our team’s standpoint on this specific initiative. During my time on this project, I was responsible for solutioning, designing, and managing development of the Workday side of the integration.

Our timeframe for reaching our deliverable was quite tight: three months to architect, develop, test, and deliver a solution of this size is no joke. We came to an agreed-upon solution pretty fast, and I was pulled off my other work to do some pre-work for this integration in the remainder of that sprint. During that time I not only started to scaffold our Microservice and the orchestrations, but also helped create and refine the ADO stories covering each segment of the work.

We decided on a structure where a developer did a large chunk of work in a single sprint and QA tested it the sprint after, allowing for less “handoff” time and more focus on getting the proper code down. Starting at the end of the integration and working forward proved quite beneficial: when testers finished one part and moved to the next, it was a case of “now we do the same things, but trigger it this way, and all of this other stuff should have happened in addition to what you already tested.” This allowed our QA to build upon test plans they had already completed.


Lessons Learned

What Went Well

Reusing Existing Orchestrations: The decision to lift and shift the existing orchestrations rather than rebuild from scratch saved us significant development time. While we did need to split them into use-case-specific orchestrations, the core business logic was already tested and proven.

API-First Approach for Testing: By designing our microservice with API endpoints alongside the Service Bus subscriber, we enabled independent testing of the Workday integrations without requiring the other application’s involvement. This proved invaluable during development and troubleshooting.

Working Backwards from the Goal: Starting development at the end of the integration (the orchestrations) and working backward to the entry point meant that when QA started testing, each subsequent phase built upon what they had already verified. This incremental approach reduced confusion and made test planning more efficient.

Pub/Sub Architecture Decision: The asynchronous pub/sub pattern with Redis-based maintenance awareness proved to be the right architectural choice. It elegantly solved the Workday maintenance window problem while keeping the other application decoupled from Workday’s availability.

What Could Be Improved

Scope Creep on “Update Opportunity”: Initially, we planned to handle all update scenarios through orchestrations. We ended up simplifying the “Update Opportunity” operation to only handle date changes via REST API. While this worked, having a clearer scope definition earlier would have prevented some rework.

Earlier Integration Testing: While our component-level testing was solid, we could have benefited from earlier end-to-end integration testing with the other application’s team. Some message contract assumptions had to be adjusted late in the development cycle.

Documentation During Development: With the tight timeline, documentation sometimes took a back seat to development. Creating documentation concurrently with development would have made knowledge transfer easier.

Key Takeaways

Technical Leadership Is About Enablement: Taking point on this project taught me that technical leadership isn’t just about making decisions; it’s about enabling the team to execute efficiently. Creating clear ADO stories, establishing testable patterns, and removing blockers were as important as the architectural decisions.

Tight Deadlines Require Smart Tradeoffs: With only 3 months to deliver, we had to be ruthless about scope and creative about reusing existing assets. The orchestration lift-and-shift was a perfect example of finding the balance between “perfect” and “done.”

Design for Testability: The dual-interface design (Service Bus + API endpoints) made testing dramatically easier. Building testability into the architecture from day one pays dividends throughout the development lifecycle.

Cross-Team Communication Is Critical: Regular sync meetings with the other application’s team, even when we didn’t think we needed them, prevented misunderstandings about message contracts, timing expectations, and integration behavior.