The client is a company dedicated to using analytics and science to help healthcare stakeholders find better solutions for patients, and it’s a global leader in protecting individual patient privacy. The company uses a wide variety of privacy-enhancing technologies and safeguards to protect individual privacy while generating and analyzing the information that helps their customers drive human health outcomes forward.
MuleSoft’s Anypoint Platform powers our client's Integration Platform as a Service (iPaaS) capability, which allows life sciences organizations to connect company-wide data and applications in a standard and streamlined manner.
The client's iPaaS technology (powered by MuleSoft) is integral to their offering released in December 2017 that connects marketing, sales, and other business functions to enable a new commercial model of orchestration in life sciences.
The client's platform was built on strong fundamental principles such as flexibility, throughput, and scalability, and it quickly made onboarding new clients to the new offering straightforward. Onboarding involves huge amounts of data synchronization, through both an initial data load and real-time synchronization capabilities, all through the same APIs built on MuleSoft Anypoint Platform.
As new clients started adopting the platform (multi-tenancy) across global markets and integrating the new systems with the client's internal systems, clients in certain regions raised concerns about data reliability and scalability.
Our client partnered with AVIO and MuleSoft to perform a health check and third-party analysis of the infrastructure, inspect the API layout, and address the concerns their clients had raised.
AVIO, in collaboration with MuleSoft, focused on evaluating the platform's infrastructure, architecture, API design, coding standards, deployment pipelines, and best practices.
AVIO focused on specific use case health-checks, starting with individual application code fitness, followed by a holistic examination of these assets operating in concert.
- Development best practices, utilizing AVIO mule-linter to automate static code analysis.
- Does a README.md exist? (linter)
- Does it minimally describe the service and how to execute tests?
- Are naming conventions followed for files, flows, and component configurations? (linter)
- Property usage: are any properties hard-coded rather than externalized?
- Flow structure: are sub-flows leveraged in orchestration components, e.g. choice routers, foreach, etc.? Using inline components in lieu of flow references complicates testing, and is generally an indication that the MUnit tests may not be in good health.
- Flow reference doc:name matches the flow name, i.e. the Mule UI should give a clear indication of the flow being referenced rather than a narrative description.
- Is a category configured for logging? (linter)
- Project structure: common configuration, does the application use global error handling? (linter)
- MUnit coverage configuration. (linter)
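Several of these checks can be illustrated with a minimal, hypothetical Mule 4 flow sketch; the flow names, property keys, and logger category below are illustrative, not taken from the client's codebase:

```xml
<!-- Hypothetical fragment illustrating several of the linter checks above:
     externalized properties, a logger category, and a flow-ref (whose
     doc:name matches the target flow name) instead of inline
     orchestration logic. -->
<flow name="get-patient-records-flow">
    <http:listener config-ref="httpListenerConfig" path="${api.base.path}/patients"/>
    <logger level="INFO" category="com.example.patient-api"
            message="Received request for patient records"/>
    <!-- Orchestration delegated to a sub-flow keeps this flow testable -->
    <flow-ref name="lookup-patient-records-sub-flow"
              doc:name="lookup-patient-records-sub-flow"/>
</flow>

<sub-flow name="lookup-patient-records-sub-flow">
    <!-- choice routers, foreach, etc. live here, not inline in the parent flow -->
</sub-flow>
```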
- Dependency review
- Are the dependencies up to date? If not, review issues fixed in newer versions to correlate potential fixes for client issues.
- Integration/orchestration pattern review
- Synchronous vs asynchronous components. Is there an opportunity to use parallel processing to improve response times? E.g. foreach vs parallel-foreach.
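When the iterations are independent of one another, the sequential/parallel trade-off above can be sketched as follows (a hypothetical Mule 4 fragment; the collection expression and sub-flow name are placeholders):

```xml
<!-- Sequential: each record is processed one at a time -->
<foreach collection="#[payload.records]">
    <flow-ref name="enrich-record-sub-flow"/>
</foreach>

<!-- Parallel: independent iterations run concurrently, which can
     shorten overall response time when the work is I/O-bound -->
<parallel-foreach collection="#[payload.records]" maxConcurrency="4">
    <flow-ref name="enrich-record-sub-flow"/>
</parallel-foreach>
```

Parallel processing only helps when iterations do not depend on each other's results and the downstream systems can absorb the concurrent load.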
- Error handling framework
- Is a global error handler configured and cohesive with the APIkit error handler?
- Is there the potential for errors to propagate unhandled, and if so, is this appropriate for the flow?
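A minimal sketch of the pattern being checked for, assuming a Mule 4 application (handler name and logger category are hypothetical):

```xml
<!-- Hypothetical global error handler registered as the application
     default, so errors that escape a flow's own (or APIkit's) handler
     are still logged and mapped consistently instead of propagating
     unhandled. -->
<error-handler name="global-error-handler">
    <on-error-propagate type="HTTP:CONNECTIVITY">
        <logger level="ERROR" category="com.example.errors"
                message="#['Connectivity failure: ' ++ (error.description default '')]"/>
    </on-error-propagate>
    <on-error-propagate type="ANY">
        <logger level="ERROR" category="com.example.errors"
                message="#['Unhandled error: ' ++ (error.description default '')]"/>
    </on-error-propagate>
</error-handler>

<configuration defaultErrorHandler-ref="global-error-handler"/>
```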
- Logging (level of logging and volume of data logged)
- E.g. the internal workings of the client's custom logger are asynchronous; however, the connector itself is still synchronous. This has the potential to introduce errors into business flows.
- Caching implementation
- Is caching used? Are distinct strategies used? Where should caching be used?
- Look for areas that stray from the principle of single-responsibility.
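The "distinct strategies" question can be made concrete with a hypothetical Mule 4 cache scope; the strategy name, key expression, and sub-flow are placeholders:

```xml
<!-- Hypothetical cache scope with its own caching strategy. Giving each
     data shape a distinct strategy (and key expression) avoids key
     collisions between unrelated lookups sharing one cache. -->
<ee:object-store-caching-strategy name="referenceDataCachingStrategy"
        keyGenerationExpression="#[attributes.queryParams.code]"/>

<flow name="get-reference-data-flow">
    <ee:cache cachingStrategy-ref="referenceDataCachingStrategy">
        <!-- expensive backend lookup executed only on a cache miss -->
        <flow-ref name="fetch-reference-data-sub-flow"/>
    </ee:cache>
</flow>
```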
- Analyze reusable APIs for better throughput
- Leverage Anypoint Monitoring to find endpoints and/or flows that are heavily utilized and have unreasonable response times (>5000ms).
- MUnit coverage
- Verified that all tests executed without issue and documented the coverage percentage.
- Tuning with thread profiles
- Some core services are Mule 3 and, depending on the components/scopes used (request-reply, scatter-gather, etc.), may require tuning.
- Does the vCore size support the load on the application? I.e., will an application with default threading fully consume the heap under heavy load?
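For the Mule 3 services, this kind of tuning is expressed through threading profiles. A hypothetical sketch (the numbers are illustrative, not recommendations; appropriate values depend on the vCore size and measured load):

```xml
<!-- Hypothetical Mule 3 threading tune: capping active threads keeps a
     heavily loaded application from exhausting the heap on a small
     vCore size, at the cost of queueing requests under bursts. -->
<configuration>
    <default-threading-profile maxThreadsActive="32"
                               maxThreadsIdle="8"
                               maxBufferSize="16"
                               poolExhaustedAction="WAIT"/>
</configuration>
```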
- Performance testing
- For applications identified as heavily utilized and/or suffering from slow response times.
- Application construction recommendations through AVIO’s proprietary Mule Linter.
- Compiled digital artifacts identifying recommended improvements for each application, categorized by risk and timeline.
- Started the process of cataloging recommendations as Jira tasks.
- Identified high volume core applications that need deeper analysis and executed on the analysis.
- Identified applications for migration to Mule 4.
- Identified a DataWeave coding practice in a core service that was significantly impacting response time and memory utilization.
- Significant performance improvements for some core infrastructure endpoints.
- Defined a practical approach to performance testing.
- Delivered baseline performance metrics for poor-performing endpoints, both before and after the recommended enhancements.
Additionally, to achieve transparency within the client's platform and to better serve its clients, AVIO is assisting with designing and implementing two new initiatives within the client's organization: Record Traceability/Reprocessing and Monitoring. Both are central to supporting their proprietary MuleSoft platform environment, as well as supporting clients with accurate, timely details should questions arise.