To measure and track the effectiveness, and by extension the ROI, of the MuleSoft Anypoint Platform within an organization, AVIO Consulting recommends quantitatively tracking three areas: delivery metrics, execution metrics, and observability.
1. Delivery Metrics
Many clients engage with AVIO because they are not confident about how consistently their project teams leverage MuleSoft. When determining how effectively development teams are operating, most of the metrics to evaluate are not MuleSoft-specific, but rather complementary to Mule.
Organizations that have established development best practices and consistent delivery processes have a much higher rate of successful projects: they ensure consistency between project teams, reduce technical debt, and maintain a high velocity of delivery. The key delivery metrics organizations need to define and measure within their software development processes include:
- Use case standardization
- Best practices consistency
- Automated test coverage
- Automated pipelines
When use cases are captured accurately and consistently between projects, velocity increases because each new project doesn’t need to come up with its own version of a use case. By standardizing on a use case format that contains a functional requirement, acceptance criteria, and a definition of done, teams can move from requirement to implementation more rapidly.
Furthermore, when teams use a standardized set of best practices, code reviews are more efficient, technical debt is decreased, and automated tools, such as linters, can be utilized to ensure code quality. Automated test code coverage should also be measured to ensure future enhancements and changes can be made without the need for a significant manual testing effort. Finally, by standardizing on a branching strategy and utilizing automated pipelines to promote code between environments, organizations can more efficiently and effectively move features and bug fixes to production.
When modern application development practices are followed, the key overall metric, development cycle time, begins to improve. Organizations that continue to accelerate their software development through automation and standardization will see trust between the business and IT skyrocket. Accelerated timelines are driven not only by standardized software development best practices but also by project teams leveraging the next key metric to capture: API reuse.
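As a rough illustration, development cycle time can be derived from pipeline timestamps. The work item IDs, dates, and record layout below are hypothetical, not from any specific CI/CD tool; real data would come from your pipeline or issue-tracking system.

```python
from datetime import datetime
from statistics import median

# Hypothetical pipeline records: (work item id, first commit time, production deploy time).
pipeline_events = [
    ("API-101", "2024-03-01T09:00", "2024-03-04T15:00"),
    ("API-102", "2024-03-02T10:00", "2024-03-09T11:00"),
    ("API-103", "2024-03-05T08:00", "2024-03-06T17:00"),
]

def cycle_time_days(start: str, end: str) -> float:
    """Elapsed days between first commit and production deployment."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 86400

times = [cycle_time_days(start, end) for _, start, end in pipeline_events]
print(f"Median cycle time: {median(times):.1f} days")
```

Tracking the median (rather than the mean) keeps one unusually long project from masking an overall improvement trend.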
2. Execution Metrics
Once an organization is able to promote a MuleSoft application into production, there are a set of key execution MuleSoft Anypoint platform metrics that AVIO recommends organizations track and monitor. These metrics are some of the easiest to quantify and communicate as they are obtained directly from the execution of APIs and flows within a MuleSoft Anypoint platform. The operations team responsible for managing the performance and scalability of the MuleSoft Anypoint platform will need to ensure they are capturing the following execution metrics:
- API Performance
  - Response rate
  - Error rate
- Errors and Exceptions
  - Runtime exceptions
  - Incidents reported by consumers
- API Usage
  - Direct/indirect value
  - Unique consumers
  - API invocations
API Performance Metrics
After deploying APIs to a production environment, the operations team needs to ensure the APIs are highly performant and respond accordingly. If response rate metrics are not captured and analyzed for each API, it becomes difficult to understand if SLAs are being met and if APIs are performant.
Therefore, it is important to monitor latency for each individual API call. If latency is not measured, it becomes very difficult to pinpoint where in an API's flow a bottleneck occurs. A well-architected MuleSoft Anypoint Platform with well-written APIs can handle a high throughput of requests.
To understand how efficiently the Anypoint platform is processing API requests, the throughput of the platform needs to be monitored as well. AVIO recommends utilizing a dashboard to visually represent the API performance metrics so that they are easily distributed and communicated to the development teams, the operations team, as well as interested stakeholders.
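A minimal sketch of the two calculations above, latency percentiles and throughput, computed from access-log entries. The log format, API names, and latency values are invented for illustration; a production platform would pull these from gateway or APM logs rather than compute them by hand.

```python
from datetime import datetime

# Hypothetical access-log entries: (timestamp, API name, latency in ms).
requests = [
    ("2024-04-01T10:00:01", "orders-api", 120),
    ("2024-04-01T10:00:02", "orders-api", 95),
    ("2024-04-01T10:00:03", "orders-api", 430),
    ("2024-04-01T10:00:04", "customers-api", 88),
    ("2024-04-01T10:00:05", "customers-api", 105),
]

def percentile(values, pct):
    """Nearest-rank percentile, a simple approximation used by many dashboards."""
    ranked = sorted(values)
    idx = max(0, int(round(pct / 100 * len(ranked))) - 1)
    return ranked[idx]

latencies = [ms for _, _, ms in requests]
p95 = percentile(latencies, 95)

# Throughput: requests per second over the observed window.
fmt = "%Y-%m-%dT%H:%M:%S"
timestamps = [datetime.strptime(t, fmt) for t, _, _ in requests]
window = (max(timestamps) - min(timestamps)).total_seconds() or 1
throughput = len(requests) / window
print(f"p95 latency: {p95} ms, throughput: {throughput:.2f} req/s")
```

Percentiles (p95, p99) surface slow outliers that an average latency hides, which is why most dashboards chart them alongside throughput.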
Error and Exception Metrics
The ability of an API to properly handle and report errors is critical to the success of an API implementation. One of the most important error metrics to track is the error rate. Error rates need to be evaluated and proactively addressed to ensure APIs are not buggy and error prone. Non-2xx status codes should be captured and investigated to determine whether the API was poorly written or whether there are other issues to address, such as a bot probing an API for an exploit. Error logs need to be proactively monitored, ideally in an automated manner, so that issues can be found, triaged, and resolved quickly.
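The error-rate calculation itself is simple, as the sketch below shows. The status codes are made up, and the 1% alert threshold is an assumption for illustration, not a MuleSoft default; each organization should set its own threshold per API.

```python
from collections import Counter

# Hypothetical per-request status codes for one API over a monitoring window.
status_codes = [200, 200, 201, 500, 200, 404, 200, 200, 200, 503]
ERROR_RATE_THRESHOLD = 0.01  # assumed threshold: alert when over 1% of calls fail

counts = Counter(status_codes)
errors = sum(n for code, n in counts.items() if code >= 400)  # 4xx and 5xx
error_rate = errors / len(status_codes)

print(f"Error rate: {error_rate:.1%}")
if error_rate > ERROR_RATE_THRESHOLD:
    print("ALERT: error rate above threshold; investigate logs for 4xx/5xx causes")
```

Breaking the count down by status code (the `Counter` above) also helps triage: a spike in 404s suggests a bad consumer integration, while 5xx spikes point at the API implementation itself.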
An additional metric that should be captured is the number of reported issues submitted for an API. The submission of incidents for an API could point to a lack of proper documentation, a misunderstanding of the API capabilities, or the use of an API in a manner not consistent with the requirements. By evaluating the type and number of incidents submitted for an API, an organization can determine if changes are needed to better support the API consumers.
API Usage Metrics
Organizations are increasingly adopting API-first design principles and should track the business value of APIs through both direct and indirect value. Some APIs can have a direct impact on revenue, enabling sales via a third party for example, while other APIs have an indirect impact through reduction in cost, ease of use, or improved user experience.
One key metric to understanding how well APIs are decreasing development costs is the number of unique consumers for an API. Increasing unique consumers highlights that an API is providing value to the development community. Similarly, tracking API invocation will highlight APIs that are experiencing an increase in traffic. AVIO’s clients often invest in MuleSoft to achieve faster time to market for development projects by building a library of high value APIs. By tracking which APIs have an increase in unique consumers in addition to API calls, organizations can validate if developers are utilizing the organization’s library of APIs.
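Both usage metrics, unique consumers and invocation counts, can be aggregated from an invocation log in a few lines. The API and client names below are hypothetical; in practice client IDs would typically come from API Manager client-credential policies.

```python
from collections import defaultdict

# Hypothetical invocation log: (API name, client id).
invocations = [
    ("orders-api", "web-portal"),
    ("orders-api", "mobile-app"),
    ("orders-api", "web-portal"),
    ("customers-api", "mobile-app"),
    ("orders-api", "partner-x"),
]

consumers = defaultdict(set)   # API -> set of distinct client ids
calls = defaultdict(int)       # API -> total invocation count

for api, client in invocations:
    consumers[api].add(client)
    calls[api] += 1

for api in sorted(calls):
    print(f"{api}: {calls[api]} calls from {len(consumers[api])} unique consumers")
```

An API whose call count grows while its unique-consumer count stays flat is being used more heavily by the same teams; growth in both signals genuine reuse across the organization.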
3. Observability
Observability is the ability to measure the internal states of a system from knowledge of its external outputs. For APIs, these outputs could be logs, synthetic monitoring data, metrics, distributed traces, and more. As mentioned above, monitoring your MuleSoft Anypoint Platform execution metrics is important, and AVIO recommends clients configure dashboards and alerts to proactively notify them when something is wrong. Monitoring, however, requires predicting the types of problems you will see before you see them, to be sure you are monitoring the correct information.
Observability, by contrast, provides the capability to explore the data and find out what is going wrong. It is about gathering telemetry and logs so that even unanticipated issues can be identified. Understanding the current platform state through the evaluation of metrics, logs, and traces can help set the path for achieving consistent, predictable results. When evaluating the overall MuleSoft ROI, observability can provide context in a number of ways:
- Performance monitoring
- Operational improvements
- Infrastructure monitoring
- End-user experience improvements
- Business analytics
- Issues caught early in development
As mentioned above, performance, operational, and infrastructure metrics are critical measures in determining overall value when using MuleSoft. Capturing and aggregating the metrics via an observability strategy enables organizations to have real-time metrics on how well the MuleSoft platform is performing.
AVIO’s Mule 4 OpenTelemetry module is a custom MuleSoft extension developed for instrumenting MuleSoft applications to export tracing-specific telemetry data to any OpenTelemetry-compliant collector. Using the module allows Mule applications to take an active role in distributed tracing scenarios and be insightful, actionable participants in the overall distributed trace.
AVIO’s Mule 4 OpenTelemetry module is compatible with tools such as Splunk, Datadog, and Elastic and enables the log aggregation, metrics collection, and functional monitoring required to ensure MuleSoft APIs and flows are performing as expected.
By using the AVIO Mule 4 OpenTelemetry module, organizations can capture log events such as the start and end of APIs and flows, changes in important business values, and path execution, as well as collect application performance monitoring details such as JVM and server metrics. Finally, the module enables full tracing so that specific bottlenecks, both internal and external to Mule, can be identified and resolved in a timely manner.
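To make the "overall distributed trace" concrete: OpenTelemetry-compliant tools correlate spans across services through the W3C Trace Context `traceparent` header. The sketch below builds such a header with only the standard library to show its structure; a real Mule application would rely on the tracing module and SDK rather than hand-rolled IDs.

```python
import secrets

def make_traceparent() -> str:
    """Build a W3C Trace Context traceparent header: version-traceid-parentid-flags."""
    version = "00"
    trace_id = secrets.token_hex(16)   # 16 bytes -> 32 hex chars, shared by every span in the trace
    parent_id = secrets.token_hex(8)   # 8 bytes -> 16 hex chars, identifies this span
    flags = "01"                       # sampled flag set
    return f"{version}-{trace_id}-{parent_id}-{flags}"

header = make_traceparent()
print(header)  # e.g. 00-<32 hex chars>-<16 hex chars>-01
```

Because every service in a request path forwards this header (minting a new `parent_id` for its own span while preserving the `trace_id`), a collector can reassemble the end-to-end trace and pinpoint exactly which hop introduced a bottleneck.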
AVIO's Observability Package
Would you drive a car blindfolded? Of course not. So, why be in the dark on your Mule application’s performance and behavior?
Tracking and measuring MuleSoft ROI doesn’t have to be a labor-intensive effort. By tracking and monitoring delivery and execution metrics, coupled with an application observability strategy, organizations can quickly gain a full understanding of how effectively they are leveraging the MuleSoft Anypoint Platform.