AVIO Consulting

Gaining Visibility into Mule Flows with ELK

Jul 26, 2016 | ELK, MuleSoft

Visibility is a concept that is never appreciated until you are troubleshooting an issue or trying to better understand what occurred within a process. What did the payload look like when it was received? What did it look like when it was sent? What errors occurred? Where exactly did the process fail? All of these questions, along with many others, are typical when analyzing a troubled process.

Within Mule flows, these questions can be answered quite easily with the help of ELK, ARM, and the Mule Agent.

ELK

ELK is an open source stack used to capture and index data, enabling searching along with the generation of graphs, reports, alerts, and dashboards. ELK is a combination of three tools: Elasticsearch, Logstash, and Kibana. Logstash collects and transforms data, Elasticsearch stores and indexes the data to make it searchable, and Kibana provides the ability to explore and visualize the data through dashboards.

ARM

Anypoint Runtime Manager (ARM) provides management capabilities for on-premise applications within the Anypoint Platform. One of these capabilities is configuring the location of the log file that Mule events are written to.

Mule Agent

The Mule Agent (included with the Mule ESB distribution 3.7 and above) is utilized by ARM, enabling the Anypoint Platform to communicate with an on-premise Mule server (to learn about installing and configuring it, see https://docs.mulesoft.com/runtime-manager/sending-data-from-arm-to-external-monitoring-software). It also provides event tracking capabilities within Mule flows (Mule events), giving real-time information about the messages being processed.

Once an on-premise server is registered within the Runtime Manager, logging of these Mule events can be configured. The configuration is global to the specified Mule server and identifies the log file on that server where the Mule events are written. To configure this, select the registered server in the Runtime Manager and select ‘Plugins’:

[Screenshot: selecting ‘Plugins’ for the registered server in Runtime Manager]

The location and name of the log file on the selected server are then specified:

[Screenshot: specifying the log file location and name for Mule events]

The level at which the Mule events will be captured is also configurable within ARM:

[Screenshot: selecting the tracking level for Mule events]

Each of these levels captures events as they occur within the message flow, with Business Events capturing the least and Debug capturing the most (use the latter with caution, due to its potential performance impact). These events are captured by the Mule Agent without the need to add any components to the flow. Depending on the tracking level chosen, the agent will capture the events within a flow as shown below:

[Screenshot: events captured within a flow at each tracking level]

The best part is that this configuration is done at runtime, without the need to redeploy or restart servers.

Once the configuration is completed within ARM, the events from the message flow are written on the Mule server to the log file location specified within ARM (e.g. $MULE_HOME/logs/events.log). These events are written to the file in JSON format:

{"application":"business-indicator-sample-1.0.0-SNAPSHOT",
"notificationType":"PipelineMessageNotification",
"action":"pipeline request message processing end",
"resourceIdentifier":"business-indicator-sampleFlow",
"source":"business-indicator-sampleFlow",
"muleMessage":"Message Two",
"path":null,
"annotations":null,
"muleMessageId":"4f996a80-4eaa-11e6-8bfb-7e4c20524153",
"rootMuleMessageId":"4f996a80-4eaa-11e6-8bfb-7e4c20524153",
"muleEventId":"0-4f996a82-4eaa-11e6-8bfb-e4c20524153",
"customEventProperties":null,
"customEventName":null,
"timestamp":"2016-07-20T11:46:46827-0700"}

What about ELK?

Now that the events are captured within a log file on the Mule server, Logstash can be used to monitor the file and capture the data within it. A Logstash configuration file defines the pipeline (input, filter, output). The example below is a Logstash configuration file where the input is the events.log that was configured within ARM. The filter is a grok pattern that parses the JSON written to events.log. The output is sent to Elasticsearch to enable easy searching.

input {
    file {
        path => "c:/mulesoft/mule-enterprise-standalone-3.7.3/logs/events.log"
    }
}
filter {
    grok {
        match => { "message" => '\{"application":"%{DATA:applicationName}","notificationType":"%{DATA:notification}","action":"%{DATA:actionName}","resourceIdentifier":"%{DATA:resource}","source":"%{DATA:sourceName}","muleMessage":"%{DATA:payload}","path":%{DATA:componentPath},"annotations":%{DATA:componentAnnotations},"muleMessageId":"%{DATA:muleId}","rootMuleMessageId":"%{DATA:rootId}","muleEventId":"%{DATA:eventId}","customEventProperties":%{DATA:customEventProps},"customEventName":%{DATA:customEvent},"timestamp":"%{TIMESTAMP_ISO8601:timestamp}"\}' }
    }
}
output {
    elasticsearch {}
}

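With the pipeline defined, Logstash simply needs to be started with the configuration file. Assuming the configuration above was saved as mule-events.conf (the file name is arbitrary):

bin/logstash -f mule-events.conf
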
Consider Filebeat, another open source tool, which can be used to forward events to a central ELK server, as sketched below.
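
A minimal Filebeat sketch for shipping events.log to a central Logstash instance, using recent Filebeat syntax (the path and host below are assumptions; adjust them for your environment):

filebeat.inputs:
  - type: log
    paths:
      - /opt/mule/logs/events.log

output.logstash:
  hosts: ["elk-server.example.com:5044"]

On the central server, the Logstash pipeline would then use a beats input listening on the same port in place of the file input shown earlier.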

Elasticsearch

As noted above, my configuration forwards the output to Elasticsearch. This piece of the ELK solution really makes life easier when troubleshooting. Instead of combing through endless log files to get a sense of what happened, Elasticsearch provides easy searching capabilities that give better insight into the Mule events that were captured. Using tools like Sense, a Kibana app that provides an interactive console, the captured Mule events can be queried, as in the example below.
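
For example, a query for all events captured from the sample flow might look like the following (the logstash-* index pattern is Logstash's default output index; the resource field name comes from the grok filter above):

GET logstash-*/_search
{
  "query": {
    "match": { "resource": "business-indicator-sampleFlow" }
  }
}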

API Access

Since configuration within ARM is at a global level (applying to all applications deployed on the Mule server), you might wonder whether the Mule Agent can be configured to capture events in a more targeted manner. Fortunately, the Mule Agent exposes APIs that provide the ability to change the logging levels for only certain applications, for example.

To enable access to these APIs, a configuration change is required to the Mule Agent on the Mule ESB server. Within the $MULE_HOME/conf/mule-agent.yml file, set the ‘enabled’ flag to true and specify the port within ‘transports’:

transports:
  rest.agent.transport:
    enabled: true
    port: 9997

Once this configuration change is completed, JSON requests can be made to the agent at runtime. The tracking API (mule.agent.tracking.service) provides functionality to ‘PATCH’ the current configuration, enabling the logging levels of Mule events to be configured at the application level, down to the flow level. For example, the global setting might be set to track at the Business Events level, but using the API, the configuration can be updated to add an additional logging level for a single flow, setting it to Debug, as sketched below. Further details can be found in the Mule Agent documentation.
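
A sketch of what such a request might look like follows. The endpoint path reflects the agent's general pattern of exposing each service's configuration over the REST transport, but the exact path, field names, and tracking level values are assumptions that should be verified against the Mule Agent documentation for your agent version; the application and flow names are taken from the earlier sample event:

curl -X PATCH http://localhost:9997/mule/agent/mule.agent.tracking.service/configuration \
  -H "Content-Type: application/json" \
  -d '{
        "trackedApplications": [
          {
            "appName": "business-indicator-sample",
            "trackingLevel": "BUSINESS_EVENTS",
            "flows": [
              { "flowName": "business-indicator-sampleFlow", "trackingLevel": "DEBUG" }
            ]
          }
        ]
      }'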

Using these tools will provide more insight and visibility into your Mule flows.
