
Log4j2 SQS Appender

The Log4j2 SQS appender is a customized appender for Log4j2. An appender is a destination for log events; it can be simple (e.g. a console) or complex (e.g. an RDBMS). A layout determines the way log events are presented, while filters select events based on predefined criteria.

Log4j2 is the newest version of the Log4j framework. When this appender is used with Mule or Java applications, it pushes application logs to the AWS SQS queue that has been specified.

For decades, Log4j has been one of the most popular frameworks for logging events in the Java ecosystem, owing its popularity to its simplicity and flexibility. Log4j2, an improvement on Log4j, is bound to be even more popular.

Configuration of Log4j2

When inserting log requests into your application code, take the time to plan your process. Keep in mind that around 4% of your code will be dedicated to logging; even for relatively small applications, that can amount to thousands of logging statements. For this reason, it makes sense to manage log statements without manual modifications. You can accomplish Log4j2 configuration in any of these ways:

  • Using a configuration file in XML, YAML, JSON, or properties format.

  • Programmatically, by developing a ConfigurationFactory and Configuration implementation.

  • Programmatically, by calling the APIs exposed by the Configuration interface to add components to the default configuration.

  • Programmatically, by calling methods on the internal Logger class.

Here’s what a basic log4j2.xml configuration file looks like:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
  <Appenders>
    <Console name="STDOUT" target="SYSTEM_OUT">
      <PatternLayout pattern="%d %-5p [%t] %C{2} %m%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="debug">
      <AppenderRef ref="STDOUT"/>
    </Root>
  </Loggers>
</Configuration>

 

This log4j2.xml configuration has a root logger and a console appender. The PatternLayout specifies the format to use when writing log statements.

To debug the loading of log4j2.xml itself, you can add a status attribute to the Configuration tag: status="<TRACE | DEBUG | INFO | WARN | ERROR | FATAL>".

You can also add a monitorInterval attribute so that Log4j2 rechecks the configuration file and reloads it if it has changed. For instance, to check every 30 seconds, use monitorInterval="30".
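Putting both attributes together, the opening of the configuration file might look like this (a minimal sketch based on the earlier example; DEBUG and 30 are just sample values):

<?xml version="1.0" encoding="UTF-8"?>
<!-- status="DEBUG" makes Log4j2 report its own initialization steps;
     monitorInterval="30" rechecks this file for changes every 30 seconds -->
<Configuration status="DEBUG" monitorInterval="30">
  <Appenders>
    <Console name="STDOUT" target="SYSTEM_OUT">
      <PatternLayout pattern="%d %-5p [%t] %C{2} %m%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="debug">
      <AppenderRef ref="STDOUT"/>
    </Root>
  </Loggers>
</Configuration>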


Examples of Appenders

Appenders are given names so that loggers can reference them. Here are some of the appenders Log4j2 provides:

AsyncAppender

This appender accepts references to other appenders and writes LogEvents to them on a separate thread.
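For instance, an Async appender wrapping the console appender from the earlier example might be configured like this (a sketch; the appender names are illustrative):

<Appenders>
  <Console name="STDOUT" target="SYSTEM_OUT">
    <PatternLayout pattern="%d %-5p [%t] %m%n"/>
  </Console>
  <!-- Events sent to "async" are handed off to a separate thread,
       which forwards them to STDOUT -->
  <Async name="async">
    <AppenderRef ref="STDOUT"/>
  </Async>
</Appenders>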

SocketAppender

This is an OutputStreamAppender whose output is written to a remote destination specified by a host and port.
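A minimal sketch (the host, port, and layout are placeholders to adapt to your environment):

<!-- Sends each log event over TCP to a remote collector -->
<Socket name="socket" host="logs.example.com" port="12345" protocol="TCP">
  <PatternLayout pattern="%d %-5p [%t] %m%n"/>
</Socket>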

JDBCAppender

JDBCAppender uses standard JDBC to write log events to a relational database table.
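A sketch along the lines of the Log4j2 manual's example; the JNDI name, table, and column names are assumptions for illustration:

<!-- Maps each log event to a row in the APPLICATION_LOG table -->
<JDBC name="databaseAppender" tableName="APPLICATION_LOG">
  <DataSource jndiName="java:/comp/env/jdbc/LoggingDataSource"/>
  <Column name="EVENT_DATE" isEventTimestamp="true"/>
  <Column name="LEVEL" pattern="%level"/>
  <Column name="LOGGER" pattern="%logger"/>
  <Column name="MESSAGE" pattern="%message"/>
</JDBC>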

FlumeAppender

The FlumeAppender sends LogEvents as serialized Avro events to a Flume agent for consumption.
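A sketch modeled on the Log4j2 manual; the agent host, port, and layout values are placeholders:

<!-- Forwards events as Avro to the Flume agent at the given address -->
<Flume name="eventLogger" compress="true">
  <Agent host="192.168.10.101" port="8800"/>
  <RFC5424Layout enterpriseNumber="18060" includeMDC="true" appName="MyApp"/>
</Flume>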

NoSQLAppender

The NoSQLAppender uses internal lightweight provider interfaces to write log events to NoSQL databases.
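A sketch using the MongoDB provider from the Log4j2 manual (the exact provider element varies by Log4j2 version, e.g. MongoDb3 or MongoDb4 in later releases; server and credentials are placeholders):

<!-- Writes each event as a document in the applicationLog collection -->
<NoSql name="databaseAppender">
  <MongoDb databaseName="applicationDb" collectionName="applicationLog"
           server="mongo.example.org" username="loggingUser" password="secret"/>
</NoSql>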

Using log4j2.xml for Logging to the Console

You can use the following log4j2.xml file to log to the console:

log4j2.xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="INFO">
    <Appenders>
        <Console name="console" target="SYSTEM_OUT">
            <PatternLayout
                pattern="[%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n" />
        </Console>
    </Appenders>
    <Loggers>
        <Root level="debug" additivity="false">
            <AppenderRef ref="console" />
        </Root>
    </Loggers>
</Configuration>

 

It is important to note that if no configuration file can be located, the DefaultConfiguration is used, which sends logging output to the console.

Logging in CloudHub

When deploying Mule apps to CloudHub, you need not worry about logging configuration. CloudHub automatically writes logs to its console, lets you search them by level and keyword, and makes it possible to change an application's root log level on the fly.

So, why is it not a good idea to use CloudHub default logging?

When you deploy Mule apps to CloudHub, logging capabilities such as persistence and visibility are quite limited, especially when your applications generate substantial amounts of logs.

Before you settle on long-term use of default CloudHub logging, consider the following:

  • CloudHub persists logs for only 30 days or up to 100 MB, whichever comes first.

  • You lose all of an application's logs once you delete the application.

  • To view logs, you have to log in to Anypoint Platform and open each application's CloudHub console separately.

  • Shipping logs to your preferred system is not easy; you can only access them through the REST API.

  • You will not have a single place to analyze all your application logs, especially if you are using an API-led architecture.

For these reasons, you need a logging framework capable of generating logs in a structured, consistent manner. You can then feed all the logs into ELK or any other external log analyzer for efficient data analysis.

Shipping Logs to an External System

The first thing you want to do is raise a MuleSoft support ticket so that the MuleSoft team can disable CloudHub logging. This will allow you to customize your configuration.

Next, set up the Elastic Stack (or another log analyzer) for log data analysis. Keep in mind that even with custom logging enabled in CloudHub, it is still possible to change the log level of the application's loggers on the fly using the CloudHub API.

After disabling CloudHub logging, there are several ways to ship an application's logs to an external system:

  • Use the Socket appender, or another Log4j2 appender that can post logs to a predefined destination.

  • Use the CloudHub API that MuleSoft provides to retrieve the logs and send them to your destination.

  • Use your preferred Log4j2 appender to push your CloudHub application logs to an AWS SQS queue, as the sketch below shows. This method is often preferred because the SQS queue can scale as log volume increases, which ensures no log messages are lost.
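As an illustration, a log4j2.xml wired up to an SQS appender might look like the sketch below. The SQS element and its attributes (awsRegion, queueName, and the credential lookups) are assumptions for illustration only; check the documentation of the specific SQS appender you use for its exact configuration:

log4j2.xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="INFO">
  <Appenders>
    <!-- Hypothetical SQS appender: pushes each log event to the named queue.
         Attribute names are illustrative; consult your appender's docs. -->
    <SQS name="sqs" awsRegion="us-east-1" queueName="app-log-queue"
         awsAccessKey="${env:AWS_ACCESS_KEY}" awsSecretKey="${env:AWS_SECRET_KEY}">
      <PatternLayout pattern="[%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n"/>
    </SQS>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="sqs"/>
    </Root>
  </Loggers>
</Configuration>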

AVIO’s expertise in this space is the easiest, most efficient way to take advantage of the opportunities the MuleSoft platform offers for business acceleration.

AVIO Consulting provides the formula to accelerate digital evolution and innovation. We offer thought leadership and proven practices for modern software development as well as a highly productive delivery team focused on enterprise integration with the MuleSoft platform.
