
Set up SLF4J with logback's AsyncAppender

This post is a quick overview of how to set up the Simple Logging Facade for Java (SLF4J) with logback's AsyncAppender in a Maven project.

Dependencies

Add the logging facade slf4j-api and the implementation logback-classic to the project's pom.xml file. At runtime the API detects and binds the implementation automatically; no further configuration is needed in that regard.

<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-api</artifactId>
  <version>1.7.32</version>
</dependency>

<dependency>
  <groupId>ch.qos.logback</groupId>
  <artifactId>logback-classic</artifactId>
  <version>1.2.10</version>
</dependency>

Configuring logback

By default logback reads its configuration from a file named logback.xml at the root of the classpath, but we could also set the path explicitly with a Java system property:

java -Dlogback.configurationFile=/path/to/logback.xml -jar myapp.jar

For a Maven project the simplest option is to drop the file in the src/main/resources folder, which Maven packages at the root of the classpath.

Let's start out with this logback.xml sample file and then go over the reasoning behind the configuration, centered mostly on the use of the AsyncAppender:

<?xml version="1.0" encoding="UTF-8" ?>
<configuration>

    <appender name="rolling" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${catalina.base}/logs/myapp.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>${catalina.base}/logs/myapp.%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
            <maxFileSize>100MB</maxFileSize>
            <maxHistory>200</maxHistory>
            <totalSizeCap>5GB</totalSizeCap>
        </rollingPolicy>
        <encoder>
            <pattern>%date{ISO8601} %-5level %thread %logger %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="async" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="rolling" />
        <discardingThreshold>0</discardingThreshold>        
        <queueSize>256</queueSize>
        <neverBlock>false</neverBlock>
        <includeCallerData>false</includeCallerData>
    </appender>

    <root level="info">
        <appender-ref ref="async"/>
    </root>

</configuration>

In this sample configuration the root logger references the AsyncAppender which in turn references the RollingFileAppender.

Performance by default over compliance?

Logback by default drops log events under certain circumstances. That is unlikely to be acceptable in most organisations, but fortunately it is easy to change. First, let's understand why it happens.

Behind the scenes the AsyncAppender uses Java's BlockingQueue. When an application thread logs an event, it actually puts the event on a queue; a worker thread then takes the event off the queue and passes it to the next appender, which typically performs some blocking I/O operation, like writing the event to disk.

Through this decoupling the application thread does not need to wait for the worker thread to complete the I/O operation.
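The decoupling can be sketched with plain java.util.concurrent primitives. This is a simplified model of the mechanism, not logback's actual implementation; the event strings and the "write to disk" step are stand-ins:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class AsyncSketch {
    // Bounded queue, analogous to AsyncAppender's default queueSize of 256.
    static final BlockingQueue<String> queue = new ArrayBlockingQueue<>(256);

    public static void main(String[] args) throws InterruptedException {
        // Worker thread: drains the queue and performs the blocking I/O.
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String event = queue.take(); // blocks while the queue is empty
                    System.out.println("write to disk: " + event);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();

        // Application thread: returns as soon as the event is queued,
        // without waiting for the I/O to complete.
        queue.put("user logged in");
        Thread.sleep(100); // give the worker a moment to drain the queue
    }
}
```

The application thread only pays the cost of the queue insertion; the slow disk write happens on the worker thread.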

However, the AsyncAppender is only asynchronous as long as the queue is not full. By default logback therefore discards log events below WARN level when the queue has 20% or less capacity remaining. From a pure performance point of view this is a good idea, but it is hard to square with the logging policies of most organisations, especially under the EU's General Data Protection Regulation (GDPR). As a consequence we set discardingThreshold to 0 to avoid discarding log events.

The AsyncAppender also has an option named "neverBlock": when true, log events are dropped rather than blocking the application thread when the queue is completely full. The sample configuration sets it explicitly to false, although this is also the default.
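The default discard rule and the blocking behaviour can be modelled roughly as follows. This is a simplified sketch of the behaviour described above, not logback's actual source; the queue of strings and the append method are illustrative stand-ins:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class DiscardSketch {
    enum Level { TRACE, DEBUG, INFO, WARN, ERROR }

    static final int QUEUE_SIZE = 256;
    // The default discardingThreshold is queueSize / 5, i.e. 20% of capacity.
    static final int DISCARDING_THRESHOLD = QUEUE_SIZE / 5;

    static final BlockingQueue<String> queue = new ArrayBlockingQueue<>(QUEUE_SIZE);

    // Rough model of the appender's enqueue step: drop sub-WARN events when
    // 20% or less capacity remains; otherwise block until space frees up
    // (neverBlock=false). Returns true if the event was enqueued.
    static boolean append(Level level, String msg) throws InterruptedException {
        boolean nearlyFull = queue.remainingCapacity() <= DISCARDING_THRESHOLD;
        boolean discardable = level.compareTo(Level.WARN) < 0;
        if (nearlyFull && discardable) {
            return false; // discarded; discardingThreshold=0 disables this branch
        }
        queue.put(level + " " + msg); // blocks only when the queue is completely full
        return true;
    }
}
```

Setting discardingThreshold to 0 makes the nearlyFull branch unreachable, so every event is enqueued, at the price of blocking the application thread when the queue fills up.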

Dealing with bursts and slow-downs

Most of the time the size of the BlockingQueue doesn't matter much, as long as the worker I/O thread can keep up with the logging throughput. But it does matter in temporary situations where the I/O worker thread cannot keep up.

This might be caused by unusual load on the disk that forces the I/O worker thread to slow down - the disk is a shared resource after all. Or the application may hit an unusual situation that triggers a burst of log events.

If these types of slow-downs and bursts harm the responsiveness of the application, then try increasing the queueSize from its default of 256 to, say, 1024 so that it can better soak up the spikes. It won't make the sustained I/O of the RollingFileAppender any faster, of course. Another option is to quell loggers that are bursty, or that spam technical detail at INFO level, by lowering their specific log level.
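Both tweaks go in logback.xml. Compared to the sample configuration above, this fragment enlarges the queue and turns down one noisy logger (the logger name com.example.chatty is a placeholder for whichever logger is bursty in your application):

<appender name="async" class="ch.qos.logback.classic.AsyncAppender">
    <appender-ref ref="rolling" />
    <discardingThreshold>0</discardingThreshold>
    <queueSize>1024</queueSize>
</appender>

<logger name="com.example.chatty" level="warn" />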

Finally, the option "includeCallerData" determines whether data that is expensive to extract, such as the caller's class name, is included in the log event passed to the next appender. If possible, reference only "cheap" data. The sample configuration sets includeCallerData explicitly to false, which is also the default.
