Reputation: 2939
In some circumstances I need to force logback's file appender to flush immediately. I've found in the docs that this option is enabled by default. Mysteriously, it doesn't work. As far as I can see in the sources, the underlying process involves BufferedOutputStream correctly. Are there any known issues with BufferedOutputStream.flush()? This is probably where the flushing problem lies.
Update: I ran into the issue on Windows XP Pro SP 3 and on Red Hat Enterprise Linux Server release 5.3 (Tikanga). I used these libs:
jcl-over-slf4j-1.6.6.jar
logback-classic-1.0.6.jar
logback-core-1.0.6.jar
slf4j-api-1.6.6.jar
The logback.xml is:
<configuration>
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>/somepath/file.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
            <fileNamePattern>file.log.%i</fileNamePattern>
            <minIndex>1</minIndex>
            <maxIndex>3</maxIndex>
        </rollingPolicy>
        <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
            <maxFileSize>5MB</maxFileSize>
        </triggeringPolicy>
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>
        </encoder>
    </appender>
    <root level="debug">
        <appender-ref ref="FILE"/>
    </root>
</configuration>
Updated: I'd provide a unit test, but that doesn't seem so simple. Let me describe the issue more clearly. immediateFlush is true by default, so flush() is invoked explicitly after each event. Yet the event only appears in the file a bit later, once some underlying buffer has overflowed. So the question is: does the output stream guarantee immediate flushing?
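To illustrate the distinction the question hinges on: flush() only empties JVM-side buffers (and is effectively a no-op on an unbuffered FileOutputStream); it never forces the operating system to write out its page cache, whereas FileDescriptor.sync() does. A minimal sketch, with an arbitrary file name:

import java.io.FileOutputStream;
import java.io.IOException;

public class FlushVsSync {
    public static void main(String[] args) throws IOException {
        try (FileOutputStream fos = new FileOutputStream("flush-test.log", true)) {
            // write() hands the bytes to the OS, which may keep them
            // in its page cache for a while; flush() adds nothing here
            fos.write("some event\n".getBytes());
            fos.flush();
            // sync() (an fsync) forces the OS to commit the buffered
            // data and metadata to the physical device
            fos.getFD().sync();
        }
    }
}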
To be honest, I've already solved this by implementing my own ImmediateRollingFileAppender that leverages FileDescriptor's facility for immediate syncing. Anyone interested can follow this approach.
So this is not a logback issue.
Upvotes: 19
Views: 20763
Reputation: 33
Much monitoring middleware detects new events by checking for updates to the file's timestamp and size. For that to work, the timestamp and file size need to be updated at the moment the event is recorded. This class could solve it:
import java.io.IOException;
import ch.qos.logback.core.FileAppender;
import ch.qos.logback.core.recovery.ResilientFileOutputStream;

public class MyFileAppender<E> extends FileAppender<E> {
    @Override
    protected void writeOut(E event) throws IOException {
        super.writeOut(event);
        // flush the stream, then force the channel's content and
        // metadata (size, timestamp) out to disk
        ResilientFileOutputStream resilientFos = (ResilientFileOutputStream) getOutputStream();
        resilientFos.flush();
        resilientFos.getChannel().force(true);
    }
}
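Note that force(true) also forces the file's metadata (including size and last-modified time) to the storage device, which is exactly what timestamp-watching middleware relies on; force(false) would only guarantee the content.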
Set the MyFileAppender class for the appender:
<appender name="FILE" class="MyFileAppender">
I hope that logback itself solves this problem some day.
Upvotes: -1
Reputation: 10361
You have done a good job, well done. Here is a proposal to make it more concise:
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import ch.qos.logback.classic.encoder.PatternLayoutEncoder;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.recovery.ResilientFileOutputStream;

public class ImmediateFlushPatternLayoutEncoder extends PatternLayoutEncoder {
    @Override
    public void doEncode(ILoggingEvent event) throws IOException {
        super.doEncode(event);
        // after each encoded event, sync the underlying file descriptor
        if (isImmediateFlush() && outputStream instanceof ResilientFileOutputStream) {
            OutputStream os = ((ResilientFileOutputStream) outputStream).os;
            if (os instanceof FileOutputStream) {
                ((FileOutputStream) os).getFD().sync();
            }
        }
    }
}
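For context: getFD().sync() maps to the operating system's fsync call, so every logged event pays for a device round-trip. That is the unavoidable price of the guarantee, here and in the other answers.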
In configuration, you have to use that specific encoder:
<encoder class="ch.qos.logback.core.recovery.ImmediateFlushPatternLayoutEncoder">
Not tested. There will probably be visibility issues with the fields, requiring you to place the class in logback's own ch.qos.logback.core.recovery package.
By the way, I invite you to submit a bug report to logback asking for an additional immediateSync option on LayoutWrappingEncoder.
Upvotes: 1
Reputation: 2939
I decided to share my solution with everybody. Let me clarify first of all that this is neither a logback issue nor a JRE problem. The behavior is described in the javadoc and generally shouldn't be a problem, unless you are faced with some old-school integration solution that relies on file syncing.
So this is a logback appender implemented to flush immediately:
import java.io.File;
import java.io.IOException;
import ch.qos.logback.core.rolling.RollingFileAppender;
import ch.qos.logback.core.util.FileUtil;

public class ImmediateFileAppender<E> extends RollingFileAppender<E> {

    @Override
    public void openFile(String fileName) throws IOException {
        synchronized (lock) {
            File file = new File(fileName);
            if (FileUtil.isParentDirectoryCreationRequired(file)) {
                boolean result = FileUtil.createMissingParentDirectories(file);
                if (!result) {
                    addError("Failed to create parent directories for [" + file.getAbsolutePath() + "]");
                }
            }
            // install the syncing stream instead of logback's default ResilientFileOutputStream
            ImmediateResilientFileOutputStream resilientFos = new ImmediateResilientFileOutputStream(file, append);
            resilientFos.setContext(context);
            setOutputStream(resilientFos);
        }
    }

    @Override
    protected void writeOut(E event) throws IOException {
        super.writeOut(event);
    }
}
This is the corresponding output stream utility class. Because some methods and fields of the original ResilientOutputStreamBase, which I initially intended to extend, have package-private access modifiers, I had to extend OutputStream instead and copy the unchanged rest of ResilientOutputStreamBase and ResilientFileOutputStream into the new class. I show only the changed code:
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class ImmediateResilientFileOutputStream extends OutputStream {

    // ... the unchanged code merged from ResilientOutputStreamBase and
    // ResilientFileOutputStream (the file field, postSuccessfulWrite(),
    // postIOFailure(), etc.) goes here ...

    protected FileOutputStream os;

    public FileOutputStream openNewOutputStream() throws IOException {
        return new FileOutputStream(file, true);
    }

    @Override
    public void flush() {
        if (os != null) {
            try {
                os.flush();
                os.getFD().sync(); // this is what makes the difference
                postSuccessfulWrite();
            } catch (IOException e) {
                postIOFailure(e);
            }
        }
    }
}
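Keep in mind that syncing the file descriptor on every flush is effectively one fsync per event, which is much slower than the default buffered behavior, so it is worth reserving this appender for the files that integration actually watches.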
And finally the config:
<appender name="FOR_INTEGRATION" class="package.ImmediateFileAppender">
<file>/somepath/for_integration.log</file>
<rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
<fileNamePattern>for_integration.log.%i</fileNamePattern>
<minIndex>1</minIndex>
<maxIndex>3</maxIndex>
</rollingPolicy>
<triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
<maxFileSize>5MB</maxFileSize>
</triggeringPolicy>
<encoder>
<pattern>%d{HH:mm:ss.SSS} - %msg%n</pattern>
<immediateFlush>true</immediateFlush>
</encoder>
</appender>
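For a quick end-to-end check, assuming the appender above is attached to a logger (the logger name and message below are arbitrary), the written line should be physically on disk by the time the call returns:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SyncedLoggingDemo {
    private static final Logger log = LoggerFactory.getLogger(SyncedLoggingDemo.class);

    public static void main(String[] args) {
        // with ImmediateFileAppender, flush() syncs the file descriptor,
        // so this event should hit the disk before info() returns
        log.info("integration event");
    }
}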
Upvotes: 12