
Understand Java log levels, duplicate recording, and log loss issues

coldplay.xixi
Release: 2020-12-11 17:26:34


1 Common causes of log errors

1.1 There are many log frameworks

Different class libraries may use different logging frameworks, and compatibility between them is a problem.

1.2 The configuration is complex and error-prone

Log configuration files are usually complicated. Many developers are used to copying configuration files directly from other projects or blog posts without studying how to adjust them. Common mistakes show up in duplicate logging, the performance of synchronous logging, and misconfigured asynchronous logging.

1.3 There are some misunderstandings in logging itself

For example, the cost of obtaining log content is not considered, the log level is used indiscriminately, etc.

2 SLF4J

Logback, Log4j, Log4j2, commons-logging, JDK’s own java.util.logging, etc. are all log frameworks of the Java system, and there are indeed many. Different class libraries may also choose to use different logging frameworks. As a result, unified management of logs becomes very difficult.

SLF4J (Simple Logging Facade For Java) provides a unified logging facade API to solve this problem. It consists of three parts (the colors refer to the figure):

  • The unified logging facade API itself (the purple part): the neutral API that application code logs against.
  • Bridging (the blue part): bridges the APIs of the various logging frameworks (the green part) to the SLF4J API. This way, even if a program records logs through several different logging APIs, they can all eventually be funneled into the SLF4J facade.
  • Adaptation (the red part): binds the SLF4J API to an actual logging framework (the gray part). SLF4J is only a logging standard, so it still needs a real logging framework underneath; a framework that does not itself implement the SLF4J API needs an adapter module for the conversion. Logback is implemented directly against the SLF4J API standard, so no binding module is needed.

Although log4j-over-slf4j bridges Log4j to SLF4J, and slf4j-log4j12 adapts SLF4J to Log4j, the two must never be used at the same time, otherwise an infinite loop occurs. The same applies to the corresponding JCL and JUL modules.
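As a quick illustration, here is a minimal sketch of logging through the SLF4J facade; the class name is made up for the example, and which framework actually writes the log depends entirely on the binding found on the classpath:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SLF4JDemo {
    // the facade API: no reference to Logback, Log4j, etc.
    private static final Logger log = LoggerFactory.getLogger(SLF4JDemo.class);

    public static void main(String[] args) {
        log.info("Hello from SLF4J, answer = {}", 42);
    }
}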

Although the figure shows four gray logging implementation frameworks, Logback and Log4j are the most commonly used in daily business development, and both were written by the same author. Logback can be considered an improved version of Log4j; it is the more recommended choice and is basically the mainstream today.

Spring Boot’s logging framework is also Logback. So why can we use Logback directly without manually introducing the Logback package?

Because the spring-boot-starter module depends on the spring-boot-starter-logging module, which automatically introduces logback-classic (containing both SLF4J and the Logback framework) and some SLF4J adapters. Among them, log4j-to-slf4j bridges the Log4j2 API to SLF4J, and jul-to-slf4j bridges the java.util.logging API to SLF4J.

3 Duplicate log recording

Duplicate log recording not only brings unnecessary trouble to log viewing and statistical work, but also increases the burden on the disk and log collection system.

Logger configuration inheritance leads to duplicate log recording

  • Define a method that records debug, info, warn and error logs
  • Logback configuration (the method and configuration are sketched after this list)
  • The configuration looks fine, yet executing the method produces duplicate log records
  • Analysis

    The CONSOLE Appender is mounted on two loggers at the same time: the custom <logger> and <root>. Since the custom <logger> inherits from <root>, the same log is recorded by the custom logger, then passed up to root and recorded again, so logs under the application package appear twice.
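A minimal sketch of the method and configuration described above (the package name follows the article's example; the log pattern is an assumption):

@Slf4j
public class DemoService {
    public void test() {
        log.debug("debug");
        log.info("info");
        log.warn("warn");
        log.error("error");
    }
}

<configuration>
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>
    <!-- the custom logger mounts CONSOLE ... -->
    <logger name="org.javaedge.time.commonmistakes.logging" level="DEBUG">
        <appender-ref ref="CONSOLE"/>
    </logger>
    <!-- ... and so does root: every log under the package is recorded twice -->
    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
    </root>
</configuration>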

What was the original intention of such a configuration?

The goal was a custom logger configuration so that logs in the application package could temporarily be recorded at DEBUG level. In fact, there is no need to mount the Appender a second time; just remove the Appender mounted under the custom <logger>:

<logger name="org.javaedge.time.commonmistakes.logging" level="DEBUG"/>
If you do need the custom <logger> to send logs to a different Appender, for example:

  • application logs go to the file app.log
  • all other frameworks' logs go to the console

you can set the additivity attribute of the custom <logger> to false, so that the Appenders of <root> are no longer inherited:
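A minimal sketch of such a configuration, assuming a FILE appender writing to app.log (appender details and pattern are assumptions):

<appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>app.log</file>
    <encoder>
        <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
</appender>
<logger name="org.javaedge.time.commonmistakes.logging" level="DEBUG" additivity="false">
    <appender-ref ref="FILE"/>
</logger>
<root level="INFO">
    <appender-ref ref="CONSOLE"/>
</root>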

Incorrect configuration of LevelFilter causes duplicate logs

  • While recording logs to the console, also record logs to two files at different levels: info.log and error.log

  • Execution results

  • The info.log file contains logs of all three levels INFO, WARN and ERROR, not only INFO as expected

  • error.log contains both WARN and ERROR level logs, so those logs end up collected twice

  • Accident accountability

    Some companies use automated ELK solutions to collect logs, with logs output to both the console and files. When testing locally, developers do not pay attention to the logs written to files; and in test and production environments, developers have no access to the servers, so the duplication in the log files is hard to discover.

Why are the logs duplicated?

ThresholdFilter source code analysis

  • If the log level ≥ the configured level, it returns NEUTRAL and the next filter on the filter chain is called
  • Otherwise, it returns DENY, and the log is rejected outright

In this case, ThresholdFilter is set to WARN, so only WARN and ERROR level logs are recorded.
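For reference, the decision logic in ch.qos.logback.classic.filter.ThresholdFilter is roughly the following (simplified; comments added):

public FilterReply decide(ILoggingEvent event) {
    if (!isStarted()) {
        return FilterReply.NEUTRAL;
    }
    if (event.getLevel().isGreaterOrEqual(level)) {
        // at or above the configured level: stay neutral, let the next filter decide
        return FilterReply.NEUTRAL;
    } else {
        // below the configured level: reject the log outright
        return FilterReply.DENY;
    }
}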

LevelFilter

is used to compare the log level and then handle the event accordingly:

  • If the level matches, the action defined by onMatch is taken: by default the event is handed to the next filter (the default defined in the AbstractMatcherFilter base class)
  • Otherwise, the action defined by onMismatch is taken: by default the event is also handed to the next filter

Unlike ThresholdFilter, a LevelFilter with only level configured does nothing useful. Since the onMatch and onMismatch attributes were not configured, the filter is effectively a no-op, so logs of INFO level and above are all recorded.
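The matching logic in ch.qos.logback.classic.filter.LevelFilter is roughly as follows; both actions default to NEUTRAL in AbstractMatcherFilter, which is why an unconfigured filter lets everything through:

public FilterReply decide(ILoggingEvent event) {
    if (!isStarted()) {
        return FilterReply.NEUTRAL;
    }
    if (event.getLevel().equals(level)) {
        return onMatch;     // defaults to FilterReply.NEUTRAL
    } else {
        return onMismatch;  // defaults to FilterReply.NEUTRAL
    }
}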

Correction

Configure the onMatch attribute of LevelFilter to ACCEPT, meaning INFO level logs are accepted; configure the onMismatch attribute to DENY, meaning every level other than INFO is rejected:
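A sketch of the corrected appender, assuming the file name used above (the pattern is an assumption):

<appender name="INFO_FILE" class="ch.qos.logback.core.FileAppender">
    <file>info.log</file>
    <filter class="ch.qos.logback.classic.filter.LevelFilter">
        <level>INFO</level>
        <onMatch>ACCEPT</onMatch>
        <onMismatch>DENY</onMismatch>
    </filter>
    <encoder>
        <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
</appender>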


This way, the info.log file will contain only INFO level logs, with no more duplicates.

4 Does asynchronous logging improve performance?

Once you know how to output logs to files correctly, the next question is how to keep logging from becoming a system performance bottleneck: how to record logs when disk IO performance is poor (e.g., on mechanical disks) and the log volume is large.

Define the following log configuration with two Appenders:

FILE is a FileAppender used to record all logs;

CONSOLE is a ConsoleAppender used to record only logs carrying a "time" marker.

When a large number of logs go to one file, the file becomes very large, and if the performance-test result is mixed in among them, it is hard to find. So an EvaluatorFilter is used here to filter logs by marker, and the matching logs are routed separately to the console; the log line reporting the test result is given the "time" marker.

Markers and EvaluatorFilter are used together to filter logs by marker.
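A sketch of the CONSOLE appender described above, using Logback's OnMarkerEvaluator (the pattern is an assumption):

<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <filter class="ch.qos.logback.core.filter.EvaluatorFilter">
        <evaluator class="ch.qos.logback.classic.boolex.OnMarkerEvaluator">
            <marker>time</marker>
        </evaluator>
        <onMismatch>DENY</onMismatch>
        <onMatch>ACCEPT</onMatch>
    </filter>
    <encoder>
        <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
</appender>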

    Test code: record the specified number of large logs, each containing 1MB of simulated data, and finally record one method-execution-time log carrying the "time" marker (a sketch follows):
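A minimal sketch of such a test (class and method names are made up):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.Marker;
import org.slf4j.MarkerFactory;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class LogBenchmark {
    private static final Logger log = LoggerFactory.getLogger(LogBenchmark.class);

    public static void recordLogs(int count) {
        long begin = System.currentTimeMillis();
        // ~1MB of simulated payload per log line
        String payload = IntStream.rangeClosed(1, 1_000_000)
                .mapToObj(i -> "a").collect(Collectors.joining());
        IntStream.rangeClosed(1, count).forEach(i -> log.info("{} {}", i, payload));
        // only this line carries the "time" marker and reaches the console
        Marker timeMarker = MarkerFactory.getMarker("time");
        log.info(timeMarker, "took {} ms", System.currentTimeMillis() - begin);
    }
}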
After executing the program, recording 1,000 logs and 10,000 logs takes 5.1 seconds and 39 seconds respectively.

For code that does nothing but write logs to a file, this is far too long.

Source code analysis

FileAppender inherits from OutputStreamAppender.

When a log is appended, it is written directly into the OutputStream, i.e., logging is synchronous.
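The write path in ch.qos.logback.core.OutputStreamAppender looks roughly like this (simplified; comments added):

protected void subAppend(E event) {
    if (!isStarted()) {
        return;
    }
    try {
        if (event instanceof DeferredProcessingAware) {
            ((DeferredProcessingAware) event).prepareForDeferredProcessing();
        }
        // encode and write synchronously, on the calling (business) thread
        byte[] byteArray = this.encoder.encode(event);
        writeBytes(byteArray);
    } catch (IOException ioe) {
        this.started = false;
        addStatus(new ErrorStatus("IO failure in appender", this, ioe));
    }
}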

This is why writing logs in bulk takes so long. So how can large volumes of logs be written without noticeably increasing business-logic latency and hurting throughput?

AsyncAppender

Using Logback's AsyncAppender enables asynchronous logging.

AsyncAppender works like the decorator pattern: it adds new behavior to an Appender without changing its basic functionality, so it can be attached to any other Appender to make it asynchronous.

Define an asynchronous Appender ASYNCFILE that wraps the earlier synchronous FileAppender, and logs are then written to the file asynchronously (a sketch follows):
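A sketch of such a wrapper, assuming the FILE appender defined earlier:

<appender name="ASYNCFILE" class="ch.qos.logback.classic.AsyncAppender">
    <appender-ref ref="FILE"/>
</appender>
<root level="INFO">
    <appender-ref ref="ASYNCFILE"/>
</root>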

  • Recording 1,000 logs and 10,000 logs now takes 537 ms and 1,019 ms respectively

Is asynchronous logging really this fast? Not quite, because not all of the logs were actually recorded.

AsyncAppender pitfalls

  • asynchronous logging can blow up memory
  • asynchronous logging can lose logs
  • asynchronous logging can block

Case study

Simulate a slow logging scenario:
First, define MySlowAppender, a custom appender extending ConsoleAppender that writes logs to the console but sleeps 1 second per write (a sketch follows).
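A sketch of the slow appender described above (the sleep duration follows the text):

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.ConsoleAppender;
import java.util.concurrent.TimeUnit;

public class MySlowAppender extends ConsoleAppender<ILoggingEvent> {
    @Override
    protected void subAppend(ILoggingEvent event) {
        try {
            // simulate a slow destination: 1 second per log write
            TimeUnit.SECONDS.sleep(1);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        super.subAppend(event);
    }
}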

  • In the configuration file, wrap MySlowAppender in an AsyncAppender to log asynchronously

  • Test code

  • Execution is fast, but logs are lost: of the 1,000 logs that should have been recorded, only 215 can be found in the console, and the log line numbers show up as question marks.

  • Cause analysis
    AsyncAppender offers several configuration parameters, and they were not used correctly here.

Source code analysis

  • includeCallerData
    defaults to false: caller data such as method name and line number is not collected
  • queueSize
    controls the size of the blocking queue, an ArrayBlockingQueue with a default capacity of 256: at most 256 log events are held in memory
  • discardingThreshold
    the threshold for discarding logs, designed to prevent blocking once the queue fills up. By default, when the queue's remaining capacity drops below 20% of the queue size, TRACE, DEBUG and INFO level logs are discarded
  • neverBlock
    controls whether, when the queue is full, newly added events are simply dropped instead of blocking and waiting; defaults to false
    • when the queue is full: offer does not block, while put blocks
    • when neverBlock is true, offer is used
public class AsyncAppender extends AsyncAppenderBase<ILoggingEvent> {
    // whether to collect caller data (method name, line number, ...)
    boolean includeCallerData = false;

    protected boolean isDiscardable(ILoggingEvent event) {
        Level level = event.getLevel();
        // events at INFO level or below are discardable
        return level.toInt() <= Level.INFO_INT;
    }

    protected void preprocess(ILoggingEvent eventObject) {
        eventObject.prepareForDeferredProcessing();
        if (includeCallerData)
            eventObject.getCallerData();
    }
}

public class AsyncAppenderBase<E> extends UnsynchronizedAppenderBase<E> implements AppenderAttachable<E> {

    // the blocking queue: the core of asynchronous logging
    BlockingQueue<E> blockingQueue;
    // default queue size
    public static final int DEFAULT_QUEUE_SIZE = 256;
    int queueSize = DEFAULT_QUEUE_SIZE;
    static final int UNDEFINED = -1;
    int discardingThreshold = UNDEFINED;
    // when the queue is full: whether to drop new events instead of blocking
    boolean neverBlock = false;

    @Override
    public void start() {
       	...
        blockingQueue = new ArrayBlockingQueue<E>(queueSize);
        if (discardingThreshold == UNDEFINED)
            // the default discarding threshold is 20% of the queue size,
            // see isQueueBelowDiscardingThreshold
            discardingThreshold = queueSize / 5;
        ...
    }

    @Override
    protected void append(E eventObject) {
        // decide whether this event may be dropped
        if (isQueueBelowDiscardingThreshold() && isDiscardable(eventObject)) {
            return;
        }
        preprocess(eventObject);
        put(eventObject);
    }

    private boolean isQueueBelowDiscardingThreshold() {
        return (blockingQueue.remainingCapacity() < discardingThreshold);
    }

    private void put(E eventObject) {
        // neverBlock decides between the non-blocking offer and the blocking put
        if (neverBlock) {
            blockingQueue.offer(eventObject);
        } else {
            putUninterruptibly(eventObject);
        }
    }

    // add the event to the queue, blocking if necessary
    private void putUninterruptibly(E eventObject) {
        boolean interrupted = false;
        try {
            while (true) {
                try {
                    blockingQueue.put(eventObject);
                    break;
                } catch (InterruptedException e) {
                    interrupted = true;
                }
            }
        } finally {
            if (interrupted) {
                Thread.currentThread().interrupt();
            }
        }
    }
}

With the default queue size of 256, and logs at INFO level and below being discarded once the queue is 80% full, it is now clear why only two-hundred-odd INFO logs made it to the console.

queueSize too large

may cause OOM.

queueSize too small

The default of 256 is already quite small, and with discardingThreshold greater than 0 (or left at its default), logs at INFO level and below are discarded whenever the queue's remaining capacity falls below discardingThreshold. There are two traps here:

  1. Because of discardingThreshold, it is easy to get queueSize wrong.
    For example, with a peak of 1,000 concurrent logs as in this case, even setting queueSize to 1,000 still loses logs.
  2. discardingThreshold is easy to misread: it is not a percentage but a number of log events. For a queue with capacity 10,000, if you want discarding to start when the remaining capacity falls below 1,000, configure it as 1000.

neverBlock defaults to false

meaning blocking is always possible:

  • if discardingThreshold = 0, any log written while the queue is full blocks
  • if discardingThreshold != 0, only logs at INFO level and below are discarded, so a burst of error logs can still block

queueSize, discardingThreshold and neverBlock are closely interrelated; set them according to business needs:

  • if absolute performance comes first, set neverBlock = true so logging never blocks
  • if never losing data comes first, set discardingThreshold = 0 so that even logs at INFO level and below are kept; it is then best to enlarge queueSize, since the default is clearly too small and blocks too easily
  • for a compromise that may drop unimportant logs, enlarge queueSize and set a reasonable discardingThreshold

Those are the two most common pitfalls in log configuration. Now let's look at the misunderstandings in logging itself.

Do log placeholders remove the need to check the log level?

SLF4J's {} placeholder syntax obtains the actual parameters only when the log is actually recorded, which supposedly solves the performance problem of obtaining log data.
Is this correct?

  • Verification code: a slowString method that takes 1 second to return its result

If a DEBUG log is recorded while the logger is configured to record only levels >= INFO, will the program still take 1 second?
Three methods to test (see the sketch after this list):

  • concatenate strings to record slowString
  • use the placeholder syntax to record slowString
  • first check whether the DEBUG level is enabled
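A minimal sketch of the verification (class and method names are made up):

import java.util.concurrent.TimeUnit;
import lombok.extern.slf4j.Slf4j;

@Slf4j
public class PlaceholderTest {
    private String slowString(String s) {
        System.out.println("slowString called via " + s);
        try {
            TimeUnit.SECONDS.sleep(1); // simulate an expensive parameter
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "OK";
    }

    public void test() {
        // 1. string concatenation: slowString always runs
        log.debug("debug1:" + slowString("debug1"));
        // 2. placeholder: slowString still runs to build the argument
        log.debug("debug2:{}", slowString("debug2"));
        // 3. guarded by a level check: slowString is skipped when DEBUG is off
        if (log.isDebugEnabled())
            log.debug("debug3:{}", slowString("debug3"));
    }
}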


The first two methods both call slowString, so each takes 1 second. The placeholder method, although it lets you pass an Object without explicitly concatenating strings, merely defers the parameter's toString() call and the string concatenation until the log is actually recorded (and skips them otherwise).

In this case, unless the log level is checked in advance, slowString is always called.
Therefore, the {} placeholder cannot solve the performance problem of obtaining log data by deferring parameter evaluation.

Besides checking the log level in advance, you can also defer obtaining the parameter content with a lambda expression. However, SLF4J's API does not yet support lambdas, so you need to use the Log4j2 logging API instead, replacing Lombok's @Slf4j annotation with the @Log4j2 annotation, which provides logging methods that accept lambda (Supplier) parameters:

Call debug like this with the Supplier-based signature, and the parameters are not evaluated until the log actually needs to be recorded:
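A sketch of this usage, reusing the hypothetical slowString method from above; @Log4j2 injects an org.apache.logging.log4j.Logger, whose debug(String, Supplier<?>...) overload evaluates parameters lazily:

// slowString is only invoked if DEBUG logging is enabled
log.debug("debug4:{}", () -> slowString("debug4"));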



So debug4 does not call the slowString method.

Only the logging API is replaced with Log4j2; the actual logging still goes through Logback. This is the benefit of SLF4J adaptation.

Summary

  • SLF4J unifies the Java logging frameworks. When using SLF4J, it is important to understand its bridging and binding modules. If SLF4J error messages appear at program startup, it is likely a dependency configuration problem; Maven's dependency:tree command helps sort the dependencies out.
  • Asynchronous logging trades space for time to solve performance problems. But space is finite; once it is full, you must choose between blocking and waiting or discarding logs. If you would rather not discard important logs, choose blocking; if you would rather the program never block on logging, you have to discard logs.
  • The parameterized logging methods provided by logging frameworks cannot fully replace log-level checks. If your log volume is large and obtaining log parameters is expensive, check the log level first, so that you don't pay the cost of building parameters for logs that are never recorded.
