
How to solve the problem of frequent full gc due to abnormal use of Java memory


Problem

A routine inspection found that one of our production applications was doing frequent full GCs.

Symptom

The production application kept triggering frequent full GCs.

Troubleshooting process

Analyze dump

Pull the dump file. A side note: if you specify :live when dumping, the JVM performs a full GC before taking the dump, and that full GC is printed in the GC log. When troubleshooting abnormal online memory usage that is not caused by a memory leak, this is inconvenient, because the forced GC changes the very memory picture you are trying to capture; it caused us to dump several times.
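For reference, the two flavours of the dump command look roughly like this (a sketch assuming jmap is used; <pid> is the target JVM's process id, and the :live form is the one that triggers the extra full GC):

```bash
# Dump only live objects: the JVM runs a full GC first, which shows up in the GC log
jmap -dump:live,format=b,file=heap-live.hprof <pid>

# Dump everything, including unreachable objects, without forcing a GC
jmap -dump:format=b,file=heap-all.hprof <pid>
```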

Analyze the dump file:

a. A large number of long[] arrays occupied the most space, which is abnormal.

b. Checking the GC roots showed that most of these long[] arrays are held by org.HdrHistogram.Histogram; each Histogram instance holds a long[] of size 2048.

c. Counting the Histogram instances showed roughly 50,000 of them, about 100 times more than in the heap of a normal project (see the footprint sketch after this list).

d. Another side note: I started with MAT, but the report MAT generates is geared toward leak analysis; for investigating abnormal memory usage it is not as convenient as jvisualvm or the IntelliJ IDEA profiler.
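As a rough back-of-the-envelope check of those numbers (a sketch, not the project's code; the constructor argument is an assumption, and it only needs the HdrHistogram library on the classpath): 2048 longs per counts array is about 16 KB, so ~50,000 instances add up to several hundred MB.

```java
import org.HdrHistogram.Histogram;

public class HistogramFootprintCheck {
    public static void main(String[] args) {
        // Illustrative histogram; the exact arguments Hystrix uses are an assumption here.
        Histogram histogram = new Histogram(2);

        // HdrHistogram can report its own estimated footprint, which is dominated by the
        // internal long[] counts array (2048 entries * 8 bytes ~= 16 KB).
        System.out.println("estimated bytes per instance: "
                + histogram.getEstimatedFootprintInBytes());

        // With ~50,000 live instances, the counts arrays alone add up to roughly:
        long approxTotalBytes = 50_000L * 2048 * 8;
        System.out.printf("~%.0f MB of long[] data%n", approxTotalBytes / (1024.0 * 1024.0));
    }
}
```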

Finding the cause

The abnormal memory usage can be reproduced by starting the application locally, so I started a service with normal memory alongside the problematic application and compared their memory.

I used the IntelliJ IDEA profiler here, which is very convenient.
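If a profiler is not at hand, a quick class-histogram comparison between the two local processes also works (a sketch of an alternative approach, assuming the JDK's jmap is available; <pid> is the target JVM's process id):

```bash
# Count Histogram and long[] ("[J") instances in the normal vs. the problematic service
jmap -histo <pid> | grep -iE 'Histogram|\[J'
```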

Finding the difference:

Compared with the normal application, the Histogram instances in the abnormal application have an extra incoming reference from rx.internal.operators.OnSubscribeReduceSeed$ReduceSeedSubscriber. I suspected this was the culprit: the abnormal reference keeps these instances from being collected in the young generation, so they accumulate in the old generation and trigger full GCs.
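A minimal sketch of the pattern behind that reference chain (RxJava 1.x's reduce-with-seed, whose internal subscriber is the ReduceSeedSubscriber seen above; the Histogram usage here is illustrative, not Hystrix's actual code): the subscriber holds the seed/accumulator object until the source observable completes.

```java
import java.util.concurrent.TimeUnit;

import org.HdrHistogram.Histogram;

import rx.Observable;

public class ReduceSeedSketch {
    public static void main(String[] args) throws InterruptedException {
        // reduce(seed, accumulator): the subscriber created for this operator keeps a
        // reference to the seed (here a Histogram) until the source stream completes.
        Observable.interval(100, TimeUnit.MILLISECONDS)
                .take(50)
                .reduce(new Histogram(2), (histogram, tick) -> {
                    histogram.recordValue(tick);
                    return histogram;
                })
                .subscribe(result ->
                        System.out.println("recorded " + result.getTotalCount() + " values"));

        Thread.sleep(6_000); // let the interval stream finish before the JVM exits
    }
}
```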

Investigating the difference:

A quick read of the relevant code did not reveal the reason, so I compared the two applications directly in the debugger.

The problematic application did enter the relevant code and added a reference to the Histogram, while the normal application never reached it, but simply staring at the code did not explain why. At that point I noticed the thread pool in the lower left corner of the debugger. It looked odd: it is the metrics thread pool.

Hystrix uses these metrics to collect indicators for its own dashboard, or for users to query, so they can see the circuit breaker's parameters and statistics.
Looking at the stack again showed how execution reaches this code: it comes from a Hystrix metrics stream. This stream counts system indicators per unit of time, and Hystrix uses the Histogram's long[] to implement a sliding-window-like mechanism for those per-unit-time statistics.

Hystrix uses Histogram to implement a bucketed sliding window that counts traffic per unit of time. With the metrics feature turned on, however, Hystrix also aggregates indicators over a longer time range, so an additional object holds references to many per-unit-time Histograms for that aggregation. Because those references cover a longer time range, they are held for a long time and the Histograms end up in the old generation. Essentially this is not a memory leak, which is why the instances can still be reclaimed by each full GC.
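A simplified sketch of the mechanism described above, with made-up names and sizes rather than Hystrix's real classes: each time bucket gets its own Histogram, and the longer-range aggregation keeps references to many buckets alive at once, so individual Histograms stay reachable long enough to be promoted.

```java
import java.util.ArrayDeque;
import java.util.Deque;

import org.HdrHistogram.Histogram;

/**
 * Illustrative bucketed sliding window: one Histogram per time bucket, plus an
 * aggregate over the whole window. Names and sizes are assumptions, not Hystrix code.
 */
public class RollingLatencyWindow {
    private static final int BUCKETS = 60;          // e.g. 60 one-second buckets
    private final Deque<Histogram> buckets = new ArrayDeque<>();

    /** Called once per time unit: open a new bucket, drop the oldest one. */
    public void roll() {
        buckets.addLast(new Histogram(2));
        if (buckets.size() > BUCKETS) {
            buckets.removeFirst();                  // only now does the old Histogram become garbage
        }
    }

    /** Record one measurement into the current bucket. */
    public void record(long latencyMillis) {
        if (!buckets.isEmpty()) {
            buckets.getLast().recordValue(latencyMillis);
        }
    }

    /** Aggregate over the whole window: every bucket stays referenced until it rolls out. */
    public Histogram snapshot() {
        Histogram total = new Histogram(2);
        for (Histogram bucket : buckets) {
            total.add(bucket);
        }
        return total;
    }
}
```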

Solution

My first reaction, after seeing the differences above and the odd thread pool, was to turn off the metrics so that the application would not add references through this code path. According to the official documentation this feature is enabled by default, and it only affects the indicator statistics, not the circuit breaker itself, so it can be switched off with the configuration hystrix.metrics.enabled=false.
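In a Spring Cloud / Hystrix application this is a one-line configuration change (shown below as an application.properties entry, which is an assumption about the config format; the equivalent YAML key works the same way):

```properties
# Turn off Hystrix metrics polling/aggregation; circuit breaking itself is unaffected.
hystrix.metrics.enabled=false
```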

After adding the configuration, I verified the heap again: the references were back to normal, and the number of Histogram instances no longer grew over time. After releasing the change to production and observing for a while, the full GC problem was indeed gone.

Root Cause

When I found and verified the fix, I did not have time to study why, with hystrix.metrics.enabled defaulting to true, other applications did not show the same full GC problem. The plan was to fix the issue first and then follow up on the root cause, to prevent the same problem from appearing in other projects.

The suspicious thread pool found earlier is HystrixMetricsPoller. Inspection showed that it is created by the HystrixMetricsPollerConfiguration class, whose activation depends mainly on hystrix.metrics.enabled, which defaults to true. So why is it not enabled in other projects?
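The guard follows the standard Spring Boot conditional-configuration pattern, roughly like the sketch below (based on the behaviour described above, not a copy of the spring-cloud source; matchIfMissing = true is what makes the default effectively "on"):

```java
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Configuration;

// Sketch of the conditional guard described above; the real
// HystrixMetricsPollerConfiguration lives in spring-cloud-netflix and may differ in detail.
@Configuration
@ConditionalOnProperty(value = "hystrix.metrics.enabled", matchIfMissing = true)
public class MetricsPollerConfigurationSketch {
    // ... would schedule the HystrixMetricsPoller thread pool when the property is absent or true
}
```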

Searching the source code showed that enabling this configuration class is tied to an annotation.

Comparing the code showed that only the abnormal application uses this annotation, @EnableCircuitBreaker, whose purpose is to turn on the circuit breaker.

Further research showed, however, that even without this annotation, features such as circuit breaking are still available. The reason is that, in later spring-cloud versions, Spring wraps OpenFeign with Hystrix rather than integrating the whole Hystrix system; spring-cloud may also have noticed the memory problems with Hystrix.

So in higher versions (at least in ours), Feign's circuit breaking is switched on and off through feign.hystrix.enabled (if this switch is off, merely adding the @EnableCircuitBreaker annotation will not make the circuit breaker take effect).

In fact, in higher versions of spring-cloud the @EnableCircuitBreaker annotation has been deprecated; but because we are on an intermediate version, we hit the situation where it is neither marked as deprecated nor actually useful.

In short, Feign's circuit breaking is controlled only by feign.hystrix.enabled; adding the @EnableCircuitBreaker annotation merely turns on all of Hystrix's other features, such as the metrics.
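For our version, then, the relevant switches boil down to something like this (a sketch of a typical configuration, assuming Spring Cloud OpenFeign with Hystrix):

```properties
# Feign circuit breaking is governed by this switch alone.
feign.hystrix.enabled=true
# Keep the Hystrix metrics poller off so Histogram instances do not pile up in the old generation.
hystrix.metrics.enabled=false
```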


