A practical guide to efficiently sending Java application logs to the ELK Stack

This article shows how to ship logs from Java applications (Spring Boot with slf4j) running in Docker containers directly to the ELK Stack (Elasticsearch, Logstash, Kibana). We will use Filebeat as the log shipper, avoiding local log files and enabling efficient log transmission and analysis.
Reasons to choose Filebeat
In a microservice architecture, efficient log management is crucial. Sending logs directly to the ELK Stack via HTTP is a feasible solution, but using Filebeat is often a better choice. Filebeat is a lightweight log collector specially designed for safe and reliable transmission of log data. It has the following advantages:
- Reliability: Filebeat guarantees at-least-once delivery, so log events are not lost even if the network connection is interrupted (duplicates are possible, but data loss is not).
- Low resource usage: Filebeat has a small footprint, so its impact on Java application performance is negligible.
- Simple configuration: Filebeat's configuration is relatively simple and easy to integrate into Docker containers.
- Deep integration with ELK Stack: Filebeat is tightly integrated with Elasticsearch and Logstash, making it easy to send log data to ELK Stack for processing and analysis.
Implementation steps
Here are the detailed steps to send Java application logs to ELK Stack:
1. Configure log output for the Java application
Make sure your Java application uses slf4j as the logging facade with a concrete implementation such as Logback or Log4j2, and configure it to write logs to the console (standard output). This is key because Docker captures a container's standard output and standard error streams by default.
For example, using Logback, you can configure the following in the logback.xml file:
```xml
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```
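Every field in this pattern will later have to be parsed back out by Logstash, so it helps to know exactly what one formatted line looks like. Here is a stdlib-only sketch that reproduces the pattern's output (this is an illustration, not Logback itself; the class and method names are hypothetical):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class LogLineDemo {
    // Mirrors the Logback pattern: %d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg
    static String format(LocalDateTime ts, String thread, String level, String logger, String msg) {
        String stamp = ts.format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS"));
        // %-5s left-pads the level to 5 characters, just like %-5level in Logback
        return String.format("%s [%s] %-5s %s - %s", stamp, thread, level, logger, msg);
    }

    public static void main(String[] args) {
        System.out.println(format(LocalDateTime.of(2024, 1, 2, 3, 4, 5, 6_000_000),
                "main", "INFO", "com.example.App", "started"));
    }
}
```

Note the double space after `INFO` in the output: it comes from the `%-5level` padding, and the Grok pattern in the Logstash section has to tolerate it.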
2. Configure Filebeat
Run Filebeat in your Docker environment, either installed into an image via a Dockerfile or, more commonly, as a dedicated container.
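As one possible sketch, assuming you build a dedicated Filebeat container on top of the official image (the file layout follows that image's conventions; Filebeat refuses to start if its config file is writable by other users):

```dockerfile
# Hypothetical Dockerfile for a dedicated Filebeat container
FROM docker.elastic.co/beats/filebeat:7.17.0
COPY filebeat.yml /usr/share/filebeat/filebeat.yml
USER root
# Filebeat requires the config file not to be writable by group/others
RUN chown root:filebeat /usr/share/filebeat/filebeat.yml \
    && chmod go-w /usr/share/filebeat/filebeat.yml
USER filebeat
```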
Filebeat's configuration file (filebeat.yml) needs to be modified accordingly. Here is an example configuration:
```yaml
filebeat.inputs:
  - type: docker
    containers.ids: '*'
    # Uncomment to specify which containers' logs to read
    #containers.ids: ["container_id1", "container_id2"]

# Processors enrich or manipulate events generated by the input
processors:
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_host_metadata: ~

output.logstash:
  hosts: ["logstash:5044"]  # Replace with your Logstash host and port
```
- type: docker tells Filebeat to read logs from Docker containers.
- containers.ids: '*' tells Filebeat to monitor the logs of all containers; you can list specific container IDs instead if needed.
- output.logstash.hosts specifies the address of Logstash. You need to replace this with your actual Logstash hostname and port.
3. Docker Compose configuration (optional)
If your ELK Stack and Java applications are running in Docker containers, you can use Docker Compose to manage them. Here is a sample docker-compose.yml file:
```yaml
version: "3.7"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    ports:
      - "9200:9200"
    environment:
      - "discovery.type=single-node"
  logstash:
    image: docker.elastic.co/logstash/logstash:7.17.0
    ports:
      - "5044:5044"
    depends_on:
      - elasticsearch
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
  kibana:
    image: docker.elastic.co/kibana/kibana:7.17.0
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
  java-app:
    build: ./java-app
    depends_on:
      - elasticsearch
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.17.0
    volumes:
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml
      - /var/lib/docker/containers:/var/lib/docker/containers:ro  # Mount the Docker container logs
    depends_on:
      - java-app
      - logstash
```
Note: a Dockerfile must be present in the java-app directory.
You need to create the logstash.conf and filebeat.yml files and place them in the same directory as the docker-compose.yml file.
4. Logstash configuration (optional)
Logstash can be used to parse and transform log data. If your log format is complex, you can use Logstash to extract key information. The following is a sample logstash.conf file:
```
input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} \[%{DATA:thread}\] %{LOGLEVEL:level} %{DATA:logger} - %{GREEDYDATA:message}" }
    overwrite => ["message"]  # Replace the raw line with the extracted message body
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]  # Replace with your Elasticsearch host and port
    index => "java-app-%{+YYYY.MM.dd}"
  }
}
```
- input.beats.port specifies the port on which Logstash listens for log data sent by Filebeat.
- filter.grok.match uses a Grok pattern to parse log messages; adjust the expression to match your log format.
- output.elasticsearch.hosts specifies the address of Elasticsearch.
- output.elasticsearch.index specifies the index name of Elasticsearch.
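Grok names like TIMESTAMP_ISO8601 and LOGLEVEL are shorthand for regular expressions, so you can sanity-check the structure of the pattern against a sample line before deploying it. The sketch below uses simplified stand-ins for those patterns (approximations, not the official Grok definitions):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GrokCheck {
    // Simplified stand-ins for TIMESTAMP_ISO8601, DATA, LOGLEVEL, GREEDYDATA.
    // The \s+ after the level absorbs the padding produced by %-5level.
    static final Pattern LINE = Pattern.compile(
        "(?<timestamp>\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}\\.\\d{3}) " +
        "\\[(?<thread>[^\\]]+)\\] " +
        "(?<level>TRACE|DEBUG|INFO|WARN|ERROR)\\s+" +
        "(?<logger>\\S+) - (?<msg>.*)");

    public static void main(String[] args) {
        Matcher m = LINE.matcher("2024-01-02 03:04:05.006 [main] INFO  com.example.App - started");
        if (m.matches()) {
            System.out.println(m.group("level") + " from " + m.group("logger"));
        }
    }
}
```

If a line fails to match in production, Logstash tags the event with _grokparsefailure, which is the first thing to search for when fields come out empty in Kibana.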
5. Start the ELK Stack and the Java application
Use the docker-compose up command to start the ELK Stack and Java applications.
6. Verify log data
In Kibana, create an index pattern (e.g. java-app-*, matching the index name configured in Logstash), then search and analyze the log data from your Java application.
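Before building visualizations, you can confirm that documents are actually arriving with a quick query in Kibana's Dev Tools console. This sketch assumes the Logstash output shown earlier; the level field only exists if the grok filter ran successfully:

```json
GET java-app-*/_search
{
  "size": 1,
  "sort": [{ "@timestamp": { "order": "desc" } }],
  "query": { "match": { "level": "INFO" } }
}
```

If this returns zero hits, check the Filebeat and Logstash logs before debugging Kibana itself.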
Things to note
- Make sure Filebeat can access the Docker container's log directory. Typically, you need to mount the /var/lib/docker/containers directory into the Filebeat container.
- Modify Logstash's Grok expression according to your log format.
- Configure other options for Filebeat and Logstash according to your needs.
- Regularly check the Filebeat and Logstash logs to ensure they are functioning properly.
- Consider using container orchestration tools such as Kubernetes to manage ELK Stack and Java applications.
Summary
Using Filebeat to send Java application logs to the ELK Stack is an efficient and reliable method. By configuring Filebeat and Logstash, you can easily collect, parse, and analyze log data of Java applications to better understand the running status of the application. Remember to adjust the configuration to your specific needs and monitor the health of your system regularly.
The above is the detailed content of A practical guide to efficiently sending Java application logs to the ELK Stack. For more information, please follow other related articles on the PHP Chinese website!