Table of Contents
1. Scenario overview and data model
2. Build custom grouping keys
3. Detailed explanation of Stream API implementation steps
3.1 Prepare data
3.2 Flat processing (flatMap)
3.3 Filter
3.4 GroupingBy & counting
3.5 Conversion results (entrySet().stream().map)
3.6 Sorted
3.7 Collect results (toList)
4. Complete code example
5. Notes and Summary

Java 8 Streams: Implementing multi-condition filtering, grouping by month and counting statistics

Dec 01, 2025, 12:42 PM

This article explains in detail how to use the Java 8 Streams API to efficiently handle complex data aggregation requirements, including multi-condition filtering, grouping by date and month, and counting statistics for specific event types (such as JOIN/EXIT). By building custom grouping keys and chained Stream operations, the conversion from original data structures to structured statistical results is achieved, and complete code examples and key step analysis are provided.

1. Scenario overview and data model

In daily development, we often need to perform complex aggregation over data — for example, counting, per month, the number of people for each event type (such as "join" or "exit") from a set of personnel event records. This tutorial uses a concrete Java scenario to demonstrate how to achieve this with the Java 8 Stream API.

Suppose we have the following Person event data:

 import java.time.LocalDate;

// Define the event type enumeration
public enum State {
    JOIN, // join
    EXIT  // resign
}

// Person class: stores a single personnel event
public class Person {
    private String id;
    private String name;
    private String surname;
    private State event;         // Event type: JOIN or EXIT
    private Object value;        // Other values; not used in this example
    private LocalDate eventDate; // Event date

    public Person(String id, State event, LocalDate eventDate) {
        this.id = id;
        this.event = event;
        this.eventDate = eventDate;
    }

    // Getters
    public String getId() { return id; }
    public State getEvent() { return event; }
    public LocalDate getEventDate() { return eventDate; }

    // Add other methods as needed
}

Our original data may be stored in a Map&lt;String, List&lt;Person&gt;&gt;, where the key is the person id (pId) and the value is the list of Person events for that pId.

Ultimately we hope to obtain statistical results in the following form:

 Month   Info   Total Number
   1     JOIN        2
   1     EXIT        1
   2     EXIT        1
   3     JOIN        1

To store this result, we define a data transfer object (DTO):

 public class DTO {
    private int month;
    private State info;
    private int totalEmployees;

    public DTO(int month, State info, int totalEmployees) {
        this.month = month;
        this.info = info;
        this.totalEmployees = totalEmployees;
    }

    // Getters
    public int getMonth() { return month; }
    public State getInfo() { return info; }
    public int getTotalEmployees() { return totalEmployees; }

    @Override
    public String toString() {
        return "DTO{" +
               "month=" + month +
               ", info=" + info +
               ", totalEmployees=" + totalEmployees +
               '}';
    }
}

2. Build custom grouping keys

When performing grouped statistics, we need a key that uniquely identifies each group. In this example, we group by month and event type. We can create a custom class, or use a record (a standard feature since Java 16), to serve as this composite key. Using a record is the cleaner option:

 // Java 16+: use a record as the grouping key
public record MonthState(int month, State info) {}

// For Java 8-15, use a normal class and override equals() and hashCode()
// (remember to import java.util.Objects):
/*
public class MonthState {
    private int month;
    private State info;

    public MonthState(int month, State info) {
        this.month = month;
        this.info = info;
    }

    public int getMonth() { return month; }
    public State getInfo() { return info; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        MonthState that = (MonthState) o;
        return month == that.month && info == that.info;
    }

    @Override
    public int hashCode() {
        return Objects.hash(month, info);
    }
}
*/

A record automatically generates the canonical constructor, equals(), hashCode() and toString(), which makes it an ideal immutable data carrier.
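As a quick sketch of why the generated equals()/hashCode() matter: two record instances with the same field values behave as the same map key. The minimal MonthState and State below are hypothetical stand-ins mirroring the ones defined above.

```java
import java.util.HashMap;
import java.util.Map;

public class RecordKeyDemo {
    // Minimal stand-ins for the article's State enum and MonthState record
    enum State { JOIN, EXIT }
    record MonthState(int month, State info) {}

    static int distinctKeys() {
        Map<MonthState, Long> counts = new HashMap<>();
        // Two separate instances with equal fields hit the SAME map entry,
        // because the record's generated equals()/hashCode() compare by value
        counts.merge(new MonthState(1, State.JOIN), 1L, Long::sum);
        counts.merge(new MonthState(1, State.JOIN), 1L, Long::sum);
        counts.merge(new MonthState(1, State.EXIT), 1L, Long::sum);
        return counts.size();
    }

    public static void main(String[] args) {
        System.out.println(distinctKeys()); // 2 — not 3
    }
}
```

With a hand-written class that forgets to override equals()/hashCode(), the map would instead end up with three entries and every count would be 1.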

3. Detailed explanation of Stream API implementation steps

Now, we will build the Stream pipeline step by step to complete the data aggregation:

3.1 Prepare data

First, simulate some raw data:

 import java.time.LocalDate;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class StreamAggregationDemo {

    public static void main(String[] args) {
        // Simulate the original data
        Map<String, List<Person>> personListById = Map.of(
            "per1", List.of(new Person("per1", State.JOIN, LocalDate.of(2022, 1, 10))),
            "per2", List.of(new Person("per2", State.JOIN, LocalDate.of(2022, 1, 10))),
            "per3", List.of(
                new Person("per3", State.EXIT, LocalDate.of(2022, 1, 10)),
                new Person("per3", State.EXIT, LocalDate.of(2022, 2, 10))
            ),
            "per4", List.of(new Person("per4", State.JOIN, LocalDate.of(2022, 3, 10)))
        );

        // ... the Stream pipeline will be constructed here
    }
}

3.2 Flat processing (flatMap)

Since the original data is a Map&lt;String, List&lt;Person&gt;&gt;, we need to flatten the map's values (each a List&lt;Person&gt;) into a single stream of Person objects.

 personListById.values().stream() // Stream of all List<Person> values in the Map
    .flatMap(List::stream)       // Flatten each List<Person> into a Stream<Person>
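To see the flattening step in isolation, here is a minimal self-contained sketch (using String lists instead of Person lists, so it runs without the rest of the example):

```java
import java.util.List;
import java.util.Map;

public class FlatMapDemo {
    // Flatten Map<String, List<String>> values into one stream and count elements
    static long flatten(Map<String, List<String>> byId) {
        return byId.values().stream()
            .flatMap(List::stream) // Stream<List<String>> -> Stream<String>
            .count();
    }

    public static void main(String[] args) {
        Map<String, List<String>> data =
            Map.of("a", List.of("x", "y"), "b", List.of("z"));
        System.out.println(flatten(data)); // 3
    }
}
```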

3.3 Filter

Next, we only care about the two event types JOIN and EXIT.

    .filter(person -> person.getEvent() == State.EXIT || person.getEvent() == State.JOIN)
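If the set of interesting event types grows, chaining `||` conditions gets unwieldy. An alternative sketch (with a hypothetical third TRANSFER state added purely for illustration) uses an EnumSet and a method reference:

```java
import java.util.EnumSet;
import java.util.List;
import java.util.Set;

public class FilterDemo {
    // TRANSFER is a hypothetical extra state, not part of the article's model
    enum State { JOIN, EXIT, TRANSFER }

    static long countWanted(List<State> events) {
        // EnumSet membership scales better than a chain of || comparisons
        Set<State> wanted = EnumSet.of(State.JOIN, State.EXIT);
        return events.stream()
            .filter(wanted::contains)
            .count();
    }

    public static void main(String[] args) {
        List<State> events = List.of(State.JOIN, State.TRANSFER, State.EXIT);
        System.out.println(countWanted(events)); // 2 — TRANSFER is filtered out
    }
}
```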

3.4 GroupingBy & counting

This is the core step: we use the Collectors.groupingBy() method, which takes two parameters:

  1. Classifier function : used to extract grouping keys from elements in the stream. Here we use p -> new MonthState(p.getEventDate().getMonthValue(), p.getEvent()) to create MonthState as the grouping key.
  2. Downstream collector : used to aggregate elements in each group. Here we use Collectors.counting() to count the number of elements in each group.
 .collect(Collectors.groupingBy(
    p -> new MonthState(p.getEventDate().getMonthValue(), p.getEvent()), // Grouping key: month and event type
    Collectors.counting()                                                // Count the elements in each group
))

This step returns a Map&lt;MonthState, Long&gt;, where each key is a MonthState and each value is the number of Person events for that month/event combination.
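The grouping step can be tested on its own. The sketch below uses a stripped-down Event record in place of the full Person class, so it runs standalone:

```java
import java.time.LocalDate;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupingStepDemo {
    enum State { JOIN, EXIT }
    // Minimal stand-in for Person, keeping only the fields the grouping needs
    record Event(State event, LocalDate eventDate) {}
    record MonthState(int month, State info) {}

    static Map<MonthState, Long> group(List<Event> events) {
        return events.stream()
            .collect(Collectors.groupingBy(
                e -> new MonthState(e.eventDate().getMonthValue(), e.event()),
                Collectors.counting()));
    }

    public static void main(String[] args) {
        List<Event> events = List.of(
            new Event(State.JOIN, LocalDate.of(2022, 1, 10)),
            new Event(State.JOIN, LocalDate.of(2022, 1, 15)),
            new Event(State.EXIT, LocalDate.of(2022, 2, 1)));
        // Two JOIN events in January land under the same MonthState key
        System.out.println(group(events).get(new MonthState(1, State.JOIN))); // 2
    }
}
```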

3.5 Conversion results (entrySet().stream().map)

Now that we have a Map&lt;MonthState, Long&gt;, we need to convert it into the List&lt;DTO&gt; form we defined.

  1. entrySet().stream(): converts the Map's entry set into a Stream&lt;Map.Entry&lt;MonthState, Long&gt;&gt;.
  2. map(e -> new DTO(e.getKey().month(), e.getKey().info(), (int) (long) e.getValue())): maps each Map.Entry to a DTO object. Note that Collectors.counting() returns a Long, which needs to be converted to int.
 .entrySet().stream() // Convert the Map to a Stream<Map.Entry<MonthState, Long>>
.map(e -> new DTO(e.getKey().month(), e.getKey().info(), (int) (long) e.getValue())) // Map each entry to a DTO
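The `(int) (long)` double cast silently truncates if the count ever exceeds Integer.MAX_VALUE. A small sketch of the alternatives:

```java
public class CastDemo {
    // Math.toIntExact throws ArithmeticException on overflow
    // instead of silently producing a wrong value
    static int safeCast(long n) {
        return Math.toIntExact(n);
    }

    public static void main(String[] args) {
        Long count = 5L; // what Collectors.counting() produces
        int a = (int) (long) count;      // double cast, as in the article
        int b = safeCast(count);         // fails fast on overflow
        int c = count.intValue();        // silent truncation on overflow
        System.out.println(a + " " + b + " " + c); // 5 5 5
    }
}
```

For counts of people per month, overflow is not a realistic concern, but Math.toIntExact documents the intent at no extra cost.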

3.6 Sorted

To make the results more readable, we can sort the DTO list by month.

 .sorted(Comparator.comparing(DTO::getMonth)) // Sort by month
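Sorting by month alone leaves the order of rows within the same month unspecified (it depends on the HashMap's iteration order). If a fully deterministic order is wanted, a secondary key can be chained with thenComparing. A standalone sketch, using a hypothetical Row record in place of the DTO:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class SortDemo {
    // Hypothetical stand-in for the article's DTO
    record Row(int month, String info) {}

    static List<Row> sort(List<Row> rows) {
        // Primary key: month; tie-break: event type name
        return rows.stream()
            .sorted(Comparator.comparingInt(Row::month)
                              .thenComparing(Row::info))
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Row> sorted = sort(List.of(
            new Row(2, "EXIT"), new Row(1, "JOIN"), new Row(1, "EXIT")));
        System.out.println(sorted); // month 1 rows first, EXIT before JOIN
    }
}
```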

3.7 Collect results (toList)

Finally, collect all DTO objects in the Stream into a List.

 .toList(); // Java 16+; on Java 8-15 use .collect(Collectors.toList())

4. Complete code example

Combine all the above steps to get a complete Stream pipeline:

 import java.time.LocalDate;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Define the event type enumeration
enum State {
    JOIN, // join
    EXIT  // resign
}

// Person class: stores a single personnel event
class Person {
    private String id;
    private State event;         // Event type: JOIN or EXIT
    private LocalDate eventDate; // Event date

    public Person(String id, State event, LocalDate eventDate) {
        this.id = id;
        this.event = event;
        this.eventDate = eventDate;
    }

    public String getId() { return id; }
    public State getEvent() { return event; }
    public LocalDate getEventDate() { return eventDate; }
}

// Grouping key (record requires Java 16+)
record MonthState(int month, State info) {}

// Result DTO
class DTO {
    private int month;
    private State info;
    private int totalEmployees;

    public DTO(int month, State info, int totalEmployees) {
        this.month = month;
        this.info = info;
        this.totalEmployees = totalEmployees;
    }

    public int getMonth() { return month; }
    public State getInfo() { return info; }
    public int getTotalEmployees() { return totalEmployees; }

    @Override
    public String toString() {
        return "DTO{" +
               "month=" + month +
               ", info=" + info +
               ", totalEmployees=" + totalEmployees +
               '}';
    }
}

public class StreamAggregationDemo {

    public static void main(String[] args) {
        // Simulate the original data
        Map<String, List<Person>> personListById = Map.of(
            "per1", List.of(new Person("per1", State.JOIN, LocalDate.of(2022, 1, 10))),
            "per2", List.of(new Person("per2", State.JOIN, LocalDate.of(2022, 1, 10))),
            "per3", List.of(
                new Person("per3", State.EXIT, LocalDate.of(2022, 1, 10)),
                new Person("per3", State.EXIT, LocalDate.of(2022, 2, 10))
            ),
            "per4", List.of(new Person("per4", State.JOIN, LocalDate.of(2022, 3, 10)))
        );

        List<DTO> result = personListById.values().stream()
            .flatMap(List::stream)                       // Flatten the lists of events
            .filter(p -> p.getEvent() == State.EXIT
                      || p.getEvent() == State.JOIN)     // Keep only JOIN/EXIT events
            .collect(Collectors.groupingBy(
                p -> new MonthState(p.getEventDate().getMonthValue(), p.getEvent()),
                Collectors.counting()))                  // Count each (month, event) group
            .entrySet().stream()                         // Convert the Map to a Stream
            .map(e -> new DTO(e.getKey().month(), e.getKey().info(),
                              (int) (long) e.getValue())) // Map each entry to a DTO
            .sorted(Comparator.comparing(DTO::getMonth)) // Sort by month
            .toList();                                   // Java 16+; use .collect(Collectors.toList()) on Java 8-15

        // Print the result
        result.forEach(System.out::println);
        /*
        Expected output (the order of rows within the same month may vary):
        DTO{month=1, info=JOIN, totalEmployees=2}
        DTO{month=1, info=EXIT, totalEmployees=1}
        DTO{month=2, info=EXIT, totalEmployees=1}
        DTO{month=3, info=JOIN, totalEmployees=1}
        */
    }
}

5. Notes and Summary

  • Importance of a custom grouping key: when you need to group by multiple properties, create a custom object (such as MonthState) holding those properties to serve as the grouping key. On Java 8-15, be sure to implement equals() and hashCode() correctly; from Java 16 onward, a record is the ideal, concise choice.
  • Chained Stream operations: the Stream API enables declarative data processing through chained calls (flatMap -> filter -> collect -> map -> sorted -> toList), keeping the code readable and the logic clear.
  • Downstream collector : The second parameter of Collectors.groupingBy() is a downstream collector, which determines how to aggregate the elements in each group. Collectors.counting() is used for counting, Collectors.summingInt(), Collectors.averagingDouble(), etc. are used for summing and averaging, and Collectors.mapping() is used for further conversion, etc.
  • Type conversion : Collectors.counting() returns the Long type. If the target DTO field is int, explicit type conversion (int) (long) e.getValue() is required.
  • Performance considerations : For very large data sets, the Stream API generally performs well, especially if it can be parallelized. However, complex Stream pipelines may also bring certain overhead, and performance testing and optimization should be performed based on actual conditions.
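To illustrate the downstream-collector point above: swapping Collectors.counting() for Collectors.mapping() aggregates something other than a count for each group. The sketch below collects the ids seen in each month (a minimal stand-in Event record is used, not the article's Person class):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class DownstreamDemo {
    // Hypothetical minimal event: just an id and a month number
    record Event(String id, int month) {}

    // Collect the ids per month instead of just counting them
    static Map<Integer, List<String>> idsByMonth(List<Event> events) {
        return events.stream()
            .collect(Collectors.groupingBy(
                Event::month,
                Collectors.mapping(Event::id, Collectors.toList())));
    }

    public static void main(String[] args) {
        Map<Integer, List<String>> result = idsByMonth(List.of(
            new Event("per1", 1), new Event("per2", 1), new Event("per3", 2)));
        System.out.println(result.get(1)); // ids seen in month 1
    }
}
```

The same slot also accepts Collectors.summingInt(), Collectors.averagingDouble(), or even another nested groupingBy for two-level grouping.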

Through this tutorial, you should be able to master how to use the Java 8 Stream API for multi-condition filtering, grouping by composite keys, and counting statistics to efficiently handle complex data aggregation tasks.

