
MapReduce principle

MapReduce is a programming model for parallel computation over large-scale data sets (larger than 1 TB). Its central ideas, "Map" and "Reduce", are borrowed from functional programming languages, along with features borrowed from vector programming languages.
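To make the functional-programming lineage concrete, here is the same map/reduce idea expressed with Java 8 streams on a tiny in-memory list. This is plain, single-machine Java, not the distributed framework, and it is an illustration added here rather than part of the original paper:

```java
import java.util.Arrays;
import java.util.List;

public class MapReduceIdea {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("hadoop", "map", "reduce");

        // map: transform each element independently (word -> its length),
        // reduce: fold all the transformed values into one result.
        int totalLength = words.stream()
                .map(String::length)
                .reduce(0, Integer::sum);

        System.out.println(totalLength); // prints 15
    }
}
```

MapReduce scales this same pattern out: the map step runs on many machines at once, and the reduce step combines the partial results.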


It greatly simplifies things for programmers who have no experience with distributed parallel programming, letting them run their own programs on distributed systems. Current software implementations specify a Map function that maps a set of key-value pairs to a new set of intermediate key-value pairs, and a concurrent Reduce function that merges all intermediate values sharing the same key.
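As a concrete illustration (an addition, not from the original text), here is roughly what these two functions look like in Hadoop, the best-known open-source implementation of the model. This is the classic word-count sketch: the map function emits (word, 1) pairs, and the reduce function merges all values that share the same key:

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

    // Map: for each word in an input line, emit the intermediate
    // key-value pair (word, 1).
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce: merge all intermediate values that share the same key
    // by summing the per-word counts.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }
}
```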

Working principle

MapReduce execution process

The figure above is the flow chart given in the original paper. Everything starts from the user program at the top: the user program links against the MapReduce library and implements the most basic Map and Reduce functions. The order of execution in the figure is marked with numbers.

1. The MapReduce library first splits the user program's input files into M pieces (M is user-defined), each typically 16 MB to 64 MB, shown on the left side of the figure as split0~4; it then uses fork to copy the user process onto other machines in the cluster.

2. One copy of the user program is called the master, and the rest are called workers. The master is responsible for scheduling: it assigns jobs (Map jobs or Reduce jobs) to idle workers. The number of workers can also be specified by the user.

3. A worker assigned a Map job reads the input data of the corresponding split. The number of Map jobs is determined by M and corresponds one-to-one with the splits. The Map job extracts key-value pairs from the input data and passes each one as an argument to the map function; the intermediate key-value pairs produced by the map function are buffered in memory.

4. The buffered intermediate key-value pairs are periodically written to the local disk, partitioned into R regions (R is user-defined; each region will later correspond to one Reduce job, and a sketch of such a partitioning function appears after the summary below). The locations of these intermediate key-value pairs are reported to the master, which is responsible for forwarding this information to the Reduce workers.

5. The master notifies a worker assigned a Reduce job where the partition it is responsible for is located (there is certainly more than one location: the intermediate key-value pairs produced by every Map job may map to any of the R partitions). After the Reduce worker has read all the intermediate key-value pairs it is responsible for, it first sorts them so that pairs with the same key are grouped together. The sorting is necessary because many different keys can map to the same partition, that is, to the same Reduce job (there are far fewer partitions than keys).

6. The Reduce worker iterates over the sorted intermediate key-value pairs. For each unique key, it passes the key and the associated values to the reduce function; the output produced by the reduce function is appended to the output file of this partition.

7. When all Map and Reduce jobs are complete, the master wakes up the genuine user program, and the MapReduce call returns to the user program's code (a minimal Hadoop driver illustrating this call follows this list).
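In Hadoop, the "user program" from the figure shrinks to a small driver class. Below is a minimal sketch (an illustration added here, assuming the WordCount classes from the earlier example): setNumReduceTasks chooses R, and waitForCompletion is the blocking MapReduce call that returns in step 7:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCount.TokenizerMapper.class);
        job.setReducerClass(WordCount.IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setNumReduceTasks(4); // R: the number of partitions / Reduce jobs

        FileInputFormat.addInputPath(job, new Path(args[0]));   // input to be split (step 1)
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // directory for the R output files

        // Submit the job and block until it finishes (step 7).
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```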

After everything has finished, the MapReduce output ends up in R output files, one per partition and thus one per Reduce job. Users usually do not need to merge these R files; they often serve as the input to yet another MapReduce program. Throughout the whole process, the input data comes from the underlying distributed file system (GFS), the intermediate data is kept on the workers' local file systems, and the final output is written back to the underlying distributed file system (GFS).

Note the difference between Map/Reduce jobs and the map/reduce functions: a Map job processes one input split and may call the map function many times, once per input key-value pair; a Reduce job processes the intermediate key-value pairs of one partition, calling the reduce function once for each distinct key, and each Reduce job ultimately corresponds to one output file.
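The partitioning function used in step 4 is, by default, a simple hash of the key modulo R; Hadoop's built-in HashPartitioner works the same way. A minimal sketch (the class name HashKeyPartitioner is illustrative, not a Hadoop API):

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Decides which of the R partitions (and therefore which Reduce job)
// an intermediate key-value pair belongs to.
public class HashKeyPartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numReduceTasks) {
        // Mask off the sign bit so the result is always non-negative.
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}
```

Because every occurrence of a key hashes to the same partition, all values for a given key end up at a single Reduce worker, which is what makes the sort-and-merge in steps 5 and 6 possible.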

