MapReduce Framework - Combiner


Combiner

1. Understanding the combiner

The combiner is really an optimization: because network bandwidth is limited, the amount of data transferred between the map and reduce stages should be kept as small as possible. On the map side the combiner merges the key/value pairs that share the same key and computes over them, and its computation rule is identical to that of the reduce, so the combiner can also be viewed as a special reducer.
The combiner only runs if the developer has explicitly configured one in the program (via job.setCombinerClass(myCombine.class)).
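As a minimal sketch of what such a class looks like (the class name MyCombine simply mirrors the setCombinerClass call quoted above; the key/value types are the ones used by the word-count example later in this post), a combiner is written exactly like a reducer:

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MyCombine extends Reducer<Text, LongWritable, Text, LongWritable> {
    // Pre-aggregate on the map side: the ("hello",1),("hello",1),("hello",1)
    // pairs emitted by one map task leave that task as a single ("hello",3),
    // which is what reduces the amount of data shuffled to the reducers.
    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context)
            throws IOException, InterruptedException {
        long sum = 0;
        for (LongWritable v : values) {
            sum += v.get();
        }
        context.write(key, new LongWritable(sum));
    }
}
// in the driver: job.setCombinerClass(MyCombine.class);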

2. Where is the combiner used?

1. After the map output has been partitioned and sorted, a combine pass runs once before the data is spilled to disk (provided the job has configured a combiner);
2. If the map output is large and the number of spill files exceeds 3 (this threshold can be configured through the property min.num.spills.for.combine; a configuration sketch follows this list), the combiner also runs during the merge step, when the multiple spill files are merged into one large file.
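The spill threshold can be tuned in the driver. A hedged sketch, assuming the property name quoted above (this is the classic Hadoop key, with a default of 3; newer releases may expose the same knob under a different name, so check the documentation for your version):

Configuration conf = new Configuration();
// re-run the combiner during the spill-merge phase only once a map task
// has produced at least this many spill files
conf.setInt("min.num.spills.for.combine", 3);
Job job = Job.getInstance(conf, "wordcount");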

3. Points to note

Not every job can use a combiner; it is only safe when the following conditions hold:
1. A combiner should only be used where the reducer's input key/value types are exactly the same as its output key/value types, because a combine is in essence a reduce operation.
2. In terms of the computation logic, running the combiner must not change the final result: sums and maximums are unaffected, but averages would be (see the worked example after this list).
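A worked example of point 2, with made-up numbers: suppose the values 1 and 2 pass through one map task and the value 3 through another.

    averaging in the combiner:      avg(1, 2) = 1.5 and avg(3) = 3.0, then avg(1.5, 3.0) = 2.25   (wrong)
    averaging only in the reducer:  avg(1, 2, 3) = 2.0                                            (correct)

Summation is safe because sum(sum(1, 2), sum(3)) = sum(1, 2, 3); the usual workaround for averages is to have the combiner emit (sum, count) pairs and let the reducer do the final division.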

4. A WordCount combiner example

package com.sl.bigdatatest.mapreduce;

import java.io.IOException;
import java.net.URI;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {

    public static void main(String[] args) throws Exception {
        if (args.length < 2) {
            System.err.println("Usage: <in> <out>");
            System.exit(2);
        }
        String inputPath = args[0];
        Path outputPath = new Path(args[1]);

        // 1. configuration
        Configuration conf = new Configuration();
        URI uri = new URI("hdfs://192.168.0.200:9000");
        FileSystem fileSystem = FileSystem.get(uri, conf);
        if (fileSystem.exists(outputPath)) {
            boolean b = fileSystem.delete(outputPath, true);
            System.out.println("deleted existing output directory: " + b);
        }

        // 2. create the job
        Job job = Job.getInstance(conf, WordCount.class.getName());
        job.setJarByClass(WordCount.class);
        // 3. input path
        FileInputFormat.setInputPaths(job, new Path(inputPath));
        // 4. input format
        job.setInputFormatClass(TextInputFormat.class);
        // 5. map
        job.setMapperClass(MapWordCountTask.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);
        // 6. reduce
        job.setReducerClass(ReduceWordCountTask.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);
        // use the combiner for this job; reuse ReduceWordCountTask as the combiner class
        job.setCombinerClass(ReduceWordCountTask.class);
        // 7. output path
        FileOutputFormat.setOutputPath(job, outputPath);
        // 8. output format
        job.setOutputFormatClass(TextOutputFormat.class);
        // 9. submit the job to the cluster and wait for it to finish
        job.waitForCompletion(true);
    }

    public static class MapWordCountTask extends Mapper<LongWritable, Text, Text, LongWritable> {
        private Text k2 = new Text();
        private LongWritable v2 = new LongWritable();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // emit ("word", 1) for every token in the line
            String content = value.toString();
            StringTokenizer st = new StringTokenizer(content);
            while (st.hasMoreElements()) {
                k2.set(st.nextToken());
                v2.set(1L);
                context.write(k2, v2);
            }
        }
    }

    public static class ReduceWordCountTask extends Reducer<Text, LongWritable, Text, LongWritable> {
        private LongWritable v3 = new LongWritable();

        @Override
        protected void reduce(Text k2, Iterable<LongWritable> v2s, Context context)
                throws IOException, InterruptedException {
            // sum the counts for each word; used both as reducer and combiner
            long sum = 0;
            for (LongWritable longWritable : v2s) {
                sum += longWritable.get();
            }
            v3.set(sum);
            context.write(k2, v3);
        }
    }
}
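To run the example on a cluster, package the class into a jar and submit it with the standard hadoop jar launcher. The jar name and HDFS paths below are placeholders, and the NameNode address must match the hdfs://192.168.0.200:9000 URI hard-coded in the driver:

hadoop jar wordcount.jar com.sl.bigdatatest.mapreduce.WordCount /input/words.txt /output/wordcount

Whether the combiner actually ran can be checked in the job counters printed at the end: with job.setCombinerClass in effect, the "Combine input records" and "Combine output records" counters should be non-zero.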