
hadoop eclipse plugin integration error

Posted: 2016-04-23 00:26:45

After copying hadoop-eclipse-plugin-2.2.0.jar into Eclipse's plugins directory and restarting Eclipse, running WordCount fails with an error:

The Java code is as follows:

package com.lyq.study.example;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable>{

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer
       extends Reducer<Text,IntWritable,Text,IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values,
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://master129:9000/");
    conf.set("mapreduce.framework.name", "local");
    conf.set("mapred.job.tracker", "master129:9001");
    conf.set("hbase.zookeeper.quorum", "master129,slave130,slave131,slave132");
    args = new String[]{"hdfs://master129:9000/test/input/","hdfs://master129:9000/test/output"};
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
      System.err.println("Usage: wordcount <in> <out>");
      System.exit(2);
    }
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
The console error is as follows:

Exception in thread "main" java.lang.UnsatisfiedLinkError: D:\app\hadoop-2.2.0\bin\hadoop.dll: Can't load AMD 64-bit .dll on a IA 32-bit platform
    at java.lang.ClassLoader$NativeLibrary.load(Native Method)
    at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1807)
    at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1732)
    at java.lang.Runtime.loadLibrary0(Runtime.java:823)
    at java.lang.System.loadLibrary(System.java:1028)
    at com.lyq.study.example.A.main(A.java:5)


Solution:

The error shows that hadoop.dll is a 64-bit native library, but the JVM loading it is 32-bit. Replace the 32-bit JDK installed on the laptop with a 64-bit JDK.
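Before swapping JDKs, it can help to confirm which data model the current JVM actually uses. A minimal sketch (note that the `sun.arch.data.model` property is HotSpot-specific and may be absent on other JVM implementations, so `os.arch` is printed as a fallback):

```java
public class JvmBitnessCheck {
    public static void main(String[] args) {
        // "32" or "64" on HotSpot JVMs; may be null elsewhere (assumption)
        String dataModel = System.getProperty("sun.arch.data.model");
        // CPU architecture the JVM was built for, e.g. "x86" (32-bit) or "amd64"
        String osArch = System.getProperty("os.arch");
        System.out.println("sun.arch.data.model = " + dataModel);
        System.out.println("os.arch = " + osArch);
    }
}
```

If this prints `32` (or `x86`) while hadoop.dll is 64-bit, the `UnsatisfiedLinkError` above is expected; the JVM and the native library must match in bitness.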

