Installing and Configuring Hadoop

Date: 2010-08-09  Source: flying5


1. Install Java and ssh
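For the Java part of this step, one option on Ubuntu 9.10 (a sketch assuming the multiverse repository is enabled; the package name differs between Ubuntu releases) is the Sun JDK 6 package, which installs under the /usr/lib/jvm/java-6-sun path used for JAVA_HOME in step 5:

# apt-get install sun-java6-jdk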
Ubuntu generally ships with the ssh client installed by default, but not the server. Running "apt-get install ssh" installs the server, and "/etc/init.d/ssh start" starts it.

2. Create the hadoop group and the hadoop user
# addgroup hadoop
# adduser --ingroup hadoop hadoop
3. Configure ssh
Switch to the hadoop user:
# su - hadoop
Generate a key pair:
hadoop@ubuntu:~$ ssh-keygen -t rsa -P ""
Append the public key to the authorized keys so that ssh logins need no password:
hadoop@ubuntu:~$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
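A quick way to check that the key-based login works is to ssh to localhost as the hadoop user; it should no longer ask for a password (the first connection will only ask to confirm the host key):

hadoop@ubuntu:~$ ssh localhost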
4. Install Hadoop
Hadoop does not need to be installed; it is ready to use once unpacked. Run the following commands as root:
# cd /usr/local
# tar xzf hadoop-0.20.0.tar.gz
# mv hadoop-0.20.0 hadoop
# chown -R hadoop:hadoop hadoop
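As a sanity check, the unpacked release can report its own version (run as the hadoop user):

hadoop@ubuntu:~$ /usr/local/hadoop/bin/hadoop version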
5. Configure Hadoop
Open conf/hadoop-env.sh and change "#export JAVA_HOME=/usr/lib/j2sdk1.5-sun" to "export JAVA_HOME=/usr/lib/jvm/java-6-sun". Adjust this to the Java version actually installed; the Java in Ubuntu 9.10's repositories is 1.6.
Next, edit core-site.xml and fill in the following (the /usr/local/hadoop-datastore/hadoop-hadoop directory must already exist and its owner must be changed to the hadoop user; see the commands after the configuration files below):

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/local/hadoop-datastore/hadoop-${user.name}</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>

<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
</property>

</configuration>

Then edit mapred-site.xml and enter the following:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>
</configuration>

Note: in Hadoop 0.20.0 and later the configuration is split across core-site.xml, hdfs-site.xml and mapred-site.xml; earlier releases used a single hadoop-site.xml.
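The data directory referenced in core-site.xml has to be created by hand; a minimal sketch (run as root, path matching the hadoop.tmp.dir value for the hadoop user):

# mkdir -p /usr/local/hadoop-datastore/hadoop-hadoop
# chown -R hadoop:hadoop /usr/local/hadoop-datastore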
6. Initialize the NameNode
As the hadoop user, run the following from Hadoop's bin directory:
./hadoop namenode -format
09/10/31 23:30:10 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ecy-geek/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.1
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.20.1-rc1 -r 810220; compiled by 'oom' on Tue Sep  1 20:55:56 UTC 2009
************************************************************/
Re-format filesystem in /usr/local/hadoop-datastore/hadoop-hadoop/dfs/name ? (Y or N) y
Format aborted in /usr/local/hadoop-datastore/hadoop-hadoop/dfs/name
09/10/31 23:30:16 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ecy-geek/127.0.1.1
************************************************************/

Note that the re-format prompt is case-sensitive: answering with a lowercase "y", as in the output above, aborts the format; an uppercase "Y" is required to actually re-format an existing filesystem.

7. Run Hadoop
./start-all.sh
starting namenode, logging to /usr/local/hadoop/bin/../logs/hadoop-hadoop-namenode-ecy-geek.out
localhost: starting datanode, logging to /usr/local/hadoop/bin/../logs/hadoop-hadoop-datanode-ecy-geek.out
localhost: starting secondarynamenode, logging to /usr/local/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-ecy-geek.out
starting jobtracker, logging to /usr/local/hadoop/bin/../logs/hadoop-hadoop-jobtracker-ecy-geek.out
localhost: starting tasktracker, logging to /usr/local/hadoop/bin/../logs/hadoop-hadoop-tasktracker-ecy-geek.out

Running jps shows that the NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker are all up:
jps
21581 NameNode
21975 SecondaryNameNode
22238 TaskTracker
22477 Jps
22053 JobTracker
21777 DataNode

Hadoop also provides convenient web UIs for checking on the cluster:
http://localhost:50030/ - web UI for MapReduce job tracker(s)
http://localhost:50060/ - web UI for task tracker(s)
http://localhost:50070/ - web UI for HDFS name node(s)
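To exercise the running cluster, one option is to run the pi estimator from the examples jar bundled with the release (a sketch; the jar's file name depends on the exact Hadoop version), and the whole cluster can be stopped again with stop-all.sh:

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop jar hadoop-0.20.1-examples.jar pi 2 10
hadoop@ubuntu:/usr/local/hadoop$ bin/stop-all.sh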
PS: For Hadoop installation and configuration you can also refer to http://www.tbdata.org/archives/266, which covers the process in great detail.