Installing Hadoop in Large-Cluster Mode for Production Environments - Part 2
2019-12-17 18:02:50

  5. Change the permissions of the mount point

[root@hadoop1 ~]# chmod 777 /home/hadoop/.ssh/

  6. Restart NFS

[root@hadoop1 ~]# service nfs restart

Shutting down NFS mountd: [FAILED]
Shutting down NFS daemon: [FAILED]
Shutting down NFS quotas: [FAILED]
Shutting down NFS services: [FAILED]
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS daemon: [ OK ]
Starting NFS mountd: [ OK ]
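For the restart above to export anything, /etc/exports on the server must already contain an entry for the shared directory. The article mounts 192.168.1.161:/home/hadoop/.ssh, so the entry presumably looks something like the sketch below; the network range and options are assumptions, adjust them to your environment:

```shell
# Hypothetical /etc/exports entry for the directory shared in this article
# (options are assumptions, not taken from the original text):
#   /home/hadoop/.ssh  192.168.1.0/24(rw,sync)
#
# After editing /etc/exports, re-read the export table without a full restart:
#   exportfs -rv
```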

  7. Test the mount locally

[root@hadoop1 ~]# mount 192.168.1.161:/home/hadoop/.ssh /mnt
[root@hadoop1 ~]# mount
/dev/sda1 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
192.168.1.161:/home/hadoop/.ssh on /mnt type nfs (rw,addr=192.168.1.161)
[root@hadoop1 ~]# ll /home/hadoop/.ssh/
total 8
-rw------- 1 hadoop hadoop 1675 Aug 25 10:59 id_rsa
-rw-r--r-- 1 hadoop hadoop  396 Aug 25 10:59 id_rsa.pub
[root@hadoop1 ~]# ll /mnt
total 8
-rw------- 1 hadoop hadoop 1675 Aug 25 10:59 id_rsa
-rw-r--r-- 1 hadoop hadoop  396 Aug 25 10:59 id_rsa.pub

IV. Integrating SSH keys via NFS

  1. First, copy id_rsa.pub to authorized_keys

[hadoop@hadoop1 ~]$ cp .ssh/id_rsa.pub .ssh/authorized_keys

  2. Then log in to hadoop2 and hadoop3, create the hadoop user, switch to it, and generate each machine's SSH RSA key pair

---- the steps are identical on hadoop2 and hadoop3 ----

[root@hadoop2 dns]# useradd hadoop
[root@hadoop2 dns]# passwd hadoop
Changing password for user hadoop.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@hadoop2 dns]# su - hadoop
[hadoop@hadoop2 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
3c:9d:07:2a:7d:3d:e3:d3:22:0c:0e:8b:5d:96:93:e1 hadoop@hadoop2

  3. Mount the NFS share on hadoop2 and hadoop3

[root@hadoop2 dns]# mount 192.168.1.161:/home/hadoop/.ssh /mnt
[root@hadoop2 dns]# ll /mnt
total 12
-rw-r--r-- 1 hadoop hadoop  396 Aug 25 11:04 authorized_keys
-rw------- 1 hadoop hadoop 1675 Aug 25 10:59 id_rsa
-rw-r--r-- 1 hadoop hadoop  396 Aug 25 10:59 id_rsa.pub
[root@hadoop2 dns]# mount
/dev/sda1 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
192.168.1.161:/home/hadoop/.ssh on /mnt type nfs (rw,addr=192.168.1.161)

  4. Append the public keys (id_rsa.pub) of hadoop2 and hadoop3 to /mnt/authorized_keys

[root@hadoop2 dns]# cat /home/hadoop/.ssh/id_rsa.pub >> /mnt/authorized_keys
[root@hadoop3 ~]# cat /home/hadoop/.ssh/id_rsa.pub >> /mnt/authorized_keys

  5. View the contents of authorized_keys

[hadoop@hadoop1 ~]$ cat /mnt/authorized_keys

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA32vNwXv/23k0yF7QeITb61J5uccudHB3gQBtqCnB7wsOtIhdUsVIfcxGmPnWp6S9V+Ob+b73Vrl2xsxP4i0N8Cu1l2ZcU9jevc+o37yX4nW2oTBFVEP31y9E9fXkYf3cKiF0UrvunL59qgNnVUbq8qRtFr5QPAx6lGY0TYZiPaPr+POwNKF1IZvToqABsOnNimv0DNmAhbd3QyM7GaR/ZRQKOCMF8NYljo6exoDk9xPq/wCHC/rBnAU3gUlwi7Kn/tk2dirwvYZuqP3VO+w5zd6sYxscD8+UNK99XdOARzTlc8/iEPHy+JSBa6sQI2hOAOCAuHBtTymoJFUDH9YqXQ== hadoop@hadoop1
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4lTx6JTZlhoLI4Yyo0a6YeDmIgz60pYwYKwVL+p4wfp9OWB2/sEyf9iCsK8i94mnWMfNsRehqAG2ucPmWz1s/Kufxu/6uc8hJjDlOOMUOE7ENyN0Zre5MHj8jauDRhY4y37Rh3Crx86wzq79isDqJOWnKyjPQDjUH45780Hvtk87ckwNNSFhwuRgTFKhz0bQloJuHazU1/W924wmicqeEUSGhUFEkXUeJu7FqQjJcPjoRNqyTEuCHiYVh9HjOrUPdosfYqmQfuZ/x2gmsGRUdfTl32rkoZW43ay8CFV/MKqAFucEOiiHW7xttmm3zJgcyLptGhjo7NtvAQwKkPfG6w== hadoop@hadoop2
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAs7fkzQMR6yVqLBVAnJqTxFPO9NNngrmYDNZMbWXDz6V8J4Z7zC46odUERe3CNjC+v3X8rwvUWlALYtvMNonQwhnpvqe2s0CpDithSFkOt5fQarRYP5JtAjHvF5b22NqcyltF+ywLT4zKAg4tjgGV5nLafI2hsNjgljUOXkRjpwSSUpLmLayWnepLIwioCPPGIkM40balUOEWEASzaI4DaPoywmoVUrByou71i1F1VizXpbhIWW+LE2cANAy1xmP0zYBa+/O4mvpgZjWLtLpKFR/1nRZPh1emy+OB6RcoJl3Awmhcsyyjd4Q8jfOYsH78PKpnwJfyhtUEIENrzUV63w== hadoop@hadoop3
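The appends in step 4 generalize to any number of nodes. A minimal local sketch of the same merge, using stand-in key files under /tmp instead of real public keys:

```shell
# Sketch: merge several public-key files into one authorized_keys,
# mirroring the `cat id_rsa.pub >> /mnt/authorized_keys` appends above.
# File names and key contents are local stand-ins, not real keys.
rm -rf /tmp/keydemo && mkdir -p /tmp/keydemo
echo "ssh-rsa AAAA...demo1 hadoop@hadoop2" > /tmp/keydemo/hadoop2.pub
echo "ssh-rsa AAAA...demo2 hadoop@hadoop3" > /tmp/keydemo/hadoop3.pub
# One appended line per key file, so authorized_keys stays one-key-per-line
cat /tmp/keydemo/*.pub >> /tmp/keydemo/authorized_keys
wc -l < /tmp/keydemo/authorized_keys   # one line per key: 2
```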

  6. Create the symlink (this step is needed only on the NFS clients, not on the server)

[hadoop@hadoop2 ~]$ ln -s /mnt/authorized_keys /home/hadoop/.ssh/authorized_keys
[hadoop@hadoop2 ~]$ ll /home/hadoop/.ssh/authorized_keys
lrwxrwxrwx 1 hadoop hadoop 20 Aug 25 11:14 /home/hadoop/.ssh/authorized_keys -> /mnt/authorized_keys
[hadoop@hadoop3 ~]$ ln -s /mnt/authorized_keys /home/hadoop/.ssh/authorized_keys
[hadoop@hadoop3 ~]$ ll /home/hadoop/.ssh/authorized_keys
lrwxrwxrwx 1 hadoop hadoop 20 Aug 25 11:15 /home/hadoop/.ssh/authorized_keys -> /mnt/authorized_keys
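The same link-then-verify pattern can be tried locally; a sketch with throwaway paths under /tmp standing in for /mnt and ~/.ssh:

```shell
# Sketch: create and verify an authorized_keys symlink.
# /tmp/linkdemo/mnt plays the role of the NFS mount point.
rm -rf /tmp/linkdemo && mkdir -p /tmp/linkdemo/mnt /tmp/linkdemo/.ssh
echo "ssh-rsa AAAA...demo hadoop@hadoop1" > /tmp/linkdemo/mnt/authorized_keys
# ln -s <target> <link>, same argument order as the step above
ln -s /tmp/linkdemo/mnt/authorized_keys /tmp/linkdemo/.ssh/authorized_keys
readlink /tmp/linkdemo/.ssh/authorized_keys   # prints the link target
```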

  7. Fix the permissions

[hadoop@hadoop1 ~]$ chmod 700 /home/hadoop/.ssh/

    Note: without this change, you will still be prompted for a password on login, because sshd refuses to use keys from a .ssh directory with overly permissive modes.
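The tighten-and-check step looks like the sketch below, run against a throwaway /tmp directory and assuming GNU stat is available:

```shell
# Sketch: tighten a .ssh directory from the NFS-friendly 777 back to 700
# and verify the resulting mode. Paths are throwaway stand-ins.
rm -rf /tmp/sshdemo && mkdir -p /tmp/sshdemo/.ssh
chmod 777 /tmp/sshdemo/.ssh    # the wide-open mode set earlier for NFS
chmod 700 /tmp/sshdemo/.ssh    # the mode sshd expects for key-based login
stat -c '%a' /tmp/sshdemo/.ssh   # prints 700
```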

  8. Verify that passwordless login works

[hadoop@hadoop1 ~]$ ssh hadoop2

The authenticity of host 'hadoop2 (192.168.1.162)' can't be established.

RSA key fingerprint is ca:9a:7e:19:ee:a1:35:44:7e:9d:d4:09:5c:fc:c5:0a.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'hadoop2,192.168.1.162' (RSA) to the list of known hosts.

[hadoop@hadoop2 ~]$ ssh hadoop3

The authenticity of host 'hadoop3 (192.168.1.163)' can't be established.

RSA key fingerprint is ca:9a:7e:19:ee:a1:35:44:7e:9d:d4:09:5c:fc:c5:0a.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'hadoop3,192.168.1.163' (RSA) to the list of known hosts.
[hadoop@hadoop3 ~]$

V. Batch-installing Hadoop

  1. First finish installing the namenode on hadoop1; for the distributed Hadoop setup itself, refer to: Hadoop cluster installation

[hadoop@hadoop1 ~]$ cat hadoop-0.20.2/conf/slaves | awk '{print "scp -rp hadoop-0.20.2 hadoop@"$1":/home/hadoop/"}' > scp.sh
[hadoop@hadoop1 ~]$ cat hadoop-0.20.2/conf/slaves | awk '{print "scp -rp temp hadoop@"$1":/home/hadoop/"}' >> scp.sh
[hadoop@hadoop1 ~]$ cat hadoop-0.20.2/conf/slaves | awk '{print "scp -rp user hadoop@"$1":/home/hadoop/"}' >> scp.sh
[hadoop@hadoop1 ~]$ cat hadoop-0.20.2/conf/slaves | awk '{print "scp -rp jdk1.7 hadoop@"$1":/home/hadoop/"}' >> scp.sh
[hadoop@hadoop1 ~]$ ls
hadoop-0.20.2  jdk1.7  scp.sh  temp  user
[hadoop@hadoop1 ~]$ cat scp.sh
scp -rp hadoop-0.20.2 hadoop@192.168.1.162:/home/hadoop/
scp -rp hadoop-0.20.2 hadoop@192.168.1.163:/home/hadoop/
scp -rp temp hadoop@192.168.1.162:/home/hadoop/
scp -rp temp hadoop@192.168.1.163:/home/hadoop/
scp -rp user hadoop@192.168.1.162:/home/hadoop/
scp -rp user hadoop@192.168.1.163:/home/hadoop/
scp -rp jdk1.7 hadoop@192.168.1.162:/home/hadoop/
scp -rp jdk1.7 hadoop@192.168.1.163:/home/hadoop/
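The four awk passes above can be collapsed into a single loop. A local sketch of generating the same scp.sh, with a stand-in slaves file under /tmp instead of conf/slaves:

```shell
# Sketch: generate scp.sh from a slaves file in one loop.
# /tmp/scpdemo/slaves is a stand-in for hadoop-0.20.2/conf/slaves.
rm -rf /tmp/scpdemo && mkdir -p /tmp/scpdemo && cd /tmp/scpdemo
printf '192.168.1.162\n192.168.1.163\n' > slaves
# One "scp -rp <dir> hadoop@<slave>:/home/hadoop/" line per (dir, slave) pair,
# the same shape as the four awk commands above
for dir in hadoop-0.20.2 temp user jdk1.7; do
  awk -v d="$dir" '{print "scp -rp " d " hadoop@" $1 ":/home/hadoop/"}' slaves
done > scp.sh
wc -l < scp.sh   # 8: 4 directories x 2 slaves
```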

  2. Format the namenode

[hadoop@hadoop1 ~]$ hadoop-0.20.2/bin/hadoop namenode -format
13/08/25 11:52:39 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoop1/192.168.1.161
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
Re-format filesystem in /home/hadoop/user/name ? (Y or N) Y
13/08/25 11:52:46 INFO namenode.FSNamesystem: fsOwner=hadoop,hadoop
13/08/25 11:52:46 INFO namenode.FSNamesystem: supergroup=supergroup
13/08/25 11:52:46 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/08/25 11:52:47 INFO common.Storage: Image file of size 96 saved in 0 seconds.
13/08/25 11:52:48 INFO common.Storage: Storage directory /home/hadoop/user/name has been successfully formatted.
13/08/25 11:52:48 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop1/192.168.1.161
************************************************************/

  3. Start Hadoop

[hadoop@hadoop1 ~]$ hadoop-0.20.2/bin/start-all.sh

starting namenode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-namenode-hadoop1.out
192.168.1.163: starting datanode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-hadoop3.out
192.168.1.162: starting datanode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-hadoop2.out

The authenticity of host '192.168.1.161 (192.168.1.161)' can't be established.

RSA key fingerprint is ca:9a:7e:19:ee:a1:35:44:7e:9d:d4:09:5c:fc:c5:0a.

Are you sure you want to continue connecting (yes/no)? yes

192.168.1.161: Warning: Permanently added '192.168.1.161' (RSA) to the list of known hosts.

192.168.1.161: starting secondarynamenode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-secondarynamenode-hadoop1.out

starting jobtracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-jobtracker-hadoop1.out

192.168.1.162: starting tasktracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-tasktracker-hadoop2.out

192.168.1.163: starting tasktracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-tasktracker-hadoop3.out

  4. Check the processes on each node

[hadoop@hadoop1 ~]$ jdk1.7/bin/jps
4416 Jps
4344 JobTracker
4306 SecondaryNameNode
4157 NameNode
[hadoop@hadoop2 ~]$ jdk1.7/bin/jps
3699 TaskTracker
3636 DataNode
3752 Jps
[hadoop@hadoop3 ~]$ jdk1.7/bin/jps
4763 TaskTracker
4834 Jps
4653 DataNode

VI. Important notes

    1. If the NFS share is not mounted automatically after a reboot, add the following to /etc/rc.d/rc.local:

       /bin/mount -a
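An alternative to the rc.local workaround is a persistent /etc/fstab entry, so `mount -a` (and the boot sequence) picks the share up automatically. A sketch of such an entry for the export used in this article; the options are assumptions:

```shell
# Hypothetical /etc/fstab line for the NFS share mounted in this article
# (mount options are assumptions, not taken from the original text):
#   192.168.1.161:/home/hadoop/.ssh  /mnt  nfs  defaults  0 0
#
# After adding it, `mount -a` applies the entry without a reboot.
```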

    2. If the IP address is obtained automatically (via DHCP), add the following to /etc/rc.d/rc.local on the DNS host:

/bin/cat /app/resolv.conf > /etc/resolv.conf

[root@node1 ~]# cat /app/resolv.conf
; generated by /sbin/dhclient-script
#search localdomain
#nameserver 192.168.1.151

    On the other hosts, add the following to /etc/rc.d/rc.local:

/bin/cat /app/resolv.conf > /etc/resolv.conf

[root@node2 ~]# cat /app/resolv.conf
; generated by /sbin/dhclient-script
#search localdomain
nameserver 192.168.1.151
