Environment:
Debian, 5 nodes
Hadoop-2.7.5
Kerberos Install
1.apt-get install krb5-{admin-server,kdc} -y
2.modify /etc/krb5.conf
eg:
[libdefaults]
default_realm = TEST.XXX.COM
allow_weak_crypto = true
[realms]
TEST.XXX.COM = {
kdc = hadoop1
admin_server = hadoop1
default_domain = test.hadoop.org
}
[domain_realm]
.test.hadoop.org = TEST.XXX.COM
test.hadoop.org = TEST.XXX.COM
[logging]
kdc = FILE:/var/log/kerberos/krb5kdc.log
admin_server = FILE:/var/log/kerberos/kadmin.log
default = FILE:/var/log/kerberos/krb5lib.log
2.2 create kerberos log dir
mkdir /var/log/kerberos
touch /var/log/kerberos/{krb5kdc,kadmin,krb5lib}.log
chmod -R 750 /var/log/kerberos
3.create database for kerberos
kdb5_util create -s -r TEST.XXX.COM
4.install the kerberos client on all nodes
apt-get install krb5-{config,user} libpam-krb5
5.start kerberos server
service krb5-admin-server restart
service krb5-kdc restart
6.create kerberos principals and keytab files for the hadoop services
eg:
kadmin.local
addprinc -randkey hdfs/hadoop1@TEST.XXX.COM
xst -k hdfs.keytab hdfs/hadoop1
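In a real cluster the addprinc/xst pair above is repeated for every service principal on every node. The sketch below only generates those kadmin.local commands so they can be reviewed first; the hostnames hadoop1..hadoop5 and the service list are assumptions taken from this 5-node example, so adjust them to your environment:

```shell
#!/bin/sh
# Sketch: print one addprinc/xst pair per service per node.
# Hostnames and service names are assumptions -- edit them to match
# your cluster, review the output, then pipe it into kadmin.local.
REALM=TEST.XXX.COM

gen_princs() {
  for host in hadoop1 hadoop2 hadoop3 hadoop4 hadoop5; do
    for svc in hdfs yarn mapred HTTP; do
      echo "addprinc -randkey ${svc}/${host}@${REALM}"
      echo "xst -k ${svc}.keytab ${svc}/${host}"
    done
  done
}

gen_princs
# when the list looks right: gen_princs | kadmin.local
```

The HTTP/_host_ principals are the ones the SPNEGO-protected web endpoints authenticate with, so they are needed in addition to the per-daemon principals.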
Hadoop Install
1.unpack hadoop-2.7.5.tar.gz on all nodes
eg:
tar -zxvf software/hadoop-2.7.5.tar.gz -C ~/bigdata/
2.set the security-related hadoop configuration for your environment
for a sample configuration, refer to https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.3/bk_security/content/add-security-info-config.html
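As a minimal illustration (not a complete secure-mode configuration), the two core-site.xml properties that switch Hadoop from simple to Kerberos authentication are:

```xml
<!-- core-site.xml: enable Kerberos authentication and service-level authorization -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
```

The per-daemon principal and keytab properties (dfs.namenode.kerberos.principal, dfs.namenode.keytab.file, and their datanode/yarn/mapreduce counterparts) still have to be added as described in the linked guide.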
Init SSL Configuration
1.execute the command below on the CA node (hadoop1)
<!-- CN=CA node(eg hadoop1) -->
openssl req -new -x509 -keyout test_ca_key -out test_ca_cert -days 9999 -subj '/C=CN/ST=zhejiang/L=hangzhou/O=hadoop/OU=security/CN=hadoop1'
2.copy test_ca_key and test_ca_cert to all nodes and execute the commands below on each node
<!-- CN = each node's own hostname (the example uses hadoop1) -->
keytool -keystore keystore -alias localhost -validity 9999 -genkey -keyalg RSA -keysize 2048 -dname "CN=hadoop1, OU=test, O=test, L=hangzhou, ST=zhejiang, C=cn"
keytool -keystore truststore -alias CARoot -import -file test_ca_cert
keytool -certreq -alias localhost -keystore keystore -file cert
<!-- pass: is the passphrase protecting the CA key (test_ca_key) -->
openssl x509 -req -CA test_ca_cert -CAkey test_ca_key -in cert -out cert_signed -days 9999 -CAcreateserial -passin pass:hadoop
keytool -keystore keystore -alias CARoot -import -file test_ca_cert
keytool -keystore keystore -alias localhost -import -file cert_signed
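Because the -dname must carry each node's own hostname, it is convenient to generate the per-node command sequence instead of retyping it. A sketch, assuming the DN fields and the password "hadoop" from the example above are placeholders for your own values:

```shell
#!/bin/sh
# Sketch: print the keystore/signing commands for one node, substituting
# its hostname into the certificate CN. DN fields and the password are
# the placeholder values used in the text above.
gen_node_cmds() {
  host="$1"
  cat <<EOF
keytool -keystore keystore -alias localhost -validity 9999 -genkey -keyalg RSA -keysize 2048 -dname "CN=${host}, OU=test, O=test, L=hangzhou, ST=zhejiang, C=cn"
keytool -keystore truststore -alias CARoot -import -file test_ca_cert
keytool -certreq -alias localhost -keystore keystore -file cert
openssl x509 -req -CA test_ca_cert -CAkey test_ca_key -in cert -out cert_signed -days 9999 -CAcreateserial -passin pass:hadoop
keytool -keystore keystore -alias CARoot -import -file test_ca_cert
keytool -keystore keystore -alias localhost -import -file cert_signed
EOF
}

gen_node_cmds "$(hostname)"
```

Running the printed commands (rather than executing them blindly) also makes it easy to spot a wrong CN before the certificate is signed.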
3.configure ssl-server.xml and ssl-client.xml in hadoop
set *.keystore.password and *.keystore.keypassword and *.truststore.password
set *.keystore.location and *.truststore.location
Note: remember the ssl.server.keystore.keypassword, ssl.server.keystore.password and ssl.server.truststore.password values you set in step 2
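A sketch of the matching ssl-server.xml fragment, assuming the keystore and truststore from step 2 were copied to /etc/hadoop/conf and protected with the example password "hadoop" (both the paths and passwords are assumptions; substitute your own):

```xml
<!-- ssl-server.xml: point each daemon at the keystore/truststore from step 2 -->
<property>
  <name>ssl.server.keystore.location</name>
  <value>/etc/hadoop/conf/keystore</value>
</property>
<property>
  <name>ssl.server.keystore.password</name>
  <value>hadoop</value>
</property>
<property>
  <name>ssl.server.keystore.keypassword</name>
  <value>hadoop</value>
</property>
<property>
  <name>ssl.server.truststore.location</name>
  <value>/etc/hadoop/conf/truststore</value>
</property>
<property>
  <name>ssl.server.truststore.password</name>
  <value>hadoop</value>
</property>
```

ssl-client.xml takes the analogous ssl.client.* properties pointing at the truststore.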
Hadoop Start
1.init namenodes
execute the command below on the active namenode:
bin/hdfs namenode -format
then execute the command below on the standby namenode:
bin/hdfs namenode -bootstrapStandby
2.start namenodes and journalnodes
sbin/start-dfs.sh
3.transitionToActive
bin/hdfs haadmin -transitionToActive nn1
4.start datanodes
sbin/hadoop-daemons.sh start datanode
5.start resourcemanager and nodemanagers
sbin/start-yarn.sh
6.start the mapreduce jobhistory server
sbin/mr-jobhistory-daemon.sh start historyserver
References:
http://midactstech.blogspot.hk/2013/07/how-to-install-mit-kerberos-5-server-on.html
https://hadoop.apache.org/docs/r2.7.5/hadoop-project-dist/hadoop-common/SecureMode.html
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.3/bk_security/content/setting_up_kerberos_authentication_for_non_ambari_clusters.html
https://zh.hortonworks.com/blog/deploying-https-hdfs/
This article comes from the NetEase Practitioner Community and is published with the authorization of its author, Chen Hong.