How to deploy Hue on HDFS with Namenode HA
Follow the instructions below to deploy Hue on Hadoop Distributed File System (HDFS) with NameNode High Availability (HA).
1. Install Hadoop HttpFS on the Hue server.
[root@admin ~]# yum install hadoop-httpfs
2. Create a link for the hadoop-httpfs service.
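The link command itself is not shown in this step; a minimal sketch follows, assuming the package ships its init script under /usr/phd/current/hadoop-httpfs/etc/rc.d/init.d (an assumption -- verify the actual path on your install before linking):

```shell
# Assumed init-script location -- confirm where your hadoop-httpfs
# package installed it before creating the link.
ln -sf /usr/phd/current/hadoop-httpfs/etc/rc.d/init.d/hadoop-httpfs /etc/init.d/hadoop-httpfs
# Show where the new link points.
readlink /etc/init.d/hadoop-httpfs
```

With the link in place, `service hadoop-httpfs ...` commands resolve through /etc/init.d as usual.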
3. Edit the file /usr/phd//etc/rc.d/init.d/hadoop-httpfs.
Important Note: Search for the word "hdp" and replace every occurrence with "phd".
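The hdp-to-phd substitution can be scripted with sed. The sketch below runs against a scratch copy so it is safe to try; on the real server, point it at the init script above and keep the .bak backup that sed creates. The DAEMON line is an invented sample, not the script's actual contents:

```shell
# Demo file standing in for the real init script (contents are a made-up sample).
echo 'DAEMON=/usr/hdp/current/hadoop-httpfs/sbin/httpfs.sh' > /tmp/hadoop-httpfs.demo
# Replace every "hdp" with "phd" in place, keeping a .bak backup.
sed -i.bak 's/hdp/phd/g' /tmp/hadoop-httpfs.demo
cat /tmp/hadoop-httpfs.demo   # -> DAEMON=/usr/phd/current/hadoop-httpfs/sbin/httpfs.sh
```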
5. Comment out the following line in the file /usr/phd//hadoop-httpfs/sbin/httpfs.sh:
export CATALINA_BASE=/etc/hadoop-httpfs/tomcat-deployment
6. Create a link for hadoop-config.sh.
[root@admin ~]# mkdir /usr/phd/current/hadoop-httpfs/libexec
[root@admin ~]# ln -s /usr/phd/current/hadoop-client/libexec/hadoop-config.sh /usr/phd/current/hadoop-httpfs/libexec/hadoop-config.sh
7. Modify /etc/hadoop-httpfs/conf/httpfs-site.xml on the Hue server to configure HttpFS to talk to the cluster.
<property>
  <name>httpfs.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<property>
  <name>httpfs.proxyuser.hue.groups</name>
  <value>*</value>
</property>
8. Modify core-site.xml on the Ambari web UI by adding the following properties. A restart of HDFS is needed for the changes to take effect.
<property>
  <name>hadoop.proxyuser.httpfs.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.httpfs.groups</name>
  <value>*</value>
</property>
9. On the Hue server, modify the subsection [hadoop][[hdfs_clusters]][[[default]]] in /etc/hue/conf/hue.ini.
fs_defaultfs | the fs.defaultFS property in core-site.xml
webhdfs_url  | URL to the HttpFS server
Example:
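A fragment along these lines would work, assuming an HA nameservice named mycluster and a Hue host named hue-server.example.com (both placeholders -- substitute your own nameservice ID and HttpFS host; 14000 is HttpFS's default port):

```ini
[hadoop]
  [[hdfs_clusters]]
    [[[default]]]
      # Matches the fs.defaultFS value in core-site.xml (HA nameservice, no port).
      fs_defaultfs=hdfs://mycluster
      # Point Hue at HttpFS rather than at a single NameNode's WebHDFS.
      webhdfs_url=http://hue-server.example.com:14000/webhdfs/v1
```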
10. Start the hadoop-httpfs service.
[root@admin conf]# service hadoop-httpfs start
11. Restart the Hue service.
[root@admin conf]# service hue restart