A quick note: I searched the web for lots of docs on mounting HDFS with FUSE, and nearly all of them were full of pitfalls; either the versions were too old, or xxxx, and I'm honestly curious why that is... After n attempts, with plenty of tears along the way, I finally got the mount working. Sharing it here in the hope that it saves you some detours...
Environment
1. Install the dependency packages
yum -y install lzo-devel zlib-devel gcc gcc-c++ autoconf automake libtool openssl-devel fuse-devel cmake
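Note that fuse-devel only covers the build; at mount time the FUSE userland tools and kernel module have to be present as well. On CentOS that typically means the following (an assumption on my part; adjust for your distro):
yum -y install fuse fuse-libs
modprobe fuse
ls -l /dev/fuse   # should exist once the module is loaded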
2. Install protobuf
wget https://github.com/google/protobuf/releases/download/v2.5.0/protobuf-2.5.0.tar.gz -O protobuf-2.5.0.tar.gz
tar -zxvf protobuf-2.5.0.tar.gz
cd protobuf-2.5.0
./configure --prefix=/opt/protobuf-2.5.0
make && make install
vim /etc/profile
# add the following line
export PATH=/opt/protobuf-2.5.0/bin:$PATH
source /etc/profile
# verify the installation; it should print: libprotoc 2.5.0
protoc --version
Mounting HDFS with FUSE
Download the Hadoop source tarball; here it's hadoop-2.9.1-src.tar.gz:
tar -zxvf hadoop-2.9.1-src.tar.gz
cd hadoop-2.9.1-src
# build the native libraries; this takes a seriously long time...
mvn package -Pdist,native -DskipTests -Dtar
If Maven isn't installed yet, set it up as follows:
wget http://mirrors.tuna.tsinghua.edu.cn/apache/maven/maven-3/3.3.9/binaries/apache-maven-3.3.9-bin.tar.gz
tar -zxvf apache-maven-3.3.9-bin.tar.gz
vim /etc/profile
# add the following two lines
M2_HOME=/root/soft/apache-maven-3.3.9 # the directory you extracted Maven into
export PATH=${M2_HOME}/bin:${PATH}
source /etc/profile
# verify the installation
mvn -v
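With Maven in place, a quick pre-flight check before (re)starting the long native build can save a wasted run; this just confirms the tools installed above are visible (the JAVA_HOME value is an example, matching the wrapper script further down):
protoc --version   # must print libprotoc 2.5.0; Hadoop 2.x requires exactly this version
mvn -v             # shows which Maven and JDK the build will use
echo $JAVA_HOME    # the native build needs a JDK, e.g. /usr/local/java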
After the long wait, the build produces two important artifacts:
- /root/soft/hadoop-2.9.1-src/hadoop-hdfs-project/hadoop-hdfs-native-client/target/main/native/fuse-dfs
- /root/soft/hadoop-2.9.1-src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh
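To double-check the build output and stage the wrapper for editing (the paths assume the source tree sits under /root/soft as above; the destination directory is arbitrary):
ls /root/soft/hadoop-2.9.1-src/hadoop-hdfs-project/hadoop-hdfs-native-client/target/main/native/fuse-dfs/   # should show the fuse_dfs binary
cp /root/soft/hadoop-2.9.1-src/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/fuse_dfs_wrapper.sh /root/soft/
chmod +x /root/soft/fuse_dfs_wrapper.sh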
With fuse_dfs_wrapper.sh copied out, modify it as follows:
#!/usr/bin/env bash
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# newly added variables: point HADOOP_HOME at your Hadoop install and HADOOP_PREFIX at the source tree you just built
HADOOP_HOME=/opt/hadoop-2.9.1
HADOOP_PREFIX=/root/soft/hadoop-2.9.1-src
HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
if [ "$HADOOP_PREFIX" = "" ]; then
echo "HADOOP_PREFIX is empty. Set it to the root directory of Hadoop source code"
exit 1
fi
export FUSEDFS_PATH="$HADOOP_PREFIX/hadoop-hdfs-project/hadoop-hdfs-native-client/target/main/native/fuse-dfs"
#export LIBHDFS_PATH="$HADOOP_PREFIX/hadoop-hdfs-project/hadoop-hdfs-native-client/target/usr/local/lib"
# replace the original line above with the one below; this is where the 2.9.1 build actually puts libhdfs
export LIBHDFS_PATH="$HADOOP_PREFIX/hadoop-hdfs-project/hadoop-hdfs-native-client/target/native/target/usr/local/lib"
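# (if the mount later fails with libhdfs errors, double-check that this directory
#  really contains the library, e.g.: ls "$LIBHDFS_PATH"/libhdfs.so*)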
if [ "$OS_ARCH" = "" ]; then
export OS_ARCH=amd64
fi
if [ "$JAVA_HOME" = "" ]; then
export JAVA_HOME=/usr/local/java
fi
if [ "$LD_LIBRARY_PATH" = "" ]; then
export LD_LIBRARY_PATH=$JAVA_HOME/jre/lib/$OS_ARCH/server:/usr/local/lib
fi
while IFS= read -r -d '' file
do
export CLASSPATH=$CLASSPATH:$file
done < <(find "$HADOOP_PREFIX/hadoop-client" -name "*.jar" -print0)
while IFS= read -r -d '' file
do
export CLASSPATH=$CLASSPATH:$file
done < <(find "$HADOOP_PREFIX/hadoop-hdfs-project" -name "*.jar" -print0)
export CLASSPATH=$HADOOP_CONF_DIR:$CLASSPATH
export PATH=$FUSEDFS_PATH:$PATH
export LD_LIBRARY_PATH=$LIBHDFS_PATH:$JAVA_HOME/jre/lib/$OS_ARCH/server
fuse_dfs "$@"
The last step: mount it.
mkdir /mnt/fuse_hdfs
./fuse_dfs_wrapper.sh hdfs://namenode:9000 /mnt/fuse_hdfs/
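A quick sanity check (this assumes a /tmp directory exists in HDFS and the hdfs client is on your PATH; the test filename is arbitrary):
df -h /mnt/fuse_hdfs   # the new fuse mount should show up here
ls /mnt/fuse_hdfs      # should list the HDFS root directory
echo hello > /mnt/fuse_hdfs/tmp/fuse_test.txt
hdfs dfs -cat /tmp/fuse_test.txt   # the same file, read back through the normal client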
You should now be able to browse the data in HDFS.
To unmount the mount point:
umount /mnt/fuse_hdfs
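FUSE also ships its own unmount tool, which works without root for mounts you created yourself:
fusermount -u /mnt/fuse_hdfs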